Advances in myoelectric interfaces have increased the use of wearable prosthetic devices, including robotic arms. Although pattern recognition-based control schemes have achieved promising results, control robustness must improve before prosthetic hands gain wider user acceptance. The aim of this study was to quantify the performance of stacked sparse autoencoders (SSAE), an emerging deep learning technique, for myoelectric control, and to compare multiday surface electromyography (sEMG) and intramuscular EMG (iEMG) recordings. Ten able-bodied and six amputee subjects, with average ages of 24.5 and 34.5 years, respectively, were evaluated using offline classification error as the performance metric. Surface and intramuscular EMG were recorded concurrently while each subject performed 11 hand motions. The performance of SSAE was compared with that of a linear discriminant analysis (LDA) classifier. Within-day analysis showed that SSAE (1.38 ± 1.38%) outperformed LDA (8.09 ± 4.53%) on both the sEMG and iEMG data from both able-bodied and amputee subjects (p < 0.001). In the between-day analysis, SSAE likewise outperformed LDA (7.19 ± 9.55% vs. 22.25 ± 11.09%) on both sEMG and iEMG data from both subject groups. With SSAE, no significant difference in performance was observed between iEMG and sEMG in the within-day and pairs-of-days analyses with eight-fold validation, whereas sEMG outperformed iEMG (p < 0.001) in the between-day analysis under both two-fold and seven-fold validation schemes. These results imply that SSAE can significantly improve the performance of pattern recognition-based myoelectric control schemes and can extract deep information hidden in EMG data.
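To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation: a single sparse autoencoder layer (a simplification of a full stacked SSAE) trained on synthetic stand-in "EMG feature" data, with an LDA classifier fitted on the learned codes. The layer sizes, sparsity target, learning rate, and synthetic data are all illustrative assumptions.

```python
# Hedged sketch: one sparse autoencoder layer + LDA on its hidden codes.
# All hyperparameters and the synthetic data are illustrative, not from the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed EMG features: 3 motion classes, 20 features.
n_per_class, n_feat, n_classes = 60, 20, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_feat))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sparse autoencoder: reconstruct X through a small hidden code while
# penalising the mean hidden activation toward a sparsity target rho.
n_hidden, rho, beta, lr, epochs = 8, 0.05, 3.0, 0.01, 200
W1 = rng.normal(scale=0.1, size=(n_feat, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_feat)); b2 = np.zeros(n_feat)

for _ in range(epochs):
    H = sigmoid(X @ W1 + b1)            # hidden code
    Xh = H @ W2 + b2                    # linear reconstruction
    err = Xh - X
    rho_hat = H.mean(axis=0)
    # Backprop: reconstruction error plus KL-style sparsity gradient.
    dH = err @ W2.T + beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    dZ = dH * H * (1 - H)
    W2 -= lr * H.T @ err / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * X.T @ dZ / len(X);  b1 -= lr * dZ.mean(axis=0)

# Classify motions from the learned sparse codes.
codes = sigmoid(X @ W1 + b1)
acc = LinearDiscriminantAnalysis().fit(codes, y).score(codes, y)
print(f"LDA training accuracy on autoencoder codes: {acc:.2f}")
```

In the study itself, classifiers were evaluated with offline classification error under within-day and between-day cross-validation rather than the training accuracy shown here; this sketch only illustrates how sparse codes can feed a downstream classifier.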