Authors: Thaneshwar Kumar Sahu, Dr. Pankaj Kumar Mishra, Dr. Saurabh Gupta
Abstract: Surface electromyography (sEMG) records the small electrical signals produced by muscle activity through electrodes placed on the skin. This work applies sEMG to static hand-gesture recognition using the UCI EMG dataset, in which 36 subjects each performed a set of defined gestures recorded across eight muscle channels. Rather than feeding raw signals directly into a model, we built a feature-extraction pipeline spanning both the time and frequency domains: integrated EMG, waveform length, and zero-crossing rate sit alongside spectral peaks and Hjorth parameters, all computed over sliding windows. Individually these features are simple, but together they give a detailed picture of muscle activity. Because some gestures occur less frequently than others, we applied resampling and feature scaling to mitigate class imbalance before training a deep neural network. The model achieved approximately 94% macro-average accuracy across all gesture classes, although such results depend on preprocessing choices and performance may degrade outside controlled datasets. To improve interpretability, we examined the learned representations with t-SNE cluster plots, feature-correlation heatmaps, and multiclass ROC curves; these do not fully resolve the black-box concern, but they help. The main takeaway is that combining classical signal-processing feature engineering with deep neural networks performs very well on this task: pure deep learning often receives the attention, but traditional feature extraction carries much of the load.
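The time-domain features named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single-channel 1-D signal, and the window length, step size, and function names are hypothetical choices made for the example.

```python
import numpy as np

def time_domain_features(window):
    """Compute IEMG, waveform length, zero-crossing rate, and the
    Hjorth parameters for one window of a single sEMG channel."""
    x = np.asarray(window, dtype=float)
    iemg = np.sum(np.abs(x))                 # integrated EMG: sum of rectified signal
    wl = np.sum(np.abs(np.diff(x)))          # waveform length: cumulative amplitude change
    zcr = np.sum(x[:-1] * x[1:] < 0) / len(x)  # fraction of sign changes
    # Hjorth parameters: activity (variance), mobility, complexity
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity) if activity > 0 else 0.0
    complexity = (np.sqrt(np.var(ddx) / np.var(dx)) / mobility
                  if mobility > 0 else 0.0)
    return {"iemg": iemg, "wl": wl, "zcr": zcr,
            "activity": activity, "mobility": mobility,
            "complexity": complexity}

def sliding_windows(signal, win_len=200, step=100):
    """Yield overlapping windows over a 1-D signal."""
    for start in range(0, len(signal) - win_len + 1, step):
        yield signal[start:start + win_len]
```

Frequency-domain features such as spectral peaks would be computed analogously from the FFT of each window; stacking the per-window, per-channel feature dictionaries then yields the input matrix for scaling and classification.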
International Journal of Science, Engineering and Technology