
A Low-Power Embedded System for Real-Time sEMG based Event-Driven Gesture Recognition

Andrea Mongardi∗, Paolo Motto Ros†, Fabio Rossi∗, Massimo Ruo Roch∗, Maurizio Martina∗, Danilo Demarchi∗

∗Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, Torino, Italy, Email: [email protected]

†Electronic Design Laboratory, Istituto Italiano di Tecnologia, Genova, Italy, Email: [email protected]

Abstract—Latency and power consumption management are present-day hot topics for wearable devices and IoT applications. This paper presents the implementation of a low-power system for hand movement recognition, based on surface ElectroMyoGraphic (sEMG) signals, suitable for human-machine interfaces. Every time the sEMG signal crosses a predefined threshold an event is generated, implementing the Average Threshold Crossing (ATC) technique directly on-board on the subject's arm. The resulting quasi-digital signals, averaged over a fixed time window, are sent to an ARM Cortex-M4F processor which implements a fully-connected Neural Network (NN) able to recognize six different gestures from only three input channels. Dataset creation involved 25 healthy people, each one performing five movements within five repeated sessions. The NN has been trained using the per-subject holdout validation method, obtaining an accuracy of 96.34%. With a maximum latency of 8.5 ms and an average power consumption of only 0.8 mW at full recognition rate, the proposed NN implementation shows promising results.

Index Terms—energy efficiency, embedded machine learning, edge computing, event-driven, surface electromyography

I. INTRODUCTION

In the last decade, important results regarding sensors, embedded systems and wireless communication have made wearable technologies considerably popular, with many people using smart devices daily, such as smart watches and smart glasses, as well as health devices [1]. This huge success has made researchers focus on new paradigms in order to continuously reduce power consumption while increasing computational capability.

Wearable devices also play a major role in gesture recognition [2], in particular in recognizing hand gestures for increasingly accurate human-machine interfaces. A well-designed system, with a suitable set of movements, can easily give the user an accurate and fast way to control different applications, from robotic arms [3] to mobile apps and videogames [4]. A common approach is to acquire the sEMG signals from the forearm (by applying electrodes on the skin). The signals provide information about the activation of the musculoskeletal system, directly reflecting the execution of a particular movement. In state-of-the-art works (e.g., [5], [6]), electrodes are usually placed in multiple positions in order to acquire different muscular activities, extract a set of features and use them to train a machine learning algorithm that predicts the movement. However, especially for wearable applications, this approach requires either lots of computational resources to

extract the desired features on the acquisition nodes or high power consumption to send all the raw data to the processing unit. A different feature extraction approach is based on the event-driven acquisition of the sEMG signal, where the bio-signal is amplified and directly compared with a threshold. These Threshold Crossing (TC) events generate a quasi-digital signal, which contains the muscle activity information in its time-domain properties. Since it is a digital signal, this information can be directly interpreted by digital electronics, allowing a relaxation of the hardware and software resource requirements [7]. Among different sEMG features, the average of the TC events over a predefined time window can be used as a muscle activation indicator due to its strong correlation with the muscle force [8]. In this scenario, considering also the reduced dimension of the obtained information and the possibility to perform this process directly on-board, the ATC technique considerably relaxes computational and power constraints [9], [10].

This paper proposes a complete low-power system for hand gesture recognition based on the event-driven processing of the sEMG signal, which fully exploits the ATC edge-computing advantages. The TC signals from different muscles are generated and processed directly on the subject's forearm, employing full-custom sEMG acquisition channels and an AmbiqMicro Apollo2 MicroController Unit (MCU). In particular, the latter is equipped with an embedded ARM Cortex-M4F microprocessor (µP), chosen for its very low-power configurable modes, which perfectly fit our needs. A fully-connected Neural Network (NN) is then applied to the TC events obtained from three different muscles in order to distinguish among five active gestures:

• Wrist Extension (referred to as Ext)
• Wrist Flexion (referred to as Flex)
• Radial Deviation (referred to as Rad)
• Ulnar Deviation (referred to as Uln)
• Grasp

and one idle state, added in post-processing during the training phase to characterize the rest condition of the hand.
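As an illustration of the event-driven pipeline described above, the ATC feature extraction can be modeled in software as follows (a sketch in Python with NumPy; in the actual system the comparator is analog hardware and an MCU interrupt does the counting, so the function name and the sampled-signal setup here are our assumptions):

```python
import numpy as np

def atc_feature(semg, threshold, fs, window_ms=130):
    """Count Threshold Crossing (TC) events in consecutive windows: the
    ATC feature. Software model only: in the real system the comparator
    is analog hardware and an MCU interrupt does the counting."""
    above = np.asarray(semg) > threshold
    # A TC event is a rising edge of the comparator output.
    events = np.flatnonzero(~above[:-1] & above[1:]) + 1
    win = int(fs * window_ms / 1000)          # samples per averaging window
    counts = np.zeros(len(above) // win, dtype=int)
    for i in events:
        if i // win < len(counts):
            counts[i // win] += 1
    return counts
```

The resulting per-window counts are the only data the classifier ever sees, which is why the approach needs so little bandwidth and computation.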

II. SYSTEM ARCHITECTURE

The gesture recognition system is mainly composed of three Analog Front-Ends (AFEs) for sEMG processing and one Apollo2 MCU to classify the data. Each acquisition channel


[Figure: the three AFEs, with a DAC-set threshold, generate TC signals for the Apollo2 MCU.]

Fig. 1: Structure of the system and wearability.

includes seven different stages, entirely made with off-the-shelf components, obtained by slightly improving our previous work [8]. The raw sEMG signals are acquired from the forearm using pre-gelled Ag/AgCl H124SG Covidien electrodes, with a 24 mm circular shape. The sEMG signals acquired from the two paired electrodes pass through an overvoltage protection and are then decoupled from the input using a voltage follower. The two signals are then differentially high-pass filtered to reduce movement artifacts and other possible low-frequency noise contributions, using a simple RC circuit with a cutoff frequency of 34 Hz. In the next stage, an instrumentation amplifier with a gain of 922 differentially amplifies the two sEMG signals, referencing them to half of the supply voltage (also used for the reference electrode) in order to retain the full peak-to-peak input waveform. This stage has a low-pass filter on its negative feedback, with a cutoff frequency of 10 Hz, which further rejects low-frequency noise. The signal spectrum is then low-pass filtered at 398 Hz to remove high-frequency components, not needed for this application. In the final stage, the desired TC events are generated by means of a voltage comparator, configured with a 30 mV hysteresis which ensures a stable commutation between the two states. The needed threshold is provided by an external configurable Digital-to-Analog Converter (DAC). The resulting quasi-digital signal is sent to the Apollo2 MCU which, using a simple interrupt configuration, counts the events occurring during each time window. The averaging period has been set to 130 ms, which has been shown to be a good trade-off between latency and accuracy of the system [8].
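The behavior of the final comparator stage, with its 30 mV hysteresis, can be modeled as a simple Schmitt-trigger loop (an illustrative Python sketch, not the hardware; sample values in volts are hypothetical):

```python
def tc_comparator(samples, threshold, hysteresis=0.030):
    """Schmitt-trigger model of the TC stage: the output goes high only
    above threshold + hysteresis/2 and low only below threshold -
    hysteresis/2, so noise near the threshold cannot cause chatter."""
    hi = threshold + hysteresis / 2
    lo = threshold - hysteresis / 2
    state, out = 0, []
    for v in samples:
        if state == 0 and v > hi:
            state = 1  # rising edge: a TC event is generated here
        elif state == 1 and v < lo:
            state = 0
        out.append(state)
    return out
```

For example, a brief dip just below the nominal threshold, but above the lower hysteresis bound, does not toggle the output, which is exactly the stable commutation the 30 mV band provides.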

III. DATA ACQUISITION

Gestures to be performed during the acquisition phase have been chosen taking inspiration from previous works [11], [12], adapted considering repeatability and straightforward implementation of future applications, such as robotic prosthesis or unmanned vehicle control. These works usually classify up to 5-6 movements, using from 3 to 8 acquisition channels. In our system we decided to keep the number of acquisition channels as low as possible, using only 3 couples of electrodes plus one reference electrode. Considering the constraint on the 3 inputs, we reasonably set our classifier goal to 6 output

[Figure: electrode positions. (a) Lateral forearm: extensor carpi ulnaris, abductor pollicis longus, reference. (b) Medial forearm: palmaris longus.]

Fig. 2: Electrodes placement on forearm muscles.

classes, corresponding to the gestures described in Sec. I. Electrodes have been placed according to the forearm biomechanics of each of the above-described gestures, in a way that combining their activations could result in an effective classifier: the abductor pollicis longus and the extensor carpi ulnaris have been selected in the lateral section of the forearm (Fig. 2a), while in the medial section the palmaris longus has been chosen (Fig. 2b); the reference electrode has been placed on the hand, near the wrist, in a bony, electrically neutral area. Accurate placement of the electrodes is crucially important to obtain a high accuracy from the classifier; indeed, using standard sEMG electrodes can lead to an accuracy degradation of up to 30% when training and testing are performed in different conditions [13], [14]. In our application, aiming to keep the computational effort on the µP low, we chose to minimize the problem with an initial calibration phase, instead of using arrays of electrodes [15], which would considerably increase power consumption as well.

For the data acquisition phase, 25 healthy subjects, 16 males and 9 females (aged 23 to 37 years old), have been enrolled, being sufficiently informed about risks and benefits, according to the local bio-ethical committee regulations. Volunteers have been instructed to execute the desired movements in sequence, maintaining each gesture for a few seconds and then repeating the entire loop five times. Two different acquisition protocols have been used, one for creating the training dataset and the other one for system validation. An initial calibration phase, common to both protocols, was structured as follows: after electrodes placement, the subject executes the five movements for 6.5 s each (50 windows of 130 ms); the obtained data were plotted on a MATLAB® 3D graph and visually inspected; if necessary, small adjustments of the electrodes position were performed.
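The window counts quoted throughout the protocols (50, 100 and 40 windows) follow directly from the 130 ms averaging period; a trivial helper (ours, not from the paper) makes the arithmetic explicit:

```python
WINDOW_MS = 130  # ATC averaging period used throughout the paper

def n_windows(duration_s):
    """Number of complete 130 ms ATC windows in an acquisition of the
    given duration (illustrative helper, not from the paper)."""
    return round(duration_s * 1000) // WINDOW_MS
```

Calibration (6.5 s) thus yields 50 windows per gesture, the training acquisitions (13 s) yield 100, and the testing acquisitions (5.2 s) yield 40.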

A. Training Protocol

Twenty people have been assigned to the training phase. After the initial calibration phase, each individual performed one movement at a time, starting from the rest position and going towards the desired gesture, then back to the idle state and doing the movement again. Each movement acquisition lasted 13 s (100 windows of 130 ms) and between consecutive movements a short rest phase of 10 s has been observed; then, after ending the fifth movement, a longer rest phase of about 1 min has been scheduled, in order to prevent fatigue on the active


muscles of the forearm and on the arm muscles which have to counteract gravity during the session.

B. Testing Protocol

To obtain a realistic measurement of the statistical performance of the classifier, system validation has been performed in real time directly on the Apollo2 board, with five new volunteers. The entire set of six gestures had to be performed to define a complete accuracy context; each gesture had to be kept steady for the entire acquisition window, which in this case lasted 5.2 s (40 windows of 130 ms). As in the training phase, short and long rest phases were observed between gestures.

IV. CLASSIFIER IMPLEMENTATION

Fig. 3: Example of training dataset.

Data obtained from the training acquisitions were collected and imported into the MATLAB® environment to be processed. Since the main goal of this work is to minimize the power consumption and computational effort of the µP, we chose the simplest implementation for the classification algorithm as well. In fact, the small dimensions of the TC matrices allowed us to perform the training implementing a fully-connected NN with a simple back-propagation algorithm [16]. The NN has been configured with 2 hidden layers made of 26 neurons each, which has the lowest error cost and resulted in a good trade-off between run-time performance and accuracy of the classifier. Datasets from different people were kept separated to avoid over-fitting (Fig. 3 shows one dataset as an example) and a simple per-subject holdout validation technique was used: considering the group of 20 people, 15 of them were included in the training set while the others in the validation set. Once the training was completed, the obtained parameter matrices were loaded back onto the Apollo2 MCU for the on-line application.

The testing phase was performed directly on-line on the Apollo2 MCU board, with the help of the five volunteers not involved in the training process. The NN has been implemented at a low computational level with direct matrix multiplications using the ARM CMSIS-DSP library [17], which provides basic math functions to enhance the low-power capability of the ARM Cortex-M4F. Each person has

TABLE I: Validation Confusion Matrix

                          Predicted
          Ext   Flex   Rad   Uln   Grasp   Idle
Actual
Ext       992      0     4     4       0      0
Flex        0    883    67     0      42      8
Rad         0     29   913    12      40      6
Uln       180      6     0   749      27     38
Grasp       0     60    11    35     804     90
Idle        0      0     0     0       0   1000

TABLE II: Statistical Results

          Accuracy (%)   Precision (%)   Recall (%)   F1-score (%)
Ext          96.87           84.64          99.20         91.34
Flex         96.47           90.29          88.30         89.28
Rad          97.18           91.76          91.30         91.53
Uln          94.97           93.63          74.90         83.22
Grasp        94.92           88.06          80.40         84.06
Idle         97.63           87.57         100.00         93.37
Avg.         96.34           89.32          89.02         88.80
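The per-class statistics in Table II follow directly from the confusion matrix in Table I; a quick consistency check can be sketched in Python (function name ours):

```python
import numpy as np

# Confusion matrix from Table I (rows: actual, columns: predicted),
# class order: Ext, Flex, Rad, Uln, Grasp, Idle.
C = np.array([
    [992,   0,   4,   4,   0,    0],
    [  0, 883,  67,   0,  42,    8],
    [  0,  29, 913,  12,  40,    6],
    [180,   6,   0, 749,  27,   38],
    [  0,  60,  11,  35, 804,   90],
    [  0,   0,   0,   0,   0, 1000],
])

def per_class_stats(C, k):
    """Accuracy, precision, recall and F1-score for class k, computed
    one-vs-rest from the confusion matrix."""
    tp = C[k, k]
    fp = C[:, k].sum() - tp   # other classes predicted as k
    fn = C[k, :].sum() - tp   # class k predicted as something else
    tn = C.sum() - tp - fp - fn
    accuracy = (tp + tn) / C.sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

For Ext (k = 0) this reproduces 96.87 / 84.64 / 99.20 / 91.34 from Table II; note that for Grasp it yields a recall of 80.40% (804 of 1000 windows), which is the value consistent with the reported F1-score of 84.06%.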

been involved on different days and under varying conditions to prove the robustness of the system. Moreover, while performing the task, subjects had no clue whether the gesture had been classified correctly or not, so they were not able to correct their posture during the acquisition. The confusion matrix of the output values of the classifier is displayed in Table I, with the actual movements on the vertical axis and the predicted movements on the horizontal axis. The values on the main diagonal of the matrix should ideally be 1000, since data were acquired from 5 people, performing 5 sessions, each one containing 40 time windows. The obtained values of accuracy (96%), precision, recall and F1-score, reported in Table II, are comparable with, if not higher than, those of other works [18], [19]. As can be seen, Uln and Grasp are the two worst-performing movements, but with an accurate placement of all the electrodes and a precise initial calibration we obtained acceptable results for both of them.
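The inference that produced these results (3 ATC inputs, two hidden layers of 26 neurons, 6 output classes, evaluated as direct matrix multiplications) can be sketched as follows; the sigmoid activation and the random placeholder weights are our assumptions, since the deployed parameter matrices come from the MATLAB training described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 3 ATC inputs -> 26 -> 26 -> 6 gesture classes.
sizes = [3, 26, 26, 6]
# Placeholder weights; the deployed network uses the parameter matrices
# obtained from the MATLAB training (these values are NOT the paper's).
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(atc_counts):
    """Forward propagation as plain matrix multiplications, mirroring the
    CMSIS-DSP implementation on the Cortex-M4F (sigmoid activation is an
    assumption; the paper does not state the activation function)."""
    a = np.asarray(atc_counts, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return int(np.argmax(a))  # index of the predicted gesture class
```

On the MCU, the same three matrix-vector products map directly onto CMSIS-DSP matrix routines, which is what keeps the 8.5 ms forward-propagation time so low.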

V. LATENCY AND POWER CONSUMPTION

The Cortex-M4F has been configured with a clock frequency of 24 MHz, with cache and buck converters enabled. Latency measurements have been made directly using one of the embedded timers; the forward propagation of data through the NN lasts 8.5 ms. The overall latency of the system, comprising both the NN computation time and two consecutive time windows (a delay slot has been implemented for robustness reasons), results in 268.5 ms, still under common constraints of real-time applications.

The power consumption of the acquisition boards and of the µP has been taken into account, obtaining very satisfactory results, thanks to the power management embedded in the ARM Cortex-M4F processor, combined with a very lightweight computation. Regarding each AFE, power consumption was already measured in [8] but, since the improvements include some more components, we obtained a slightly greater value of 0.71 mW. Then, we measured the current absorption of the Apollo2 MCU, obtaining an idle power consumption of 0.70 mW and an active value of 2.05 mW, which results in an average power consumption of 0.80 mW, taking into account the duty cycle of 6.5%. The obtained results are slightly different with respect to [6], which uses the same MCU, possibly because of the different feature extraction, which may lead to a different behavior of the µP. In fact, we measured a higher power consumption in deep sleep mode, probably because of the continuously active timer needed to implement the ATC window. However, despite those differences, a lower average value is reached, thanks to the short active phase and the low computational cost of the TC-based NN. Table III shows an overall comparison with similar works: with the lowest number of acquisition channels, the system is able to recognize the highest number of gestures, with the highest accuracy, while showing the lowest power consumption.

TABLE III: Comparison with existing EMG-based classifiers

Work         Features     Processing   Classifier   Accuracy (%)   Channels #   Gestures #   Power (mW)   Latency (ms)
[20]         multiple     PC           RBF(1) NN        66             8            6           n.a.          n.a.
[15]         n.a.         PC           HD(2)            90-96        64            5           n.a.          500
[11]         ATC/others   PC           SVM(3)           93             3            5           20.2          160
[6]          DWT(4)       on-board     SVM(3)           94             4            5            5.1          290
This Paper   ATC          on-board     NN               96             3            6            2.9          268.5

(1) Radial Basis Functions, (2) High Dimensional, (3) Support Vector Machines, (4) Discrete Wavelet Transform
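The latency and average-power figures reported in this section follow from simple arithmetic, which a few lines make explicit (constant names ours):

```python
WINDOW_MS = 130.0   # ATC averaging window
NN_MS = 8.5         # measured NN forward-propagation time

# Overall latency: two consecutive windows (the robustness delay slot)
# plus the NN computation time.
latency_ms = 2 * WINDOW_MS + NN_MS

# MCU average power from the 6.5% duty cycle between the active (2.05 mW)
# and idle (0.70 mW) modes.
P_IDLE_MW, P_ACTIVE_MW, DUTY = 0.70, 2.05, 0.065
p_avg_mw = P_ACTIVE_MW * DUTY + P_IDLE_MW * (1 - DUTY)
```

This gives a latency of 268.5 ms and an average power of about 0.79 mW, in line with the reported 0.80 mW once rounded.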

VI. CONCLUSION

In this paper we presented a system for real-time hand gesture recognition, using event-driven sEMG processing and classifying the data on the Apollo2 MCU. The system is able to classify 6 different movements using only 3 input features, with an accuracy of 96.34%. The achieved performance, together with a total power consumption of 2.9 mW and a system latency of 268.5 ms, makes our current implementation competitive with the recent literature. Future improvements of our device will include approximate computing methods to further reduce power consumption, as well as software routines implementing on-board reinforcement learning, in order to avoid the manual-visual initial calibration.

REFERENCES

[1] F. Conti, D. Palossi, R. Andri, M. Magno, and L. Benini, "Accelerated visual context classification on a low-power smartwatch," IEEE Transactions on Human-Machine Systems, vol. 47, no. 1, pp. 19–30, 2016.

[2] S. Mitra and T. Acharya, "Gesture recognition: A survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 3, pp. 311–324, 2007.

[3] P. Shenoy, K. J. Miller, B. Crawford, and R. P. Rao, "Online electromyographic control of a robotic prosthesis," IEEE Transactions on Biomedical Engineering, vol. 55, no. 3, pp. 1128–1135, 2008.

[4] X. Zhang, X. Chen, W.-h. Wang, J.-h. Yang, V. Lantz, and K.-q. Wang, "Hand gesture recognition and virtual game control based on 3D accelerometer and EMG sensors," in Proceedings of the 14th International Conference on Intelligent User Interfaces, pp. 401–406, ACM, 2009.

[5] M. R. Ahsan, M. I. Ibrahimy, and O. O. Khalifa, "Electromygraphy (EMG) signal based hand gesture recognition using artificial neural network (ANN)," in 2011 4th International Conference on Mechatronics (ICOM), pp. 1–6, IEEE, 2011.

[6] V. Kartsch, S. Benatti, M. Mancini, M. Magno, and L. Benini, "Smart wearable wristband for EMG based gesture recognition powered by solar energy harvester," in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, IEEE, 2018.

[7] M. Crepaldi, M. Paleari, A. Bonanno, A. Sanginario, P. Ariano, D. H. Tran, and D. Demarchi, "A quasi-digital radio system for muscle force transmission based on event-driven IR-UWB," in 2012 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 116–119, IEEE, 2012.

[8] D. A. Fernandez Guzman, S. Sapienza, B. Sereni, and P. Motto Ros, "Very low power event-based surface EMG acquisition system with off-the-shelf components," in 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 1–4, IEEE, 2017.

[9] S. Sapienza, M. Crepaldi, P. Motto Ros, A. Bonanno, and D. Demarchi, "On integration and validation of a very low complexity ATC UWB system for muscle force transmission," IEEE Transactions on Biomedical Circuits and Systems, vol. 10, no. 2, pp. 497–506, 2015.

[10] P. Motto Ros, M. Paleari, N. Celadon, A. Sanginario, A. Bonanno, M. Crepaldi, P. Ariano, and D. Demarchi, "A wireless address-event representation system for ATC-based multi-channel force wireless transmission," in 5th IEEE International Workshop on Advances in Sensors and Interfaces (IWASI), pp. 51–56, IEEE, 2013.

[11] S. Sapienza, P. Motto Ros, D. A. Fernandez Guzman, F. Rossi, R. Terracciano, E. Cordedda, and D. Demarchi, "On-line event-driven hand gesture recognition based on surface electromyographic signals," in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, IEEE, 2018.

[12] B. Crawford, K. Miller, P. Shenoy, and R. Rao, "Real-time classification of electromyographic signals for robotic control," in AAAI, vol. 5, pp. 523–528, 2005.

[13] F. Palermo, M. Cognolato, A. Gijsberts, H. Muller, B. Caputo, and M. Atzori, "Repeatability of grasp recognition for robotic hand prosthesis control based on sEMG data," in 2017 International Conference on Rehabilitation Robotics (ICORR), pp. 1154–1159, IEEE, 2017.

[14] S. Benatti, E. Farella, E. Gruppioni, and L. Benini, "Analysis of robust implementation of an EMG pattern recognition based control," in BIOSIGNALS, pp. 45–54, 2014.

[15] A. Moin, A. Zhou, A. Rahimi, S. Benatti, A. Menon, S. Tamakloe, J. Ting, N. Yamamoto, Y. Khan, F. Burghardt, et al., "An EMG gesture recognition system with flexible high-density sensors and brain-inspired high-dimensional classifier," in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, IEEE, 2018.

[16] A. Ng, "Machine Learning by Stanford University." https://www.coursera.org/learn/machine-learning/home/welcome, 2018.

[17] ARM, "CMSIS Library." https://developer.arm.com/tools-and-software/embedded/cmsis, 2019.

[18] S. Benatti, F. Casamassima, B. Milosevic, E. Farella, P. Schonle, S. Fateh, T. Burger, Q. Huang, and L. Benini, "A versatile embedded platform for EMG acquisition and gesture recognition," IEEE Transactions on Biomedical Circuits and Systems, vol. 9, no. 5, pp. 620–630, 2015.

[19] A. D. Chan and K. B. Englehart, "Continuous myoelectric control for powered prostheses using hidden Markov models," IEEE Transactions on Biomedical Engineering, vol. 52, no. 1, pp. 121–124, 2004.

[20] T. Phienthrakul, "Armband gesture recognition on electromyography signal for virtual control," in 2018 10th International Conference on Knowledge and Smart Technology (KST), pp. 149–153, IEEE, 2018.