
[IEEE 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER) - San Diego, CA, USA (2013.11.6-2013.11.8)] 2013 6th International IEEE/EMBS Conference on Neural Engineering

Resting State Detection for Gating Movement of a Neural Prosthesis

Steven B. Suway 1,2,∗, Rex N. Tien 2,3, S. Morgan Jeffries 4, Zohny Zohny 5,†, Samuel T. Clanton 5,6,‡, Angus J. C. McMorland 4,§ and Meel Velliste 4,7

Abstract— The motor cortex is a promising source of signals for driving prosthetic devices to aid movement-impaired individuals. Successful studies in this area have limited subject control to specific task periods. However, real-world applications will require continuous control, which is problematic when the subject does not intend to produce movement. We have previously shown that patterns of neural activity during rest are different from those during movement. Consequently, the scheme used for kinematic decoding is not appropriate during intended rest, resulting instead in unwanted movement because traditional decoders report non-zero velocity during a resting state. We further showed that the population pattern during rest, dubbed the idle state, is robustly distinguishable from the pattern during movement. Here, we demonstrate that an idle state classifier can be combined with a movement decoder to allow for continuous, naturalistic prosthetic control that allows movement only when the subject intends it.

I. INTRODUCTION

Primate motor cortex is known to host a variety of signals related to volitional arm movement, including direction of reach, hand position, speed, and velocity, among others [1]–[5]. Several efforts have been successful in modeling and decoding these signals to control prosthetic devices aimed at restoring motor function to impaired individuals [6]–[10]. These experiments are typically limited to controlled task trial periods, suppressing output and ignoring neural activity between trials. Additionally, there is no established method for determining the subject’s intention to move. We recently observed that single unit tuning functions change during periods of rest in brain control experiments. As a consequence, while traditional tuning models do reliably decode reach kinematics, they don’t account for rest periods and thus predict non-zero velocity when the subject desires to rest the arm passively. They can therefore be considered valid only during periods of active motor output [11]. Although this is acceptable in a lab setting where prostheses are not controlled continuously, clinically useful applications will need to account for this discrepancy; an effective device should only move when the subject wills it.

1 Center for Neuroscience, University of Pittsburgh
2 Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh
3 Department of Bioengineering, University of Pittsburgh
4 Systems Neuroscience Institute, University of Pittsburgh
5 School of Medicine, University of Pittsburgh
6 The Robotics Institute, Carnegie Mellon University
7 Department of Neurobiology, University of Pittsburgh
∗ Corresponding Author. University of Pittsburgh, E1440 BSTWR, 200 Lothrop Street, Pittsburgh, PA 15213-2536, [email protected]
† Current Affiliation: Department of Neurosurgery, Washington University in St. Louis
‡ Current Affiliation: Department of Physical Medicine and Rehabilitation, Northwestern University
§ Current Affiliation: Sport and Exercise Science, University of Auckland

During resting behavioral states, we observed a novel pattern of population activity that diverges from the assumptions of our traditional tuning models in a highly characteristic fashion. Importantly, this novel pattern is also distinct from that observed during actively maintained hold periods, which are kinematically similar to periods of rest. Offline analyses of ensemble activity are able to distinguish the rest state with near-perfect accuracy [11]. Given the robustness of this phenomenon, we next asked if such a classifier could be combined with our standard model to prevent unintended prosthetic movement during online brain control. Here, we report the results of a three-dimensional brain-controlled reaching task in which kinematics were decoded continuously and a rest state classifier gated prosthetic output. We show that unintended movement was suppressed and, further, monkeys were able to willfully engage the arm by switching to the active state after sitting unengaged and idle.

II. METHODS

Four male monkeys (Macaca mulatta, identified as F, G, H and L) performed a three-dimensional (3D) reaching task using either their actual hand (hand control, HC) or an anthropomorphic robotic arm under brain control (BC). Neural signals were recorded using 96-electrode arrays (Blackrock Microsystems) implanted in the arm area of the primary motor cortex. Raw signals were amplified and filtered online and single units were manually sorted prior to recording using an RZ2 system (Tucker-Davis Technologies). Firing rates were estimated from spike counts observed in 30 ms bins, low-pass filtered with a 15-sample FIR filter (exponential with a sample-to-sample decay constant of 0.95). Hand position during HC sessions was optically tracked (Optotrak 3020, Northern Digital) at 100 samples per second, downsampled to 30 ms bins to match the sampling of firing rates. Position of the robotic arm was sampled at the same rate and low-pass filtered using a Butterworth filter with a cutoff frequency of 5 Hz. Monkeys performed slightly different versions of the task, which was developed progressively as we characterized the idle state. Our training and behavioral paradigms were therefore separated into three distinct phases.
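The binning and smoothing step described above can be sketched as follows. The 30 ms bin width, 15-tap exponential FIR filter, and 0.95 sample-to-sample decay constant come from the text; the unit-sum normalization of the kernel and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def exp_fir_kernel(n_taps=15, decay=0.95):
    """Exponential FIR kernel (15 taps, sample-to-sample decay 0.95).
    Normalized to unit sum so a constant input passes through unchanged
    (an assumption; the paper does not state the normalization)."""
    k = decay ** np.arange(n_taps)
    return k / k.sum()

def smooth_rates(spike_counts, bin_s=0.030, n_taps=15, decay=0.95):
    """Estimate firing rates (Hz) from per-bin spike counts (bins x units),
    then low-pass filter causally with the exponential FIR kernel."""
    kernel = exp_fir_kernel(n_taps, decay)
    rates = spike_counts / bin_s  # spikes per 30 ms bin -> Hz
    # np.convolve is 1-D, so filter each unit's column separately,
    # truncating the 'full' convolution to a causal output of equal length.
    return np.column_stack(
        [np.convolve(rates[:, u], kernel)[: len(rates)]
         for u in range(rates.shape[1])])
```

With a unit-sum kernel, a steady 3 spikes per 30 ms bin settles to 100 Hz once the filter has seen a full window of samples.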

A. Phase I

Monkeys F and G (and initially monkey H) started each daily session by performing BC reaching. Trials were initiated by the monkey pushing a button placed in front of him with his own hand, which triggered a target presentation.

6th Annual International IEEE EMBS Conference on Neural Engineering, San Diego, California, 6 - 8 November, 2013

978-1-4673-1969-0/13/$31.00 ©2013 IEEE


The monkey then moved a robotic arm (WAM arm, Barrett Technologies) under BC to the target, holding the final position for a variable length of time to earn a drink of water. To control the arm, a 3D velocity signal was extracted online from recorded units as previously described [8], [12].

Training data for the idle state classifier were manually labeled. Towards the beginning of each session, there was a period of recording with no target presentation and the experimenter labeled neural samples correspondingly as idle by manually pushing a button when the animal sat still and appeared inattentive to the workspace. Then, during the early part of the brain-controlled reaching session (approximately the first 40 to 100 trials), the experimenter labeled neural samples as active by pushing a different button during reach periods as long as the monkey appeared engaged in moving the arm. A classifier based on linear discriminant analysis (LDA) was then trained using those labels to distinguish the two neural states. The inputs to the LDA were simply the firing rates, which were square-root transformed prior to classifier training to make their distribution more normal [13], [14]. The state classifier was cascaded with the kinematic decoder and used to gate the velocity command sent to the arm. Successful reaches were thus contingent upon the classifier identifying an active population state; movement was immediately halted when the classifier identified an idle state. If the monkey did not resume an active state and complete a reach within a certain period of time, the trial was aborted and a reward was not delivered.
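A minimal sketch of this cascade: a two-class Fisher LDA trained on square-root transformed firing rates, whose score sign gates the decoded velocity command. The square-root transform and the convention that a positive score means "active" follow the text; the closed-form pooled-covariance solution, the small ridge term for numerical stability, and the function names are assumptions for illustration.

```python
import numpy as np

def train_lda(rates_idle, rates_active):
    """Two-class Fisher LDA on square-root transformed firing rates.
    Returns weight vector w and bias b; score = w @ sqrt(rates) + b,
    positive => active, negative => idle."""
    Xi, Xa = np.sqrt(rates_idle), np.sqrt(rates_active)
    mi, ma = Xi.mean(axis=0), Xa.mean(axis=0)
    # Pooled within-class covariance (the shared-covariance LDA assumption);
    # a tiny ridge keeps the solve well-conditioned.
    Sw = np.cov(Xi, rowvar=False) + np.cov(Xa, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), ma - mi)
    b = -w @ (mi + ma) / 2.0  # decision boundary at the class midpoint
    return w, b

def gate_velocity(rates, v_decoded, w, b):
    """Zero the decoded velocity command whenever the classifier reports idle."""
    score = w @ np.sqrt(rates) + b
    return v_decoded if score > 0 else np.zeros_like(v_decoded)
```

In use, the kinematic decoder produces `v_decoded` every 30 ms bin and `gate_velocity` decides whether that command reaches the arm.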

At the end of each daily session, the monkeys performed a short (about 40 to 100 trials) reaching task with their own hand in which they reached and grasped a handle that was manually presented by the experimenter. A water reward was delivered for each successful grasp. Data from this HC task were used to compare the idle and active states during BC vs. HC.

B. Phase II

Experiments in this phase were performed with monkey H, and the BC task was similar to the Phase I task. The difference was how the LDA classifier was trained. While in Phase I training data were collected during the BC task itself, in this phase it was done in a separate HC task. The HC-trained LDA classifier was then used to gate movement in the BC task.

Each day therefore began with a HC session used to identify periods of rest and movement. To initiate a trial, the monkey held his hand steady in a small region of space for a variable length of time, typically about 500 ms. A small, bar-shaped target was then presented by a VS-6577G industrial robot (DENSO International America) at one of 6 locations. The monkey made reaches to the targets and held their position for a variable length of time, typically about 500 ms. Accurate reaches were rewarded with a drink of water. Hand speed was calculated from the sampled position to identify instances of rest and movement. Neural samples were labeled active during successful reach periods when the integrated path length of hand movement over an 11-sample window exceeded 2 mm. Task breaks were imposed by not presenting the target for periods of about 20 s to allow adequate sampling of resting behavior. Monkey H rested his arm such that the optical marker on his hand was not visible to the tracking system, so idle periods were labeled manually by the experimenter pushing a button when the monkey sat still with his arm on an arm rest.

C. Phase III

This phase utilized monkey L and followed the same general procedure as in Phase II, except that labeling of active and idle samples was fully automated based on hand tracking. Labeling was therefore more accurate. Neural samples were labeled active as in Phase II. Unlike Phase II, idle samples were labeled when the windowed path length was below 1 mm. This was possible because the optical tracker was repositioned in this phase to better capture the marker in rest periods. To initiate a trial, the monkey simply rested his hand on the primate chair, eliminating the need to impose task breaks to collect idle state samples. We previously demonstrated that population activity during hold periods corresponds to an active neural state despite their associated rest-like kinematics [11]. Rest state labels were therefore never assigned during hold periods. An additional difference in this phase was that rather than simply reaching to targets, the monkey reached to a drink tube presented at various locations and brought it to his mouth to receive a reward.
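The automated kinematic labeling can be sketched as below: integrated path length over an 11-sample sliding window of hand position, labeled active above 2 mm and idle below 1 mm. The window and thresholds come from the text; treating the intermediate band as unlabeled, and the function names, are illustrative assumptions.

```python
import numpy as np

def windowed_path_length(pos, window=11):
    """Integrated path length of hand movement over a sliding window.
    pos: (samples x 3) positions in mm at the 30 ms bin rate.
    Returns NaN for the first window-1 samples, where no full window exists."""
    steps = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # per-bin step sizes
    path = np.concatenate([[0.0], np.cumsum(steps)])      # cumulative length
    out = np.full(len(pos), np.nan)
    out[window - 1:] = path[window - 1:] - path[: len(pos) - window + 1]
    return out

def label_states(pos, window=11, active_mm=2.0, idle_mm=1.0):
    """Label each sample 'active' (> 2 mm), 'idle' (< 1 mm), or leave it
    unlabeled (None) in the ambiguous band between the two thresholds."""
    pl = windowed_path_length(pos, window)
    labels = np.full(len(pos), None, dtype=object)
    valid = np.flatnonzero(~np.isnan(pl))
    labels[valid[pl[valid] > active_mm]] = "active"
    labels[valid[pl[valid] < idle_mm]] = "idle"
    return labels
```

A stationary hand yields a windowed path length of 0 mm (idle), while steady 1 mm-per-bin motion yields 10 mm over the window (active).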

III. RESULTS

Initial results from Phase I experiments were promising and our procedure seemed to work well with monkey F [15]. Based on our observations, idle state classification during BC appeared to be coupled to the monkey’s level of engagement with the task. When he was attentive to the arm and targets, the classifier reliably reported an active state. Movement halted when he became distracted or unmotivated. When we repeated the same procedures with monkeys G and H, however, these observations were far more ambiguous and did not convince us that our classifier was distinguishing states faithfully. The primary obstacle we identified was lack of ground truth in BC experiments. The animals’ intentions in BC were difficult to infer, and so we struggled to assess accuracy of state labeling. We thus began investigation in HC experiments, where intentions were overtly manifested in subjects’ movements. We found that when monkeys used their own hands for reaching, the distinction between idle and active states was extremely robust [11].

Importantly, monkey F had extensive prior training in other HC tasks in previous experiments, while monkeys G and H did not. We wondered if this could have influenced the neural state during BC, and thus sought to compare state classification between HC and BC sessions. Fig. 1 shows the relationship between two different sets of LDA scores: the one that was used for gating movement in the BC task (BC-LDA) versus one trained offline using labels from HC data (HC-LDA). Positive LDA scores correspond to active state;



Fig. 1. Two-dimensional histogram counts of HC-LDA scores vs. BC-LDA scores from neural data during reaching movements in a phase I BC experiment. Scores are shown for successful (top) and unsuccessful (bottom) reaches for monkeys F and G. Color indicates the relative frequency of observing a given pair of binned HC-LDA and BC-LDA values.

negative scores correspond to idle state. For monkey F, in the case of successful reaches there is a large cluster of samples in the upper right quadrant, indicating that both HC-LDA and BC-LDA detected an active state. A positive BC-LDA was required during these periods by definition of the BC task, but a positive HC-LDA was not necessary. The fact that it did match indicates that monkey F’s pattern of neural activity was very similar in both HC and BC active task periods. Unsuccessful reaches are easily explained by the absence of a consistent BC active state; samples are distributed across all quadrants rather than clustering in the active region. The same analysis for monkey G was markedly different: HC-LDA did not consistently detect an active state during his successful BC reaches as the distribution of scores spans both positive and negative values. This led us to believe that 1) monkey G did not utilize the same strategy as F during active BC, and 2) LDA calibration in HC was an important step in training monkeys to produce robustly distinguishable BC active and idle states. We therefore revised the training and task protocol as described above (Methods, Phase II).

The accuracy of idle/active classification in HC was previously established [11], but to confirm this in the present context, HC data were analyzed from 19 monkey H sessions (phase II) and 4 monkey L sessions (phase III) comprising an average of 34 and 208 simultaneously recorded units, respectively. Five-fold cross-validation was performed for each HC session to evaluate classifier performance. Averaged across sessions and states, classification accuracy was 90.4 ± 2.5% and 96.2 ± 3.5% for monkeys H and L, respectively (mean ± 2 SE). The full confusion matrices for each monkey are given in Table I. These classification accuracies are high considering that instantaneous 30 ms samples were classified rather than trials or long periods of averaged data. The lower accuracy for monkey H (compared to L) can be explained by the fact that idle state labeling was done manually in phase II. Manual labeling is error-prone, and therefore the lower accuracy likely reflects mis-labeling rather than mis-classification.

TABLE I
HC STATE DETECTION CONFUSION MATRICES

                                LDA Prediction
                             Idle            Active
Monkey H:  Actual Idle       84.4 ± 3.5%     15.6 ± 3.5%
           Actual Active      3.5 ± 1.5%     96.5 ± 1.5%
Monkey L:  Actual Idle       98.0 ± 1.3%      2.0 ± 1.3%
           Actual Active      5.6 ± 5.7%     94.4 ± 5.7%
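The five-fold evaluation behind Table I can be sketched as follows. To keep the example self-contained it substitutes a nearest-class-mean classifier on square-root transformed rates for the paper's LDA (a plainly labeled stand-in); the fold structure and the row-normalized 2x2 confusion matrix mirror the table's layout. All function names and the synthetic data in the test are illustrative.

```python
import numpy as np

def nearest_mean_predict(X_train, y_train, X_test):
    """Stand-in classifier: nearest class mean on sqrt-transformed rates.
    (The paper used LDA; this keeps the sketch self-contained.)"""
    means = {c: np.sqrt(X_train[y_train == c]).mean(axis=0) for c in (0, 1)}
    d = np.stack([np.linalg.norm(np.sqrt(X_test) - means[c], axis=1)
                  for c in (0, 1)])
    return d.argmin(axis=0)  # 0 = idle, 1 = active

def cv_confusion(X, y, n_folds=5, seed=0):
    """Five-fold cross-validated 2x2 confusion matrix,
    rows normalized per true state as in Table I."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    C = np.zeros((2, 2))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        pred = nearest_mean_predict(X[train], y[train], X[test])
        for t, p in zip(y[test], pred):
            C[t, p] += 1
    return C / C.sum(axis=1, keepdims=True)
```

Rows then read as the per-state accuracies and error rates reported for each monkey.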

Because ground truth is not knowable in monkey BC, idle and active states must be inferred and so explicit accuracy measures are not possible to compute for those sessions. We therefore relied on several alternative assessments of state classification success during BC for monkeys H and L in phases II and III. First, the monkeys’ levels of engagement were often obvious from observation during the task. For example, monkey H was sometimes spontaneously distracted by the operation of the reward delivery system mounted behind him, causing him to turn his head. Idle state classification was tightly coupled to his looking away from the workspace; active state classification resumed when his gaze returned to the target. This is visualized by compositing short sequences of video frames from the task into single images. Fig. 2A shows the first part of a reach that was interrupted by this type of distraction. In this panel, the monkey was active and so the arm moved continuously. This contrasts with the sequence shown in Fig. 2B, which is composited from several frames while the monkey was looking away, which caused the arm to remain stationary. Fig. 3 shows the position of the arm during a similar trial, drawn in black when the monkey was idle and red when active. The monkey initiated a reach, but then became distracted and idle before completing the trial, causing the arm to cease its movement. When he became active again, he was able to complete the reach and acquire the target. If instead the arm were allowed to drift during the idle period, the monkey’s performance would have decreased because he would have needed to make corrective movements. This phenomenon was not due to some idiosyncratic pattern of activity associated with turning the head: monkeys occasionally became unmotivated and stopped attending to the arm and targets for minutes at a time while still looking in the general direction of the workspace. During these periods, the classifier reliably reported an idle neural state. Conversely, when monkeys decided to resume working, they could apparently do so willfully as the system consistently detected the active state and permitted movement when the monkeys directed attention back to the task.

IV. DISCUSSION

We demonstrated that idle state classification can be successfully applied during a brain control task to prevent unintended movement of a prosthetic arm. This significantly extends our previous finding that idle state classification is highly robust in offline analyses based on hand kinematics. The most successful experiments were those for which the classifier used in BC was trained on neural signals acquired during HC. In this paradigm, monkeys in BC were required to maintain an active state by HC definitions. This suggests that the natural neural correlates of rest and movement do



Fig. 2. Composited images of multiple video frames during a BC trial showing idle state detection for monkey H. Monkey initiated a reach to the orange object in active state (A), then became distracted by the reward system and switched to idle state (B). The blurriness of the arm in A demonstrates its movement during this period. The sharpness of the arm image in B shows that it remained stationary during this period.

[Fig. 3 plot: Distance to Target (m) vs. Time (s) for monkey H; trace marked Active/Idle with labeled points A–E]

Fig. 3. Position of robotic arm during a BC trial with idle detection. Idle portions are rendered in black, active portions in red. A, Start position of arm. Monkey began trial in idle state, then initiated a reach. B, Brief false idle detection, arm kept moving due to inertia. C, Monkey became distracted and switched to idle state, causing arm to stop mid-reach. D, Monkey returned his attention to the task, switched to active state, and resumed the reach. E, Close enough to target for successful trial completion.

not necessarily change when the subject controls a prosthetic device instead of his own hand. The technique works best when it is based on the notion that the two states are natural modes of motor control. From this perspective, the classification is highly intuitive for the subject, making use of physiology already associated with periods of rest and movement. The fact that HC calibration was required for robust BC idle classification is probably a reflection of the inability of monkeys to take verbal task instructions. There is little to suggest that this calibration depends on the sensory feedback that comes with HC; the idle neurophysiology precedes kinematics in HC tasks [11]. It is therefore likely that human patients lacking use of their own limbs will nonetheless be able to utilize such a classifier to continuously control a prosthetic, though efficacy of calibration based on verbal instruction remains to be demonstrated. This is essential since traditional decoders report non-zero velocity during the idle state, which would cause unintended and

potentially dangerous output [11]. We conclude that idle state classification is a valuable and effective addition to neural prosthetic systems, enabling continuous control by utilizing naturalistic and intuitive physiological modes.

ACKNOWLEDGEMENTS

Data collected at the MotorLab at the University of Pittsburgh. Funding support contributed by DARPA W911NF-06-1-0053 and N66001-10-C-4056, and JHU-APL 972352. Support for S.M.J. contributed by NIH T32 HD049307. Support for Z.Z. contributed by NIH R25MH054318-15. Support for S.T.C. contributed by NIH F30N060530.

REFERENCES

[1] A. P. Georgopoulos, J. F. Kalaska, R. Caminiti, and J. T. Massey, “On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex.” J Neurosci, vol. 2, no. 11, pp. 1527–1537, Nov 1982.

[2] A. P. Georgopoulos, R. E. Kettner, and A. B. Schwartz, “Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population.” J Neurosci, vol. 8, no. 8, pp. 2928–2937, Aug 1988.

[3] R. E. Kettner, A. B. Schwartz, and A. P. Georgopoulos, “Primate motor cortex and free arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins.” J Neurosci, vol. 8, no. 8, pp. 2938–2947, Aug 1988.

[4] D. W. Moran and A. B. Schwartz, “Motor cortical representation of speed and direction during reaching.” J Neurophysiol, vol. 82, no. 5, pp. 2676–2692, Nov 1999.

[5] W. Wang, S. S. Chan, D. A. Heldman, and D. W. Moran, “Motor cortical representation of position and velocity during reaching.” J Neurophysiol, vol. 97, no. 6, pp. 4258–4270, Jun 2007.

[6] D. M. Taylor, S. I. H. Tillery, and A. B. Schwartz, “Direct cortical control of 3D neuroprosthetic devices.” Science, vol. 296, no. 5574, pp. 1829–1832, Jun 2002.

[7] L. R. Hochberg, M. D. Serruya, G. M. Friehs, J. A. Mukand, M. Saleh, A. H. Caplan, A. Branner, D. Chen, R. D. Penn, and J. P. Donoghue, “Neuronal ensemble control of prosthetic devices by a human with tetraplegia.” Nature, vol. 442, no. 7099, pp. 164–171, Jul 2006. [Online]. Available: http://dx.doi.org/10.1038/nature04970

[8] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, “Cortical control of a prosthetic arm for self-feeding.” Nature, vol. 453, no. 7198, pp. 1098–1101, Jun 2008.

[9] L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt, and J. P. Donoghue, “Reach and grasp by people with tetraplegia using a neurally controlled robotic arm.” Nature, vol. 485, no. 7398, pp. 372–375, May 2012.

[10] J. L. Collinger, B. Wodlinger, J. E. Downey, W. Wang, E. C. Tyler-Kabara, D. J. Weber, A. J. C. McMorland, M. Velliste, M. L. Boninger, and A. B. Schwartz, “High-performance neuroprosthetic control by an individual with tetraplegia.” Lancet, vol. 381, no. 9866, pp. 557–564, Feb 2013. [Online]. Available: http://dx.doi.org/10.1016/S0140-6736(12)61816-9

[11] M. Velliste, S. Kennedy, A. B. Schwartz, A. S. Whitford, J. Sohn, and A. J. C. McMorland, “Motor cortical correlates of arm resting in the context of a reaching task, and implications for prosthetic control,” In Preparation.

[12] S. Clanton, “Brain-computer interface control of an anthropomorphic robotic arm,” Ph.D. dissertation, Carnegie Mellon University, 2011.

[13] A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, and J. T. Massey, “Mental rotation of the neuronal population vector.” Science, vol. 243, no. 4888, pp. 234–236, Jan 1989.

[14] A. P. Georgopoulos and J. Ashe, “One motor cortex, two different views.” Nat Neurosci, vol. 3, no. 10, pp. 963; author reply 964–965, Oct 2000.

[15] M. Velliste, Z. Zohny, S. Clanton, S. Jeffries, A. J. C. McMorland, J. Sohn, G. Fraser, and A. B. Schwartz, “Toward robust continuous decoding for prosthetic arm control.” Program No. 20.9. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2010. Online.
