University of Florida
Department of Electrical and Computer Engineering
EEL 4665/5666 – Intelligent Machines Design Laboratory

Instructors: A. Antonio Arroyo, PhD, and Eric M. Schwartz, PhD
TAs: Andy Gray and Nick Cox

BADR – Project Final Report

Islam S. Badreldin

Dec 09, 2014

Contents

List of Figures

Abstract

1 Executive Summary

2 Introduction

3 Integrated System and Robot Design
   3.1 Behaviors
   3.2 Mobile Platform
   3.3 Sensors and Actuators
      3.3.1 Obstacle Detection
      3.3.2 Kinematic Control and Obstacle Avoidance
   3.4 Special Sensor
      3.4.1 Wireless Data Acquisition and User Interface
      3.4.2 UDP-to-Serial Packet Design
      3.4.3 Kinematic Extraction Algorithm
   3.5 Board Design Choices
   3.6 Bill of Materials

4 Experimental Layout and Results
   4.1 Brain-control Experiment
   4.2 Side Experiments
      4.2.1 Soldering Custom Perfboards
      4.2.2 Hacking the EEG Headset
      4.2.3 Tethered Connection with the EEG Headset

5 Conclusion and Future Work

Bibliography

Appendix


List of Figures

3.1 Conceptual diagram of system design
3.2 Two different views of BADR
3.3 The Magician Chassis from SparkFun
3.4 TGAM1 module from NeuroSky
3.5 Arduino Leonardo board with Atmel's ATmega32u4

4.1 Experimental setup for the goal-directed task
4.2 Hacking the EEG Headset
4.3 Raw packets captured from the TGAM-based EEG Headset
4.4 Raw EEG captured from the TGAM-based EEG Headset


Abstract

Brain–Machine Interfaces (BMIs) are used to extract information directly from invasive or non-invasive brain signals. BMIs hold the promise of effective neuromotor prostheses for patients with motor disabilities. Over the last few decades, research in neuromotor BMIs has witnessed tremendous progress, with clinical trials enabling tetraplegic patients to control robotic arms using invasive brain signals. Non-invasive brain signals have also been used to extract a finite set of discrete commands to control robotic platforms, and are being adopted for continuous kinematic control in current state-of-the-art research. In this project, I build a Brain-actuated Directional Robot (BADR), a miniature prototype of a brain-actuated wheelchair that uses non-invasive brain signals and the principles of Cognitive Neural Prosthetics.


Acknowledgment

I would like to thank my PhD advisor, Dr. Karim Oweiss, for his continued mentorship and inspiration in the field of Brain–Machine Interfaces. I would also like to thank the course instructors and teaching assistants for their time, effort, and practical guidance in this course project. Many thanks to my wife and colleague, Ms. May Mansy, for sharing her MacBook Air and the EEG sensor with me for use in this project. Finally, many thanks to my former colleague from Cairo University, Mr. Muhammad Hilal, for extensive discussions on mobile app development for iOS devices.


1 Executive Summary

My goal in this project is to design and implement a low-cost Brain-Actuated Directional Robot (BADR). This is motivated by Cognitive Neural Prosthetics applications such as brain-controlled wheelchairs. To this end, I chose the mobile platform to be a two-wheeled platform stabilized by a caster. I chose the processing board for the robotic platform to be a low-cost Arduino Leonardo board that is based on an ATmega32u4 controller of adequate computational power. High-level processing and extraction of desired kinematics were done on a smart computing device, e.g. an iOS-based device. I designed BADR to support two modes of kinematic control: real-time kinematic control and cruise control. In the former, high-level kinematic commands are communicated to the mobile platform, which follows these commands, whereas in the latter, the mobile platform maintains a commanded speed in its current direction. The high-level kinematic commands are specified in terms of linear velocity and rotational velocity. Linear velocity moves the mobile platform forward and backward at varying speeds, whereas rotational velocity steers it left and right. These kinematic commands specify the motion of a unicycle kinematic model. Using a forward kinematic model, the unicycle movement parameters are transformed into differential-drive kinematics that actuate the two wheels of the robotic platform under feedback control. Additionally, the mobile platform is equipped with a 'skirt' of ultrasonic proximity sensors. These sensors are utilized for real-time obstacle avoidance to enable safe navigation in the two modes supported by BADR.

To enable wider use of BADR, I chose a non-invasive brain signal, the electroencephalogram (EEG), to derive high-level kinematics. I 'hacked' a single-electrode EEG headset from a neuro-gaming toy to get access to its serial communication packets and read the EEG-based signals. I chose the attention signal as a cognitive signal that can be volitionally modulated and mapped it to a kinematic signal. I developed a user interface and wireless communication software (Wi-Fi and Bluetooth Low Energy) in Apple's Swift language to control the robot. The complete robot bill of materials met the low-cost goal of ∼$200.

Towards the end of the project, I designed a 'goal-directed' brain-control experiment to showcase that volitional control of the cognitive state is a possible kinematic control mechanism to actuate BADR. Video recordings of trials from this experiment are posted on the Internet.


2 Introduction

I come from a strong software background. Throughout my career, I have mainly worked on algorithmic design, analysis of signals and systems, and software design. Working with hardware has always been a background hobby for me, but I never got involved in a serious hardware project. This course project is a clear opportunity for me to change this! During the past year of my graduate study, I have been working on the analysis of neural signals and the design of Brain–Machine Interfaces (BMIs), particularly on how to extract from invasive brain signals kinematic variables that are suitable to actuate a robot [1, 2]. However, I never got the chance to build a robot myself and experiment with actually driving a robot using the extracted kinematic variables. Consequently, my goal for this course project is to design and implement a Brain-Actuated Directional Robot (BADR)¹. The robotic platform is a mobile differential-drive platform that can change both direction and speed of movement (two degrees of freedom). BADR is designed as a miniature prototype of a brain-actuated wheelchair targeted for low-cost design and production, so that it can be suitable for use in third-world countries.

I designed the robotic wheelchair to have its own degree of autonomy, such as obstacle avoidance and maintaining its direction and speed (a cruise-control mode). My personal experience in BMIs has been with invasive brain electrodes, which require brain surgery to implant. However, for this project I will be working with non-invasive dry-electrode brain sensors to enable a wider use of the device. Non-invasive brain signals such as the electroencephalogram (EEG) have been successfully used in discrete communication prostheses, such as striking a key on a virtual keyboard [3]. The EEG signals lend themselves well to discrete classification. For example, a discrete classification approach has been successfully applied, by utilizing the principle of shared control, to drive a wheelchair [4, 5]. On the other hand, using invasive brain signals, it has been possible over the last few decades to decode continuous kinematic trajectories to drive robotic platforms [6, 7, 8, 9, 10]. More recently, non-invasive brain signals are being slowly adopted for continuous brain-actuated kinematic control of robotic platforms, and this is still a hot topic in non-invasive brain research with very few success stories [11, 12]. In this project, I attempt to carry out a real-time decoding experiment where kinematic trajectories extracted from human EEG are used to actuate a miniature prototype of a robotic wheelchair. This experiment can be classified as a non-invasive Cognitive Neural Prosthetics experiment [13].

¹ BADR is a backronym of the Arabic word badr, which literally translates to full moon, a symbol of beauty and tranquility in the Arabic and Islamic traditions.


3 Integrated System and Robot Design

I designed BADR to be highly modular. Figure 3.1 shows the system design diagram, and Figure 3.2 shows two views of the completed mobile platform. The sensors and actuators are hosted on the chassis, powered by the Arduino Leonardo board, and propelled by two DC brushed motors. The mobile platform encompasses different submodules: (i) a closed-loop kinematic control submodule, (ii) an obstacle-avoidance submodule, and (iii) a Wi-Fi communication submodule. This autonomous robotic platform receives kinematic commands over Wi-Fi from a smart personal device, which 'decodes' the kinematic commands by post-processing EEG-based signals received over Bluetooth Low Energy (LE) from a 'hacked' EEG headset. In the next sections, I give a detailed account of the system design.

Figure 3.1: A conceptual diagram of BADR system design. Data from the EEG headset are streamed wirelessly over Bluetooth LE to a smart personal device programmed in Swift that processes the data and sends kinematic commands to the robotic mobile platform over Wi-Fi. The robotic platform hosts ultrasonic sensors, a motor driver, a Wi-Fi module, and a wheel encoder. (All pictures and logos are copyright of their respective owners.)


Figure 3.2: Two different views of BADR.

3.1 Behaviors

The robot exhibits two main behaviors: real-time kinematic control and cruise control. Additionally, a basic 'obstacle avoidance' behavior is incorporated along with the two main behaviors to enable safe navigation for the robotic wheelchair.

In the real-time kinematic mode, kinematic commands are 'decoded' in real-time from a wirelessly communicated electroencephalogram- (EEG-) based signal. A dry-electrode EEG sensor on the user's head continuously streams EEG-based data to a personal mobile computing platform, such as a smartphone, using wireless communication (Bluetooth LE). The kinematic commands are extracted from the EEG on the mobile computing platform. Then, they are wirelessly communicated in real-time to the control board on the robotic mobile platform, which responds by executing these commands using low-level control, mapping the high-level kinematic variables to low-level angular velocities that actuate the two motors and differentially drive the two wheels. The high-level kinematic variables are linear velocity and rotational velocity. The linear velocity controls the translational (forward/backward) motion of the wheelchair, whereas the rotational velocity controls the angular motion of the wheelchair. Pure rotational velocity commands should ideally result in pure rotational motion, while pure linear velocity commands should ideally result in pure translational motion.

In the cruise-control mode, the robotic wheelchair attempts to maintain a commanded linear speed in its current direction. With obstacle avoidance, the robotic wheelchair is able to autonomously do simple maneuvers while attempting to maintain the desired speed.


Figure 3.3: The Magician Chassis from SparkFun. Photo courtesy of SparkFun and licensed under Creative Commons Attribution-NonCommercial-Share Alike 3.0.

BADR makes use of two levels of processing: a low-level control board that directly interacts with the sensors and actuators on the mobile platform and provides autonomous behavior to the robotic platform, and a high-level board that is responsible for interfacing with the EEG sensor and for streaming kinematic commands to the low-level board.

3.2 Mobile Platform

The robotic mobile platform is a miniature version of a wheelchair-like platform. It has two wheels for propulsion and one caster for stability. I conducted an online survey of the options for constructing the robotic platform. I considered both the option of manual assembly from discrete parts (wheels, motors, etc.) and the option of ready-assembled robotic kits. A third option was found to strike the balance in the price-effort trade-off between these two alternatives. This third option is a manually assembled robotic chassis, the RedBot's Magician Chassis from SparkFun (see Figure 3.3). It comes with a chassis that resembles to some extent a wheelchair chassis, two wheels, a spherical caster, two geared DC brushed motors with optional wheel encoders, and an optional bump sensor. The chassis is also ready for mounting different boards and sensors on it.

3.3 Sensors and Actuators

Ultrasonic (US) proximity sensors [14] in the form of a sensor 'skirt' are used for obstacle avoidance. Two sensors are oriented facing forward, and two more are oriented sideways. These sensors, integrated with the actuation and obstacle-avoidance sub-systems, provide safe navigation for the robotic wheelchair user.

The actuators are two brushed geared DC motors that differentially drive the two wheels of the robotic wheelchair. The Magician Chassis provides 16-tooth rotary gears attached to the wheels and a rotary encoder board to provide real-time feedback about the motors/wheels state and to enable proportional-integral-derivative (PID) control of the motor movement. The motor driver is an H-bridge integrated circuit (SN754410) that is manually soldered on the mobile platform using a custom Arduino Protoshield Bare PCB from SparkFun.

3.3.1 Obstacle Detection

After testing each US sensor individually and getting familiar with its measurement technique, I coded a Finite State Machine (FSM), in the form of a struct, for each sensor, keeping track of its triggering status, echo status, and all the timing. One FSM is instantiated for each sensor, and the measurements are done every 60 milliseconds. I qualitatively tested the sensors by streaming their readings from the Arduino over the USB port to my PC and confirming the measurements by placing my hand in proximity of each sensor or multiple sensors.
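The per-sensor state machine can be summarized by the sketch below. The state names and struct layout are illustrative, timeout handling is omitted for brevity, and the complete version is the triggerSonar()/processSonar() pair in the final Arduino code in the appendix.

// Trimmed sketch of the per-sensor measurement FSM: idle -> trigger pulse ->
// waiting for the echo to start -> timing the echo. Timeouts are omitted here.
enum UsState { US_IDLE, US_TRIGGERING, US_WAIT_ECHO, US_TIMING_ECHO };

struct UsSensor {
  int trigPin, echoPin;
  UsState state;
  unsigned long pulseStartUs, echoStartUs, transitTimeUs;
};

void triggerSensor(UsSensor &s) {          // called every 60 ms from loop()
  if (s.state == US_IDLE) {
    digitalWrite(s.trigPin, HIGH);
    s.pulseStartUs = micros();
    s.state = US_TRIGGERING;
  }
}

void updateSensor(UsSensor &s) {           // called on every pass of loop()
  unsigned long now = micros();
  switch (s.state) {
    case US_TRIGGERING:                    // hold the trigger high for 10 us
      if (now - s.pulseStartUs >= 10) {
        digitalWrite(s.trigPin, LOW);
        s.state = US_WAIT_ECHO;
      }
      break;
    case US_WAIT_ECHO:                     // echo pin rises when the ping leaves
      if (digitalRead(s.echoPin)) {
        s.echoStartUs = now;
        s.state = US_TIMING_ECHO;
      }
      break;
    case US_TIMING_ECHO:                   // echo pin falls when the reflection returns
      if (!digitalRead(s.echoPin)) {
        s.transitTimeUs = now - s.echoStartUs;   // proportional to obstacle distance
        s.state = US_IDLE;
      }
      break;
    default:
      break;
  }
}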

Each US sensor reading is appended to a circular buffer of size four. I implemented the circular buffer and integrated it with each sensor's FSM using C arrays. Further processing is needed to reject the high-amplitude noise present in the US sensor readings. I implemented a median filter that takes the last four readings of each US sensor from the circular buffer, sorts them on the fly using insertion sort [15], and averages the middle two readings to produce the current US sensor obstacle distance measurement.
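The filtering step amounts to the following sketch on a four-sample buffer; the names are mine, and the version actually running on the robot is mfiltUSReadings() in the appendix.

// Four-sample circular buffer plus median filter as described above.
const int BUF_SZ = 4;
unsigned long usBuffer[BUF_SZ] = {0};   // last four echo times (microseconds)
int usIndex = 0;                        // circular write index

void addReading(unsigned long reading) {
  usBuffer[usIndex] = reading;
  usIndex = (usIndex + 1) % BUF_SZ;     // wrap around: circular buffer
}

unsigned long filteredReading() {
  unsigned long tmp[BUF_SZ];
  memcpy(tmp, usBuffer, sizeof(tmp));
  // insertion sort of the four samples
  for (int i = 1; i < BUF_SZ; ++i) {
    unsigned long val = tmp[i];
    int j = i;
    while (j > 0 && tmp[j - 1] > val) { tmp[j] = tmp[j - 1]; --j; }
    tmp[j] = val;
  }
  // averaging the two middle samples rejects single-sample spikes
  return (tmp[1] + tmp[2]) / 2;
}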

3.3.2 Kinematic Control and Obstacle Avoidance

I implemented the robot's kinematic model as explained here. High-level velocity commands are specified using a unicycle kinematic model [16], where one degree of freedom, r, specifies the desired linear motion in revolutions per second, and the other degree of freedom, ω, specifies the desired rotational motion in rotations per second. These commands are then converted using a forward unicycle-to-differential-drive kinematic model [17] into the desired revolutions per second of the two wheel motors. That is,

    vl = r − (1/2) ω (L/R)                                   (3.1)

    vr = r + (1/2) ω (L/R)                                   (3.2)

where vl and vr are respectively the velocities of the left and right wheels, L is the distance between the two wheels, and R is the radius of a wheel.

This kinematic model was implemented in single-precision floating point on the Arduino and calculated every 300 milliseconds.
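For concreteness, the conversion in (3.1)-(3.2) reduces to a few lines of Arduino code. The wheel-base and wheel-radius values below are the ones used in the test code in the appendix; the function name is illustrative.

// Unicycle-to-differential-drive conversion, equations (3.1) and (3.2).
const float WHEEL_BASE_L   = 4.0f;    // distance between the wheels (same units as R)
const float WHEEL_RADIUS_R = 1.25f;   // wheel radius

void unicycleToDiffDrive(float r, float w, float *vl, float *vr) {
  float halfLoverR = 0.5f * WHEEL_BASE_L / WHEEL_RADIUS_R;  // (1/2)(L/R)
  *vl = r - halfLoverR * w;   // equation (3.1): left wheel, rev/sec
  *vr = r + halfLoverR * w;   // equation (3.2): right wheel, rev/sec
}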

For obstacle avoidance, I did automatic fusion of the four sensors' measurements into a high-level kinematic command that drives the robot. Each sensor measurement (x in cm) is transformed into a number between 0 and 1 using the transformation y = 1 − (x/5) / (x/5 + 1).


The transformation is tuned such that the robot gives an early response to any obstacle within 5 cm of the sensors. Each sensor gives a two-degree-of-freedom kinematic command to direct the robot away from the measured obstacle. For simplicity of vector addition, a forward-facing sensor adds the transformed obstacle distance, y, purely in the backward direction to rb, while a side-facing sensor adds y purely in the opposite side direction to ωb. The final obstacle avoidance vector is calculated as the vector sum of all the sensor contributions every 100 milliseconds. This final obstacle avoidance vector is then fused with the desired kinematic control command vector (r0, ω0) dictated by the high-level board. The blended kinematic control command vector (r0 + rb, ω0 + ωb) is then transformed using the forward kinematic model (3.1) into the set points that actuate the motors.
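The fusion step is sketched below. It mirrors the structure of calcNewCommand() in the final Arduino code in the appendix, including the same 0.5 weighting per sensor, but takes distances in centimeters directly rather than raw echo times; the helper names are mine.

// Distance transform and sensor fusion as described above.
float transformUS(float x_cm) {
  float s = x_cm / 5.0f;
  return 1.0f - s / (s + 1.0f);       // y = 1 - (x/5)/(x/5 + 1)
}

void blendAvoidance(float r0, float w0,
                    float distFL, float distFR, float distSL, float distSR,
                    float *rCmd, float *wCmd) {
  float rb = 0.0f, wb = 0.0f;
  rb -= 0.5f * transformUS(distFL);   // front sensors push backward
  rb -= 0.5f * transformUS(distFR);
  wb -= 0.5f * transformUS(distSL);   // side sensors push toward the opposite side
  wb += 0.5f * transformUS(distSR);
  *rCmd = r0 + rb;                    // blended command fed to (3.1)-(3.2)
  *wCmd = w0 + wb;
}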

The set points for the motors' speeds are maintained using PID control. A rising or falling edge from the wheel encoder triggers an interrupt, whose service routine increments a 'tick' counter maintained for each wheel. Every 40 milliseconds, the derivative (first-order difference) of a wheel encoder's 'ticks' is taken as the feedback signal for that wheel's PID controller. The PID controller compares this signal to the desired set point and adjusts the motor's PWM value in order to follow the calculated kinematic commands. PID control was implemented using the Arduino PID Library for its particularly robust implementation that takes into consideration practical details of the control algorithm [18].
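A pared-down version of this loop for one wheel is sketched below, using the same Arduino PID Library calls and gains as the final code in the appendix; the tick-to-speed scaling and the motor-drive step are omitted.

// Encoder-feedback PID loop for one wheel, sampled every 40 ms.
#include <PID_v1.h>

volatile unsigned long ticksRight = 0;          // incremented in the encoder ISR
unsigned long lastTicksRight = 0;
double setpointR = 0, inputR = 0, outputR = 0;  // PID terminals
PID rightPID(&inputR, &outputR, &setpointR, 2, 0.2, 0, DIRECT);

void onRightEncoderChange() { ++ticksRight; }   // fires on every encoder edge

void setup() {
  attachInterrupt(1, onRightEncoderChange, CHANGE);
  rightPID.SetMode(AUTOMATIC);
  rightPID.SetSampleTime(40);                   // 40 ms control period
}

void loop() {
  static unsigned long lastMs = 0;
  if (millis() - lastMs >= 40) {
    lastMs = millis();
    // first-order difference of the tick count = wheel speed estimate
    inputR = (double)(ticksRight - lastTicksRight);
    lastTicksRight = ticksRight;
  }
  rightPID.Compute();                           // library handles its own timing
  // outputR would then be written to the motor PWM pin
}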

3.4 Special Sensor

For this project, I used a non-invasive wireless dry-electrode EEG sensor. After doing a market survey, I found that one of the most economical options is to 'hack' the EEG sensing board of popular neuro-gaming toys. This class of toys makes use of NeuroSky's ThinkGear ASIC Module (TGAM, see Figure 3.4) [19]. It is possible to solder wires onto this module to gain access to its Tx and Rx pins in order to serially communicate with it. I connected a serial-to-wireless Bluetooth LE module (HM-10) to these pins to interface with the TGAM module and read out EEG-based signals. These signals are then post-processed in real-time on the high-level board, and kinematic variables are extracted using an arbitrary mapping from the EEG signal space to the kinematic task space. This setup is a variation on a Cognitive Neural Prosthetics design [6].

3.4.1 Wireless Data Acquisition and User Interface

To interface with the special sensor, I developed a simple user interface (UI) that acts as a 'dashboard' to receive data from the sensor and interface with the robotic platform. The UI scans for Bluetooth LE peripherals using Apple's CoreBluetooth framework and displays them in a list using Cocoa Bindings. First, I managed to complete the wireless data acquisition from the EEG TGAM sensor. It was necessary to use the Core Bluetooth framework from Apple to interface with the Bluetooth LE HM-10 module. Starting from the manufacturer's example iOS 7 code for the HM-10 module [20], I replaced the iOS UI elements written in Objective-C with Swift code written for OS X Cocoa, while re-using the Core Bluetooth code that is portable between iOS and OS X. Next, I wrote a packet parser in Swift by first capturing actual packets from the TGAM module under different operating conditions, and then constructing a Swift playground to parse these packets with reference to NeuroSky's ThinkGear Serial Stream Guide [21]. I made use of Swift's enumerations and the reduce() functionality to quickly calculate summations. With packet parsing, I was able to extract the following data from the EEG TGAM sensor: the Poor Signal indicator, ATTENTION, MEDITATION, and 8 values of EEG power bands.

Figure 3.4: TGAM1 module from NeuroSky. Photo courtesy of the Flickr user Josh DiMauro and licensed under Creative Commons Attribution-Share Alike 3.0 – https://www.flickr.com/photos/jazzmasterson/6345897204/
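The framing and checksum rule of these ThinkGear packets, as I read it from [21] and as implemented in the Swift playground in the appendix, is summarized by the following C sketch; the actual parser is written in Swift, and the function name here is mine.

// ThinkGear packet framing: two 0xAA sync bytes, a payload length, the payload,
// and a checksum byte such that (sum(payload) + checksum) mod 256 == 255.
#include <stdint.h>
#include <stddef.h>

bool tgamPacketValid(const uint8_t *pkt, size_t len) {
  if (len < 6) return false;                                // minimum length used by the parser
  if (pkt[0] != 0xAA || pkt[1] != 0xAA) return false;       // sync bytes
  uint8_t pLength = pkt[2];
  if (pLength >= 170 || len != (size_t)pLength + 4) return false;
  uint8_t sum = 0;
  for (size_t i = 0; i < pLength; ++i) sum += pkt[3 + i];   // sum of payload bytes
  return (uint8_t)(sum + pkt[3 + pLength]) == 0xFF;         // checksum check
}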

I designed the UI to allow me to experiment with both manual control and brain control of the robotic platform. I can manually select the kinematic vector, (r0, ω0), to be commanded to the robot, and the UI is also capable of automatically forwarding kinematics calculated from the EEG-based signals, as well as custom commands, to the robot over Wi-Fi using custom-designed User Datagram Protocol (UDP) packets.

3.4.2 UDP-to-Serial Packet Design

I designed the robotic platform to be capable of receiving and parsing custom-designed UDP packets that dictate the desired kinematic vector, (r0, ω0), and also turn the obstacle avoidance mode on/off. The packet format is [0xAA, 0xAA, LIN, ROT, CHK] for a kinematic packet and [0xAA, 0x55, CMD, ARG, CHK] for a command packet. In both cases, the first two bytes are header bytes to synchronize the packet start in the serial stream. In kinematic packets, the LIN byte is a signed char indicating r0, the desired linear movement speed and direction (0 to +/- 100%), the ROT byte is a signed char indicating ω0, the desired rotational movement speed and direction (0 to +/- 100%), and the CHK byte is the checksum calculated as the unsigned sum of LIN and ROT modulo 256. In command packets, the CMD byte is a char indicating the command type, the ARG byte is the command's argument, and the CHK byte is the checksum calculated as the unsigned sum of CMD and ARG modulo 256. To turn obstacle avoidance on/off, the command is CMD='o' with ARG=0 for off and ARG=1 for on.
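Building these two packet types is straightforward; the sketch below follows the byte layout and checksum rule given above. The sending side is actually written in Swift on the high-level device, so the helper names here are only illustrative.

// Build the 5-byte kinematic and command packets sent over UDP to the WiFly module.
#include <stdint.h>

// Kinematic packet: [0xAA, 0xAA, LIN, ROT, CHK]
void makeKinematicPacket(int8_t linPct, int8_t rotPct, uint8_t out[5]) {
  out[0] = 0xAA;
  out[1] = 0xAA;
  out[2] = (uint8_t)linPct;                 // r0, -100..+100 %
  out[3] = (uint8_t)rotPct;                 // w0, -100..+100 %
  out[4] = (uint8_t)(out[2] + out[3]);      // unsigned sum modulo 256
}

// Command packet: [0xAA, 0x55, CMD, ARG, CHK]; e.g. CMD='o', ARG=1 enables obstacle avoidance
void makeCommandPacket(char cmd, uint8_t arg, uint8_t out[5]) {
  out[0] = 0xAA;
  out[1] = 0x55;
  out[2] = (uint8_t)cmd;
  out[3] = arg;
  out[4] = (uint8_t)(out[2] + out[3]);
}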

I configured the RN-XV WiFly module to work in Access Point mode, providing a self-sustained Wi-Fi network to the smart personal device, and to receive UDP packets and convert them into serial packets for the Arduino. I implemented a packet parser in C on the Arduino that buffers and parses these UDP data and command packets as soon as they arrive and then updates the relevant variables on the Arduino. The packet parser is capable of validating incoming packets, dropping invalid ones, and seeking in the stream to start reading a new valid packet.

3.4.3 Kinematic Extraction Algorithm

The TGAM-based EEG sensor provides a special EEG-based signal known as the ATTENTION signal, computed by a proprietary algorithm developed by NeuroSky that quantitatively measures the cognitive state of attention on a scale of 0-100 [21]. Though the algorithm is proprietary, this signal can be conceptualized as an arbitrary transformation of the EEG signal that outputs another discrete-time signal, z(k), with a one-second resolution. After experimenting with this signal, I decided to use it in a non-biomimetic approach to kinematic extraction, unlike biomimetic BMIs that make use of explicit EEG-to-kinematic transformations designed using supervised machine learning approaches [12]. My approach is a special case of Cognitive Neural Prosthetics [6], where the BMI user is required to volitionally control the cognitive attention state to steer the robotic platform in real-time. I assigned the kinematics extracted from the ATTENTION signal, z(k), to the rotational velocity variable, ω0(k).

The algorithm proceeds as follows. The mean and variance of the ATTENTION signal, z(k), are continuously estimated online. The mean is subtracted from z(k) to make it zero-centered, and the variance is used to bound the 'noise' in this signal. In other words, since the moments of volitional control of attention to extreme values are limited, the 'baseline' attention measured by the signal z(k) is captured quantitatively in terms of its mean and variance. Whenever z(k) is within the vicinity of this baseline activity, the output kinematic signal is zero. That is,

    ω0(k) = 0                          if (z(k) − mz(k))² ≤ s²z(k)
    ω0(k) = α × (z(k) − mz(k))         if (z(k) − mz(k))² > s²z(k)         (3.3)

where mz(k) is the online estimated mean of z(k), s²z(k) is the online estimated variance of z(k), and α is an arbitrary gain that sets the dynamic range of rotational velocities of the robotic platform. The value of α has to be chosen such that the range of ω0(k) stays within [-100%, 100%].
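One possible realization of this mapping is sketched below. The report leaves the online estimator unspecified, so the exponential-moving-average update of the mean and variance, as well as the gain and forgetting-factor values, are my own illustrative choices; the real implementation runs in Swift on the high-level device.

// Online baseline estimation and the thresholded mapping in (3.3).
const float LAMBDA     = 0.05f;  // forgetting factor for the online estimates (assumption)
const float ALPHA_GAIN = 2.0f;   // maps attention deviation to rotational velocity, % (assumption)

float meanZ = 0.0f, varZ = 1.0f; // running estimates of m_z(k) and s^2_z(k)

float attentionToOmega(float z) {
  // update running mean and variance (one ATTENTION sample per second from the TGAM)
  float d = z - meanZ;
  meanZ += LAMBDA * d;
  varZ  += LAMBDA * (d * d - varZ);
  // equation (3.3): suppress deviations inside the baseline band
  float dev = z - meanZ;
  if (dev * dev <= varZ) return 0.0f;
  float omega = ALPHA_GAIN * dev;
  // keep the command within [-100%, 100%]
  if (omega > 100.0f)  omega = 100.0f;
  if (omega < -100.0f) omega = -100.0f;
  return omega;
}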


Figure 3.5: Arduino Leonardo board with Atmel's ATmega32u4. Photo courtesy of the Arduino community and licensed under Creative Commons Attribution-Share Alike 3.0.

3.5 Board Design Choices

For the low-level board, I chose an Arduino Leonardo board (see Figure 3.5) based on Atmel's ATmega32u4 microcontroller unit (MCU). This MCU has enough peripherals to interface with the low-level sensors and actuators, and enough computational power to do real-time control. The high-level board is a mobile computing platform such as the user's smartphone. With the recent unveiling of the new smart watch from Apple and the new Swift programming language, Apple's programming environment is an attractive option for personalized app development, with Apple's hardware providing adequate computational power for non-trivial tasks (e.g. the A7 and A8 microprocessor series and the S1 integrated chip). The Swift language provides an excellent, easy-to-program syntax combined with the computational power of compiled native code. This project's high-level code is implemented in Swift to run on Mac OS X. However, the core code is cross-platform and can be made to run on an iOS device such as an iPhone with minimal changes. I utilized interoperability between Objective-C and Swift as necessary.

3.6 Bill of Materials

I intentionally designed this project on a low budget. The Bill of Materials total cost, excluding the mobile computing device, is ∼$200.

1. Mobile computing platform (provided by the user, price not included)

2. Arduino Leonardo Board (∼$25)

3. Arduino Protoshield Bare PCB (∼$5)

4. Dry-electrode EEG sensor based on TGAM from NeuroSky (provided as a hacked version of the MindFlex toy, ∼$60 for one headset)


5. Bluetooth LE module (SunFounder Bluetooth HM-10 V4.0, ∼$10)

6. Wi-Fi wireless communication module for infrastructure-less networking to communicate with the low-level board (RN-XV WiFly Module with XBee Explorer Regulated Board, ∼$45 for both)

7. Four ultrasonic proximity sensors (HC-SR04, ∼$25 total)

8. The Magician Chassis (from SparkFun), including brushed DC motors, wheels, and wheel encoder board (∼$15)

9. Batteries, Cables, Perfboards, LEDs, Pin Headers, Switches, . . . , etc. (∼$15)


4 Experimental Layout and Results

4.1 Brain-control Experiment

The initial conception of the idea for this project stemmed from my need to experience a BMI experiment myself. The idea of volitional control over cognitive states has always fascinated me, and I designed a simple cognitive control experiment to showcase the possibility of mapping an abstract signal, such as the attention signal provided by the EEG TGAM sensor, into a kinematic signal that actuates a mobile robotic platform. The robot starts in cruise-control mode by maintaining a pure linear velocity r0. Then, using brain control, the subject (myself!) should add a rotational component ω0 to steer the robot left and right. The experiment consisted of daily sessions that lasted about 30 minutes. In each session, self-paced goal-directed trials were attempted. Using recycled cardboard, I set up a 'walled' area of 100×75 inches (see Figure 4.1) where the robot can move under brain control; the task is to move the robot from an arbitrary initial location to a 'target' location indicated by light-weight plastic balls. The target's approximate 'diameter' was 7 inches. Before each session starts, sensor calibration is achieved by letting the subject look at the robot while it is not moving for a couple of minutes. This allows the online mean and variance estimation to work while the subject is in a 'quiescent' state, in order to reliably estimate the baseline of the attention signal without volitional control.

I attempted this experiment at first with the kinematic extraction algorithm in subsection 3.4.3. I was able to exercise volitional modulation over the attention signal. However, the time lag with which this signal 'responded' to my volitional control was too long. This resulted in over-steering the robot to an angle different from my true intention, and I was never able to drive the robot to the 'target' location using brain control.

Figure 4.1: Experimental setup for the goal-directed task.

To remedy this problem and to be able to continue the experiment before the end of the semester, I had to add heuristics to make the algorithm 'work'. First, a temporal smoothing rule was added, where the signal ω0(k) was artificially returned to zero one second after it attained a non-zero value. To be able to push this signal to non-zero values once more, it was required first to drive it back to the zero range by volitionally bringing the attention signal back to its baseline range, (z(k) − mz(k))² ≤ s²z(k). This heuristic helped remedy the prolonged over-steering problem. Another heuristic was added to make the rotational movement to a particular angle more reproducible, by artificially 'clamping' positive and negative ω0(k) values to a predefined rotational speed (e.g. 20%). By adding these two heuristics, the algorithm became effectively equivalent to an adaptive classifier that transforms the attention signal z(k) into three different values of ω0(k):

    ω0(k) = 0                          if (z(k) − mz(k))² ≤ s²z(k)
    ω0(k) = β × sgn(z(k) − mz(k))      if (z(k) − mz(k))² > s²z(k)         (4.1)

where β is a preset rotational speed (e.g. 20%).

At the time of writing this report, I have completed two brain-control sessions, and I was able to hit the target 6 and 8 times in the first and second sessions, respectively.
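The heuristic classifier in (4.1), together with the one-second reset rule, can be sketched as the small state machine below. The exact bookkeeping of the timer and of the re-arming condition is my own reading of the heuristics described above, and the baseline estimates are maintained as in the earlier sketch.

// Heuristic classifier of (4.1) with the one-second reset and re-arming rules.
const float BETA = 20.0f;                   // preset rotational speed (%), the clamping value
float meanZ = 0.0f, varZ = 1.0f;            // baseline estimates, updated as in the earlier sketch

enum SteerState { ARMED, ACTIVE, WAIT_BASELINE };
SteerState steerState = ARMED;
unsigned long activeSinceMs = 0;

float attentionToOmegaHeuristic(float z, unsigned long nowMs) {
  float dev = z - meanZ;
  bool outsideBaseline = (dev * dev > varZ);
  switch (steerState) {
    case ARMED:
      if (outsideBaseline) {                // clamp to +/- BETA, as in (4.1)
        steerState = ACTIVE;
        activeSinceMs = nowMs;
        return (dev > 0) ? BETA : -BETA;
      }
      return 0.0f;
    case ACTIVE:
      if (nowMs - activeSinceMs >= 1000) {  // force the command back to zero after one second
        steerState = WAIT_BASELINE;
        return 0.0f;
      }
      return (dev > 0) ? BETA : -BETA;
    case WAIT_BASELINE:                     // stay at zero until attention returns to baseline
      if (!outsideBaseline) steerState = ARMED;
      return 0.0f;
  }
  return 0.0f;
}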

4.2 Side Experiments

4.2.1 Soldering Custom Perfboards

For someone like me, soldering is definitely an exciting experience. The last time I did any soldering was almost eight years ago! Throughout this project, I soldered custom perfboards for various submodules of the robot, and I also soldered a socket for the H-bridge chip on the Arduino Protoshield and routed all the wires to the Arduino pins. Yet, the most challenging task for my soldering skills was hacking into the MindFlex EEG headset to access the Tx and Rx pins. For all my soldering work, I used a multimeter to do continuity checks on the soldered boards to make sure everything is connected as expected and that there are no fatal short-circuits such as a Vcc-Gnd short-circuit.

Figure 4.2: Hacking the EEG Headset

4.2.2 Hacking the EEG Headset

I soldered four DuPont male-to-female cables to the Tx, Rx, V+, and GND pins of the TGAM module [19]. The pins were very close to each other, and I had to be extremely careful not to short-circuit neighboring pins. I gained access to the TGAM1 module by opening the plastic casing of one MindFlex Duel headset and hacking it. This hack is becoming more and more well known in the EEG neurohacking community [22] and provides a very low-cost single-electrode EEG headset. (And a neurogame to enjoy in your free time!)

To get the cables out of the headset without stressing the solder points, I used a Dremel rotary tool to make a rectangular opening in the plastic casing. I was experimenting with different options, so the opening ended up being a lot bigger than actually needed, but this can be corrected using duct tape. To further reduce the stress on the cables, I glued on supporting foam that would be in contact with the head and protect the cables. The results are shown in Figure 4.2.


Figure 4.3: Raw packets captured from the TGAM-based EEG Headset

4.2.3 Tethered Connection with the EEG Headset

A quick way to test the EEG headset was to use the Arduino Leonardo board directly to communicate serially with the TGAM (a tethered setup). I connected the Tx of the TGAM to the Rx of the Leonardo and vice versa, and gave them a common GND. I did not use level shifters at the time, though ideally one should, as the two boards operate at different voltage levels (3.3V and 5V). The Leonardo board can communicate with the PC using a virtual serial port over USB, freeing up the ATmega microcontroller's hardware serial port. Hence, I used the Leonardo simply as a 'tunnel' to route the serial packets of the TGAM to the host PC. Actually, its role is a little more than just this, as I will explain in the next paragraph. The first test I did was to start the hardware serial port (Serial1) at 9600 baud, read the TGAM packets, and immediately send them to the host PC at 57600 baud. I used a serial terminal (X-CTU) to look at the packets. The TGAM on MindFlex's headset is hardwired to operate at 9600 baud (0x01 mode), sending a packet every second. This packet contains four pieces of information: the POOR SIGNAL QUALITY indicator (0-255, with 0 meaning excellent signal quality), NeuroSky's proprietary ATTENTION and MEDITATION values (0-100, percentage), and the EEG power spectrum in different frequency bands [21]. The MindFlex toys probably use the ATTENTION values to control the game. I looked at the received serial packets in X-CTU and made sure that I could parse them visually by referring to NeuroSky's TGAM communication protocol specifications [21].

Next, I attempted to access the raw EEG signal from this sensor. There are two options to do so: either change the hardwired configuration of the TGAM on the MindFlex headset such that it immediately boots into the desired 0x02 mode [19], or reconfigure the TGAM after booting by sending a 0x02 serial command [21], closing the Leonardo serial port (Serial1), and then opening it again at 57600 baud. Given my moderate soldering skills, I chose the second option, and it worked. I confirmed that I can receive serial packets in X-CTU, as shown in Figure 4.3. The TGAM sends 512 notch-filtered EEG samples per second [21]. Each sample (10 bits represented in 2 bytes) is 'wrapped' in one complete serial packet, making the overall packet size 8 bytes. The RAW EEG samples have a code of 0x80 [21]. While the large packet size represents a large overhead to receive just one sample, it also means you can more easily recover from packet loss, as you would only be losing one sample per packet. It should be noted, too, that the 0x02 serial command must be sent at the baud rate at which the module is currently operating (9600 in my case). Failure to do so renders the TGAM module in a non-usable state until it is power-cycled. In that state, the module keeps sending packets with the code 0x82. This code, however, is not documented in the official documentation from NeuroSky [21].

Figure 4.4: Raw EEG captured from the TGAM-based EEG Headset with the BrainFlex Java software

The final, conclusive step of this experiment was to parse the incoming serial packets, extract the raw EEG samples, and visualize them. For this step, I used the third-party BrainFlex Java software [23]. The software, in the default "MindWave Mobile" mode, parses the TGAM packets, extracts all the information they carry, and displays it. My Arduino code worked seamlessly with this software. For the first time, I was looking at my own EEG recorded and displayed in real-time! My raw EEG signal is shown in Figure 4.4.


5 Conclusion and Future Work

In this course project, I showcased a proof-of-concept, low-cost implementation of a cognitive BMI intended for use as a brain-actuated wheelchair. I had first-hand experience with BMIs for actuating mobile robotic platforms, and I experimented with volitionally controlling my cognitive state. I came up with an arbitrary mapping, from the attention signal to a rotational velocity signal, that suppresses baseline attentional variations and propagates volitionally modulated variations. I had to add heuristics to make the mapping work with limited training time. Through this experience, I realized the high cognitive load on BMI users to achieve brain control. Additionally, the robot construction work took much more time than I originally expected, leaving very little time for further experimentation.

In the future, several extensions can be added. The high-level code can be ported to run on an iOS-based device. Direct processing of the raw EEG signal from the TGAM sensor can be investigated to provide brain-control signals that are less cognitively demanding. With further training, it may be possible to brain-control linear velocity in addition to rotational velocity. More neurofeedback signals can be added, such as a sound signal that changes frequency in proportion to the extracted kinematics. Finally, more advanced sensors such as accelerometers and localization sensors can be added to the robotic platform to enable finer control and automatic navigation.


Bibliography

[1] A. B. Schwartz, D. M. Taylor, and S. I. H. Tillery, "Extraction algorithms for cortical control of arm prosthetics," vol. 11, no. 6, pp. 701–708, Dec. 2001.

[2] J. Becedas, "Brain-machine interfaces: Basis and advances," vol. 42, no. 6, pp. 825–836, Nov. 2012.

[3] B. M. Yu, G. Santhanam, M. Sahani, and K. V. Shenoy, "Chapter 7 - Neural decoding for motor and communication prostheses," in Statistical Signal Processing for Neuroscience and Neurotechnology, K. G. Oweiss, Ed. Oxford: Academic Press, 2010, pp. 219–263.

[4] F. Galan, M. Nuttin, E. Lew, P. Ferrez, G. Vanacker, J. Philips, and J. del R. Millan, "A brain-actuated wheelchair: Asynchronous and non-invasive brain–computer interfaces for continuous control of robots," Clinical Neurophysiology, vol. 119, no. 9, pp. 2159–2169, 2008.

[5] A. C. Lopes, G. Pires, and U. Nunes, "Assisted navigation for a brain-actuated intelligent wheelchair," Robotics and Autonomous Systems, vol. 61, no. 3, pp. 245–258, 2013.

[6] R. A. Andersen, E. J. Hwang, and G. H. Mulliken, "Cognitive neural prosthetics," vol. 61, no. 1, pp. 169–190.

[7] B. M. Yu, G. Santhanam, M. Sahani, and K. V. Shenoy, "Chapter 7 - Neural decoding for motor and communication prostheses," in Statistical Signal Processing for Neuroscience and Neurotechnology, 1st ed., K. G. Oweiss, Ed. Oxford: Academic Press, 2010, pp. 219–263.

[8] L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt, and J. P. Donoghue, "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," vol. 485, no. 7398, pp. 372–375.

[9] J. L. Collinger, B. Wodlinger, J. E. Downey, W. Wang, E. C. Tyler-Kabara, D. J. Weber, A. J. McMorland, M. Velliste, M. L. Boninger, and A. B. Schwartz, "High-performance neuroprosthetic control by an individual with tetraplegia," vol. 381, no. 9866, pp. 557–564.

[10] J. M. Carmena, "Advances in neuroprosthetic learning and control," vol. 11, no. 5, p. e1001561.

[11] J. d. R. Millan and J. Carmena, "Invasive or noninvasive: Understanding brain-machine interface technology [Conversations in BME]," vol. 29, no. 1, pp. 16–22, Jan. 2010.

[12] J. Wolpaw and E. Wolpaw, Brain-Computer Interfaces: Principles and Practice. Oxford University Press, USA, 2012.

[13] R. A. Andersen, E. J. Hwang, and G. H. Mulliken, "Cognitive neural prosthetics," Annual Review of Psychology, vol. 61, p. 169, 2010.

[14] "HC-SR04 data sheet," http://elecfreaks.com/store/download/HC-SR04.pdf, accessed: 2014-10-13.

[15] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, et al., Introduction to Algorithms. Cambridge, MA: MIT Press, 2001, vol. 2.

[16] "Planning algorithms – a simple unicycle," http://planning.cs.uiuc.edu/node660.html, accessed: 2014-10-13.

[17] "Planning algorithms – a differential drive," http://planning.cs.uiuc.edu/node659.html, accessed: 2014-10-13.

[18] "Improving the beginner's PID," http://brettbeauregard.com/blog/2011/04/improving-the-beginners-pid-introduction/, accessed: 2014-12-01.

[19] TGAM1 Spec Sheet, NeuroSky, Mar. 2010. [Online]. Available: http://wearcam.org/ece1766/neurosky_eeg_brainwave_chip_and_board_tgam1.pdf

[20] "HMSoft iOS 7 example code," http://www.jnhuamao.cn/HMSoft_ios7.zip, accessed: 2014-10-27.

[21] "NeuroSky – ThinkGear serial stream guide," http://developer.neurosky.com/docs/doku.php?id=thinkgear_communications_protocol, accessed: 2014-09-28.

[22] "Frontier Nerds – brain hack," http://frontiernerds.com/brain-hack, accessed: 2014-09-28.

[23] "BrainFlex Java software," https://github.com/arpruss/brainflex/releases, accessed: 2014-09-28.


Appendix

The project’s website is currently located at https://sites.google.com/site/badr5666eel/home/. The code snippets listed below are released into the public domain.

Differential Motor Drive Test Code

// this constant won't change:

const int motorRightFwd = 9;

const int motorRightBwd = 10;

const int motorLeftFwd = 5;

const int motorLeftBwd = 6;

const int ledPin = 13; // the pin that the LED is attached to

const float desired_rps=-1; // [-1,1] and 1/-1 is maximum speed forward/backward, 0 is no linear motion

float desired_w=-0.25; // rev/sec

const float half_LoverR=0.5*4/1.25; //1/2*L/R

float angle_r = ( + half_LoverR * desired_w);

float angle_l = ( - half_LoverR * desired_w);

float v_r = desired_rps+angle_r; // rev/sec

float v_l = desired_rps+angle_l;

bool v_r_direction = v_r > 0;

bool v_l_direction = v_l > 0;

unsigned char v_r_command = (unsigned char) (abs(v_r)>1?255:abs(v_r)*255.0);

unsigned char v_l_command = (unsigned char) (abs(v_l)>1?255:abs(v_l)*255.0);

void driveMotor(bool rightMotor,unsigned char v_command,bool directionFwd) {

int motorFwd,motorBwd;

if(rightMotor) {

motorFwd = motorRightFwd;

motorBwd = motorRightBwd;

} else {

motorFwd = motorLeftFwd;

motorBwd = motorLeftBwd;

}

if(directionFwd) {

digitalWrite(motorBwd,LOW);

analogWrite(motorFwd,v_command);

} else {

digitalWrite(motorFwd,LOW);

analogWrite(motorBwd,v_command);

}

}

void setup() {

pinMode(motorRightFwd,OUTPUT);

digitalWrite(motorRightFwd,LOW);

pinMode(motorRightBwd,OUTPUT);

digitalWrite(motorRightBwd,LOW);

pinMode(motorLeftFwd,OUTPUT);

digitalWrite(motorLeftFwd,LOW);


pinMode(motorLeftBwd,OUTPUT);

digitalWrite(motorLeftBwd,LOW);

pinMode(ledPin,OUTPUT);

digitalWrite(ledPin,LOW);

/*Serial.begin(57600);

while (!Serial) {

; // wait for serial port to connect. Needed for Leonardo only

}

Serial.print("right"); Serial.print(v_r_direction?"+":"-"); Serial.print(v_r); Serial.println(v_r_command);

Serial.print("left"); Serial.print(v_l_direction?"+":"-"); Serial.print(v_l); Serial.println(v_l_command);

*/

}

void loop() {

digitalWrite(ledPin,HIGH);

// driveMotor(true,150,true);//analogWrite(motorRightFwd,150);

// driveMotor(false,150,false);//analogWrite(motorLeftBwd,150);

driveMotor(true,v_r_command,v_r_direction);

driveMotor(false,v_l_command,v_l_direction);

}

HC-SR04 Ultrasonic Sensor Test Code

// this constant won't change:

const int trigPin = 2;

const int echoPin = 4;

const int ledPin = 13; // the pin that the LED is attached to

void setup() {

pinMode(trigPin,OUTPUT);

pinMode(echoPin,INPUT);

pinMode(ledPin,OUTPUT);

digitalWrite(ledPin,LOW);

Serial.begin(57600);

while (!Serial) {

; // wait for serial port to connect. Needed for Leonardo only

}

Serial.println("in");

digitalWrite(trigPin,LOW);

}

void loop() {

digitalWrite(ledPin,HIGH);

digitalWrite(trigPin,HIGH);

delayMicroseconds(10);

digitalWrite(trigPin,LOW);

unsigned long duration = pulseIn(echoPin,HIGH,3000UL);

if(duration > 0) {

unsigned long dist = duration/58;

Serial.println(dist);

}

}


TGAM Arduino Tunneling Code

char buf[64];

void setup() {

pinMode(13, OUTPUT);

digitalWrite(13, LOW); // turn the LED off by making the voltage LOW

//Initialize serial and wait for port to open:

Serial.begin(57600);

while (!Serial) {

; // wait for serial port to connect. Needed for Leonardo only

}

// prints title with ending line break

Serial.println("Initializing Serial1"); Serial.flush();

Serial1.begin(9600);

while(Serial1.available() < 1) {

; // wait for TGAM1 to send data at 9600

}

Serial.println("Switching to Raw.."); Serial.flush();

Serial1.write(0x02); Serial1.flush();

Serial1.end();

Serial1.begin(57600);

Serial.println(" Done!");

}

void loop() {

digitalWrite(13, HIGH); // turn the LED on

if(Serial1.available()) {

int len = Serial1.available();

len = Serial1.readBytes(buf,len);

Serial.write(buf,len);

}

/*if (Serial1.available()) {

int inByte = Serial1.read();

Serial.write(inByte);

}*/

}

Final Arduino Code

#include <PID_v1.h>

//#define SERIALON

const int motorRightFwd = 11;

const int motorRightBwd = 10;

const int motorLeftFwd = 6;

const int motorLeftBwd = 5;

const int ledPin = 13; // the pin that the LED is attached to

volatile unsigned long rotLeft = 0; // for Left Wheel ISR

volatile unsigned long rotRight = 0; // for Right Wheel ISR

unsigned long lastRotRight=0;

unsigned long lastRotLeft=0;

double rSetpoint=0, rInput=0, rOutput;

double lSetpoint=0, lInput=0, lOutput;

const int sT = 40; // sampling Time for PID control


PID rPID(&rInput, &rOutput, &rSetpoint,2,0.2,0, DIRECT);

PID lPID(&lInput, &lOutput, &lSetpoint,2,0.2,0, DIRECT);

bool obstacleOn=true;

float transformUS(unsigned long transitTime) { // response at ~2 cm

return 1.0-(transitTime/58/5/(float)(transitTime/58/5+1));

}

float desired_rps=0; // [-1,1] and 1/-1 is maximum speed forward/backward, 0 is no linear motion

float desired_w=0; // rev/sec

const float half_LoverR=0.5*4/1.25; //1/2*L/R

const unsigned long sonarTimeout = 3000; // usec

const int sonar_sz = 4;

struct sonarState {

unsigned long pulseUsec;

unsigned long echoUsec;

int s;

int idx; // circular buffer index

};

struct sonar {

int trig;

int echo;

unsigned long transTimeUsec[sonar_sz];

struct sonarState state;

} sonarFR, sonarFL, sonarSR, sonarSL;

// median filter US readings

unsigned long mfiltUSReadings(struct sonar * snr) {

bool isOdd = sonar_sz % 2 != 0; // odd buffer size? (parity check, so the even case averages the middle two)

int sortIdx = isOdd ? (sonar_sz+1)>>1 : (sonar_sz+2)>>1;

unsigned long tmpArray[sonar_sz];

memcpy(tmpArray, snr->transTimeUsec, sizeof(tmpArray));

// insertion sort until sortIdx

for(int i=1; i<= sortIdx; ++i ) {

unsigned long val = tmpArray[i];

int j=i;

while(j>0 && tmpArray[j-1]>val) {

tmpArray[j] = tmpArray[j-1];

--j;

}

tmpArray[j] = val;

}

return isOdd ? tmpArray[sortIdx-1] : (tmpArray[sortIdx-1]+tmpArray[sortIdx-2])>>1;

}

const int packetLength = 5;

struct WifiSerial {

unsigned char bytesRead, bytes[packetLength];

/*unsigned char h1,h2;

char rps,w;

unsigned char chksum;*/

} wifiSerial;

void driveMotor(bool rightMotor,unsigned char v_command,bool directionFwd) {

int motorFwd,motorBwd;

if(rightMotor) {


motorFwd = motorRightFwd;

motorBwd = motorRightBwd;

} else {

motorFwd = motorLeftFwd;

motorBwd = motorLeftBwd;

}

if(directionFwd) {

digitalWrite(motorBwd,LOW);

analogWrite(motorFwd,v_command);

} else {

digitalWrite(motorFwd,LOW);

analogWrite(motorBwd,v_command);

}

}

void setup() {

pinMode(motorRightFwd,OUTPUT);

digitalWrite(motorRightFwd,LOW);

pinMode(motorRightBwd,OUTPUT);

digitalWrite(motorRightBwd,LOW);

pinMode(motorLeftFwd,OUTPUT);

digitalWrite(motorLeftFwd,LOW);

pinMode(motorLeftBwd,OUTPUT);

digitalWrite(motorLeftBwd,LOW);

pinMode(ledPin,OUTPUT);

digitalWrite(ledPin,LOW);

// Setup Sonar Struct & Pins

sonarFR.trig=9; sonarFR.echo=12; pinMode(sonarFR.echo,INPUT); pinMode(sonarFR.trig,OUTPUT);

memset(sonarFR.transTimeUsec,0,sizeof(sonarFR.transTimeUsec));

digitalWrite(sonarFR.trig,LOW); sonarFR.state.pulseUsec=0; sonarFR.state.echoUsec=0; sonarFR.state.s=0; sonarFR.state.idx=0;

sonarFL.trig=A2; sonarFL.echo=A3; pinMode(sonarFL.echo,INPUT); pinMode(sonarFL.trig,OUTPUT);

memset(sonarFL.transTimeUsec,0,sizeof(sonarFL.transTimeUsec));

digitalWrite(sonarFL.trig,LOW); sonarFL.state.pulseUsec=0; sonarFL.state.echoUsec=0; sonarFL.state.s=0; sonarFL.state.idx=0;

sonarSR.trig=4; sonarSR.echo=8; pinMode(sonarSR.echo,INPUT); pinMode(sonarSR.trig,OUTPUT);

memset(sonarSR.transTimeUsec,0,sizeof(sonarSR.transTimeUsec));

digitalWrite(sonarSR.trig,LOW); sonarSR.state.pulseUsec=0; sonarSR.state.echoUsec=0; sonarSR.state.s=0; sonarSR.state.idx=0;

sonarSL.trig=A0; sonarSL.echo=A1; pinMode(sonarSL.echo,INPUT); pinMode(sonarSL.trig,OUTPUT);

memset(sonarSL.transTimeUsec,0,sizeof(sonarSL.transTimeUsec));

digitalWrite(sonarSL.trig,LOW); sonarSL.state.pulseUsec=0; sonarSL.state.echoUsec=0; sonarSL.state.s=0; sonarSL.state.idx=0;

wifiSerial.bytesRead = 0;

// Setup WiFly Serial

Serial1.begin(9600);

// PID

attachInterrupt(0, incLeft, CHANGE);

attachInterrupt(1, incRight, CHANGE);

rPID.SetMode(AUTOMATIC);

rPID.SetSampleTime(sT);

lPID.SetMode(AUTOMATIC);

lPID.SetSampleTime(sT);

#ifdef SERIALON

// Setup Debug Serial

Serial.begin(57600);

while (!Serial) {

; // wait for serial port to connect. Needed for Leonardo only

}


Serial.println("in");

#endif

}

void incLeft() {

++rotLeft;

}

void incRight() {

++rotRight;

}

void triggerSonar(struct sonar * snr) {

if(snr->state.s==0) { // s=not trig not listen

digitalWrite(snr->trig,HIGH);

snr->state.s = 1; // s=trig not listen

snr->state.pulseUsec = micros();

}

}

void processSonar(struct sonar * snr) {

if(snr->state.s == 1 && (micros() - snr->state.pulseUsec >= 10)) {

digitalWrite(snr->trig,LOW);

snr->state.s = 2;// s=trig and listen

}

if(snr->state.s==2) {

if(micros() - snr->state.pulseUsec >= sonarTimeout) {

snr->state.s = 0;

snr->transTimeUsec[snr->state.idx++]=0;

snr->state.idx = snr->state.idx % sonar_sz;

} else {

if(digitalRead(snr->echo)) {

snr->state.s=3; snr->state.echoUsec = micros();

}

}

}

if(snr->state.s==3) {

if(micros() - snr->state.pulseUsec >= sonarTimeout) {

snr->state.s = 0;

snr->transTimeUsec[snr->state.idx++]=0;

snr->state.idx = snr->state.idx % sonar_sz;

} else {

if(!digitalRead(snr->echo)) {

snr->state.s=0;

snr->transTimeUsec[snr->state.idx++] = micros() - snr->state.echoUsec;

snr->state.idx = snr->state.idx % sonar_sz;

}

}

}

}

void calcNewCommand(float* rps,float* w) {

float y=desired_w, x=desired_rps;

if(obstacleOn) {

unsigned long tVal;

tVal=mfiltUSReadings(&sonarFL);

if(tVal) { x -= transformUS(tVal)*0.5; }

tVal=mfiltUSReadings(&sonarFR);

if(tVal) { x -= transformUS(tVal)*0.5; }

tVal=mfiltUSReadings(&sonarSL);

if(tVal) { y -= transformUS(tVal)*0.5; }

tVal=mfiltUSReadings(&sonarSR);

if(tVal) { y += transformUS(tVal)*0.5; }

}

*w=y;

*rps=x;

#ifdef SERIALON

Serial.print("(");

Serial.print(*w); Serial.print(","); Serial.print(*rps);

Serial.println(")");

#endif

}

void processSerial() {

if(wifiSerial.bytesRead==0) {

// look for \xAA header

int len = Serial1.available();

if (len<1) return;

#ifdef SERIALON

Serial.print("lh-"); Serial.println(wifiSerial.bytesRead);

#endif

for(int i=0; i<len; ++i) {

if(Serial1.read() == 0xAA) {

wifiSerial.bytesRead = 1;

wifiSerial.bytes[0] = 0xAA;

#ifdef SERIALON

Serial.println("h1");

#endif

break;

}

}

}

if(wifiSerial.bytesRead != 0 && wifiSerial.bytesRead < packetLength) {

int len = Serial1.available();

if(len < packetLength-wifiSerial.bytesRead) {

#ifdef SERIALON

Serial.print("rt");Serial.print(wifiSerial.bytesRead);

#endif

return; // wait until we get one complete packet

}

int toRead = (packetLength-wifiSerial.bytesRead);

int startAt = wifiSerial.bytesRead;

for(int i=0; i<toRead; ++i) {

wifiSerial.bytes[startAt+i] = Serial1.read();

++wifiSerial.bytesRead;

}

#ifdef SERIALON

Serial.print("pc-"); Serial.println(wifiSerial.bytesRead);

#endif

}

if(wifiSerial.bytesRead == packetLength) {

// packet validity

bool valid = true;

valid=valid&&(wifiSerial.bytes[0] == 0xAA);

valid=valid&&(wifiSerial.bytes[1] == 0xAA || wifiSerial.bytes[1] == 0x55);

valid=valid&&( (unsigned char)(wifiSerial.bytes[2] + wifiSerial.bytes[3]) == wifiSerial.bytes[4]);

if(!valid) {

#ifdef SERIALON

Serial.println("nv");

#endif


bool found=false; int k;

#ifdef SERIALON

for(k=0; k<packetLength; ++k) { // dump invalid packet

Serial.println(wifiSerial.bytes[k]);

}

#endif

// attempt to recover a partial packet

for(k=1; k<packetLength; ++k) {

#ifdef SERIALON

Serial.println(wifiSerial.bytes[k]);

#endif

if( wifiSerial.bytes[k] == 0xAA ) {

found=true;

#ifdef SERIALON

Serial.println("sv");

#endif

break;

}

}

if(found) {

// copy over a partial packet

#ifdef SERIALON

Serial.println("fv");

#endif

for(int i=0; i+k<packetLength; ++i) {

#ifdef SERIALON

Serial.println(wifiSerial.bytes[i+k]);

#endif

wifiSerial.bytes[i] = wifiSerial.bytes[i+k];

}

wifiSerial.bytesRead=packetLength-k;

return;

}

// not a valid packet and could not recover a partial packet

wifiSerial.bytesRead=0;

#ifdef SERIALON

Serial.println("xxlpxx");

#endif

return;

}

else { // a valid packet

if(wifiSerial.bytes[1] == 0xAA) {// data packet

#ifdef SERIALON

Serial.println("dp");

#endif

desired_rps = (char)wifiSerial.bytes[2]/100.0f;

desired_w = (char)wifiSerial.bytes[3]/100.0f;

wifiSerial.bytesRead=0;

} else if(wifiSerial.bytes[1] == 0x55) { // command packet

#ifdef SERIALON

Serial.println("cp");

#endif

switch(wifiSerial.bytes[2]) {

case 'o': // obstacle avoidance

#ifdef SERIALON

Serial.println("oc");

#endif

obstacleOn = (wifiSerial.bytes[3] != 0);

break;


default:

#ifdef SERIALON

Serial.println("xxupxx");

#endif

break;

}

wifiSerial.bytesRead=0;

} else {

#ifdef SERIALON

Serial.println("xxup2xx");

#endif

wifiSerial.bytesRead=0;

}

}

}

#ifdef SERIALON

Serial.print("xxmp1xx-"); Serial.println(wifiSerial.bytesRead);

#endif

}

unsigned long lasT1=0,lasT2=0,lasT3=0,lasT4=0;

bool v_r_direction,v_l_direction;

void loop() {

float actual_w, actual_rps;

digitalWrite(ledPin,HIGH);

unsigned long loopT = millis();

if(loopT - lasT1 >= 60) { // Trigger a new measurement cycle every 60 msec

lasT1 = loopT;

triggerSonar(&sonarFR); triggerSonar(&sonarFL); triggerSonar(&sonarSR); triggerSonar(&sonarSL);

}

processSonar(&sonarFR); processSonar(&sonarFL); processSonar(&sonarSR); processSonar(&sonarSL);

if(Serial1.available()) {

processSerial();

}

if(loopT - lasT2 >= 100) {

lasT2 = loopT;

calcNewCommand(&actual_rps,&actual_w);

/*#ifdef SERIALON

{

unsigned long tVal;

tVal=mfiltUSReadings(&sonarFR);

if(tVal) { Serial.print("FR "); Serial.print(tVal/58); Serial.println("; "); }

tVal=mfiltUSReadings(&sonarFL);

if(tVal) { Serial.print("\t\tFL "); Serial.print(tVal/58); Serial.println("; "); }

tVal=mfiltUSReadings(&sonarSR);

if(tVal){ Serial.print("\t\t\t\tSR "); Serial.print(tVal/58); Serial.println("; "); }

tVal=mfiltUSReadings(&sonarSL);

if(tVal){ Serial.print("\t\t\t\t\t\tSL "); Serial.print(tVal/58); Serial.println("; "); }

Serial.print("("); Serial.print(desired_rps); Serial.print(","); Serial.print(desired_w); Serial.print(")-");

Serial.print("("); Serial.print(actual_rps); Serial.print(","); Serial.print(actual_w); Serial.println(")");

}

#endif*/

}


if(loopT - lasT3 >= 300) {

lasT3 = loopT;

float angle_r = ( + half_LoverR * actual_w);

float angle_l = ( - half_LoverR * actual_w);

float v_r = actual_rps+angle_r; // rev/sec

float v_l = actual_rps+angle_l;

v_r_direction = v_r > 0;

v_l_direction = v_l > 0;

unsigned char v_r_command = (unsigned char) (abs(v_r)>1?255:abs(v_r)*255.0); rSetpoint = (double)v_r_command;

unsigned char v_l_command = (unsigned char) (abs(v_l)>1?255:abs(v_l)*255.0); lSetpoint = (double)v_l_command;

// driveMotor(true,v_r_command,v_r_direction);

// driveMotor(false,v_l_command,v_l_direction);

}

if(loopT - lasT4 >= sT ) {

lasT4=loopT;

double rateRight = (double)rotRight-lastRotRight;

rInput = rateRight*35.0*50.0/sT;

lastRotRight = rotRight;

double rateLeft = (double)rotLeft-lastRotLeft;

lInput = rateLeft*35.0*50.0/sT;

lastRotLeft = rotLeft;

// prevent the I term from overdriving the robot when it needs to stop

unsigned char rCommand = rSetpoint!=0 ? (unsigned char)rSetpoint : 0;

unsigned char lCommand = lSetpoint!=0 ? (unsigned char)lSetpoint : 0;

driveMotor(true,rCommand>=100?rCommand:0,v_r_direction);

driveMotor(false,lCommand>=100?lCommand:0,v_l_direction);

#ifdef SERIALON

Serial.print("(");

Serial.print(lSetpoint); Serial.print(","); Serial.print(rSetpoint);

Serial.print(")-(");

Serial.print(lInput); Serial.print(","); Serial.print(rInput);

Serial.print(")-(");

Serial.print(lOutput); Serial.print(","); Serial.print(v_l_direction?"F":"B");

Serial.print(")-(");

Serial.print(rOutput); Serial.print(","); Serial.print(v_r_direction?"F":"B");

Serial.println(")");

#endif

}

rPID.Compute();

lPID.Compute();

}

Swift Packet Parser for TGAM [Playground]

// Playground - noun: a place where people can play

import Cocoa

let disconnectArray : [UInt8] = [0xAA,0xAA,0x20,0x02,0xC8,0x83,0x18,0x0F,0x47,0xA4,0x03,0x58,0xD8,0x00,0x8E,0xCA,0x00,0x90,0xDE,0x00,0x68,0x90,0x00,0x58,0x93,0x00,0x15,0x5F,0x00,0x0E,0x2C,0x04,0x00,0x05,0x00,0x0D]

let connectArray : [UInt8] = [0xAA,0xAA,0x20,0x02,0x00,0x83,0x18,0x01,0xC1,0xF6,0x00,0x82,0xF1,0x00,0x42,0x38,0x00,0x67,0x21,0x00,0x3D,0x29,0x00,0x33,0x56,0x00,0x16,0x5B,0x00,0x0B,0x57,0x04,0x5D,0x05,0x40,0xCD]

class TgamParser {

enum TgamCode : UInt8 {

case Sync=0xAA, ExCode=0x55

case SigQual=0x02, Attention=0x04,

Meditation=0x05, Raw=0x80, Crash=0x82, Power=0x83

}

var payload = [TgamCode:Any]()

/* TgamCode.SigQual : 0,

TgamCode.Attention : 0,

TgamCode.Meditation : 0,

TgamCode.Crash : false,

TgamCode.Raw : 0,

TgamCode.Power : [0,0,0,0,0,0,0,0] */

var rawBytes : [UInt8] = []

var pLength = 0

func isValid() -> Bool {

pLength = 0; payload.removeAll(keepCapacity: true) // reset packet info

var valid = true

if rawBytes.count < 6 {return false} // minimum possible packet length in theory

valid &= (rawBytes[0] == TgamCode.Sync.rawValue); if !valid { return false }

valid &= (rawBytes[1] == TgamCode.Sync.rawValue); if !valid { return false }

pLength = Int(rawBytes[2])

valid &= pLength < 170; if !valid { return false }

valid &= (Int(pLength) == rawBytes.count-4); if !valid { return false }

var sum : UInt8 = 0

for val in rawBytes[3...pLength+2] {

sum = sum &+ val

}

valid &= ((sum &+ rawBytes[pLength+3]) == 255)

return valid

}

func parse() {

let payloadBytes = rawBytes[3...pLength+2]

var i:Int=0, exCodeCount=0

while i < pLength {

while payloadBytes[i] == TgamCode.ExCode.rawValue {

++exCodeCount

++i

}

let code = Int(payloadBytes[i++])

if let tgamCode = TgamCode(rawValue: UInt8(code)) {

tgamCode

if code < 0x80 {

payload[tgamCode] = Int(payloadBytes[i++])

} else {

let dLength = Int(payloadBytes[i++])

switch tgamCode {

case .Raw:

assert(dLength==2)

payload[tgamCode] = Int(payloadBytes[i])<<8 + Int(payloadBytes[i+1]) // high byte first

i += 2

case .Power:

assert(dLength%3 == 0)

var powers : [UInt32] = []

powers.reserveCapacity(dLength/3)

var k:Int = 0

while k<dLength {

powers.append(UInt32(payloadBytes[i])<<16 +

UInt32(payloadBytes[i+1])<<8 +

UInt32(payloadBytes[i+2]) )

k+=3

i+=3

}

payload[tgamCode] = powers

default:

break

}

}

}

}

}

}
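
// TGAM framing assumed by isValid(): [0xAA 0xAA length payload... checksum], where
// length < 170 counts payload bytes and the checksum is the inverted low byte of the
// payload sum (hence sum &+ checksum == 255).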

let parser = TgamParser()

parser.rawBytes = connectArray

parser.isValid()

parser.parse()

parser.payload

parser.payload[TgamParser.TgamCode.Power]

parser.rawBytes = disconnectArray

parser.isValid()

parser.parse()

parser.payload

parser.payload[TgamParser.TgamCode.Power]

// incremental byte appending
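
// (the same byte-by-byte framing strategy is reused in ViewController.refresh() below)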

parser.rawBytes = []; var i=0

while !parser.isValid() {

parser.rawBytes.append(disconnectArray[i++])

}

parser.parse()

parser.payload
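
// quick check of Array.reduce, used in the ViewController below to total the EEG band powers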

let x = [1,3,7]

let sum = x.reduce(0) {$0+$1}

sum

Final Swift Code

//

// ViewController.swift

// BlueTest

//

// Created by Islam Badreldin on 10/19/14.

// Copyright (c) 2014 Islam Badreldin. All rights reserved.

//

import Cocoa

struct KinData {

var valid:Bool;

var lin:UInt8;

var rot:UInt8;

}

class ViewController: NSViewController, BTSmartSensorDelegate {

@IBOutlet weak var scanButton: NSButton!

@IBOutlet var peripheralsController: NSArrayController!

@IBOutlet weak var poorLabel: NSTextField!

@IBOutlet weak var uuidLabel: NSTextField!

@IBOutlet var receivedText: NSTextView!

@IBOutlet weak var indicSigQuality: NSProgressIndicator!

@IBOutlet weak var indicMeditation: NSProgressIndicator!

@IBOutlet weak var indicAttention: NSProgressIndicator!

@IBOutlet weak var indicP1: NSProgressIndicator!

@IBOutlet weak var indicP2: NSProgressIndicator!

@IBOutlet weak var indicP3: NSProgressIndicator!

@IBOutlet weak var indicP4: NSProgressIndicator!

@IBOutlet weak var indicP5: NSProgressIndicator!

@IBOutlet weak var indicP6: NSProgressIndicator!

@IBOutlet weak var indicP7: NSProgressIndicator!

@IBOutlet weak var indicP8: NSProgressIndicator!

@IBOutlet weak var attentionNum: NSTextField!

@IBOutlet weak var linMotion: NSSlider!

@IBOutlet weak var chkAlwaysSend: NSButton!

@IBOutlet weak var txtMean: NSTextField!

@IBOutlet weak var txtStd: NSTextField!

var indicators = [NSProgressIndicator]()

enum indicLabel:Int {

case sigQuality=0,meditation,attention,p1,p2,p3,p4,p5,p6,p7,p8

}

let serialGatt = SerialGATT()

var udpSock : Socket? = nil

var kinData = KinData(valid: false, lin: 0, rot:0)

var midpoint = 50.0

var M2 = 225.0

var nSamp : Int = 1

var btConnected = false

var lastRot : Int16 = 0

dynamic var peripherals = [CBPeripheral]()

let parser = TgamParser()

var buffer = [UInt8]()

override func viewDidLoad() {

super.viewDidLoad()

// Do any additional setup after loading the view.

serialGatt.setup()

serialGatt.delegate = self

indicators = [indicSigQuality,indicMeditation,indicAttention,

indicP1, indicP2, indicP3, indicP4, indicP5, indicP6, indicP7, indicP8]

NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: "refresh:", userInfo: nil, repeats: true)

NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: "udpSend:", userInfo: nil, repeats: true)

}
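
// Two 1 Hz timers drive the app: refresh() frames and parses the bytes buffered from the
// BLE link, and udpSend() forwards the latest kinematic command to the robot over UDP.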

override var representedObject: AnyObject? {

didSet {

// Update the view, if already loaded.

}

}

@IBAction func scanDevices(sender: NSButton) {

peripherals.removeAll(keepCapacity: true)

if let activePeriph = serialGatt.activePeripheral {

if activePeriph.state == CBPeripheralState.Connected {

serialGatt.manager.cancelPeripheralConnection(activePeriph)

serialGatt.activePeripheral = nil

}

}

if serialGatt.peripherals != nil {

serialGatt.peripherals = nil

}

serialGatt.delegate = self

println("now we are searching device...");

sender.title = "Scanning..."

NSTimer.scheduledTimerWithTimeInterval(5, target: self, selector: "scanTimer:", userInfo: nil, repeats: false)

serialGatt.findHMSoftPeripherals(5)

}

func scanTimer(timer: NSTimer! ) {

scanButton.title = "Scan"

peripherals.removeAll(keepCapacity: true)

if let periphs = serialGatt.peripherals {

for i in 0..<periphs.count {

if let currPeriph = periphs[i] as? CBPeripheral {

peripherals.append(currPeriph)

}

}

}

}

@IBAction func connectTapped(sender: NSButton) {

if let selectedPeriph = peripheralsController.selectedObjects[0] as? CBPeripheral {

if let activePeriph = serialGatt.activePeripheral {

if activePeriph != selectedPeriph {

serialGatt.disconnect(activePeriph)

} else {

return

}

}

serialGatt.connect(selectedPeriph)

serialGatt.activePeripheral = selectedPeriph

serialGatt.stopScan()

scanButton.title = "Scan"

btConnected = true

}

}

@IBAction func wifiPressed(sender: AnyObject) {

// connect UDP too if not connected

if let sock = udpSock {

return

}

udpSock = Socket()

}

@IBAction func disconnectTapped(sender: NSButton) {

serialGatt.disconnect(serialGatt.activePeripheral)

serialGatt.activePeripheral = nil

btConnected = false

}

// BTSmartSensorDelegate

func peripheralFound(_: CBPeripheral) {

}

func serialGATTCharValueUpdated(uuid: NSString, value: NSData) {

var byteArray = [UInt8](count: value.length,repeatedValue: 0)

value.getBytes(&byteArray, length:value.length)

for thisByte in byteArray {

buffer.append(thisByte)

}

var hexBytes = "" as String

for val in byteArray {

hexBytes += NSString(format:"%2X:", val) as String

}

// Thanks to Swift Soda! http://www.swiftsoda.com/swift-coding/get-bytes-from-nsdata/

let hexBytesNoSpaces: String = hexBytes.stringByReplacingOccurrencesOfString("\u{0020}", withString: "0", options: NSStringCompareOptions.CaseInsensitiveSearch)

receivedText.string = receivedText.string! + hexBytesNoSpaces

}

func setConnect() {

if let activePeriph = serialGatt.activePeripheral {

uuidLabel.stringValue = activePeriph.identifier.UUIDString

}

}

func setDisconnect() {

}

func refresh(timer: NSTimer!) {

if buffer.count > 32 {

parser.rawBytes.removeAll(keepCapacity: true)

while !parser.isValid() {

parser.rawBytes.append(buffer.removeAtIndex(0))

}

parser.parse()

if let val = parser.payload[TgamParser.TgamCode.SigQual] as? Int {

kinData.valid = true

if(val > 50) {

poorLabel.textColor = NSColor.redColor()

kinData.valid = false

} else { poorLabel.textColor = NSColor.blackColor() }

indicators[indicLabel.sigQuality.rawValue].doubleValue = Double(val)/255.0*100.0

}

if let val = parser.payload[TgamParser.TgamCode.Meditation] as? Int {

indicators[indicLabel.meditation.rawValue].doubleValue = Double(val)

}

if let val = parser.payload[TgamParser.TgamCode.Attention] as? Int {

let attVal = Double(val)

indicators[indicLabel.attention.rawValue].doubleValue = attVal

if kinData.valid {

// online estimation of mean and variance
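
// (Welford's online algorithm: midpoint tracks the running mean of the attention value,
// M2 the running sum of squared deviations)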

++nSamp

let delta = attVal - midpoint

midpoint = midpoint + delta * 1.0/Double(nSamp)

M2 = M2 + delta * (attVal - midpoint)

let variance = M2 / Double(nSamp-1)

txtMean.integerValue = Int(midpoint)

txtStd.integerValue = Int(sqrt(variance))

var zeroCentered = attVal - midpoint

var transformed = Int16(abs(zeroCentered*zeroCentered) >= variance ?

zeroCentered * 2 : 0) // dead-band & gain

transformed = transformed>0 ? +30 : transformed;

transformed = transformed<0 ? -30 : transformed;
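
// dead band of one standard deviation; outside it the rotation command saturates at
// ±30, which the Arduino sketch scales back down by 100 on receipt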

if(lastRot == 0) {

attentionNum.integerValue = Int(transformed) }

else {

attentionNum.integerValue = 999*(transformed>0 ? (transformed==0 ? 0 : +1)

: -1)

}

transformed = transformed>=0 ? transformed : 256+transformed

if(lastRot == 0) {

kinData.rot = UInt8(transformed)

}

else {

kinData.rot = 0

}

lastRot = transformed
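
// lastRot latches the previous command, so a non-zero rotation is sent only once per
// attention burst; further turns are suppressed until the attention value falls back
// inside the dead band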

}

}

if let val = parser.payload[TgamParser.TgamCode.Power] as? [UInt32] {

let totalPower = Double(val.reduce(0) {$0+$1})

for i in indicLabel.p1.rawValue...indicLabel.p8.rawValue {

indicators[i].doubleValue = Double(val[i-indicLabel.p1.rawValue])/totalPower*100.0

}

}

}

}

func udpSend(timer: NSTimer!) {

if (kinData.valid && btConnected) || chkAlwaysSend.integerValue == NSOnState {

let rotByte = kinData.rot

let linByte = kinData.lin

let dataBytes : [UInt8] = [0xAA, 0xAA, linByte, rotByte, UInt8( linByte &+ rotByte )]
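
// 5-byte kinematic packet: two sync bytes, linear byte, rotation byte, additive checksum.
// CFDataCreate is asked for 6 bytes while dataBytes holds 5, presumably to pad the
// datagram to the fixed packet length the Arduino sketch expects; the sixth byte is read
// from past the end of the array and is undefined.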

let data = CFDataCreate(nil, dataBytes, 6)

if let sock = udpSock {

sock.send(data)

}

}

}

@IBAction func stopPressed(sender: NSButton) {

if let sock = udpSock {

let dataBytes : [UInt8] = [0xAA,0xAA,0x00, 0x00, 0x00]

let data = CFDataCreate(nil, dataBytes, 6)

sock.send(data)

}

}

@IBAction func obstaclePressed(sender: NSButton) {

if let sock = udpSock {

var obstacleOn=false

if sender.state == NSOnState {

obstacleOn=true

}

let oByte = UInt8( obstacleOn ? 0x01 : 0x00 )
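
// 0x6F is ASCII 'o', the obstacle-avoidance command handled by the Arduino sketch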

let dataBytes : [UInt8] = [0xAA,0x55,0x6F, oByte , UInt8(0x6F &+ oByte)]

let data = CFDataCreate(nil, dataBytes, 6)

sock.send(data)

}

}

@IBAction func linMotionChanged(sender: NSSlider) {

kinData.lin = UInt8(sender.integerValue >= 30 ? sender.integerValue : 0)

if let sock = udpSock {

let linByte = kinData.lin

let dataBytes : [UInt8] = [0xAA, 0xAA, linByte, 0x00, linByte]

let data = CFDataCreate(nil, dataBytes, 6)

sock.send(data)

}

}

@IBAction func rotMotionChanged(sender: NSSlider) {

let val = sender.integerValue

kinData.rot = UInt8(val >= 0 ? val : 256+val)

attentionNum.integerValue = sender.integerValue

if let sock = udpSock {

let rotByte = kinData.rot

let dataBytes : [UInt8] = [0xAA, 0xAA, 0x00, rotByte, rotByte]

let data = CFDataCreate(nil, dataBytes, 6)

sock.send(data)

}

}

}
