
Capstone Design Course Abstracts

Spring 2014


Table of Contents  

PhASE: PhotoAcoustic Schlieren Elastography. Team: Tristan Swedish, Taryn Connor, Stephanie Reimer, Sarah Janiszewski, Emmanuel Llado. Advisors: Professors DiMarzio and Kowalski

CloudFire: Your Next Smart Residential Fire & CO Detector. Team: Casey Corcoran, Jabier Cartagena, Elisa Cheng, Ches Koblents, Marta Skomin, and Sean Zensen. Advisor: Professor Salehi

Cyclops. Team: Ryan Hanley, Aaron Harlap, Dimitri Makris, Matthew McDonald, Charlie Yao, and Richard Zhao. Advisor: Professor DiMarzio

Perfect Parallel Parker. Team: Steve Corbesero, Kyle Eaton, Scott Eaton, Sahil Maripuri. Advisor: Professor Shafai

EMERGE. Team: Bryan McGrane, Brian Kracoff, Zhen Jiang, Kevin Langer, and Lukas Schulte. Advisor: Professor Salehi

Vehicle Heads Up Display. Team: Mike Borgatti, Sideris Angelou, Tony Freitas, Christian Kim, and Matt Lu. Advisor: Professor Meleis

EOG Assisted Communication Device. Team: Spencer Wenners, Colin Sullivan, Jeffrey Mui, Robin Yohannan, Ryan Whyte. Advisor: Professor Meleis

ATLAS: Autonomous Technology for Localization of Acoustic Signals. Team: Dylan Robinson, Ivan Illinsky, Mark Hatch, Nick Robinson, Gabe Diamant. Advisor: Professor Shafai

Portable Analysis of the Hemodynamic Response in the Prefrontal Cortex. Team: Tim Wolfe, Wes Robinson, Josh Pouliot, Mark Haynes, Trong Nguyen. Advisor: Professor DiMarzio

My Front Desk. Team: Chun Au-Yeung, Shu Chen, Huan Chang Wei, Wing Tung Yuen, Luchen Zhang. Advisor: Professor Salehi

Dynatrack. Team: Russ Gunther, Nick Johnson, Kris McGrath, Liam Meck, Craig Predatsch. Advisor: Professor Meleis

Tunnel Inspection Robot. Team: Joe Robinson, Robert Watson, Sam Coe, Matt van Berlo, Josh Johnson. Advisor: Professor Shafai

Lawnster. Team: Denis Ansah, Taylor Marks, Matt Spofford, Mark Tiburu, Tien Vu. Advisor: Professor Salehi

SMILES: Superior Multiple Integrated Laser Engagement System. Team: Calvin Maguranis, Andrew Balow, Kevin Pacheco, Kevin Castro, Tom Kennard. Advisor: Professor Meleis

Autonomous Search Mechanism for Aerial Reconnaissance and Tracking. Team: Scott Goldberg, Nikolas Heleen, Austen Higgins-Cassidy, Daniel McNamara, and Antonio Rufo. Advisor: Professor DiMarzio

EEGµ (EEGmicro). Team: David Crozier, David Karol, Javi Muhrer, Matthew Wood, Alex Zorn. Advisor: Professor Salehi

MAG. Team: Christopher Valek, Matthew Mahagan, Evan Scorpio, Michael Harrington, and Steve Morin. Advisor: Professor Meleis

HD AutoRobots (Heat-Detecting Autonomous Robots). Members: Abraham Miller, Aleen Alferaihi, Ang Shen, Nii Lankwei Bannerman, Siyuan Li. Advisor: Professor Shafai

Urban Facade Thermography. Mechanical Design Team: Kevin Barnaby, Emile Bourgeois, Dan Congdon, Carl Fiester, Cory Martin. Electrical/Computer Design Team: Xian Liao, John Sutherland. Advisors: Professors DiMarzio, Sivak, and Salvas

Plan-It. Team: Jacob Agger, Aaron Cooper, Khoa Duong, Chris Larson & Ben Storms. Advisor: Professor Salehi

BCI-Enabling System. Team: Becker Awqatty, Tim Liming, Zakariah Alrmaih, Rachel Lee, and Chris Campbell. Advisors: Professors DiMarzio and Shafai

Page 4: Capstone Design Abstracts 2014€¦ · amplified to 40 W and controls the LED strobing. Wavefront propagation is captured by changing the offset between the initial laser pulse and

4

PhASE: PhotoAcoustic Schlieren Elastography

Team: Tristan Swedish, Taryn Connor, Stephanie Reimer, Sarah Janiszewski, Emmanuel Llado

Advisors: Professors DiMarzio and Kowalski

Over 35 million Americans are estimated to have corneal diseases, which can lead to vision loss. The current diagnostic methods often detect these diseases only after irreversible damage has been done. Advances are needed in research, modeling, and instrumentation to provide earlier detection of corneal tissue damage. PhASE focuses on research of more effective diagnostic methods and computer modeling of the system's resulting images.

The objective of this project is to provide researchers with a method of characterizing the mechanical properties of the cornea. PhASE seeks to create a system that can measure the shear modulus and pressure-wave modulus of the cornea from an acoustic pressure wave, using purely optical techniques. As this technology is still fairly new, the PhASE system was benchmarked against a material of known mechanical properties: phantoms made of silicone (Sylgard 184) were used to calibrate the system and were tested mechanically for shear and elastic moduli.

The PhASE system operates by utilizing several biomedical optics techniques. First, a pulse generator is used to control a heating laser directed at the cornea phantom to produce an acoustic wave. The acoustic wave front travels at the speed of sound in the phantom (approx. 1500 m/s). The detection system must have temporal resolution of about 100 nanoseconds (ns) and a spatial resolution of at least 100 micrometers (μm) to capture data at distinct instances of the wave propagation. The heating laser produces 10 Watts (W) at the 850 nm wavelength and is integrated with a high-speed pulse generator to pulse for 50 ns durations. This acoustic wave changes the index of refraction in a localized region of the tissue as it propagates. A schlieren imaging system is used to track this pressure wave by stroboscopically imaging the induced gradient in the index of refraction of the tissue. The light source of the schlieren system is a high power, 2000+ lumen LED with a 10 ns rise time. A 100 ns output signal from the pulse generator is amplified to 40 W and controls the LED strobing. Wavefront propagation is captured by changing the offset between the initial laser pulse and the image capture.
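To make the stroboscopic capture scheme concrete, the following sketch (illustrative Python, not the team's actual code; only the physical constants come from the text above) generates the pulse-to-strobe delay schedule and the wavefront position each frame should capture.

```python
# Illustrative sketch of the stroboscopic capture schedule; function names
# and the frame count are assumptions.

SOUND_SPEED = 1500.0   # m/s, approximate speed of sound in the phantom
PULSE_WIDTH = 50e-9    # s, heating-laser pulse duration
STROBE_STEP = 100e-9   # s, delay increment (the required temporal resolution)

def capture_schedule(n_frames, step=STROBE_STEP):
    """Return (strobe delay, expected wavefront travel) for each frame.

    Each frame uses a different offset between the heating pulse and the
    LED strobe, so successive frames sample successive positions of the
    propagating acoustic wavefront."""
    return [(i * step, SOUND_SPEED * i * step) for i in range(n_frames)]

for delay, travel in capture_schedule(5):
    print(f"strobe delay {delay * 1e9:6.0f} ns -> wavefront at ~{travel * 1e6:5.1f} um")
```

Each 100 ns of added delay corresponds to roughly 150 µm of wavefront travel in the phantom.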

The combination of schlieren imaging and computational algorithms should yield the shear modulus and elastic modulus of the sample by finding the velocity of the pressure wave. The main advantage of the PhASE system is that two elastic moduli can be found, which allows for complete mechanical characterization of the tissue. Current technology can only measure one modulus, varies in accuracy, and requires excision of the cornea.
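For an isotropic elastic medium, both moduli follow directly from the measured wave speeds and the material density. The snippet below shows the standard relations; the numeric values are placeholders, not measured Sylgard 184 data.

```python
# Standard isotropic relations: shear modulus mu = rho * v_s^2 and
# P-wave (pressure-wave) modulus M = rho * v_p^2.

def moduli_from_speeds(rho, v_shear, v_pressure):
    """rho in kg/m^3, wave speeds in m/s; returns (shear, P-wave) moduli in Pa."""
    return rho * v_shear ** 2, rho * v_pressure ** 2

# Placeholder, silicone-phantom-like values for illustration only:
mu, M = moduli_from_speeds(rho=1030.0, v_shear=5.0, v_pressure=1500.0)
print(f"shear modulus ~{mu:.0f} Pa, P-wave modulus ~{M:.2e} Pa")
```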


CloudFire: Your Next Smart Residential Fire & CO Detector

Team: Casey Corcoran, Jabier Cartagena, Elisa Cheng, Ches Koblents, Marta Skomin, and Sean Zensen

Advisor: Professor Salehi

In the United States, almost two-thirds of residential fire deaths result from fires in properties without working smoke alarms, and three-fourths of those non-working alarms are battery-powered units with dead, disconnected, or missing batteries. In an effort to prevent similar deaths, CloudFire re-engineers the residential smoke alarm into a smart, all-in-one, customizable temperature, smoke, and carbon monoxide detector with Wi-Fi capability. Through a secured open protocol, CloudFire allows communication between the user and any internet-enabled device without a central hub, and provides the user with accurate quantitative data on the severity of an alarm, adjustable sensitivity settings, and notification of neighbors and emergency personnel when necessary. CloudFire is designed to operate primarily on wall power for reliability, with a backup battery for emergencies. With wireless user notifications, situational data analysis, and a reliable power source, CloudFire can save lives by operating nominally, reducing false positives, and decreasing response time. Controlled physical testing of the prototype yielded positive results, and the device is recommended for household implementation.
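As a rough illustration of the quantitative, adjustable-sensitivity behavior described above, a detector loop might score the three sensor readings and notify only past a severity threshold. The sketch below is hypothetical: every constant, name, and scoring rule is an invented placeholder, not CloudFire's firmware.

```python
# Hypothetical severity scoring across the three sensors.

ALARM_LEVEL = 0.8   # placeholder notification threshold

def severity(temp_c, smoke_ppm, co_ppm, sensitivity=1.0):
    """Map raw readings to a severity score in [0, 1]."""
    worst = max(temp_c / 60.0, smoke_ppm / 150.0, co_ppm / 70.0)
    return min(worst * sensitivity, 1.0)

def tick(readings, notify):
    score = severity(**readings)
    if score >= ALARM_LEVEL:
        notify(score, readings)   # push to the user, neighbors, or emergency services
    return score
```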


Cyclops

Team: Ryan Hanley, Aaron Harlap, Dimitri Makris, Matthew McDonald, Charlie Yao, and Richard Zhao

Advisor: Professor DiMarzio

Cyclops is an automated theft-prevention security device. A major problem with LoJack and other competitors is the appreciable delay between the time of theft and its reporting, because the owner has no way to track and see where their property is 24/7. For example, by the time an owner returns home to find their car stolen, the perpetrator may already have detached the LoJack device. With Cyclops, we bridge this gap by giving the owner continuous monitoring of their valuables, in addition to an immediate alert when theft is detected. Because of the nature of the required information, very little data needs to be transmitted, which allows for a low-cost transmission solution in practical use.

The prototype system utilizes a Raspberry Pi unit with a number of vital hardware components attached. Housed in a custom 3D-printed enclosure are a GPS module, a 3G antenna, an alarm, an accelerometer, and a portable rechargeable battery. This unit is responsible for tracking and detecting any movement of the system to trigger the alarm, as well as using its GPS to record the device's location. The device communicates directly with Cyclops' backend cloud service to record all information and receive lock/unlock commands from the user.

The backend, which stores the data recorded by the physical device, is implemented on Google App Engine and a slew of associated technologies. Using a cloud service as an intermediary solves the problem of intermittent user-to-device communication. The Android application is our front-end service and provides all user functionality. It can remotely arm and disarm the alarm, put Cyclops in a protective mode, and manually trigger an alarm. It can also track the device through an intuitive map interface using location data collected in real time. The most notable feature is the near-instantaneous alert path between Cyclops' hardware reporting a theft and the owner receiving a mobile notification.
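A minimal sketch of the on-device trigger logic might look like the following; it is illustrative Python, with the accelerometer and reporting interfaces as stand-ins for the actual Raspberry Pi code.

```python
import math
import time

# Illustrative theft trigger: when the armed unit sees sustained
# acceleration above a threshold, report to the backend, which pushes a
# mobile notification. read_accel() and report_theft() are stand-ins.

MOVE_THRESHOLD = 1.5   # g, placeholder sensitivity
SUSTAIN_SAMPLES = 5    # consecutive samples required to trigger

def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

def monitor(read_accel, report_theft, armed=lambda: True):
    over = 0
    while True:
        if armed():
            g = magnitude(*read_accel())
            over = over + 1 if g > MOVE_THRESHOLD else 0
            if over >= SUSTAIN_SAMPLES:
                report_theft()   # backend relays a near-instant alert
                over = 0
        time.sleep(0.1)
```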


Perfect Parallel Parker

Team: Steve Corbesero, Kyle Eaton, Scott Eaton, Sahil Maripuri

Advisor: Professor Shafai

Regardless of skill level, a large number of drivers struggle with parallel parking even after years of driving experience. In a stressful, busy environment, performing a maneuver that one rarely practices can be a daunting task. Instead of attempting to improve with more practice, most drivers simply avoid parallel parking altogether. This problem inspired us to create a system that guides drivers who lack the necessary experience through the process of parallel parking. Our assist system consists of two parts. The first is a pair of sensors placed on the right front wheel. One is an accelerometer, used to measure the distance the car travels. The other is an ultrasonic ping distance sensor, used to determine whether a spot is empty and how close the parking car is to the already-parked cars. These sensors connect to a Bluetooth Low Energy module that transmits their measurements to the iPhone app, which forms the other part of our system. The app reads the sensor data via Bluetooth and uses it to guide the driver into the space. All of the math used to determine the car's location and the driver's next step is performed on the iPhone. The app also contains the user interface that, via spoken word, communicates each step of the process to the driver.
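Two of those calculations can be sketched briefly: distance travelled comes from double-integrating the wheel accelerometer, and a gap is accepted when the open stretch seen by the ping sensor exceeds the car's length plus clearance. The sketch below is illustrative; the sample period, thresholds, and function names are assumptions.

```python
# Illustrative sketches of the two quantities the app computes from the
# wheel-mounted sensors.

DT = 0.1  # s, assumed sensor sample period

def distance_travelled(accels, dt=DT):
    """Double-integrate longitudinal acceleration (m/s^2) into distance (m)."""
    v = x = 0.0
    for a in accels:
        v += a * dt
        x += v * dt
    return x

def gap_length(ping_ranges, step_distances, far=2.5):
    """Sum the distance increments logged while the ping range exceeded
    `far` metres, i.e. while the sensor saw open curb."""
    return sum(d for r, d in zip(ping_ranges, step_distances) if r > far)

def spot_fits(ping_ranges, step_distances, car_length, clearance=1.2):
    return gap_length(ping_ranges, step_distances) >= car_length + clearance
```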


EMERGE

Team: Bryan McGrane, Brian Kracoff, Zhen Jiang, Kevin Langer, and Lukas Schulte

Advisor: Professor Salehi

EMERGE, or Exact MEtacarpus-Reflecting GEsture-control, is a 1/100 mm precision, real-time hand-tracking program that uses absolute cylindrical position analysis to intuitively drive a robotic manipulator. The user's hand is tracked using a new gesture recognition device, the Leap Motion, and each motion of the input hand is exactly mirrored by the output manipulator. Thus, the user is free from the constraints of traditional controllers and can control a robotic arm using the world's most intuitive controller: their hand in free space.


Vehicle Heads Up Display

Team: Mike Borgatti, Sideris Angelou, Tony Freitas, Christian Kim, and Matt Lu

Advisor: Professor Meleis

The objective of our Capstone project is to create a vehicle heads-up display (VHUD) using a palm-sized laser projector mounted to the dashboard of an automobile. The display shows pertinent information to the driver, such as GPS data from the driver's cell phone, as well as oil temperature, speed, RPM, and other data obtained from the vehicle's On-Board Diagnostics (OBD-II) port.

Our VHUD enables a driver to maintain focus on the road while viewing a cell phone's GPS application and other key vehicle information. This is much safer than holding a cell phone in one's hand or lap and regularly glancing down while driving. In addition, many users prefer the map application on their own cell phone to a conventional GPS unit they are unfamiliar with. A conventional GPS is limited in functionality and requires the user to plug it into a computer for map or firmware updates; since our VHUD uses Google/Apple Maps, the maps are constantly updated, ensuring accurate directions. Further, our product is easily set up and removed, a truly portable solution to the growing need for HUDs in the driving community. It allows this freedom while also increasing safety, at a relatively low cost of $400; similar setups, including those found in new luxury cars, cost upwards of $2,000.

Our projected display consists of two halves: the mirrored phone image on the left and the OBD data on the right. Using the application AirServer, our Intel NUC becomes an AirPlay receiver, so any iOS device can be streamed to it. An ad-hoc wireless network, created with a Wi-Fi card, lets the iOS device connect to the NUC; once connected, the iOS device's screen is mirrored on the display. This allows the user to see the phone screen in a heads-up format. Useful information such as Google Maps, GPS directions, incoming texts, phone calls, or emails can be shown at eye level to keep the driver from looking down, allowing greater focus on the road. The right side of the display shows OBD data from the car. This information is sent from a Bluetooth OBD adapter (ELM327) paired to the NUC's internal Intel Bluetooth card. The OBD program is based on an open-source project called OBD-II Manager, released on CodePlex by Microsoft; its backend is written in C#, while the graphical user interface (GUI) uses XAML. We modified the open-source project to gather the most important data and display it at our projector's resolution. Data that can be read and displayed from the car includes instantaneous speed, RPM, MPG, turn signal inputs, oil temperature, and fuel capacity. The VHUD also comes with a custom converter to power the projector and Intel NUC from the vehicle's 12 V socket. Our power supply consists of two parts to satisfy the power needs of our hardware: a buck converter (stepping 12 V down to 5 V) for the laser pico projector, and a boost converter (stepping 12 V up to 19 V) for the NUC. Both converters draw power from a single 12 V socket like the one found in a standard vehicle.


Our product can be installed in the user's vehicle with little effort: the user simply mounts the projector unit onto the dashboard and plugs in its power cord. From there, a Wi-Fi connection between the cell phone and the Intel NUC is established over the ad-hoc Wi-Fi network, which enables the user to control everything through voice commands. For OBD-II information, the driver attaches the Bluetooth OBD adapter to the vehicle's OBD-II port and mounts the screen onto the windshield or dashboard for better viewing.
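For illustration, reading the speed and RPM PIDs from an ELM327 adapter follows the standard OBD-II request/response pattern sketched below in Python with pyserial. The serial port name and the minimal response parsing are assumptions; the scaling formulas are the standard OBD-II ones (RPM = ((A*256)+B)/4, speed = A km/h).

```python
import serial  # pyserial

def obd_query(port, cmd):
    """Send one ELM327 command and return the hex byte tokens of the reply."""
    port.write((cmd + "\r").encode())
    raw = port.read_until(b">").decode(errors="ignore")
    return [tok for tok in raw.split()
            if len(tok) == 2 and all(c in "0123456789ABCDEF" for c in tok)]

# "/dev/rfcomm0" is an assumed Bluetooth serial port name.
with serial.Serial("/dev/rfcomm0", 38400, timeout=1) as elm:
    obd_query(elm, "ATE0")               # echo off
    rpm = obd_query(elm, "010C")         # mode 01, PID 0C: engine RPM
    spd = obd_query(elm, "010D")         # mode 01, PID 0D: vehicle speed
    a, b = int(rpm[-2], 16), int(rpm[-1], 16)
    print("rpm:", (a * 256 + b) / 4, " speed (km/h):", int(spd[-1], 16))
```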


EOG Assisted Communication Device

Team: Spencer Wenners, Colin Sullivan, Jeffrey Mui, Robin Yohannan, Ryan Whyte

Advisor: Professor Meleis

There are currently about 50,000 people with locked-in syndrome (LIS) in the United States. Locked-in syndrome is a condition that results in complete paralysis with the exception of the eyes, severely reducing the patient's ability to communicate with the outside world using conventional methods. The communication methods currently available to these patients are limited, time consuming, and dependent on other individuals.

There are currently three major communication techniques for individuals with locked-in syndrome: video eye-tracking, partner-assisted scanning, and direct brain interfacing (EEG). All of these methods have drawbacks. Video eye-tracking is expensive and needs a well-lit room to function correctly. Partner-assisted scanning is time consuming and requires the work of an assistant. Direct brain interfacing (EEG) is invasive, expensive, and has a steep learning curve. Our solution is based on the principles of electrooculography (EOG). A dipole is a separation of negative and positive electrical charges, and the human eye is an electric dipole: the front of the eye is the positive pole and the rear is the negative pole. EOG measures the voltage potential between these two poles using electrodes positioned on the user's face, yielding a signal, the electrooculogram, that tracks eye movements.

Our system consists of four major entities: the user, the EOG circuit, the signal processing unit, and the graphical user interface (GUI). The user is an individual with severe motor impairment resembling locked-in syndrome. The user wears five electrodes on their face: one reference electrode, two electrodes for the vertical channel, and two for the horizontal channel. The signal measured directly from the electrodes has a small voltage magnitude, usually in the microvolt range. As a result, the electrodes are attached to an EOG circuit that amplifies the signal and filters out noise. The signal processing unit translates the output of the EOG circuit into eye movements: up, down, and blink from the vertical channel, and right and left from the horizontal channel. The signal processing unit sends the eye movements over a USB serial connection to the GUI on a laptop or desktop. The GUI uses these inputs to move the cursor around an on-screen keyboard and select a highlighted key; it also offers a save-to-text-file feature and a text-to-speech feature.
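The translation step can be sketched as a simple threshold classifier on the two amplified channels. This is illustrative only: real thresholds come from per-user calibration, and the blink heuristic (a short, large positive spike on the vertical channel) is an assumption.

```python
# Illustrative EOG movement classifier; all thresholds are placeholders.

V_THRESH = 0.5        # normalized units after amplification
H_THRESH = 0.5
BLINK_SPIKE = 1.2     # blinks appear as a brief, larger vertical spike

def classify(vert, horiz):
    """Map one (vertical, horizontal) sample pair to a movement token."""
    if vert > BLINK_SPIKE:
        return "blink"
    if vert > V_THRESH:
        return "up"
    if vert < -V_THRESH:
        return "down"
    if horiz > H_THRESH:
        return "right"
    if horiz < -H_THRESH:
        return "left"
    return "center"
```

Tokens like these are what the GUI would consume over the USB serial connection.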


ATLAS: Autonomous Technology for Localization of Acoustic Signals

Team: Dylan Robinson, Ivan Illinsky, Mark Hatch, Nick Robinson, Gabe Diamant

Advisor: Professor Shafai

We propose an affordable audio localization and alert system using a scalable mesh network of sensor nodes. Our solution involves work in the fields of digital signal analysis, mesh networking, embedded systems, and server-client architecture. Our team consists of five electrical and computer engineers with experience in all of these fields.

ATLAS is much cheaper than current systems in both purchase price and maintenance. Each node is an embedded device with a microphone array inside an enclosure. We use digital signal processing algorithms to analyze the audio signals from each node's microphone array and estimate the location of a sound event. The nodes communicate by passing messages through each other, forming a scalable mesh network, and share information to increase localization accuracy. The network scales easily and provides internet access to each node.
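One building block of that signal processing is estimating the time difference of arrival (TDOA) between two microphones by cross-correlation, as in the sketch below. This is a generic method consistent with the description, not necessarily the team's exact algorithm.

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Delay of sig_b relative to sig_a, in seconds (positive: b lags a).

    Cross-correlate the two microphone signals and take the lag of the
    peak; TDOAs across mic pairs (and across nodes) localize the source.
    """
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Example usage (assumed 48 kHz capture): delay = tdoa(mic0, mic1, fs=48000)
```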

The backend server functions as a command-and-control device that stores crucial information and can perform further audio analysis if needed. It also provides a front-end user interface (a dynamic web page) where users can view nodes and events on a map. Initially, our team set its sights on gun violence; however, we believe our project has many applications beyond gunshot detection. Our system can be adapted to work with any audio profile, allowing for a flexible platform with many potential applications. For example, the system could be modified to track endangered animals by their unique sound profiles. Our goal is to provide a wide range of users with the ability to simply and inexpensively configure and deploy an acoustic sensor network for their particular needs.


Portable Analysis of the Hemodynamic Response in the Prefrontal Cortex

Team: Tim Wolfe, Wes Robinson, Josh Pouliot, Mark Haynes, Trong Nguyen

Advisor: Professor DiMarzio

Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that has grown in popularity over the past few decades, thanks to its cost effectiveness and portability compared with techniques such as MRI, PET, or EEG. NIRS exploits the hemodynamic response, through which the relative concentrations of oxygenated and deoxygenated hemoglobin are measured. The technique rests on the fact that light in the near-infrared spectrum (650-900 nm) passes through water, bone, and tissue while being absorbed by hemoglobin. Conveniently, the isosbestic point of hemoglobin (Hb), where oxy-Hb and deoxy-Hb have the same absorption coefficient, lies in this region at around 800 nm. Therefore, by selecting two wavelengths, one above and one below this point, we can calculate relative hemoglobin concentrations from changes in the light reaching the detector. This information reflects levels of brain activity. When used as an imaging technique, NIRS typically requires large arrays of light sources and detectors, which greatly increases cost and complexity of analysis; with a small number of sources and detectors, however, it is still possible to measure relative blood oxygenation levels in the brain.

The goal of our project is to create a portable, Bluetooth-enabled, cost-effective device for taking real-time field measurements of the hemodynamic response in the forebrain. Using a smartphone as the interface, a subject performs a mathematical or memorization task known to stimulate the prefrontal cortex. As the subject performs the task, a processor on our device controls the LEDs and measures the light intensity at an infrared sensor. The data is pre-processed and transmitted via Bluetooth back to the phone, where further computation outputs the relative changes in concentration of oxy-Hb and deoxy-Hb. These results are plotted on a graph presented to the user for visual analysis.

The driving idea behind our design is a device that can be used on an individual who has sustained a head injury as a way to quantify concussion symptoms. Concussions are one of the most common sports injuries, and they often go underreported and underdiagnosed. Typically, a concussion is diagnosed either by observation of physical symptoms (which can differ greatly between individuals) or through a costly hospital visit and brain scan, leaving a need for a portable device that can provide effective real-time analysis of these injuries. Given that a concussion reduces cerebral blood flow and that our device measures blood flow up to a few centimeters deep into the frontal lobe, we hypothesize that it should be possible to detect a statistically significant difference in blood oxygenation levels between the brains of healthy and concussed individuals.
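The core two-wavelength computation is the modified Beer-Lambert law: the change in optical density at each wavelength is a weighted sum of the two concentration changes, giving a 2x2 linear system. The sketch below uses placeholder coefficients; real extinction coefficients and path-length factors come from published tables, and this is not the team's calibrated code.

```python
import numpy as np

# Placeholder extinction coefficients [eps_HbO2, eps_HbR] at two
# wavelengths straddling the ~800 nm isosbestic point.
E = np.array([[0.69, 3.84],    # at ~730 nm (placeholder values)
              [1.10, 0.78]])   # at ~850 nm (placeholder values)
PATH = 7.0                     # effective path length (distance x DPF), placeholder

def delta_hb(i_baseline, i_task):
    """Changes in [HbO2, HbR] from detected intensities at the two wavelengths.

    Modified Beer-Lambert: delta_OD(lambda) = (eps . delta_c) * PATH,
    so delta_c solves the 2x2 system below."""
    d_od = -np.log(np.asarray(i_task) / np.asarray(i_baseline))
    return np.linalg.solve(E * PATH, d_od)
```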


Although our project does not delve into the diagnosis of brain injuries, we have created a portable, cost-effective device that can be used with little to no training to measure the hemodynamic response in the prefrontal cortex. We believe that misdiagnosis of mild traumatic brain injury is an important issue, and by approaching the problem in a new way we hope to generate interest in developing modern forms of diagnosis.


My Front Desk

Team: Chun Au-Yeung, Shu Chen, Huan Chang Wei, Wing Tung Yuen, Luchen Zhang

Advisor: Professor Salehi

My Front Desk is a facial-recognition self-service receptionist. Many public locations need receptionists: hospitals, gyms, school dormitories, office security desks, and so on. Most of these high-traffic places employ human receptionists to welcome guests and provide services. However, visitors who only want to accomplish simple tasks often must wait in the same line as everyone else, because receptions offer no such priority arrangement. This makes the overall reception system inefficient in terms of time, money, and service quality. Moreover, some visitors have privacy concerns when communicating with human receptionists, since sensitive personal information may come up that they do not want to share. To address these problems, we developed a software system to assist the receptionist: visitors can log into the self-service receptionist and perform tasks themselves, such as checking in for appointments or getting directions to certain locations.

The prototype combines facial recognition software with a user interface for tasks. OpenCV provides the facial recognition capability, which verifies a face against at least one picture previously taken of the user; once a person is registered, face feature matching and tracking are performed. Qt Creator was used to develop the user interface, which aims to serve visitors in a simple, easy way. An SQLite database stores our clients' personal information and their respective histories.
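As an illustration of the OpenCV step, a local-binary-pattern (LBPH) recognizer, available in the opencv-contrib build, can be trained on the registered pictures and queried per frame. The sketch below is a generic example, not the project's exact pipeline; the distance threshold is an assumption.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python for the cv2.face module.
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def train(samples):
    """samples: list of (grayscale face image, integer client id)."""
    faces = [img for img, _ in samples]
    labels = np.array([label for _, label in samples])
    recognizer.train(faces, labels)

def identify(frame, max_distance=70.0):
    """Return the matched client id, or None if no confident match."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.2, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < max_distance:   # lower distance means a better match
            return label              # key for the SQLite client lookup
    return None
```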

As a simple example, suppose a man wants to check in for a booked appointment at a hospital. He walks up to My Front Desk and logs in with his ID and a facial recognition check. He is led to his personal page, which shows his recent activities and appointment details. Selecting the appointment he booked and pressing the 'check in' button makes the system send him an email confirmation with details and directions to where he is going.

So far, the device serves as a receptionist's assistant, since it provides a limited set of services and is not aimed at first-time visitors. Our goal remains to give all of our clients an efficient, easy, and private experience.


Dynatrack

Team: Russ Gunther, Nick Johnson, Kris McGrath, Liam Meck, Craig Predatsch

Advisor: Professor Meleis

The Dynatrack is a video-conferencing tool that helps bring personal conversation into video meetings that may span vast distances. Our product looks to eliminate the awkward feel of a corporate video conference, where a group of people lean over a table to view a distant TV monitor, only to see another group doing the same. In this scenario, when listening to a remote speaker, it is difficult to see exactly who is speaking and who is being addressed, and it can be frustrating when one intends to communicate with a specific person but the screen displays a sea of indistinguishable faces. Our project gives employees the ability to have face-to-face communication with a person who is not in their conference room, while also providing the adaptability to change focus to another team member so that a conversation can continue between the intended participants. We have developed a device that emulates sitting at a remote conference table: a mobile monitor, microphone, and camera that stand in for a person's face, ears, and eyes. When prompted, the device turns to focus on whoever is speaking in a meeting and maintains that focus until prompted to change again, much as a person naturally would. When being spoken to, the remote members see the local user on the monitor as if he or she held a seat at their table. This facilitates more natural face-to-face conversation, whether with a person using the same device on the other end or with a group seeking to address one person.

Our design comprises a monitor, displaying the video feed of the remote meeting room, and a webcam/microphone recording the local user. These are incorporated into a rotating tracking system that turns to face the direction of the speaker's voice and then further turns and tilts to center on the speaker's face as a form of fine tuning. Once voice tracking is enabled (with a momentary switch), speaking causes the monitor to rotate toward the sound source. This is accomplished by triangulating the source angle using three microphones, which constantly record audio and convert each stream into a digital signal fed to an Arduino. Once all three amplitudes break a predetermined volume threshold, the source angle is calculated trigonometrically from the speed of sound, the physical mic positions, and the times at which the threshold was broken. A Raspberry Pi then receives this value and instructs the motor to turn. Once rotated, the device continuously centers its camera's view on the nearest face; face tracking is provided by the OpenCV library, which queries the webcam for data and provides methods to manipulate it. Other existing products provide a communication service like Skype, but either rely on one large TV at the end of the room or require that everyone bring a laptop into the meeting. Both implementations are impersonal, confusing, and obtrusive, and they can be expensive and time consuming, requiring setup for every meeting. Our product is dedicated to focusing on one speaker at a time.


These conversations are more personal, and they make it much less awkward to hold up a model or show a drawing or sketch. The Dynatrack is also far more cost-effective than setting up a room with TVs, sound equipment, and webcams, or purchasing software licenses for everyone's laptop. Another advantage of the Dynatrack is its easy, one-time setup: because it uses a dedicated computer, there is no longer a need to hook up a laptop in the conference room to make a call. You simply place the Dynatrack on the table, turn it on, and make the conference call without complications or additional equipment.
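The threshold-time triangulation described above can be sketched as a small plane-wave fit; the microphone geometry and names below are illustrative assumptions, and the team's exact math may differ.

```python
import numpy as np

C = 343.0  # m/s, speed of sound in air

# Assumed mic positions (metres), roughly an equilateral triangle, 10 cm sides.
MICS = np.array([[0.00, 0.00],
                 [0.10, 0.00],
                 [0.05, 0.087]])

def source_bearing(t):
    """Bearing (radians) of a far-field source from the times t[i] at
    which each mic's amplitude crossed the volume threshold.

    A plane wave travelling along unit vector u reaches mic i at
    t[i] = t0 + (r_i . u) / C; differencing against mic 0 gives a 2x2
    linear system for u."""
    A = MICS[1:] - MICS[0]
    b = C * (np.asarray(t[1:]) - t[0])
    u = np.linalg.solve(A, b)
    u /= np.linalg.norm(u)             # keep only the direction
    return np.arctan2(-u[1], -u[0])    # the source lies opposite the travel direction
```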


Tunnel Inspection Robot

Team: Joe Robinson, Robert Watson, Sam Coe, Matt van Berlo, Josh Johnson

Advisor: Professor Shafai

A mobile robotic system was developed as a tool to improve the efficiency and overall accuracy of the tunnel inspection process. Important factors presented to our team by the Massachusetts Department of Transportation (MassDOT) were reducing cost, saving time, and mitigating the additional traffic congestion caused by these required annual inspections. Current methods for inspecting the Central Artery/Tunnel Project "Big Dig" fresh-air intake and CO2 exhaust tunnels (plenums) are time consuming, costly, and pose potential safety hazards to both inspectors and drivers. Our system will provide a means for inspectors to conduct these inspections from a central inspection office without requiring onsite manual labor. In addition to reducing costs, time, and potential hazards, our proposed method enhances inspection reports through its user-friendly software package with integrated automated features.

While working through the engineering design process, we ultimately broke the project into four categories: hardware, software, communications, and detection. During the design process we used 3D modeling and simulation software to develop a robust, reliable, cost-efficient, easy-to-use inspection platform. Software requirements included an intuitive interface with a real-time video stream and a means of controlling the mobile platform in a natural way, all over a reliable Wi-Fi communication scheme.


Lawnster: The Easy, Affordable, Autonomous Lawn Mower

Team: Denis Ansah, Taylor Marks, Matt Spofford, Mark Tiburu, Tien Vu

Advisor: Professor Salehi

It's the most ubiquitous status symbol within European and North American countries: the neatly manicured lawn, trimmed in precise rows to an exacting height. But who actually has the time to dedicate to their lawn like that?

Historically, the only options available to a homeowner were to either roll up their sleeves and mow their own lawn, or to pay someone else to do it, with the latter viable only for wealthier households. More recently, lawnmower manufacturers have begun selling prohibitively expensive ($1,200+) autonomous lawn mowers that require the owner to dig up their entire lawn to lay a boundary cable beneath it, no small task.

Our goal is to create an autonomous lawn mower that carries a price tag just a fraction of our competitors' and requires no more than a few minutes to set up. Our solution, the Lawnster, only asks that the user carry its brain (a box with the CPU, GPS, IMU, SD card, and compass) around the perimeter of the lawn the first time, then attach it to the main unit (which holds the motors and sensors). During this stroll, the brain records the coordinates of the lawn's boundaries, which it then uses to determine an optimal route along which to mow once it is reattached to the main unit.
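The boundary-to-route step can be sketched as generating serpentine passes across the recorded polygon. This is an illustration under simplifying assumptions (a local metric coordinate frame and invented spacing constants), not the Lawnster's actual planner.

```python
# Illustrative coverage planner: keep waypoints that fall inside the
# recorded boundary (ray-casting point-in-polygon test) and visit the
# rows in serpentine order.

def inside(pt, poly):
    x, y = pt
    hit = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

def mow_route(poly, cut_width=0.4, step=0.5):
    """poly: boundary vertices [(x, y), ...] in metres; returns waypoints."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    route, flip = [], False
    y = min(ys)
    while y <= max(ys):
        n = int((max(xs) - min(xs)) / step) + 1
        row = [(min(xs) + i * step, y) for i in range(n)]
        row = [p for p in row if inside(p, poly)]
        route += row[::-1] if flip else row   # alternate pass direction
        flip = not flip
        y += cut_width
    return route
```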


SMILES: Superior Multiple Integrated Laser Engagement System

Team: Calvin Maguranis, Andrew Balow, Kevin Pacheco, Kevin Castro, Tom Kennard

Advisor: Professor Meleis

An accurate simulation of military combat is crucial to upholding the common military notion: "we train like we fight." The United States military currently employs Cubic Corporation's Multiple Integrated Laser Engagement System (MILES) as part of its training program. MILES uses laser transmitter/receiver pairings to simulate real combat situations, and laser integration provides the most realistic force-on-force combat training due to the range and accuracy of laser transmissions. Training environments demand simulation that closely replicates real-world combat; MILES meets many of the requirements for realistic combat scenarios, but has shortcomings that limit its effectiveness.

Our team identified these shortcomings and designed a novel improvement to the current training tool. Superior MILES (SMILES) provides the functionality needed to produce unparalleled realism, bridging the gaps in training accuracy that arise with MILES. In general, SMILES retains the core concept of MILES as a training system: an encoded laser signal detected by a body-worn set of laser sensors. SMILES builds on these basic concepts while continuing to meet the minimum requirements that MILES already satisfies.

The improvements in SMILES create a more effective training experience by providing location-based feedback when a person is 'hit' by an incoming laser transmission. Location-based feedback allows a more realistic casualty assessment by the first responders who come across a 'wounded' soldier: the medic and combat-lifesaver soldiers can read a comprehensive list of symptoms paired with a location on the 'wounded' soldier's body. Our team designed the medical readout based on FM 4-25.11, the military field manual for first aid. Our algorithm takes into account that larger calibers create more bodily damage: when a soldier is hit, symptoms are generated randomly, with outcomes weighted by the caliber of the simulated round.
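The weighted draw can be sketched in a few lines; the symptom tables and weights below are invented placeholders, not the FM 4-25.11-derived tables the team used.

```python
import random

# Placeholder symptom table: heavier calibers shift probability toward
# severe outcomes. All entries and weights are illustrative only.
SYMPTOMS = ["minor bleeding", "severe bleeding", "fracture", "unconscious"]
WEIGHTS = {
    "5.56mm": [0.50, 0.30, 0.15, 0.05],
    "7.62mm": [0.30, 0.35, 0.20, 0.15],
    "12.7mm": [0.10, 0.30, 0.30, 0.30],
}

def casualty_readout(caliber, hit_location):
    """Draw one symptom, weighted by caliber, for the medic's readout."""
    symptom = random.choices(SYMPTOMS, weights=WEIGHTS[caliber], k=1)[0]
    return f"{hit_location}: {symptom}"   # shown on the Android status app
```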

In addition to localized feedback, SMILES also includes: a programmable laser transmitter that allows one type of transmitter for all weapons; an increased number of receivers, extending to the soldier's arms and legs; haptic feedback when hit; and integration of an Android phone that provides an intuitive login sequence and status readouts, including symptoms when a soldier is hit. These improvements will greatly enhance the military laser training experience.


Autonomous Search Mechanism for Aerial Reconnaissance and Tracking (A-SMART)

Team: Scott Goldberg, Nikolas Heleen, Austen Higgins-Cassidy, Daniel McNamara, and Antonio Rufo

Advisor: Professor DiMarzio

This study aims to develop an autonomous unmanned aerial tracking platform to increase the effectiveness of law enforcement agencies and search-and-rescue teams. Little existing technology is successfully used autonomously in search-and-rescue operations or AMBER Alert situations, particularly in rural areas or areas that lack surveillance cameras. The platform achieves its mission by providing an autonomous flight platform to aid in locating and tracking an object based on initial mission parameters. Our group is developing a prototype using a commercially available multi-rotor flight platform equipped with a vision system on its underside, consisting of visible and infrared cameras mounted on a stabilizing gimbal. The data from the cameras is wirelessly transmitted to a base station for live processing. Using Gaussian mixture models and feature extraction, targets are autonomously identified and tracked; the prototype works well at identifying and tracking multiple objects, including people and vehicles. Once the specified target is located, feedback from the computer vision and tracking algorithms is used to control the flight path of the copter. The system will offer a valuable asset to law enforcement and first responders by searching for, finding, and tracking a specified object, such as a vehicle.
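OpenCV's MOG2 background subtractor is itself a Gaussian-mixture-model method, so the base-station detection loop can be sketched as follows. This is illustrative: the camera source, history length, and area threshold are assumptions, not the team's tuned values.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # assumed video source index
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    targets = [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) > 500]
    # `targets` would feed the feature-extraction, tracking, and
    # flight-control stages described above.
```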


EEGµ (EEGmicro): An embedded, portable, low-power biopotential signal data acquisition system

Team: David Crozier, David Karol, Javi Muhrer, Matthew Wood, Alex Zorn

Advisor: Professor Salehi

EEGµ is a portable, low-power platform for collecting and processing biopotential data. As a demonstration application, we collect and process electroencephalography (EEG) data. EEG signals measure the brain's electrical activity via electrodes coupled to the scalp. In practice, we stimulate a subject based on the steady-state visually evoked potential (SSVEP) model: the user makes choices by focusing visually on one of several LED arrays that flash at different frequencies. From the recorded data we detect the user's frequency response and infer which array the user selected.

The biopotential sensing platform is purpose-built for general use. In its consolidated form factor, researchers and consumers can gather ECG, EMG, and EEG data. This information can be processed directly, in real time, on the BeagleBone Black that serves as the system core, or communicated to an external platform (such as a PC running MATLAB) for processing. The analog front end (AFE), a modular system that sits on top of the processing platform, has been designed to meet existing EEG and medical electronics standards: following IEC 60601, the platform includes the system isolation and patient protection circuitry necessary for safety compliance. The AFE allows biopotential measurements on up to 16 channels, with a simultaneous data collection rate of up to 16,000 samples per second. Additionally, the AFE reduces common-mode interference with driven-right-leg (DRL) circuitry and provides electrode impedance characteristics through lead-off detection.

The EEGµ distinguishes itself from similar products by delivering reliable, low-noise biopotential signals from a lightweight, low-power, portable form factor. Running off a battery pack, the EEGµ fits comfortably on a belt or similar configuration for easy integration into commercial or medical products.
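The SSVEP decision itself reduces to comparing EEG power at the candidate flicker frequencies, as in the minimal sketch below. The sampling rate and stimulus frequencies are assumptions, and a real system would add filtering and harmonic analysis.

```python
import numpy as np

FS = 1000.0                             # Hz, assumed sampling rate
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]    # Hz, one assumed flicker rate per LED array

def ssvep_choice(eeg_window):
    """Index of the LED array the user most likely watched.

    eeg_window: 1-D array from one channel, a few seconds long; the
    flicker the user attends to produces the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in STIM_FREQS]
    return int(np.argmax(powers))
```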


MAG: Memory Assistive Glasses

Team: Christopher Valek, Matthew Mahagan, Evan Scorpio, Michael Harrington, and Steve Morin

Advisor: Professor Meleis

Our memory assistance Glassware will allow memory-impaired individuals, such as early-stage Alzheimer's patients, to identify people they come in contact with, identify objects, display step-by-step instructions for simple tasks, and display reminders for events such as medications. Our project uses the new, intelligent glasses created by Google called Google Glass [7]. We use the camera on the Glass to take photos that run through our recognition software to determine what should be displayed on the Glass for the user.

Our design solves the problem of memory-impaired individuals not remembering the names of family members, friends, and others they come in contact with regularly. It also helps these individuals recognize objects, by labeling them with QR codes, and perform basic tasks, by displaying the necessary steps on the glasses. Medication reminders are essential for patients who take several medications. Together, these features give the user a higher quality of life through innovative, easy-to-use technology.

Our design involves a Glass app that acts as a facial/QR detection mode. The user takes a picture of either a face or a QR code and is then prompted to choose facial recognition or QR recognition; the image is processed accordingly. If a facial or QR match is found, the result is displayed on the Glass for the user to see and spoken for the user to hear. If no match is found, or there is a timeout, the user is informed and given the option to add the unknown face or QR code to the corresponding database. A companion Android app lets the caregiver add people to the facial recognition database and objects or tasks to the QR database. We provide some QR codes pre-linked to common everyday items and tasks, which lowers the number of items the caregiver needs to add while still allowing additions to the database. The more pictures of a face that are taken under different lighting, the higher the probability that the person will be correctly and quickly identified.
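For the QR half of that pipeline, OpenCV ships a built-in detector that the sketch below uses. This is illustrative: the database mapping stands in for the caregiver-maintained object/task table, and Glass-side image capture is omitted.

```python
import cv2

detector = cv2.QRCodeDetector()   # available in OpenCV 4.x

def recognize_qr(image, database):
    """Decode a QR code in `image` and look up its label.

    database: dict mapping QR payloads to display/speech text,
    e.g. {"med-01": "Morning pills: take two with water"} (placeholder)."""
    data, points, _ = detector.detectAndDecode(image)
    if not data:
        return None                         # no code found: offer to enroll one
    return database.get(data, "unknown object")
```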

One of only a handful of Glass projects utilizing facial recognition is MedRef [14]. It uses Google Glass as a medical reference for doctors and nurses that allows the user to add new patients, verbal notes about a patient, and perform facial recognition to retrieve the information about the current patient. MAG is different because we are targeting a different group of individuals, providing more features, and making it simple to use.


HD AutoRobots (Heat-Detecting Autonomous Robots)

Members: Abraham Miller, Aleen Alferaihi, Ang Shen, Nii Lankwei Bannerman, Siyuan Li

Advisor: Professor Shafai

In the wake of numerous man-made and natural disasters, and of time-sensitive situations involving human life and expensive property, we have developed a solution for responding to such situations efficiently and effectively. This autonomous robotic system can be used to reduce the damage that comes with such unpredictable real-life scenarios.

The system we designed makes use of a series of fixed heat and infrared sensors, as well as a high-resolution camera (Logitech HD Pro Webcam C920), that communicate data to a pair of land-based robots. For our project, we bought the Arduino-compatible DFRobotShop Rover V2 and used existing sample code provided by Arduino to configure the robots' basic movements (forward, backward, and rotation). The system consists of three nodes: a PC and two Arduino robots. The nodes communicate with each other using XBee modules, forming a mesh-like network, with the PC configured as the router and the two robots as coordinators. Using OpenCV, the PC computes the angle and distance to the "fire" and transmits them to the two robots. We use colored shapes and small light bulbs to simulate the fire; these are detected by the camera with pre-loaded image processing code. Finally, we created a GUI to observe the thermal map, the location of the fire, and the mobile robots, with temperature readings and parameters updated continuously.
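The PC-to-robot hand-off can be sketched as a small framed message over the XBee serial link; the message format, port name, and baud rate below are invented for illustration, not the team's actual protocol.

```python
import serial  # pyserial

def dispatch(port_name, robot_id, angle_deg, distance_m):
    """Send one target message to a robot over the XBee serial link.

    Hypothetical comma-separated frame; the robot's Arduino would parse
    it and drive toward the reported bearing and range."""
    msg = f"{robot_id},{angle_deg:.1f},{distance_m:.2f}\n".encode()
    with serial.Serial(port_name, 9600, timeout=1) as xbee:
        xbee.write(msg)

# Example: tell robot 1 the fire is at 42 degrees, 3.5 m away.
dispatch("/dev/ttyUSB0", robot_id=1, angle_deg=42.0, distance_m=3.5)
```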

We chose to focus on a hypothetical scenario in which the land-based robots respond to signals after a fire is detected (a large increase in temperature) in order to mitigate the potential damage. Using these algorithms and the key data they produce, the system is able to reduce the severity of a potentially life-threatening situation (a wildfire in our test case). The graphical user interface, built on the image processing software, highlights the key strengths of the algorithm both through a series of simulations and through the data received from the fixed sensors in real time.


Urban Facade Thermography

Mechanical Design Team: Kevin Barnaby, Emile Bourgeois, Dan Congdon, Carl Fiester, Cory Martin

Electrical / Computer Design Team: Xian Liao, John Sutherland

Advisors: Professors DiMarzio, Sivak, and Salvas

In the 1960s, many buildings were built with insufficient thermal insulation and as a result now have high utility costs. Small gaps in the insulation allow inside air to leak out, driving up the cost of heating in winter and cooling in summer. The best way to find these leaks is to examine the facade of the building with an IR camera; however, to find the small gaps, the camera can only be a few feet away from the building, and in crowded urban environments there is no efficient way to get the camera into a good position for an accurate image. To solve this problem, numerous approaches were analyzed, and a drone was determined to be the best solution. Because of the urban environment, the design must treat safety as a priority: numerous fail-safes were written into the autopilot, and if conditions become unsafe the drone can land autonomously. Proximity sensors were installed and integrated to provide full autonomy and prevent collisions with surrounding buildings. The drone was extensively tested to ensure that both safety and performance were acceptable before flights began in an urban environment. The prototype will deliver a single stitched thermal image of any building for prompt, high-quality thermal analysis by forensic architects. This solution will make time-consuming practices, such as rappelling down the side of a building clutching a handheld thermal camera, a thing of the past, and will improve the face of urban forensic architecture.
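The "single stitched thermal image" step can be illustrated with OpenCV's high-level stitcher, as in the sketch below. It assumes the captured frames are already saved to disk; the SCANS mode suits flat facades, though low-contrast thermal frames may need preprocessing before stitching succeeds.

```python
import cv2

def stitch_facade(image_paths):
    """Stitch ordered facade frames into one panorama (illustrative)."""
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar-scene mode
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```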


Plan-It

Team: Jacob Agger, Aaron Cooper, Khoa Duong, Chris Larson & Ben Storms

Advisor: Professor Salehi

The Plan-It is a digital calendar that combines the convenience of physical calendars with online scheduling software. It is a tablet-sized touchscreen device that allows clients to schedule appointments on the owner's Google Calendar. This digital calendar can replace a traditional paper calendar in any location, such as on a wall, a desk, or the door of an office.

With its intuitive GUI, not only is this device easy to use, it also allows a high level of customization. Anyone who walks up to the calendar and wants to schedule an appointment can tap the calendar to see a view of the current month. He or she can navigate to the desired time and see when the owner of the calendar is available. The user can simply touch any available time on the screen and create an appointment block. The new appointment is then uploaded to Google Calendar.
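Uploading the new appointment block uses the standard events-insert call of Google's official Python client, as in the hedged sketch below. OAuth credential setup is omitted and the field values are illustrative; this is not necessarily the Plan-It's exact code path.

```python
from googleapiclient.discovery import build

def create_appointment(creds, start_iso, end_iso, title="Appointment"):
    """Insert one event on the owner's primary Google Calendar.

    creds: OAuth2 credentials obtained elsewhere; start/end are RFC 3339
    timestamps. Time zone is an illustrative placeholder."""
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": title,
        "start": {"dateTime": start_iso, "timeZone": "America/New_York"},
        "end":   {"dateTime": end_iso,   "timeZone": "America/New_York"},
    }
    return service.events().insert(calendarId="primary", body=event).execute()
```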

The owner can customize the Plan-It for specific needs in a home, office, or use in the service industry. The calendar can be set up to allow anyone who interacts with it full freedom to view any existing appointment and make new ones. There are also many restrictions that can be added such as the ability to define what times people can make appointments, the types and lengths of appointments that can be scheduled, and what existing appointment information can be viewed.

The Plan-It provides an elegant alternative to paper calendars which require constant updating. It features capabilities that are present in online scheduling applications, while giving people all the things they look for in a traditional calendar. Existing applications on phones and tablets require sharing personal calendars with each other for access; this device allows anyone the ability to schedule an appointment without having to go through that process. The Plan-It has the potential to streamline the way people coordinate meetings and events.


BCI-Enabling System

Team: Becker Awqatty, Tim Liming, Zakariah Alrmaih, Rachel Lee, and Chris Campbell

Advisors: Professors DiMarzio and Shafai

In the past, various developer groups have designed systems with a brain-computer interface (BCI) that allow individuals with physical impairments to better interact with their environment. However, many of these projects are only small, separate pieces of the system necessary to enable such individuals. Using the Emotiv EEG headset and software as the BCI, we have designed a software GUI that provides people with physical impairments two core functions: communication and voluntary movement. For communication, we give users access to receiving and sending text messages or emails over Wi-Fi, a means of quickly sending messages for common requests, and an optimized typing system for composing general messages. We enable voluntary movement for users in a modified motorized wheelchair, giving the user controls for basic movements (forward, left/right, etc.), as well as collision prevention using sensors built into the wheelchair.
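The movement layer can be illustrated with a small veto rule: decoded BCI commands drive the motors, but the wheelchair's range sensors override forward motion near an obstacle. This is a hypothetical sketch; the command names, motor interface, and distance threshold are all assumptions.

```python
# Hypothetical mapping from decoded BCI commands to motor actions,
# with sensor-based collision prevention taking priority.

SAFE_DISTANCE_M = 0.5   # placeholder obstacle threshold

def drive(command, front_range_m, motors):
    """command: decoded token such as 'forward', 'left', 'right', 'stop'."""
    if command == "forward" and front_range_m < SAFE_DISTANCE_M:
        motors.stop()               # collision prevention overrides the user
    elif command == "forward":
        motors.forward()
    elif command in ("left", "right"):
        getattr(motors, command)()  # motors.left() or motors.right()
    else:
        motors.stop()
```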