
The effects of wearable computing and augmented reality on performing everyday tasks

Divyen Sanganee
School of Computer Science, The University of Birmingham

Preprint submitted to Research Topics in HCI, March 4, 2013

Abstract

This report looks at the advances in context-aware, ubiquitous and commercial wearable computing systems. Augmented reality is often the chosen means to deliver output from these systems, so advances in this field are also considered. It finds that the cutting-edge technology that is currently available offers promise through further research. However, it will be up to large technology companies to fully combine all the different areas required in order to bring a wearable device to market.

Keywords: Wearable computing, augmented reality, context aware, ubiquitous

Introduction

The realisation of Moore's Law over the past decade has meant that it is now possible to have incredibly powerful computers built into tiny devices (Schaller, 1997). This has allowed the field of wearable computing to take great strides forward in making this technology more appealing to the general public. To most, the idea of 'wearing a computer' is still a product of science fiction and Hollywood; however, it is entirely possible that this could soon change (Starner, 2002).

This paper aims to examine the current state of wearable computers combined with augmented reality. Augmented reality (AR) has really come to the forefront of modern Human-Computer Interaction research with the advent of applications (or 'apps') on mobile devices (de Sa et al., 2011). Apps allow anyone to develop programs that can harness AR both indoors and outdoors. Commercial use of AR can be found in products such as QR (Quick Response) codes (Yoon et al., 2011) and Microsoft's Kinect (Vera et al., 2011).

Although other areas will also be considered, the primary focus of this paper is to evaluate the impact of wearable computing and augmented reality on the ability to perform everyday tasks. This could range from enhancing tasks such as navigation to completely overhauling the way fashion is seen. State-of-the-art systems will be examined, with their merits and drawbacks being discussed. An indication of possible future routes the fields could take will also be proposed.

History of AR

There is a subtle difference between virtual reality (VR) and AR in that VR aims to replace an entire environment whereas AR only wishes to add objects in real-time (Van Krevelen & Poelman, 2010). The term 'Augmented Reality' was only coined in 1990, making it a relatively new field of study (Van Krevelen & Poelman, 2010). The first AR systems invented were targeted at walkers and hikers with visual impairment and used sound in order to guide the user (Van Krevelen & Poelman, 2010; Azuma et al., 1997). From this, it is clear that AR does not solely revolve around the use of vision. AR systems can attempt to enhance many other senses including hearing, touch and smell.

Following on from this, AR was integrated into a number of objects including head-mounted displays, projectors and mobile phones (Van Krevelen & Poelman, 2010). Although initially only confined to the military, industrial and medical markets, AR has since become a matter of interest in the commercial and entertainment sectors (Van Krevelen & Poelman, 2010).

History of Wearable Computing

Similar to the subtle difference between AR and VR, wearable computers are defined as devices that "run continuously and can be operated hands-free" (Huang et al., 2000). This distinction can be illustrated by comparing a mobile phone (not a wearable computer) with a Bluetooth headset (wearable).

Early wearable computers, such as the Private Eye developed by Reflection Technology, coincided with the release of the film The Terminator (Rhodes, 2001; Huang et al., 2000). Hollywood was readily available to provide ideas as to how wearable computers should look, and this led to the release of numerous machines that could be attached to the user's body and have a display embedded within glasses (Huang et al., 2000). Although other systems embedded into different areas such as shoes were also developed, the main focus still remains on using vision as a means of input and output (Huang et al., 2000).

Context Awareness

For a wearable computer to effectively augment reality, it needs to be aware of the environment that surrounds it, known as context awareness (Chen et al., 2000). There are essentially two types of context awareness: active and passive (Chen et al., 2000). An active system will adapt automatically to a change in context and is constantly receiving input and delivering output (Chen et al., 2000). For example, a necklace created by Prof. Kevin Warwick in 2002 changes colour dependent on the wearer's mood (CS4FN, 2002).

In contrast, passive context awareness only occurs as and when it is required and usually stores data for review later (Chen et al., 2000). An example is the personal awareness system developed by Accenture (Van Krevelen & Poelman, 2010). This small device attaches to the user's clothing and stores information about any new people the user meets. It accomplishes this by listening out for phrases such as 'My name is...', 'I am...' and so on, and then takes a small voice recording of the person as well as a picture of their face. The user can then review this information to refresh their memory.
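To make the distinction concrete, the sketch below separates the two styles in code. It is a minimal illustration with invented class and method names, assuming only the behaviours described above (a mood-sensitive necklace and a passive awareness recorder); it is not taken from any published implementation.

from datetime import datetime

class ActiveContextHandler:
    # Reacts immediately whenever the sensed context changes.
    def on_context_change(self, context):
        # e.g. a mood-sensitive necklace updating its colour straight away
        self.set_colour("blue" if context.get("mood") == "calm" else "red")

    def set_colour(self, colour):
        print(f"necklace colour -> {colour}")

class PassiveContextHandler:
    # Stores observations quietly so the wearer can review them later.
    def __init__(self):
        self.log = []

    def on_context_change(self, context):
        # e.g. a personal awareness system noting a new acquaintance
        speech = context.get("speech", "")
        if "my name is" in speech.lower():
            self.log.append((datetime.now(), speech))

    def review(self):
        return self.log  # retrieved on demand, never pushed to the wearer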

Conferencing System

Dey et al. developed a wearable computer system with both active and passive elements for use in academic conferences (Dey et al., 1999). The system is capable of displaying schedules, identifying the whereabouts of colleagues and identifying talks of potential interest. All these are based on information provided about the user to the system both before and during the conference. In addition to this, it is also able to point the user to the correct room for talks and demonstrations as well as identify the presenter.

All this information is presented to the user through a visual interface. Additionally, the slides and any notes the wearer takes are also stored on the system and can be retrieved after the conference to be reviewed. Finally, in certain scenarios (such as when the user has a question about a specific point) the user can also remotely control the slides of the presentation.
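As a rough illustration of the matching step such an assistant performs, the following sketch ranks upcoming talks against interest keywords supplied by the wearer. The data structures and scoring are invented for this example and do not reproduce Dey et al.'s implementation.

def rank_talks(interests, talks):
    # Order talks by how many of the wearer's interest keywords they mention.
    def score(talk):
        text = (talk["title"] + " " + talk["abstract"]).lower()
        return sum(1 for keyword in interests if keyword.lower() in text)
    return sorted(talks, key=score, reverse=True)

talks = [
    {"title": "Context-aware wearables", "abstract": "Sensing context...", "room": "2.01"},
    {"title": "Compiler optimisation", "abstract": "Loop unrolling...", "room": "3.14"},
]
print(rank_talks(["context", "wearable"], talks)[0]["room"])  # prints 2.01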

The authors identify a number of advantages through the use of this system. Firstly, time can be spent more efficiently in two ways: a) as slides do not need to be copied, the user's attention can be focused on more important issues and b) as other colleagues can be located, if there are two interesting talks occurring simultaneously, the user may decide to go to one and take notes from their colleague on the other. Secondly, the system is also useful for the presenter, who can use it to identify when individuals left their talk and what questions were asked in order to improve for next time.

Discussion

The contextual elements for this system come from the ability to automatically adapt the information presented to the user (for slides, presenter etc.) based on where they are. One problem that arises from using a method such as this is actually locating the wearer. As the authors point out, GPS is not quite accurate enough to position the wearer within rooms of a building. The study made use of a plan of a specific building already built into the computer. Clearly, doing this for all possible conference venues would be expensive and difficult.

One solution is simply to wait for positioning signals to become stronger and more accurate, which is likely to be only a matter of time. Alternatively, a way forward could be through the use of ubiquitous (or pervasive) computing, which will be discussed in more depth in a later section.

The scope for extending this type of wearable computer is considerable. There is no reason why this system could not be implemented in venues for other events such as media releases, product launches, conventions and expositions. Currently, many of the organisers of these events hand out brochures and information packs to the attendees. If it were cost-effective, they could instead hand out wearable computers. These would function in exactly the same way as in the conference study, and information could be saved in the cloud and later retrieved by the user at home. The ability to re-program the computers also means that they could be configured to suit many different types of event, offering an incentive for sustained returns.

As with any area of Computer Science, one of the main concerns with a system such as this is privacy. Some presenters may not want to provide their slides and others may not wish to be tracked. These issues can be easily resolved by simply allowing people to opt out; however, they need to be addressed before such products become commercial.

Call Interruption

At some point, everyone has experienced a phone call at an inconvenient time. This was the subject of a study conducted by Krause and colleagues examining the possibility of using wearable computers to configure mobile phones (Krause et al., 2006). The objective was to detect the correct time to request the user's attention by influencing aspects of their phone such as its ringer volume, vibration setting and call rejection.

To do this, they created a network comprised of several different wearable computers. The various components were an armband, an earpiece, a backpack and an antenna, all of which communicated with a smart phone.

The data gathered from all the different sensors on the body network was analysed using machine learning and statistical methods to determine the context. The results obtained indicated that the system was able to cluster activities such as working, no motion, walking and driving very quickly. This contextual awareness information was then used to adjust the phone to a suitable mode. For example, when working the phone would turn off all notifications; these preferences could be changed by the user and the system would 'learn' the best way to respond to a situation.

The results indicated that meaningful contexts could be extracted from a body network in order to configure a mobile phone.
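A much simplified sketch of this pipeline is given below: a feature vector from the body network is assigned to the nearest activity centroid, and the activity then selects a phone profile. The feature values, centroids and profile rules are invented for illustration and do not reproduce Krause et al.'s models.

def classify_activity(features, centroids):
    # Nearest-centroid assignment of a sensor feature vector to an activity label.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

PROFILES = {
    "working":   {"ringer": 0, "vibrate": False, "reject_calls": True},
    "walking":   {"ringer": 7, "vibrate": True,  "reject_calls": False},
    "driving":   {"ringer": 0, "vibrate": False, "reject_calls": True},
    "no_motion": {"ringer": 5, "vibrate": True,  "reject_calls": False},
}

centroids = {                      # would be learned from labelled sensor data
    "working":   (0.1, 0.2),
    "walking":   (0.8, 0.3),
    "driving":   (0.4, 0.9),
    "no_motion": (0.0, 0.0),
}

activity = classify_activity((0.75, 0.25), centroids)
print(activity, PROFILES[activity])   # walking: loud ringer, vibration on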

Discussion

A clear advantage of this system is that it helps the wearer concentrate on performing a specific task, anything from working to exercising. Additionally, the fact that it can be configured to a user's needs and then continue to adapt further suggests that, after enough 'training', it could be an optimal solution to interruption.

That being said, as individuals give different priorities to different types of notifications, it may be that over the course of 'training' the system, many important interruptions could be lost.

In addition, many people check their phones as a habit. In this case, perhaps a system such as this is not necessary, as phones are manually configured on a regular basis. This question lays the foundation for future work on the requirements for an interruption system.

One of the major disadvantages of designing a sensor body suit is that it would have to be worn all the time to function. Specifically in this example, having to wear items such as a backpack and antenna all day would prove to be an inconvenience during most tasks (exercising, driving etc.). A solution to this would be to wait until sensors are small enough to wear without becoming a problem. Mobile phones are already packed with sensors including accelerometers, gyroscopes and GPS. It is just a matter of creating devices that can harness these elements in such a way that they can be worn all the time, perhaps through clothing.

Order Picking

Order picking is the process whereby human workers collect sets of different items and then deliver the parts to the next work station to facilitate material flow (Schwerdtfeger & Klinker, 2008). In 2008, Schwerdtfeger and Klinker conducted an experiment to test whether this process could be improved using wearable computing and augmented vision (Schwerdtfeger & Klinker, 2008).

The study used a pair of glasses to direct the worker towards the correct box (out of 96 possible ones) using three different types of augmented reality interventions (a box, an arrow and a tunnel). The correct order of selection was pre-programmed into the glasses, and measurements such as selection times and error rates were taken.

Figure 1: The different types of augmented reality offered with the glasses. Left: box, Middle: arrow, Right: tunnel


The results found that no errors were made in picking a box and that most selections took around 6 seconds. Although these were not compared to current industry efficiency times, they provide an impressive benchmark for others to match. The authors found that the tunnel was the most effective guidance system and proposed a larger scale study.
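To make the experimental measurements concrete, the short sketch below logs selection time and correctness per guidance metaphor and summarises the results. It is an invented illustration of the kind of logging such a study requires, not the authors' software.

import time
from collections import defaultdict

results = defaultdict(list)    # guidance metaphor -> list of (seconds, correct)

def record_pick(metaphor, target_box, picked_box, started_at):
    # Log one picking action: how long it took and whether the right box was chosen.
    results[metaphor].append((time.monotonic() - started_at, picked_box == target_box))

def summarise():
    for metaphor, picks in results.items():
        times = [seconds for seconds, _ in picks]
        errors = sum(1 for _, correct in picks if not correct)
        print(f"{metaphor}: mean {sum(times) / len(times):.1f} s, {errors} errors")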

Discussion

This study focused solely on the manufacturing industry, with the results providing a promising benchmark for further studies. The lack of comparison to how quickly and accurately this work is normally carried out in industry is disappointing and should be something that is included in further work. The authors imply that future work in this area will be on a much larger scale, potentially using more workers and more boxes or a longer manufacturing line. Although this paper is mainly concerned with using wearable computing and augmented reality for performing everyday tasks, it is easy to see how this technology could be adapted.

One potential use could arise from using glasses outfitted with this technology to find misplaced items. A scenario that everyone finds themselves in is trying to locate keys, phones, wallets and so on. A system such as this one could be adapted to remember where items such as these were last seen and guide the user back to that location.

Another use could be for high street or supermarket shopping. The user could program a list of items they are interested in purchasing into the device; then, when they are out on the high street, the system would recognise these items as the wearer came close to them. This would not only speed up the purchasing process, but also help to ensure that the wearer does not accidentally forget an item they wanted to purchase.

In general, a wearable system supporting scenarios such as the ones mentioned above is likely to aid the user's memory and efficiency. In a time where completing tasks as quickly and accurately as possible is key, it could be hypothesised that a system such as this would have a sizeable impact on consumers' lives.

A downside to having a system such as this integrated into daily life is that there are people who enjoy activities such as shopping and searching for items at leisure. Although it is unlikely that they would be alienated from using this system, the benefit of efficiency may see a lot of shoppers adopt a new system.

Ubiquitous Computing

Ubiquitous computing (also known as pervasive computing) is an area of computer science that aims to embed technology in environments populated by humans in order to allow computing and communication (Satyanarayanan, 2001). Ideally, ubiquitous computing would make it possible for users to engage with technology without even realising that it is there. For example, electricity was once a novelty but is now commonplace and can be found everywhere; it is ubiquitous. It is likely that soon the Internet will be available everywhere through expanding mobile networks enabling technology such as 4G and increasingly available WiFi (Khan et al., 2009).

With regard to this paper, wearable computing and augmented reality can be thought of as building blocks aiding the construction of 'computing everywhere'. In this section, wearable computers that are particularly mobile will be discussed. These systems are more likely to try and augment a number of different situations and possibilities. This is in contrast to the previous section on context awareness, which focused on specific scenarios.


Gestures

In this section, a system called SixthSense that uses gesture recognition to augment the environment will be discussed (Mistry & Maes, 2009). SixthSense (developed at the MIT Media Lab) is a wearable device that attempts to project an augmented reality that can be interacted with using gestures. The idea behind this is that gestures are what most people use to interact with physical objects, so why not augmented ones as well?

Developed by Pranav Mistry and Pattie Maes, the physical composition of the machine includes a camera that is used to capture gestures, a projector that is used to display the augmentations on a surface, and a mobile device that all these are connected to. The wearer also needs to add small coloured strips of material to their fingers, as this is what the camera tracks for gesture recognition. Originally constructed as a projector attached to a bicycle helmet, the project now has a more appealing and practical form factor in the shape of a wearable neck pendant.
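The marker-based tracking can be sketched with standard computer-vision tools. The fragment below, assuming OpenCV and placeholder HSV colour ranges, finds the centre of each coloured finger strip in a camera frame; SixthSense's actual pipeline and gesture logic are not published in this form, so this is only an approximation of the general approach.

import cv2
import numpy as np

MARKER_RANGES = {   # hypothetical HSV bounds for the coloured finger strips
    "index_red":  ((0, 120, 120), (10, 255, 255)),
    "thumb_blue": ((100, 120, 120), (130, 255, 255)),
}

def locate_markers(frame_bgr):
    # Return the pixel centre of each coloured marker visible in the frame.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centres = {}
    for name, (low, high) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(low), np.array(high))
        moments = cv2.moments(mask)
        if moments["m00"] > 0:
            centres[name] = (int(moments["m10"] / moments["m00"]),
                             int(moments["m01"] / moments["m00"]))
    return centres   # e.g. the distance between two centres can drive a 'pinch'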

The wearable system attempts to fully embed itself within everyday life and, as a result, has a vast array of features. Firstly, it has the ability to project onto any flat surface, indoors or outdoors. Projection of maps is a heavily advertised feature which allows zooming and panning using the standard 'pinch' and 'slide' gestures found on most touch-screen devices. Next, it allows the wearer to dial a phone number using an interactive projection on their palm, akin to the technology found in science fiction films such as Minority Report (Spielberg, 2002). SixthSense also projects relevant object-related information straight onto the item being focused on. For example, it can overlay a newspaper article with a video of the same article, or layer current weather information over an otherwise static weather report.

Figure 2: Some of the capabilities of SixthSense. X - The pendant form factor. A - Using maps. B - Painting on a flat surface. C - Taking a picture using a frame gesture. D - Augmenting onto a newspaper. E - An augmented watch.

Many of these activities are illustrated in Figure 2.

The inventors plan on releasing the design behind the project as open-source so anyone can build their own version. Mistry claims that the system he demonstrates to others cost him as little as $300 to build (Mistry, 2009).

Discussion

It is clear that SixthSense is attempting to be a system that can perform many tasks in different places, making it ubiquitous. Currently, many mobile devices are gesture-based. Motions such as swiping and pinching have immediate connotations relating to what actions should be performed: moving the screen, zooming and so on. This system integrates well with the norms society is used to and would likely be intuitive to use.

The augmentation mechanism used in this case is a built-in projector, which differs from the systems discussed earlier that use glasses. A merit of using a projector is that the wearer is able to interact with a much larger display. Of late, interfaces like these have become more commonplace, e.g. Microsoft's Kinect, again adding to the intuitiveness of the device. However, using a projector does mean that the surface being projected onto must be flat. This may prove troublesome when the wearer has to interact with the system outdoors and a flat surface is not always available. A possible solution to this problem could allow the system to recognise when a surface was not flat and then adapt the projected image to compensate for this.

A major strength of SixthSense is the large number of tasks it can perform. Certain parts of the system even display the ability to be context aware, e.g. when it projects the relevant news story on top of a newspaper. This helps transform potentially old data into information that is up-to-the-minute. However, the inventors do not provide any benchmarks as to how the system performs in comparison to other mobile devices. This is a critical component that needs to be addressed before the system can be delivered to the public. With this being said, the project's source code is being made available, so it is quite possible that some organisations may seek to make SixthSense into a consumer product.

The inventors have demonstrated that this project can be scaled to fit into what they term a 'pendant' and worn around the neck. However, it is still quite a sizeable device and may not be suitable for tasks such as exercise. This system is likely to be more acceptable when the size of the hardware decreases, allowing it to be more discreet. Furthermore, as the device requires the user to wear special items on their fingers, will this alienate potential users? What happens if these items are lost? What about those who have limited use of their fingers? These are all questions that can be proposed for future work.

All in all, this system seems promising. Further work needs to be conducted that assesses the needs of the potential end users: the general public. It may not be the case that people are convinced that they need to wear an object with a projector built into it to get up-to-date information. This also raises additional questions relating to the use of objects such as newspapers and conventional maps. If any surface can be used to get this information in its most current state, will these items still be required?

An Assistant

The next wearable computer that will be examined is the Remembrance Agent (RA) (Starner et al., 1997). This device acts as a personal assistant that knows the user's tastes and preferences and can make suggestions as to what may help the user based on this information.

The idea for the text-based system came from observing that 95% of the time spent between a human and a computer is on word processing. The RA uses a camera to observe what the wearer types and stores this information in a database to learn how to react to future queries. As this is a relatively simple task, the other system resources are concerned with providing helpful suggestions to the user based on what they are currently doing. The output of the system is shown to the user through a display mounted onto glasses. The entire interface of the RA is text-based, but the text is presented in a non-obtrusive manner. The wearer can interact with the RA by using their finger as a pointing device.
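The retrieval idea can be sketched very simply: suggest previously stored notes whose wording overlaps with what the wearer is typing now. The word-overlap scoring below is a stand-in chosen for illustration; the actual agent's indexing is considerably richer.

def suggest(current_text, notes, top_n=3):
    # Rank stored notes by how many words they share with the current text.
    current_words = set(current_text.lower().split())
    def overlap(note):
        return len(current_words & set(note.lower().split()))
    ranked = sorted(notes, key=overlap, reverse=True)
    return [note for note in ranked[:top_n] if overlap(note) > 0]

notes = [
    "meeting with supervisor about autonomous agent architectures",
    "chemistry lab: order new chemical agents",
    "dentist appointment next tuesday",
]
print(suggest("drafting a paper on autonomous agents", notes))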

The agent aims to be intelligent and is specific to the wearer. Therefore, it is capable of performing tasks such as replying to e-mails. For example, if a student has a query about the research area of a lecturer, they can simply e-mail the lecturer's RA and it will automatically reply with a list of current research interests and papers. This means the system does not need to interrupt the user, who may be performing tasks of a higher priority. Another example is the scheduling of appointments. As the agent monitors the wearer's activities, if it realises that multiple appointments have been scheduled for the same time, it can advise the user of this. Needless to say, electronic systems already perform tasks such as this. However, the advantage of using a wearable device is that updates can be given in real time, e.g. if a verbal agreement was made between the wearer and another person about a meeting which did cause a conflict but the wearer was unaware of it.

Due to the RA becoming an 'expert' on the person it is monitoring, it is able to contextualise pieces of data and react accordingly. Consider homonyms (words that share the same spelling but have multiple meanings) such as 'agent'. The RA would be able to tell the difference between a wearer who was a researcher in AI (autonomous agents) and a chemist (chemical agents).

Discussion

The biggest merit of this system is its ability to learn the user's preferences to the point where it can automatically respond to situations without the need to interrupt the user. The RA also demonstrates the ability to be context aware. By constantly observing the wearer, the system needs to be able to differentiate between useful and useless data. Additionally, when presenting information to the wearer, the RA contextualises the environment to only display relevant information.

Currently, the RA only accepts input from the recording device relating to what the wearer sees. Including information from other wearable sensors could provide data relating to what activity the user is currently doing (working, travelling, exercising etc.). This would further help contextualise the data and the system could make more educated decisions.

The research does not provide any empirical data regarding the effectiveness of the system. A study could consider analysing the responses of several participants who use the system over the course of a few weeks. Questions based around finding out whether the system actually provides satisfactory replies and data as an agent could be proposed. This would hint as to whether or not a system such as the Remembrance Agent could be significantly embraced by society.

Social Networks

Two physical, implemented, ubiquitous, wearable computer systems have been discussed in depth. The next system that will be considered has not been implemented but provides concepts and a framework that could lead to another type of wearable computer: one built around sociability.

Kortuem & Segall indicate that wearable computers can be networked together to create a localised, digital community (Kortuem & Segall, 2003). This is based on the principle that wearable computers can augment the physical interactions that humans have on a daily basis. That is, the system does not replace any interaction between people but encourages those that share common goals to interact more. For example, instead of physically exchanging business cards, two wearable systems that are close enough could send electronic profiles between them. Therefore, the wearable computers create a 'digital social sphere' around the user and only perform certain actions when other systems are within its range.
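The sphere idea can be illustrated with a small sketch in which two wearables exchange electronic business cards only when they come within range of one another. The distance check and profile format are invented for this example; Kortuem & Segall do not prescribe this particular mechanism.

import math

class WearableNode:
    def __init__(self, profile, position, radius_m=5.0):
        self.profile = profile      # the owner's electronic business card
        self.position = position    # (x, y) coordinates for this sketch
        self.radius_m = radius_m    # the size of the digital social sphere
        self.received = []

    def in_sphere(self, other):
        return math.dist(self.position, other.position) <= self.radius_m

    def exchange(self, other):
        # Only interact when the other wearer enters this node's sphere.
        if self.in_sphere(other):
            self.received.append(other.profile)
            other.received.append(self.profile)

alice = WearableNode({"name": "Alice", "interests": ["HCI"]}, (0, 0))
bob = WearableNode({"name": "Bob", "interests": ["AR"]}, (3, 4))   # 5 m away
alice.exchange(bob)
print(alice.received)   # Bob's card, since he is just inside the sphere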


The authors implemented three systems of their own that build on this idea of wearable communities. The first, Genie, is a question and answer device that poses the wearer's questions to other wearers in close vicinity who may be willing to answer. When a match is made between a question and answer, the system encourages the individuals to meet physically to discuss their ideas further and aids this meeting by providing information such as user names and photos.

The second system, known as Pirate, exchanges data concerning music, such as playlists, with wearers that come into contact with each other regularly. The computer will then notify the wearer that it has new music recommendations based on the listening habits of people within the wearable community. This differs from Genie in that the wearer is unaware of the interaction taking place.

Lastly, there is the Wearable Augmented Task-List Interchange Device, or WALID, which creates a localised community whose members cooperate using the wearable devices to complete day-to-day tasks. The system aims to exchange tasks with other wearable devices for mutual gain, e.g. if a neighbour was making a trip to the post office and the wearer needed something posted, the system would accept this task in exchange for the wearer having to babysit for the neighbour next weekend.

Discussion

The concept of being able to network wearable computers together poses interesting questions. Firstly, if these wearable communities are established, what sorts of security protocols will govern them? As has been seen with the Internet, there are always those who wish to use systems to bend the rules. This has led to rising levels of CD, DVD and software piracy (Peitz & Waelbroeck, 2004; Smith & Telang, 2009; Hinduja, 2001). A possible solution could be to have every wearable system establish a secure connection with the provider of such a system through the Internet before making a transaction with another device. However, this would all have to happen very quickly, especially if both users are on the move, but with the introduction of high-speed networks carrying 4G signals and data speeds, it could be a realistic solution.

Secondly, as the authors demonstrated, a wearable community can have multiple purposes and tasks that can be accomplished. This may give rise to a wearable application store similar to those currently on offer for mobile devices. Developers would be able to develop applications for a generic wearable device to complete specific objectives. Following this track of thought may lead future research into wearable communities to develop an open, developer-friendly framework. An undertaking of this nature is likely to require substantial amounts of funding, which would demand the backing of a large, commercial organisation. This leads into the next section of this report.

Figure 3: The interaction between two social spheres


Commercial Systems

By now, the reader is sure to be wondering why none of these systems have been presented to the general public. While they all manage to contribute to the advancement of wearable computing and augmented reality, they also have their flaws. In this section, systems that have been built with the wearer as the primary concern will be examined. These systems look to integrate the computational devices into objects that will not be obtrusive to the user in daily life.

Smart Watch

When it comes to using mobile devices such as phones, laptops and tablets as a method of controlling home appliances, a common set of restrictions is usually found. For example, it is not possible to use these items in the shower and, although mobile, people are unlikely to carry them around constantly in their homes. This provides the grounding for the invention of the dWatch, an easily extendible, programmable watch designed for use with a smart home (Bonino et al., 2012). A smart home relates back to the earlier section on ubiquitous computing: it is an environment that has sensors, input and output devices that allow it to communicate with its inhabitants. The dWatch, when combined with a compatible environment, can perform home automation tasks such as regulating temperature and sounding alarms. The inventors proposed three key scenarios that demonstrate the usefulness of the system.

Firstly, there is the scenario where there is only one user with one watch in a smart home. As the watch is easily programmable, the wearer can control whatever he or she wants. For example, they could use it to open or close shutters or blinds. The watch can also be programmed to recognise gestures, e.g. using a left to right motion to open blinds and a down to up motion to close them. Additionally, the watch can be used as a notification centre. These notifications are based on patterns recognised in the home as well as the location of the user. For example, the smart home may notice that the user is running low on milk. It will then notify the wearer's dWatch when they go outside with a message saying 'MILK'.
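A hypothetical sketch of how such a watch might dispatch gestures and location-triggered reminders to a smart home is given below. The gesture names, command strings and home interface are invented for illustration and are not the dWatch's actual API.

GESTURE_COMMANDS = {
    "swipe_left_to_right": ("blinds", "open"),
    "swipe_down_to_up":    ("blinds", "close"),
}

class SmartHome:
    def send(self, device, action):
        print(f"home: {device} -> {action}")

def on_gesture(home, gesture):
    # Translate a recognised wrist gesture into a home-automation command.
    command = GESTURE_COMMANDS.get(gesture)
    if command:
        home.send(*command)

def on_leaving_home(shopping_needs):
    # e.g. the home has noticed that the milk is running low
    if shopping_needs:
        print("dWatch notification:", ", ".join(shopping_needs).upper())

home = SmartHome()
on_gesture(home, "swipe_left_to_right")   # home: blinds -> open
on_leaving_home(["milk"])                 # dWatch notification: MILK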

Next is the scenario in which there are many dWatches in a smart home. In this case, watches can be given varying levels of permission. Consider a family in which the parents and children have more or less permission to control appliances respectively. For example, the children could control items in their rooms but not in their parents' study. Additionally, the watches can also communicate with one another. If a child needed to contact a parent, they could send them an alarm on their dWatch. If the message fails to be sent, then the dWatch can automatically redirect it to the receiver's mobile device.

The final scenario takes a more focused view on building management. The manager or landlord of a building could use a dWatch to communicate with appliances in the building that require attention. Instead of waiting for a tenant to inform the landlord when something breaks, he or she would be able to immediately identify, locate and fix the issue. This requires all the appliances to be aware of their internal state and to maintain a connection to the landlord's dWatch even when broken.

Discussion

Combining technology such as the dWatch with smart environments could lead to revolutionary advancements in how people behave in their homes. Using gestures to complete simple actions such as opening and closing blinds could be easily extended to other devices as well. Therefore, instead of controlling just one appliance, it could control many. For example, when waking up, the wearer could make a gesture that would open the blinds, turn on the heating, tune the TV to the news and boil the kettle. Similarly, it could be used to make a home more energy efficient. A device like the dWatch could check when a home was empty, switch off any items unnecessarily consuming power and reduce the temperature to a suitable level.

The inventors already provided the example of children using the dWatch to alert their parents of a problem. The same could be said for the elderly. Being able to communicate with their home would provide an invaluable tool for contacting the outside world if they are in need of assistance. A similar methodology could also be implemented in hospitals so that patients and nurses could be in contact at all times.

Fashion

Electronic textiles incorporate microchips into fabrics, allowing them to sense, communicate, transmit power and network together (Berzowska, 2005). Conductive yarns and electronic ink are just a couple of examples of such fabrics.

Joanna Berzowska, in collaboration with XS Labs, has helped to develop some of the most cutting-edge electronic fabrics. The first is the Blazer Sleeve. This sleeve combines knowledge of how the retina works with movement of the body to display lights and text. An extension called SoundSleeve was also created that played music instead of displaying text.

Next is a special type of ink that changes colour depending on temperature. This was not incorporated into any pieces of wearable computing but could lead to future work on clothing that changes dynamically depending on the weather.

Finally, there is a system known as Shimmering Flower. This piece of intelligent fabric uses a sixty-four-pixel grid whose pixels can be individually addressed to change colour. Although this technology currently uses too much power to be wearable, with energy-efficient power mechanisms emerging, people could soon be wearing highly personalised clothing.

Figure 4: Shimmering Flower showing how the fabric changes colour
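As a toy illustration of what 'individually addressable' means for such a display, the sketch below models the sixty-four pixels as an 8 x 8 frame buffer and sets one pixel at a time; the real textile's drive electronics are, of course, not modelled here.

GRID = 8                                        # 8 x 8 = 64 fabric pixels

framebuffer = [["off"] * GRID for _ in range(GRID)]

def set_pixel(x, y, colour):
    # Address a single fabric pixel by its column and row.
    framebuffer[y][x] = colour

for i in range(GRID):                           # draw a simple diagonal pattern
    set_pixel(i, i, "gold")

for row in framebuffer:
    print(" ".join("#" if pixel != "off" else "." for pixel in row))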

The author concludes by stating that in order to deliver electronic textiles to the masses, the basic reasoning behind fashion must be considered. That is, people buy clothes for individuality and to stand out from others. Therefore, although these technologies themselves are not revolutionary, when they are developed further and given added functionality, the fashion industry may be about to undergo an overhaul.


Discussion

These fabrics currently do not provide any useful features beyond fun and creativity. However, it is easy to see that if they could be networked to access the Internet, they could access more personal information about the wearer. The displays on the clothing could then convey information about the person such as their favourite colour, hobbies and interests.

In terms of personalisation, no other computational device would be more personal than clothing. Intelligent items of clothing could gather information ubiquitously as the wearer performs different activities. For example, items frequently worn for exercise could provide information and suggestions on how to improve health routines.

Google Glass

Glass is the latest project delivered by the multinational corporation Google (www.google.com/glass). As very little is currently known about Project Glass, there are no scholarly articles available. However, it has been included in this report as it is the most cutting-edge piece of wearable computing and will be released to consumers near the end of 2013 (IGN, 2013).

As numerous media articles have stated, Google Glass aims to solve any need the wearer has by visually augmenting the world (IGN, 2013; PCMAG, 2013; TheVerge, 2013). Examples include asking the device for the nearest coffee shop and having it guide the user there, or taking a picture of whatever is in the current field of vision. Glass understands the wearer because it is directly addressed using the keyword 'Glass' followed by the command or question, similar to the Google Now and Siri technologies.

Figure 5: Two of the final Glass devices released to testers

Discussion

It is difficult to provide a balanced argument when a device like Glass has not even been released yet. However, conglomerates rarely commit large amounts of time and money to such ambitious projects without conducting thorough research. With Glass, Google has tried to combine context-aware, ubiquitous and commercial wearable computing. The marketing and advertising behind Glass has caught the attention of many, but whether consumers are ready to adopt such a bold product that seeks to break the bounds of mobile phones remains to be seen.

Relating back to the discussion on fashion, there are some wearers who will undoubtedly question the aesthetics of Glass. Many individuals do not like wearing glasses or are very self-conscious. It remains to be seen how Google will target these individuals besides offering several different colours. Additionally, there are those who wear prescription lenses to see, and it is unclear whether or not conventional lenses can be incorporated into Google Glass.


Other Types

This paper has now covered some of the key wearable computer systems that are used to aid wearers in performing everyday tasks. It will now continue to discuss some of the other areas where wearable computing is making an impact. The areas of medicine and the military are currently embracing wearable computers to a much larger extent than commercial businesses, and therefore inspiration can be taken from these fields.

Medical

Much of the use of wearable computers in medicine is for patient rehabilitation after receiving treatment. The work by Paolo Bonato focuses on using sensors for clinical assessment of patients during rehabilitation (Bonato, 2005). Bonato states that in order to accomplish this, wearable systems need to be small and unobtrusive. In addition to this, they also need to be able to collect data in an efficient manner. For example, a system would need to be active enough to send observation data frequently but also be energy efficient. Advances in making such systems unobtrusive have focused on creating devices such as foam sensors to measure pressure on the neck and fingers.

A second implementation in medicine makes use of augmented reality to aid surgeons (Lamata et al., 2010). Wearable systems can be used to help surgeons see what they are operating on without the need to actually perform surgery. In essence, the surgeon wears a camera to tell the system where they are focusing and the system displays a render of the organ in question. The output from the system varies from being displayed on a screen to being displayed directly on the patient.

Military

With regard to the military, Robert Thomson and John Lynn demonstrate the effectiveness of a system that makes use of a head-mounted display for military equipment maintenance (Thomson & Lynn, 2010). The wearable device, known as REMAIN (REMote Assistance and Investigation), was developed by Thales Optronics and allows technical information to be sent to the wearer through a secure internet link and shown on a head-mounted display. Compared to the conventional paper-based methods that involve searching through a manual, this system offers much quicker maintenance of military equipment, especially on the front line. The system also allows other data (in addition to text), such as audio or video from another technical expert, to be transferred over the link.

The authors found, through testing, that the system was generally favoured. However, there were issues raised, as many technicians found that receiving both aural and visual information became confusing.

A related system known as the Battlefield Augmented Reality System was developed by the Naval Research Lab (Azuma et al., 2001). This system provides a soldier with a head-mounted display that renders the environment and augments information about hazards and objectives into the wearer's field of vision.

Conclusions

This report has examined and discussed several different systems from context-aware, ubiquitous and commercial wearable computing.

It has become apparent that in order to create a wearable system that harnesses augmented reality effectively enough to make it appealing to the general public, a balance between context awareness and ubiquitous computing needs to be achieved. Current context-aware systems only work in the very specific environments for which they have been developed. Conversely, ubiquitous systems do not employ enough context awareness for them to be that much more intelligent than current mobile phones. This is where commercial systems attempt to fill the void. Although advances are being made in fields such as fashion and intelligent homes, it will take large technology companies such as Google to see wearable computing through to market. This is largely due to the financial backing they have, but also because of the experience they possess in bringing items for sale to the masses.

In closing, although wearable computing is likely to be embraced fully by fields such as medicine and the military first, it is only a matter of time before smart phones are no longer intelligent enough for society's day-to-day needs. Wearable computing and augmented reality will be able to deliver information in a 'here and now' fashion, but when this will happen is likely to be up to large, multinational technology companies.

References

Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., & MacIntyre, B. (2001). Recent advances in augmented reality. Computer Graphics and Applications, IEEE, 21, 34–47.

Azuma, R. et al. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6, 355–385.

Berzowska, J. (2005). Electronic textiles: Wearable computers, reactive fashion, and soft computation. Textile: The Journal of Cloth and Culture, 3, 58–75.

Bonato, P. (2005). Advances in wearable technology and applications in physical medicine and rehabilitation. Journal of NeuroEngineering and Rehabilitation, 2, 2.

Bonino, D., Corno, F., & Russis, L. D. (2012). dWatch: A personal wrist watch for smart environments. Procedia Computer Science, 10, 300–307. URL: http://www.sciencedirect.com/science/article/pii/S1877050912003973. doi:10.1016/j.procs.2012.06.040. ANT 2012 and MobiWIS 2012.

Chen, G., Kotz, D. et al. (2000). A survey of context-aware mobile computing research. Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College.

CS4FN (2002). Cyborg super series. URL: http://www.cs4fn.org/alife/cyborg.html.

Dey, A., Salber, D., Abowd, G., & Futakawa, M. (1999). The conference assistant: Combining context-awareness with wearable computing. In Wearable Computers, 1999. Digest of Papers. The Third International Symposium on (pp. 21–28). IEEE.

Hinduja, S. (2001). Correlates of internet software piracy. Journal of Contemporary Criminal Justice, 17, 369–382.

Huang, P. et al. (2000). Promoting wearable computing: A survey and future agenda. Technical Report, National Taiwan University.

IGN (2013). Everything you need to know about Google Glass. URL: http://uk.ign.com/articles/2013/02/27/everything-you-need-to-know-about-google-glass.

Khan, A., Qadeer, M., Ansari, J., & Waheed, S. (2009). 4G as a next generation wireless network. In Future Computer and Communication, 2009. ICFCC 2009. International Conference on (pp. 334–338). doi:10.1109/ICFCC.2009.108.

Kortuem, G., & Segall, Z. (2003). Wearable communities: augmenting social networks with wearable computers. Pervasive Computing, IEEE, 2, 71–78.


Krause, A., Smailagic, A., & Siewiorek, D. P. (2006). Context-aware mobile computing: Learning context-dependent personal preferences from a wearable sensor array. Mobile Computing, IEEE Transactions on, 5, 113–127.

Lamata, P., Ali, W., Cano, A., Cornella, J., Declerck, J., Elle, O. J., Freudenthal, A., Furtado, H., Kalkofen, D., Naerum, E. et al. (2010). Augmented reality for minimally invasive surgery: Overview and some recent advances. Augmented Reality, ISBN, (pp. 978–953).

Mistry, P. (2009). Pranav Mistry: The thrilling potential of SixthSense technology. URL: http://www.ted.com/talks/pranav_mistry_the_thrilling_potential_of_sixthsense_technology.html.

Mistry, P., & Maes, P. (2009). SixthSense: a wearable gestural interface. In ACM SIGGRAPH ASIA 2009 Sketches, SIGGRAPH ASIA '09 (pp. 11:1–11:1). New York, NY, USA: ACM. URL: http://doi.acm.org/10.1145/1667146.1667160. doi:10.1145/1667146.1667160.

PCMAG (2013). 13 cool things you can do with Google Glass. URL: http://www.pcmag.com/slideshow/story/308711/13-cool-things-you-can-do-with-google-glass.

Peitz, M., & Waelbroeck, P. (2004). The effect of internet piracy on CD sales: Cross-section evidence. CESifo Working Paper Series.

Rhodes, B. (2001). A brief history of wearable computing. URL: www.media.mit.edu/wearables/lizzy/timeline.

de Sa, M., Churchill, E., & Isbister, K. (2011). Mobile augmented reality: design issues and opportunities. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (pp. 749–752). ACM.

Satyanarayanan, M. (2001). Pervasive computing: Vision and challenges. Personal Communications, IEEE, 8, 10–17.

Schaller, R. (1997). Moore's law: past, present and future. Spectrum, IEEE, 34, 52–59.

Schwerdtfeger, B., & Klinker, G. (2008). Supporting order picking with augmented reality. In Mixed and Augmented Reality, 2008. ISMAR 2008. 7th IEEE/ACM International Symposium on (pp. 91–94). IEEE.

Smith, M. D., & Telang, R. (2009). Competing with free: the impact of movie broadcasts on DVD sales and internet piracy. MIS Quarterly, 33, 321–338.

Spielberg, S. (2002). Minority Report.

Starner, T. (2002). Wearable computers: No longer science fiction. Pervasive Computing, IEEE, 1, 86–88.

Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., Picard, R. W., & Pentland, A. (1997). Augmented reality through wearable computing. Presence: Teleoperators and Virtual Environments, 6, 386–398.

TheVerge (2013). I used Google Glass: the future, but with monthly updates. URL: http://www.theverge.com/2013/2/22/4013406/i-used-google-glass-its-the-future-with-monthly-updates.

Thomson, R., & Lynn, J. (2010). The benefits of head mounted displays and wearable computers in a military maintenance environment. In Education and Management (ICEMET), 2013 International Conference on (pp. 560–564). IEEE.

Van Krevelen, D., & Poelman, R. (2010). A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9, 1.

Vera, L., Gimeno, J., Coma, I., & Fernandez, M. (2011). Augmented mirror: interactive augmented reality system based on Kinect. Human-Computer Interaction–INTERACT 2011, (pp. 483–486).

Yoon, H., Park, N., Lee, W., Jang, Y., & Woo, W. (2011). QR code data representation for mobile augmented reality. In The International AR Standards Meeting (pp. 17–19).
