Future Human Computer Interaction


with special focus on input and output techniques

Thomas Hahn
University of Reykjavik
thahn1985@gmail.com

March 26, 2010

Abstract

Human Computer Interaction in the field of input and output techniques has produced many new techniques over the last few years. With the recently released full multi-touch tablets and notebooks, the way people interact with the computer is entering a new dimension. Because humans are used to handling things with their hands, the technology of multi-touch displays and touchpads has brought much more convenience for use in daily life. But human speech recognition will certainly also play an important part in the future of human computer interaction. This paper introduces techniques and devices that use human hand gestures with multi-touch tablets and video recognition, as well as techniques for voice interaction. Gesture and speech recognition take an important role here, as these are the main communication methods between humans, and they could displace the keyboard and mouse as we know them today.



1 Introduction

As mentioned before, much work on human computer interaction in the field of input and output techniques has been done in recent years. Now, since the release of multi-touch tablets and notebooks, some of these multi-touch techniques are coming into practical use. It will therefore surely not take long until sophisticated techniques for human gesture or voice detection enhance them further. These two new methods will certainly play an important role in how HCI will change and in how people can interact more easily with their computers in daily life. Hewett et al. defined that "Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them." [1] Since the invention of the graphical human computer interface at Xerox PARC in the 1970s, we have been used to a mouse and a keyboard for interacting with the computer, and to a screen as a simple output device. With upcoming new technologies these devices are more and more converging, or sophisticated methods are replacing them. This paper therefore mainly deals with these new developments, how they may be implemented in the future, and how they could influence and change daily computer interaction. For example, with the techniques used for multi-touch devices, described in Section 2.1, the screen has recently become an input and output tool in a single device, so there is no longer any need for extra input devices. Even this alone is a completely new situation, as we are used to having more than just one device. Section 2.2 covers another promising method of human gesture interaction, the detection of gestures via video devices. Section 2.3 then points out a further future technique, human speech detection as an input method. Section 2.4 deals with a method that combines video and speech detection. After these sections on the different types of recent human computer interaction work, Section 3 discusses the opportunities of these new techniques, how they can be used in the future, and especially how daily life could be changed by them. It also points out the fields in which these new developments can be adopted.

These multi-touch methods can also be used on bigger screens, as in the approach of Miller [2] or Microsoft's Surface [3], to mention only these two. The trend is thus going more and more not only toward merging input and output devices but toward using every surface as an input and output facility. In this context video becomes of greater interest, as it uses the full range of human motion gestures and is usable on any surface. Finally, input via speech also plays a part in HCI, as it is the easiest way for humans to communicate; it is, however, something completely different from the other two types of input and output methods, as it is more an algorithm than a device. The combination of different input methods, called multi-modal interaction, is described in the last section.
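As a rough illustration of such multi-modal interaction, a classic "put-that-there"-style combination of pointing and speech can be sketched as follows. The event names, the spoken command, and the two-second time window are invented for this example and are not taken from any particular system.

```python
class MultiModalFusion:
    """Minimal sketch: resolve a spoken command against the most
    recent pointing gesture (hypothetical event interface)."""

    def __init__(self):
        self.last_pointed_object = None
        self.last_point_time = 0.0

    def on_point(self, obj, timestamp):
        # Video/gesture channel: remember what was pointed at, and when.
        self.last_pointed_object = obj
        self.last_point_time = timestamp

    def on_speech(self, command, timestamp, max_gap=2.0):
        # Speech channel: "delete that" only makes sense if a pointing
        # gesture happened within the last `max_gap` seconds.
        if command == "delete that":
            if self.last_pointed_object and timestamp - self.last_point_time <= max_gap:
                return f"deleting {self.last_pointed_object}"
            return "nothing selected"
        return f"unhandled command: {command}"

fusion = MultiModalFusion()
fusion.on_point("photo42", timestamp=10.0)
print(fusion.on_speech("delete that", timestamp=11.0))  # prints "deleting photo42"
```

The point of the sketch is that neither channel alone carries the full meaning: the gesture supplies the referent, the speech supplies the action, and fusion is essentially a time-windowed join of the two event streams.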


2 Recent Developments

Human gestures and human speech are the most intuitive means that humans use to communicate with each other. Yet since the invention of the mouse and the keyboard, no further devices have become established that could replace these two objects as computer input methods. Because gestures and speech are the more human-like methods, a lot of research has been done on how they can be used for communication between computers and human beings. As there are several different ways in which gestures can be used as input, this section is divided into multi-touch, video, speech and multi-modal interaction subsections. Nowadays many tablets with touch screens are already available, and with the new Apple iPad a full multi-touch product has been released. But there is also a noticeable trend toward larger multi-touch surfaces.

2.1 Multi-Touch Devices

As mentioned before, this section deals with the technique of the recently released multi-touch devices and with some new enhanced approaches. This method is now becoming common in tablet PCs, for example the new Apple iPad, and in the desktop sector with the HP TouchSmart. The screen thereby becomes an input and output device in one. But multi-touch is also used today in many ordinary touchpads that offer four-finger navigation. Since this invention much more work has been done in this sector, and it should sooner or later come into practical use as well. Nowadays the usage of touch screens and multi-touch pads already seems common, and this appears to be the future of human computer interaction, but many approaches show that there is certainly more enhancement to come. Among multi-touch products a trend to bigger touchpads, in the form of multi-touch screens, can be seen. Therefore the technique of a

single-touch touchpad, as known from former notebooks, is enhanced so that more fingers, offering natural human hand gestures, can be used. The user can thus use up to 10 fingers to fully control things with both hands, as with the 10/GUI system which R. Clayton Miller [2] introduced in 2009. With another upcoming tool, any surface can in future be used as such a touch screen, for example the Displax Multitouch Technology from DISPLAX Interactive Systems [4]. From these examples it can clearly be seen that new high-potential techniques are pushing into the market and are set to compete with Apple's iPad and Microsoft's Surface. In the following sections these new tools, and the related devices already on the market, are described in detail.

2.1.1 iPad


2.1.2 Microsoft Surface
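The tagged-object interaction that Surface supports can be sketched as a simple lookup from a recognized tag ID to digital information about the item. The tag IDs, catalogue entries, and function names below are invented for illustration and are not Microsoft's actual API.

```python
# Hypothetical catalogue mapping fiducial tag IDs, as reported by the
# infrared cameras' recognition step, to digital information menus.
TAG_CATALOGUE = {
    0x17: {"name": "coffee cup", "menu": ["refill options", "nutrition facts"]},
    0x2A: {"name": "phone", "menu": ["sync photos", "show battery status"]},
}

def on_object_recognized(tag_id, position):
    """Called by the (hypothetical) vision layer when a tagged item
    is detected on the tabletop at a given (x, y) position."""
    info = TAG_CATALOGUE.get(tag_id)
    if info is None:
        return f"unknown object at {position}"
    return f"{info['name']} at {position}: menu = {', '.join(info['menu'])}"

print(on_object_recognized(0x17, (320, 240)))
# prints "coffee cup at (320, 240): menu = refill options, nutrition facts"
```

The design point is the separation of concerns: the vision system only reports a tag ID and a position, and all knowledge about the physical item lives in the application's catalogue.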

The recently introduced iPad from Apple is one of the many implementations of full multi-touch displays, and it offers a completely new way for people to interact with their computer. The possibility to use the screen as more than a single-touch display takes Human Computer Interaction to the next level. With the iPad it is possible to use all the finger movements that are also possible with the built-in multi-touch touchpads of the Apple MacBooks, as documented on the Apple homepage. The user is thereby able to use up to four fingers at the same time to navigate through the interface: for example, two fingers can be used to zoom and four fingers to browse through windows. By using the screen as a big touchpad, the techniques of the normal touchpad have been enhanced. Yet this technique is just the beginning of the new multi-touch display revolution, which will certainly be expanded by increasing display sizes.
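The two-finger zoom gesture mentioned above can be sketched as a ratio of finger distances: the zoom factor is the current distance between the two touch points divided by their distance when the gesture began. The class and method names are illustrative, not Apple's actual API.

```python
import math

class PinchZoomTracker:
    """Tracks two touch points and derives a zoom factor from the
    change in distance between them (illustrative sketch only)."""

    def __init__(self, p1, p2):
        # Initial positions of the two fingers, as (x, y) tuples.
        self.initial_distance = self._distance(p1, p2)

    @staticmethod
    def _distance(p1, p2):
        return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

    def zoom_factor(self, p1, p2):
        # > 1.0 means the fingers moved apart (zoom in),
        # < 1.0 means they moved together (zoom out).
        return self._distance(p1, p2) / self.initial_distance

tracker = PinchZoomTracker((100, 100), (200, 100))  # fingers 100 px apart
print(tracker.zoom_factor((50, 100), (250, 100)))   # now 200 px apart: prints 2.0
```

The same distance-ratio idea applies unchanged whether the touch surface is a small touchpad or a full-screen display, which is one reason the gesture transfers so naturally between device sizes.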

Microsoft simply called their example of a multi-touch screen Surface. With this tool they built a large touch-screen tabletop computer, using infrared cameras to recognize objects on the screen. An object can thereby be a human finger or another tagged item placed on the screen. The device thus supports recognition of natural human hand gestures as well as interaction with real objects and shape recognition. No extra devices are required, and interaction happens directly with the hands. With the large 30-inch display, more than one person can interact with the system, and with each other, at the same time. The recognition of objects placed on the tabletop PC provides further information and interaction; it is for example possible to browse through different information menus about a placed item and obtain more digital information. On the other hand, the display size and the infrared cameras needed underneath it increase the overall size of the device, so the tool is mainly designed for stationary use, for example as a normal table with which it is possible to interact.

2.1.3 10/GUI


The 10/GUI system which Miller [2] invented is an enhanced touchpad for desktop computers that can recognize 10 fingers. With it, human beings can interact with the computer with both hands and use it as a tracking and perhaps also as a keyboard device. Miller designed this new touch-surface tool especially for use in the desktop field. To achieve the most ergonomic position, he argues, it is better to have a full multi-touch pad in front of a screen as the keyboard and mouse replacement than to have the whole screen as an input device, as known from other touch screens. This novel