TSINGHUA SCIENCE AND TECHNOLOGY  ISSN 1007-0214  0?/??  pp. ???-???  Volume 18, Number 3, June 2013

Towards a Service-oriented Architecture for a Mobile Assistive System with Real-time Environmental Sensing

Darpan Triboan, Liming Chen, Feng Chen and Zumin Wang*

Abstract: With the growing aging population, age-related diseases have increased considerably over the years. In response to these, ambient assistive living (AAL) systems are being developed and are continually evolving to enrich and support independent living. While most researchers investigate robust activity recognition (AR) techniques, this paper focuses on some of the architectural challenges of the AAL systems. This work proposes a system architecture that fuses varying software design patterns and integrates readily available hardware devices to create wireless sensor networks for real-time applications. The system architecture brings together the service-oriented architecture (SOA), semantic web technologies, and other methods to address some of the shortcomings of the preceding system implementations using off-the-shelf and open-source components. In order to validate the proposed architecture, a prototype is developed and tested positively to recognize basic user activities in real time. The system provides a base that can be further extended in many areas of AAL systems, including composite AR.

Key words: Activities of Daily Living (ADL), Service-Oriented Architecture (SOA), Semantic Web, Ontology Modeling, Web Ontology Language (OWL), Activity Recognition (AR), Smart Homes (SH), and Wireless Sensor Networks (WSNs).

1 Introduction

The global aging population is estimated to reach 2 billion by 2050 [1–3], and such an increase will inevitably create a larger demand on a health care system that is already facing a shortage of resources. The smart home (SH) environment is a technological solution to this modern-day problem: it works by monitoring and gathering contextual activity recognition (AR) data from inhabitants to provide

• Darpan Triboan, Liming Chen, and Feng Chen are with the Context, Intelligence, and Interaction Research Group (CIIRG), De Montfort University, Leicester, LE1 9BH, UK. E-mail: {darpan.triboan@my365., liming.chen@, fengchen@}dmu.ac.uk

• Zumin Wang is with the Department of Information Engineering, Dalian University, China. E-mail: [email protected]

* To whom correspondence should be addressed.

Manuscript received: 2016-07-01; revised: 2016-09-13, 2016-10-31; accepted: year-month-day

real-time assistance and care for the patient or elderly person. However, many problems must be resolved so that SH can fully simulate and take on the role of a care provider or health care professional to a certain degree [4].

This paper is set within the context of current problems related to the delivery of high-quality care for the aging population by health care professionals, and it focuses on addressing three levels of system architecture challenges in building an assistive system. These levels are (a) selecting an appropriate style and design pattern, (b) considering specific technological and technical requirements for activity recognition, and (c) building and integrating appropriate wireless sensor technologies for providing real-time assistance and monitoring. The sections below introduce these three levels, identify the key challenges, and discuss their potential opportunities.

2 Tsinghua Science and Technology, June 2016, 18(3): 000-000

1.1 Challenges and Opportunities

1.1.1 Assistive System Architecture Style and Patterns

One of the main system-architectural challenges in building an assistive system is to select appropriate design styles and patterns, which, unfortunately, may be easily misused [5–7]. Engaging with the wider community by having open-source components and using popular programming languages can play a key role in coming up with useful, adaptive, and personalised solutions. Other factors influencing the design decisions include semantic data storage, computation power requirements, low-latency communication protocols, and the ability to allow simultaneous access to users with a convenient human-computer interface (HCI). Some of the existing assistive systems (explored further in Section 2) are built in a standalone application environment. However, questions have been raised regarding their extensibility, reusability, scalability, maintainability, and/or use of proprietary components, which may have limited community support. In addition, a poor or unnatural HCI design poses practical limitations for the key users.

Over the years, the service-oriented architecture (SOA) approach has become popular because it can address some of the aforementioned issues and create a mechanism by which to delegate resource-intensive tasks and storage to powerful sets of computers over a network (cloud computing). Moreover, the SOA approach also allows low-power devices, such as mobile devices or any other gadgets with network capabilities, to utilise the available services. This has not only improved the HCI of such systems, but also made them scalable, so that they can serve cross-platform clients as well as integrate and reuse third-party services in creative ways. The approach now drives the concepts of SH, the Internet-of-Things (IoT), and ubiquitous or pervasive computing, and is the main approach by which everyday objects can be seamlessly integrated into the interconnected World Wide Web (WWW).

1.1.2 Activity Recognition (AR)

A key part of an assistive system is achieving accurate AR. However, AR capabilities within SH pose many challenges. AR involves three main tasks: activity modeling, data collection and monitoring, and data processing and pattern recognition [8]. The first task aims to create computational activity models from which the system infers and performs reasoning. These models can be generated using two different approaches, namely, data-driven and knowledge-driven. The data-driven approach involves processing predefined data to create a training model by using various machine-learning techniques. In contrast, the knowledge-driven approach takes conceptualizations of real-world axioms (i.e., established or accepted statements) [9, 10] and, from these, formally and explicitly defines the domain-specific knowledge. The second task aims to monitor and capture the inhabitants' behaviors along with changes in the environmental conditions. Here, a wide range of monitoring technologies and devices can be used, such as vision-based and sensor-based techniques; the choice depends on various factors, such as the type of information required, granularity level, privacy, and technical availability/feasibility of the devices. The third task aims to process the sensor data and map the extracted patterns against the activity models created in the first stage to determine which activity is being performed and, from this information, provide assistance accordingly.

In addition, within the first task of activity modeling, the data-driven approach includes generative and discriminative methods that employ statistical and probabilistic techniques to analyze pre-existing datasets and derive activity models. This approach can handle modeling uncertainties and temporal information. However, it suffers from the "cold start" problem because of the need for pre-existing datasets for model learning, which in turn leads to reusability and scalability issues. In comparison, knowledge-driven modeling uses formal theories (mining- and logic-based theories) and domain expertise to create activity models [11]. This approach eliminates the need to learn from a pre-existing dataset and hence has no "cold start" problem. However, the knowledge-driven approach struggles to handle uncertainties and temporal information as a result of its manually pre-defined activity models. A previous study [12] proposed a hybrid approach to address the shortcomings of both approaches by developing an incremental model discovery and activity learning method.
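As a toy illustration of the knowledge-driven approach described above, the sketch below hand-crafts activity models as sets of required sensor activations and infers an activity without any training data. All sensor and activity names here are hypothetical and are not taken from the SMART system.

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;

/**
 * Minimal knowledge-driven AR sketch: activity models are manually
 * pre-defined sets of required sensor activations, so no training
 * dataset (and hence no "cold start") is needed. All names are
 * hypothetical illustrations.
 */
public class KnowledgeDrivenAR {

    // Hand-crafted activity models (the "domain expertise").
    static final Map<String, Set<String>> MODELS = Map.of(
            "MakeTea", Set.of("kettle", "cup", "teabag"),
            "WatchTV", Set.of("sofa", "tv_remote"));

    /** Returns an activity whose required sensor activations are all present. */
    static Optional<String> infer(Set<String> activeSensors) {
        return MODELS.entrySet().stream()
                .filter(e -> activeSensors.containsAll(e.getValue()))
                .map(Map.Entry::getKey)
                .findFirst();
    }
}
```

A real knowledge-driven system would express such models in OWL and delegate the matching to a reasoner such as Pellet, but the underlying principle of checking observed evidence against explicitly defined models is the same.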

For each of the abovementioned AR tasks, various interdependent underlying technologies also exist [8]. These technologies present further integration challenges, mainly due to their differences in programming languages, development environments, proprietary components, and communication protocols. Therefore, the interconnectivity of each stage into a single

Triboan et al.: Towards Service-oriented Architecture for a Mobile Assistive System with Real-time Environmental Sensing 3

technology poses challenges for researchers.

Another challenge that arises from this topic

is the problem of storing the activity modeling and recognition data in a semantic structure such that the data can later be used in a meaningful way. The storage options considered here also influence the overall system-architectural design decisions. Recently, this has become a much wider issue with the accumulation of large amounts of unstructured or semi-structured data with no clear semantic relations. This has created many problems, such as automating the task of processing and retrieving data efficiently [13]. Currently, machine-learning techniques, such as genetic algorithms, are used to extract data and train computers on how to process them over time. This approach, however, is lengthy and computationally demanding. To make this process more efficient, the concept of the semantic web was introduced. This concept was originally envisioned by Tim Berners-Lee and his colleagues to create a Web of linked data that has semantic meaning, formalisms, and a structure that can be processed by a machine [9, 14]. This is achieved by representing the data in the form of a triple: subject-predicate-object. Common vocabularies are used and shared to create expressivity in the data (i.e., using the Resource Description Framework (RDF) [15, 16] and the Web Ontology Language (OWL) [17]). In addition, various reasoning engines (i.e., Pellet, HermiT, and FaCT++) are used to perform inferencing utilizing user-specific rules and formal languages. The triple datasets can be stored in a triple-store (database) as a graph, which is specially optimized for handling them. Moreover, just like the Structured Query Language (SQL) in traditional relational databases, the SPARQL Protocol and RDF Query Language (SPARQL) is used to perform create, read, update, and delete (CRUD) operations [15, 18]. These capabilities and benefits enable the back end of any application to achieve greater flexibility within its specific system architecture.
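To make the triple representation concrete, the following self-contained sketch stores subject-predicate-object statements in memory and answers a SPARQL-like pattern query. This is an illustration only, not the Apache Jena or Fuseki API, and the vocabulary terms are made up.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy in-memory triple store illustrating subject-predicate-object data.
 * A null field in the query pattern acts like a SPARQL variable.
 * Illustration only -- not the Apache Jena API.
 */
public class TripleStore {
    public record Triple(String s, String p, String o) {}

    private final List<Triple> triples = new ArrayList<>();

    public void add(String s, String p, String o) {
        triples.add(new Triple(s, p, o));
    }

    /** Pattern match, e.g. query(":MakeTea", ":requires", null)
     *  ~ SELECT ?o WHERE { :MakeTea :requires ?o } */
    public List<Triple> query(String s, String p, String o) {
        return triples.stream()
                .filter(t -> (s == null || t.s().equals(s))
                          && (p == null || t.p().equals(p))
                          && (o == null || t.o().equals(o)))
                .toList();
    }
}
```

A production triple-store adds indexing, persistence, and reasoning on top of this basic pattern-matching idea.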

1.1.3 Wireless Sensor Networks (WSNs)

WSN technology has enabled a large variety of applications to be developed; these have been applied across many domains, e.g., military [19], health care, transport [20], and smart-city infrastructure. WSNs play an important role in the emerging Network-of-Things (NoT) and IoT paradigms [21]. The capabilities of WSNs within assistive systems can be seen

as a supporting tool that allows humans or machines to interact with their environment and react to real-world events [22]. Therefore, the key responsibility of a WSN is to acquire environmental data from remote nodes and execute commands issued by a coordinator, also known as a sink or base station. Depending on the application requirements, various communication protocols are available through which a remote node can send data to the coordinator. These protocols have their own properties, benefits, and limitations, but they can be characterized by their range and energy consumption. Some of the popular protocols are ZigBee [20], Z-Wave, WiFi, 6LoWPAN, 2G/3G/4G/5G, Bluetooth (+BLE), radio frequency identification (RFID), near field communication (NFC), and infrared.

Owing to the diversity of communication protocols, a large number of vendors create application-specific off-the-shelf products that are not always open source. This can create a big challenge when integrating them within WSNs of any given size. However, vendors have made many efforts to address this challenge in recent years. One common practice is to provide application program interfaces (APIs) and software development kits (SDKs) to allow cross-platform third-party service integrations. For instance, the Securifi Almond+ router, Amazon Echo [23], and Samsung SmartThings [24] have the ability to interact with each other's devices. Although these services are growing, limited intelligence can be added to the sensor nodes, as they are governed by rules such as "if this, then that" concepts (i.e., IFTTT [25]). Furthermore, they still support only limited types of sensors for the fine-grained sensing capabilities needed for AR, e.g., a capacitive touch sensor on an object for dense sensing. Therefore, bespoke Arduino-based wireless sensing methods are still commonly used [26, 27]. The current paper integrates some of these aforementioned off-the-shelf and open-source WSN technologies within the system architecture to achieve real-time AR, monitoring, and assistance.

The consecutive sections are organized as follows. Section 2 discusses related works and existing systems to identify their shortcomings. Sections 3 and 4 present the proposed system architecture and the implementation details of an assistive system, respectively. Section 5 analyzes the experimental results and provides some discussion. It must be noted here that the aim of this paper is not to propose a new way of modeling or recognizing activities, but rather to assess the feasibility of the proposed system architecture and to prepare for further work in these areas. Finally, Sections 6 and 6.1 conclude the paper with recommendations for further work and acknowledgements of the contributors, respectively.

2 Related Works

Several assistive systems have been implemented in the past. In particular, two prototype assistive systems were implemented to provide activity recognition and assistance features for the elderly or those who have cognitive difficulties in carrying out Activities of Daily Living (ADL), namely, the SMART system [28, 29]. In its initial implementation, the SMART system was built in a standalone environment with a direct interface to the SH environment and featured a rich web-based interface built on the .NET platform. As shown in Fig. 1, the SMART system consists of six main classes: speech core, reasoning core, preferences core, communication core, simulation recording core, and database tools core. The speech core class is used to output pre-recorded audio messages to the user when assistance is triggered; personalization of the pre-recorded messages is also supported. The reasoning and preferences core classes are the core components of this system. The reasoning core class is used to infer the users' activities from their preferences. The user preferences are administered via basic or advanced learning methods presented by the system as well as the sensor activation data retrieved from the communication core. The data from the sensor activations (i.e., inferred activities from reasoning) can be recorded using the simulation recording core class. Such data can then be exported to the user's local disk or stored in a repository database as a history log.

In the latter implementation, the SOA approach was introduced (see Fig. 2) with open-source components. The core system was rewritten in one of the more popular programming languages, Java. The main reasons were to move away from a standalone environment as well as to resolve limited community support and the use of proprietary components. This approach allows many users on multiple devices to communicate simultaneously, with platform independence. The system further addresses the monolithic structure of the source code by logically separating it into three web services. Enterprise Service Bus (ESB) supporting software is used to bind these services together, thus enabling better

Fig. 1  System Architecture Overview: Initial implementation of the SMART system (2009)

Fig. 2  System Architecture Overview: Service-oriented implementation of the SMART system (2012)

maintainability, reuse, and debugging. The system still has a web-based interface that uses JavaScript with Asynchronous JavaScript and XML (AJAX) features to request and load data from the ESB. In addition, the Simple Object Access Protocol (SOAP) and Hypertext Transfer Protocol (HTTP) are used for exchanging data between different devices. Moreover, this service has the potential to be deployed onto cloud servers that possess superior computational capacity to perform very complex reasoning within a short amount of time [30]. One disadvantage of this system, however, is that its multiple web services with an ESB must be hosted on the network, which can create unnecessary overhead and delays in the system.

A previous study [31] presented a location-based context-aware system architecture in which a range of stakeholders can work collaboratively. The users do not require any prior programming knowledge to model, manage rules, infer, and specify actions. The system adopts the SOA-style architecture and has a web browser-based interface similar to the SMART SOA system. The results of the study indicate that the system is easy to use; however, the reasoning performance degrades as the number of models and the complexity of the rules increase. Likewise, [32] provides a pioneering OPEN framework. The OPEN framework


is based on an ontology for rapid prototyping, sharing, and personalization of the system for cooperative use by developers and non-expert users.

A number of other related works exist in the literature. For example, the assistive system in [33] enables remote assistance and monitoring between the hospital and the clients' SH environment. Another study [34] proposed an SOA-style architecture involving a mobile device and a web service to detect objects in real time using image analysis techniques and to augment the assistance on the user's tablet; here, a data-driven approach is employed through which images in the database are analyzed. Meanwhile, the work in [35] adopts the knowledge-driven approach to propose a multi-tier architecture for an autonomic ambient intelligence system. The system exploits ontology modeling techniques and logical rules [Java Expert System Shell (Jess)] to formally describe the environment as well as to infer and reason about the activity. In addition, [36] fused data-driven and knowledge-driven techniques to achieve unusual-behavior recognition with the help of Decision Support Systems (DSS) and ontology modeling for activity inferencing. The system provides natural interaction (i.e., speech and gesture) within the smart environment, and everything is controlled by a centralized server.

The current paper proposes a new system architecture and presents a system prototype that enhances the service-oriented implementation. In addition, further assistive features have been implemented, such as medicine dose management [37], appointment management, and notification services. The implemented system extends the previous web-based service to a more relevant mobile assistive service. The mobile phone's sensor capabilities can also play a role in supporting additional application scenarios for the inhabitant and improving the system's usability. Table 1 provides an overview of the two SMART system implementations along with the proposed system.

3 The Proposed System Architecture

The proposed system continues with the SOA approach, but develops the web service using the Representational State Transfer (REST) style instead of SOAP. In addition, the HCI of the SMART system has been improved by building an Android application instead of a browser-based interface (see Fig. 3).

Table 1  Comparison between the predecessors and the proposed system

By creating the mobile application, the system not only supports patients and caregivers on the move, but can potentially enable other stakeholders of the system (e.g., a patient's family members and relatives) to be more connected as the system is developed in the future. In addition, new features that can further assist the inhabitant in living independently or in a care home have been added. The features are derived from recent inspection reports of various care homes carried out by the Care Quality Commission [40]. The three web services have been combined into one web service by using suitable software design patterns, such as Façade, Repository, and MVC, hence removing the ESB to reduce the communication overhead. Other creational, structural, and behavioral patterns are also considered [7, 38]. Furthermore, a triple-store (Jena Fuseki server [39]) is used to create a distributed system that can be published, reused, and shared, thereby contributing towards the vision of the semantic web and linked data in the future. Overall, the proposed system consists of three main components: the REST-based web service (including the triple-store), the mobile application, and the sensing network.

3.1 Web Service

The REST-based web service has been identified as better suited for the following reasons. REST is lightweight in nature and is easy to use and implement compared with a SOAP web service. The SOAP-based protocol supports richer functionality but incurs communication overhead [40, 41].

Fig. 3  The proposed Mobile SMART system using SOA and Semantic Web technologies

In addition, SOAP poses some restrictions in terms of flexibility, explicit functional parameter requirements, and the data formats that it can produce and consume. In comparison, the JAX-RS library [42] used in the REST-based service does not require function parameter definitions or publication of the service, e.g., with Universal Description, Discovery, and Integration (UDDI). Another main feature of the REST-based service is that it enables clients to consume and produce data in a variety of formats, such as XML, JSON, HTML, and encoded text. This makes the system more interoperable than others and gives it the ability to support low-powered devices, whose limited energy budget benefits from its lightweight nature.
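To illustrate the REST style concretely, the sketch below serves a JSON sensor reading over plain HTTP. The paper's system uses JAX-RS; here the JDK's built-in `com.sun.net.httpserver.HttpServer` stands in as a dependency-free substitute, and the `/sensors/latest` path and payload are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/**
 * Dependency-free sketch of a REST-style endpoint producing JSON.
 * The actual system uses JAX-RS; the JDK's built-in HttpServer stands
 * in here, and the /sensors/latest path and payload are hypothetical.
 */
public class RestSketch {

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/sensors/latest", exchange -> {
            byte[] body = "{\"sensor\":\"kettle\",\"state\":\"on\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();   // serves until stop() is called
        return server;
    }
}
```

Any HTTP-capable client, including a low-powered mobile device, can consume such an endpoint with a single GET request, which is the interoperability argument made above.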

One of the main requirements for the web service is to capture and expose all the sensor data and activity inferencing results to the client devices upon user interaction with the environment. This is achieved by broadcasting the real-time sensor data to the clients using the Server-Sent Events (SSE) [43] mechanism instead of bidirectional WebSockets or a polling method, one of the main reasons being the reduced connection overhead. Although SSE is a unidirectional protocol, other standard requests can still be made by a client asynchronously outside its SSE connection. Another requirement of the web service is to capture and process sensor data that are communicated to the server in various media formats depending on the device vendor. In this proposal, the web service currently supports the Almond+ router's WebSocket connection, an XBee coordinator connected via a COM port, and other Arduino-based sensor collection using standard COM ports (see Sections 1.1.3 and 3.3 for more details).
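The SSE mechanism is simply a line-oriented text format pushed over a long-lived HTTP response. The sketch below shows how one sensor update could be framed; the event name and payload are hypothetical.

```java
/**
 * Sketch of the Server-Sent Events wire format: each event is a block of
 * "field: value" lines terminated by a blank line, pushed server-to-client
 * over a long-lived HTTP response. Event name and payload are hypothetical.
 */
public class SseFrame {

    /** Builds one SSE event block ready to be written to the response stream. */
    public static String frame(String event, String jsonData) {
        return "event: " + event + "\n"
             + "data: " + jsonData + "\n\n";  // blank line ends the event
    }
}
```

Because each event is plain text, a client only needs to hold the connection open and split on blank lines, which is why SSE carries less overhead than negotiating a WebSocket upgrade.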

The Jena Fuseki server has been used in order to achieve a distributed collection of data for higher scalability, reuse, and performance; however, other triple-stores are also available. This server supports the Java programming language and works well with the Apache Jena API [44], a supporting library that can be used to perform SPARQL queries and reasoning on the graph models stored on the server. Furthermore, the Jena Fuseki server supports various development tools, such as command-line query execution (ARQ) and a user-friendly web-based interface with which to write and run queries and manage multiple datasets.

The web service uses a combination of design patterns, such as façade and repository, to layer its components; in addition, the components of the three web services of the SOA SMART implementation are logically separated in terms of the task level performed by the classes. This process created five major layers: Smart Web Service API, Façade, Repository, Domain, and Utility. The Smart Web Service API exposes services as an API to enable client devices to consume their features/data. The Façade layer presents classes that perform high-level commands for complex operations by utilizing multiple repository classes. This layer also enables general CRUD operations to take place, hence serving as the data accessor component. The Repository layer is where the main logic is defined to perform querying and updating tasks against the Fuseki server, access sensor states, and create the reasoner repository that enables inferencing using rules and variants of reasoner implementations. The Domain layer contains classes that enable data to be mapped when communicating among the Fuseki server, the web service, and the Android application. Finally, at the Utility layer, low-level processes are performed, e.g., communicating with the Fuseki server via HTTP and with sensor devices via serial ports, as well as supporting ontology management for inferring and reasoning.
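The façade/repository layering can be pictured with a minimal sketch. All interface and method names below are invented for illustration; the real Repository layer would issue SPARQL queries against the Fuseki server.

```java
/**
 * Minimal sketch of the facade/repository layering described above.
 * All interface and method names are invented for illustration; the real
 * Repository layer would query the Fuseki server via SPARQL.
 */
public class LayersSketch {

    /** Repository layer: encapsulates the querying logic. */
    public interface SensorRepository {
        String latestState(String sensorId);
    }

    /** Facade layer: high-level operations composed from repositories,
     *  invoked by the Smart Web Service API layer. */
    public static class SmartFacade {
        private final SensorRepository sensors;

        public SmartFacade(SensorRepository sensors) {
            this.sensors = sensors;
        }

        public String describe(String sensorId) {
            return sensorId + " is " + sensors.latestState(sensorId);
        }
    }
}
```

The point of the façade is that client-facing API classes depend only on `SmartFacade`, so the repository implementation (in-memory stub, Fuseki-backed, etc.) can be swapped without touching the API layer.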

3.2 Mobile Applications

Smartphones have become more ubiquitous and have been integrated into the modern lifestyle. Smartphones are continuously becoming more powerful, with a diverse number of embedded sensors. In the future, these can be used for better contextual data collection as well as better usability of the system. In addition, delegating resource-intensive tasks to cloud-based services can not only further increase the capabilities of smartphones but also open up endless possibilities, such as Mobile Cloud Computing (MCC) [45], Cloud-based Mobile Augmentation (CMA) [46], and image recognition processing (i.e., mobile landmark recognition systems) [47]. The browser-based applications in the previous system implementations make the system less accessible to its users. For instance, patients and caregivers would need to carry a laptop, tablet, or other browser-capable device to interact with the web service in order to receive real-time assistance. Furthermore, a browser-based application may not be able to utilize all services available on the device, whereas built-in hardware components, such as a heart rate sensor, can be used to detect and monitor the users' inactivity. Further hardware devices can be attached to mobile devices using wired or wireless communication protocols, such as Bluetooth, NFC, and infrared. This capability allows limitless possibilities to collect diverse types of contextual data about the user.

The mobile application in the proposed architecture provides the main user interface (UI), which makes asynchronous HTTP requests to the REST web service. It uses a simple model-view-controller (MVC) design

pattern to logically separate the classes. The model package contains all of the domain models that are used to map the data communicated with the web service. The view package is composed of all the classes used to display views on the screens, i.e., activity classes, fragment classes, and dialog classes. Depending on the user types, the view package may have further sub-packages to separate the views. The controller package consists of all the classes that trigger requests to the server with the help of the utility classes, mainly view listeners and adapters. Finally, the utility package holds all the support classes, such as HTTP async requester classes, data parsing classes, data dictionary classes, and date format utilities.
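As a small illustration of the model and utility packages working together, the sketch below maps a web-service payload into a domain object. The "name=kettle;state=on" payload format is hypothetical, invented purely for this example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the mobile app's model package: a domain object plus a
 * utility-layer parser that maps a web-service payload into it.
 * The "name=kettle;state=on" payload format is hypothetical.
 */
public class SensorModel {
    public final String name;
    public final boolean active;

    public SensorModel(String name, boolean active) {
        this.name = name;
        this.active = active;
    }

    /** Utility-layer parsing: payload string -> domain model. */
    public static SensorModel parse(String payload) {
        Map<String, String> kv = new HashMap<>();
        for (String part : payload.split(";")) {
            String[] pair = part.split("=", 2);
            kv.put(pair[0], pair[1]);
        }
        return new SensorModel(kv.get("name"), "on".equals(kv.get("state")));
    }
}
```

Keeping the parsing in a utility class and the data in an immutable model keeps the controller and view packages free of wire-format details, which is the separation the MVC packaging above aims for.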

The SOA approach essentially follows a client-server pattern and resolves some of the technical challenges mentioned above in building an assistive system using the SH environment. For instance, a web service as a service provider and a mobile application as a client can work well together to bridge the communication gap between the SH environment and the mobile device, as well as to make the system more flexible in terms of scalability, performance, and platform independence.

Furthermore, the web service can take advantage of cloud computing technology to increase its ability to perform complex reasoning or computation tasks effortlessly. The benefits of using a mobile device can be numerous. For example, it would not only allow the inhabitant to have better HCI, but also allow the utilization of embedded sensors within the device or the attachment of external devices using wireless connectivity (i.e., Bluetooth). Devices such as smartwatches and Shimmer [48] sensing devices can be used to obtain additional contextual information about the inhabitant to increase AR accuracy, which, in turn, can lead to the provision of adequate assistance.

However, despite the advantages of using a smartphone-based application, providing every patient in a care home with a smartphone may not be financially feasible, and getting the elderly to use it can pose further challenges. Therefore, providing efficient and natural HCI methods for the elderly can reduce those problems to a degree. For instance, the recent introduction of devices such as the Amazon Echo [23], which provides voice-based interaction with the system and the ability to interconnect with smartphones and other smart devices using SmartThings [24], can be advantageous.

8 Tsinghua Science and Technology, June 2016, 18(3): 000-000

3.3 Sensing Network

As discussed in previous sections, a diverse number of sensors and communication protocols are currently available in the market. The proposed architecture currently uses the Securifi Almond+ router to perform ambient sensing, Arduino boards for dense sensing, and the Amazon Echo for voice interaction (see Section 4.1 for configuration details). The Securifi Almond+ router is used as the main IoT hub because of its WiFi, ZigBee, and Z-Wave protocol capabilities. Other hubs supporting similar protocols are also available, such as the Libelium Waspmote [49], SmartThings Hub, and VeraLite. However, further investigation may be required in order to obtain real-time data from these hubs. The popular Arduino board- and shield-based approach provides greater capability and flexibility with which to perform sensing; however, additional steps are required to configure the individual components. Meanwhile, the Amazon Echo currently supports WiFi and Bluetooth communication protocols, thus allowing voice interaction capabilities with third-party services.

In relation to the overall system architecture, the "Utility" library consists of packages and classes through which to extract, store, and process the data from the sensing hardware devices. In particular, the "Sensor Utils" package contains sub-packages and classes that interact with third-party APIs and hardware libraries (i.e., "*.almond" and "*.arduino"). Some of the key Java libraries used are the WebSocket API (for the Almond+ router), XBee, and comPort (both for Arduino). Moreover, these classes are used by the parallel thread classes to log the events ("EventLogThread"), perform device management ("DeviceManagementThread"), and store the data in the triple-store ("TDBStorageThread"). Fig. 4 illustrates the abovementioned utility library structure.

Fig. 4 Software: Breakdown of the “Sensor Utils” package

4 Implementation

The SMART system has been re-engineered to perform ADL assistance within both simulated and real environments. The inferencing is currently performed by using a preference-matching technique against the user's pre-defined preferences list. In addition, the care home inspection reports provided by the Care Quality Commission [50] have been analyzed, and several problems have been identified. Seven application scenarios are considered important; hence, these are partially implemented to support users. The scenarios are as follows: to allow the user to manage daily medication doses, appointments, and shopping checklists, as well as to report issues, make requests for bedwetting assistance, detect inactiveness (i.e., by using the user's heart rate values), and facilitate smart bedroom cupboard interaction.

Currently, only the web service supports all of the features described above, whereas the Android operating system (OS)-based application is yet to be fully developed. The real-time ADL inferencing and simulation environments, as well as the preference management and medicine dose management interfaces, have been implemented to demonstrate this architecture. An Android OS-based application was selected because of its availability, popularity, large community support, and previous experience of working with Android applications. Other operating systems were considered; however, due to a lack of resources and essential skillsets, they were not considered further.

Apart from the technologies already mentioned in the previous sections, other supporting software components used to build the system are the Jersey libraries [42] (i.e., the Jackson library for mapping JavaScript Object Notation (JSON) strings to objects), Jena [16, 44], Pellet (reasoner; see others [51]), Protege [52] (ontology editing tool), and Google API services [53] (i.e., the Text-To-Speech and Maps APIs [53]). The Jersey library plays a key role in developing the RESTful web services, handling the function and parameter mappings of incoming requests from the clients, as well as producing and consuming data in various formats dynamically. In general, the Jersey library is used to bind the web services with the Android application and to map data into various object classes.

4.1 Sensing Hardware Configuration

Ambient sensing is performed using preconfigured sensors that are compatible with the IoT hub, i.e., door,

Triboan et al.: Towards Service-oriented Architecture for a Mobile Assistive System with Real-time Environmental Sensing 9

Fig. 5 Hardware: Connectivity diagram of sensing devices

motion, and multi sensors. Dense sensing is performed using bespoke configurations wherein Arduino Uno boards with XBee shields and modules are used to create a mesh network; see [54] and [55] for more details. The main coordinator that receives data from the remote nodes is directly connected to the web server using a COM port. However, other options are also available to send the data from the coordinator to the server, such as by using WiFi shields or Bluetooth. The remote nodes are connected to various multimodal sensors and send their statuses to the coordinator when an event is triggered. In addition, an Android mobile phone, Amazon Echo, and WeMo sockets are also attached to the IoT router. The Android mobile phone is directly connected to the Amazon Echo via Bluetooth to output activity recognition results. In turn, the Amazon Echo can interact with the Almond+ router and with other popular sensing vendors. The WeMo sockets and Amazon Echo can be easily integrated within the proposed mobile application using their APIs. Fig. 5 presents the possible hardware configuration for starting to collect the raw data.

4.2 Dataflow among the Android Application, Web Service, and Apache Fuseki Server

The web service is central to the Android application and the Apache Fuseki Server. The Android application makes standard HTTP requests (i.e., GET, PUT, POST, and DELETE) to the web service to perform several tasks, such as CRUD operations, inferencing, reasoning, and other complex application-based logic. All the RDF data and ontologies are stored in the Apache Fuseki Server as a graph. Therefore, the data are retrieved and manipulated by the web service using the SPARQL query language with the support of the Apache Jena library and the standard HTTP protocol. However,

Fig. 6 Server-sent event (SSE) mechanism for real-time message flow of sensing and inferencing results between client and web service

the real-time sensing data are exposed to the clients using a half-duplex, listener-subscription mechanism (i.e., Server-Sent Events (SSE) [43]) rather than a full-duplex WebSocket. One of the key reasons for this decision is that the process-intensive tasks of inferencing and reasoning are performed independently of the real-time event logging process.
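For illustration, the wire format such an SSE subscriber consumes is a stream of "data:" lines. The following minimal parser is a sketch of that format only; the implementation itself relies on Jersey's SSE support, and the class name and sample payloads here are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: extract the payloads of "data:" lines from a raw
// text/event-stream chunk, as an SSE subscriber would receive them.
// (The actual client uses Jersey's SSE classes; this only shows the format.)
class SseFrameParser {
    static List<String> dataPayloads(String stream) {
        List<String> out = new ArrayList<>();
        for (String line : stream.split("\n")) {
            if (line.startsWith("data:")) {
                // Drop the "data:" field name and surrounding whitespace.
                out.add(line.substring(5).trim());
            }
        }
        return out;
    }
}
```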

The web service exposes two SSE methods to the clients: one for broadcasting real-time sensor events and another for delivering inferencing results to clients holding a session token. This sequence of events between the client device and the key components in the web service is illustrated in Fig. 6. As can be seen, the client Android application can listen to the sensor events in the background asynchronously by making an SSE call to the "EventBroadcaster" function in the SensorsCall class located in "SmartWebServiceAPI" (A). To receive client-specific inferencing results, the client must first obtain the session identity from the "ReasonerCall" (B). The "ReasonerCall" is responsible for listening to the sensor events from the given time, performing inferencing, and then broadcasting the result using the "ResultsBroadcaster" function (B.1). Once the client receives the session token, a request can be made to "ResultsBroadcaster," after which the task of listening for the inferencing results associated with the session identity is initiated. Meanwhile, the client device is responsible for closing the session (C) and, if required, storing the session data separately.

Fig. 7 Pseudocode for executing a SPARQL query on the server endpoint using Jena API

The web service performs a query and an update

request in three simple steps: (1) building the SPARQL query/update string, (2) using Jena classes or standard HTTP POST methods to execute the request, and (3) parsing the responses. The pseudocode shown in Fig. 7 performs a simple SPARQL query on the local Fuseki server endpoint and parses the result using the ResultSet and QuerySolution classes. A standard HTTP POST request can be made to perform a SPARQL update using the HttpPost, HttpClient, and HttpResponse classes. However, the request content type must be set to "application/sparql-update," and a static variable already defined in Jena's WebContent class ("WebContent.contentTypeSPARQLUpdate") can be used.
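The update path described above can be sketched with the JDK's java.net.http client; the implementation uses Apache's HttpPost/HttpClient classes, so the builder class name and the endpoint URL here are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds the SPARQL update request described above.
// The endpoint URL passed by the caller is illustrative.
class SparqlUpdateRequestBuilder {
    static HttpRequest build(String endpoint, String update) {
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                // Fuseki expects this content type for raw SPARQL updates
                // (the same value Jena exposes as WebContent.contentTypeSPARQLUpdate).
                .header("Content-Type", "application/sparql-update")
                .POST(HttpRequest.BodyPublishers.ofString(update))
                .build();
    }
}
```

Sending the built request with an HTTP client and checking for a 2xx status completes step (3) for updates, which return no result set to parse.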

Next, the Android application makes requests to the web service using the standard HTTP methods (HttpGet, HttpPost, HttpPut, and HttpDelete), exclusively in JSON format; hence, the request headers must be set appropriately. The Android application parses the JSON data and, by using the "org.codehaus.jackson.map.ObjectMapper" class, the data can be automatically remapped into their respective class instances.

4.3 Ontology Modeling and Data Structuring

An ontology editing tool, such as Protege [52], can be used to build a conceptual model at varying levels of abstraction, leading to the encapsulation of a particular set of knowledge. Then, while structuring and adding metadata to the raw data, these ontologies can be used across various domains as a vocabulary. In this way, the dataset is semantically enriched and the reusability of the data is increased, along with an improved ability to infer additional data and perform reasoning using real-world axioms [10]. However, a problem that must be solved

Fig. 8 Layered object properties for bucket-based structured data

Fig. 9 Bucket-based approach for data structuring using object properties

at this point is the existence of multiple events or activities associated with one single instance.

A few possible solutions are considered, one of which is simply linking the activity/event instances directly to the main instance using object properties. This could work, but it would create a large number of instances that would still be unstructured in terms of instance data grouping, ordering, and visualization. In turn, this could increase querying complexity and create unnecessary computation overhead as the system data grow.

Another approach is a bucket-based one, similar to a table-like structure in a relational database, in which all the data can be associated with a bucket instance using object properties at various inheritance levels (see Fig. 8). For instance, the Patient1 individual can have an object property "hasAppointment" with a value that is an object instance of "Patient1_Appointments" (the bucket). This bucket ("Patient1_Appointments") can have N number


of appointment object instances as values, such as "Patient1_Appointments_10_10_15," which is defined using a sub-property of the "hasAppointment" object property called "hasAppointmentItem" [see Fig. 9(1)]. The individual "Patient1_Appointments_10_10_15" will hold all the relevant data required for the appointment, such as the date and time of the appointment, location, and notes. This process can be repeated to represent other application scenarios, such as medication lists, notifications, and other user-specific preferences [see Fig. 9(2)].
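In Turtle notation, the bucket pattern might look as follows; the namespace prefix, the data property names, and the literal values are illustrative assumptions rather than the paper's actual ontology:

```turtle
@prefix :    <http://example.org/smart#> .    # illustrative namespace
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# The patient links to a single bucket individual...
:Patient1  :hasAppointment  :Patient1_Appointments .

# ...and the bucket collects N appointment items via the sub-property.
:Patient1_Appointments
    :hasAppointmentItem  :Patient1_Appointments_10_10_15 .

# Each item holds the actual appointment data.
:Patient1_Appointments_10_10_15
    :appointmentDate  "2015-10-10T09:30:00"^^xsd:dateTime ;
    :location         "Clinic" ;
    :notes            "Bring current medication list" .
```

Queries can then fetch one patient's appointments by traversing two fixed properties, instead of filtering an unstructured pool of instances.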

4.4 SPARQL-based Inferencing

In order to perform activity assistance in ADL, a simple simulated environment is created to enable various sensors and view the activity recognition results [see Fig. 10(b) and Fig. 13]; here, the Text-to-Speech feature is also used for the resulting output. The activity recognition is performed by the web service using a data-driven approach. Currently, only pre-defined user preferences [shown in Fig. 10(a) and Fig. 12 for the preference management interface] are used to match against the activated sensors. The aim of the matching process is to find the related user preference(s) and the remaining inactivated sensor object(s) from the matched individual preference(s) in order to complete the activity. To carry this out, the current implementation uses SPARQL queries following the steps defined below. Some examples are illustrated in Fig. 11.

1. Find a user preference that contains all of the activated sensor objects and does not contain additional sensor objects in the same preference.

2. Otherwise, N user preferences are returned, each of which has all or some of the activated devices listed in a particular preference, along with other inactive sensor objects.

2.1. The number of activated sensor object(s) in each user preference is counted and ordered in descending order.

2.2. Using the results obtained, the search for the missing sensor object(s) is carried out by inspecting the individual user preferences. The matched sensor objects from the individual user preference are excluded by using key functions such as FILTER, logical and comparison operators, or conditional SPARQL operators [18].
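Step 1 above can be sketched as a single SELECT query; the prefix, property name, and sensor individuals are illustrative assumptions:

```sparql
PREFIX : <http://example.org/smart#>

SELECT ?preference
WHERE {
  # The preference must contain every activated sensor object...
  ?preference :hasSensorObject :KitchenTap1 , :CupboardDoor1 .
  # ...and no sensor objects beyond the activated ones.
  FILTER NOT EXISTS {
    ?preference :hasSensorObject ?other .
    FILTER (?other NOT IN (:KitchenTap1, :CupboardDoor1))
  }
}
```

If this query returns no binding, the implementation falls back to step 2, relaxing the exact-match constraint and ranking candidate preferences instead.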

One of the advantages of this SPARQL query-based approach is that it does not require model loading or

Fig. 10 Managing user preferences and the ADL simulation mode interface

Fig. 11 Illustrating the inferencing steps taken using the SPARQL query language

reasoning libraries. However, this approach does require explicit relationships to be defined in the dataset. To bridge this gap, the SPARQL Inferencing Notation (SPIN) can be used to create rules, constraints, and functions in SPARQL syntax, which can be executed on the triplestore. SPIN is also known as SPARQL Rules; for more information, see [56, 57].

4.5 Additional Application Scenarios

The Android application currently provides a simple login mechanism that directs users to different interfaces depending on their user type, i.e., Patient, Caregiver, Administrator, or System Manager. The user-specific interface allows the user to navigate through different activities. Fig. 14(a) shows the patient's UI, and Fig.


Fig. 12 User preference management interface in action

Fig. 13 ADL simulation result of two possible preferences with their missing sensors to complete the activity

Fig. 14 Patient's main menu and the UI for managing medicine doses

14(b) shows one of the features, which allows patients to manage their medication and dose timing records. The UI and other features for other users will be further developed in successive prototypes.

5 Experiment and Discussions

5.1 Experiment Details

The proposed system implementation is tested by measuring the time between sensor activation and the generation of inferencing results on the client device. The sensor activation time is only taken into consideration

once the data are received by the web service. This reduces the effort required for time synchronization between the sensing devices.

A fixed time window length is defined for the six user activity preferences (UAPs) that are listed and tested under three different scenarios; see Tables 2 and 3. The first scenario (TP1) activates the exact number of sensors defined in the user preferences, the second scenario (TP2) activates additional sensor objects, and the third scenario (TP3) simulates faulty sensors by leaving some sensor objects missing or not activated. The scenarios for the first two activities are illustrated in Table 4. Overall, each of the six activities is executed under the three scenarios by two actors (Exp).

The web service was deployed on an HP Z440 workstation with an Intel(R) Xeon(R) v3 3.50 GHz processor and 16 GB RAM. The mobile application was tested on a Samsung S6 Edge smartphone running Android 6.0.1. The sensing data were collected using several touch sensors and door contact sensors through the different protocols defined in Section 4.1.

5.2 Results

The results in Table 5 indicate that, on average, it takes 4477 ms to receive the inferencing result on the mobile phone across all six UAPs, with the three different scenarios each executed thrice. Overall, the results show little to no correlation between the number of sensors in the UAPs and the average time taken for inferencing and then communicating the results to the user.

Table 2 User activity preferences with the associated total number of sensor objects


Table 3 AR test scenario types

Table 4 Two examples of AR test cases

Table 5 Results showing average activity inferencing duration from the last activities recorded

5.3 Discussions

Although this paper does not focus on proposing activity recognition approaches, further changes in the system are still required to utilize the full capabilities of OWL and DLs. One of the current limitations of the defined SPARQL-based inferencing approach is that assertions (ABox), or instances, are mainly used rather than the terminology (TBox). The term TBox refers to the concepts and roles that are defined as vocabulary, whereas the ABox contains the named individuals instantiating those concepts [10]. Using the TBox allows the vocabularies to be generalized, shared, and applied across domains. In addition, the AR process can be enriched by investigating the process of dynamically separating and segmenting sensor data using these shared vocabularies and personalized rules/preferences. Another key difference in the proposed system is that all the data are stored in the triplestore and all open-source hardware and software components are utilized.
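As a brief illustration of this distinction (with an assumed namespace), the TBox defines the vocabulary while the ABox instantiates it:

```turtle
@prefix :    <http://example.org/smart#> .   # illustrative namespace
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# TBox: the shared, reusable vocabulary (concepts and roles).
:Patient         a owl:Class .
:hasAppointment  a owl:ObjectProperty .

# ABox: named individuals asserted using that vocabulary.
:Patient1  a :Patient ;
    :hasAppointment  :Patient1_Appointments .
```

Queries and rules written only against the TBox terms remain valid across deployments, whereas queries hard-wired to ABox individuals do not transfer.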

HCI with the system also plays a key role in gaining further benefits from the system's capabilities. The current system implementation uses a mobile application; however, our society is moving towards more natural and ubiquitous HCI. Other systems discussed in Section 2 [34, 36] have already adopted the notion of augmented reality to overlay instructions on the camera view or use natural gesture-/voice-based HCI. In comparison, the SMART system implementations and other systems discussed in Section 2 mainly have a web-browser-based interface; this may limit the client devices from further utilization, unlike mobile devices with embedded sensor capabilities that can collect meaningful and contextual data. In addition to the embedded sensors within the mobile device, instead of configuring additional dense or ambient sensors in the SH environment, more external sensors can be directly attached to a mobile device using any standard communication protocol [58].

Past system implementations with similar architectural styles and patterns have shown positive results in both functional and non-functional requirements, and not only for AAL systems [59, 60]. However, finding suitable design patterns for a given application can be challenging, and patterns can easily be misused [5, 7]. Nevertheless, several benefits of using popular styles and patterns exist. One example is system maintainability, which can improve code comprehension and enable efficient debugging for the developer. Furthermore, the decomposed SOA architecture can enable any application to improve its scalability. In the case of the proposed system, additional sensing devices can be added within the SH so that the server can collect, process, and disseminate data to multiple clients more easily. Moreover, creating an opportunity to interact with other third-party services can help to extend the capabilities of the existing ubiquitous system.

6 Conclusions

This paper investigates some of the system architectural issues encountered when building an AAL system. This was achieved by investigating some of the latest components that can integrate with and complement one another. A generic system architecture is proposed, which integrates and further extends the previous system implementations by introducing a lightweight, REST-based web service with an Android mobile application interface. The web service plays a key role in interacting with the triple-store (Apache Jena Fuseki Server) endpoint, the SH sensors, and the mobile client applications.


The web service provides activity inferencing and reasoning capabilities using the Jena API; different reasoning engines can also be easily integrated. Moreover, this generic architecture uses simple design patterns (facade, repository, and domain for the web service, and MVC for the Android application). The proposed system architecture also uses open-source components that can be deployed in a distributed environment, making it scalable as well as easy to use, maintain, and develop further.

A real-time system was implemented to illustrate the feasibility of the proposed architecture with some additional use case scenarios. The system leverages popular off-the-shelf and open-source hardware components. The real-time testing results show that the average inferencing time taken to display the results to the user is 4477 ms. Finally, the implementation shows greater flexibility and potential for further development in terms of usability, ability to support additional application scenarios, and capacity to provide a greater scope for collecting personalized and contextual data (i.e., by pairing wearable devices with the mobile phone and integrating other third-party APIs), thus increasing the accuracy of activity recognition.

6.1 Further Development

Future implementations will focus on areas such as improving data modeling techniques; semantically processing raw sensor data with an efficient timing mechanism [61]; inferencing and reasoning about activities with the Jena API; and enhancing the SH sensing capabilities, performance optimization, and HCI methods (i.e., utilizing Amazon's Alexa voice services [62]). In addition, rules (i.e., SPIN [57] and SWRL rules [63]) and Description Logics (DLs) capabilities can be explored in place of the current SPARQL-based querying approach. Finally, the system currently solves the problem of single sequential activity detection. The challenge of recognizing multiple or interweaving activities occurring concurrently in a non-sequential order is still being investigated. In this light, future work will focus on developing a framework or mechanism that can support the ability to disentangle complex activities, i.e., recognizing that the user is making hot chocolate, taking medicine, and speaking on the phone simultaneously.

Acknowledgments

The authors gratefully acknowledge the contributions of Simon Forest to the implementation, deployment, and testing of the system. This project has been partially funded by the EU H2020 Marie Sklodowska-Curie Actions, ITN-ETN (ACROSSING Project ID: 676157) and the Research Investment Fund, DMU.

References

[1] X. Zhang, H. Wang, and Z. Yu. Toward a Smart Home Environment for Elder People Based on Situation Analysis. 2010 7th International Conference on Ubiquitous Intelligence & Computing and 7th International Conference on Autonomic & Trusted Computing, pages 7–12, 2010.

[2] R. Sterritt and C. Nugent. Autonomic Computing and Ambient Assisted Living - Extended Abstract. Engineering of Autonomic and Autonomous Systems (EASe), 2010 Seventh IEEE International Conference and Workshops on, pages 149–151, 2010.

[3] D. Triboan, L. Chen, and F. Chen. Towards a Mobile Assistive System Using Service-oriented Architecture. In 2016 IEEE Symposium on Service-Oriented System Engineering, pages 187–196, Oxford, 2016. IEEE.

[4] G. Bohme. Invasive Technification: Critical Essays in the Philosophy of Technology. Bloomsbury Publishing, 2012.

[5] L. Pavlic, M. Hericko, and V. Podgorelec. Improving design pattern adoption with an ontology-based design pattern repository. In Information Technology Interfaces, 2008. ITI 2008. 30th International Conference on, pages 649–654, June 2008.

[6] M. Ali and M. O. Elish. A Comparative Literature Survey of Design Patterns Impact on Software Quality. Information Science and Applications (ICISA), 2013 International Conference on, pages 1–7, 2013.

[7] C. Zhang, D. Budgen, and S. Drummond. Using a follow-on survey to investigate why use of the visitor, singleton & facade patterns is controversial. In Proceedings of the ACM-IEEE International Symposium on Empirical Software Engineering and Measurement - ESEM '12, pages 79–88, 2012.

[8] L. Chen, J. Hoey, C. D. Nugent, D. J. Cook, and Z. Yu. Sensor-based activity recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(6):790–808, 2012.

[9] A. Ameen, Rani, and B. Padmaja. Extracting knowledge from Ontology using Jena for Semantic Web. Pune, pages 2–6, 2014.

[10] S. Staab and R. Studer. Handbook on Ontologies. Springer-Verlag Berlin Heidelberg, 2nd edition, 2009.

[11] R. Culmone, M. Falcioni, R. Giuliodori, E. Merelli, A. Orru, M. Quadrini, P. Ciampolini, F. Grossi, and G. Matrella. AAL domain ontology for event-based human activity recognition. Mechatronic and Embedded Systems and Applications (MESA), IEEE/ASME 10th Intl. Conf., 24:1–6, 2014.

[12] L. Chen, C. Nugent, and G. Okeyo. An ontology-based hybrid approach to activity modeling for smart homes. IEEE Transactions on Human-Machine Systems, 44(1):92–105, 2014.

[13] D. Gasevic, D. Djuric, V. Devedzic, and B. Selic. Model Driven Architecture and Ontology Development. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.

[14] J. Davies, F. van Harmelen, and D. Fensel, editors. Towards the Semantic Web: Ontology-driven Knowledge Management. John Wiley & Sons, Inc., New York, NY, USA, 2002.

[15] S. Powers. Practical RDF. O’Reilly & Associates,Inc., Sebastopol, CA, USA, 2003.

[16] Apache. An Introduction to RDF and the Jena RDF API. http://jena.apache.org/tutorials/rdf_api.html.

[17] W3C. OWL 2 Web Ontology Language Document Overview. http://www.w3.org/TR/owl2-overview/, December 2012.

[18] B. DuCharme. Learning SPARQL, 2nd edition. O'Reilly, Sebastopol, CA, USA, 2013.

[19] W. Pawgasame. A Survey in Adaptive Hybrid Wireless Sensor Network for Military Operations. In Defence Technology (ACDT), pages 78–83, Chiang Mai, 2016. IEEE.

[20] X. Hu, L. Yang, and W. Xiong. A Novel Wireless Sensor Network Frame for Urban Transportation. IEEE Internet of Things Journal, 2(6):586–595, 2015.

[21] P. Gaikwad, J. P. Gabhane, and S. S. Golait. A survey based on Smart Homes system using Internet-of-Things. In 2015 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), pages 330–335, Chennai, 2015. IEEE.

[22] I. Khan, F. Belqasmi, R. Glitho, N. Crespi, M. Morrow, and P. Polakos. Wireless Sensor Network Virtualization: A Survey. IEEE Communications Surveys & Tutorials, 18(1):553–576, 2016.

[23] Amazon. Amazon Echo. http://www.amazon.com/Amazon-SK705DI-Echo/dp/B00X4WHP5E, January 2016.

[24] Samsung. SmartThings. https://www.smartthings.com/compatible-products.

[25] IFTTT. Recipes on IFTTT are the easy way to automate your world. https://ifttt.com/.

[26] M. S. Perez and E. Carrera. Time synchronization in Arduino-based wireless sensor networks. IEEE Latin America Transactions, 13(2):455–461, 2015.

[27] Samsung SmartThings. SmartThings Shield for Arduino. https://shop.smartthings.com/#!/products/smartthings-shield-arduino.

[28] L. Chen, C. Nugent, and A. Al-Bashrawi. Semantic data management for situation-aware assistance in ambient assisted living. In Proceedings of the 11th International Conference on Information Integration and Web-based Applications & Services - iiWAS '09, page 298, 2009.

[29] L. Chen, C. Nugent, and J. Rafferty. Ontology-based Activity Recognition Framework and Services. In Proceedings of the International Conference on Information Integration and Web-based Applications & Services - iiWAS '13, pages 463–469, 2013.

[30] X. Wang, J. Wang, X. Wang, and X. Chen. Energy and delay tradeoff for application offloading in mobile cloud computing. IEEE Systems Journal, PP(99):1–10, 2015.


[31] D. Martín, D. López de Ipiña, A. Alzua-Sorzabal, C. Lamsfus, and E. Torres-Manzanera. A methodology and a web platform for the collaborative development of context-aware systems. Sensors, 13(5):6032, 2013.

[32] B. Guo, D. Zhang, and M. Imai. Toward a cooperative programming framework for context-aware applications. Personal and Ubiquitous Computing, 15(3):221–233, March 2011.

[33] P. N. Borza, M. Romanca, and V. Delgado-Gomes. Embedding patient remote monitoring and assistive facilities on home multimedia systems. In 2014 International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), pages 873–879, May 2014.

[34] T. Kistel, O. Wendlandt, and R. Vandenhouten. Using distributed feature detection for an assistive work system. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 1801–1802, Oct 2014.

[35] A. D. Paola, P. Ferraro, S. Gaglio, and G. Lo Re. Autonomic behaviors in an ambient intelligence system. In 2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (IEEE SSCI 2014), 2014.

[36] A. Reichman and M. Zwiling. The architecture of ambient assisted living systems. In IEEE International Conference on Microwaves, Communications, Antennas and Electronic Systems, 2011.

[37] A. N. Khan, N. Díaz Rodríguez, R. Danielsson-Ojala, H. Pirinen, L. Kauhanen, S. Salanterä, J. Majors, S. Björklund, K. Rautanen, T. Salakoski, I. Tuominen, I. Porres, and J. Lilius. Smart dosing: A mobile application for tracking the medication tray-filling and dispensation processes in hospital wards. In Juan Carlos Augusto and Klaus-Hendrik Wolf, editors, 6th International Workshop on Intelligent Environments Supporting Healthcare and Well-being (WISHWell'14), Lecture Notes in Computer Science, page 110. Springer, 2014.

[38] H. Guilan, W. Sheng, and Y. Jun-Ping. Application of Design Pattern in the JDBC Programming. Computer Science & Education (ICCSE), Colombo, pages 1037–1040, 2013.

[39] Apache. Apache Jena Fuseki. https://jena.apache.org/documentation/fuseki2/index.html.

[40] Q. Z. Sheng, X. Qiao, A. V. Vasilakos, C. Szabo, S. Bourne, and X. Xu. Web services composition: A decade's overview. Information Sciences, 280:218–238, 2014.

[41] X. Hu, T. Chu, V. Leung, E. C.-H. Ngai, P. Kruchten, and H. Chan. A Survey on Mobile Social Networks: Applications, Platforms, System Architectures, and Future Research Directions. IEEE Communications Surveys & Tutorials, PP(99):1, 2014.

[42] Jersey. RESTful Web Services in Java. https://jersey.java.net/.

[43] Jersey. Chapter 15. Server-Sent Events (SSE) Support. https://jersey.java.net/documentation/latest/sse.html.

[44] Apache. Jena Ontology API. https://jena.apache.org/documentation/ontology/.

[45] M. Ayad, M. Taher, and A. Salem. Real-time mobile cloud computing: A case study in face recognition. In Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pages 73–78, May 2014.

[46] S. Abolfazli, Z. Sanaei, E. Ahmed, A. Gani, and R. Buyya. Cloud-based augmentation for mobile devices: Motivation, taxonomies, and open challenges. IEEE Communications Surveys and Tutorials, 16(1):337–368, 2014.

[47] Z. Li and K. Yap. Context-aware Discriminative Vocabulary Tree Learning for mobile landmark recognition. Digital Signal Processing, 24:124–134, 2014.

[48] Shimmer. Shimmer Sensing. http://www.shimmersensing.com/, December 2015.

[49] Libelium. Waspmote Plug & Sense. http://www.libelium.com/products/plug-sense/, 2013.

[50] Care Quality Commission. About us. http://www.cqc.org.uk/content/about-us.

[51] K. Dentler, R. Cornet, A. ten Teije, and N. de Keizer. Comparison of reasoners for large ontologies in the OWL 2 EL profile. Semantic Web, 2(2):71–87, 2011.

[52] Stanford University. Protégé: A free, open-source ontology editor and framework for building intelligent systems. http://protege.stanford.edu/.

[53] Google. Products. https://developers.google.com/products/.

[54] R. Faludi. Building Wireless Sensor Networks. O'Reilly Media, Sebastopol, 1st edition, 2010.

[55] T. Igoe. Making Things Talk. Maker Media, Inc., Sebastopol, 2nd edition, 2007.

[56] G. Meditskos, S. Dasiopoulou, V. Efstathiou, and I. Kompatsiaris. SP-ACT: A hybrid framework for complex activity recognition combining OWL and SPARQL rules. In Pervasive Computing and Communications Workshops (PERCOM Workshops), 2013 IEEE International Conference on, pages 25–30. IEEE, 2013.

[57] W3C. SPIN - Overview and Motivation. http://www.w3.org/Submission/spin-overview/, February 2011.

[58] R. K. Lomotey and R. Deters. Sensor data propagation in mobile hosting networks. In Service-Oriented System Engineering (SOSE), 2015 IEEE Symposium on, pages 98–106, March 2015.

[59] W. Dai and V. Vyatkin. A component-based design pattern for improving reusability of automation programs. IECON Proceedings (Industrial Electronics Conference), pages 4328–4333, 2013.

[60] X. Xu, Y. Tao, X. Wang, and X. Ding. Research on architecture of smart home networks and service platform. In Digital Home (ICDH), 2014 5th International Conference on, pages 232–236, Nov 2014.

[61] L. Chen, C. D. Nugent, and H. Wang. A knowledge-driven approach to activity recognition in smart homes. IEEE Transactions on Knowledge and Data Engineering, 24(6):961–974, 2012.

[62] Amazon Developer. Alexa - Build engaging voice experiences for your services and devices. https://developer.amazon.com/public/solutions/alex.

[63] W3C. SWRL: A Semantic Web Rule Language Combining OWL and RuleML. https://www.w3.org/Submission/SWRL/, 2004.

Darpan Triboan is currently a Ph.D student at De Montfort University, having received his BSc (Hons) and MSc at the same university in 2014 and 2015. His current research interests include semantic and knowledge representation, wireless sensor networks (WSNs), pervasive computing, and ambient assisted living (AAL).

Liming Chen is Professor of Computer Science and the Head of the Context, Intelligence, and Interaction Research Group (CIIRG) of the School of Computer Science and Informatics at De Montfort University, United Kingdom. He received his B.Eng and M.Eng from Beijing Institute of Technology (BIT), Beijing, China, and his Ph.D in Artificial Intelligence from De Montfort University, UK. Liming has extensive research expertise and a wide range of research interests in areas such as artificial intelligence, semantic and knowledge representation, pervasive computing, and ambient assisted living (AAL).

Feng Chen received his BSc, MPhil, and PhD from Nankai University, Dalian University of Technology, and De Montfort University in 1991, 1994, and 2007, respectively. He is a senior lecturer at De Montfort University. His research interests include software engineering, distributed computing, knowledge engineering, and image processing.

Zumin Wang is a Professor of Computer Science at Dalian University, China, and the Head of the Dalian Key Lab of Smart Medical and Healthcare. He received his M.Eng from North China Institute of Technology, Taiyuan, China, and his Ph.D from the Institute of Electronics, Chinese Academy of Sciences, China. He has worked as a lecturer, an associate professor, and a professor at the College of Information Engineering, Dalian University. His current research areas include the Internet of Things, software engineering, and wireless sensor networks.