
Wearable Experience for Knowledge Intensive Training

First Prototype

Editors: Carl Smith, Fridolin Wild

Ref. Ares(2017)1742283 - 31/03/2017


Revision History

Version | Date | Contributor(s) | Modification
0.1 | 06.02.2017 | Fridolin Wild, Alla Vovk and Will Guest (OBU); Jazz Rasool (RAV); Puneet Sharma (UiT); Marcus Specht, Roland Klemke, Bibeg Limbu and Daniele Di Mitri (OUNL); Kaj Helin and Jaakko Karjalainen (VTT); Soyeb Aswat (MP); Mikhail Fominykh (EP) | Review of all code to be included in the first prototype
0.2 | 21.02.2017 | Carl Smith (RAV) | Created initial structure
0.3 | 21.02.2017 | Mikhail Fominykh (EP) | Quality review
0.4 | 22.02.2017 | Daniele Di Mitri and Bibeg Limbu (OUNL); Kaj Helin and Jaakko Karjalainen (VTT) | Recorder section; re-enactment section detailed draft
0.5 | 24.02.2017 | Fridolin Wild and Will Guest (OBU); Carl Smith (RAV) | New introduction
0.6 | 15.03.2017 | Will Guest and Fridolin Wild (OBU); Mikhail Fominykh (EP) | New structure and various updates
0.7 | 28.03.2017 | Kaj Helin and Jaakko Karjalainen (VTT) | Completed Section 3, WEKIT.one Re-enactment
0.8 | 30.03.2017 | Roland Klemke and Daniele Di Mitri (OUNL); Will Guest and Fridolin Wild (OBU); Soyeb Aswat (MP); Mikhail Fominykh (EP) | Completed sections: Sensor Processing Unit; Ghost Track / Sensor Annotation; data formats; conclusions and future directions
0.9 | 30.03.2017 | Carl Smith (RAV); Mikhail Fominykh (EP); Tre Azam (MP) | Final adjustments and polishing; executive summary; WEKIT Analytics Section 4 (Biofeedback)
1.0 | 31.03.2017 | Fridolin Wild (OBU); Paul Lefrere (CCA) | Final review

Disclaimer: All information included in this document is subject to change without notice. The Members of the WEKIT Consortium make no warranty of any kind with regard to this document, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The Members of the WEKIT Consortium shall not be held liable for errors contained herein or direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material.


First prototype

WP 2 | D2.4

Editors:

Carl Smith (RAV)

Fridolin Wild (OBU)

Authors:

Fridolin Wild (OBU), Roland Klemke (OUNL), Daniele Di Mitri (OUNL), Kaj Helin (VTT), Jaakko Karjalainen (VTT), Will Guest (OBU), Soyeb Aswat (MP), Tre Azam (MP), Marcus Specht (OUNL), Bibeg Limbu (OUNL), Mikhail Fominykh (EP), Puneet Sharma (UiT), Alla Vovk (OBU), Jazz Rasool (RAV)

Reviewers:

Mikhail Fominykh (EP), Paul Lefrere (CCA)

Deliverable number D2.4

Dissemination level Public

Version 1.1

Status Final

Date 31.03.2017

Due date 28.02.2017


Table of Contents

Revision History
Abbreviations
Executive summary
1. Introduction
   1.1. ARLEM as lingua franca
      1.1.1. Recorder: mapping the Class Model to ARLEM Workplace class
      1.1.2. Recorder: mapping Class Model to ARLEM Activity class
2. WEKIT.one Recorder
   2.1. Recorder
   2.2. HowTo: Ghost Track (Sensor Annotation) and Think-aloud (Combined Sensor Annotation)
      2.2.1. How to create your Ghost Track / Sensor Annotation
      2.2.2. How to record data using the Sensor Annotation
      2.2.3. How to replay data using the Sensor Annotation
      2.2.4. How to edit recorded data
   2.3. Sensor Processing Unit
3. WEKIT.one Re-enactment: preview
4. WEKIT.one Analytics
   4.1. xAPI Logging
   4.2. Biofeedback sensors
   4.3. Machine Learning
   4.4. Data, Recording, Playback, Real-time monitoring
   4.5. Discussion and Trials - Experimental and Flexible
5. Conclusions
References


Abbreviations

API Application Programming Interface

AR Augmented Reality

ARLEM Augmented Reality Learning Experience Model

EEG Electroencephalography

ERM Entity-Relationship Model

GUI Graphical User Interface

IoT Internet of Things

JSON JavaScript Object Notation

MQTT Message Queue Telemetry Transport

POI Point of Interest

sEMG Surface Electromyography

SPU Sensor Processing Unit

UI User Interface

UDP User Datagram Protocol

UWP Universal Windows Platform

WLAN Wireless Local Area Network

xAPI Experience API

xCAPI Experience Capturing API


Executive summary

The WEKIT Deliverable 2.4 assesses the completion of the first prototype of the WEKIT platform, including the results of technical testing in cycle 1 of development. D2.4 follows the "MVP" (minimum viable product) development pattern, also called "DEM: Demonstrator, pilot, prototype". The main design goals concerned both the hardware and software aspects of the MVP prototype. This report describes how those goals were met and then evaluated in trials. The integrated prototype described in this deliverable contains input from WP2, WP3, WP4, and WP5.

This first prototype has three main parts, corresponding to the three main functions planned for this stage: (1) WEKIT.one Recorder (function: captures the expert's performance), (2) WEKIT.one Re-enactment (function: augments the trainee's performance), and (3) WEKIT.one Analytics (function: allows the captured data to be reviewed and evaluated after the training has been completed).

The roadmap for developing future stages envisages enhancements to each function that meet the immediate needs of important target groups (e.g., early adopters of capacity-development innovations relevant to the industry 4.0 ecosystem, who are already collaborating with WEKIT partners). We have begun discussions of how the WEKIT platform can interoperate with systems already in use in advanced manufacturing. Looking some years ahead, we are tracking the work of companies trying to go even further (e.g., the US-based performance-enhancement company Neuralink, which is part of the Elon Musk stable of ‘Moonshot’ companies). Since such US work goes far beyond the industry 4.0 state of the art, our current MVP vision is limited to what can be scaled up in the EU and can have a high value to EU R&D on KETs (key enabling technologies).


1. Introduction

Objectives of WP2:

- Develop an integrated prototype of the WEKIT platform on the basis of cycle 1 input.
- Assess the completion of the first prototype of the WEKIT platform, including the results of technical testing in cycle 1.
- Develop an overarching integrated conceptual model and the corresponding data model specifications for representing activities, learning context and environment (aka 'workplace'), and potentially other data model components needed for AR-enhanced learning activities.
- Establish vendor-neutral interoperability between interchangeable components provided by different technology providers in order to support an immersive learning experience across the distributed systems.
- Enable the creation of repositories and online marketplaces for augmented-reality-enabled learning content.
- Support reuse and repurposing of existing learning content to cater to 'mixed' experiences combining real-world learner guidance with the consumption (or production) of traditional content such as instructional video material or learning apps and widgets.

The first version of the prototype hardware is split into two main parts. The first consists of the HoloLens head controller with the integrated Alex Posture tracker and MyndBand EEG. The second part is the WEKIT.one vest, which houses the devices making up the SPU, i.e., the micro computer, battery packs, Leap Motion, and the MYO.

ARLEM is the lingua franca (bridge) between the software components, providing an overarching content model that unifies the content-level aspects and terminology of the three phases foreseen: from recording, to re-enactment, to analytics. ARLEM is documented in D2.2 and was approved for formal balloting by the IEEE Standards Association working group P1589 on 21 February 2017, with a unanimous vote in favour of the specification.

Links to other deliverables (past and to come):

- This integrated prototype implements the functional and modular architecture (D2.1). It runs on the hardware and hardware architecture proposed in D3.2.
- Most notably, it integrates, through the ARLEM content model, the capturing prototype with its sensor fusion API (specified and described in D3.3) with the early work on the re-enactment system (scheduled for M30, with D4.1 reporting finally on the visualisation system and D4.3 reporting on the re-enactment system).
- The link with the analytics (scheduled for M30, as D4.4) is through the xAPI capturing described in ARLEM and in D3.3. This is closely linked with the development of the community platform.

The software architecture reported in D2.1 is a three-layered architecture:

- Presentation layer: the front-end and top-most level of the application, which encompasses the graphical user interfaces (GUIs) and the sensor components that interact with the user and the external environment.
- Service layer: the back-end and middle layer, which coordinates the Recorder and Re-enactment clients, the data collection and analysis, and the communication and transfer of these data across the platforms.
- Data layer: the bottom layer, where information is stored such that it can be retrieved, processed, and re-presented to the user.

In addition to the three layers, the architecture combines three main computing units:

Smart Glasses: The main wearable device through which the expert records learning experiences that the learner can later replay. The type of smart glasses chosen for the first prototype is the current version of the Microsoft HoloLens. The smart glasses run the two main applications developed for the prototype: the WEKIT.one Recorder Client, which is responsible for the sensor recording, and the WEKIT.one Re-enactment Client, which is responsible for the experience replay. In addition, the smart glasses implement a first phase of data processing, which prepares and routes the different xAPI calls.

Sensor Processing Unit (SPU): A portable computer that works as a hub for the third-party sensors, i.e. those sensors that are not embedded in the smart glasses and that are necessary for more fine-grained capturing of learning experiences. The SPU is responsible only for recording the third-party sensors and for some preliminary processing. In addition, it offers the necessary API interfaces to allow the smart glasses to retrieve and store sensor data.

Cloud Server: The cloud-based server is the place in which the recorded learning experiences are saved and processed for later re-enactment. The cloud-based solution allows for scalable, distributed data storage across a large number of compute nodes, as well as availability of the data to all connected and authorised devices.

1.1. ARLEM as lingua franca

The WEKIT.one Recorder client and the WEKIT.one Re-enactment client are aligned using the ARLEM standard (see D2.2). The recorder function SaveToARLEM() loops through all main entities (Scenarios, TaskStations, Annotations, and Users) in order to materialise the recorded content into the corresponding Workplace ML and Activity ML representations. The re-enactment client can then retrieve, parse, and play these in order to execute the learning experience.

1.1.1. Recorder: mapping the Class Model to ARLEM Workplace class

The workplace corresponds to the Scenario and its WorldAnchors.

Workplace.id = "Scenario01"
Workplace.name = "This is the Scenario name"
Workplace.things = WorldAnchors

Description: this corresponds to the WorldAnchors, i.e. physical objects that have a link into the virtual space. In the current state of the recorder there exists only one WorldAnchor, the zero position where the application is loaded, which creates an abstract thing called "default". Within each thing, several Points of Interest (POIs) are defined, corresponding to the TaskStation positions. This allows annotations to be placed relative to physical objects.

o Workplace.things.poi = TaskStation.position
o Workplace.persons = Users
o Workplace.sensors, devices, apps, detectables, predicates, warnings, hazards = not yet defined
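
To make the mapping concrete, the following is a minimal, hypothetical sketch of what an exported Workplace ML document could look like for a scenario with one TaskStation and one annotation attached to the default WorldAnchor. The element and attribute names simply follow the mapping above and are illustrative only; the normative vocabulary is defined in the ARLEM specification (D2.2), so the tags in the real export may differ.

    <!-- Illustrative sketch only; not the normative ARLEM Workplace ML vocabulary -->
    <workplace id="Scenario01" name="This is the Scenario name">
      <things>
        <!-- the single "default" WorldAnchor created at the application's zero position -->
        <thing id="default" name="default">
          <poi id="defaultTS1" x="0.4" y="0.0" z="1.2"/>        <!-- TaskStation.position -->
          <poi id="defaultTS1_ANN1" x="0.4" y="0.1" z="1.2"/>   <!-- annotation POI -->
        </thing>
      </things>
      <persons>
        <person id="expert01" name="Expert user"/>              <!-- Workplace.persons = Users -->
      </persons>
      <!-- sensors, devices, apps, detectables, predicates, warnings, hazards: not yet defined -->
    </workplace>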

Wearable Experience for Knowledge Intensive Training

WEKIT consortium Dissemination: Public Page 9/28

1.1.2. Recorder: mapping Class Model to ARLEM Activity class

The activity corresponds to the flow of TaskStations in the Scenario.

Activity.id = "scenario01"
Activity.name = "This is name of Scenario1"
Activity.language = "EN"
Activity.workplace = "http://this.is.me/my-workplace.xml"
Activity.location = thing in the workplace class
Activity.actions = TaskStations

o Activity.actions.id = "TS" + <taskstationId>
o Activity.actions.viewport, type, device, location, predicate = not defined
o Activity.actions.enters / Activity.actions.exit = Annotations

Description: the enter activations are equal to the exit activations and correspond to the annotations.

Activate.id = "ANN" + <annotationId>
Activate.type = type of annotation
Activate.predicate = "point"
Activate.poi = <WorldAnchorName> + "TS" + <taskstationId> + "_ANN" + <annotationId>
Activate.option = "down"
Activate.viewport, .url, .state, .text, .sensor, .key, .tangible, .warning = not specified

POI Ids:

For a TaskStation: POI-Id = <WorldAnchor> + "TS" + <taskstationId>
For an annotation: POI-Id = <WorldAnchor> + "TS" + <taskstationId> + "_Ann" + <annotationId>
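
Correspondingly, a minimal, hypothetical Activity ML sketch for the same scenario might look as follows. Again, the element names mirror the mapping above and are illustrative only; the normative structure is given in the ARLEM specification (D2.2).

    <!-- Illustrative sketch only; not the normative ARLEM Activity ML vocabulary -->
    <activity id="scenario01" name="This is name of Scenario1" language="EN"
              workplace="http://this.is.me/my-workplace.xml">
      <actions>
        <action id="TS1">
          <enter>
            <!-- enter activations equal the exit activations and correspond to the annotations -->
            <activate id="ANN1" type="audio" predicate="point"
                      poi="defaultTS1_ANN1" option="down"/>
          </enter>
          <exit>
            <activate id="ANN1" type="audio" predicate="point"
                      poi="defaultTS1_ANN1" option="down"/>
          </exit>
        </action>
      </actions>
    </activity>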


2. WEKIT.one Recorder

Figure 1. WEKIT.one recorder

2.1. Recorder

The WEKIT.one Recorder is the software responsible for unifying sensor data collection, synchronisation, processing, and storage.


Figure 2. WEKIT.one class diagram

Figure 2 shows the class model of the Recorder application. The main entities (classes) of the application are the Scenarios, TaskStations, Annotations, and Users.

Activities (Recorder class: Scenarios): The learning activities or experiences that are saved and re-enacted. The Scenarios are in a one-to-n relation with the TaskStations. It is important to logically organise the learning tasks into scenarios that better define and delimit a learning experience and correlate it with related learning tasks that can be fetched and downloaded for later purposes. As the execution order of tasks in a learning activity matters, the TaskOrder pairs each TaskStation with a Scenario and assigns an integer specifying the desired order of execution for the learner. The TaskOrder is the process model of the task: with this approach, the same TaskStation can be used in different learning Scenarios.

Action (Recorder class: TaskStations): The interactive objects that are placed to describe a particular learning task to be executed by the learner to accomplish part of the scenario. The TaskStation serves as a hub of annotations and is therefore in a one-to-n relation with Annotations. Two versions of TaskStations are part of the current prototype: physical-position-based TaskStations and marker-based TaskStations.

1. The physical-position-based TaskStations have a position as a three-dimensional vector (x, y, z) which is relative to a World Anchor. The World Anchor works as a link between objects in the virtual world and the physical world and is connected to the automatically created depth model of the physical environment. A default World Anchor is specified when a new empty scenario is created; this default WorldAnchor is defined at the starting position of the HoloLens application, i.e. x, y, z = 0. Every TaskStation defines a new WorldAnchor, which has a position relative to this.

2. The marker-based TaskStations are based on a set of predefined marker images, which can be placed anywhere in the working environment. As soon as a marker image is optically detected by the onboard camera of the HoloLens, a TaskStation is initialised at that specific location, which can then be handled in the same way as a physical-position-based TaskStation (i.e. annotated), with the only difference that it cannot be moved or removed from within the application (the marker image needs to be (re-)moved to (re-)move the TaskStation).

Augmentation (Recorder class: Annotations): The types of augmentations that can be added to demonstrate the execution of the tasks through multiple media. The annotations can be of two types: Sensor and Multimedia annotations.

1. The Sensor annotations (with and without audio) are tacit recordings of the activities: a way to silently keep track of the expert's behaviour, including movements, gestures, and physiological responses (EEG and heart rate variability). These sensor recordings are a way to make the tacit behaviour more explicit and therefore explainable to the learner. The initial type of Sensor annotation is also referred to as a GhostTrack, which is semi-opaque (hence Ghost) and represents the physical path (hence Track) of the recorded person through space, including location, orientation, gaze, and hand movements. In combination with synchronised audio recording, sensor annotations also help to record think-aloud activities.

2. The Multimedia annotations are instead planned augmentations made by the expert. These can be of different types:

2.1. Audio annotation: the expert can "speak over" a certain task to create an audio "post-it", which is a straightforward way to convey certain messages to the learner, such as "pay attention to this" or "this should not be like that".

2.2. Photo annotation: the expert can take a snapshot through the HoloLens camera, for example to show the expected result of certain tasks ("it should look like this").

2.3. Text annotation: for short messages, it is also possible to add a text annotation; even though typing with smart glasses is not very comfortable, the advantage is the readability of the message for the learner.

2.4. External 3D objects (initially implemented): the expert should also be able to import existing content from a library or from a URL; this can be pictures, video (e.g. YouTube tutorials), 3D objects, or animations.

The different annotation types (and prospectively further types, for example, side-annotations, not yet implemented, that show views and tracks from arbitrary perspectives and positions) can be combined to completely document activities associated with the TaskStation.

Persons / Roles (Recorder class: Users): this entity represents the user of the application, who can be of two types: the expert user, who can record and edit learning experiences, and the learner user, who can only replay what has been recorded.

Data Formats: The recorder defines a JSON-based format for storing scenarios, task stations, and annotations. All annotations are stored in additional files using binary formats for audio data and images as well as XML formats for recorded sensor data. The Recorder allows for the export of data into the ARLEM format for exchange.
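
As an illustration, a stored scenario could look roughly like the following JSON sketch. The field names are hypothetical and are chosen to mirror the class model above (Scenario, TaskOrder, TaskStation, Annotation, User); the actual keys used by the Recorder may differ, and the referenced media and sensor files are the separate binary/XML files mentioned above.

    {
      "id": "Scenario01",
      "name": "This is the Scenario name",
      "taskOrder": [ { "taskStation": "TS1", "order": 1 } ],
      "taskStations": [
        {
          "id": "TS1",
          "type": "position",
          "worldAnchor": "default",
          "position": { "x": 0.4, "y": 0.0, "z": 1.2 },
          "annotations": [
            { "id": "ANN1", "type": "sensor", "dataFile": "TS1_ANN1_sensors.xml" },
            { "id": "ANN2", "type": "audio", "dataFile": "TS1_ANN2_audio.wav" }
          ]
        }
      ],
      "users": [ { "id": "expert01", "role": "expert" } ]
    }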


The following figures illustrate a walkthrough of the recorder application, showing how to set up an initial scenario. The application is started with a new empty scenario. Figure 3 shows how to create initial task stations, to which annotations can be added (Figure 4). When a sensor annotation is added (Fig. 5), sensor recordings can be activated for this task station (Fig. 6). The recorded data can be visualised for rehearsal, including physical position, gaze direction, and hand positions (Fig. 7). The scenario can be extended with further task stations (Fig. 8), annotations (Fig. 9), and sensor recordings (Fig. 10).

Figure 3. Defining a TaskStation in Space. In this case, the object is associated with a physical location

Figure 4. Selection of different annotations that can be added to a TaskStation


Figure 5. A task station with an added AudioAnnotation and a SensorRecording

Figure 6. Recording functionality of the SensorRecording


Figure 7. Visualisation of a ghost track recording using physical position, orientation, and gaze direction

Figure 8. Adding another TaskStation to the Scenario


Figure 9. Working with annotations (positioning in Space) and visualization of tracked hand position.

Figure 10. Replaying a ghost track for quality check.

2.2. HowTo: Ghost Track (Sensor Annotation) and Think-aloud (Combined Sensor Annotation)

The sensor-based ghost track recordings represent an extension to the default ARLEM model, utilising sensor data recordings as an additional supporting means. Consequently, we describe the major steps in using these recordings as part of a WEKIT Recorder session.


Note: This ‘how to’ also applies to combined sensor annotations that include audio. Select a combined sensor annotation instead of a sensor annotation in step 2 of the first part.

2.2.1. How to create your Ghost Track / Sensor Annotation

1. Tap in an empty spot around you and choose "Add TaskStation" in the following menu.
2. Tap the just-created TaskStation (a grey sphere) and choose "Add Sensor Annotation".
3. Tap the TaskStation and choose "RemoveAnnotation" if you want to remove the TaskStation again.

2.2.2. How to record data using the Sensor Annotation

1. Tap the created Sensor Annotation (a grey cylinder) and choose "Start Recording" to start the recording of the data.
2. Tap the annotation and choose "Stop Recording" to stop the recording of data (all other options should automatically stop an active recording).
3. Tap the annotation and choose "Start Recording" again if you want to add data to the already recorded data.
4. Tap the annotation and choose "Wipe Recording" if you want to delete everything you recorded so far; choose "Start Recording" again afterwards to record completely new data.
5. Tap the annotation and choose "Save" to permanently store the currently recorded data locally or to a server.

2.2.3. How to replay data using the Sensor Annotation

1. Tap the annotation and choose "Start Recording" to record data to replay; or choose "Load" to load the file associated with this annotation (if one already exists).
2. Tap the annotation and choose "Play" to replay a ghost track of the recorded or loaded data.
3. Tap the annotation and choose "Pause" to pause the replay; use "Pause" again to resume, or "Play" to start playing from the start again.

2.2.4. How to edit recorded data

1. Tap the annotation and choose "Start Recording" to record data to replay; or choose "Load" to load the file associated with this annotation (if one already exists).
2. Tap the annotation and choose "Play" to replay a ghost track of the recorded or loaded data.
3. If the replay is paused, you can tap the record player model to open a new menu.
4. Tap the player and choose "Add Start Point" to make the current frame the new starting point of the replay (all previous frames will be cut).
5. Tap the player and choose "Add End Point" to make the current frame the new ending point of the replay (all following frames will be cut).
6. Tap the player and choose "Reduce To Selected Points" to apply the changes and delete all frames out of range. Note: this will not permanently save the changes.
7. Tap the annotation and choose "Save" to overwrite previously saved versions of the recording.

2.3. Sensor Processing Unit

The Sensor Processing Unit (SPU) is the software that provides a standardised method for the other parts of the WEKIT software to communicate with the hardware located on the SPU computer. It effectively hides the details of the hardware and provides the data in an easy-to-use format (see the general WEKIT.one architecture overview diagram, Figure 11).

Figure 11. SPU

The SPU is written in C++ and targeted at the Windows platform, as this is the most common and best-supported combination of drivers available for the Windows-influenced hardware devices that WEKIT is currently using. The SPU software physically runs on the SPU microcomputer, to which the third-party hardware devices are connected. Communication with the other parts of the WEKIT system is via a wireless network.

UDP (User Datagram Protocol) is used to register a listener and to send data to other parts of the WEKIT system. This allows the other software components to be located on other machines and to communicate only via the network. The main recording software runs on the HoloLens and is able to connect to, and receive data from, the SPU.

A listening socket is registered with the SPU by sending the string CONNECT to the appropriate port. The SPU will then respond with data at the specific interval for that device. Sending the string DISCONNECT will stop data being sent to that port.

EEG data is provided in JSON format once per second.

LeapMotion data is sent every 200 milliseconds in the LeapMotion serialised format.

Myo data is sent every 200 milliseconds in JSON format.
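
As an illustration of the protocol described above, the sketch below is a minimal desktop test client written against POSIX sockets (the on-device client must use UWP networking APIs instead, and the SPU itself targets Windows). The SPU address and the device port used here are placeholders, not the actual configuration.

    // Minimal desktop test client for the SPU's UDP protocol (illustrative sketch).
    // The SPU address and port below are placeholders.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <iostream>
    #include <string>

    int main() {
        const char* spu_address = "192.168.1.50";  // placeholder SPU address
        const int spu_port = 9000;                 // placeholder device-specific port

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { std::perror("socket"); return 1; }

        sockaddr_in spu{};
        spu.sin_family = AF_INET;
        spu.sin_port = htons(spu_port);
        inet_pton(AF_INET, spu_address, &spu.sin_addr);

        // Register this socket as a listener: the SPU then streams data back
        // to the source address/port at the device-specific interval.
        const std::string connect_msg = "CONNECT";
        sendto(sock, connect_msg.c_str(), connect_msg.size(), 0,
               reinterpret_cast<sockaddr*>(&spu), sizeof(spu));

        char buffer[4096];
        for (int i = 0; i < 10; ++i) {             // read a few datagrams, then stop
            ssize_t received = recvfrom(sock, buffer, sizeof(buffer) - 1, 0, nullptr, nullptr);
            if (received <= 0) break;
            buffer[received] = '\0';               // EEG and Myo payloads are JSON text
            std::cout << buffer << std::endl;
        }

        // Tell the SPU to stop streaming to this port.
        const std::string disconnect_msg = "DISCONNECT";
        sendto(sock, disconnect_msg.c_str(), disconnect_msg.size(), 0,
               reinterpret_cast<sockaddr*>(&spu), sizeof(spu));
        close(sock);
        return 0;
    }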

This is the second version of the SPU software. The first version was written using the Thrift library to handle communication between the SPU and its clients. Unfortunately, it was not possible to get Thrift working on the HoloLens, so the UDP server solution was proposed and developed.


Currently, we are having problems implementing a UDP client on the HoloLens, as it requires all binaries to follow Universal Windows Platform (UWP) requirements; these problems are expected to be resolved prior to the first trial delivery.

3. WEKIT.one Re-enactment: preview

WEKIT.one Re-enactment is scheduled for M30 in WP4. This section previews the initial version of the AR visualisation, which follows the main principles of the ARLEM standard. The initial version will be made available for the WP6 trials if needed.

Figure 12. WEKIT.one Re-enactments system components

Figure 12 illustrates the main system components of the WEKIT.one Re-enactment system, which has been developed in Unity3D. The whole system is configured around the Activity JSON and Workplace JSON files. The Workplace JSON describes workplace-related information such as points of interest, sensors, etc.; it is parsed by the Workplace manager and the information is transferred to the data layer. The Activity JSON describes all action steps and what content should be active in each of these steps (see Figure 13); it is parsed by the Activity manager and the information is transferred to the AR layer via local storage (a sketch of such a file is given after the list below). The current version can annotate:

- Warning symbols
  o general, in the UI (Figure 13)
  o location-based (Figure 14)
- Symbols (Figure 14)
- 3D models (Figure 15)
- 3D animations (Figure 15)
- Video annotations
- Audio annotations
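
A minimal, hypothetical sketch of an Activity JSON action step consumed by the Re-enactment client is shown below. The key names are illustrative (derived from the ARLEM mapping in Section 1.1.2 and the annotation types listed above) and may differ from the actual files.

    {
      "id": "scenario01",
      "language": "EN",
      "workplace": "workplace.json",
      "actions": [
        {
          "id": "TS1",
          "enter": {
            "activates": [
              { "type": "warning", "poi": "defaultTS1", "text": "Wear protective gloves" },
              { "type": "model3d", "poi": "defaultTS1", "url": "models/valve.fbx", "animation": "open" },
              { "type": "audio", "poi": "defaultTS1_ANN1", "url": "audio/step1.wav" }
            ]
          },
          "exit": { "deactivates": [] }
        }
      ]
    }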


Figure 13. ARLEM based user interface located in 3D space including general warning symbols

Figure 14. Activity JSON based warning signs and activity symbols


Figure 15. Activity JSON based 3D model with animation

The user can interact with the WEKIT.one Re-enactment system through a multi-modal user interface. The following modalities can be used simultaneously:

- Gestures, e.g. performing the "Click" gesture to go to the next work step.
- Voice commands, e.g. saying "Next" to go to the next work step, or "Show status" / "Hide status".
- The physical HoloLens click button, e.g. "Click" to go to the next work step.
- Physical devices with an IoT interface, e.g. putting a switch into "Stand-by" mode enables an IoT constraint in the Activity JSON.

Figure 16. IoT test system with MQTT standard

The main objective of the IoT demo/test box is to test and demonstrate IoT features in augmented reality. Figure 16 illustrates the main principles of the IoT demo system. The IoT box includes a BeagleBone microcomputer that concurrently operates the MQTT server and the WLAN router. The WEKIT.one Re-enactment system connects via the MQTT IoT standard and renders AR visualisations based on the IoT data, the Activity JSON, and the Workplace JSON (see Figure 17).
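
For illustration, a minimal MQTT subscriber of the kind such a setup requires could be built with the open-source libmosquitto client, as sketched below. The broker address, port, and topic are placeholders, not the actual configuration of the BeagleBone box, and this is not the code of the Unity-based Re-enactment client itself.

    // Sketch of an MQTT subscriber for the IoT demo box (libmosquitto, illustrative only).
    #include <mosquitto.h>
    #include <cstdio>

    // Called whenever a message arrives on a subscribed topic; the re-enactment client
    // would evaluate the payload against the IoT constraints in the Activity JSON.
    static void on_message(struct mosquitto*, void*, const struct mosquitto_message* msg) {
        std::printf("topic=%s payload=%.*s\n", msg->topic, msg->payloadlen,
                    static_cast<const char*>(msg->payload));
    }

    int main() {
        mosquitto_lib_init();
        struct mosquitto* client = mosquitto_new("wekit-reenactment-demo", true, nullptr);
        mosquitto_message_callback_set(client, on_message);

        // Placeholder broker address/port of the IoT box on the WLAN.
        if (mosquitto_connect(client, "192.168.8.1", 1883, 60) != MOSQ_ERR_SUCCESS) {
            std::printf("could not connect to broker\n");
            return 1;
        }
        mosquitto_subscribe(client, nullptr, "wekit/iotbox/#", 0);  // placeholder topic
        mosquitto_loop_forever(client, -1, 1);                      // blocks, dispatching callbacks

        mosquitto_destroy(client);
        mosquitto_lib_cleanup();
        return 0;
    }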


Figure 17. Example of IoT systems AR-visualization with HoloLens

The initial version of WEKIT.one Re-enactment will be updated based on user feedback from the WP6 trials.

4. WEKIT.one Analytics

The WEKIT.one analytics are not yet implemented in this phase of the project, as this only becomes possible once the SPU and the re-enactment client are finalised. This section therefore describes working principles and future plans.

Consider this: A number of augmentations to the learner’s experience of the task can be simulated with the WEKIT.one platform. The effectiveness of those augmentations can be deduced through the collection and interpretation of performance data, followed by modifications of augmentations to achieve more of the goals, more efficiently and with lower effort (i.e., acting on the available feedback data, aka “action analytics”). Many of these augmentations provide affordances, allowing the learner to transform the structure of a task by making it more cognitively congenial (Kirsh 1996), providing an opportunity for cognitive offloading (Risko and Gilbert 2016), or providing a better conceptual model of the system (Norman 2013).

The desirable learning outcomes are contextual and are defined when dealing with each use case separately (WP6).


Figure 18. Cognitive Apprenticeship Model’s stages of learning and their relevance to the models used in this project

In order to connect benefits to procedural performance with guidance on which type of augmentation fits a given situation, steps will be taken to ensure the quality of both the inputs (ARLEM content) and the outputs (metrics for assessing improvement in performance and/or procedural learning).

The purpose of the analytical model, then, is to determine which improvements in the novice's performance can be attributed to these affordances, though clearly what constitutes "performance" will vary: from the speed at which an activity can be completed or learnt to an error-free level, to the degree of precision achieved in a particular action step.

The analytical framework will compare expert practice to that of a novice, whilst also being capable of providing whole-activity summaries, facilitating articulation and reflection (Collins et al. 1989). Quantitative results from activity capture and re-enactment (delivered through an xAPI) will go hand-in-hand with qualitative descriptions of the learner’s participation within their community of practice (Lave and Wenger 1991).

4.1. xAPI Logging

The xAPI, described in deliverable 3.3, allows the capture of aggregate performance data that, following the ARLEM content model, provides a description of the activity’s environment as well as data from the above-mentioned sensors, in relation to each action step. The format allows for quick and easy logging of user activity. For example, a learning experience (activity) could contain an action step ‘push button’, which can, once performed, be automatically logged to the xAPI endpoint to feed into performance analytics of the guided real-world interaction.
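
A hypothetical xAPI statement for the 'push button' example could look like the sketch below. The verb comes from the standard ADL vocabulary; the activity URI, actor, and result fields are illustrative placeholders, and the exact verbs and extensions used by WEKIT are specified in D3.3.

    {
      "actor": { "objectType": "Agent", "name": "Trainee 01", "mbox": "mailto:trainee01@example.org" },
      "verb": { "id": "http://adlnet.gov/expapi/verbs/completed", "display": { "en-US": "completed" } },
      "object": {
        "objectType": "Activity",
        "id": "http://wekit.eu/activities/scenario01/TS1/push-button",
        "definition": { "name": { "en-US": "Push button" } }
      },
      "result": { "success": true, "duration": "PT12S" },
      "timestamp": "2017-03-31T10:15:00Z"
    }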

4.2. Biofeedback sensors

A key differentiator of the WEKIT platform and project is the integration of biofeedback sensors into the AR/VR technology. The thinking behind the sensors is two-fold. One aspect is functional: as in the case of the MYO, Leap Motion, and HoloLens, these devices are key to delivering the basic AR/VR experience, with the relevant controllers and data required for the capture and playback scenarios. The other factor is more human, relating to potentially improving performance by enhancing mental awareness and attentional control, managing stress, or simply monitoring levels of fatigue and warning when a certain posture is held for a dangerously long period of time. Human factors allow the WEKIT project to combine skills development with mental and emotional development, which impacts not just work and skills efficiency but also the efficiency of psychophysiological resources, potentially helping to manage workload and reduce workplace injuries.

The first version of the prototype includes the majority of the selected biofeedback sensors and devices as proposed in D3.2. In this first version, the hardware is split into two separate parts: the HoloLens head controller with the integrated Alex Posture tracker and MyndBand EEG, and the WEKIT.one vest as the second part, which houses the devices making up the SPU (the micro computer, battery packs, Leap Motion, and the MYO). The HRV sensor will not be available for the first round of trials due to interoperability difficulties with the selected device.

4.3. Machine Learning

Sensors record raw behavioural data of the learning experience. These data require several subsequent levels of interpretation (see Figure 19). For example, the Myo armband captures continuous movements of one's arm through surface electromyographic (sEMG) readings (Castellini et al. 2009). These chunks of data can be interpreted into meaningful gestures, e.g. 'rotating a knob' or 'pulling a lever'. Such gestures can later be interpreted into higher-level action steps of the learning activity. The action steps should be evaluated to determine whether the action was accomplished correctly or not, in order to provide consequent formative feedback to the learner.

Figure 19. Raw sensor data should go through a series of interpretations.

The interpretation itself can be seen as matching new data against frequent historical patterns and classifying them correctly. This process does not have to be performed exclusively by a human; it can be automated using machine learning techniques. Generally, machine-driven techniques are more effective at recognising low-level patterns, whilst they are less effective at making sense of higher-level constructs. For this reason, in most applications, especially those that aim to describe a human process like learning, a human evaluator is needed to validate the proposed interpretations. Analytics dashboards, building on information visualisation theory, can be employed to facilitate this task for the human evaluator. Ideally, with the experience encoded into processable data, a machine could be trained to perform all the levels of interpretation; in practice, however, this is not yet reliable, which is why human/expert evaluation is needed in the process. At which point the human steps into the interpretation process is, we believe, a matter of technological readiness.
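
The sketch below illustrates the chain of interpretation levels in code: a window of raw sEMG samples is classified into a gesture, the gesture is mapped onto an action step, and the step is checked against the expected step so that formative feedback could be produced. The threshold rule, labels, and mappings are made-up placeholders standing in for trained classifiers (and for the human validation discussed above).

    // Illustrative-only sketch of the interpretation levels:
    // raw sEMG samples -> gesture -> action step -> correctness check.
    #include <iostream>
    #include <numeric>
    #include <string>
    #include <vector>

    // Level 1: classify a window of raw sEMG samples into a gesture label.
    std::string classify_gesture(const std::vector<double>& semg_window) {
        double mean_activation =
            std::accumulate(semg_window.begin(), semg_window.end(), 0.0) / semg_window.size();
        // Placeholder threshold rule standing in for a trained classifier.
        return mean_activation > 0.5 ? "rotate_knob" : "pull_lever";
    }

    // Level 2: map a recognised gesture onto an action step of the learning activity.
    std::string to_action_step(const std::string& gesture) {
        if (gesture == "rotate_knob") return "TS1: open the valve";
        if (gesture == "pull_lever")  return "TS2: release the brake";
        return "unknown";
    }

    // Level 3: check the performed step against the expected step (expert model)
    // so that formative feedback can be given; a human evaluator would validate
    // borderline interpretations.
    bool is_correct(const std::string& performed, const std::string& expected) {
        return performed == expected;
    }

    int main() {
        std::vector<double> window = {0.62, 0.71, 0.58, 0.66};  // fake sEMG window
        std::string gesture = classify_gesture(window);
        std::string step = to_action_step(gesture);
        std::cout << "gesture: " << gesture << ", step: " << step
                  << ", correct: " << std::boolalpha
                  << is_correct(step, "TS1: open the valve") << std::endl;
        return 0;
    }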

4.4. Data, Recording, Playback, Real-time monitoring

Each sensor has the potential to provide a large amount of data, but it is not always useful or necessary to collect or visualise all of it; during the first round of trials, however, it is advisable to record as many of the different data sets as possible from the various sensors.

For each of the sensors there is processed and/or raw data. For the EEG and the psychological responses, the raw data is not data-heavy, so it can be recorded and saved in real time for the entirety of the tasks.


Certain processed data, such as real-time attention, calm, zone, and brainwave frequencies, should be recorded and visualised for internal use. This data is valuable during all phases of recording, playback, and analysis and, when timestamped to tasks and individuals, provides a powerful dataset for understanding the learning curve and the strengths and weaknesses of the users.

Apart from providing functional controls, the physiological sensors also serve as a feedback loop that helps develop a greater understanding of how an expert operates: their movement, the placement of their body in relation to the task and object, their posture, and so on all potentially feed into models of excellence.

One dataset that can also become a control is EMG, which can be derived from the EEG device; this provides another potential method of clicking or making choices during the task, by blinking or raising the eyebrows to send a command.

4.5. Discussion and Trials - Experimental and Flexible

For the first phase of trials, it will be essential to capture data from all available sensors to allow us to make an informed decision on how best to refine the capture, playback, and requirements of the system. In some past experiments, it has been what we were not looking for that was of most interest. As a result, the WEKIT.one trials need to capture as much information as possible during the recording phase with the expert and during playback with the trainee, so that there is enough data to search for patterns and algorithms that can relate back to specific positive and negative states and outcomes.

It would also be useful to keep a video log of the user's fatigue, stress, attention, wellbeing, and other psychophysiological factors; this would allow us to cross-reference the data sets and potentially discover new ways of using the biofeedback system for real-time alerts and warnings, whether for posture, attention, stress, heart rate, or some other factor which ultimately affects the quality of the work being carried out.

With the EEG, we should not be concerned with displaying or visualising any information at the moment, because we do not want to add a level of distraction for the user. However, having peripheral access to the data in the background allows us not only to capture the user's psychophysiological responses to the work, but also to use the feedback to further refine and improve how the WEKIT.one platform works and displays information, so that it is available but neither distracting nor overwhelming for the user.

To summarise, for the first phase of industrial testing the psychological biofeedback should be recorded and available for internal analysis, but it does not need to be made available or visualised for the end user. Testing of the WEKIT.one platform should also try to capitalise on the biofeedback system to evaluate the user experience of the other aspects and features of the system.

Physiological data from the MYO, Leap Motion, and posture trackers are more fundamental to system operation, interface, control, and integration. The data from these sensors will not just be recorded but will be part of the overall system; the sensors must therefore be present, recording, and active during playback.

There is also scope in the future to create personalised biofeedback training programmes based on an individual's weaknesses as determined over a period of time. This would build on models of neurofeedback training used in a number of sports to help highly skilled workers manage anxiety and reduce the learning curve by ensuring the mind is in the best state for learning and retention. This is a long-term plan.


5. Conclusions

The WEKIT Deliverable 2.4 assesses the completion of the first prototype of the WEKIT platform, including the results of technical testing in cycle 1 of development. The integrated prototype described in this deliverable contains input from WP2, WP3, WP4, and WP5.

The next step in the development of the prototype will be evaluation in the trials by our industrial partners (Lufttransport, Ebit, and ALTEC), using the scenarios described in D6.1, D6.2, and D6.3. All components and functions of the prototype will be evaluated. Technical partners will be at the location of each trial to help set up the hardware, simulate functionality that is missing in cycle 1, and fix any possible errors. In each of the trials, we will go through the following phases:

1. Technical setup of the location and technical testing at the workplace of the training scenario. In cycle 1 of the development, we focused on the recorder functionality at this stage, as not all of the re-enactment functionalities are quite ready. For the purpose of the use evaluation, additional setup work will help simulate the missing components that are not yet possible in the recorder (e.g., 3D model upload and placement).
2. Evaluate the recorder with participants performing the activities defined in the training scenario as experts, employing real experts and some additional participants; capture quantitative data, focusing on performance data, but also collecting physiological data (e.g., EEG).
3. Prepare the scenario data for re-enactment.
4. Evaluate the re-enactment component with participants performing activities as trainees, employing both experts and non-experts.
5. Collect qualitative feedback from all participants: conduct exploratory interviews on what sorts of analytics would be useful and possible, what additional or modified use cases are of interest, and how participants feel about the recording and re-enactment.

It is clear that any software tested only represents a snapshot of the functionality packaged into the release version used in the trials, while work on the components continues in the background. For further advancement of the software components, we currently see the following particular future challenges:

- Separating between the environment scans (room model) and an abstract space is important in order to allow learning experiences recorded in one place (e.g. the hangar of AW) to be re-enacted in another (e.g. the hangar of Lufttransport).
- A similar but slightly different scenario is: same room, same organisation, same person, but at a different time (with, e.g., the position of the plane in the hangar changing), as needed to, e.g., bring oneself up to speed again.
- Relative addressing: more work is needed on interfacing the (Vuforia) computer vision functionality using fiducial markers and image targets with the world anchors from the depth mapping of the environment.
- Performance of sensor data capturing: additional sensors will increase load, and it remains to be seen whether the chosen communication protocols and architecture hold.
- We do not yet know how we can replay captured data from some of the sensors. This is especially the case with psycho-physiological data about the user. We know how to replay gaze and body direction, but what about EEG data or HRV? How do we best map recorded sensor data to other senses / visualisations / augmentations? For example, body posture deviations can be highlighted with colours, but they can also be signalled with vibration on the affected limb, or with soundscapes. Similarly, what the expert pays attention to could be signified visually or with sound.
- Graceful degradation will be needed to allow for scaled-down versions of the hardware: not necessarily every sensor used for recording is needed in the re-enactment. Moreover, allowing for downgraded versions, e.g. learners using a mobile phone instead of smart glasses (or using smart glasses, but without EEG), would certainly increase the potential audience.
- Visualisation: we potentially have a lot of data, but which of those data are relevant to the learner (and when)? It is also still unclear how recorded sensor data can be coded into visual variables.
- Scaffolding needs fading: to what extent (and how) do we allow learners to switch off the guidance (or parts of it), so that they do not get distracted (or even annoyed) when becoming increasingly competent and confident?
- Data formats for sensor data: is storage and exchange efficient? How can it be optimised?
- For the analytics: performance metrics and analytics reports need additional work. What about task correctness, time performance, error rate, time to learn, retention over time, and user satisfaction?
- Scalability: on the one hand, the cloud back-end is still evolving (adding, e.g., authentication and selection mechanisms for learning scenarios and LEMs, and adding learning profile data); these are essential functionalities for scaling out. On the other hand, we have not yet collected experience on how many concurrent users can be served and what data load emerges in practice.
- Sensor fusion: the quality of sensor measurements can be increased if data from several sensors are merged (e.g. HoloLens hand tracking + MYO; HRV and EEG stress level). This is not trivial.
- Collaborative experiences: so far, the experience and software support have focused on single-user experiences, but collaborative and team experiences are commonplace in learning and training.
- Personalisation (e.g. MYO, EEG), also based on the learner profile: machine learning could be used over personal data, e.g. for action recognition on MYO data or attention-level detection.

References

Castellini, C., Gruppioni, E., Davalli, A. and Sandini, G. (2009). "Fine detection of grasp force and posture by amputees via surface electromyography." Journal of Physiology-Paris 103(3–5): 255-262.

Collins, A., Brown, J. S. and Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. Knowing, learning, and instruction: Essays in honor of Robert Glaser. L. B. Resnick. Hillsdale, New Jersey, Lawrence Erlbaum Associates, Inc.: 453–494.

Kirsh, D. (1996). "Adapting the environment instead of oneself." Adaptive Behavior 4(3-4): 415-452.

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge, UK, Cambridge University Press.

Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic Books.

Risko, E. F. and Gilbert, S. J. (2016). "Cognitive Offloading." Trends in Cognitive Sciences 20(9): 676-688.


WEKIT project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 687669. http://wekit.eu/