Multimedia Lesson 2


Presented to my #mume13 students on 4 October 2013 in Leuven, B.


#mume13 Lesson 2

Erik Duval
Dept. Computerwetenschappen (Computer Science)

http://www.cs.kuleuven.ac.be/~erikd/

http://ariadne.cs.kuleuven.be/wiki/index.php/MM-Course1314

new students? new groups?


1. recap
2. data
3. explore
4. share
5. wrap-up


assignment clear?


lessons learned? what was hard/easy?

what did you (dis)like about d3.js? no interactivity?
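For the interactivity question above, here is a minimal, hypothetical sketch of one way to add hover interactivity to a d3.js chart. It assumes an existing SVG with circles bound to data objects that have `label` and `value` fields (illustrative names, not from the course code), and it uses the modern d3 (v6+) event signature; the d3 v3 API current in 2013 passed the datum as the handler's first argument instead.

```typescript
// Minimal sketch (illustrative, not from the course materials):
// a tooltip that appears when hovering circles in an existing d3 chart.
import * as d3 from "d3";

interface Datum {
  label: string;
  value: number;
}

// A simple absolutely-positioned tooltip div.
const tooltip = d3
  .select("body")
  .append("div")
  .style("position", "absolute")
  .style("visibility", "hidden")
  .style("background", "#fff")
  .style("border", "1px solid #999")
  .style("padding", "4px 6px");

d3.selectAll<SVGCircleElement, Datum>("circle")
  .on("mouseover", (_event, d) => {
    // Show the datum bound to the hovered circle.
    tooltip.style("visibility", "visible").text(`${d.label}: ${d.value}`);
  })
  .on("mousemove", (event: MouseEvent) => {
    // Keep the tooltip near the cursor.
    tooltip
      .style("top", `${event.pageY + 10}px`)
      .style("left", `${event.pageX + 10}px`);
  })
  .on("mouseout", () => tooltip.style("visibility", "hidden"));
```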

comments on blogs?


can be beautiful too..

http://hint.fm/wind/

http://vimeo.com/75043122

can be beautiful too..


1. recap
2. data
3. explore
4. share
5. wrap-up


data structure?


- graphs, networks
- hierarchies, trees
- text
- time series
- geo
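As a small, hypothetical illustration (all names and values made up), the same kind of data can be shaped to match each of these structures; d3 layouts typically expect graphs as flat node/link arrays, hierarchies as nested children, time series as ordered value pairs, and geo data as GeoJSON-style features.

```typescript
// Illustrative shapes only; the names and numbers are made up.

// Graph / network: flat node and link lists (what force-directed layouts expect).
const graph = {
  nodes: [{ id: "A" }, { id: "B" }, { id: "C" }],
  links: [
    { source: "A", target: "B" },
    { source: "B", target: "C" },
  ],
};

// Hierarchy / tree: nested children (what tree, treemap and pack layouts expect).
const tree = {
  name: "root",
  children: [
    { name: "branch 1", children: [{ name: "leaf a" }, { name: "leaf b" }] },
    { name: "branch 2", children: [{ name: "leaf c" }] },
  ],
};

// Time series: ordered (timestamp, value) pairs.
const series = [
  { date: "2013-10-04", value: 12 },
  { date: "2013-10-11", value: 17 },
];

// Geo: features with [longitude, latitude] coordinates (GeoJSON-style).
const geo = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      geometry: { type: "Point", coordinates: [4.7, 50.88] }, // roughly Leuven
      properties: { name: "Leuven" },
    },
  ],
};

console.log(graph, tree, series, geo);
```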


when: temporal
where: geospatial
what: topical
with: trees, networks

http://ivmooc.cns.iu.edu


http://www.wikidata.org

:-) http://data.gov (-:

http://data.gov.be

http://www.opendataforum.be

http://www.openstreetmap.org/

http://vimeo.com/11526474

http://musicbrainz.org

http://www.flickr.com

http://www.freebase.com/

http://linkeddata.org
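As a hedged sketch of how one of these open data sources can be queried programmatically, the snippet below pulls a few rows of JSON from the public Wikidata SPARQL endpoint so they could be bound in a d3 visualization. The specific query, entity identifiers, and field names are only an example; the other sources listed above (data.gov, OpenStreetMap, MusicBrainz, Flickr, Freebase) each have their own APIs.

```typescript
// Sketch: fetching open data as JSON so it can be bound in a d3 visualization.
// Uses the public Wikidata SPARQL endpoint; the query itself is just an example.
const endpoint = "https://query.wikidata.org/sparql";
const query = `
  SELECT ?countryLabel ?population WHERE {
    ?country wdt:P31 wd:Q3624078 ;    # instance of: sovereign state
             wdt:P1082 ?population .  # population
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  }
  LIMIT 10`;

async function fetchOpenData(): Promise<{ country: string; population: number }[]> {
  const url = `${endpoint}?query=${encodeURIComponent(query)}&format=json`;
  const response = await fetch(url, {
    headers: { Accept: "application/sparql-results+json" },
  });
  const json = await response.json();
  // SPARQL JSON results: one "binding" object per result row.
  return json.results.bindings.map((b: any) => ({
    country: b.countryLabel.value,
    population: Number(b.population.value),
  }));
}

fetchOpenData().then((rows) => console.log(rows));
```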


J. Bollen, H. Van de Sompel, A. Hagberg, L. Bettencourt, R. Chute, M. A. Rodriguez, and L. Balakireva. Clickstream data yields high-resolution maps of science. PloS one, 4(3):e4803, Mar. 2009.

[...] are just by visually looking for the largest number of connected nodes. These larger clusters can be a first indication of where high-profile authors are located. However, in this state, neither the names of the authors nor the titles of the papers are visible yet.

When the user wants to look at more detail, he can zoom in to a specific part of the publication space. This is what Figure 3 depicts. The author names become clearly visible, so that the user can identify a particular author. The user can also click on paper nodes to get more information on the paper. To make it easier to identify which authors are more prolific in the field, the node size of the author is directly proportional to his number of publications. In Figure 3, for example, author Martin Wolpers has the largest number of publications and is a good candidate to use as a landmark in the exploration process.
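The node-size encoding described above is easy to reproduce. Below is a hedged sketch (illustrative only, not the ResearchTable implementation) that maps an author's publication count to a circle radius with a d3 linear scale; the author list is made-up example data.

```typescript
// Sketch (illustrative, not the paper's actual code):
// size author nodes in proportion to their number of publications.
import * as d3 from "d3";

interface Author {
  name: string;
  publications: number;
}

// Made-up example data.
const authors: Author[] = [
  { name: "author A", publications: 4 },
  { name: "author B", publications: 12 },
  { name: "author C", publications: 21 },
];

// Radius directly proportional to publication count, as the text describes.
const maxPubs = d3.max(authors, (a) => a.publications) ?? 1;
const radius = d3.scaleLinear().domain([0, maxPubs]).range([0, 30]);

// (A common alternative is d3.scaleSqrt, which makes the node *area*,
// rather than the radius, track the count.)
for (const a of authors) {
  console.log(a.name, radius(a.publications).toFixed(1));
}
```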

4. EVALUATION

In this section, we describe how we have evaluated our first iteration. Subsections 4.1 and 4.2 elaborate on the setup of the evaluation. Subsection 4.3 discusses the results of the evaluation and finally, in subsection 4.4, we draw our conclusions from this evaluation.

4.1 Description

To evaluate the application, we deployed our tabletop in the main hall of the ECTEL 2010 conference [42]. This room was the main location for coffee breaks and Figure 4 illustrates the tabletop setup. The evaluation was conceived as a formative evaluation, in order to gather feedback on the design and implementation of the application from real users in a real-life scenario. We followed the think-aloud method, where the participants verbally describe their thoughts during the evaluation. In this way, the participants reveal their view on the system and possibly their misconceptions [28]. It started off with general questions (age, gender, profession, vision and left- or right-handed) about the participants, together with their backgrounds. The participants were introduced to the application by asking them if they could explain what they saw. We also asked them one basic content-related question to get them started: “Find author x and find out how many papers he wrote in ECTEL 2007”. When needed, the participants were given extra explanation about the application. After this, the evaluation continued with tasks they had to perform. For each task, we noted whether the task succeeded, how fluently the task was performed and whether the participant needed help or not. Finally, the participants were asked for some general feedback and they filled in a small questionnaire about usefulness and ease of use. Each evaluation took between 20 and 30 minutes.

4.2 Participants

There was a total of 11 participants, aged between 27 and 60. All participants were researchers, right-handed, and all but one had corrected vision. Only 3 of the participants considered that they had a bit of experience with multitouch interaction; the other 8 said they had a lot of experience. Regarding experience with tabletops or multitouch walls, however, only one person described himself as experienced. To find out how experienced the participants were in the research area, they were asked about their years of experience in the Technology Enhanced Learning (TEL) research area, the number of papers published and how many of them were published in TEL. Half of the participants claimed to have up to 3 years of experience and the other half claimed to have many years of experience. On average, the participants have published around 32 papers, of which 16 in the TEL area. Three participants have published more than 60 papers, of which 20 or more in the TEL area. Figure 5 shows in detail the number of published papers per participant.

Figure 4: Setting of the evaluation.

Figure 5: An overview of the number of papers the participants have written.

4.3 Results

In this section, we describe the results of the evaluation. These results are grouped in three parts. First, we report on the tasks the participants had to perform; second, we summarize the most important feedback; and third, we take a look at the results from the questionnaire.

4.3.1 Tasks

Vandeputte, Bram; Duval, Erik; Klerkx, Joris. Applying design principles in authorship networks - a case study. CHI EA '12: Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems, Austin, TX, USA, 5-10 May 2012, pages 741-744, ACM.

ResearchTable


learning analytics


Govaerts, S., Verbert, K., Duval, E., Pardo, A.: The student activity meter for awareness and self-reflection. In: Proceedings of CHI2012, ACM (2012) 869–884


http://www.snappvis.org


http://www.coursesmart.com/go/institutions/analytics

http://engagor.com/

navi-hci.appspot.com


J. Santos, S. Charleer, G. Parra, J. Klerkx, E. Duval, and K. Verbert. Evaluating the use of open badges in an open learning environment. In D. Hernandez-Leo, T. Ley, R. Klamma, and A. Harrer, editors, Scaling up Learning for Sustained Impact, volume 8095 of Lecture Notes in Computer Science, pages 314–327. Springer Berlin Heidelberg, 2013.


Khaled Bachour, Frederic Kaplan, Pierre Dillenbourg, "An Interactive Table for Supporting Participation Balance in Face-to-Face Collaborative Learning," IEEE Transactions on Learning Technologies, vol. 3, no. 3, pp. 203-213, July-September 2010.

open = opportunity for innovation

http://portal.mace-project.eu/

http://quantifiedself.com/


http://lift.do

contests...


http://hubwaydatachallenge.org

http://www.visualizing.org/marathon2012/recap#winners

tell a story...

1. recap
2. data
3. explore
4. share
5. wrap-up


what kind of data? what do you want? what is available?

what kind of story? what kind of visualization?


:-) excel :-(

http://openrefine.org
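OpenRefine itself is a GUI tool, but as an illustration of the kind of cleanup it automates (trimming whitespace, normalizing case, clustering near-duplicate labels), here is a tiny hedged sketch with made-up values; it is not OpenRefine's API, just the same idea as a script.

```typescript
// Sketch: the kind of cleanup OpenRefine automates (trim, normalize case,
// collapse near-duplicate labels), shown as a tiny script for illustration.
const raw = ["KU Leuven ", "ku leuven", "K.U. Leuven", "KU Leuven"];

// Normalize: trim whitespace, lower-case, drop punctuation.
const normalize = (s: string) =>
  s.trim().toLowerCase().replace(/[^a-z0-9 ]/g, "");

// Group values that normalize to the same key, as a crude "cluster" step.
const clusters = new Map<string, string[]>();
for (const value of raw) {
  const key = normalize(value);
  clusters.set(key, [...(clusters.get(key) ?? []), value]);
}

// All four spellings end up in the single cluster keyed "ku leuven".
console.log(clusters);
```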

1. recap
2. data
3. explore
4. share
5. wrap-up


1. recap
2. data
3. explore
4. share
5. wrap-up


task for next week

scenario
make your d3.js app interactive

scientific paper

? @ErikDuval

http://erikduval.wordpress.com