Time-notes

A big part of Human-Computer Interaction research is observation: staring for hours at how your participants use and abuse your system, or behave without it. Having detailed information on exactly when and what people do is quite important. The more detailed the logs, the better.

A common approach is recording the audio during an evaluation session. Or even better, pointing a bunch of cameras at the participants from as many angles as possible. But sometimes, just sometimes, privacy and ethics (or just unwilling participants) get in the way. The next best thing? Taking notes…


Strangely enough I couldn’t find a single, simple app that would just let me write notes with timestamps next to them (I’m guessing my Googling skills aren’t what they used to be, but still, I needed a solution quickly!).

Say hello to Atom, the “hackable” editor from the people at GitHub. I quickly threw some JavaScript together et voilà, a solution to my problems: the time-notes package.

I know, this is the complete opposite of rocket science. But sometimes simple (it’s just a few lines of code) is enough. You’d be surprised how useful this was when spending 7 hours taking notes in a highly sensitive (personal data discussions with the occasional emotional moment) evaluation. Hey, it even saves everything in TSV format!
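
For the curious, the core of such a package really is tiny. Here’s a minimal sketch of the idea (the command name and timestamp format are illustrative, not necessarily what the published package uses):

```javascript
// lib/time-notes.js: minimal sketch of a timestamped-notes Atom package.
// Command name and timestamp format are illustrative.
module.exports = {
  activate() {
    this.disposable = atom.commands.add('atom-workspace', {
      'time-notes:new-note': () => {
        const editor = atom.workspace.getActiveTextEditor();
        if (!editor) return;
        // Start every note on its own line, prefixed with the current time
        // and a tab separator (hence the TSV output).
        editor.insertText(`\n${new Date().toISOString()}\t`);
      }
    });
  },

  deactivate() {
    this.disposable.dispose();
  }
};
```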

Now if someone could automate the process of analysing my notes, that’d be great.

Atom editor: https://atom.io/
Time-notes package: https://atom.io/packages/time-notes

 

Your Emotional Fingerprint on Slack

Update: Due to several requests, I’ve uploaded the code of the visualisation part of the Emotional Fingerprint to GitHub. Enjoy! https://github.com/svencharleer/emo-slack-fingerprint
This article was originally posted on June 10, 2015
 

I was torn between calling it an emotional footprint or fingerprint. You leave a digital, emotional footprint behind on social networks, so that would’ve made sense. However, your emotional state, and the emotional traces you leave behind, must be quite unique… Fingerprint it is!

This article discusses the steps that led to the final design presented above. As for the technical details: if there is enough interest, I’ll just throw it on GitHub and you can all help perfect this rough prototype.

Version 1

Sentiment analysis provides interesting insights into social network behaviour. Our team uses Slack as its main means of communication, which makes it the perfect testbed for exploring new and interesting ways of visualising our emotional state and its distribution, based on our active discussions.

Version 1. A green square is a positive post, a pink one a negative post. Grey is neutral. Brightness indicates the strength of the sentiment.

With about 6 months of data, a week overview gives us interesting insights into both the distribution of activity across weekdays and the emotional nature of our posts. The above image shows a first attempt at visualising the sentiment analysis data, from Sunday (left) to Saturday (right).

The small bar indicator above each “day” shows the overall sentiment for that day, while every small square represents one post: dark green to light green indicates a somewhat positive to very positive post, dark pink to bright pink a somewhat negative to extremely negative post. Shades of grey indicate neutral posts. Posts are ordered by time.

This provides an interesting overview of emotion per day, and also over time. Seeing individual posts is nice, but it can get quite overwhelming as your Slack community’s activity grows.
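
As a sketch of how such a colour mapping can work (assuming each post carries a polarity score in [-1, 1], which is what most sentiment analysis libraries return; the neutral threshold and lightness range here are illustrative):

```javascript
// Map a sentiment polarity score in [-1, 1] to a square colour (sketch).
function polarityToColor(polarity) {
  if (Math.abs(polarity) < 0.1) {
    return 'hsl(0, 0%, 60%)';                       // grey: neutral post
  }
  // Brightness grows with the strength of the sentiment.
  const lightness = 30 + Math.abs(polarity) * 50;   // 30%..80%
  return polarity > 0
    ? `hsl(120, 60%, ${lightness}%)`                // dark green to light green
    : `hsl(330, 70%, ${lightness}%)`;               // dark pink to bright pink
}
```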

Version 2

Version 2: hours of the week are on the X-axis. Every square on the Y-axis represents the number of posts of a specific polarity level.

Similar to the previous visualisation, the above image represents activity per weekday, but now the X-axis represents the hours of the week (i.e. there are 7 × 24 columns of squares). Colour brightness indicates the number of posts at a specific sentiment polarity level: grey to white indicates the number of neutral posts, dark green to bright green the number of positive posts (brighter colours equal more posts). The height represents the level of sentiment, e.g. a green square near the top stands for very positive posts, while a green square near the centre (near the neutral posts) stands for somewhat positive posts.
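
If you want to reproduce this at home, the underlying binning is straightforward. A sketch (a post here is assumed to be a { timestamp, polarity } pair, and 11 polarity levels is an arbitrary choice):

```javascript
// Bin posts into a weekday × hour grid, counting posts per polarity level
// in every cell. Assumes posts of shape { timestamp, polarity } with
// polarity in [-1, 1].
function binPosts(posts, levels = 11) {
  const grid = Array.from({ length: 7 }, () =>                    // weekdays
    Array.from({ length: 24 }, () => new Array(levels).fill(0))); // hours
  for (const post of posts) {
    const date = new Date(post.timestamp);
    // Quantise polarity to a level index in [0, levels - 1].
    const level = Math.min(levels - 1,
      Math.floor((post.polarity + 1) / 2 * levels));
    grid[date.getDay()][date.getHours()][level] += 1;
  }
  return grid;  // colour brightness of each square then encodes the count
}
```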

While information on individual posts is lost, it is easier to see the distribution of levels of emotions per hour of a specific weekday. Great, but can we do better?

Version 3: The Emotional Fingerprint

The (currently) final version. Every user has a 7-column fingerprint. The larger the Y-range, the more emotionally spread out your posts are!

Condensing the data even further, the Emotional Fingerprint visualisation gives every Slack user a unique overview of their emotional state per weekday across the entire dataset (7 columns for 7 weekdays). While giving personal insights, the Emotional Fingerprint also presents an easy way of comparing individuals, and could help find patterns in larger communities, or even across communities. As mentioned before, we’d love for you, those Slack communities, and any others really, to get in touch!

This is a slightly extended, less “dry” version of my post on https://augmenthuman.wordpress.com/portfolio/emotional-fingerprint/ , our research group’s website.

Visualising European Newspapers for Digital Humanities Researchers

This article was originally published on the Europeana Research blog

Through The European Library website, Digital Humanities researchers now have access to 10 million digitised European newspaper pages. While the availability and accessibility of this rich material are a great addition to the research corpus, the sheer amount of data can make it hard to find the specifics a researcher is looking for.

We gathered a group of Digital Humanities researchers in Amsterdam for a 1-day workshop to collect ideas on how we could improve access to the data and what tools could improve the research workflow. A similar, shorter session was organised during the Europeana Cloud Plenary Meeting in Edinburgh. Ideas that came up included visualising sentiment analysis, how news moves through time and space, being able to compare queries, moving and continuing search results to personal digital spaces, sharing results with other researchers, dealing with language issues and spelling changes through time, visualising the precision of OCR, entity recognition, etc. The list is quite long.

Our current prototype creates a faceted search environment through an interactive visualisation centred on the time and space aspects of the data. Following an “overview + details on demand” approach, the visualisation provides an overview that allows researchers to find patterns in the data and gain insights across time and space, while also giving access to each individual newspaper image.

The map, timeline, newspaper and result modules shared on one screen

The prototype consists of the following modules:

  • a text search widget: supports search on words and sentences in the OCR’d newspaper text;
  • a newspaper title widget: to restrict searches to specific newspapers;
  • a timeline widget: to restrict searches to a specific time frame, while also visualising the number of newspapers in the search result per year;
  • a map widget: enables a researcher to explore the distribution across Europe, while also providing the ability to restrict a search to a specific country (note: as the metadata lacks country information, language is currently used as a country indicator);
  • a search history widget: visualises the history of search terms/facet selections of the user;
  • a newspaper edition result widget: shows all results within the selection of the widgets above;
  • a newspaper view: shows the actual newspaper scan.

The visualisations are created using Processing.js, with Socket.IO providing live communication with a Node.js server. Any interaction with a single module updates all other modules, e.g. selecting a country adjusts the timeline to overlay the results of the selected country, and selecting a specific time frame shows only the countries and newspapers relevant to that time selection. The whole application can run across multiple devices at the same time, enabling set-ups from a single tabletop device to multiple displays on mobile devices. All updates happen cross-device, wherever the devices are located.
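
The synchronisation logic itself is almost trivial. A sketch of the server side (the event name and payload shape are illustrative, not our actual protocol):

```javascript
// server.js: sketch of the cross-device synchronisation with Socket.IO.
// Event name and payload shape are illustrative.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // When one module changes a facet (country, time frame, newspaper title,
  // text query, ...), relay the new selection to every other connected
  // device, which then updates its own modules accordingly.
  socket.on('facet-changed', (selection) => {
    socket.broadcast.emit('facet-changed', selection);
  });
});
```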

Such a setup is very flexible: researchers can not only decide which modules they wish to use, but also how they wish to access them. Large displays can visualise all modules simultaneously, while multiple screens (e.g. multiple computer screens, large TVs, interactive tabletops, tablets and phones) can each provide access to one or more modules.

Early version of the prototype: The left iPad shows the results per newspaper, the right iPad shows the timeline. The laptop shows the map module, while the iPhone lets the user do text queries. An action on any device updates all other devices.

A researcher can decide to open multiple tabs in a browser to access the data on smaller screens. Researchers can share live searches, creating a co-located or even remotely shared faceted search environment. This also means the visualisation can be deployed in other settings such as a public library, using a public display where visitors can interact with the visualisation using personal devices.

We are currently in the usability testing phase, where we evaluate both the usability of the modules and the viability of the multiple-screen setup. Deploying the visualisation on a large interactive tabletop as well as spreading it out over multiple tablets has already shown that, from a user point of view, faceted search performs equally well on both setups.

A large display presenting the title page of the selected newspaper on the right, while providing the search history on the left.

The results of these evaluations will let us improve the visualisation even further, after which we shall ask Digital Humanities researchers to join in and provide us with expert feedback. If you wish to be part of this process, do let us know!

Designing a Live Discussion Visualization for the Classroom: Part 2/3

Tomorrow I’ll be evaluating a couple of dashboards that visualize the activity of students and teacher live in the classroom during a feedback/discussion session. These dashboards are based on the designs I discussed in this previous post. I’ll start by sketching the situation again, then present the 4 designs that will be tested. This article also aims to familiarize the students with the designs, so I’ll keep my explanation to the essentials. After the evaluations are done, I’ll go into more detail.

Simple interface to allow “likes”.

The class consists of 12 students, split into groups of 3. Each group gives a presentation on their progress over the past week, after which everyone can give feedback and ask questions, including the teacher and teacher assistants (which makes 4 groups of students + 1 group of teacher/TAs). A large TV in the classroom will display the feedback activity. Each group can send a “like” to any other group for their questions or comments, using a simple web interface (see figure above).

Histogram

Feedback given

Likes received

A straightforward way of presenting the distribution of feedback is through a histogram. When a group talks, its “feedback” bar grows. When a group receives a like, an extra like is added behind their name. Equal bar lengths indicate balance between groups.

Trees

Every tree represents a group. Apples represent “likes”. Legends are available in the full version.

Very similar to the histogram, but with a more “fun” representation (based on [1]): each group has its own tree. A tree grows as its group talks more. An apple is hung on a group’s tree for every “like” they receive.

Network

Presenter is represented by a pink dot. Other groups (teacher/TAs group, and each student group) are green. A line indicates feedback given. The thicker the line, the more feedback the group gave. During feedback, the line of the group blinks. For each “like” a group receives, their dot grows.

Every group is represented by a large dot. The presenter is indicated by the pink dot (note: this is the only visualization where the presenter is also shown; the presenter’s dot is static and can neither receive likes nor give feedback). A green dot represents one of the groups in the audience. A dot grows as its group receives more “likes”. A line appears between a group and the presenter when the group is giving feedback (it blinks while they talk), and grows in width with the amount of feedback given.

Zen

Each group is represented by a pink dot. Starting from a balanced state, the top group starts giving feedback, moving their dot to the center. Other dots are pushed outwards.

The top group has received one “like”. The bottom has received 3.

The 4 groups giving feedback are visualized as pink dots. The large circle represents the average amount of feedback across all groups. When a group talks, their dot moves inwards, away from the average; the other dots also move away from the average, but outwards. When there is balance, the dots rest on the rim of the circle. Every “like” adds a “moon” to the receiving group’s dot.
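
For the curious, the placement boils down to comparing each group’s feedback time against the average. One plausible mapping, purely as an assumption on my part (the prototype may use a different formula):

```javascript
// Sketch of the Zen layout: groups talking more than average move inside
// the circle, groups talking less move outside; at perfect balance every
// dot rests on the rim. The linear mapping below is an assumption.
function zenRadius(feedbackSeconds, averageSeconds, rimRadius) {
  if (averageSeconds === 0) return rimRadius;      // nobody has talked yet
  const ratio = feedbackSeconds / averageSeconds;  // 1.0 = exactly average
  // ratio 1: on the rim; ratio 2: at the centre; ratio 0: twice the rim.
  return Math.max(0, rimRadius * (2 - ratio));
}
```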

Development

In case you are interested in the technical details: all visualizations were developed using Processing.js. Node.js was used to create the server application, which stores all session data in MongoDB. Socket.IO takes care of the communication between the web interfaces, the visualizations and the server.
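
A sketch of how those three pieces fit together on the server (collection and event names are illustrative, and this assumes the 2.x-era MongoDB driver API):

```javascript
// server.js: sketch of the session server (names are illustrative).
const { MongoClient } = require('mongodb');
const io = require('socket.io')(3000);

// 2.x-era driver API: the callback receives the database handle directly.
MongoClient.connect('mongodb://localhost:27017/classroom', (err, db) => {
  if (err) throw err;
  io.on('connection', (socket) => {
    socket.on('like', (like) => {     // e.g. { from: 'group2', to: 'group4' }
      like.timestamp = new Date();
      db.collection('likes').insertOne(like);  // persist the session data
      io.emit('like', like);          // push to all live visualizations
    });
  });
});
```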

[1] Nakahara J, Hisamatsu S, Yaegashi K, Yamauchi Y (2005) iTree: does the mobile phone encourage learners to be more involved in collaborative learning? In: Proceedings of the 2005 conference on computer support for collaborative learning: learning 2005: the next 10 years! (CSCL ‘05). International Society of the Learning Sciences, pp 470–478

Designing a Live Discussion Visualization for the Classroom: Part 1/3

The course on Information Visualization at KU Leuven, taught by professor Erik Duval, has students choose their own dataset and develop interesting information visualizations (see the course wiki if you wish to visit their blogs and check out their work). As teacher assistant, I’ve decided to join the fun and combine this with my own research.

My choice of dataset, which will come as no surprise, is Learning Analytics related. The goal is to involve the students as much as possible (hi students, hope you are reading this!), to help me create something useful for them (so the data will also have to come from them), while also presenting them with a real example of how we design, develop and evaluate our visualizations.

We’ve already been “tracking” some of the students’ work through a simple online spreadsheet they maintain themselves, recording time spent on activities. I presented a quick and dirty visualization hack, showing how these time entries per student can already give some simple insights. See the figure below, which clearly shows that the second activity required most of their time (in this case, learning D3.js). Quite a simple example of course, but more data like this helps us create better visualizations, right?

Quick D3.js hack to visualize the spreadsheet info: activities ordered by time, from left to right. Circle size represents hours spent. Green = more than 5 hours. Totals are visualized on the right, with the pink line indicating the average.
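
The hack itself is only a handful of D3.js lines. A sketch of the idea (the data values below are made up; assume the spreadsheet rows were reduced to { activity, hours } pairs):

```javascript
// Sketch of the spreadsheet hack: one circle per activity, area scaled by
// hours spent, green when more than 5 hours. Data values are made up.
const data = [
  { activity: 'Choosing a dataset', hours: 3 },
  { activity: 'Learning D3.js', hours: 9 },
  { activity: 'First sketches', hours: 4 }
];

const svg = d3.select('body').append('svg')
  .attr('width', 400)
  .attr('height', 120);

svg.selectAll('circle')
  .data(data)
  .enter().append('circle')
  .attr('cx', (d, i) => 60 + i * 120)         // one column per activity
  .attr('cy', 60)
  .attr('r', d => Math.sqrt(d.hours) * 8)     // circle area ∝ hours spent
  .attr('fill', d => d.hours > 5 ? 'green' : 'lightgrey');
```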

As always, it felt very Big Brother-ish to the students. The Quantified Self idea doesn’t translate well (or I’m really bad at explaining it!) when someone who grades you is watching your data this closely. We’ll look at anonymous visualizations some other time, but it’s important to note that students are scared their data will be misinterpreted. More effort, for example, doesn’t always lead to better results.

Typical setup in the classroom: 1 group presents their work, 3 groups + the professor provide feedback. For my first prototype, the two guys at the bottom manually track group activity. A large display with a live visualization is positioned at the top right.

So on to the idea I’ll be implementing this week. In the InfoVis class, each group of students (4 groups of 3 students each) is required to present their work to the class. Every group not presenting, including the professor/teacher assistant/…, can ask questions and give feedback. Contributing to such a discussion is useful for everyone, and thus visualizing the amount of contribution by each group can be interesting. It might just help create a better balance in the time each group spends talking, which is a good thing (?). See the image below for a similar application, which inspired this idea.

Bachour, K., Kaplan, F. and Dillenbourg, P., “An Interactive Table for Supporting Participation Balance in Face-to-Face Collaborative Learning,” IEEE Transactions on Learning Technologies, vol. 3, no. 3, pp. 203–213, July–Sept. 2010

Students did see problems with visualizing the amounts: people would give feedback just… to give feedback! We could end up with a lot of mediocre, “filler” feedback (“That’s great”, “I like that”, “I totally agree”). Visualizing the amount of feedback says nothing about its quality, so a rating system could also be beneficial.

That raised another alarm bell. “Negative rating will make me feel as if my feedback isn’t appreciated”, a student reacted. It was suggested that positive feedback alone, similar to Facebook’s “like”, could be experienced less negatively. So that’s what we’ll go for!

I’ll spare you the details, but Node.js and Socket.IO are going to be my best friends in making all of this happen by next Monday. I’ll create an interface to manually log who is talking (if the entire thing makes sense after deployment, I can look into microphones or noise sensors…), I’ll give each group a “like” button, and I’ll focus the rest of my efforts on making some (hopefully) interesting, oh and did I say live?, large display visualizations.
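
To give an idea of how small the “like” part is, here’s a sketch of the client side (the server URL, group id and event name are placeholders):

```javascript
// like-button.js: sketch of the web interface (all names are placeholders).
const socket = io('http://classroom-server:3000');
const MY_GROUP_ID = 'group1';   // placeholder: configured per device

document.querySelectorAll('button.like').forEach((button) => {
  button.addEventListener('click', () => {
    // Each button carries the id of the group it targets in a data attribute.
    socket.emit('like', { from: MY_GROUP_ID, to: button.dataset.group });
  });
});
```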

To give you an idea of what I have in mind, here are some sketches of a few examples I came up with… Comments much appreciated of course!

Distribution of time spent giving feedback, per group. Colors indicate rated feedback.
Interaction between groups, showing the sequence of group feedback. Colors indicate feedback.
Size of tree indicates amount of feedback by group. “Likes” are visualized by adding apples to the tree. Based on iTree (Nakahara J, Hisamatsu S, Yaegashi K, Yamauchi Y (2005) iTree: does the mobile phone encourage learners to be more involved in collaborative learning? In: Proceedings of the 2005 conference on computer support for collaborative learning: learning 2005: the next 10 years! (CSCL ‘05). International Society of the Learning Sciences, pp 470–478)
Size of group indicates ratings received. Lines between group X and Y indicate how many times group X gives feedback after group Y, and vice versa. An alternative, and probably better, option would be to visualize who gives feedback to whom. Based on (Roberto Martinez Maldonado, Judy Kay, Kalina Yacef, and Beat Schwendimann. 2012. An interactive teacher’s dashboard for monitoring groups in a multi-tabletop learning environment. In Proceedings of the 11th international conference on Intelligent Tutoring Systems (ITS’12), Stefano A. Cerri, William J. Clancey, Giorgos Papadourakis, and Kitty Panourgia (Eds.). Springer-Verlag, Berlin, Heidelberg, 482–492.)
Similar to above, this time using arcs to visualize interactivity between groups. Based on (Nagel, T., Duval, E., Vande Moere, A., Kloeckl, K., Ratti, C.: Sankey Arcs — Visualizing edge weights in path graphs. Eurovis 2012, Vienna, Austria, 5–8 June 2012, Eurovis 2012, pp. 55–59, Eurographics Association)
A bit trickier to explain when not animated: the visualization adds a square for every 5 seconds talked. A new square is drawn in the direction of the group talking, e.g. if only group 3 talks, a line of squares will be drawn at a 45-degree angle. Other groups’ activity pulls the new squares towards their location, e.g. on the left you can see that after a while the professor starts talking, making the new squares move towards the bottom. Like tug of war! Overlapping squares turn orange, and with even more overlap, green. In the left image, the green square means there was a lot of overlap between group 3 and the professor, meaning there was a nice balance between the two (lots of back-and-forth talk perhaps). Groups 1 and 2 didn’t participate at all, though. In the right image, there is a nice balance between all 4 groups!