A big part of Human-Computer Interaction research is observation: staring for hours at how your participants use and abuse your system, or behave without it. Having detailed information on exactly when and what people do is quite important. The more detailed the logs, the better.
A usual approach is recording the audio during an evaluation session. Or even better, pointing a bunch of cameras at the participants from as many angles as possible. But sometimes, just sometimes, privacy and ethics (or just unwilling participants) get in the way. The next best thing? Taking notes…
Strangely enough I couldn’t find a single, simple app that would just let me write notes with timestamps next to them (I’m guessing my Googling skills aren’t what they used to be, but still, I needed a solution quickly!).
Say hello to Atom, the “hackable” editor from the people at GitHub. I quickly threw some JavaScript together et voilà, a solution to my problems: the time-notes package.
I know, this is the complete opposite of rocket science. But sometimes simple (it’s just a few lines of code) is enough. You’d be surprised how useful this was when spending 7 hours taking notes in a highly sensitive (personal data discussions with the occasional emotional moment) evaluation. Hey, it even saves your notes in TSV format!
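The core really is just a few lines. A minimal sketch of the idea, with illustrative function names (not the actual package API):

```javascript
// Minimal sketch of timestamped note-taking: each note is stored with the
// time elapsed since the session started, then exported as TSV.
// Function and field names are illustrative, not the actual package API.
function createSession(startTime) {
  return { startTime, notes: [] };
}

function addNote(session, text, now) {
  // Record elapsed time as hh:mm:ss since the session started.
  const elapsed = Math.floor((now - session.startTime) / 1000);
  const hh = String(Math.floor(elapsed / 3600)).padStart(2, '0');
  const mm = String(Math.floor((elapsed % 3600) / 60)).padStart(2, '0');
  const ss = String(elapsed % 60).padStart(2, '0');
  session.notes.push({ timestamp: `${hh}:${mm}:${ss}`, text });
}

function toTSV(session) {
  // One note per line: timestamp <TAB> note text.
  return session.notes.map(n => `${n.timestamp}\t${n.text}`).join('\n');
}
```

So a note added 65 seconds in comes out as `00:01:05<TAB>participant hesitates` — exactly the kind of line you can paste straight into a spreadsheet afterwards.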
Now if someone could automate the process of analysing my notes, that’d be great.
I’ve been debating whether to call it an emotional footprint or fingerprint. You leave a digital, emotional footprint behind on social networks, so that would’ve made sense. However, your emotional state, and the emotional traces you leave behind, must be quite unique… Fingerprint it is!
This article discusses the steps it took to come up with the final design presented above. For the technical details, if there is enough interest I’ll just throw it on Github and you can all help perfect this rough prototype.
Sentiment analysis provides interesting insights into social network behaviour. Our team uses Slack as its main means of communication, making it the perfect testbed for exploring new and interesting ways of visualising the emotional state and distribution of our active discussions.
With about 6 months of data, a week overview gives us interesting insights into both the distribution of activity across weekdays and the emotional nature of our posts. The above image shows a first attempt at visualising the sentiment analysis data, from Sunday (left) to Saturday (right).
The small bar indicator above every “day” shows the general sentimental state, while every small square represents one post: dark green to light green indicates a somewhat positive to very positive post; dark pink to bright pink, a somewhat negative to extremely negative post. Shades of grey indicate neutral posts. Posts are ordered by time.
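The colour encoding boils down to mapping a sentiment polarity score onto a handful of colour buckets. A minimal sketch, assuming polarity scores in [-1, 1]; the thresholds and bucket names are my own, not the exact values used:

```javascript
// Map a sentiment polarity score in [-1, 1] to a colour bucket:
// green shades for positive, pink shades for negative, grey for neutral.
// Thresholds and bucket names are illustrative choices.
function sentimentColor(polarity) {
  if (polarity > 0.05) {
    // Dark green (somewhat positive) to light green (very positive).
    return polarity > 0.5 ? 'light-green' : 'dark-green';
  }
  if (polarity < -0.05) {
    // Dark pink (a bit negative) to bright pink (extremely negative).
    return polarity < -0.5 ? 'bright-pink' : 'dark-pink';
  }
  return 'grey'; // neutral
}
```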
This provides an interesting overview of emotion per day, and also over time. Individual posts are nice but can get quite overwhelming with time (as your Slack community’s activity grows).
Similar to the previous visualisation, the above image represents activity per weekday, but now the X-axis is used to represent the hours of the week (i.e. there are 7 × 24 columns of squares in total). Colour brightness indicates the number of posts for a specific sentimental polarity level: grey to white indicates the number of neutral posts, dark green to bright green indicates the number of positive posts (brighter colours equal more posts). The height represents the level of sentiment, e.g. a green dot near the top is a very positive post, while a green dot near the centre (near the neutral posts) indicates a somewhat positive post.
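Aggregating the posts into that 7 × 24 grid is a simple binning step. A sketch, assuming each post carries a Date and a polarity score in [-1, 1]; the number of polarity levels is an illustrative choice:

```javascript
// Bin posts into a 7 x 24 grid (weekday x hour of day), counting posts per
// discretised sentiment polarity level in each cell.
// The number of polarity levels is an illustrative choice.
function binPosts(posts, levels = 5) {
  // grid[day][hour] is an array of counts, one entry per polarity level.
  const grid = Array.from({ length: 7 }, () =>
    Array.from({ length: 24 }, () => new Array(levels).fill(0)));
  for (const post of posts) {
    const day = post.date.getDay();   // 0 = Sunday … 6 = Saturday
    const hour = post.date.getHours();
    // Map polarity in [-1, 1] to a level index in [0, levels - 1].
    const level = Math.min(levels - 1,
      Math.floor(((post.polarity + 1) / 2) * levels));
    grid[day][hour][level] += 1;
  }
  return grid;
}
```

The brightness of each square then follows directly from the count in its cell.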
While information on individual posts is lost, it is easier to see the distribution of levels of emotions per hour of a specific weekday. Great, but can we do better?
Version 3: The Emotional Fingerprint
Condensing the data even more, the Emotional Fingerprint visualisation gives every Slack user a unique overview of their emotional state across the entire dataset per weekday (7 columns for 7 weekdays). While giving personal insights, the Emotional Fingerprint presents an easy way of comparing individuals and could help find patterns in larger communities, or even across communities. As mentioned before, we’d love for you, and really any Slack community out there, to get in touch!
Through The European Library website, Digital Humanities researchers are now given access to 10 million European digitised newspaper pages. While the availability and accessibility of this rich material are a great addition to the research corpus, the large amount of data can make it hard to find the specifics a researcher is looking for.
We gathered a group of Digital Humanities researchers in Amsterdam to collect ideas in a 1-day workshop on how we could improve access to the data and what tools could improve the research workflow. A similar, shorter session was organised during the Europeana Cloud Plenary Meeting in Edinburgh. Ideas that came up included visualising sentiment analysis, how news moves through time and space, being able to compare queries, moving and continuing search results to personal digital spaces, sharing results with other researchers, dealing with language issues, spelling changes through time, visualising the precision of OCR, entity recognition etc. The list is quite long.
Our current prototype focuses on creating a faceted search environment through an interactive visualisation focusing on the time and space aspects. Following an “overview + details on demand” approach, the visualisation provides an overview that allows researchers to find patterns in the data and gain insights across time and space, while also giving access to each individual newspaper image.
The map, timeline, newspaper and result modules shared on one screen
The prototype consists of 7 modules:
a text search widget: supports search on words and sentences in the OCR’d newspaper text;
a newspaper title widget: in order to restrict searches to specific newspapers;
a timeline widget: in order to restrict searches to a specific time frame while also visualising the number of newspapers in the search result per year;
a map widget: enables a researcher to explore the distribution across Europe while also providing the ability to restrict a search to a specific country (note: due to the metadata lacking country information, language is currently used as a country indicator);
a search history widget: visualises the history of search terms/facet selections of the user;
a newspaper edition result widget: shows all results within the selection of the widgets above;
a newspaper view: shows the actual newspaper scan.
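Combining the widget selections comes down to intersecting the active facets over the result set. A minimal sketch of that filtering step; the field names are illustrative, not the actual metadata schema:

```javascript
// Filter newspaper results by the currently active facet selections.
// A facet left undefined places no restriction on the results.
// Field names (ocrText, title, year, language) are illustrative.
function applyFacets(results, facets) {
  return results.filter(r =>
    (!facets.text || r.ocrText.includes(facets.text)) &&
    (!facets.title || r.title === facets.title) &&
    (!facets.yearFrom || r.year >= facets.yearFrom) &&
    (!facets.yearTo || r.year <= facets.yearTo) &&
    (!facets.language || r.language === facets.language));
}
```

Each widget contributes one facet, and every widget redraws itself against the filtered result set.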
The visualisations were created using Processing.js, with Socket.IO providing live communication with a Node.js server. Any interaction with a single module updates all other modules: e.g. selecting a country adjusts the timeline to overlay the results of the selected country, and selecting a specific time frame shows only the countries and newspapers relevant to that selection. The whole application can run across multiple devices at the same time, enabling set-ups ranging from a single tabletop device to multiple displays on mobile devices. All updates happen cross-device, wherever the devices are located.
Such a setup is very flexible: researchers can not only decide which modules they wish to use, but also how they wish to access them. Large displays can visualise all modules simultaneously, while multiple screens (e.g. multiple computer screens, large TVs, interactive tabletops, tablets and phones) can each provide access to 1 or more modules.
Early version of the prototype: The left iPad shows the results per newspaper, the right iPad shows the timeline. The laptop shows the map module, while the iPhone lets the user do text queries. An action on any device updates all other devices.
A researcher can decide to open multiple tabs in a browser to access the data on smaller screens. Researchers can share live searches, creating a co-located or even remotely shared faceted search environment. This also means the visualisation can be deployed in other settings such as a public library, using a public display where visitors can interact with the visualisation using personal devices.
We are currently in the usability testing phase, where we evaluate both the usability of the modules and the viability of the multiple-screen setup. Deploying the visualisation on a large interactive tabletop as well as spreading it out over multiple tablets has already shown that, from a user point of view, faceted search performs equally well on both setups.
A large display presenting the title page of the selected newspaper on the right, while providing the search history on the left.
The results of these evaluations will let us improve the visualisation even further, after which we shall ask Digital Humanities researchers to join in and provide us with expert feedback. If you wish to be part of this process, do let us know!
Tomorrow I’ll be evaluating a couple of dashboards to visualize the activity of students/teacher in a feedback/discussion session live in the classroom. These dashboards are based on the designs I’ve discussed in this previous post. I’ll start by sketching the situation again, then present the 4 designs that will be tested. This article also aims to familiarize the students with the designs, so I’ll keep my explanation to the essentials. After the evaluations are done, I’ll go into more detail.
The class consists of 12 students, split into groups of 3. Each group gives a presentation on their progress of the past week, after which everyone can give feedback and ask questions, including the teacher and teacher assistants (which makes 4 groups of students + 1 group of teacher/TAs). A large TV in the classroom will display the feedback activity. Each group can send a “like” to any other group for their questions or comments, using a simple web interface (see figure above).
A straightforward way of presenting the distribution of feedback is through a histogram. When a group talks, its “feedback” bar grows. When a group receives a like, an extra like is added behind their name. Equal bar lengths indicate balance between groups.
Very similar to the Histograms, but a more “fun” representation (based on iTree, see the reference below): each group has its own tree. Trees grow as a group talks more. Apples are hung on a group’s tree for every comment they receive.
Every group is represented by a large dot. The presenter is indicated by the pink dot (note: this is the only visualization where the presenter is also visualized; the presenter is static and can neither receive likes nor give feedback). A green dot represents one of the groups in the audience. A dot grows as the group receives more comments. A line appears between the group and the presenter when the group is giving feedback (it blinks when active), and grows in width according to the amount of feedback.
The 4 groups giving feedback are visualized as pink dots. The large circle represents the average amount of feedback across all groups. When a group talks, their dot moves inwards, away from the average. The others also move away from the average, but outwards. When there is balance, the dots rest on the outer rim of the circle. Every comment adds a “moon” to the receiving group’s dot.
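The radial placement follows directly from each group’s share of the talk time. A sketch of the position calculation; the rim radius and the linear scaling are illustrative choices:

```javascript
// Compute each group's radial offset from the rim of the circle that
// represents the average amount of feedback. Groups that talked more than
// average get a negative offset (inwards), groups that talked less get a
// positive offset (outwards); at perfect balance everyone sits on the rim.
// rimRadius and the linear scaling are illustrative choices.
function radialOffsets(talkTimes, rimRadius = 100) {
  const average = talkTimes.reduce((a, b) => a + b, 0) / talkTimes.length;
  return talkTimes.map(t =>
    average === 0 ? 0 : ((average - t) / average) * rimRadius);
}
```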
In case you are interested in the technical details: all visualizations were developed using Processing.js. Node.js was used to create the server application which stores all session data in MongoDB. Socket.IO takes care of the communication between the web interfaces, visualizations and the server.
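On the server side, the dashboards mostly need a running tally per group. A sketch of that aggregation, kept as a pure function so the storage layer (MongoDB in the real setup) stays out of the picture; the event shape is an assumption on my part:

```javascript
// Aggregate a stream of session events into per-group dashboard state.
// Two event types are assumed: 'talk' (with a duration in seconds) and
// 'like' (sent to the receiving group). The event shape is illustrative;
// the real application stores these events in MongoDB.
function aggregateSession(events) {
  const groups = {};
  const get = name =>
    groups[name] || (groups[name] = { talkTime: 0, likes: 0 });
  for (const e of events) {
    if (e.type === 'talk') get(e.group).talkTime += e.duration;
    else if (e.type === 'like') get(e.group).likes += 1;
  }
  return groups;
}
```

The visualizations then only have to map `talkTime` to bar length (or tree height, dot position) and `likes` to the number of likes, apples or moons.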
 Nakahara J, Hisamatsu S, Yaegashi K, Yamauchi Y (2005) iTree: does the mobile phone encourage learners to be more involved in collaborative learning? In: Proceedings of the 2005 conference on computer support for collaborative learning: learning 2005: the next 10 years! (CSCL ‘05). International Society of the Learning Sciences, pp 470–478
The course on Information Visualization at KU Leuven, taught by professor Erik Duval, has students choosing their own dataset and developing interesting information visualizations (see the course wiki if you wish to visit their blogs and check their work). As teacher assistant, I’ve decided to join the fun, and combine this with my own research.
My choice of dataset, which will come as no surprise, is Learning Analytics related. The goal is to involve the students as much as possible (hi students, hope you are reading this!), to help me create something useful for them (so the data will also have to come from them), while also presenting them with a real example of how we design, develop and evaluate our visualizations.
We’ve already been “tracking” some of the students’ work through a simple online spreadsheet they maintain themselves regarding time spent on activities. I presented a quick and dirty visualization hack, showing how these time entries per student can already give some simple insights. See the figure below, which shows clearly that the second activity required most of their time (in this case, learning D3.js). Quite a simple example of course, but more data like this helps us create better visualizations, right?
As always, it felt very Big Brother-ish to the students. The Quantified Self idea doesn’t translate well (or I’m really bad at explaining it!) when someone grading you is watching your data this closely. We’ll look at anonymous visualizations some other time, but it’s important to note that students are scared their data will be misinterpreted. More effort, for example, doesn’t always lead to better results.
So on to the idea I’ll be implementing this week. In the InfoVis class, each group of students (4 groups of 3 students each) is required to present their work to the class. Every group not presenting, including the professor/teacher assistant/…, can ask questions and give feedback. Contribution to such a discussion is useful for everyone, and thus visualizing the amount of contribution by each group can be interesting. It might just help create a better balance between the time spent talking by each group, which is a good thing (?). See the image below for a similar application, which has inspired us for this idea.
Students did see problems with visualizing the amounts. People would give feedback just… to give feedback! We could end up with a lot of mediocre, “filler” feedback (“That’s great”, “I like that”, “I totally agree”). Visualizing the amount of feedback says nothing about the quality, therefore a rating system could also be beneficial.
That raised another alarm bell. “Negative rating will make me feel as if my feedback isn’t appreciated”, a student reacted. It was suggested that positive feedback alone, similar to Facebook’s “like”, could be experienced less negatively. So that’s what we’ll go for!
I’ll spare you the details, but Node.js and Socket.IO are going to be my best friends to make all of this happen by next Monday. I’ll create an interface to manually log who is talking (if the entire thing makes sense after deployment, I can look into microphones or noise sensors…), I’ll give each group a “like” button, and focus the rest of my efforts on making some (hopefully) interesting (oh, and did I say live?) large-display visualizations.
To give you an idea of what I have in mind, here are some sketches of a few examples I came up with… Comments much appreciated of course!