Designing a Live Discussion Visualization for the Classroom: Part 2/3

Tomorrow I’ll be evaluating a couple of dashboards that visualize student and teacher activity live in the classroom during a feedback/discussion session. These dashboards are based on the designs I discussed in a previous post. I’ll start by sketching the situation again, then present the 4 designs that will be tested. This article also aims to familiarize the students with the designs, so I’ll keep my explanation to the essentials. After the evaluations are done, I’ll go into more detail.

Simple interface to allow “likes”.

The class consists of 12 students, split into groups of 3. Each group gives a presentation on their progress over the past week, after which everyone can give feedback and ask questions, including the teacher and teaching assistants (which makes 4 groups of students + 1 group of teacher/TAs). A large TV in the classroom displays the feedback activity. Each group can send a “like” to any other group for their questions or comments, using a simple web interface (see figure above).
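
For the curious: the client side of that interface is tiny. Here is a minimal sketch, assuming a Socket.IO connection and a “like” event carrying sender and receiver group ids; the server URL, event name and ids are illustrative, not the actual implementation.

```js
// Minimal like-interface client (sketch). Assumes the Socket.IO client
// library is loaded and the page contains one button per other group, e.g.
// <button class="like" data-group="group-2">Like group 2</button>
const socket = io('http://classroom-server:3000'); // hypothetical server URL
const MY_GROUP = 'group-1'; // each group's page is configured with its own id

document.querySelectorAll('button.like').forEach((button) => {
  button.addEventListener('click', () => {
    // tell the server who liked whom; it relays this to the visualization
    socket.emit('like', { from: MY_GROUP, to: button.dataset.group });
  });
});
```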

Histogram

Feedback given

Likes received

A straightforward way of presenting the distribution of feedback is through a histogram. When a group talks, its “feedback” bar grows. When a group receives a like, an extra like is added behind their name. Equal bar lengths indicate balance between groups.
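
To make the encoding concrete, here is a rough sketch of the drawing logic in plain canvas JavaScript (the real dashboards use Processing.js; the data shape and constants here are illustrative):

```js
// Histogram sketch: bar length encodes seconds of feedback given,
// one heart per like received, drawn behind the group's name.
const ctx = document.querySelector('canvas').getContext('2d');

function drawHistogram(groups) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  groups.forEach((g, i) => {
    const y = 30 + i * 40;
    ctx.fillText(g.name + ' ' + '\u2665'.repeat(g.likes), 10, y + 14); // likes behind the name
    ctx.fillRect(120, y, g.secondsTalked * 2, 20); // bar grows while the group talks
  });
}

drawHistogram([
  { name: 'Group 1', secondsTalked: 45, likes: 2 },
  { name: 'Group 2', secondsTalked: 45, likes: 1 }, // equal bars = balance
]);
```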

Trees

Every tree represents a group. Apples represent “likes”. Legends are available in the full version.

Very similar to the histogram, but a more “fun” representation (based on [1]): each group has its own tree. A tree grows as its group talks more, and an apple is hung on a group’s tree for every like it receives.

Network

Presenter is represented by a pink dot. Other groups (teacher/TAs group, and each student group) are green. A line indicates feedback given. The thicker the line, the more feedback the group gave. During feedback, the line of the group blinks. For each “like” a group receives, their dot grows.

Every group is represented by a large dot. The presenter is indicated by the pink dot (note: this is the only visualization where the presenter is also shown; the presenter is static and can neither receive likes nor give feedback). Each green dot represents a group in the audience. A dot grows as its group receives more likes. A line appears between a group and the presenter when the group is giving feedback (it blinks while the group is talking) and grows in width with the amount of feedback given.
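
A compact sketch of this layout, again in plain canvas JavaScript (the pink/green colours come from the design; positions and constants are mine, and the blinking is omitted for brevity):

```js
// Network sketch: presenter (pink) in the centre, audience groups (green)
// around it. Line width grows with feedback given, dot radius with likes.
const ctx = document.querySelector('canvas').getContext('2d');
const CX = 200, CY = 150; // presenter position

function drawNetwork(groups) {
  ctx.clearRect(0, 0, 400, 300);
  groups.forEach((g, i) => {
    const a = (2 * Math.PI * i) / groups.length;
    const x = CX + 120 * Math.cos(a), y = CY + 120 * Math.sin(a);
    ctx.strokeStyle = 'green';
    ctx.lineWidth = 1 + g.secondsTalked / 10; // thicker line = more feedback given
    ctx.beginPath(); ctx.moveTo(CX, CY); ctx.lineTo(x, y); ctx.stroke();
    ctx.fillStyle = 'green';
    ctx.beginPath(); ctx.arc(x, y, 8 + 2 * g.likes, 0, 2 * Math.PI); ctx.fill();
  });
  ctx.fillStyle = 'hotpink'; // the static presenter dot, drawn on top
  ctx.beginPath(); ctx.arc(CX, CY, 12, 0, 2 * Math.PI); ctx.fill();
}

drawNetwork([
  { secondsTalked: 40, likes: 3 },
  { secondsTalked: 10, likes: 0 },
  { secondsTalked: 25, likes: 1 },
]);
```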

Zen

Each group is represented by a pink dot. Starting from a balanced state, the top group starts giving feedback, moving their dot to the center. Other dots are pushed outwards.

The top group has received one “like”. The bottom group has received 3.

The 4 groups giving feedback are visualized as pink dots. The large circle represents the average amount of feedback across all groups. When a group talks, its dot moves inwards, away from the average. The other dots also move away from the average, but outwards. When there is balance, the dots rest on the outer rim of the circle. Every like adds a “moon” to the receiving group’s dot.
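
The geometry behind this is straightforward: each dot sits on its own spoke, at a distance from the centre that deviates from the rim in proportion to how far its group is from the average. Here is a sketch of one possible placement rule; the formula and constants are my own, as the design only fixes the qualitative behaviour:

```js
// Zen layout sketch: dots rest on the rim when every group has given the
// average amount of feedback; talking more than average pulls a dot
// inwards, talking less pushes it outwards.
const RIM = 100; // radius of the "average" circle, in pixels
const K = 2;     // pixels of displacement per second of deviation

function zenPositions(groups) {
  const avg = groups.reduce((sum, g) => sum + g.secondsTalked, 0) / groups.length;
  return groups.map((g, i) => {
    const angle = (2 * Math.PI * i) / groups.length; // one spoke per group
    const r = Math.max(10, RIM + K * (avg - g.secondsTalked)); // rim at balance
    return { name: g.name, x: r * Math.cos(angle), y: r * Math.sin(angle) };
  });
}

// At perfect balance avg equals every group's secondsTalked, so r = RIM
// for all dots and they all rest on the circle.
```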

Development

In case you are interested in the technical details: all visualizations were developed using Processing.js. Node.js was used to create the server application which stores all session data in MongoDB. Socket.IO takes care of the communication between the web interfaces, visualizations and the server.
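
To give an impression of how these pieces fit together, here is a minimal sketch of such a server; the event name, port and database/collection names are illustrative assumptions, not the actual code.

```js
// Sketch: Socket.IO relays events between the like interfaces and the
// visualizations, and every event is persisted in MongoDB.
const { MongoClient } = require('mongodb');
const io = require('socket.io')(3000); // Socket.IO server on port 3000

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const events = client.db('classroom').collection('events'); // assumed names

  io.on('connection', (socket) => {
    socket.on('like', async (like) => {
      await events.insertOne({ type: 'like', ...like, at: new Date() }); // store session data
      io.emit('like', like); // broadcast to the visualization clients
    });
  });
}

main().catch(console.error);
```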

[1] Nakahara J, Hisamatsu S, Yaegashi K, Yamauchi Y (2005) iTree: does the mobile phone encourage learners to be more involved in collaborative learning? In: Proceedings of the 2005 conference on computer support for collaborative learning: learning 2005: the next 10 years! (CSCL ‘05). International Society of the Learning Sciences, pp 470–478

Designing a Live Discussion Visualization for the Classroom: Part 1/3

The course on Information Visualization at KU Leuven, taught by professor Erik Duval, has students choose their own dataset and develop interesting information visualizations (see the course wiki if you wish to visit their blogs and check out their work). As a teaching assistant, I’ve decided to join the fun and combine this with my own research.

My choice of dataset, which will come as no surprise, is Learning Analytics related. The goal is to involve the students as much as possible (hi students, hope you are reading this!), to help me create something useful for them (so the data will also have to come from them), while also presenting them with a real example of how we design, develop and evaluate our visualizations.

We’ve already been “tracking” some of the students’ work through a simple online spreadsheet they maintain themselves, recording time spent on activities. I presented a quick and dirty visualization hack, showing how these time entries per student can already give some simple insights. See the figure below, which shows clearly that the second activity required most of their time (in this case, learning D3.js). Quite a simple example of course, but more data like this helps us create better visualizations, right?

Quick D3.js hack to visualize the spreadsheet info: activities ordered by time, from left to right. Circle size represents hours spent; green = more than 5 hours. Totals are visualized at the right, with the pink line indicating the average.
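
For those curious what such a hack boils down to, here is a rough reconstruction in D3.js; only the size encoding and the 5-hour threshold come from the caption above, the data shape and constants are illustrative.

```js
// One circle per (student, activity) cell: area ~ hours spent,
// green when more than 5 hours. A reconstruction, not the original code.
const data = [
  { student: 0, activity: 0, hours: 3 },
  { student: 0, activity: 1, hours: 8 }, // e.g. the "learning D3.js" activity
  { student: 1, activity: 1, hours: 6 },
];

d3.select('body').append('svg')
    .attr('width', 400)
    .attr('height', 200)
  .selectAll('circle')
  .data(data)
  .enter().append('circle')
    .attr('cx', (d) => 50 + d.activity * 60)
    .attr('cy', (d) => 50 + d.student * 60)
    .attr('r', (d) => Math.sqrt(d.hours) * 5) // area proportional to hours
    .attr('fill', (d) => (d.hours > 5 ? 'green' : 'grey')); // green = more than 5 hours
```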

As always, it felt very Big Brother-ish to the students. The Quantified Self idea doesn’t translate well (or I’m really bad at explaining it!) when someone grading you is watching your data this closely. We’ll look at anonymous visualizations some other time, but it’s important to note that students are scared their data will be misinterpreted. More effort, for example, doesn’t always lead to better results.

Typical setup in the classroom. 1 group presents their work, 3 groups + the professor provide feedback. For my first prototype, the two guys at the bottom are manually tracking group activity. A large display is positioned top right with a live visualisation.

So on to the idea I’ll be implementing this week. In the InfoVis class, each group of students (4 groups of 3 students each) is required to present their work to the class. Every group not presenting, including the professor/teaching assistant/…, can ask questions and give feedback. Contributing to such a discussion is useful for everyone, so visualizing how much each group contributes can be interesting. It might just help create a better balance in the time each group spends talking, which is a good thing (?). See the image below for a similar application, which inspired this idea.

Bachour, K., Kaplan, F., Dillenbourg, P., “An Interactive Table for Supporting Participation Balance in Face-to-Face Collaborative Learning,” IEEE Transactions on Learning Technologies, vol. 3, no. 3, pp. 203–213, July–Sept. 2010.

Students did see problems with visualizing these amounts: people would give feedback just… to give feedback! We could end up with a lot of mediocre “filler” feedback (“That’s great”, “I like that”, “I totally agree”). Visualizing the amount of feedback says nothing about its quality, so a rating system could also be beneficial.

That raised another alarm bell. “Negative rating will make me feel as if my feedback isn’t appreciated”, a student reacted. It was suggested that positive feedback alone, similar to Facebook’s “like”, could be experienced less negatively. So that’s what we’ll go for!

I’ll spare you the details, but Node.js and Socket.IO are going to be my best friends in making all of this happen by next Monday. I’ll create an interface to manually log who is talking (if the entire thing makes sense after deployment, I can look into microphones or noise sensors…), I’ll give each group a “like” button, and I’ll focus the rest of my efforts on making some (hopefully) interesting, oh and did I say live?, large-display visualizations.
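
To give a taste of that manual logger, here is a sketch of its client side; the event names and group ids are assumptions.

```js
// Manual "who is talking" logger (sketch): an observer clicks a group's
// button when that group starts talking; the previous speaker is stopped.
const socket = io('http://classroom-server:3000'); // hypothetical server URL

let talking = null;
document.querySelectorAll('button.group').forEach((button) => {
  button.addEventListener('click', () => {
    if (talking) socket.emit('talk-stop', { group: talking });
    talking = button.dataset.group;
    socket.emit('talk-start', { group: talking });
  });
});
```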

To give you an idea of what I have in mind, here are some sketches of a few examples I came up with… Comments much appreciated of course!

Distribution of time spent giving feedback, per group. Colors indicate rated feedback.
Interaction between groups, sequence of group feedback. Colors indicate feedback.
Size of tree indicates amount of feedback by group. “Likes” are visualized by adding apples to the tree. Based on iTree (Nakahara J, Hisamatsu S, Yaegashi K, Yamauchi Y (2005) iTree: does the mobile phone encourage learners to be more involved in collaborative learning? In: Proceedings of the 2005 conference on computer support for collaborative learning: learning 2005: the next 10 years! (CSCL ‘05). International Society of the Learning Sciences, pp 470–478)
Size of group indicates ratings received. Lines between group X and Y indicate how many times group X gives feedback after group Y, and vice versa. An alternative, and probably better, approach would be to visualize who gives feedback to whom. Based on (Roberto Martinez Maldonado, Judy Kay, Kalina Yacef, and Beat Schwendimann. 2012. An interactive teacher’s dashboard for monitoring groups in a multi-tabletop learning environment. In Proceedings of the 11th international conference on Intelligent Tutoring Systems (ITS’12), Stefano A. Cerri, William J. Clancey, Giorgos Papadourakis, and Kitty Panourgia (Eds.). Springer-Verlag, Berlin, Heidelberg, 482–492.)
Similar to above, this time using arcs to visualize interactivity between groups. Based on (Nagel, T., Duval, E., Vande Moere, A., Kloeckl, K., Ratti, C.: Sankey Arcs — Visualizing edge weights in path graphs. Eurovis 2012, Vienna, Austria, 5–8 June 2012, Eurovis 2012, pp. 55–59, Eurographics Association)
A bit trickier to explain when not animated. The visualization adds a square for every 5 seconds of talk. A new square is drawn in the direction of the group talking, e.g. if only group 3 talks, a line of squares is drawn at a 45 degree angle. Other groups’ activity pulls the new squares towards their location, e.g. on the left you see that after a while the professor starts talking, making the new squares move to the bottom. Like tug of war! Overlapping squares turn orange, and with even more overlap, green. In the left image, the green square means there was a lot of overlap between group 3 and the professor, meaning there was a nice balance between the two (lots of back-and-forth maybe). Groups 1 and 2 didn’t participate at all, though. In the right image, there is a nice balance between all 4 groups! (A sketch of one possible placement rule follows below.)
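
The design above only specifies that new squares are pulled towards whoever talks; the group positions, step size and exact pulling maths in this sketch are my own assumptions.

```js
// Every 5 seconds a new square is placed, pulled from the previous square
// towards the groups that talked during that interval.
const ANCHORS = { // pulling directions for the 4 feedback-giving parties
  group1: { x: -1, y: -1 }, group2: { x: 1, y: -1 },
  group3: { x: 1, y: 1 },   professor: { x: 0, y: 2 },
};
const STEP = 10; // pixels per 5-second tick

// `talked` maps each party to its seconds of talk during the last tick.
function nextSquare(prev, talked) {
  let dx = 0, dy = 0, total = 0;
  for (const [who, secs] of Object.entries(talked)) {
    dx += ANCHORS[who].x * secs;
    dy += ANCHORS[who].y * secs;
    total += secs;
  }
  if (total === 0) return prev;        // silence: no movement
  const len = Math.hypot(dx, dy) || 1; // normalize, then take one step
  return { x: prev.x + (STEP * dx) / len, y: prev.y + (STEP * dy) / len };
}

// If only group3 talks, squares march at 45 degrees; once the professor
// joins in, new squares start drifting towards the bottom: tug of war.
```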

Spreading the word

We’ve had the opportunity to present our work on Learning Analytics visualisations and dashboards at quite a few events lately. Here’s a quick rundown.

In June we visited Harvard to attend the Learning Analytics Summer Institute. During the first day, all PhD candidates could give a 1-minute quick intro of their work. My 1 minute of fame, together with the poster session, caught the attention of quite a few people. Among them was Zachary Pardos, a professor at Berkeley, who is working with MOOC data and was interested in combining our visualisations with their data. More on that soon!

We also gave a half-day workshop on Visual Learning Analytics, explaining both the basics and the pitfalls of data visualisation techniques, applied to Learning Analytics. Quite a nice turnout, and a very active, enthusiastic group.

LASI workshop

In August we presented our teacher dashboard at OBIE2014 at ICWL2014. Sadly I couldn’t go to the event, so Jose volunteered to present our work. More details on his experiences can be found on his blog.

September was Graz, Austria. At the weSPOT consortium meeting we discussed the progress of the European project. Lots of interesting discussions about our current dashboards, and we set a date for at least one really interesting evaluation setting (our tools are evaluated in different pilots, but when we get to run them ourselves we’re more excited ;)). That same week was EC-TEL2014, where we demoed some of our more recent work at the ARTEL2014 workshop.


These events resulted in lots of interesting discussions: how does one evaluate these visualizations, what do we want to evaluate (we’re computer scientists, not pedagogy folk), how do we make visualisations more accessible for teachers and students in schools,… Lots of questions, few answers. At least everyone agreed: it’s not that simple.

Anyway, that leaves room for lots of research. So at least I know what I’ll be doing these next few years … ;)

Slides/papers/etc of these events:

2nd Learning Analytics Summer Institute (LASI2014)
Visual Learning Analytics workshop
slides part 1 / part 2

1st International Workshop on Open Badges in Education (OBIE2014) @ 13th International Conference on Web-based Learning (ICWL2014)
slides / paper

4th Workshop on Awareness and Reflection in Technology-Enhanced Learning (ARTEL2014) @ 9th European Conference on Technology-Enhanced Learning (ECTEL2014)
slides / paper

Engaging students in Learning Analytics

This article was originally posted on Medium.


Learning Dashboards, which visualize Learning Traces left behind by students, have many applications: they help teachers keep track of their students (with all the privacy concerns that follow), and they assist students in keeping track of their own progress.

Teachers love these dashboards. The more data available, the better these dashboards can help them steer a course, find struggling students and assist in grading. Students, however, tend to experience elaborate dashboards as a Big Brother tool straight out of 1984 (even though probably none of them has read the book or seen the movie).

But it is important to get students on board. The Quantified Self (QS) movement has shown that tracking and being aware of your own data can help you change and improve your ways, and learning should be no different.

But students are busy. They have no time for this QS nonsense. They have exams, tasks, projects, … they are pretty much the “busiest” people you will ever meet. So introducing “one more thing” that, to them, has no direct impact on their ultimate goal, a good grade? Probably not going to be a success.

I believe one way of solving this problem is by taking a little detour. Figure out what it is students want, what is lacking in their workflow and how we can help. And if we find a problem we can solve, we might as well do it in a Learning Dashboards kind of way. (Of course, if the problem cannot be solved by putting Learning Traces to good use, find another problem.)

Meet LARAe (Learning Analytics Awareness & Reflection environment), a dashboard created specifically for teachers and students of our User Interface course of 2014. From our previous work and by talking to both teachers and students, we pinpointed a bunch of issues in their workflow and created a dashboard that attempts to do just that: improve their workflow.

In this course, students generate lots of data in the form of blog posts, comments and tweets. With 13 groups of 3 students each, blogging 3-5 times a week and commenting on each other’s posts, it is difficult for both teacher and students to keep track. Many use RSS readers, some manually explore the blogs on a weekly basis. A workable process, but not ideal for everyone.

So LARAe tries to make this workflow a bit more straightforward. When it does, teachers start using it, and so do students. And suddenly everyone is looking at Learning Analytics data…

Being regularly confronted with these visualizations of Learning Traces, students might start visiting the dashboard for reasons other than their original intent. They become curious and start to explore, gaining awareness of their own activities and those of others. And maybe they learn something along the way, improve their learning process and, who knows, get that better grade.

Learning Dashboards have many applications. Taking them beyond mere visualizations of numbers and statistics will not only help broaden their range of applications, but also get them in the hands of more people. Like students. And in the end, it is all about the students.

Privacy Concerns with Learning Dashboards

Screenshot of the LARA.e3 dashboard.

Let me first get the name LARA.e out of the way. As we keep developing new learning dashboards, we constantly need to come up with new names, which can get confusing very quickly. Therefore we rebranded the entire family as LARA.e, which stands for Learning Analytics Reflection & Awareness environments. We’ve already mentioned Navi Badgeboard (LARA.e1), a dashboard that helps students become more aware of their activities and achievements in class, and Navi Surface (LARA.e2), which creates a discussion environment by acting as a catalyst for reflection. The latest dashboard to join the family is called just plain LARA.e3.

LARA.e3

LARA.e3 (see screenshot above) focuses on teachers and provides them with a detailed overview of all activity traces of the class. It displays a massive amount of information regarding (in our case) blog posts, comments, tweets and badges, but can be modified to visualise any type of learning traces through multiple visualisations. Global filtering options allow the teacher to drill down on time frames, students and groups. Students and groups can easily be compared, and with the addition of course grades (left out of the screenshots for privacy reasons), student activity can be compared to their actual results in class. LARA.e3 can be displayed on large displays, smart boards, digital tabletops and more, allowing teachers to consult the data individually, with colleagues as a basis for discussion, or in class with the students.
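
To make the drill-down concrete, here is a sketch of what such filtering amounts to, over an assumed trace model of { type, studentId, groupId, timestamp }; LARA.e3’s actual data model and API may differ.

```js
// Global filtering sketch: narrow the class's traces down by time frame,
// group and/or student before feeding them to the visualisations.
function filterTraces(traces, { from, to, groupId, studentId } = {}) {
  return traces.filter((t) =>
    (!from || t.timestamp >= from) &&
    (!to || t.timestamp <= to) &&
    (!groupId || t.groupId === groupId) &&
    (!studentId || t.studentId === studentId));
}

// e.g. all blog posts by group 7 during one week of the course
const traces = [
  { type: 'blogpost', studentId: 's12', groupId: 'group-7', timestamp: new Date('2014-02-12') },
  { type: 'tweet',    studentId: 's03', groupId: 'group-2', timestamp: new Date('2014-03-01') },
];
const posts = filterTraces(traces, {
  from: new Date('2014-02-10'),
  to: new Date('2014-02-17'),
  groupId: 'group-7',
}).filter((t) => t.type === 'blogpost');
```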

LARA.e3 is all about awareness and reflection for the teacher: awareness of activities across the entire class can help teachers intervene when groups or individual students run into issues, while reflection on the course structure and content helps teachers gain a deeper understanding, which in turn helps find and address issues with the course.

Any student reading this blog post will most likely cry Big Brother, something I’ve tried to address in the previous post. We do have access to all their data, and with LARA.e3 we do see everything. And while we can tell them it’s all about self-awareness and that this doesn’t affect their grades, it’s impossible to really convince them not to worry or be wary of our motives.

It’s a trust issue, present in student-teacher relationships just as it is in employee-employer relationships (we’ve all heard the company badge/fire safety excuse before, right?). The only real solution I see is anonymization.

To convince students of their anonymity, they need full control over their identity. Neither the teacher nor other students should be able to figure out, by any means, the true identity of the users displayed on the dashboards. Only then might we create real trust and convince them of a true Quantified Self learning environment, where the focus is self-awareness without the underlying fear of being watched.
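
One minimal building block for this could look like the sketch below: replace every student id with a keyed hash before any trace reaches a dashboard, so aliases stay stable across sessions but cannot be reversed without a secret that neither teacher nor students hold. This illustrates the idea; it is not how LARA.e implements it.

```js
// Pseudonymization sketch: a keyed hash (HMAC) turns a student id into a
// stable alias. Whoever holds the secret could re-identify students, so
// it must live with the system, not with the teacher.
const crypto = require('crypto');

const SECRET = process.env.ANON_SECRET; // kept out of everyone's hands

function alias(studentId) {
  return crypto
    .createHmac('sha256', SECRET)
    .update(studentId)
    .digest('hex')
    .slice(0, 8); // short, display-friendly alias
}

// The same student always maps to the same alias, so per-student trends
// stay visible on the dashboard without exposing who is who.
```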

Anonymous data can still provide us with interesting insights. While LARA.e3 will no longer help find the individual straggler, intervention is still possible at the class level, and every other benefit of the dashboard remains. Personal student dashboards (e.g. LARA.e1) do not lose any of their functionality. Another plus might even be better data, as fewer students might attempt to game the system.

To end this post, I’d like to ask any teachers reading this: do you have interesting stories of students’ reactions to the introduction of learning analytics in your courses, and how did you handle this trust issue, if there was one? To the students: feel free to share your thoughts on anonymous dashboards (or the not-so-anonymous LARA.e3).