Measuring Engagement: Learning Analytics in Online Learning by Griff Richards
Measuring student engagement has been a topic of discourse for the past few years. A great deal has been written about it, presented, and published, including entire books. Volumes cover engagement strategies in the classroom, how to conduct online discussions, and what makes those discussions successful. How do we measure success? How do we measure the impact of engagement on learning? Many answers have been suggested, all aligned with the recent conversation around data and data analytics.
Scholars at Thompson Rivers University published a paper on measuring student engagement in 2011. Little has changed since its publication: we face the same challenges. The tools and technologies have evolved, but the framework has not; we have not changed the angle from which we measure. The paper provides an overview of what it means to measure student engagement and what it means to be engaged, especially in online courses, and asks how we can accurately and precisely measure whether a given pedagogical approach was successful in terms of engagement and the alignment with, and achievement of, learning outcomes.
The National Survey of Student Engagement (NSSE) is a survey instrument that has been administered each year to hundreds of thousands of students across North American campuses. The survey asks questions about the frequency of academic and social activities over the course of a term and enables institutions to compare their levels of student engagement from year to year and with schools of similar size. According to the paper, the NSSE is the instrument used at the macro level across the nation.
The NSSE survey is also used to measure students' engagement at the University of Wisconsin–Madison. The focus is on five main categories: (1) participation in dozens of educationally purposeful activities, (2) institutional requirements and the challenging nature of coursework, (3) perceptions of the college environment, (4) estimates of educational and personal growth since starting college, and (5) background and demographic information.
At Indiana University, a study focused on validating the Online Student Engagement scale (OSE) by correlating student self-reports of engagement (via the OSE) with tracking data of student behaviors from an online course management system. It hypothesized that reported student engagement on the OSE would be significantly correlated with two types of student behaviors: observational learning behaviors (i.e., reading e-mails, reading discussion posts, viewing content lectures and documents) and application learning behaviors (posting to forums, writing e-mails, taking quizzes).
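The correlational design described above can be sketched in a few lines of Python. Everything here is illustrative: the student records are fabricated, and the field names (`ose`, `observational`, `application`) are assumptions, not the study's actual variables. The sketch simply shows what it means to correlate an OSE self-report with counts of the two behavior types.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student records: OSE self-report score plus LMS behavior counts
students = [
    {"ose": 78, "observational": 120, "application": 35},
    {"ose": 64, "observational": 90,  "application": 22},
    {"ose": 85, "observational": 160, "application": 48},
    {"ose": 55, "observational": 70,  "application": 15},
    {"ose": 91, "observational": 180, "application": 52},
]

ose = [s["ose"] for s in students]
r_obs = pearson(ose, [s["observational"] for s in students])
r_app = pearson(ose, [s["application"] for s in students])
print(f"OSE vs observational behaviors: r = {r_obs:.2f}")
print(f"OSE vs application behaviors:   r = {r_app:.2f}")
```

A real analysis would of course also test the coefficients for statistical significance; the sketch only computes the raw correlations the hypothesis is about.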
On the micro level, according to Richards (2011), using LMS data analytics to analyze asynchronous transactions, that is, measuring engagement by tracing high- and low-value messages, proved to be an inaccurate and invalid way to measure engagement. Mining such data only indicates active versus inactive participants. The analysis was done using SNAPP, which creates a visualization of learners as nodes, linked by lines representing the number of interactions between them. The challenge faced was that the generated plot only indicated who was active; it was not a measurement of the intellectual value of the postings.
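A minimal sketch of the kind of structure SNAPP derives, assuming a hypothetical reply log (the real tool extracts this from LMS forum pages). It makes the limitation concrete: the graph records only who interacted with whom and how often, so the most "engaged" node is simply the most frequent poster, regardless of message quality.

```python
from collections import Counter

# Hypothetical forum transactions: (author, replied_to) pairs from an LMS log
replies = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "bob"),
    ("carol", "alice"), ("dave", "alice"), ("alice", "carol"),
]

# Undirected edge weights: number of interactions between each pair of learners
edges = Counter(frozenset(pair) for pair in replies)

# Node "activity" as such analytics surface it: total interactions per learner
activity = Counter()
for pair, weight in edges.items():
    for learner in pair:
        activity[learner] += weight

for pair, weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    a, b = sorted(pair)
    print(f"{a} <-> {b}: {weight} interaction(s)")
print("Most active:", activity.most_common())
```

Nothing in `edges` or `activity` distinguishes a substantive contribution from a one-word reply, which is exactly the validity problem the paper identifies.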
Specific to the Blackboard LMS, the Thompson Rivers scholars concluded that the "hits" indicator is highly misleading as a measure of engagement. A comparative study of LMSs found that Blackboard generated more clicks than any other LMS, which was attributed to Blackboard's inefficient access architecture.
The conclusion was that analytics is not the most appropriate approach to measuring engagement. Data from tracking systems are not inherently intelligent, and hits and access patterns do not provide a valuable explanation of behavior or motivation. However, Richards (2011) states that, regardless, measuring something in this case is still better than measuring nothing.