ACCORDING TO the 2012 Horizon Report, the next big thing in education and training is learning analytics. There’s quite a lot of talk about it in the e-learning community as well, in various blogs, conferences and research papers.
Learning Analytics is the analysis of data (“breadcrumbs”, “data exhaust”) that learners leave behind when using digital systems. It can be collected when they use a learning management system, email, social networking, SMS messaging, web searches, etc. The technology to capture and store large quantities of this data is readily available.
The data can include information such as how many times a student logs in to a system, what pages they visit, how often and who they communicate with, how many drafts they create for a written piece, how many times they attempt a multiple choice question and so on. Outside the learning environment, it’s possible to capture data on the size and extent of their social networks, the frequency of status updates, emails and SMS messages, the number of blog and forum posts. We could also record the time taken to undertake any one of these activities. It’s all quantitative data.
Putting to one side all the privacy issues that are associated with this data ‘mining’, let’s have a closer look at what all this means for teaching and learning.
The questions we need to ask are: How worthwhile is it to gather this information? Does it really tell us what we need to know? How much does it help the teacher and the student?
The notion behind collecting all this data is that we will know more about the learner – what they find easy or difficult to do or comprehend, their preferred learning styles, their level of participation and engagement, their preferred method and style of communication – and, more generally, that we can construct a social profile of their learning behaviour.
According to the LA proponents, the objective is fourfold: prediction, intervention, personalisation and adaptation. Based on the data collected, a teacher can predict a student’s academic performance and, armed with this information, can intervene in the student’s learning and tailor the training more specifically, even changing the content and training methodology.
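As a toy illustration of the prediction step, a rule-of-thumb risk flag might combine a few engagement metrics. All the field names and cut-offs below are hypothetical, not taken from any real system:

```python
# Toy sketch of the "prediction" step: flag students who may need
# intervention, based on simple engagement thresholds.
# Field names and cut-offs are invented for illustration only.

def flag_at_risk(student, min_logins=5, min_posts=2, min_score=50):
    """Return True if a student's engagement metrics suggest intervention."""
    return (student["weekly_logins"] < min_logins
            or student["forum_posts"] < min_posts
            or student["avg_quiz_score"] < min_score)

students = [
    {"name": "A", "weekly_logins": 8, "forum_posts": 4, "avg_quiz_score": 72},
    {"name": "B", "weekly_logins": 2, "forum_posts": 0, "avg_quiz_score": 61},
]

at_risk = [s["name"] for s in students if flag_at_risk(s)]
print(at_risk)  # → ['B']
```

A real predictive model would of course be statistical rather than a handful of fixed thresholds; the point of the sketch is only that prediction here means turning captured metrics into an actionable flag.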
How different is digital metrics from what we routinely capture now?
A large amount of information is gathered on enrolment. This is identity data – who the student is – age, gender, location, reasons for enrolling, previous work or education experience and results, and so on. This forms a massive information store that can be correlated against other data, giving a global picture of student performance.
Interviews with students are rich with information, but they are subjective and not easy to conduct if the learning is entirely online or class sizes are large. Grounded in other quantitative data, however, interviews provide an opportunity to ask and understand: why?
Questionnaires are another data-gathering tool, and also highly subjective, although questions can be carefully composed to weed out impatient and ill-considered responses.
Perhaps the most powerful tool is observation (demonstration). In a classroom this is easy to do; online it is more difficult, which highlights the value of technologies that enable it, such as video, online conferencing, virtual classrooms, photos, etc.
Finally, testing (under pressure) is the ultimate indicator of a student’s progress in skill and understanding.
So, data on a learner is routinely captured. What is new with digital metrics is that it is quantifiable and objective. This is often compared to the doctor who questions how the patient feels, which is subjective. The doctor then checks the heart rate which reveals more objective information on which to make a correct diagnosis.
The problem is the sheer volume of raw digital data generated. It has no purpose or value unless it is processed and analysed. By whom? The learning analytics fantasists believe that this will be compiled by the data-collecting software, which will churn out ready-made graphs and charts of individual learner performance. It could spawn a whole industry. But until that time arrives, it’s the teacher or administrator who will need to trawl through the bits and bytes.
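To make “trawling through the bits and bytes” concrete, here is a minimal sketch of the kind of processing involved: rolling a raw event log up into a per-student summary a teacher could actually scan. The event format is an assumption for illustration, not any particular LMS’s schema:

```python
from collections import Counter, defaultdict

# Hypothetical raw event log: one record per captured action.
events = [
    {"student": "alice", "action": "login"},
    {"student": "alice", "action": "view_page"},
    {"student": "bob",   "action": "login"},
    {"student": "alice", "action": "login"},
    {"student": "bob",   "action": "quiz_attempt"},
]

# Aggregate the raw stream into per-student action counts.
summary = defaultdict(Counter)
for e in events:
    summary[e["student"]][e["action"]] += 1

for student, counts in sorted(summary.items()):
    print(student, dict(counts))
# alice {'login': 2, 'view_page': 1}
# bob {'login': 1, 'quiz_attempt': 1}
```

Even this trivial aggregation shows why the raw stream is useless on its own: the value only appears once someone (or some software) condenses it into something readable.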
LMSs already generate a lot of data, but this is not often used other than for ‘administrivia’. For example, it’s a requirement for online learning auditing that there is a measure not only of student participation, but also the proof of level of engagement, such as whether documents are read (which can’t actually be proved – only that the document was downloaded or opened). This data is collated as ‘evidence’, but it is rare for the data to be processed and analysed by a teacher for an individual student.
Let’s face it, digital metrics is a starting point only. The point is not just to capture the data, but to analyse it routinely and to do something about the results: change the curriculum focus, the activities or the assessment, or intervene directly in the learner’s progress, make adjustments, and provide assistance and feedback. The data is most useful for the struggling student, for discovering why they struggle and how their learning experience can be improved.
For example, we find in the data that logins to the learning system fall off after a few weeks, and test results decline. Is this happening generally with all students? Is it happening to a certain age group, gender, etc.? This might lead to a more general conclusion about the course itself, and require some changes. Individually, we would need to speak to the student to find out why their involvement has dropped off, which could be due to general disinterest, having to work, trouble at home, etc., and the solutions may lie outside the teaching environment.
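The login drop-off check described above can be sketched in a few lines: compare weekly login counts over time, overall and per cohort. The figures, cohort labels and the 30% decline threshold are all invented for illustration:

```python
# Sketch: detect whether logins fall off over the weeks, broken down
# by cohort (e.g. age group). All data and labels are hypothetical.

weekly_logins = {
    # week number -> logins per cohort
    1: {"under_25": 120, "over_25": 80},
    2: {"under_25": 110, "over_25": 78},
    3: {"under_25": 60,  "over_25": 75},
    4: {"under_25": 35,  "over_25": 74},
}

def trend(series, drop=0.7):
    """Crude trend check: declining if the last week is below 70% of the first."""
    return "declining" if series[-1] < drop * series[0] else "steady"

for cohort in ["under_25", "over_25"]:
    series = [weekly_logins[w][cohort] for w in sorted(weekly_logins)]
    print(cohort, trend(series))
# under_25 declining
# over_25 steady
```

In this invented example the decline is confined to one cohort, which is exactly the kind of result that tells the teacher where to start asking why, while the answer itself may still lie outside the teaching environment.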
So, let’s not get too carried away by all this. Digital metrics is just one more piece of the learning information jigsaw that, time permitting, can be used by teachers to moderate and improve their practices and technology systems, and provide advice and support to learners to achieve their aims and ambitions.