Streaming Media

 

Educators Need Better Video Analytics to Measure Learning
Measuring quality of service is big business in the streaming video world, but educational video metrics lag far behind. Part of the reason is that educators need completely different metrics.

From the 2016 and 2017 editions of Streaming Media magazine’s list of 100 Companies That Matter Most in Online Video, I count three that provide innovative viewership data analysis services as their core business, in addition to the many others that offer analytics peripheral to other services. Those three are Conviva, Mux, and Vidyard, which have collectively raised at least $100 million in publicly announced venture capital investment over the past few years. Viewership analysis is big business—content creators and owners place high value on insight into their content consumers.

When it comes to instructional media, viewership metrics are of serious interest only to the extent that they reveal how viewing habits influence learning. We shouldn’t yearn for huge viewership numbers for their own sake: If a lecture video is watched over and over again by the same captive audience, that could indicate a major problem with the video. Students want to learn the material and move on to the next topic, so if they’re watching a video more than once or twice, it is reasonable to conclude that they’re struggling to grasp the explanation, rather than to assume they’re choosing to rewatch it for pleasure.

Unfortunately, teachers have access to only basic information, if any metrics at all: how many times a video was played, how often a play continued to the end of the video, and possibly which students watched. This data is of limited use to a conscientious educator. What sort of interventions are effective and appropriate if a teacher knows only which videos aren’t being watched to completion, which are being watched unexpectedly often, and possibly which individual students are watching them too much (and might benefit from a possibly unwanted check-in)?

What viewership behaviors for instructional video can we measure that may be more actionable? More interesting than how many times a student watched a video is what they did during each video viewing session. If students conclude that they’re lost, they’ll seek back to some point in the video where they felt they were not yet lost. Since we can trivially record “seek events,” we may find that, in aggregate, students tend to frequently seek back to rewatch a particular portion of a video, so we may be able to infer from that behavior what the specific problem with the video was. We may alternatively conclude that the portion students are rewatching is indeed difficult material, and we may decide to provide additional material to assist students struggling with that concept. The point is that for educational video, it’s more important to know what parts of the video students are spending time on rather than how much total time they’re spending.
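In code, the aggregation described above might look like the following minimal sketch. The function name, the event format (a list of `(from_sec, to_sec)` pairs, one per seek), and the bin size are all hypothetical illustrations, not any particular analytics product's API:

```python
from collections import Counter

def rewatch_hotspots(seek_events, bin_seconds=10):
    """Aggregate backward seeks into time bins to surface the most
    frequently rewatched portions of a video.

    seek_events: list of (from_sec, to_sec) tuples, one per seek event.
    Returns (bin_start_sec, count) pairs, highest count first.
    """
    counts = Counter()
    for from_sec, to_sec in seek_events:
        if to_sec < from_sec:  # only backward seeks signal rewatching
            counts[int(to_sec // bin_seconds) * bin_seconds] += 1
    return counts.most_common()

# Hypothetical seek log: three students jump back to around the
# two-minute mark, one student rewinds slightly near five minutes.
events = [(150, 120), (180, 125), (300, 290), (200, 121)]
print(rewatch_hotspots(events))  # → [(120, 3), (290, 1)]
```

A real pipeline would of course collect these events client-side from the player and aggregate across many sessions, but the core inference is this simple: the bin students keep seeking back into is the part of the video worth a second look from the instructor.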

There are products for educational video that attempt to blur the line between instruction and assessment by pausing the video at specified places to pop-quiz students, either to assess their comprehension or to prime their focus for the upcoming content. These “in-video” questions are supported natively by many learning management systems and video management platforms, and are also offered as a core product by companies like PlayPosit (formerly eduCanon) and EDpuzzle, which allow videos hosted on YouTube and other popular streaming platforms to be augmented with in-video quizzes.
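The pause-at-timestamp mechanism these products use can be sketched in a few lines. The data structure, field names, and question text below are all hypothetical; a real player would wire this into its playback-tick event:

```python
# Hypothetical in-video question bank, keyed by timestamp in seconds.
IN_VIDEO_QUESTIONS = {
    90: {"prompt": "Which term was just defined?", "choices": ["A", "B"], "answer": 0},
    240: {"prompt": "What does the next section build on?", "choices": ["X", "Y"], "answer": 1},
}

def questions_due(last_pos, current_pos, questions=IN_VIDEO_QUESTIONS):
    """Return the questions whose timestamps the playhead crossed between
    the previous tick and the current one. The player would pause and
    display each returned question before resuming playback."""
    return [q for t, q in sorted(questions.items()) if last_pos < t <= current_pos]

# A player polling once per second calls this on every tick:
due = questions_due(89, 90)  # crosses the 90-second mark, returns one question
```

The same hook is also where an assessment platform would record each student's answer, which is exactly how these tools turn passive viewing time into engagement data.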

This is a welcome augmentation and gives teachers a mechanism to very directly gauge student engagement and comprehension. I would caution against allowing that line between instruction and assessment to blur entirely, however. There should be a difference between the kinds of questions you would place in the middle of an instructional video and those you would put on a test. A very difficult question to assess mastery of material—like one you would put on a formal test—could derail a student’s learning process if used in-video. In-video questions ought to be relevant and reinforce the instruction in the video rather than prematurely challenging the students to demonstrate the ability to reflect and build on the teaching.

[This article appears in the January/February 2018 issue of Streaming Media Magazine as "Analytics to Measure Learning."]
