On Adaptive Learning: A Partial Response to Audrey Watters

In “The Algorithmic Future of Education,” Audrey Watters offers a sweeping critique of adaptive learning, arguing that “robot tutors” (her phrase) don’t benefit learners, that they are nothing new under the sun, and that, worst of all, they represent a cunning ploy by industry (in league with administrators and managers) to “subjugate labor” and to create “austerity.” According to Watters, adaptive learning and “algorithms” propel us toward a mechanized world that devalues learning, devalues labor, and devalues “caring.” Her latest piece on adaptive learning is part of her general skepticism toward, and ongoing criticism of, the educational technology industry.

Watters is one of the few observers and critics struggling with some of the deeper questions about educational technology. What is its shape? Who benefits? Where is it going? Who controls it? What does it mean for privacy and autonomy? Of late, she has also set her sights on some of the utterly nonsensical claims coming out of the adaptive learning world. (No researcher in their right mind would make such claims; nevertheless, the claims are out there and unfortunately cast a shadow over the entire industry.) Audrey Watters is a gadfly in the best sense of the term, and her arguments need to be taken seriously.

If Watters is asking the right questions, her answers are at times questionable. In a future post I hope to provide a fuller response to “The Algorithmic Future of Education.” In this post I have a narrower aim: to point out that Watters’s characterization of adaptive systems suffers from a factual error, one that leads to sloppy generalizations:

“What makes ed-tech programming “adaptive” is that the AI assesses a student’s answer (typically to a multiple choice question), then follows up with the “next best” question, aimed at the “right” level of difficulty. This doesn’t have to require a particularly complicated algorithm, and the idea [is] actually based on “item response theory” which dates back to the 1950s and the rise of the psychometrician. Despite the intervening decades, quite honestly, these systems haven’t become terribly sophisticated, in no small part because they tend to rely on multiple choice tests.”

The assertion is simply incorrect. It is also not the first time Watters has portrayed adaptive systems as based primarily on multiple-choice questions, powered algorithmically by item response theory (IRT), and essentially unimproved over the “intervening decades.” She repeats the claim in her TEDxNYED talk, and the theme runs through her other publications.
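
To make the picture Watters paints concrete, here is a minimal sketch of what IRT-style adaptive item selection looks like: the “next best question” loop she describes. The one-parameter (Rasch) model, the fixed-step ability update, and the tiny item bank are all simplifying assumptions on my part, not any vendor’s actual implementation:

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) model: probability that a learner with ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, asked):
    """The "next best question": pick the unasked item whose
    difficulty is closest to the current ability estimate (for the
    Rasch model, that is the most informative item)."""
    remaining = [i for i in range(len(difficulties)) if i not in asked]
    return min(remaining, key=lambda i: abs(difficulties[i] - theta))

def update_ability(theta, b, correct, step=0.5):
    """Crude ability update: nudge theta up after a correct answer,
    down after an incorrect one. (Real computerized adaptive testing
    engines use maximum-likelihood or Bayesian estimation; a fixed
    step keeps the sketch readable.)"""
    residual = (1.0 if correct else 0.0) - p_correct(theta, b)
    return theta + step * residual

# Simulated session: a learner of true ability 1.0 works through
# a small, made-up item bank.
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta, asked = 0.0, set()
for _ in range(4):
    item = next_item(theta, difficulties, asked)
    asked.add(item)
    correct = random.random() < p_correct(1.0, difficulties[item])
    theta = update_ability(theta, difficulties[item], correct)
    print(f"asked item {item} (b={difficulties[item]:+.1f}), "
          f"correct={correct}, theta={theta:+.2f}")
```

Notice that everything such a system “knows” about the learner is a single number on a difficulty scale. Keep that picture in mind, because it is precisely what the systems discussed below do not look like.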

Why is this incorrect? First, ALEKS, which is one of McGraw-Hill Education’s adaptive learning platforms, has never used multiple-choice questions, and it has been around for nearly a decade. Second, the algorithmic theory behind ALEKS, Knowledge Space Theory, is unrelated to IRT. Third, unlike Knewton, ALEKS is not a black box and never has been. Its algorithmic basis was developed by mathematicians and cognitive scientists working at UC Irvine and the University of Brussels. As a result, there is an extensive trail of published, peer-reviewed research on how it works and how it aims to advance learning outcomes.
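
Since Knowledge Space Theory is unfamiliar to most readers, the contrast is worth making concrete. KST models a domain as a family of feasible “knowledge states” (sets of problems a learner is capable of solving); an assessment poses open-ended problems to narrow down which state the learner occupies, and the “outer fringe” of that state is what the learner is ready to learn next. Here is a toy sketch of the idea; the four-item domain and its states are invented for illustration and bear no relation to the actual ALEKS engine:

```python
# A tiny, invented domain: four problem types.
ITEMS = {"a", "b", "c", "d"}

# The knowledge space: the feasible knowledge states. Not every
# subset of items is plausible; in this toy domain, for example,
# "d" cannot be mastered without "b" and "c".
STATES = [
    set(), {"a"}, {"b"}, {"a", "b"}, {"b", "c"},
    {"a", "b", "c"}, {"a", "b", "c", "d"},
]

def assess(can_solve):
    """Narrow down the learner's knowledge state by posing open-ended
    problems (no multiple choice required): each response rules out
    every state inconsistent with it."""
    candidates = list(STATES)
    asked = set()
    while len(candidates) > 1 and asked != ITEMS:
        # Ask the item that splits the remaining states most evenly.
        item = max(ITEMS - asked,
                   key=lambda i: min(sum(1 for s in candidates if i in s),
                                     sum(1 for s in candidates if i not in s)))
        asked.add(item)
        correct = can_solve(item)
        candidates = [s for s in candidates if (item in s) == correct]
    return candidates[0]

def outer_fringe(state):
    """What the learner is ready to learn next: items whose addition
    to the current state yields another feasible state."""
    return {i for i in ITEMS - state if (state | {i}) in STATES}

# Simulated learner who has mastered "a" and "b".
learned = assess(lambda item: item in {"a", "b"})
print(learned)                # {'a', 'b'}
print(outer_fringe(learned))  # {'c'}: ready to learn next
```

Note the contrast with the IRT sketch above: there is no multiple choice, no single difficulty scale, and the output is not a score but a state together with what the learner is ready to learn next.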

The same can be said of modern Intelligent Tutoring Systems (ITS). Even a cursory investigation reveals that a shift occurred in the 1970s and 1980s away from Computer Assisted Instruction (CAI) systems, which were based on behaviorist assumptions, toward ITS, which tried to incorporate advances in computer science, cognitive psychology, and artificial intelligence. Intelligent Tutoring Systems have never been about multiple-choice questions, nor have they drawn primarily on IRT as their underlying pedagogical framework.

As a cultural critic and cultural anthropologist, Watters will, I am sure, appreciate the importance of genealogy. IRT emerged in response to the increasing emphasis on high-stakes testing in the US, and it has also been associated with regimes for measuring “intelligence.” ALEKS, and systems like it, have an entirely different genealogy: they were designed to provide learners with feedback. The world of IRT is the world of summative assessment. The world of well-designed adaptive systems such as ALEKS, on the other hand, is the world of formative assessment. There is overwhelming evidence in learning science that formative assessments are among the most important levers we have for improving learning outcomes. Testing is not Learning. Researchers working in this space know this, and they have known it for a long time.

The best research also shows that our goal should never be to replace teachers. Our goal should be to empower teachers and support student-centered learning environments. Some of us believe that technology has a place in realizing this goal. Some of us also believe that which technologies are effective, and in which contexts, should be demonstrated by research and evidence, not anecdotes. 

[Note: The views stated in this blog are my own. I am not speaking on behalf of my employer, McGraw-Hill Education. This post has also not been reviewed or cleared by my employer.]


One Comment

  1. John Warner says:

    “Robot tutors” is not Watters’ phrase. It is a claim made by Knewton CEO Jose Ferreira. http://www.npr.org/sections/ed/2015/10/13/437265231/meet-the-mind-reading-robo-tutor-in-the-sky