When are machines better teachers?

Discussions with cognitive psychologists, neuroscientists, and a review of emerging research and tech leads us to the following ideas about machine teaching agents.

Where Human Agents Excel: Human tutors have historically excelled in three specific areas: finishing, scaffolding, and feedback. While human tutors almost always get learners to finish a problem correctly, machines have been less successful. Scaffolding, or ‘guided prompting’, is also common in human tutoring. A critical element of scaffolding is gradually transferring the burden of performing a skill from the teacher to the student, and it is highly effective. This nuanced interaction requires sophisticated, but not impossible, programming for machine agents. The lack of scaffolding in machine systems may explain why human tutoring has been more effective than computer tutoring in the recent past.

Where Human Agents Fail: Human tutors miss many opportunities to help students learn. When we discuss tutoring, we often picture the top 10–20% of tutors, overlooking the large majority of novice tutors who are not examples of excellence. It takes humans years to improve their performance, and critical feedback on that performance is not systematic in education today.

Also, humans have a biological and psychological response to other humans. In tutoring, this means learners can be scared of tutors, or can perceive the tutor as biased, as having an agenda, or even as untruthful.

Advances in machine learning have recently enabled a rapid evolution cycle not only for the machine agent, but also for the practice of tutoring everywhere. In fact, some believe that re-engineering tutor–student interactions to fix the pedagogical mistakes of existing tutoring systems (human and machine) ‘may produce a 2 sigma effect size’ (VanLehn, Jones & Chi, 1992).

Where Machine Agents Excel:

  • Tutor domain expertise, not experience, matters most. Deep analysis of tutoring outcomes finds no relationship between a tutor’s experience and effectiveness, as long as the tutor is a domain expert.

  • No Turing test needed. Machine tutors don’t need to be conversational. Studies find that learner control of dialogue during tutoring is quite low. This largely alleviates the requirement for a fully conversational agent that passes a low-level Turing test, a notional requirement that many strive for but which studies indicate does not dictate better learning outcomes.

  • A machine agent that demonstrates empathy, but that is ‘without a human agenda’, may be perceived as more truthful, less biased, and less scary by students. From a neuroscience perspective, machine agents might be the solution to scaling math teaching for anxious math learners who experience a high amygdala response.
