From Predictive Models to Instructional Policies
At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student, and the policy uses the student model to individualize instruction. Different policies require different properties from the student model; for example, the mastery threshold policy requires the student model to predict the probability that the student has mastered the current skill. A large body of prior work has focused on building student models that predict student performance on the next question. In this paper, we leverage that prior work with a new, simple when-to-stop policy that is compatible with any predictive student model. Using the expected number of learning opportunities as a metric, we compare this new policy to the mastery threshold policy and also investigate how the choice of model affects decision making. Our results suggest that the new policy behaves similarly to the mastery threshold policy, but stops providing questions to students who will not master the skill. We also find that policies based on models with similar predictive accuracies can differ substantially in the amount of practice they predict will be needed, suggesting that predictive accuracy alone is not a sufficient metric.
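To make the mastery threshold policy concrete, the following sketch pairs it with Bayesian Knowledge Tracing as the predictive student model. The parameter values and the 0.95 threshold are illustrative assumptions (0.95 is a common convention, not a value taken from this paper), and the function names are hypothetical:

```python
# Illustrative sketch: mastery threshold policy on top of a BKT student model.
# All parameter values below are assumed for demonstration, not from the paper.

def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, transit=0.3):
    """One Bayesian Knowledge Tracing update after observing a response."""
    if correct:
        # Posterior given a correct response (learned and didn't slip, or guessed)
        post = (p_mastery * (1 - slip)) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess
        )
    else:
        # Posterior given an incorrect response (learned but slipped, or didn't guess)
        post = (p_mastery * slip) / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess)
        )
    # Chance of learning from this practice opportunity
    return post + (1 - post) * transit

def practice_until_mastery(responses, p_init=0.2, threshold=0.95):
    """Mastery threshold policy: keep giving questions until the model's
    predicted mastery probability reaches the threshold. Returns the number
    of learning opportunities used, or None if mastery is never reached."""
    p = p_init
    for i, correct in enumerate(responses, start=1):
        p = bkt_update(p, correct)
        if p >= threshold:
            return i
    return None
```

With the assumed parameters, a student who answers every question correctly reaches the 0.95 threshold after three opportunities, while a consistently incorrect student is never stopped; this is the failure mode the abstract's new when-to-stop policy addresses.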