Adaptive Learning Tool for a Stanford University Project
This project is an experiment on the impact of adaptive learning, involving 10,000 online users.
The project is owned by Emma Brunskill of Stanford University.
The project’s aim is to test the positive impact of implementing Bayesian Knowledge Tracing (BKT) principles in online education.
The project builds on Emma’s previous experiment with 100 students, but the Adapt code was not ready to work with thousands of students and so could not provide statistical grounding for the theory.
Our task was to build brand-new infrastructure for the new experiment, which is expected to engage up to 10,000 students at Stanford and CMU.
The experiment environment:
- Pre-assessment: to measure the starting level of knowledge
- In-course assessments: to help students assimilate the material as the course progresses
- Post-course assessment: to measure the quality of the knowledge achieved
The hypothesis:
“In-course assessment adaptivity reduces the number of tests and keeps students engaged, while maintaining the quality of the knowledge they achieve”
Functionalities:
Essential Improvements:
- We needed to create an engine that would randomly divide the students into experimental groups
- We needed to align the test sequence inside the LMS with the Adapt logic
- We needed to create a system for raw data collection for further analysis (a sketch of such a record follows this list)
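To make the later analysis possible, every answer has to be captured as a raw record. Below is a minimal sketch of what such a record and its logging could look like; the field names, file format, and example values are illustrative assumptions, not the actual Adapt schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


# Illustrative assumption: one flat record per answered test item, so the
# analysis stage can later reconstruct each student's full trajectory.
@dataclass
class AnswerEvent:
    student_id: str
    group: str                # experimental cluster the student was assigned to
    skill: str                # skill set the test item belongs to
    item_id: str
    is_correct: bool
    mastery_estimate: float   # mastery probability after this answer
    timestamp: str


def log_event(event: AnswerEvent, path: str = "raw_events.jsonl") -> None:
    """Append the event as one JSON line; a flat log keeps downstream analysis simple."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")


log_event(AnswerEvent(
    student_id="s-001", group="adaptive-high", skill="fractions",
    item_id="q-17", is_correct=True, mastery_estimate=0.82,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```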
We provided load testing and code optimization to ensure smooth and stable Adapt functionality. The basic logic of Adapt can be illustrated as follows:
The explanation of the infographic:
- Adapt knows the answers to a pool of tests grouped by skill set
- Based on test success/failure, Adapt calculates the probability of the student mastering the corresponding skill.
As an example: success on tests 1 & 2 increases the probability that the student has mastered the skill; failure on test 3 lowers it
- Adapt keeps giving tests from a particular skill set until the probability of mastery reaches a certain level, called the “threshold”
- When the “threshold” for a particular skill set is reached, Adapt stops serving the related tests. Hence, a strong student can be given about 3-4 tests per skill set instead of 10-15 to prove mastery of the subject (illustrated below)
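The behaviour described above follows the standard Bayesian Knowledge Tracing update. The sketch below illustrates that update together with the threshold check; the parameter values and the 0.95 threshold are assumptions chosen for illustration, not the settings used in the actual Adapt code.

```python
# A minimal sketch of the Bayesian Knowledge Tracing update described above.
# Parameter values and the threshold are illustrative assumptions only.

P_GUESS = 0.2      # probability of a correct answer without mastery
P_SLIP = 0.1       # probability of a wrong answer despite mastery
P_TRANSIT = 0.15   # probability of learning the skill from one practice item
THRESHOLD = 0.95   # mastery level at which Adapt stops giving tests for the skill


def bkt_update(p_mastery: float, is_correct: bool) -> float:
    """Update the probability that the student has mastered the skill."""
    if is_correct:
        # Success on a test raises the mastery estimate.
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        # Failure lowers it.
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    # The student may also learn from the attempt itself.
    return posterior + (1 - posterior) * P_TRANSIT


def needs_more_tests(p_mastery: float) -> bool:
    """Adapt keeps serving items from the skill set until the threshold is reached."""
    return p_mastery < THRESHOLD


# Example: two successes followed by a failure, as in the text above.
p = 0.3
for correct in (True, True, False):
    p = bkt_update(p, correct)
    print(f"answer correct={correct} -> mastery estimate {p:.2f}")
print("block further tests:", not needs_more_tests(p))
```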
We developed an algorithm that randomly divides all students of the experimental online course into four clusters with different characteristics:
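The actual characteristics of the four clusters are not reproduced here; the sketch below only illustrates how such a configurable random split might be implemented, with hypothetical group names, thresholds, and seeding.

```python
import random

# Hypothetical configuration: these group names and threshold values are
# illustrative assumptions, not the actual experiment settings. Any number
# of groups with any thresholds can be defined here.
GROUPS = {
    "control":         {"adaptive": False, "threshold": None},
    "adaptive-low":    {"adaptive": True,  "threshold": 0.85},
    "adaptive-medium": {"adaptive": True,  "threshold": 0.90},
    "adaptive-high":   {"adaptive": True,  "threshold": 0.95},
}


def assign_group(student_id: str, seed: int = 42) -> str:
    """Assign a student to one group, pseudo-randomly but reproducibly.

    Seeding on the student id keeps the assignment stable across sessions
    while spreading students roughly evenly across the groups.
    """
    rng = random.Random(f"{seed}:{student_id}")
    return rng.choice(list(GROUPS))


print(assign_group("s-001"))   # e.g. 'adaptive-medium'
```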
These parameters are fully configurable, so any number of groups and any threshold level can be set. Our particular experiment is going to tell whether the hypothesis holds in practice.
We made the system generic rather than specific to Open edX, so it can be applied to any other LMS. As an example, CMU’s OLI integration is already underway.
Currently, 3,000 course participants are involved in the experiment.
We’ve built a system able to hold tens of thousands of participants, gather all the necessary data, and ensure the stable work of all other features and functions.
We’ve optimized all the calculations, statistics gathering, and internal processes, increasing the system’s performance by about 70%.