How students' performance improved on Tassomai - 2022 update
At Tassomai, we are always analysing data to measure where the software is most effectively supporting learning. One of our internal metrics analyses how students’ understanding of difficult questions changes over their time on Tassomai; this is the statistic we first published in our year-end summaries in 2019.
How does the ‘performance improvement’ measure work?
This is commonly termed a “pre & post” analysis. We looked at how students’ accuracy on a certain category of questions changed between the time they started on the course and the time they finished using Tassomai. The questions we chose to analyse were what we call “stretch questions” - questions with a high empirical measure of difficulty.
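To illustrate the idea, here is a minimal sketch of a pre & post comparison in Python. The column names (student_id, is_stretch, is_correct, answered_at) and the 25-answer window are assumptions made for readability, not a description of our actual pipeline.

```python
import pandas as pd

def pre_post_change(answers: pd.DataFrame, window: int = 25) -> pd.Series:
    """Per-student change in stretch-question accuracy between their
    earliest and latest answers (illustrative sketch only)."""
    # Keep only stretch questions, ordered by when they were answered.
    stretch = answers[answers["is_stretch"]].sort_values("answered_at")

    def delta(group: pd.DataFrame) -> float:
        early = group.head(window)["is_correct"].mean()  # accuracy near the start
        late = group.tail(window)["is_correct"].mean()   # accuracy near the end
        return late - early

    return stretch.groupby("student_id").apply(delta)
```

Comparing a fixed-size window at each end keeps the two accuracy estimates on an equal footing, even for students who answered very different numbers of questions.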
When students start on Tassomai, they only see a few stretch questions - it is only after the initial calibration phase, when they have completed 10-15% of the course, that they begin to see stretch questions more regularly. This is because we only want students to unlock these questions once they have demonstrated a decent level of basal knowledge in each topic.
As they progress through the course, the frequency of stretch questions - and the difficulty of those questions - gradually increases. This is based on a principle familiar from game design: building a flow state through “Goldilocks-zone” differentiation. Students are given a level of challenge which is just right for them, and this is calculated and reassessed in every topic for each user. So when a student is rapidly improving in a topic, we increase the question difficulty more quickly, whereas the question difficulty is capped in topics where they are struggling. This ensures that students are always challenged but never disheartened, keeping them engaged - while simultaneously scaffolding their learning, ensuring they always see relevant content.
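As a rough illustration of the “Goldilocks-zone” principle, the sketch below nudges a per-topic difficulty target up or down based on a student’s recent accuracy. The thresholds, step sizes and cap are hypothetical values chosen for clarity; they are not Tassomai’s actual parameters.

```python
def adjust_difficulty(current: float, recent_accuracy: float,
                      cap: float = 0.80) -> float:
    """Nudge a per-topic difficulty target (0.0-1.0) so the student stays
    challenged but not disheartened. All thresholds and step sizes here
    are illustrative assumptions, not Tassomai's real parameters."""
    if recent_accuracy > 0.85:       # improving rapidly: raise difficulty faster
        current += 0.10
    elif recent_accuracy > 0.65:     # in the sweet spot: raise it gently
        current += 0.03
    elif recent_accuracy < 0.50:     # struggling: ease off and cap the level
        current = min(current - 0.05, cap)
    return min(max(current, 0.0), 1.0)
```

Reassessed per topic for each user, a rule of this kind lets the challenge level rise quickly where a student is flying and hold steady where they are struggling.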
This therefore suggested to us that, since difficulty increases as students progress through Tassomai, students would show a similar level of performance on stretch questions at the beginning and end of the course.
Subsequent research conducted in September 2019 confirmed that, for students with little or no use of Tassomai over the course of the academic year, the change in attainment on stretch questions was in fact slightly negative - as expected, given that the content was getting slightly harder while these students were doing very little practice on the platform. This helped us validate our baseline.
Which students did we look at?
Our study aimed to cover the whole cohort, normalising for prior attainment and school context. However, for students with very low usage, there was not enough data for a valid comparison.
We filtered out students who had not passed the initial calibration period - that is, students who had completed less than 10% of the course. Those students would have had insufficient data to compare their start and end points, and insufficient engagement to be part of a meaningful analysis.
After removing these students, we looked at all the answers given to stretch questions by the remaining students across all 500+ UK schools that used Tassomai in the 21/22 academic year. We then took their earliest answers to these questions and compared them with their final answers, to see whether there was a net change in accuracy. We looked at this on a per-student basis, and then averaged that change for each school.
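For readers who like to see the mechanics, here is a hedged sketch of that filtering and aggregation step. The 10% completion threshold comes from the description above; the frame layout and column names (course_completion, school_id) are illustrative assumptions.

```python
import pandas as pd

def school_level_change(students: pd.DataFrame,
                        per_student_change: pd.Series) -> pd.Series:
    """Average the per-student change in stretch-question accuracy for
    each school, after excluding students still in calibration."""
    # Keep only students who have passed the calibration period
    # (completed at least 10% of the course).
    eligible = students[students["course_completion"] >= 0.10]

    # Attach each eligible student's accuracy change (e.g. from
    # pre_post_change above), dropping students with no comparison data.
    merged = eligible.set_index("student_id").join(
        per_student_change.rename("accuracy_change"), how="inner")

    # Per-student change first, then averaged within each school.
    return merged.groupby("school_id")["accuracy_change"].mean()
```

In this sketch, the result is one average accuracy change per school, mirroring the per-student-then-per-school averaging described above.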
Elsewhere, we have analysed these datasets on a per-student basis, removing the school context, and instead analysing how the change in attainment varied with increased use of the platform - the results of that study can be found on our impact page.
What were the results?
Inevitably, there is some noise when looking at this data on a per-student basis: not every student improved compared with when they first answered stretch questions. Some students would have been trying harder when they started the course, or would have been focusing on their weakest topics later in the academic year.
However, when we looked at the trends in the data, they told a compelling story: every school with a statistically significant number of students showed an improvement in performance.
The most important thing for us was to be able to tell schools how much use their students had made of Tassomai - how many thousands of questions they had answered, how many hours of practice they had done - and, in particular, how this work had translated into learning gains.
Beyond that, we could look at the national data set. Not only had we seen more than 430 million questions answered (almost double the previous year’s tally), with over 1,400,000 hours of work completed on the platform, but we also found that students’ academic performance continued to improve markedly through sustained practice.
The average increase in performance across all schools was a staggering 10.3%. Considering that the majority of students were in their second or third year of using Tassomai, building on the work done in previous years, this sustained improvement in standards of recall and attainment exceeded all of our expectations.
What this tells us is that, through this type of regular, personalised practice with feedback - and thanks to the amazing efforts of teachers to embed Tassomai as homework and as part of their school’s culture - students’ work has paid off, as they prove their ability to tackle harder material with equal or increasing success. It also demonstrates that, far from having diminishing returns, the sustained implementation of Tassomai over the course of secondary school supports students in reaching ever higher standards.
As engagement with Tassomai and its implementation in schools continue to improve, we’re thrilled to see the program helping to inform teaching, intervention and self-directed study - and to see it deliver more on its promise and do more for students’ learning.
If you’re interested in learning more about our data and research, please get in touch with us!