Attracted to Distractors
Learn more about the theory behind multiple choice question design and why a ‘good distractor’ helps us to assess gaps in students’ understanding.
By Anahita Parikh, Junior Data Analyst
Introduction
At Tassomai we analyse each student's understanding of every syllabus topic and identify their individual areas of weakness, so that teaching can be focused where it will most improve the student's learning outcomes. Our preferred method of evaluation is Multiple Choice Questions (MCQs), as they are a non-subjective, content-focused way of testing. MCQs also have fast mental processing times, so students can take in more information in one sitting, making them a great study tool!
The prevailing technique for evaluating the performance and assessment quality of MCQs is known as "Item Analysis", which quantifies the difficulty and discrimination index of individual questions.
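As a minimal sketch of these two classical item-analysis statistics (the data and function names here are hypothetical, not Tassomai's production code): the difficulty index is the proportion of students answering the item correctly, and one common discrimination index compares correct rates between the top- and bottom-scoring groups of students.

```python
# Illustrative item analysis on hypothetical data.
# Difficulty index: proportion of students who answer the item correctly.
# Discrimination index: correct rate in the top-scoring group minus the
# correct rate in the bottom-scoring group (27% groups are conventional).

def difficulty_index(responses):
    """Proportion of correct responses (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def discrimination_index(item_responses, total_scores, fraction=0.27):
    """Difference in correct rates between top and bottom scoring groups.

    item_responses: 1/0 correctness on this item, per student
    total_scores:   each student's overall test score
    fraction:       share of students in each extreme group
    """
    n = len(item_responses)
    k = max(1, int(n * fraction))
    # Rank students by overall score, ascending
    order = sorted(range(n), key=lambda i: total_scores[i])
    bottom = [item_responses[i] for i in order[:k]]
    top = [item_responses[i] for i in order[-k:]]
    return sum(top) / k - sum(bottom) / k

# Ten hypothetical students: correctness on one item, plus overall scores
item = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
totals = [80, 35, 72, 90, 40, 30, 65, 85, 45, 78]

print(difficulty_index(item))            # 0.6
print(discrimination_index(item, totals))  # 1.0
```

A discrimination index near 1 means the item cleanly separates stronger from weaker students; near 0 (or negative) flags an item worth reviewing.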
However, at Tassomai our interest is not limited to checking the quality of the questions: we also want to probe the extent of each student's understanding of a topic and how easily they can be swayed from the correct answer. We do this by evaluating the quality of the alternative answers (distractors).
In the MCQ model of testing, students are typically shown a question only once, so the measure of a ‘good’ distractor is simply the number of students who select that distractor as a percentage of total answers. A good distractor can help to assess gaps in students’ understanding.
We propose two dynamic measures of distractors. The first is what we call a ‘Powerful Distractor’ and the second is called a ‘Persistent Distractor’. We believe these measures will help us to develop a deeper and richer profile of our students and their learning journey.
Powerful Distractors
The first measure is called a Powerful Distractor, which we define as a distractor selected by a (statistically significant) majority of the students who get the question wrong.
An identified Powerful Distractor can provide a number of interesting insights. First, it could indicate that the question is poorly worded and is therefore confusing a large number of students. We can check this by looking at whether otherwise strong students are also being caught out; if they are, that likely reflects poor question quality rather than genuine difficulty.
Once we have validated that the question is not poorly worded, a Powerful Distractor instead indicates that the topic is difficult and that this distractor is similar enough to the correct answer to confuse weaker students. A question with a powerful distractor is considered desirable, as it helps differentiate between stronger and weaker students. So it would be helpful to have a powerful distractor for every question!
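One way to make "statistically significant majority" concrete is an exact binomial test: among the wrong answers, does one distractor attract significantly more picks than it would under uniform guessing across the distractors? This is a hedged sketch on hypothetical counts, not necessarily the test used in production.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def powerful_distractor(wrong_counts, alpha=0.05):
    """Return the distractor chosen significantly more often than chance
    among wrong answers, or None if no distractor stands out.

    wrong_counts: {distractor_label: number of students who chose it}
    Null hypothesis: each of the d distractors is equally likely (p = 1/d).
    """
    n = sum(wrong_counts.values())
    d = len(wrong_counts)
    label, k = max(wrong_counts.items(), key=lambda kv: kv[1])
    p_value = binom_sf(k, n, 1 / d)
    return label if p_value < alpha else None

# Hypothetical question with three distractors; 30 students answered wrong.
print(powerful_distractor({"A": 22, "C": 5, "D": 3}))  # A
print(powerful_distractor({"A": 11, "C": 10, "D": 9}))  # None
```

In the first case option "A" soaks up 22 of 30 wrong answers, far above the ~10 expected under uniform guessing, so it is flagged; in the second the wrong answers are spread evenly and nothing is flagged.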
Persistent Distractors
But can we go further? Our focus is on ‘teach’ rather than ‘test’. We probe our students repeatedly to make sure that what they already know is reinforced and what they don’t know is quickly corrected and absorbed. Therefore, our focus is not limited to evaluating a student’s response in isolation, but extends to identifying the pattern of responses to the same question over time. We have this data for every student and every response recorded in our database, which allows us to develop a unique profile and gain insight into each student’s understanding of the topic and their learning journey.
To this end, we introduce the measure of a ‘Persistent Distractor’, which we define as a student selecting the same distractor twice in a row, despite being shown the correct answer in between: in other words, they are getting stuck!
A question that persistently trips up students indicates that their understanding is weak and that they are not learning, since they keep making the same mistake. A good learning outcome can be defined as a question that is powerful (many students get it wrong the first time) but has low persistence (students learn and correct themselves on the second attempt).
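Detecting persistence from a student's response history is straightforward: scan the chronologically ordered attempts at a question and flag any distractor picked wrongly on two consecutive attempts. A minimal sketch, with a hypothetical record format:

```python
def persistent_distractors(history):
    """Find distractors a student picked on two consecutive attempts.

    history: one student's answers to one question, in chronological
             order; each entry is (chosen_option, is_correct).
    Returns the set of options chosen wrongly twice (or more) in a row.
    """
    stuck = set()
    for (prev, prev_ok), (curr, curr_ok) in zip(history, history[1:]):
        if not prev_ok and not curr_ok and prev == curr:
            stuck.add(curr)
    return stuck

# Hypothetical history: the student picks "B" twice in a row despite
# being shown the correct answer after the first attempt, then recovers.
attempts = [("B", False), ("B", False), ("A", True), ("A", True)]
print(persistent_distractors(attempts))  # {'B'}
```

A student who answers "B" then "C" then the correct "A" would produce an empty set: they got the question wrong, but never got stuck on one distractor.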
Example of Student Analysis
We illustrate below one such example of how to develop this learning profile for each student for a given question.
We first identify the powerful distractor for the given question. We then overlay the student’s answers to this question over time, and draw inferences about what that pattern likely represents.
Every answer a student gives offers insight into their understanding. As we can see above, this analysis helps us not only identify students who are weak on this topic but also gain some insight into the underlying cause. These insights allow us to profile each student’s understanding of every topic effectively and deliver targeted learning that addresses their specific areas for improvement.
Going to the next level
What makes distractors even more interesting is their applicability across so many different dimensions of the data. We can cluster question- or student-level data by natural groups such as schools, teachers, textbooks, question formats etc. to identify patterns of weakness within those groups. For example, are students at the same school struggling with similar concepts? Is a particular question format harder to comprehend? Our powerful and persistent distractor analysis could provide insightful answers to such questions too.
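The grouped view drops out of the same per-answer data. A hedged sketch (the record format and group labels are hypothetical) of computing a persistence rate per group, such as per school:

```python
from collections import defaultdict

def persistence_rate_by_group(records):
    """Share of answer events that were persistent, per group.

    records: iterable of (group, was_persistent) pairs, one per answer
             event; was_persistent is True when the student repeated the
             same wrong answer as on their previous attempt.
    """
    events = defaultdict(int)
    persistent = defaultdict(int)
    for group, was_persistent in records:
        events[group] += 1
        persistent[group] += was_persistent
    return {g: persistent[g] / events[g] for g in events}

# Hypothetical answer events from two schools
data = [("School X", True), ("School X", False), ("School X", True),
        ("School Y", False), ("School Y", False)]
print(persistence_rate_by_group(data))  # School X ~ 0.67, School Y = 0.0
```

A markedly higher rate for one school (or one question format) would be exactly the kind of group-level weakness pattern described above, and a prompt to investigate how that concept is being taught.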
Overall, our distractor analysis not only allows us to evaluate the quality of our questions but also to map and track the progress of our students. Ultimately, through this analysis, we are able to identify and target each student’s strengths and weaknesses. This enables us to truly go beyond the surface and partner with the student on their learning journey.