Random assessments. The word random alone can make it feel like you’re losing control over your assessment. After all, how can you be sure what ends up in an assessment if the questions are only selected at the moment of delivery?
In reality, the opposite is true. When you work with a well-structured item bank and a carefully designed test blueprint, random assessments become the foundation for fair, efficient, and high-quality testing.
In this blog, you’ll discover why.
Efficiency: build once, use often
A well-designed item bank pays off significantly. When multiple questions meet the same criteria, the system can generate a valid and equivalent assessment every time.
This results in:
- Less work when creating assessments and resits
- No need to rebuild an entire assessment if one question needs replacing
- Higher quality, because you develop questions more deliberately against clear standards
It does require an upfront investment in building a well-filled and properly tagged item bank. You need enough questions for the system to choose from and to create different combinations based on the test blueprint.
To make this possible, each question must be tagged with metadata: information about the question. In addition to learning objectives, this could include question type and taxonomy level. For example, learning objective X might require an open-ended question at the analysis level.
By tagging questions correctly, you enable the system to select questions based on these criteria. And that is essential for random assessments. They can only be implemented when the item bank:
- Contains a sufficient number of questions
- Is properly tagged with metadata
- Includes multiple alternatives for each combination of criteria
While this requires extra effort upfront, it leads to better-quality questions and saves time in the long run, as you no longer need to rebuild assessments from scratch each time.
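To make the idea concrete, here is a minimal Python sketch of blueprint-driven selection. The item bank fields, blueprint format, and function name are illustrative assumptions, not part of any specific assessment system:

```python
import random

# Hypothetical item bank: each question carries metadata tags.
item_bank = [
    {"id": 1, "objective": "X", "type": "open-ended", "level": "analysis"},
    {"id": 2, "objective": "X", "type": "open-ended", "level": "analysis"},
    {"id": 3, "objective": "X", "type": "multiple-choice", "level": "recall"},
    {"id": 4, "objective": "Y", "type": "multiple-choice", "level": "recall"},
    {"id": 5, "objective": "Y", "type": "multiple-choice", "level": "recall"},
]

# A test blueprint: each entry pairs selection criteria with a question count.
blueprint = [
    ({"objective": "X", "type": "open-ended", "level": "analysis"}, 1),
    ({"objective": "Y", "type": "multiple-choice", "level": "recall"}, 1),
]

def generate_assessment(bank, blueprint):
    """Draw a random but blueprint-conforming set of questions."""
    assessment = []
    for criteria, count in blueprint:
        # Keep only questions whose metadata matches every criterion.
        matches = [q for q in bank
                   if all(q[key] == value for key, value in criteria.items())]
        if len(matches) < count:
            raise ValueError(f"Not enough questions for criteria {criteria}")
        assessment.extend(random.sample(matches, count))
    return assessment
```

Every run satisfies the same blueprint, so each generated version is structurally equivalent; and the `ValueError` branch shows why the bank needs multiple alternatives per combination of criteria.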
Every delivery is unique
A random assessment does not consist of a fixed set of questions. Instead, the system selects questions from the item bank based on predefined criteria.
This means each candidate receives a unique version of the assessment. As a result, sharing or leaking questions becomes far less effective. It also makes organising resits much easier, since there is no need to create an entirely new assessment.
You might wonder: are random assessments fair for resits? The answer is yes, arguably even more so than fixed assessments.
Each version follows the same structure and criteria, ensuring equivalence. Candidates also gain no advantage from memorising questions from a previous attempt, as they will receive a new variation.
You no longer need to create separate resit assessments. At the same time, randomisation increases assessment security on multiple levels:
- Candidates receive different questions.
- It becomes harder to collect or share questions.
- Any potential leakage has less impact.
When questions need replacing, whether due to exposure or because they have become outdated, you don't need to replace an entire assessment. You can simply remove old questions and add new ones, continuously and without disruption.
Semi-random: controlled flexibility
Many assume random assessments are completely uncontrolled, but in practice, you can guide them very precisely.
For example:
- Fixed sequence of topics
- Fixed opening or closing questions
- Randomisation within subcategories
This allows you to combine structure with flexibility, ensuring both control and variation.
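The bullet points above can be sketched in a few lines of Python. This is an illustrative example only (the function, field names, and bank contents are assumptions): a fixed opening question, a fixed sequence of topics, and randomisation within each topic.

```python
import random

def semi_random_assessment(bank, opening_id, topic_order, per_topic):
    """Fixed opening question and topic sequence; random picks within topics."""
    opening = next(q for q in bank if q["id"] == opening_id)
    assessment = [opening]
    for topic in topic_order:  # the sequence of topics is fixed
        pool = [q for q in bank
                if q["topic"] == topic and q["id"] != opening_id]
        # variation happens only *within* each subcategory
        assessment.extend(random.sample(pool, per_topic))
    return assessment

bank = [
    {"id": 1, "topic": "safety"},
    {"id": 2, "topic": "safety"},
    {"id": 3, "topic": "safety"},
    {"id": 4, "topic": "procedures"},
    {"id": 5, "topic": "procedures"},
]

exam = semi_random_assessment(bank, opening_id=1,
                              topic_order=["safety", "procedures"],
                              per_topic=1)
# exam[0] is always question 1; the remaining picks vary per candidate.
```

Every candidate sees the same opening question and the same topic order, while the individual questions still differ: structure with flexibility.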
So what’s holding you back?
With random assessments, you don't lose control over quality. In fact, you gain more of it.
They force you to critically evaluate your questions: do they measure what they're supposed to measure? Do they meet the defined criteria?
As a result:
- Assessment quality improves
- Fairness increases
- Fraud is reduced
- Maintenance becomes easier
Random assessments don’t reduce control—they strengthen it.