02-06-2026

Feedback and digital assessments

5 min read time
Lobke Spruijt

In many digital assessments, candidates are only shown whether their answer was correct or incorrect, and—if permitted—the correct answer itself. This turns out to be rather thin feedback from which candidates learn very little. With concepts such as learning-oriented assessment, programmatic assessment, and formative evaluation gaining traction, there is increasing attention on providing meaningful feedback—feedback that actually has a positive impact on learning.

In a review article by the Dutch National Institute for Education Research, written by Judith Gulikers (Wageningen University & Research), Tamara van Schilt-Mol (HAN University of Applied Sciences), and Liesbeth Baartman (Utrecht University of Applied Sciences), an overview is presented of research findings on feedback and the conditions under which it is effective.

Several of the research outcomes discussed in the article are also highly relevant to digital assessment. Below, we highlight a few key insights.

1. Formative assessment: feedback works while learning is still in progress

Substantive feedback is most effective when the learning process is still ongoing. Once a learning trajectory has been completed, candidates generally make little use of feedback.

This makes in-depth feedback particularly suitable for formative assessments—assessments designed to support learning and improvement. In summative assessments, which serve as a final evaluation or as the basis for decisions, feedback tends to have less learning impact and is mainly perceived as an explanation or justification of the result.

2. Content-rich feedback: more than right or wrong

Corrective feedback (right/wrong) has only a limited positive effect on learning. Research shows that content-rich feedback has a much stronger impact. This type of feedback focuses on why an answer is correct or incorrect.

With open-ended questions, this comes naturally: assessors can provide targeted feedback on a candidate’s response. However, content-rich feedback is also possible for closed question types—for example, feedback at the level of a specific answer option.

When closed questions are well designed, they include plausible distractors. Candidates who understand the material recognise the correct answer. If a distractor is chosen, this presents an opportunity not only for task-level feedback (“this is incorrect”), but also for process feedback: an explanation of the underlying reasoning error or misconception. In this way, candidates learn principles they can apply in other contexts as well.
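As a minimal sketch of this idea, per-option feedback could be modelled as a small data structure in which each answer option carries its own explanation. All names here are illustrative and not tied to any particular assessment platform:

```python
# Sketch: attaching content-rich feedback to each option of a closed question.
# Names and structure are illustrative, not a real assessment platform's model.

from dataclasses import dataclass

@dataclass
class AnswerOption:
    text: str
    is_correct: bool
    feedback: str  # explains the misconception behind a distractor,
                   # or why the correct option is right

question = [
    AnswerOption("4", True, "Correct: 2 + 2 equals 4."),
    AnswerOption("22", False, "This concatenates the digits instead of adding the values."),
]

def feedback_for(options: list[AnswerOption], chosen_index: int) -> str:
    """Return the content-rich feedback for the option the candidate chose."""
    return options[chosen_index].feedback
```

In this sketch, choosing a distractor yields an explanation of the underlying reasoning error rather than a bare "incorrect", which is exactly the process-level feedback described above.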

3. Timing of feedback: when it is given matters

It is not only what feedback you provide that matters, but also when you provide it.

For assessments aimed at practising basic skills, immediate feedback—given right after answering a question—can strengthen the learning process. It helps candidates internalise procedures and rules more quickly. Many assessment platforms offer specific settings to support this.

For more complex knowledge, delayed feedback is often more effective. Candidates are first challenged to think independently and make connections before receiving feedback. This timing can also usually be configured flexibly within digital assessment environments.

Richer feedback as part of the learning process

By implementing richer forms of feedback in digital (knowledge) assessments, candidates can actively use feedback throughout their learning process. For programmes that use learning-oriented or programmatic assessment, these assessments also serve as valuable data points within the broader body of information needed to make well-informed decisions about candidates.

In short, it pays to think carefully about feedback in digital assessments.

Are you already using the feedback capabilities in your assessment software?