The first finding from the sandpits was that all of the lecturers saw merit in ceding some of their role in the assessment process to the learner. This was an important finding for this research, as the discourse on learner agency stresses the need for power to be transferred from the lecturer to the learner (Nieminen & Hilppö, 2020). Without the lecturer ceding some of this power, the learner would struggle to take an agentic role in the assessment:
(…) There is a balance of power. The assessment process seems to be loaded quite often on the part of the tutors… we should be looking at a scenario where you give me some feedback and I challenge you and you challenge me back (participant, sandpit 6).
Although there were clear references to an increased dialogue in the assessment process, an area also addressed by Nicol (2010) and Winstone and Boud (2019), concerns were raised about the time required to monitor learners’ engagement in assessment and feedback. There was a perception that the more lecturers engaged in dialogue with learners, the longer the assessment would take to mark, adding to an already busy workload. An example of encouraging student agency was suggested by one participant in sandpit 8, who felt that students should be able to ‘have the possibility of choosing their preferred method of feedback, if they prefer feedback to be verbal, or if they wanted video or audio’.
As they explored the different tools and features in the mock-up, almost all of the lecturers involved felt that technology could be used to encourage learner agency. The exception was one lecturer from healthcare, who said that a one-to-one or group meeting to discuss the feedback would be more relevant and less time-consuming. These findings may, however, reflect the wider adoption of digital assessment in both institutions and a perception that this was the ‘right’ answer to give; one may ask whether different findings would have been obtained had a different method been used to collect the data.
Below, we present the three clusters of themes identified during the data collection, and we also discuss how they can be included as part of DAFS design and assessment practices.
Preparing for the assessment
The mock-ups included several features intended to ensure engagement with the assessment brief from the outset, for example, a criteria matrix (also called an assessment rubric) and a message box in which learners could ask the lecturer or their peers questions about the assessment. Increasing learner agency before submission was seen as a critical part of the assessment process. Lecturers felt that learners make mistakes because they misinterpret the assessment brief or the grading criteria (sandpits 1, 2, 3, 6, 7 and 8).
One suggestion was that a DAFS should include a checklist in the assessment brief area, where learners are required to respond to a set of questions about their understanding of the assessment:
What might be nice is to … include a checklist that could be personalised by the lecturer, not just personalised, but tailored. So for example, you know, make sure you make all your references in this format, make sure that you have included the ethics form, you know, whatever might be additionally needed … just to provide a little bit of scaffolding. (Participant, sandpit 7)
Participants suggested that checklist questions could relate to referencing, similarity, the length of the assessment, their understanding of the assessment brief and the assessment criteria, spell checking, or even reflections on the feedback provided in a previous similar assessment (sandpits 2, 6, 7 and 8). A participant in sandpit 6 reinforced this by saying that this approach would be ‘an intervention to change behaviour’.
And we liked the idea of the fit to submit box. And we thought that box could maybe be used for the learners to do a self-assessment exercise to check that they had got everything ready. (Participant, sandpit 6)
This would, on the one hand, ensure that learners confirmed for themselves whether they had addressed typical mistakes and, on the other hand, start the submission process earlier. Equally, it could be used as a means to ensure that learners engage with the assessment criteria before starting to write their submission, which was the main concern shared by the participants in sandpit 6.
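To illustrate how such a ‘fit to submit’ step could be represented in a DAFS, the minimal sketch below (in Python) shows a lecturer-tailored checklist that must be completed before submission is enabled. All names, such as ChecklistItem and can_submit, are hypothetical and are not taken from any existing system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    # A single question the learner must confirm before submitting,
    # e.g. "Are all references in the required format?"
    prompt: str
    confirmed: bool = False

@dataclass
class FitToSubmitChecklist:
    # Tailored by the lecturer for a specific assignment brief.
    assignment_id: str
    items: List[ChecklistItem] = field(default_factory=list)

    def confirm(self, index: int) -> None:
        self.items[index].confirmed = True

    def can_submit(self) -> bool:
        # Submission is only enabled once every item has been confirmed.
        return all(item.confirmed for item in self.items)

# Example: a lecturer tailors the checklist for one assignment.
checklist = FitToSubmitChecklist(
    assignment_id="EDU-101-essay",
    items=[
        ChecklistItem("Are all references in the required format?"),
        ChecklistItem("Have you included the completed ethics form?"),
        ChecklistItem("Have you checked your work against the marking criteria?"),
    ],
)
checklist.confirm(0)
print(checklist.can_submit())  # False until every item is confirmed

In this sketch, the lecturer tailors the questions per assignment, as suggested in sandpit 7, and the system simply withholds the submission button until every item has been confirmed.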
Other suggestions were (i) to provide exemplars of previous assessments to help learners improve their understanding of the assessment criteria (sandpit 2), (ii) to ask learners to self-assess their work against the criteria (sandpits 1, 2, 6 and 8), and (iii) to involve them in peer-feedback tasks (sandpit 2). These exercises would aim to ensure that learners engaged with the assessment criteria and improved their assessment literacy:
… what I have done in class with my learners is I’ve taken a sample paper from a previous group and then anonymised it and then having them mark it, using the rubric and the assessment brief. Some of them did it okay. Others struggled. But it was starting to make more sense when they could actually have a go at doing it themselves. (Participant, sandpit 2)
The use of exemplars to promote further clarity about what is intended by the assessment, and to allow the learner to engage with it by assessing a piece of work using the assessment criteria and rubrics, is a widely discussed practice (Carless & Chan, 2017; Dixon, Hawe, & Hamilton, 2019; Jonsson, 2013).
Using assessment rubrics to aid learners in self- or peer-assessing work was supported in four sandpits. All of the participants in these sandpits agreed that the use of assessment rubrics allowed learners to compare their work against specific criteria and, by doing so, engage more actively with the assessment; they had a clear understanding of how they would be assessed and what they were required to do. Recent research (Nieminen & Tuohilampi, 2020) highlights the importance of promoting self-assessment especially in summative (high-stakes) models of assessment, as only at those moments are learners fully engaged with the process.
Whilst supporting and valuing the exercise of engaging with the assessment from the outset, one of the participants was concerned about the feasibility of this approach: who would monitor the exercise, and why not do it in the classroom? (participant, sandpit 2). This participant believed that discussing feedback and assessment was more effective when done face to face. This concern was amplified by heavy academic workload pressures and the pressure to release grades to learners quickly (the timescale for this ranged from 2 to 3 weeks in both institutions).
As part of this cluster, we recommend that DAFS give students the opportunity to self-assess their work against the grading criteria. To achieve this, lecturers will need to publicise and discuss the marking criteria with their learners, which in turn will give learners a more in-depth understanding of the assessment brief. DAFS can support this by providing a mechanism that requires the student to self-assess their work before submission, and this task should be monitored by the lecturer to ensure there is proper engagement with the assessment brief.
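As a complement to the checklist above, the following sketch illustrates one possible way a DAFS could record a rubric-based self-assessment and gate submission on it, so that the lecturer can later review the learner’s engagement. It is an assumption-laden illustration in Python; SelfAssessment, submission_allowed and the criterion names are invented for this example.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SelfAssessment:
    # The learner's own judgement against each published criterion,
    # stored so the lecturer can monitor engagement with the brief.
    student_id: str
    ratings: Dict[str, str]   # criterion -> self-rated band, e.g. "merit"
    reflection: str           # short note on what the learner would improve

def submission_allowed(self_assessment: SelfAssessment, criteria: List[str]) -> bool:
    # A minimal gate: the learner must have rated every criterion and
    # written a short reflection before the submission is accepted.
    rated_all = all(c in self_assessment.ratings for c in criteria)
    return rated_all and bool(self_assessment.reflection.strip())

criteria = ["argument", "use of evidence", "referencing"]
sa = SelfAssessment(
    student_id="s123",
    ratings={"argument": "merit", "use of evidence": "pass", "referencing": "distinction"},
    reflection="I would strengthen the link between my evidence and my conclusions.",
)
print(submission_allowed(sa, criteria))  # True once every criterion is rated and a reflection is recorded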
Formative feedback
An area that is widely discussed in the literature is the opportunity for feedback dialogue between the lecturer and the learner. This dialogue is a fundamental step in ensuring that learners acquire feedback literacy, which refers to ‘learners’ ability to understand, utilise and benefit from feedback processes’ (Molloy, Boud, & Henderson, 2019, p. 2). Some authors have designed learner-led frameworks for feedback literacy in which learners gradually develop their competence in acknowledging and acting upon the formative feedback received (Carless & Boud, 2018; Molloy et al., 2019; Winstone et al., 2017). Importantly, this feedback cannot be given only at the end of a sequence of learning, without the time or opportunity for the learner to use it to improve their performance in related tasks (Molloy et al., 2019).
When developing the mock-up, we included different elements to encourage formative feedback. We deliberately included the ability for learners to comment on each iteration of feedback and to ask questions. Equally, as part of the narrative, we included references to the possibility of submitting different versions of the assignment.
Opportunities for formative feedback were perceived by the different groups as an important step in promoting learners’ feedback literacy (sandpits 1, 2, 4, 7 and 8). However, concerns were raised about what this should entail. For example, in sandpit 2 it was suggested that learners submit shorter drafts, with the feedback used to address some of the assessment criteria, as a strategy to avoid the submission of a full draft. Feedback on full draft submissions, although recognised as useful for learners, was thought to be the wrong approach (sandpits 2 and 6), as it might lead to complaints about inconsistency between the feedback provided on the draft and final versions.
When they show us a draft, again, the University current guidelines are that you just see, like a plan and maybe a sample with a couple of paragraphs, you’re not supposed to see the whole assignment, because that’s seen as giving them an unfair advantage, because you are in effect, marking it but not marking it. And then if they have problems, then it comes back to bite you because they say, but you looked at it, and you didn’t flag anything up. (Participant, sandpit 2)
Furthermore, concerns were raised about the number of feedback iterations allowed in the submission process, as the lecturer could end up significantly improving the quality of the work, even if this was not intentional:
When you say that this could be running 4 or 5 times? I guess there is a question about at the end of the day, how much of that work is the learners work? And how much of it is my work? You know … Yes if you’re doing it as part of the formative process, you are writing it for them. (Participant, sandpit 2)
It also became apparent that there were concerns about the time required to do this. Reflecting on this perceived insecurity of lecturers during the assessment process, it is interesting to consider feedback literacy from the lecturers’ point of view, as they were not confident that their feedback would be consistent across the different iterations of the assessment.
To overcome this issue, it was suggested that formative feedback be part of the full learning experience at course level. This is an interesting finding because this feature is not currently available in the majority of DAFS, which tend to be designed at a modular level. This innovation would depend on a programme-of-study overview of feedback, an area discussed by Boud and Molloy (2013) and Winstone, Nash, Rowntree, and Menezes (2016). The latter argue that modularisation leads to ‘fragmentation’ of the topics covered and disjointed thinking, making modules isolated silos and the feedback on assessments less able to provide value for future modules.
To address these points, participants suggested that all of the feedback given to learners should be stored within the DAFS (sandpits 4 and 7); interestingly, this has also been discussed by other authors (Burrows & Shortis, 2011; Parkin et al., 2012; Winstone, 2019). Concerns were raised about the limited usefulness of written feedback, and it was therefore suggested that there should be a mechanism allowing learners to engage with, and act upon, the feedback received over time. Feedback would be written and labelled by the lecturer and stored in a library of feedback. The labelling system, together with associated metadata, would allow learners to navigate easily across themes of collected feedback such as referencing, critical thinking or sentence structure. Learners could then revisit the feedback they received before a new submission, or explore the areas for development identified most often across the feedback collected during all of their course assignments.
Regarding the possibility of learners being able to extract the feedback received for future assignments or potentially feedback appears on the checklist before learners submit (almost like a nudge). (Participant, sandpit 7)
There was also a suggestion in sandpit 7 that the assessor could check the feedback received by learners in previous assignments and use it to inform their judgement whilst grading. There were also requests for a library of comments to be available to staff, allowing them to see past comments made in previous assessments, organised by theme, with options appearing as they typed (also in sandpit 7). This could also allow for greater consistency whilst grading, an area which some were concerned about.
Creating opportunities for learners to engage with pre-existing feedback was seen as a key area for DAFS development. The creation of a library of feedback would allow learners to read previous feedback against their current assessment and, potentially, improve their work by signposting mistakes made in earlier submissions. DAFS are typically designed at a module/course level; we recommend that all feedback be available at programme level so that both students and lecturers can use it as part of the learning experience.
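A programme-level library of labelled feedback, as recommended above, could be sketched as follows. This is a hypothetical illustration in Python; FeedbackLibrary, FeedbackComment, the theme labels and module codes are invented here and do not describe an existing DAFS.

from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class FeedbackComment:
    # One piece of written feedback, labelled by the lecturer with a theme
    # such as "referencing", "critical thinking" or "sentence structure".
    module: str
    assignment: str
    theme: str
    text: str

class FeedbackLibrary:
    # Programme-level store of every comment a learner has received,
    # so feedback can be revisited across modules rather than per module.
    def __init__(self) -> None:
        self.comments: List[FeedbackComment] = []

    def add(self, comment: FeedbackComment) -> None:
        self.comments.append(comment)

    def by_theme(self, theme: str) -> List[FeedbackComment]:
        return [c for c in self.comments if c.theme == theme]

    def most_frequent_themes(self, n: int = 3):
        # Surfaces the development areas raised most often across the programme,
        # which could be shown to the learner (or the assessor) before a new submission.
        return Counter(c.theme for c in self.comments).most_common(n)

library = FeedbackLibrary()
library.add(FeedbackComment("EDU101", "essay 1", "referencing", "Check the citation format."))
library.add(FeedbackComment("EDU205", "report", "referencing", "Reference list is incomplete."))
library.add(FeedbackComment("EDU205", "report", "critical thinking", "Develop the counter-argument."))
print(library.most_frequent_themes())  # [('referencing', 2), ('critical thinking', 1)]

Because every comment carries a theme label and module metadata, the same store could serve both purposes raised in sandpit 7: learners can revisit their most frequent development areas before a new submission, and assessors can consult prior feedback while grading.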
Feedback post-submission
Research has suggested that learners are not reading their feedback and are logging into the DAFS only to check their grades (Winstone et al., 2020). This was widely discussed during the sandpits, as lecturers felt pressured to write ‘good’ feedback yet stated that it is often not acted upon by learners (sandpits 3, 4, 6, 7, 8, 9 and 10). We purposely created a button with the title ‘agree with feedback’, which learners would need to press to confirm their grade. This element was the most controversial topic of discussion, as lecturers felt that the system was giving too much power to the learner; the power balance was therefore questioned at this point.
And also, we felt where it says ‘agree with feedback’ that was a bit controversial… So could it have something that would require somebody to accept the feedback after they’ve read it, because this indicates when you go in straight away, the learner gets their mark, that as we know, learners aren’t reading feedback. So to encourage that, could we have something where they have to see their feedback first go through, tick a box to say they’ve actually read it? And then they get access to their mark following that? (Participant, sandpit 3).
So, coming back to the mark agreed or not agreed if the mark wasn’t valid until the learners have read the feedback that would be a very powerful way to ensure that learners do read the feedback. (Participant, sandpit 4).
You could make that part of the, you know, it’s getting them to interact with feedforward, basically, and getting some form of acknowledgement from them that they understand what you’re talking about. … At some point, they would need to write or reflect on the feedback before the mark is released … But it is not just pressing the button to say that they have read it but there has to be some form of proof that they did, in fact, engage with the feedback (Participant, sandpit 8)
Participants agreed with the need for the learner to act on and engage with the end of the process, but they did not like the requirement for learners to agree with the feedback, as this would create tensions between the learner and the lecturer (sandpits 3, 4, 6, 7, 8, 9 and 10). There was a feeling among participants that learners frequently confused the feedback with the grade and, thus, when considering the feedback received, they were in fact arguing about the grade. This argument is embedded in the marketisation culture of HE, which is particularly prevalent in countries where learners pay high tuition fees and therefore feel empowered, often seeing themselves as consumers (Woodall, Hiller, & Resnick, 2014). Nevertheless, participants did see some value in promoting discussions about the feedback, particularly if learners had to act upon the feedback before the release of their grade, for example by reflecting on changes they would need to make for future similar assignments (sandpits 4 and 8) and/or providing a rationale for not agreeing with the feedback (sandpits 4, 9 and 10). This strategy has been discussed in the literature by Nicol (2007) and Parkin et al. (2012), who found that learners are more likely to engage with the process of reflection when they have been told explicitly that they will be required to reflect on their feedback before receiving their grade. This ensures that learners engage more actively with the feedback by trying to make sense of it and creating their own pathway for development.
Although the majority of the lecturers involved in the sandpits were receptive to a solution that encourages learners to engage with the feedback before the release of the grade, this is something that most DAFS do not have built in at present, and it is difficult to replicate pedagogically. DAFS have typically been developed to release grades and feedback simultaneously; there is no separation between these two elements of assessment. The consequence is that learners often do not read the feedback, preferring to concentrate on the grade, which is what they value. To ensure that learners use and engage with the feedback, we recommend that the feedback be sent to the learner before the grade, i.e. treated as a separate element. Only after the student reads and engages with the feedback (through reflection or by setting up an action plan for future improvement) should the grade be released. This will help to ensure that the feedback written by the lecturer is well understood by students and eventually acted upon in future assessments, and it will encourage a new, more holistic culture of assessment in which students also take an active role in the quality of the feedback process.
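One possible way to implement the recommended separation of feedback and grade is sketched below. It is a minimal Python illustration, assuming invented names such as AssessmentRecord and release_grade; the form of ‘engagement’ used here (a short reflection) is only one of the options discussed in sandpits 4, 8, 9 and 10.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AssessmentRecord:
    # Feedback and grade are held as separate elements so they can be
    # released at different moments.
    feedback: str
    grade: str
    feedback_read: bool = False
    reflection: Optional[str] = None

    def read_feedback(self) -> str:
        self.feedback_read = True
        return self.feedback

    def submit_reflection(self, text: str) -> None:
        # The learner records how they will act on the feedback in future work.
        self.reflection = text

    def release_grade(self) -> Optional[str]:
        # The grade is only released once the feedback has been read and
        # some evidence of engagement (here, a short reflection) is recorded.
        if self.feedback_read and self.reflection:
            return self.grade
        return None

record = AssessmentRecord(feedback="Strong structure; referencing needs attention.", grade="68")
print(record.release_grade())   # None: the grade is withheld until the feedback is engaged with
record.read_feedback()
record.submit_reflection("Next time I will check citations against the style guide.")
print(record.release_grade())   # "68"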