Context
The context of our study was third-year undergraduate medical education at a medical school in Germany. In this setting, we gave special attention to perception and analysis tasks rather than synthesis tasks (e.g., making a diagnosis). The underlying rationale was that the majority of medical students will not pursue radiology as their profession and will primarily become users of radiology services. To communicate effectively with service providers (radiologists), however, non-radiology clinicians need a basic understanding of radiology and should be able to discuss their findings and interpretations accurately.
A specially designed, computer-supported in-class scenario was piloted in November 2016 as a single case study with a regular seminar group (n = 26) of third-year undergraduate medical students. A detailed description of this complex scenario with its tasks and learning dashboard is given below.
Pedagogical scenario and task design
The scenario was a 90-min in-class session comprising individual computer tasks and class-wide, face-to-face discussions. Two clinical cases with radiographs and a total of 50 questions were first elaborated individually in the computer program VQuest (active and constructive engagement) and then successively discussed in plenary (interactive engagement). No time limit was set for answering the questions, and all questions of a clinical case were discussed once the majority of the students had answered them. The class-wide discussions were moderated by a radiologist, who participated actively as an instructor in these discussions. During the moderation, the instructor could operate the learning dashboard described in the “Learning dashboard and visualization background” section.
The learning objective of systematic viewing of radiologic images was addressed by computer tasks asking students first to check the quality of chest radiographs (e.g., completeness and overlaps) and then to evaluate findings at seven anatomical landmarks (e.g., chest soft tissues, chest skeletal system, and lungs). The response type for these computer tasks was a dropdown-menu multiple-choice question (long-menu question, Fig. 1).
Although image interpretation was already indirectly addressed by the systematic viewing tasks, the perception and analysis components of this interpretation were addressed more explicitly by two additional computer tasks. Students first had to identify salient findings in radiological images with marker questions (Fig. 2) and then to describe these findings in free-text questions (constructive engagement).
Learning dashboard and visualization background
The learning dashboard had to support class-wide face-to-face discussions based on the aforementioned individual computer tasks. The large volume and wide variety of data gathered during the individual tasks, as well as the changing aims and perspectives during the class-wide discussions, meant that representations with only static images would not suffice. Additional interaction techniques within the learning dashboard were needed to give the moderator of the discussions the ability to manipulate the representations. Such techniques should enable discussion groups to identify the most relevant data to discuss (interactive engagement) and the most valuable data to use for building a common knowledge base (constructive engagement). In the following paragraphs, we describe the functionalities that were built into the learning dashboard, based on user intent and structured along the lines of the “general categories of interaction techniques” identified by Yi et al. (Yi, Kang, & Stasko, 2007).
Filter function: change the set of data items being presented
The moderator could specify the students whose data were presented based on semester enrollment, peer group (groups of six students formed during their medical studies), and student names. This could be combined with the time period in which the computer tasks had been elaborated. These values were specified with checkboxes within a treemap visualization, a calendar, and combo boxes (Fig. 3). To facilitate a secure class-wide discussion, the filtered dataset was presented anonymously in the learning dashboard and was not presented at all when it covered fewer than five students.
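For illustration, the filtering and anonymity behaviour described above could be sketched as follows; the record structure, field names, and function names are our own assumptions and do not reflect the actual dashboard or VQuest code.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the record structure and all names are
# hypothetical, not taken from the actual dashboard implementation.

@dataclass
class AnswerRecord:
    student_name: str
    semester: str
    peer_group: int
    answered_on: date
    question_id: str
    answer: str

MIN_GROUP_SIZE = 5  # below this threshold, filtered data are not shown

def filter_records(records, semester=None, peer_group=None, names=None, period=None):
    """Return an anonymised subset of answer records, or None if the
    filtered group is too small for a class-wide presentation."""
    subset = [
        r for r in records
        if (semester is None or r.semester == semester)
        and (peer_group is None or r.peer_group == peer_group)
        and (names is None or r.student_name in names)
        and (period is None or period[0] <= r.answered_on <= period[1])
    ]
    if len({r.student_name for r in subset}) < MIN_GROUP_SIZE:
        return None  # suppress the presentation entirely
    # strip identifying information before display
    return [(r.question_id, r.answer) for r in subset]
```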
Within the learning dashboard, the moderator had several additional visualization features with which to further customize the data presentation: tabs for different cases, tabs for different tasks within a clinical case (check of quality, review of anatomical landmarks, perception, and analysis), buttons to expand or collapse answers to questions, and buttons to show or hide the “hotspot diagrams” for marker questions (Fig. 4).
The aforementioned functions were designed to support the moderator in discussions with the group on issues considered to be of interest.
Explore function: examine different subsets of data cases
The moderator could select different views of an image (for instance a frontal or side view of a chest radiograph) to examine a subset of a data case. Within a specific view, a panning function, allowing an image to be grabbed and moved with the mouse, enabled the user to present different parts of a larger image (cursor in Fig. 4).
Abstract/elaborate function: adjust the level of abstraction
The moderator could change the scale of an image with a zooming function controlled by the scroll wheel of the mouse. An overview (zoom-out) or a more detailed view (zoom-in) could be presented in this way.
Select function: mark data items of interest
The moderator could make descriptions written by students visually distinctive by highlighting specific text cards with a mouse click (Fig. 4). This enabled the group to keep track of interesting descriptions and compare them during a discussion.
Reconfigure function: change the spatial arrangement
The moderator could sort the lists of student answers in alphabetical or frequency order by clicking on the column headings. This made it easier to find and discuss answers of high importance or to identify common errors. To prevent cueing, correct answer(s) were only highlighted after a “show correct answer” button (eye icon in Figs. 4 and 5) had been clicked.
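A minimal sketch of this sorting and on-demand highlighting behaviour, under the assumption that student answers are pooled per question, might look as follows (all names are hypothetical):

```python
from collections import Counter

def arrange_answers(answers, order="frequency", correct=None, show_correct=False):
    """Return (answer, count, highlighted) rows for one question.

    answers: pooled student answers to a single question.
    order: "frequency" (most common first) or "alphabetical".
    correct: set of correct answers; they are only highlighted when
    show_correct is True, mirroring the "show correct answer" button.
    """
    counts = Counter(answers)
    if order == "alphabetical":
        items = sorted(counts.items(), key=lambda kv: kv[0].lower())
    else:
        items = counts.most_common()
    correct = correct or set()
    return [(answer, n, show_correct and answer in correct) for answer, n in items]

# Example: frequency-ordered display with the correct answer revealed
rows = arrange_answers(
    ["pneumothorax", "pleural effusion", "pneumothorax", "atelectasis"],
    order="frequency", correct={"pneumothorax"}, show_correct=True)
```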
Data collection
Timestamps automatically generated by the VQuest program when users answer questions were collected to calculate the time students spent on the different computer tasks. Video recordings of the two class-wide discussions were used for an interaction analysis of these discussions. Finally, a focus group discussion with 12 of the 26 students, held directly after the in-class session, was used to explore the students’ own perceptions of their cognitive engagement.
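As a sketch of how such durations can be derived, the following assumes one logged timestamp per answered question and chronologically ordered rows per student; the log format shown here is an assumption, not VQuest’s actual export.

```python
from datetime import datetime

def question_durations(log_rows):
    """log_rows: (student, task, iso_timestamp) tuples, ordered in time per student.
    Approximates the time spent on each question as the gap to the student's
    previous timestamp and returns (student, task, seconds) tuples."""
    durations = []
    previous = {}
    for student, task, stamp in log_rows:
        t = datetime.fromisoformat(stamp)
        if student in previous:
            durations.append((student, task, (t - previous[student]).total_seconds()))
        previous[student] = t
    return durations
```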
Data analysis
To quantify the different kinds of cognitive engagement, the percentage of total elaboration time students spent answering questions on image quality and on anatomical landmarks, as recorded in the log files, was used as a measure of the degree of active engagement. The degree of constructive engagement was deduced from log-file data showing the percentage of total elaboration time students spent on assignments for marking and describing salient findings of a case. Video recordings of the class-wide discussions were used to quantify the degree of interactive engagement.
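Under the hypothetical log format sketched above, these measures reduce to shares of total elaboration time per task category; the task-to-category mapping below simply restates the measures described in this paragraph and is not part of the study materials.

```python
from collections import defaultdict

# Hypothetical task labels mapped to the engagement categories defined above.
TASK_CATEGORY = {
    "image_quality": "active",
    "anatomical_landmarks": "active",
    "marking_findings": "constructive",
    "describing_findings": "constructive",
}

def engagement_shares(durations):
    """durations: (student, task, seconds) tuples, e.g. from question_durations().
    Returns each engagement category's share of total elaboration time (percent)."""
    totals = defaultdict(float)
    for _, task, seconds in durations:
        totals[TASK_CATEGORY.get(task, "other")] += seconds
    grand_total = sum(totals.values()) or 1.0
    return {category: 100.0 * t / grand_total for category, t in totals.items()}
```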
To get an impression of the quality of the constructive cognitive engagement, texts entered by students in response to description questions were analyzed. Insight into the quality of the interactive engagement was obtained through an analysis of the video recordings. As our focus was on readily observable, surface elements of the discussion (e.g., speaker allocation, speech acts, elicitation-response patterns), an interaction analysis could be applied directly to the videotaped material without a prior transcript of the dialogue. For the interaction analysis, we used the software program Transana. The unit of analysis was an “utterance,” an individual message unit expressed by one subject (e.g., teacher or student) and serving a single communicative function (e.g., question or answer). Utterances of interest were identified and coded based on the Transcript Analysis Tool (Fahy et al., 2000), because the main categories of this coding scheme (questioning, statements, and quotations) refer to speech acts that are easy for non-specialist raters to identify. In addition, some inductive codes (marking and describing images, comparing images) were generated through an iterative process of interpretation, negotiation, and discussion between the researchers. Based on this mixture of deductive and inductive coding, interesting parts of the dialogues were selected and transcribed verbatim (Fig. 6).
To explore student perceptions of their own cognitive engagement during the in-class scenario, half of the group were invited to talk about their experiences in a focus group discussion directly following the in-class session. They were asked to express their thoughts about the individual computer tasks and the class-wide discussions and were invited to give examples of what had worked well and what had gone wrong. The sessions were audio-recorded and transcribed verbatim.