The use of learning dashboards to support complex in-class pedagogical scenarios in medical training: how do they influence students’ cognitive engagement?

Abstract

This study aims to contribute to empirical and interdisciplinary knowledge on how visual learning analytics tools can support students’ cognitive engagement in complex in-class scenarios. Taking a holistic approach, instructional design, learning analytics, and students’ perceptions were examined together. The teaching of systematic viewing and image interpretation in radiology education was used to exemplify a complex in-class scenario, and a specific learning dashboard was designed as a support tool. The design was based on both educational and visualization theories and aligned with a pedagogical scenario integrating individual and class-wide activities. The quantity and quality of the cognitive engagement of a group of 26 students were explored. A mixed-methods approach was used, including computer log file analyses of individual work, analysis of video recordings of in-class small group discussions, and a focus group discussion with the students involved. The in-class scenario with the learning dashboard resulted in a good balance between individual tasks and group discussions and a high degree of active cognitive engagement. Constructive and interactive forms of cognitive engagement were, however, less evident. In addition, the products of these constructive (descriptions of findings) and interactive (type of dialogue) engagements were of mediocre quality and therefore not optimal for knowledge transfer. The study also showed that the students’ cognitive engagement was highly influenced by the way the students and teacher understood their respective tasks and used the available interaction techniques of the learning dashboard. Finally, several ideas emerged that could help to overcome the deficits found: training the participants, refining the tasks set, and improving the learning dashboard itself.

Introduction

Visual learning analytics tools for classroom teaching

Learning analytics, defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens & Gasevic, 2012), are increasingly recognized as a promising means of optimizing the conditions for students’ learning. However, there is a scarcity of empirical studies elaborating on how learning analytics can be of benefit in classroom teaching, for instance in scenarios that integrate individual computer tasks and class-wide face-to-face discussions. A recent systematic literature review focused on the intersection of learning analytics and visual analytics (Vieira, Parsons, & Byrd, 2018). Visual analytics integrates automated data analysis with interactive visualization to extend human cognitive and perceptual abilities to explore, reason, and discover data features visually (Keim, Munzner, Rossi, & Verleysen, 2015). Vieira et al. referred to this intersection as “visual learning analytics” and concluded that, to date, little work has been done to bring visual learning analytics tools into the classroom setting. In addition, they revealed a lack of interdisciplinary research connecting educational theory and visualization practices (Vieira, Parsons, & Byrd, 2018).

Visual learning analytics could potentially support the provision of custom-made feedback to groups on their prior individual assignments, the selection of valuable themes for review sessions, and the planning of follow-up learning or teaching activities. This empirical and interdisciplinary study therefore aims to further our understanding of how visual learning analytics tools can support students’ learning in scenarios integrating individual computer tasks and class-wide face-to-face discussions in a classroom setting. Such integrated pedagogical scenarios are of high interest for undergraduate radiology education, where lectures are still the predominant teaching format. This is remarkable, given that systematic viewing and image interpretation are often identified as essential learning goals in radiology education (Kondo & Swerdlow, 2013; Subramaniam, Beckley, Chan, Chou, & Scally, 2006; Subramaniam, Sherriff, Holmes, Chan, & Shadbolt, 2006) and it is hard to imagine how lectures can adequately address these learning goals (Kalaian & Kasim, 2017; Spector et al., 2016).

Pedagogical scenario for radiology education and cognitive engagement as a predictor for knowledge development

In general, a scenario integrating in-class individual tasks with class-wide discussions is seen as a good alternative to lectures (Akcayir & Akcayir, 2018; De Hei, Sjoer, Admiraal, & Strijbos, 2016; Schmidt, Wagener, Smeets, Keemink, & van der Molen, 2015). An in-class pedagogical scenario was therefore developed for teaching systematic viewing and image interpretation in undergraduate radiology education. To facilitate this complex in-class scenario, a learning dashboard was developed based on both educational and visualization theories. Adopting Verbert’s definition of a learning dashboard as an “application that captures and visualizes traces of learning activities in order to promote awareness, reflection, and sense-making” (Verbert et al., 2014, p. 1499), we used one as an example of a visual learning analytics tool. In the chosen educational setting, the learning dashboard had to aggregate and present data generated in individual computer tasks to facilitate successive class-wide, face-to-face discussions.

In order to explore the effects on knowledge development of the computer-supported in-class scenario, we chose cognitive engagement as the outcome variable, because it is a strong predictor for students’ knowledge development (McCune & Entwistle, 2011). Cognitive engagement is operationalized as the overt behaviors engaged in by students while learning: interactive (I), constructive (C), active (A), and passive (P) (Chi, 2009). In this so-called ICAP framework, it is assumed that different kinds of cognitive engagement bring about different levels of understanding. Passive student engagement yields knowledge that can be recalled verbatim in an identical context, active engagement builds a fund of knowledge that can be applied to similar contexts, constructive engagement results in knowledge that can be applied to a novel context, and finally, interactive student engagement yields a knowledge base that allows partners to invent new ideas.

Since medical students have to apply systematic viewing and the interpretation of radiologic images to a wide range of pathologies in a variety of tissues and at many locations in the body, transfer of this knowledge and these skills is important. It can be inferred from the ICAP framework that constructive and interactive student engagement during learning are the most effective forms of engagement for fostering such transfer of knowledge.

Theoretical framework for design of tasks and learning dashboard

The framework of van der Gijp et al. (2014) was used in this study to decide what type of tasks to design and what information to present in the learning dashboard. The framework distinguishes three components of the knowledge and skills involved in interpreting radiologic images: perception, analysis, and synthesis. The perception component contains skills such as “discriminating normal from abnormal findings” and “recognizing patterns,” while the analysis component includes skills such as “characterizing findings,” “comparing findings with those in previous images,” and “discriminating relevant from irrelevant findings.” The synthesis component, finally, comprises skills such as “integrating findings,” “generating differential diagnosis,” and “deciding about follow-up actions.”

Design for learning, analytics, and inquiry into students’ learning are typically researched as separate topics, although studying them together is considered a more promising approach (Mor, Ferguson, & Wasson, 2015). Our study therefore examines the three activities together. Through this approach, we wanted to extend the knowledge base for instructional designers and highlight important factors to consider when designing computer-supported in-class scenarios for teaching the analysis and interpretation of images in medical training. We expected the learning dashboard to facilitate class-wide face-to-face discussions (interactive engagement) of the computer tasks previously completed individually (active and constructive engagement).

Methods

Context

The context of our study was third-year undergraduate medical education at a medical school in Germany. In this setting, we gave special attention to perception and analysis tasks rather than synthesis tasks (e.g., making a diagnosis). The underlying rationale was that the majority of medical students will not pursue radiology as their profession and will primarily become users of radiology services. For effective communication with service providers (radiologists), however, non-radiology clinicians need a basic understanding of radiology and should be able to discuss their findings and interpretations in an accurate way.

A specially designed, computer-supported in-class scenario was piloted in November 2016 as a single case study with a regular seminar group (n = 26) of third-year undergraduate medical students. A detailed description of this complex scenario with its tasks and learning dashboard is given below.

Pedagogical scenario and task design

The scenario was a 90-min in-class session comprising individual computer tasks and class-wide, face-to-face discussions. Two clinical cases with radiographs and a total of 50 questions were first worked through individually in the computer program VQuest (active and constructive engagement) and then successively discussed in plenum (interactive engagement). No time limits were set for answering the questions, and all questions of a clinical case were discussed once the majority of the students had answered them. The class-wide discussions were moderated by a radiologist who participated actively as an instructor in these discussions. During the moderation, the instructor could operate the learning dashboard described in the “Learning dashboard and visualization background” section.

The learning objective of systematic viewing of radiologic images was addressed by computer tasks asking students first to check the quality of chest radiographs (e.g., completeness and overlaps) and then to evaluate findings at seven anatomical landmarks (e.g., chest soft tissues, chest skeletal system, and lungs). The response format for these computer tasks was a dropdown-menu multiple-choice question (long-menu question, Fig. 1).

Fig. 1 Individual computer task in the VQuest program with a multiple-choice question in which answers could be selected from a long list of answer options

Although image interpretation was already indirectly addressed by the systematic viewing tasks, the perception and analysis components of this interpretation were more explicitly addressed by two additional computer tasks. Students first had to identify salient findings in radiological images by means of marker questions (Fig. 2) and then to describe these findings in free-text questions (constructive engagement).

Fig. 2 Individual computer task in the VQuest program with a perception question in which students could navigate, zoom, and place markers (crosses) in images

Learning dashboard and visualization background

The learning dashboard had to support class-wide face-to-face discussions based on the aforementioned individual computer tasks. The large volume and wide variety of data gathered during the individual tasks, as well as the shifting aims and perspectives during the class-wide discussions, meant that representations with only static images would not suffice. Additional interaction techniques within the learning dashboard were needed to give the moderator of the discussions the ability to manipulate the representations. Such techniques should enable discussion groups to identify the most relevant data to discuss (interactive engagement) and the most valuable data to use for building a common knowledge base (constructive engagement). In the following paragraphs, we describe the functionalities that were built into the learning dashboard, based on user intent and structured along the lines of the “general categories of interaction techniques” identified by Yi et al. (Yi, Kang, & Stasko, 2007).

Filter function: change the set of data items being presented

The moderator could specify the students whose data were presented based on semester enrollment, peer group (groups of six students formed during their medical studies), and student names. This could be combined with the time period in which the computer tasks were completed. These values were specified with checkboxes within a treemap visualization, a calendar, and combo boxes (Fig. 3). To facilitate a secure class-wide discussion, the filtered dataset was presented anonymously in the learning dashboard and was not presented at all when the number of students was fewer than five.

Fig. 3 Learning dashboard used for a class-wide discussion, showing on the left a treemap with checkboxes to specify the students whose data are presented. For the students and timeframe selected, 26 VQuest cases are available for presentation
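
To make the filter behavior concrete, the following minimal sketch combines the semester, peer-group, and time-period filters and applies the suppression rule described above. It is written in Python with hypothetical record and field names; VQuest’s actual data model is not documented here.

```python
from dataclasses import dataclass
from datetime import date

MIN_GROUP_SIZE = 5  # below this threshold, no data are shown (privacy rule)

@dataclass
class TaskRecord:
    student_id: str    # pseudonymous identifier
    semester: str
    peer_group: int
    answered_on: date

def filter_records(records, semesters=None, peer_groups=None,
                   start=None, end=None):
    """Apply the moderator's cohort, peer-group, and time-period filters;
    return an empty list if the anonymity threshold is not met."""
    selected = [
        r for r in records
        if (semesters is None or r.semester in semesters)
        and (peer_groups is None or r.peer_group in peer_groups)
        and (start is None or r.answered_on >= start)
        and (end is None or r.answered_on <= end)
    ]
    # Suppress the presentation entirely for fewer than five students.
    if len({r.student_id for r in selected}) < MIN_GROUP_SIZE:
        return []
    return selected
```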

Within the learning dashboard, the moderator had several additional visualization features with which to further customize the data presentation: tabs for different cases, tabs for different tasks within a clinical case (check of quality, review of anatomical landmarks, perception, and analysis), buttons to expand or collapse answers to questions, and buttons to show or hide the “hotspot diagrams” for marker questions (Fig. 4).

Fig. 4 Learning dashboard used in a class-wide discussion, showing a hotspot diagram of student responses to a marker question in the prior computer tasks. The cursor indicates panning. The numbers in the upper left margin of the radiograph are for switching between images. The green areas in the radiograph indicate the correct target area and can be made invisible with the “hide correct answer” button (eye icon)

The aforementioned functions were designed to support the moderator in discussions with the group on issues considered to be of interest.

Explore function: examine different subsets of data cases

The moderator could select different views of an image (for instance a frontal or side view of a chest radiograph) to examine a subset of a data case. Within a specific view, a panning function, allowing an image to be grabbed and moved with the mouse, enabled the user to present different parts of a larger image (cursor in Fig. 4).

Abstract/elaborate function: adjust the level of abstraction

The moderator could change the scale of an image with a zooming function controlled by the mouse scroll wheel. An overview (zoom-out) or a more detailed view (zoom-in) could be presented in this way.

Select function: mark data items of interest

The moderator could make descriptions written by students visually distinctive by highlighting specific text cards with a mouse click (Fig. 4). This enabled the group to keep track of interesting descriptions and compare them during a discussion.

Reconfigure function: change the spatial arrangement

The moderator could sort the lists of student answers in alphabetical or frequency order by clicking on the column headings. This made it easier to find and discuss answers of high importance or to identify common errors. To prevent cueing, correct answer(s) were only highlighted after a “show correct answer” button (eye icon in Figs. 4 and 5) had been clicked.

Fig. 5 Learning dashboard used in a class-wide discussion, showing the frequency of student responses to a long-menu question in the prior computer tasks. The correct answers are highlighted and can be made invisible with the “hide correct answer” button (eye icon)
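
On the data side, this reconfigure function amounts to re-sorting the aggregated answer distribution shown in Fig. 5. The following minimal sketch (hypothetical names; answers assumed to arrive as plain strings) shows both sort orders and models the “show correct answer” button as a flag, so that correctness is only exposed after the moderator reveals it:

```python
from collections import Counter

def answer_distribution(answers, order="frequency", show_correct=False,
                        correct_answers=frozenset()):
    """Aggregate students' answers to one question and sort them
    alphabetically or by frequency, like the dashboard's column sort.
    Returns (answer_text, count, flagged_correct) rows."""
    counts = Counter(answers)
    if order == "frequency":
        rows = counts.most_common()        # most frequently chosen first
    else:
        rows = sorted(counts.items())      # alphabetical order
    # Correctness is only flagged after the "show correct answer" click,
    # to prevent cueing during the discussion.
    return [(text, n, show_correct and text in correct_answers)
            for text, n in rows]

# Usage: frequency order, correct answer revealed
rows = answer_distribution(
    ["pneumothorax", "atelectasis", "pneumothorax", "normal"],
    order="frequency", show_correct=True,
    correct_answers={"pneumothorax"})
# [('pneumothorax', 2, True), ('atelectasis', 1, False), ('normal', 1, False)]
```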

Data collection

Timestamps automatically generated by the VQuest program when users answer questions were collected to calculate the time students spent on the different computer tasks. Video recordings of the two class-wide discussions were used for interaction analysis of these discussions. Finally, a focus group discussion with 12 of the 26 students, held directly after the in-class session, was used to explore the students’ own perceptions of their cognitive engagement.

Data analysis

To quantify the different kinds of cognitive engagement, the percentage of total elaboration time spent by students on answering questions on image quality and on anatomical landmarks, as recorded in the log files, was used as a measure of the degree of active engagement. The degree of constructive engagement was deduced from log file data showing the percentage of total elaboration time spent by students on assignments for marking and describing salient findings of a case. Video recordings of the class-wide discussions were used to quantify the degree of interactive engagement.
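
The time-on-task measure could be computed from the log files along the following lines. This is a sketch under assumptions: VQuest is only stated to record a timestamp per answered question, so per-question durations are approximated as differences between consecutive timestamps, and each question is labeled with the engagement type of its task.

```python
def engagement_shares(events):
    """events: one student's answer log as (timestamp_seconds, engagement_type)
    tuples in chronological order, where engagement_type is 'A' (active, e.g.
    image quality and landmark questions) or 'C' (constructive, e.g. marking
    and description questions). Returns each type's share of elaboration time."""
    totals = {}
    for (t0, _), (t1, etype) in zip(events, events[1:]):
        # Attribute each interval to the question answered at its end.
        totals[etype] = totals.get(etype, 0.0) + (t1 - t0)
    grand = sum(totals.values())
    return {etype: t / grand for etype, t in totals.items()} if grand else {}

# Usage with made-up timestamps (seconds since the session start):
log = [(0, "A"), (45, "A"), (110, "A"), (230, "C"), (380, "C")]
print(engagement_shares(log))  # {'A': 0.289..., 'C': 0.710...}
```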

To get an impression of the quality of the constructive cognitive engagement, the texts entered by students in response to description questions were analyzed. An insight into the quality of the interactive engagement was obtained through an analysis of the video recordings. As our focus was on the readily observable surface elements of the discussion (e.g., speaker allocation, speech acts, elicitation-response patterns), the interaction analysis could be applied directly to the videotaped material without a prior transcript of the dialogue. For the interaction analysis, we used the software program Transana. The unit of analysis was an “utterance”: an individual message unit expressed by one subject (e.g., teacher or student) with a single communicative function (e.g., question or answer). Utterances of interest were identified and coded based on the Transcript Analysis Tool (Fahy et al., 2000), because the main categories of this coding scheme (questioning, statements, and quotations) refer to speech acts that are easy for non-specialist raters to identify. In addition, some inductive codes (marking and describing images, comparing images) were generated through an iterative process of interpretation, negotiation, and discussion between the researchers. Based on this mixture of deductive and inductive coding, interesting parts of the dialogues were selected and transcribed verbatim (Fig. 6).

Fig. 6 Interaction analysis of a video-recorded, class-wide discussion with Transana. The colored bars in the upper left window depict the different codes assigned to utterances in the discussion. The bottom left window shows the verbatim transcript of the selected segment of the discussion
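
To illustrate how the coded utterances could be tallied into the interactive-engagement measure reported in the Results, the sketch below assumes each coded segment carries a speaker label, a code, and start and end times; this data layout is a hypothetical simplification, not Transana’s export format.

```python
def interactive_share(utterances, session_seconds):
    """utterances: coded segments as (speaker, code, start_s, end_s) tuples.
    Returns the fraction of discussion time in which students actually
    asked or answered questions."""
    student_talk = sum(
        end - start
        for speaker, code, start, end in utterances
        if speaker == "student" and code in {"questioning", "statement"}
    )
    return student_talk / session_seconds

# Usage with made-up timings (seconds):
coded = [("teacher", "questioning", 0, 10),
         ("student", "statement", 10, 22),
         ("student", "questioning", 60, 75)]
print(interactive_share(coded, session_seconds=38 * 60))  # about 0.012
```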

To explore students’ perceptions of their own cognitive engagement during the in-class scenario, half of the group were invited to talk about their experiences in a focus group discussion directly following the in-class session. They were asked to express their thoughts about the individual computer tasks and class-wide discussions and invited to give examples of what had worked out well and what had gone wrong. The session was audio-recorded and transcribed verbatim.

Results

Quantitative analysis: time spent on different tasks

Table 1 presents the time students spent on different activities during their individual computer work and in the class-wide discussions. The second column shows the type of cognitive engagement (P = passive, A = active, C = constructive, I = interactive) assigned to these activities. The following two columns show the mean times, with standard deviations in brackets, of the time-on-task for each clinical case separately. A row with subtotals shows the summed time-on-task, with the percentage of the total time in brackets, for the three overarching categories. For both cases together, 10% of the time (7.5 min) was spent on checking image quality, 24% (18.5 min) on reviewing anatomical landmarks, 17% (13.3 min) on identifying and diagnosing salient findings, and 49% (38 min) on the class-wide discussions. The 13-min introduction to the in-class scenario was not included in these calculations.

Table 1 Students’ time on task during the in-class scenario
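
As a quick consistency check of these summary figures (a sketch; the bracketed engagement labels follow the classification described above), the reported minutes and percentages agree once the 13-min introduction is excluded:

```python
subtotals_min = {"image quality (A)": 7.5,
                 "anatomical landmarks (A)": 18.5,
                 "salient findings (C)": 13.3,
                 "class-wide discussions (I)": 38.0}
total = sum(subtotals_min.values())  # 77.3 min of elaboration time in total
shares = {k: round(100 * v / total) for k, v in subtotals_min.items()}
print(total, shares)  # 77.3 {'image quality (A)': 10, ..., 'class-wide discussions (I)': 49}
```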

The computer log files indicated that students spent about 25% of the time (10 of the 40 min) during individual tasks on describing findings, an activity classified as constructive cognitive engagement. The video analysis of the class-wide discussions revealed that about 15% of the time during these sessions was used by students to actually ask or answer questions, an activity classified as interactive cognitive engagement.

Qualitative analysis

Textual descriptions

An analysis of the texts entered by students in response to the description questions during the individual computer tasks showed that the descriptions of findings within the images were often incomplete, quite superficial, or mere diagnostic impressions. Complete and meaningful descriptions were rare. In addition, the moderator often failed to address these mediocre descriptions in the subsequent class-wide discussions. When the descriptions were discussed, the interaction technique of highlighting the small text cards in the learning dashboard (Fig. 4) was seldom used.

Content of class-wide discussions

The interaction analysis of the video recordings of the class-wide discussions showed that interactions were in general infrequent and of short duration. They were mostly initiated by the moderator and often involved one student at a time in a question, answer, and feedback sequence. Dialogues in which several students responded together and built on each other’s contributions were seldom seen. Interactivity was especially scarce at the end of the class-wide discussions, when the marking, description, and diagnosing tasks were discussed.

In the interactions that did occur, misconceptions were regularly discussed when all the selected responses to a multiple-choice question (together with their frequencies) were presented in the learning dashboard (Fig. 5). In such situations, incorrect answers that were frequently chosen were also discussed. The way the moderator used the interaction techniques of a representation could, however, considerably influence a discussion. When, for instance, the correct answer was directly highlighted (with the “show” button), other answers were often no longer discussed. When the most frequent response was correct, the discussion was best served by addressing the other answers first, without confirming the correctness of the one most frequently chosen. When, however, many students had chosen a wrong answer, it seemed best to state directly that this option was incorrect.

To illustrate both the very modest and the more elaborate interactions in the class-wide discussions, Tables 2 and 3 present some observations and transcripts of the dialogue arising from the review of anatomical landmarks and the identification of salient findings, respectively.

Table 2 Examples of verbal interactions during class-wide discussions on systematic viewing tasks
Table 3 Examples of verbal interactions during class-wide discussions on identifying salient findings in images

Students’ perceptions in focus group discussions

In the focus group discussion, the students (n = 12) made remarks indicating that they perceived active, constructive, and interactive cognitive engagement during the in-class scenario. Table 4 presents sample quotations from students’ remarks illustrating every type of cognitive engagement. Concerning the learning dashboard, most students agreed that it was useful for debriefing purposes and highly appreciated the anonymized presentation of the data.

Table 4 Examples of students’ utterances on how they perceived their cognitive engagement during the in-class session

Discussion

The aim of this study was to gain insight into how a visual learning analytics tool can support learning processes in complex in-class scenarios. A learning dashboard was designed based on both educational and visualization theories and implemented in a radiology education scenario that integrated individual and class-wide activities for the systematic viewing and interpretation of radiologic images.

Quantity of students’ cognitive engagement

On average, students spent half of the in-class session on individual computer tasks and the other half on class-wide face-to-face discussions. The scenario with the learning dashboard thus managed to distribute individual tasks and group discussions in a balanced way throughout the in-class session. Although the relative times spent on the different activities were comparable for the two clinical cases, the total time spent on the second case was less than half the time spent on the first. From direct observations, students’ comments during the focus group discussion, and the fact that the two clinical cases had comparable content with the same number and type of questions, we concluded that this finding was the result of time pressure: because the pedagogical scenario was new to both students and moderator, they spent too much time on the first case.

Thus, the individual computer tasks in the pedagogical scenario considerably limited the amount of passive cognitive engagement that is prevalent during traditional lectures. The relatively low level of constructive engagement during the individual tasks (about 25% of the time) is, however, disappointing, because constructing one’s own meaning from experiences is considered effective for acquiring the integrated knowledge essential for knowledge transfer (Novak, 1993). One reason for this mediocre constructive engagement could be the relative abundance of long-menu questions reviewing anatomical landmarks in the scenario used. These questions do not require students to formulate their own understanding as much as the description questions for image interpretation do. In future scenarios, it might therefore be wise to reduce the number of questions on anatomical landmarks in favor of description questions on image interpretation.

The learning dashboard seemed to stimulate class-wide discussions based on the preceding individual viewing and interpretation tasks. The limited number of overt expressions of interactive cognitive engagement during the class-wide discussions (about 15% of the time) is, however, disappointing, because social exchange is assumed to foster higher-order cognitive processes and, with that, knowledge transfer (Wertsch & Smolka, 1993). One of the reasons for this mediocre interactive engagement might be that students have too little experience of collaborative learning. This could be countered by training students in effective discussion techniques for collaborative learning prior to the in-class scenarios.

Quality of students’ cognitive engagement

We had the impression that the anonymous presentation of the responses positively influenced student participation in the discussions. The interaction technique used to explore different views of an image stimulated the systematic viewing procedure by which frontal and side views of chest radiographs are compared. The effects of interaction techniques such as the “hotspot diagrams” of responses to marker questions and the button to hide correct answers to both multiple-choice and marker questions depended on the context and strategy of their use by the moderator.

The textual descriptions students made during the individual tasks showed that their constructive cognitive engagement was mediocre in quality. A disadvantage in this regard might be that the learning dashboard failed to display the connection between a marker in an image and the accompanying description. This could be solved, for instance, by adding an interaction technique that highlights the description when the mouse hovers over a hotspot in the diagram, or vice versa. The function for marking interesting descriptions in the learning dashboard was hardly used; perhaps the moderator was not aware of the availability of this interaction technique.
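
One way the proposed marker-description linkage could be modeled on the data side is sketched below; the shared answer identifier and the hover handlers are hypothetical illustrations of the suggested interaction technique, not features of the existing dashboard.

```python
# Hypothetical linkage: a marker and its free-text description share one
# answer_id, so hovering over either element can highlight its counterpart.
markers = {"a17": (312, 198)}  # answer_id -> (x, y) position in the radiograph
descriptions = {"a17": "round opacity, right lower lobe"}  # answer_id -> text

def on_hover_hotspot(answer_id):
    """Return the description card to highlight for a hovered marker."""
    return descriptions.get(answer_id)

def on_hover_description(answer_id):
    """Return the marker position to highlight for a hovered description."""
    return markers.get(answer_id)
```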

Although the individual computer tasks drew a clear line between characterizing findings (description questions) and diagnosing them (multiple-choice diagnostic questions), students still seemed to find it hard to make this distinction. In their descriptions of findings, students often made a one-step abstraction from an image to a diagnostic impression. This might be appropriate for expert radiologists, but for students with little experience in interpreting radiologic images, it is of more value to make an intermediate abstraction step. Such abstractions enable novices to discuss what they see, and how to interpret it, with more knowledgeable persons, fostering meaningful learning of the perception and analysis steps within the diagnostic process. This “jumping” to a diagnostic impression during description questions might be countered in future scenarios by focusing more on image patterns than on disease diagnoses. Perhaps students also lacked a basic vocabulary for describing radiologic images. Preparatory instruction with, for instance, a simplified version of the glossary of the Fleischner Society (Tuddenham, 1984) might improve this situation.

In addition, we know from the clinical reasoning literature (Bordage, 1994, 2007) that the use of abstractions and oppositions can be a powerful strategy for organizing and retrieving medical knowledge. Therefore, the use of so-called semantic qualifiers (uniformly descriptive medical terminology employing binary, oppositional descriptors such as acute-chronic and local-systemic) is regularly recommended for students presenting patient problems (Nendaz & Bordage, 2002). In radiology education, we would envisage encouraging students to use descriptors such as “lucency-opacity” and “diffuse-segmental” as semantic qualifiers when describing findings in radiologic images.

The learning dashboard also underlined the difference between characterizing and diagnosing findings during the class-wide discussions, as these data were presented on different pages. Nevertheless, the moderator seemed to find it hard to address this difference during the discussions. The expert moderator might not have been aware of the importance of this intermediate step for novices, or might not have had the time to address this point adequately. Teacher training and a reduction in the number of questions reviewing anatomical landmarks in favor of description questions might resolve these shortcomings.

The interaction analysis of the class-wide discussions also revealed that the sparse interactions during the discussions were often of the initiation-response-follow-up pattern of conventional classroom talk (Wells & Arauz, 2006), whereby the moderator asks the group a question (initiation), a student provides an answer (response), and the moderator gives feedback (follow-up). This is a kind of “pedagogical dialog,” in which someone who knows and possesses the truth instructs someone who is ignorant of it (Skidmore, 2000). A more “dialogical pedagogy” in which participants alternately pose questions, respond to each other in turn, and build on each other’s contributions might facilitate more interactive engagement.

Moderators should be informed about effective ways to facilitate interaction in small groups. Video clips of effective and less effective ways of using the information presented in the learning dashboard could form part of such instruction. In addition, the individual computer tasks could be adapted to better support the subsequent class-wide discussions. We would envisage case comparisons (Kok, de Bruin, Leppink, van Merrienboer, & Robben, 2015) in which different students elaborate on different pathologies with a common radiologic pattern.

Limitations

Our work is an effort to contribute to the relatively new fund of empirical knowledge on how visual learning analytics can support complex in-class teaching. Deriving a concrete tool (the learning dashboard) from educational and visualization theories and applying it to a specific, authentic practice (radiology education) has provided a valuable means of generating the real-world empirical data we are looking for, although our results cannot simply be applied to other visual learning analytics tools or extrapolated to other knowledge domains. Research on other tools and applications in further knowledge domains is needed to derive a more general understanding of the possibilities of visual learning analytics in supporting complex in-class teaching.

Furthermore, studies such as this, in which innovative approaches are applied for the first time in practice and explored using qualitative research methods such as interaction analysis, are often based on small sample sizes. This makes it hard to draw strong conclusions. Future replication studies involving more subjects could mitigate this shortcoming.

Despite the aforementioned limitations, we think the descriptions, examples, and findings reported here have the potential to inspire educators to design, develop, and experiment with visual learning analytics tools to increase our knowledge and understanding of how these technologies can be used to support complex in-class scenarios.

Conclusion

The current study shows that a visual learning analytics tool, exemplified by a learning dashboard, can support complex in-class pedagogical scenarios integrating individual computer tasks and class-wide face-to-face discussions in radiology education. Educational and visualization theories informed us well on how to design such a learning dashboard, but the quantitative and qualitative effects of the computer-supported scenario on students’ cognitive engagement were found to be highly influenced by the way the students and teacher understood their tasks and used the available interaction techniques of the learning dashboard. Fine-tuning the functionalities of the learning dashboard and the individual tasks it builds upon, combined with increased attention to student and teacher training for such complex in-class scenarios, should solve many of the problems encountered.

Availability of data and materials

Data supporting the findings of this study are available from the author on request. Please note that this is a collation of anonymized results.

References

  • Akcayir, G., & Akcayir, M. (2018). The flipped classroom: A review of its advantages and challenges. Computers & Education, 126, 334–345.

  • Bordage, G. (1994). Elaborated knowledge: A key to successful diagnostic thinking. Academic Medicine, 69(11), 883–885.

  • Bordage, G. (2007). Prototypes and semantic qualifiers: From past to present. Medical Education, 41(12), 1117–1121.

  • Chi, M. T. H. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73–105.

  • De Hei, M. S. A., Sjoer, E., Admiraal, W., & Strijbos, J. W. (2016). Teacher educators’ design and implementation of group learning activities. Educational Studies, 42(4), 394–409.

  • Fahy, P. J., Crawford, G., Ally, M., Cookson, P., Keller, V., & Prosser, F. (2000). The development and testing of a tool for analysis of computer mediated conferencing transcripts. Alberta Journal of Educational Research, 46(1), 85–88.

  • Kalaian, S. A., & Kasim, R. M. (2017). Effectiveness of various innovative learning methods in health science classrooms: A meta-analysis. Advances in Health Sciences Education, 22(5), 1151–1167.

  • Keim, D. A., Munzner, T., Rossi, F., & Verleysen, M. (2015). Bridging information visualization with machine learning (Dagstuhl Seminar 15101). Dagstuhl Reports, 5(3). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.

  • Kok, E. M., de Bruin, A. B. H., Leppink, J., van Merrienboer, J. J. G., & Robben, S. G. F. (2015). Case comparisons: An efficient way of learning radiology. Academic Radiology, 22(10), 1226–1235.

  • McCune, V., & Entwistle, N. (2011). Cultivating the disposition to understand in 21st century university education. Learning and Individual Differences, 21(3), 303–310.

  • Mor, Y., Ferguson, R., & Wasson, B. (2015). Editorial: Learning design, teacher inquiry into student learning and learning analytics: A call for action. British Journal of Educational Technology, 46(2), 221–229.

  • Nendaz, M. R., & Bordage, G. (2002). Promoting diagnostic problem representation. Medical Education, 36(8), 760–766.

  • Novak, J. (1993). Human constructivism: A unification of psychological and epistemological phenomena in meaning making. International Journal of Personal Construct Psychology, 6, 167–193.

  • Schmidt, H. G., Wagener, S. L., Smeets, G. A. C. M., Keemink, L. M., & van der Molen, H. T. (2015). On the use and misuse of lectures in higher education. Health Professions Education, 1(1), 12–18.

  • Siemens, G., & Gasevic, D. (2012). Guest editorial: Learning and knowledge analytics. Educational Technology and Society, 15(3), 1–2.

  • Skidmore, D. (2000). From pedagogical dialogue to dialogical pedagogy. Language and Education, 14(4), 283–296.

  • Spector, J. M., Ifenthaler, D., Sampson, D., Yang, L., Mukama, E., Warusavitarana, A., et al. (2016). Technology enhanced formative assessment for 21st century learning. Journal of Educational Technology & Society, 19(3), 58–71.

  • Tuddenham, W. J. (1984). Glossary of terms for thoracic radiology: Recommendations of the Nomenclature Committee of the Fleischner Society. American Journal of Roentgenology, 143(3), 509–517.

  • van der Gijp, A., van der Schaaf, M. F., van der Schaaf, I. C., Huige, J. C. B. M., Ravesloot, C. J., van Schaik, J. P. J., & Ten Cate, T. J. (2014). Interpretation of radiological images: Towards a framework of knowledge and skills. Advances in Health Sciences Education, 19(4), 565–580.

  • Verbert, K., Govaerts, S., Duval, E., Santos, J. L., Van Assche, F., Parra, G., & Klerkx, J. (2014). Learning dashboards: An overview and future research opportunities. Personal and Ubiquitous Computing, 18(6), 1499–1514.

  • Vieira, C., Parsons, P., & Byrd, V. (2018). Visual learning analytics of educational data: A systematic literature review and research agenda. Computers & Education, 122, 119–135.

  • Wells, G., & Arauz, R. M. (2006). Dialogue in the classroom. Journal of the Learning Sciences, 15(3), 379–423.

  • Wertsch, J., & Smolka, A. (1993). Continuing the dialogue: Vygotsky, Bakhtin, and Lotman. In H. Daniels (Ed.), Charting the agenda: Educational activity after Vygotsky (pp. 69–92). London: Routledge.

  • Yi, J. S., Kang, Y. A., & Stasko, J. (2007). Toward a deeper understanding of the role of interaction in information visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1224–1231.

Acknowledgements

The authors would like to thank the teacher and students who participated in the study as well as Marieke van der Schaaf, PhD, for her thoughtful comments on the manuscript.

About the authors

Bas de Leng is an Assistant Professor and Chair of the E-learning Competence Centre of the Medical Faculty Educational Institute (IfAS) at Münster University (WWU). His research interests lie in the field of Technology-Enhanced Learning, with an emphasis on computer-supported collaborative learning and computer-mediated orchestration of activities in blended learning settings for medical education. Friedrich Pawelka is a researcher and software developer also at the E-learning Competence Centre of the Medical Faculty Educational Institute (IfAS) at Münster University (WWU). His work focuses on the application of information technology and artificial intelligence to medical education.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Contributions

The first author BdL designed the study and conducted the experiment together with FP. BdL primarily analyzed the videos and transcripts and drafted the manuscript. The second author FP provided the learning dashboard, collected and analyzed the computer log files, gave feedback during the development of the inductive codes, verified BdL’s analysis, and performed the statistical analysis. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Bas de Leng.

Ethics declarations

Ethics approval and consent to participate

We followed the ethics rules and regulations—all students and the teacher gave their consent to participate in the study and to video recording of the class-wide discussions. They were informed that data from their work and from the video recordings could be processed for research and educational (professional development) purposes. Participation could not result in any detrimental effects, and no incentives were offered for participating. Privacy was guaranteed, as all data were anonymized and findings were discussed only within the research team. Students participating in the focus group discussions gave separate written consent and were ensured that data analysis and publishing would follow the modes of action endorsed by the research community.

Competing interests

There are no conflicts of interest involved in this study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

de Leng, B., Pawelka, F. The use of learning dashboards to support complex in-class pedagogical scenarios in medical training: how do they influence students’ cognitive engagement? RPTEL 15, 14 (2020). https://doi.org/10.1186/s41039-020-00135-7
