Do self-created metacognitive prompts promote short- and long-term effects in computer-based learning environments?
Research and Practice in Technology Enhanced Learning volume 16, Article number: 3 (2021)
Students must engage in self-regulated learning in computer-based learning environments; however, many students experience difficulties in doing so. Therefore, this study aims to investigate self-created metacognitive prompts as a means of supporting students in their learning process and improving their learning performance. We conducted an experimental study with a between-subject design. The participants learned with self-created metacognitive prompts (n = 28) or without prompts (n = 29) in a hypermedia learning environment for 40 min while thinking aloud. In a second learning session (stability test), all participants learned about a different topic without prompts. The results showed no clear effect of the self-created metacognitive prompts on the learning process and performance. A deeper analysis revealed that students’ prompt utilization had a significant effect on performance in the second learning session. This study contributes to the research investigating how students can be supported in ways that enhance their learning process and performance.
Self-regulated learning (SRL) is important for successful learning in computer-based learning environments (CBLEs; Sambe, Bouchet, & Labat, 2017), and it is considered a crucial skill for lifelong learning (Anthonysamy, Koo, & Hew, 2020). Due to an increased need for flexible learning settings that can be addressed using CBLEs, a growing number of approaches have been developed to support self-regulated learning, in particular through instructional support in CBLEs (e.g., Azevedo & Aleven, 2013; Hsu, Wang, & Zhang, 2017). SRL describes students’ strategic behavior in moving towards a learning goal (Schunk & Zimmerman, 1998). The cyclic model of self-regulated learning (Zimmerman, 2008) describes three phases, each specifying metacognitive activities: goal setting and strategic planning (forethought phase), metacognitive monitoring and control (performance phase), and self-evaluation and reflection (self-reflection phase).
In an ideal setting, students would be able to engage in self-regulated learning in flexible CBLEs during their journey to achieve learning goals throughout their lifetime. However, many students experience difficulties in adequately self-regulating their learning, particularly in complex CBLEs and across multiple settings (e.g., Azevedo, Moos, Johnson, & Chauncey, 2010; Azevedo, Taub, & Mudrick, 2018; Daumiller & Dresel, 2018; Jansen, van Leeuwen, Janssen, Conijn, & Kester, 2020; Pieger & Bannert, 2018). Furthermore, these difficulties are associated with suboptimal learning outcomes (e.g., Azevedo & Cromley, 2004; Bannert, Sonnenberg, Mengelkamp, & Pieger, 2015; Kizilcec, Pérez-Sanagustín, & Maldonado, 2017; Lai & Hwang, 2016). Metacognitive prompts have been established as instructional support to help students better self-regulate their learning and thereby achieve higher learning outcomes (e.g., Bannert, 2009; Hsu et al., 2017). Metacognitive prompts are hints, clues, or questions that target the learners’ metacognition. Evidence suggests that helping students engage in self-regulated learning successfully fosters their learning outcomes. For example, in a relevant meta-analysis, Zheng (2016) found a medium effect size (d = 0.44) of self-regulated learning scaffolds in CBLEs on academic performance compared with learning without support.
Thus, the overall helpful effects of metacognitive prompts appear to be undisputed. Nevertheless, the extent of their usefulness seems to vary across studies on metacognitive prompting (Daumiller & Dresel, 2018; Zheng, 2016). A possible explanation is that the effects of metacognitive prompts on learning depend on how students utilize the offered prompts. To date, the utilization of prompts has scarcely been investigated in the prompting research. To the best of our knowledge, only Daumiller and Dresel (2018) thematized prompt use, conceptualizing it in terms of the subjectively assessed frequency of generally following the “incitement” of prompts. Accordingly, the focus of that study was on the handling of prompts in terms of “whether” rather than “how.” We sought to close this research gap by investigating how students’ prompt utilization, which we define in this study as the “way to deal with each specific prompt”, affects their learning outcome measured directly after learning and, moreover, in a subsequent learning session 3 weeks later. In addition, we advance the common prompting approach with so-called self-created metacognitive prompts. By helping students create their own prompts for a learning session in a CBLE, we transfer one of the most interesting opportunities that CBLEs provide to instructional support: the flexible use of learning time and place and a wider range of options in deciding how to learn.
Supporting self-regulated learning and performance in computer-based learning environments with metacognitive prompts
One approach to facilitating self-regulated learning in CBLEs is metacognitive support, for example, in the form of prompts that induce students’ metacognitive activities during learning. Such activities include but are not limited to orientation, goal specification, planning, monitoring and control, and evaluation strategies (Bannert, 2007; Veenman, 1993). According to available meta-analyses, prompts show a significant, moderate positive effect on academic performance (Belland, Walker, Olsen, & Leary, 2015; Zheng, 2016).
However, there are only a few empirical studies to date that also examine student behavior, such as the number of page visits in an online learning environment, in addition to more descriptive attributes (e.g., Bannert et al., 2015; Daumiller & Dresel, 2018; Lallé, Taub, Mudrick, Conati, & Azevedo, 2017; Wong, Khalil, Baars, de Koning, & Paas, 2019). Bannert et al. (2015), for example, investigated the navigation behavior of students in regulating their learning to gain insight into whether and how students select pages with relevant content from learning websites. In their study, they found nonlinear page selections by students as indicators of strategic navigation behavior (Astleitner, 1997). Moreover, understanding how students use prompts provides deeper insight into the learning effectiveness of prompts (e.g., Bannert et al., 2015; Bannert & Mengelkamp, 2013; Bannert & Reimann, 2011; Moser, Zumbach, & Deibl, 2017). Bannert et al. (2015) found, for example, that within an experimental group that received metacognitive prompts, just under half of the students complied with the prompts as intended, and the others were less or not at all compliant. Their analysis showed an improved learning outcome regarding transfer performance for compliant students.
The relevant research mainly focuses on the effectiveness of prompts with regard to short-term effects on learning outcomes, meaning that the learning outcomes are measured at the end of the learning session (e.g., Delen, Liew, & Willson, 2014; Kim & Pedersen, 2011; Moser et al., 2017; Müller & Seufert, 2018; Renner, Prilla, Cress, & Kimmerle, 2016; van Alten, Phielix, Janssen, & Kester, 2020). Only a few studies have analyzed the long-term effects of metacognitive prompts in a second learning session conducted some days or weeks later. A few studies have found sustainable effects of metacognitive prompts (Stark & Krause, 2009; Stark, Tyroller, Krause, & Mandl, 2008); however, in another study, the effect of metacognitive prompts was not retained in a follow-up session (Hilbert et al., 2008).
Thus far, the long-term effects of prompts have received less research attention, and the few existing studies have been conducted in quite varied ways. The classical approach is characterized by presenting another, delayed knowledge test on the same learning topic (see Table 1). For example, Daumiller and Dresel (2018) supported the students of three experimental groups with prompts (only metacognitive prompts vs. only motivational prompts vs. metacognitive and motivational prompts), whereas students in the control group learned about a topic on psychological research methods without prompts. Immediately after learning, there was a knowledge test, and the results showed better learning outcomes for all groups of prompted students compared to the nonprompted students. In addition, there was one delayed knowledge test 1 week later and another delayed knowledge test (exam) 10 weeks later on the same learning topic. The first of these delayed tests showed better test scores for the prompted groups, whereas the second showed better test scores only for the motivational prompts. The significant results of the delayed knowledge tests were interpreted as long-term effects (see also the study by Stark & Krause, 2009). In general, this classical approach verifies that the knowledge gained through prompting still exists after a few days and thus measures the long-term effect with regard to knowledge of the content.
Another approach uses a repeated design. Müller and Seufert (2018) prompted students of the experimental group in a learning session about empirical research methods and conducted a knowledge test immediately after learning. One week later, they “repeated” the same design by prompting the students of the experimental group again. The learning topic was different; however, the session also addressed the basics of empirical research methods. They found a short-term effect on transfer performance but no long-term effects in the second learning session. With this repeated design, it can be investigated whether the instructional prompts produce higher learning outcomes in a second learning session with another learning topic under the same learning conditions. In general, this repeated design does not investigate the preservation of the prompts’ effects after a period during which the participants were not exposed to prompts. Instead, it tests the degree to which prompts affect learning in a given learning session, across more than one consecutive learning situation.
The design by Bannert et al. (2015) was similar with regard to the two learning sessions; however, they did not prompt the students in the second learning session to investigate whether the strategies induced by self-directed metacognitive prompts (presented only in the first learning session) would be accessed in another follow-up learning session. They found better navigation behavior and transfer performance not only in the first learning session but also in the second (unsupported) learning session. This stability approach investigates not only the stability of knowledge gain in a similar learning topic but also the stability of the strategy use induced by prompts. In general, this stability approach investigates the long-term effects with regard to self-regulation strategies cued by the metacognitive prompts.
For the purpose of our study, we focused on the stability approach to investigate the long-term effects of self-created metacognitive prompts. By doing so, we analyzed whether the positive effects of self-created metacognitive prompts measured directly after learning could be perpetuated in a follow-up learning session without any additional instructional support, thus affecting future learning sessions beyond the one in which the prompts appeared. In summary, this study investigates both the effect of self-created metacognitive prompts on the learning process and learning outcomes and the long-term stability of these effects.
Modifying the design of metacognitive prompts
The individual studies mentioned (e.g., Bannert et al., 2015; Müller & Seufert, 2018; Stark & Krause, 2009) as well as the review and meta-analysis (Belland et al., 2015; Zheng, 2016) paint an overall positive picture of the effect of metacognitive prompts; however, the results are not consistent. The inconsistency could be due to the function of prompts (such as differentiating between prompts supporting students’ reflection on their learning behavior and prompts supporting students’ cognitive processes) as well as the timing of the support. Both aspects have gained attention in research as potentially influencing the success of prompts (e.g., Azevedo, Cromley, Moos, Greene, & Winters, 2011; Berthold, Nückles, & Renkl, 2007; Kauffman, Ge, Xie, & Chen, 2008; Molenaar, Roda, van Boxtel, & Sleegers, 2012). In most of the studies reported thus far, one type of prompt was implemented at fixed time intervals. Some studies have developed adaptive prompts that enable adjustments to the content and/or timing of the prompts (e.g., Bouchet, Harley, & Azevedo, 2016; Kramarski & Friedman, 2014; Schwonke, Hauser, Nückles, & Renkl, 2006; Thillmann, Künsting, Wirth, & Leutner, 2009). To summarize this research area, the extent to which prompts should be adapted to achieve the best learning results is still an open question; in particular, the content of the prompts requires more research, since the time intervals of prompts have already been investigated. For example, in the experimental study by Bannert et al. (2015), shortly before the learning session, students could personalize the time intervals at which metacognitive prompts would appear during the learning phase. Similarly, the prompts in the study by Bouchet et al. (2016) were faded out in different ways, meaning that the time interval between the presentations of prompts was adjusted by the students. However, contrary to expectations, they found no significant effect of fading on learning gains.
The idea of giving students the ability to create and configure their own prompts is not only driven by the transfer of the flexibility of modern CBLEs directly to their integrated support, as mentioned in the introduction, but also by the phenomenon of students’ poor compliance with instructional tools (e.g., Clarebout & Elen, 2006; Lallé et al., 2017; Schworm & Gruber, 2012), particularly with metacognitive prompts (e.g., Bannert & Mengelkamp, 2013). Students in past studies have complained that the prompts restricted their learning or indicated that they (partly) did not use the prompt in the intended manner (e.g., Bannert et al., 2015). Furthermore, the research shows that giving students the opportunity to influence their own learning is positively correlated with students’ development of self-regulated learning (Randi & Corno, 2000). With regard to instructional prompts, however, there is hardly any research to date that addresses adaptable prompts and their effects on learning processes and learning outcomes (Bannert et al., 2015; Kramarski & Friedman, 2014). Based on the scarce research available, adaptable prompts have been found to support learning (e.g., Bannert et al., 2015). In the study by Bannert et al. (2015), students were able to personalize their metacognitive prompts not only with regard to presentation time as mentioned above but also with regard to the content of the prompts. This means that the sequence in which the reasons for the learning activities had to be carried out for prompting could be chosen freely. As expected, learning with such self-directed metacognitive prompts significantly increased students’ navigation behavior and transfer performance when compared with another group of students learning without prompts. The goal of this study is to further increase the freedom in personalizing one’s own learning support and to investigate the effects of such a greater learner control.
Similar to self-directed (personalized) metacognitive prompts, self-created (personalized) metacognitive prompts are characterized by students’ ability to determine the timing of the prompts’ occurrence themselves before beginning the learning process. Nonetheless, self-created prompts differ from self-directed prompts in that they give students more freedom with regard to the personalization of their prompts: As implied by their name, self-created metacognitive prompts are written by the students themselves in advance of the learning process, based on one example of a metacognitive prompt (e.g., Müller & Seufert, 2018). In addition, the learners themselves determine the number of learning activities in which they will engage when self-creating their prompts (within the restrictions of the learning setting).
By asking students to create their own prompts before the learning session and to receive those prompts during the learning session, we expect them to feel supported without being restricted by prompts that have been imposed on them. In this case, prompts could achieve their aim of supporting the SRL process and performance (as shown in the research summarized by Zheng, 2016) without being hindered by students’ reluctance. According to the SRL research, this prompting approach appears to help students overcome production deficits (Azevedo & Cromley, 2004; Marschner, Thillmann, Wirth, & Leutner, 2012; Nückles, Schwonke, Berthold, & Renkl, 2004; Veenman, 2007; Veenman, Van Hout-Wolters, & Afflerbach, 2006). Prompts induce metacognitive activities that students usually do not execute spontaneously during learning. In prior research, medium effect sizes (0.42 < d < 0.59) were obtained for different types of metacognitive prompts on transfer performance compared with control groups learning without prompts (Bannert & Mengelkamp, 2013).
The extant research leads us to develop self-created metacognitive prompts: Self-created metacognitive prompts are prompts that students create themselves before they begin to study learning materials. Such prompts allow students to gain even more control over their learning process. Our hypothesis is that self-created prompts are more adapted to the (perceived) needs of individual students and will lead to higher compliance with metacognitive prompts during the learning phase.
Research questions and hypotheses
In general, prompting self-regulated learning in CBLEs is successful in supporting short-term learning outcomes (Belland et al., 2015; Winters, Greene, & Costich, 2008; Zheng, 2016). The adaptation of prompts has been investigated as an approach for addressing the suboptimal utilization of prompts during learning processes (e.g., Bannert et al., 2015). Thus far, the adaptation of prompts has mainly been based on the timing (e.g., Bouchet, Harley, & Azevedo, 2018) or content of the prompts (e.g., Schwonke et al., 2006). The current research investigates prompts created by students themselves, whereby, in addition to the timing of the prompts, the students are able to determine the content of the prompts themselves. Thus, this paper adds to the research on metacognitive prompts by investigating prompts that give students more freedom in creating their prompts than in prior comparable studies. This study investigates the effect of these self-created prompts on the learning process and learning outcomes as well as the stability of the potential effects. Moreover, we explore the actual utilization of prompts during the learning process to better understand self-created prompts. Thus, this research poses three research questions:
To what extent do self-created prompts affect the learning process, and can the potential effect be maintained long-term?
To what extent do self-created prompts affect learning performance, and can the potential effect be maintained long-term?
To what extent does prompt utilization influence short- and long-term effects?
The first and second research questions are based on prior research showing, on average, the positive effects of prompts on self-regulated learning processes and learning outcomes (e.g., Bannert et al., 2015; Bannert & Reimann, 2011; Winters et al., 2008; Zheng, 2016). Thus, with regard to the first two research questions, we hypothesize that positive effects can also be found when learning with self-created prompts. Accordingly, the hypotheses for the first two research questions are as follows:
Self-created prompts facilitate the learning process: Students learning with self-created prompts will visit relevant webpages more frequently, spend more time on relevant webpages, and navigate the learning environment less linearly than students learning without prompts. Moreover, these effects will be maintained long-term.
Self-created prompts improve learning performance: Students learning with self-created prompts will increase their recall, comprehension, and transfer performance compared to students learning without prompts. Furthermore, these effects will be maintained long-term.
The third research question is also based on research findings suggesting that students utilize prompts differently, and not always to their advantage, which affects their learning outcome (e.g., Bannert et al., 2015; Bannert & Mengelkamp, 2013; Clarebout & Elen, 2006; Randi & Corno, 2000; Schworm & Gruber, 2012). While the meta-analysis by Zheng (2016) differentiated between different functions of prompts, there has been no prior analysis of prompt utilization with regard to how the function of the prompt was interpreted by the students. Hence, prior studies have analyzed either the function of prompts from a different perspective or different aspects of prompt utilization; therefore, the analysis in this paper is more exploratory and thus nondirectional.
Thus, this paper is closely related to the current research investigating the short- and long-term effects of different types of prompts (e.g., Bannert et al., 2015; Müller & Seufert, 2018). It goes beyond the existing research by implementing prompts that are influenced by the learners themselves to a higher degree than in prior studies, and it investigates how these prompts are utilized by students.
Sample and design
Sixty-six German-speaking undergraduate university students participated in the study. The final sample comprised n = 57 participants (Mage = 19.9 years, SDage = 1.58; 72% female): we excluded nine participants due to (a) very poor compliance in participating in the study and (b) extreme values in prior knowledge, removing two participants with very low prior knowledge (below 4 on a scale from 1 to 25) and three participants with very high prior knowledge (above 15 on the same scale). The experiment was conducted in a between-subject design comprising three sessions. As in Bannert et al. (2015), the first session was used to measure learner characteristics. Also following Bannert et al. (2015), the other two sessions (learning session 1 and learning session 2) took place approximately 1 week and 4 weeks later, respectively. The independent variable, self-created prompts, was manipulated only in the first learning session, with two conditions: the experimental condition of learning with self-created prompts (n = 28) and the control condition of learning without prompts (n = 29). The students were randomly assigned to one condition following a randomized list of the two conditions.
The experiment took place in a laboratory over three sessions, separating the pretests, the first learning session, and the second learning session. The total time of the experiment was approximately 5 h, resulting from the duration of the tests plus around 40 min of learning time per learning session, a duration chosen to represent a typical teaching period in class. Figure 1 presents an overview of the procedure.
In the first session, the learner characteristics were measured. Since we found no differences in learner characteristics between the two groups, we did not consider them further.
The first learning session was structured into three phases that lasted approximately 2 h in total. In the first phase, the participants were introduced to the hypermedia learning environment and randomly assigned to the experimental or control condition as described above. Afterwards, the participants received training that lasted approximately 15 min. The content of the training depended on the condition: the participants in the experimental condition received a short introduction to metacognitive prompts and a model of what a metacognitive prompt looks like, and they subsequently created metacognitive prompts for themselves; the participants in the control condition received alternative training (on ergonomics) to ensure an equal workload in both conditions. In the second phase, after being trained to think aloud, the participants engaged in self-regulated learning in the hypermedia learning environment for 40 min, with or without self-created prompts depending on the condition. The participants learned about the basic concepts of operant conditioning and were free to navigate through the hypermedia learning environment as they wished. All page visits as well as their time and duration were recorded in a log file. Additionally, the participants were asked to think aloud during the entire learning task. The think-aloud protocols and the computer screen were recorded in a video file. In the third phase, directly after the learning phase, the recall, comprehension, and transfer performance of the participants were measured.
The second learning session (stability test) was similar in structure to the first learning session but without the instruction phase, and it lasted approximately 1.5 h. In this session, the participants learned about the basic concepts of motivation psychology. Unlike the first learning session, during the second learning session, there was no manipulation of the independent variable. Every other aspect of the learning phase was similar to the procedure of the learning phase in the first learning session for the control condition. Hence, all participants learned in the hypermedia learning environment without prompts for 40 min and were free to navigate as they wished. Again, all page visits as well as their time and duration were recorded in a log file, and the participants were asked to think aloud during the entire learning task. The think-aloud protocols and the computer screen were recorded in a video file. Immediately after learning, the recall, comprehension, and transfer performance of the participants were measured.
The study included two learning phases: one phase in the first learning session on the topic of operant conditioning and one phase in the second learning session (stability test) on the topic of the psychology of motivation. An analysis of text readability (Michalke, 2015) showed similar levels of difficulty for both learning topics (i.e., the Flesch-Kincaid grade-level score for “learning theories” was 19.01, and that for “psychology of motivation” was 19.14). Both topics were presented in similar computer-based hypermedia learning environments. Each of the two learning environments included a page specifying the learning goal, 10 pages of information relevant to the topic specified in the learning goals (including approximately 2300 words as well as 5 pictures and tables), and approximately 40 pages including overviews, summaries, and pages not relevant to the learning goals. The hypermedia learning environment was designed to provide external validity to the learning process, which usually takes place in a hypermedia learning context with additional, irrelevant information. Moreover, self-regulation and metacognition are important as they help students negotiate learning situations that contain relevant and irrelevant information (among other hurdles). Thus, the students in this study were provided with an externally valid learning experience that made it necessary for them to self-regulate their learning by detecting the relevant learning pages according to their current learning goals. The participants could navigate the learning environment by using a menu bar on the left side of the computer screen, one of 300 hyperlinks, a next-page button and a previous-page button at the top of each page, and the browser buttons (back and forward).
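The readability comparison above can be sketched in code. The study used an R readability tool (Michalke, 2015); the following is a minimal, illustrative Python re-implementation of the Flesch-Kincaid grade-level formula, with a rough vowel-group syllable heuristic of our own devising (real tools use hyphenation dictionaries and language-specific rules, so exact scores will differ):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    # Dedicated readability tools use hyphenation dictionaries instead.
    vowels = "aeiouy"
    groups, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    return max(groups, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A grade-level score around 19, as reported for both topics, indicates text pitched at advanced university readers, which is plausible for the psychology materials used here.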
Manipulation of the independent variable
The manipulation of the independent variable was implemented in two steps during the first learning session. In the first step, the training (before the learning phase) introduced the participants of the experimental condition to the prompts and showed them how to create their own prompts to be used during the upcoming learning phase. Additionally, the participants could familiarize themselves with the structure of the hypermedia learning environment (with different content) to anticipate which prompts they might need and at what point they would like to be supported by a prompt. At the end of the training phase, directly before the start of the learning phase, the participants created their own prompts. To do so, they were introduced to an example prompt listing major metacognitive learning activities. They were completely free in creating their prompts; i.e., they could choose to design prompts that were similar to the example prompt, loosely based on the example prompt, or completely different from the presented example prompt. However, they were asked to base the prompts on the training and thus mainly created metacognitive prompts. Additionally, the participants set timestamps to determine at which points within the 40-min learning phase they wished to receive the self-created prompt. The participants needed to set a minimum of 3 timestamps. By designing the self-created prompts in this way, the students could influence how they interacted with the prompts during the learning phase, the prompt itself, and the time of appearance.
In the second step, the participants received think-aloud training and started learning in the environment. The prompts were shown to the participants in the experimental condition during the learning phase at the times they had determined. The prompts were presented in a pop-up window displaying the self-created list of metacognitive learning activities; see Fig. 2 for an example. Students were expected to select one or more of the prompted learning activities that they wanted to enact next, submit their selection, and then continue learning in the hypermedia learning environment while performing the selected activity or activities (one after the other) during the time following the prompt.
The participants in the control condition received a different training that was irrelevant to the content of the study to ensure a similar work time and workload in both conditions. Specifically, they learned about the major criteria of an ergonomic workplace and how to design their own ergonomic workplace. They could also familiarize themselves with the structure of the hypermedia learning environment before the learning phase started, using the same content as that presented to the students in the experimental group. Then, they received think-aloud training and learned in the same hypermedia learning environment as the participants in the experimental condition, without any support from prompts.
Instruments and dependent variables
Learning process: navigation behavior
To investigate the learning process, we analyzed the recorded log files with regard to the students’ navigation behavior. The log files recorded all webpages visited by the participants as well as the time and duration of each visit. Moreover, the webpages of the learning environment were categorized into relevant and irrelevant pages (see the description of the learning environment). We analyzed the participants’ navigation behavior using three parameters derived from the information in the log files: (1) the relative frequency of relevant page visits (the number of relevant pages visited divided by the total number of pages visited), (2) the relative time spent on relevant pages (time spent on relevant pages divided by the total learning time), and (3) the frequency of linear navigation steps divided by the total number of navigation steps. The frequency of and time spent on relevant webpages provide insight into navigation behavior because they indicate the degree to which the students used self-regulation strategies to reach their learning goals, selected relevant webpages to study, and did not simply follow the progression suggested by the chapters in the hypermedia learning environment. The third parameter is based on the operationalization of strategic navigation behavior as the number of nonlinear node selections (Astleitner, 1997).
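The three parameters can be illustrated with a short sketch. The log format, page identifiers, and the operationalization of a "linear step" (moving to the immediately following page in the menu sequence) are assumptions for illustration, not the study's actual implementation:

```python
from typing import Dict, List, Set, Tuple

def navigation_measures(visits: List[Tuple[str, float]],
                        relevant: Set[str],
                        menu_order: List[str]) -> Dict[str, float]:
    """Compute the three navigation parameters from an ordered visit log.

    visits:     ordered (page_id, seconds_spent) pairs from the log file
    relevant:   page ids categorized as relevant to the learning goal
    menu_order: pages in the sequence suggested by the environment's menu
    """
    total_visits = len(visits)
    total_time = sum(t for _, t in visits)
    # (1) relevant visits / all visits; (2) relevant time / total time
    rel_freq = sum(1 for p, _ in visits if p in relevant) / total_visits
    rel_time = sum(t for p, t in visits if p in relevant) / total_time

    # (3) A step counts as linear if it moves to the immediately
    # following page in the menu sequence (assumed operationalization).
    index = {p: i for i, p in enumerate(menu_order)}
    steps = list(zip(visits, visits[1:]))
    linear = sum(1 for (a, _), (b, _) in steps
                 if a in index and b in index and index[b] == index[a] + 1)
    lin_ratio = linear / len(steps) if steps else 0.0
    return {"relative_relevant_visits": rel_freq,
            "relative_relevant_time": rel_time,
            "linear_step_ratio": lin_ratio}
```

A low linear-step ratio would then indicate the kind of strategic, goal-driven page selection described above, whereas a ratio near 1 would indicate passively paging through the chapters in order.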
Learning performance: knowledge tests
The learning outcomes were measured on three dimensions with three knowledge tests. The dimensions were based on Bloom’s taxonomy of cognitive learning objectives (Bloom, 1956) and concentrated on the three least complex components: recall, comprehension, and transfer. Recall was measured by asking the participants to write down the terms and concepts of the topic they studied during the learning phases (operant conditioning in learning session 1 and the psychology of motivation in learning session 2). The recall score was the number of correct terms and concepts mentioned by a participant. Comprehension was measured with a multiple-choice test consisting of 22 items on operant conditioning (learning session 1, α = .73) and 19 items on the psychology of motivation (learning session 2, α = .40), each with 1 correct and 3 false response options. Transfer was measured with open questions asking the participants to solve 8 prototypical problems in educational settings that were unknown to them by applying their knowledge of operant conditioning (learning session 1, α = .61) or the psychology of motivation (learning session 2, α = .65). Two research assistants independently rated each answer on a scale from 0 (no answer) to 5 (correct answer) for all data. The interrater reliability was good (learning session 1: Cohen’s Kappa = .84; learning session 2: Cohen’s Kappa = .74). In cases of disagreement between the two raters, an expert of the research team determined the final score.
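The interrater agreement statistic used here can be illustrated with a minimal, unweighted Cohen’s Kappa computation. The ratings below are invented for illustration; for an ordinal 0–5 rating scale, a weighted variant would arguably be more appropriate:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's Kappa for two raters coding the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # observed agreement: share of items on which both raters gave the same code
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement: product of the raters' marginal proportions per category
    cats = set(ratings_a) | set(ratings_b)
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented example ratings (not the study's data)
rater_1 = [5, 3, 3, 0, 5, 3, 0, 0]
rater_2 = [5, 3, 0, 0, 5, 3, 0, 3]
print(round(cohens_kappa(rater_1, rater_2), 2))  # prints 0.62
```

Kappa corrects the raw agreement rate for agreement expected by chance, which is why it is preferred over simple percent agreement for reliability reporting.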
Prompt utilization was measured by coding the think-aloud protocols of the participants in the experimental condition in the first learning session. Because both conditions involved learning without prompts in the second learning session, prompt utilization could be coded only in the first learning session. Coding was completed for each prompt appearance for every participant and then aggregated to yield one score per participant. The interpretation of each prompt was coded in one of two categories: either (a) a cue to reflect on current metacognitive learning activities or (b) a cue to enact a (potentially new) metacognitive learning activity presented in the prompt. We also included a residual category for cases in which the prompt utilization could not be coded or the learner did not react to the prompt at all. This categorization was developed inductively after we observed that the self-created prompts were used quite differently: (a) Some participants seemed to use the prompts as a reflection tool. In the think-aloud protocols, these participants mentioned which (meta-) cognitive strategies they were currently using and compared them with the metacognitive learning activities listed in their prompt. For example, one participant gave this statement while hovering with her cursor over an activity (“I check whether I am reaching my goal”) in the prompt window: “I am checking whether I am reaching my goal… Yeah, I actually did that already. So, yes [ticking that box].” (b) Other participants were reminded by their prompt to enact some of the (meta-) cognitive activities mentioned in the prompt. In the think-aloud protocols, these participants did not mention their current learning strategies, often abandoned what they were doing when the prompt appeared, and verbalized their plan to enact one or more of the activities suggested in the prompt.
For example, one participant gave the following statement at the time the prompt window opened: “[reading out loud:] An individual sees… [prompt window opens] Okay, learning activities. I’ll reflect the content learned so far [ticking that box] and afterwards I will get an overview of the material [ticking that box]. So, reflecting… I can say that the learning material is about…” This difference in the interpretation of the prompts (including a third category for unclear cases, which were excluded from the analysis) was coded by two independent coders (Cohen’s Kappa = .78). Because each participant worked with more than one prompt, a score was aggregated for each participant by calculating the mode of that participant’s codes. If the mode for a participant was the residual category, or if both categories were coded equally often, the participant was excluded from the analysis of prompt utilization.
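The aggregation rule described above can be sketched as follows (the category labels are illustrative, not the study’s coding scheme identifiers):

```python
from collections import Counter

REFLECT, ENACT, RESIDUAL = "reflection", "enactment", "residual"

def aggregate_prompt_codes(codes):
    """Return a participant's dominant prompt interpretation, or None if the
    participant is excluded (residual mode, or a tie between the categories)."""
    counts = Counter(codes)
    if counts[REFLECT] == counts[ENACT]:
        return None                      # tie between the two main categories
    mode = counts.most_common(1)[0][0]   # most frequent code across prompts
    return None if mode == RESIDUAL else mode

print(aggregate_prompt_codes([REFLECT, REFLECT, ENACT]))    # reflection
print(aggregate_prompt_codes([REFLECT, ENACT]))             # None (tie)
print(aggregate_prompt_codes([RESIDUAL, RESIDUAL, ENACT]))  # None (residual mode)
```

The two exclusion branches mirror the two exclusion criteria stated in the text: a residual mode and an equal number of codes in both main categories.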
The alpha level was set to 5% for all analyses. Because the hypotheses for research questions 1 and 2 were directional, we used one-tailed tests for the corresponding analyses. Research question 3 was open; therefore, we used two-tailed tests for the analyses regarding the third research question. To avoid a multiple-comparison problem when interpreting the individual comparisons, we applied a Bonferroni correction. For the first and second research questions, we tested six individual comparisons (three learning process parameters and three performance tests); thus, we corrected the significance level for the individual comparisons to 0.0083. For the third research question, we ran three comparisons per dataset for the individual measures of performance; thus, we set the significance level for the individual comparisons to 0.017.
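The corrected thresholds follow directly from dividing the alpha level by the number of individual comparisons; a quick check of the arithmetic:

```python
alpha = 0.05
# RQ1/RQ2: six individual comparisons (three navigation parameters,
# three performance tests)
alpha_rq1_rq2 = alpha / 6
# RQ3: three individual performance comparisons per dataset
alpha_rq3 = alpha / 3
print(round(alpha_rq1_rq2, 4), round(alpha_rq3, 3))  # 0.0083 0.017
```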
Effects of self-created prompts on the learning process
To analyze the potential effect of self-created prompts on the learning process, we compared the navigation behavior of the students in the experimental condition to the navigation behavior of the students in the control condition. We hypothesized that self-created prompts would increase the frequency and time the students devote to relevant pages and decrease the linearity in which the hypermedia learning environment is navigated compared with learning without prompts. Furthermore, we hypothesized that these effects would persist in a second learning session (stability test) in which the students under both conditions learned without prompts. Table 2 displays the descriptive values for all navigation parameters in the first and second learning sessions. A multivariate analysis of variance showed that the prompts facilitated navigation behavior in the first learning session with a large effect (F(3, 53) = 6.63, p < .001; Wilk’s Λ = 0.73, partial η2 = .27). The analysis of the individual parameters showed that the self-created prompts affected only the relative time spent on relevant webpages (see Table 2). We found no overall effect of prompts on the navigation behavior in the second learning session (F(3, 53) = 2.60, p = .062; Wilk’s Λ = 0.87, partial η2 = .13).
In summary, the results partly support our hypotheses regarding our first research question: The self-created metacognitive prompts supported the learning process in the first learning session; however, the effect could not be maintained in the second learning session. The positive effect could be found only for one navigation parameter, i.e., the relative time spent on relevant webpages, an individual effect that was also shown in the second learning session.
Effects of self-created prompts on learning performance
To analyze the potential effect of self-created prompts on learning performance, we analyzed students’ recall, comprehension, and transfer performance in the experimental condition and compared their learning outcomes with the students in the control condition. We hypothesized that self-created prompts would foster learning performance compared with learning without prompts. Furthermore, we hypothesized that these effects would persist in a second learning session in which the students in both conditions learned without prompts. Table 2 displays the descriptive values for all learning performance parameters in the first and second learning sessions.
A multivariate analysis of variance showed a large, significant overall effect of the self-created metacognitive prompts on performance in the first learning session (F(3, 53) = 3.78, p = .016; Wilk’s Λ = 0.82, partial η2 = .18). However, an analysis of the individual parameters showed no differences in recall, comprehension, or transfer (see Table 2). In the second learning session, the self-created metacognitive prompts did not show an overall effect on performance measures (F(3, 53) = 1.61, p = .198; Wilk’s Λ = 0.92, partial η2 = .08).
In summary, the results only partly support our hypotheses regarding our second research question: the self-created metacognitive prompts positively affected overall learning performance in the first learning session; however, the effect was not maintained in the second learning session, and the individual comparisons did not reveal a positive effect of the self-created metacognitive prompts on recall, comprehension, or transfer.
Effects of prompt utilization on short- and long-term learning performance
For a deeper understanding of how the self-created prompts could affect, or fail to affect, the students’ learning performance, we analyzed the think-aloud protocols to determine how the prompts were interpreted by the participants during learning. Furthermore, we analyzed whether possible effects persisted in the second learning session, in which the participants learned without prompts. Table 3 contains the results of this exploratory analysis. Approximately half of the participants (n = 11) used their prompts to reflect on their current metacognitive learning activities, i.e., as a reflection request. The other half (n = 12) used their prompts mainly as a cue to enact one or more prompted metacognitive learning activities, i.e., as a call to action without deeper reflection on their current learning activities.
Table 3 displays the descriptive results and individual differences in the parameters. There were no clear descriptive differences between the groups in terms of recall and comprehension. The descriptive difference in transfer performance in the first learning session favored the students who interpreted the prompts as a reflection tool. However, a multivariate analysis of variance showed no main effect of the interpretation of the self-created prompts on performance (F(3, 19) = 2.87, p = .064; Wilk’s Λ = 0.69, partial η2 = .31). The analysis of the second learning session showed a long-term effect of the interpretation of the self-created prompts on learning performance (F(3, 19) = 6.45, p = .003; Wilk’s Λ = 0.50, partial η2 = .50). In the individual comparisons, a significant main effect with a large effect size was found for transfer performance, in favor of the students who interpreted the self-created prompts as a reflection request.
In summary, the results regarding the third research question suggest that interpreting self-created prompts as a reflection request is more beneficial for learning performance, particularly long-term transfer performance, than interpreting them as a cue for action.
Do self-created metacognitive prompts promote short- and long-term effects in computer-based learning environments? The prior evidence supports the claims that metacognitive prompts generally can facilitate learning in CBLEs (e.g., Bannert et al., 2015; Belland et al., 2015; Bouchet et al., 2016; Delen et al., 2014; Kim & Pedersen, 2011; Winters et al., 2008; Zheng, 2016). We closely examined how a specific type of metacognitive prompt, self-created metacognitive prompts, may help students. Moreover, we investigated the effects of self-created prompts on not only students’ short-term learning performances but also students’ learning processes, i.e., how students navigate CBLEs and utilize the self-created metacognitive prompts and how sustainable the possible effects are in a second learning session without prompts. The results of this experimental study allow several conclusions to be drawn regarding the support of students in CBLEs with self-created metacognitive prompts.
As the results show, the beneficial effects of self-created metacognitive prompts on the learning process and learning performance are mixed. In addition to the significant effects, we found medium effect sizes for the relative time spent on relevant webpages in learning session 1 as well as descriptive advantages for comprehension in learning sessions 1 and 2 (see Table 2). Especially when the students’ prompt use was characterized by deep reflection (reflection request), we found even larger effect sizes for transfer in learning session 2 (see Table 3). The results are also interesting in comparison to studies testing different types of (metacognitive) prompts. In a next step, it would be necessary to empirically test the effect of self-created prompts against other metacognitive prompts. The result pattern of this study differs from that of earlier studies with comparable settings (e.g., Bannert et al., 2015; Bannert & Mengelkamp, 2013; Müller & Seufert, 2018), suggesting that the learning mechanism of self-created metacognitive prompts differs somewhat from that of prompts that students could not adapt (e.g., Müller & Seufert, 2018) or could adapt only to a lesser degree by setting the timing of the prompts (e.g., Bannert et al., 2015). In those prior studies, setting the timing of the prompts caused students to navigate to relevant webpages more often and for longer periods and increased the students’ transfer performance. In this study, the self-created prompts similarly caused students to stay longer on relevant webpages. However, in contrast to similar work (e.g., Bannert, 2007; Bannert et al., 2015; Bannert & Mengelkamp, 2013), the self-created prompts did not, in general, facilitate transfer performance.
Thus, the freedom to manipulate the content of the prompts themselves leads to a difference in the pattern of the results: the students in this study regulated their learning to a lesser degree based on the learning goals, and their performance suffered compared to that of the participants in studies that gave the students less freedom in designing their learning process.
We based the design of this study on the general assumption that metacognitive prompts enhance the process of self-regulation and thus induce metacognitive activities and initiate the deeper processing of information that is important for performance and mainly relevant for transfer performance. The results of this study allow us to hypothesize that the self-created metacognitive prompts might not have been sufficiently targeted to facilitate this process. The self-created prompts were intended to improve the poor-to-mixed compliance reported in earlier work (Bannert et al., 2015; Bannert & Mengelkamp, 2013; Clarebout & Elen, 2006; Randi & Corno, 2000; Schworm & Gruber, 2012). Our results suggest that the students might not have been sufficiently knowledgeable to create prompts that would best fit their learning process. While the students were introduced to the creation of prompts, they could not possibly have the same expertise as the researchers who created the prompts in most other studies. Thus, the students could not exploit the full potential of the self-created prompts. In future studies, a more in-depth training session should be given so that students better understand how they are going to study and which prompts would be helpful. Another approach for further investigation would be to allow the students to create or manipulate their metacognitive prompts while they engage with the learning materials.
An alternative explanation for the inconclusive results could lie in the content of the prompts. While the metacognitive prompts created by experts and the self-directed prompts target metacognitive processes, the prompts created by the students might (also) target other learning processes such as cognitive activities. The advantage of purely metacognitive prompts over alternative or mixed prompts is consistent with meta-analytic results showing that only metacognitive prompts significantly foster performance compared with conceptual, strategic, or multiple prompts (Zheng, 2016).
While we aimed to investigate self-created prompts by giving more freedom to the students, this approach also allowed less control over the content of the prompts in this experiment. This is a major limitation of the study: our conclusions are limited to the intended effect of the self-created prompts and, to a lesser degree, to the actual behavior of the students (except for the coded interpretation of the prompts, i.e., students’ prompt utilization). The research indicates that some students do not utilize prompts as intended (e.g., Clarebout & Elen, 2006; Clarebout, Elen, Collazo, Lust, & Jiang, 2013; Moser et al., 2017; Schworm & Gruber, 2012), which could influence their learning outcomes (e.g., Bannert et al., 2015). Our rather exploratory analysis investigated whether the participants interpreted the self-created prompts either as a cue to reflect on their current cognitive and metacognitive activities or as a cue to enact a learning activity suggested by the prompt. The results show a trend suggesting that interpreting the prompts as a cue to reflect on one’s own learning is more beneficial than using the prompts mainly as a starting cue for new learning activities. Such reflection seems to lead to higher transfer performance, with a strong effect in the second learning session. This result could support our assumption that self-created prompts indeed address a production deficit (Marschner et al., 2012; Veenman, 2007; Veenman et al., 2006), thus helping students to reflect upon their learning activities and subsequently improve the self-regulated learning process; however, this finding applies only to some students. The students who interpreted the prompts as a call to action might not understand their own learning process well enough to reflect on their current cognitive and metacognitive activities.
If this were the case, there was no production deficit (i.e., students possess the strategies but do not apply them spontaneously) to be addressed by the self-created prompts but rather a mediation deficit (i.e., students do not have the strategies), which cannot be addressed by this kind of prompt alone (Bannert, 2007; Veenman, 1993). This analysis might have revealed that the underlying assumption of using metacognitive prompts to facilitate the self-regulation process applies to only a subgroup of students: those who are affected by a production deficit (Marschner et al., 2012; Veenman, 2007; Veenman et al., 2006). The benefit of interpreting the self-created prompts as a reflection aid seems to contradict prior studies and meta-analyses (Mäeots et al., 2016; Van den Boom, Paas, van Merriënboer, & van Gog, 2004; Zheng, 2016) suggesting that reflection prompts are inferior to metacognitive prompts. However, these studies investigated only the design intended by the researchers and not the students’ interpretation of the prompts. The self-created prompts investigated in this study would probably be categorized as metacognitive prompts rather than reflection prompts; however, as discussed above, the students did not always interpret them as such. As assumed above, this interpretation probably depends on prior knowledge of self-regulation strategies. This interpretation is nevertheless limited by the exploratory nature of this work, and the pattern found here should be tested directly in future confirmatory studies.
One important limitation of this study is its design, which compared learning with self-created prompts to learning without prompts. This design was employed as a first step to investigate whether self-created prompts would be beneficial at all. For a closer examination of the mechanisms behind different types of prompts, the design would need to include at least one more condition in which the prompts were not self-created by the students. Furthermore, the study is limited by two methodological constraints. The training before the learning phase and the learning phase itself took only approximately 10 min and 40 min, respectively. Thus, the duration was comparable to one lesson in school or university but did not cover a longer period in which SRL processes could develop further; we must therefore consider that the SRL process might change over time, a development that could not be shown in this paper. In addition, Cronbach’s Alpha for the comprehension test in the second learning session was rather low (0.40), indicating low internal consistency of this knowledge test. All other performance tests showed higher internal consistencies, which we consider valid for testing the effect of the intervention.
This study expands our understanding of the support that prompts can give to students in CBLEs by showing the partial effects of self-created prompts on the learning process. Furthermore, the study extended prior efforts to analyze the impact of different prompts on learning (Zheng, 2016) by analyzing how differently the prompts are utilized and the effect of this utilization on learning.
Based on the results of this study, we recommend paying closer attention to students’ utilization of self-created prompts: What are the students doing with the prompts, and are they utilizing the prompts as intended by the researchers? While the self-creation of prompts might help students design prompts for their anticipated needs, this practice also introduces new difficulties that might hinder learning, such as students’ limited knowledge of the support they will need in a subsequent learning session. Moreover, we must further investigate the interrelationship between increasingly involving students in creating their own prompts and the differences in prompt utilization, as well as the effect of this factor on the learning process and performance outcomes.
The results warrant careful recommendations for the design and implementation of prompts in a learning environment. The study cannot directly support the prior recommendation (e.g., Azevedo, 2005; Bannert et al., 2015; Belland et al., 2015; Zheng, 2016) to design prompts that target metacognitive learning processes. However, it might be beneficial to ask students to reflect upon their current learning activities, as our exploratory results suggest that including reflection in the utilization of self-created prompts affects transfer performance, even more so in the long term.
In conclusion, we were able to show that self-created prompts partly facilitate the learning process in CBLEs. However, given the outcome patterns that differ from those of similar studies in which the students did not create the prompts themselves (e.g., Bannert et al., 2015; Bouchet et al., 2018; Daumiller & Dresel, 2018), we conclude that involving students in creating prompts may influence the way in which the prompts support the learning process. Similarly, our results show that the utilization of prompts affects students’ performance. While prompts that aim to support reflection did not yield a positive effect from a meta-analytic perspective (Zheng, 2016), self-created metacognitive prompts that were utilized to reflect upon current learning activities led to distinctly better transfer learning in this study. Thus, this study contributes to the body of literature investigating how students can be supported to enhance their learning process and performance outcomes when learning in CBLEs.
Availability of data and materials
Since we are still publishing original articles based on the data from this study, we are not making the data and materials available at the moment.
Anthonysamy, L., Koo, A. C., & Hew, S. H. (2020). Self-regulated learning strategies in higher education: Fostering digital literacy for sustainable lifelong learning. Education and Information Technologies, 25(4), 2393–2414. https://doi.org/10.1007/s10639-020-10201-8 .
Astleitner, H. (1997). Lernen in Informationsnetzen: Theoretische Aspekte und empirische Analysen des Umgangs mit neuen Informationstechnologien aus erziehungswissenschaftlicher Perspektive. Frankfurt/M.: Lang.
Azevedo, R. (2005). Using hypermedia as a metacognitive tool for enhancing student learning? The role of self-regulated learning. Educational Psychologist, 40(4), 199–209. https://doi.org/10.1207/s15326985ep4004_2 .
Azevedo, R., & Aleven, V. (Eds.) (2013). International handbook of metacognition and learning technologies (Springer international handbooks of education). New York: Springer. https://doi.org/10.1007/978-1-4419-5546-3 .
Azevedo, R., & Cromley, J. G. (2004). Does training on self-regulated learning facilitate students’ learning with hypermedia? Journal of Educational Psychology, 96(3), 523–535. https://doi.org/10.1037/0022-0663.96.3.523 .
Azevedo, R., Cromley, J. G., Moos, D. C., Greene, J. A., & Winters, F. I. (2011). Adaptive content and process scaffolding: A key to facilitating students’ self-regulated learning with hypermedia. Psychological Test and Assessment Modeling, 53(1), 106–140.
Azevedo, R., Moos, D. C., Johnson, A. M. M. Y., & Chauncey, A. D. (2010). Measuring cognitive and metacognitive regulatory processes during hypermedia learning: Issues and challenges. Educational Psychologist, 45(4), 210–223. https://doi.org/10.1080/00461520.2010.515934 .
Azevedo, R., Taub, M., & Mudrick, N. (2018). Understanding and reasoning about real-time cognitive, affective, and metacognitive processes to foster self-regulation with advanced learning technologies. In D. H. Schunk, & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance, (2nd ed., pp. 254–270). Routledge.
Bannert, M. (2007). Metakognition beim Lernen mit Hypermedien: Erfassung, Beschreibung und Vermittlung wirksamer metakognitiver Strategien und Regulationsaktivitäten (Zugl.: Koblenz, Univ., Habil.-Schr., 2004; Pädagogische Psychologie und Entwicklungspsychologie: Vol. 61). Waxmann. http://deposit.d-nb.de/cgi-bin/dokserv?id=2993278&prov=M&dok_var=1&dok_ext=htm.
Bannert, M. (2009). Promoting self-regulated learning through prompts. Zeitschrift Für Pädagogische Psychologie, 23(2), 139–145. https://doi.org/10.1024/1010-0652.23.2.139 .
Bannert, M., & Mengelkamp, C. (2013). Scaffolding hypermedia learning through metacognitive prompts, Springer international handbooks of education. In R. Azevedo, & V. Aleven (Eds.), International handbook of metacognition and learning technologies, (vol. 28, pp. 171–186). New York: Springer. https://doi.org/10.1007/978-1-4419-5546-3_12 .
Bannert, M., & Reimann, P. (2011). Supporting self-regulated hypermedia learning through prompts. Instructional Science, 40(1), 193–211. https://doi.org/10.1007/s11251-011-9167-4 .
Bannert, M., Sonnenberg, C., Mengelkamp, C., & Pieger, E. (2015). Short- and long-term effects of students’ self-directed metacognitive prompts on navigation behavior and learning performance. Computers in Human Behavior, 52, 293–306. https://doi.org/10.1016/j.chb.2015.05.038 .
Belland, B. R., Walker, A. E., Olsen, M. W., & Leary, H. (2015). A pilot meta-analysis of computer-based scaffolding in STEM education. Journal of Educational Technology & Society, 18(1), 183–197.
Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction, 17(5), 564–577. https://doi.org/10.1016/j.learninstruc.2007.09.007 .
Bloom, B. S. (1956). Taxonomy of educational objectives. In B. S. Bloom, M. D. Engelhart, E. J. Furst, W. H. Hill, & D. R. Krathwohl (Eds.), Handbook I: Cognitive domain (pp. 20–24). New York: Longmans.
Bouchet, F., Harley, J. M., & Azevedo, R. (2016). Can adaptive pedagogical agents’ prompting strategies improve students’ learning and self-regulation? In A. Micarelli, J. Stamper, & K. Panourgia (Eds.), Lecture notes in computer science: Vol. 9684. Intelligent tutoring systems: 13th international conference, ITS 2016, Zagreb, Croatia, June 7-10, 2016. Proceedings, (vol. 9684, pp. 368–374). Springer International Publishing. https://doi.org/10.1007/978-3-319-39583-8_43 .
Bouchet, F., Harley, J. M., & Azevedo, R. (2018). Evaluating adaptive pedagogical agents’ prompting strategies effect on students’ emotions. In R. Nkambou, R. Azevedo, & J. Vassileva (Eds.), Lecture notes in computer science. Intelligent tutoring systems, (vol. 10858, pp. 33–43). Springer International Publishing. https://doi.org/10.1007/978-3-319-91464-0_4 .
Clarebout, G., & Elen, J. (2006). Tool use in computer-based learning environments: Towards a research framework. Computers in Human Behavior, 22(3), 389–411. https://doi.org/10.1016/j.chb.2004.09.007 .
Clarebout, G., Elen, J., Collazo, N. A. J., Lust, G., & Jiang, L. (2013). Metacognition and the use of tools. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (Vol. 28, pp. 187–195). New York: Springer.
Daumiller, M., & Dresel, M. (2018). Supporting self-regulated learning with digital media using motivational regulation and metacognitive prompts. The Journal of Experimental Education, 87(1), 161–176. https://doi.org/10.1080/00220973.2018.1448744.
Delen, E., Liew, J., & Willson, V. (2014). Effects of interactivity and instructional scaffolding on learning: Self-regulation in online video-based environments. Computers & Education, 78, 312–320. https://doi.org/10.1016/j.compedu.2014.06.018 .
Hilbert, T. S., Nückles, M., Renkl, A., Minarik, C., Reich, A., & Ruhe, K. (2008). Concept mapping zum Lernen aus Texten. Zeitschrift Für Pädagogische Psychologie, 22(2), 119–125. https://doi.org/10.1024/1010-0652.22.2.119 .
Hsu, Y.-S., Wang, C.-Y., & Zhang, W.-X. (2017). Supporting technology-enhanced inquiry through metacognitive and cognitive prompts: Sequential analysis of metacognitive actions in response to mixed prompts. Computers in Human Behavior, 72, 701–712. https://doi.org/10.1016/j.chb.2016.10.004 .
Jansen, R. S., van Leeuwen, A., Janssen, J., Conijn, R., & Kester, L. (2020). Supporting learners’ self-regulated learning in massive open online courses. Computers & Education, 146, 103771. https://doi.org/10.1016/j.compedu.2019.103771 .
Kauffman, D. F., Ge, X., Xie, K., & Chen, C.-H. (2008). Prompting in web-based environments: Supporting self-monitoring and problem solving skills in college students. Journal of Educational Computing Research, 38(2), 115–137. https://doi.org/10.2190/EC.38.2.a .
Kim, H. J., & Pedersen, S. (2011). Advancing young adolescents’ hypothesis-development performance in a computer-supported and problem-based learning environment. Computers & Education, 57(2), 1780–1789.
Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2017). Self-regulated learning strategies predict learner behavior and goal attainment in massive open online courses. Computers & Education, 104, 18–33. https://doi.org/10.1016/j.compedu.2016.10.001 .
Kramarski, B., & Friedman, S. (2014). Solicited versus unsolicited metacognitive prompts for fostering mathematical problem solving using multimedia. Journal of Educational Computing Research, 50(3), 285–314. https://doi.org/10.2190/EC.50.3.a .
Lai, C.-L., & Hwang, G.-J. (2016). A self-regulated flipped classroom approach to improving students’ learning performance in a mathematics course. Computers & Education, 100, 126–140. https://doi.org/10.1016/j.compedu.2016.05.006 .
Lallé, S., Taub, M., Mudrick, N. V., Conati, C., & Azevedo, R. (2017). The impact of student individual differences and visual attention to pedagogical agents during learning with metatutor. In E. André, R. Baker, X. Hu, M. M. T. Rodrigo, & B. Du Boulay (Eds.), Lecture notes in computer science. Artificial intelligence in education, (vol. 10331, pp. 149–161). Springer International Publishing. https://doi.org/10.1007/978-3-319-61425-0_13 .
Mäeots, M., Siiman, L., Kori, K., Eelmets, M., Pedaste, M., & Anjewierden, A. (2016). The role of a reflection tool in enhancing students’ reflection. In L. Gómez Chova, A. López Martínez, & I. Candel Torres (Eds.), INTED proceedings, INTED2016 proceedings, (pp. 1892–1900IATED). https://doi.org/10.21125/inted.2016.1394 .
Marschner, J., Thillmann, H., Wirth, J., & Leutner, D. (2012). Wie lässt sich die Experimentierstrategie-Nutzung fördern? Zeitschrift für Erziehungswissenschaft, 15(1), 77–93. https://doi.org/10.1007/s11618-012-0260-5 .
Michalke, M. (2015). koRpus (version 0.06-3) [computer software] http://reaktanz.de/?c=hacking&s=koRpus.
Molenaar, I., Roda, C., van Boxtel, C., & Sleegers, P. (2012). Dynamic scaffolding of socially regulated learning in a computer-based learning environment. Computers & Education, 59(2), 515–523. https://doi.org/10.1016/j.compedu.2011.12.006 .
Moser, S., Zumbach, J., & Deibl, I. (2017). The effect of metacognitive training and prompting on learning success in simulation-based physics learning. Science Education, 101(6), 944–967. https://doi.org/10.1002/sce.21295 .
Müller, N. M., & Seufert, T. (2018). Effects of self-regulation prompts in hypermedia learning on learning performance and self-efficacy. Learning and Instruction, 58, 1–11. https://doi.org/10.1016/j.learninstruc.2018.04.011 .
Nückles, M., Schwonke, R., Berthold, K., & Renkl, A. (2004). The use of public learning diaries in blended learning. Journal of Educational Media, 29(1), 49–66. https://doi.org/10.1080/1358165042000186271 .
Pieger, E., & Bannert, M. (2018). Differential effects of students’ self-directed metacognitive prompts. Computers in Human Behavior, 86, 165–173. https://doi.org/10.1016/j.chb.2018.04.022 .
Randi, J., & Corno, L. (2000). Teacher innovations in self-regulated learning. In M. Boekaerts, M. Zeidner, & P. R. Pintrich (Eds.), Handbook of self-regulation, (pp. 651–685). Academic. https://doi.org/10.1016/B978-012109890-2/50049-4 .
Renner, B., Prilla, M., Cress, U., & Kimmerle, J. (2016). Effects of prompting in reflective learning tools: Findings from experimental field, lab, and online studies. Frontiers in Psychology, 7, 820. https://doi.org/10.3389/fpsyg.2016.00820 .
Sambe, G., Bouchet, F., & Labat, J.-M. (2017). Towards a conceptual framework to scaffold self-regulation in a MOOC. In C. M. F. Kebe, A. Gueye, & A. Ndiaye (Eds.), Lecture notes of the Institute for Computer Sciences, social informatics and telecommunications engineering. Innovation and interdisciplinary solutions for underserved areas, (vol. 204, pp. 245–256). Springer International Publishing. https://doi.org/10.1007/978-3-319-72965-7_23.
Schunk, D. H., & Zimmerman, B. J. (1998). Self-regulated learning: From teaching to self-reflective practice. Guilford Press http://www.loc.gov/catdir/bios/guilford051/97046438.html.
Schwonke, R., Hauser, S., Nückles, M., & Renkl, A. (2006). Enhancing computer-supported writing of learning protocols by adaptive prompts. Computers in Human Behavior, 22(1), 77–92. https://doi.org/10.1016/j.chb.2005.01.002 .
Schworm, S., & Gruber, H. (2012). E-learning in universities: Supporting help-seeking processes by instructional prompts. British Journal of Educational Technology, 43(2), 272–281. https://doi.org/10.1111/j.1467-8535.2011.01176.x .
Stark, R., & Krause, U.-M. (2009). Effects of reflection prompts on learning outcomes and learning behaviour in statistics education. Learning Environments Research, 12(3), 209–223. https://doi.org/10.1007/s10984-009-9063-x .
Stark, R., Tyroller, M., Krause, U.-M., & Mandl, H. (2008). Effekte einer metakognitiven Promptingmaßnahme beim situierten, beispielbasierten Lernen im Bereich Korrelationsrechnung [Effects of a metacognitive prompting intervention in situated, example-based learning of correlational statistics]. Zeitschrift für Pädagogische Psychologie, 22(1), 59–71. https://doi.org/10.1024/1010-0652.22.1.59
Thillmann, H., Künsting, J., Wirth, J., & Leutner, D. (2009). Is it merely a question of “what” to prompt or also “when” to prompt? Zeitschrift für Pädagogische Psychologie, 23(2), 105–115. https://doi.org/10.1024/1010-0652.23.2.105
van Alten, D. C. D., Phielix, C., Janssen, J., & Kester, L. (2020). Effects of self-regulated learning prompts in a flipped history classroom. Computers in Human Behavior, 108, 106318. https://doi.org/10.1016/j.chb.2020.106318
van den Boom, G., Paas, F., van Merriënboer, J. J. G., & van Gog, T. (2004). Reflection prompts and tutor feedback in a web-based learning environment: Effects on students’ self-regulated learning competence. Computers in Human Behavior, 20(4), 551–567. https://doi.org/10.1016/j.chb.2003.10.001
Veenman, M. V. (1993). Metacognitive ability and metacognitive skill: Determinants of discovery learning in computerized learning environments. European Journal of Psychology of Education, 29(1), 117–137.
Veenman, M. V. J. (2007). The assessment and instruction of self-regulation in computer-based environments: A discussion. Metacognition and Learning, 2(2-3), 177–183. https://doi.org/10.1007/s11409-007-9017-6
Veenman, M. V. J., van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14. https://doi.org/10.1007/s11409-006-6893-0
Winters, F. I., Greene, J. A., & Costich, C. M. (2008). Self-regulation of learning within computer-based learning environments: A critical analysis. Educational Psychology Review, 20(4), 429–444. https://doi.org/10.1007/s10648-008-9080-9
Wong, J., Khalil, M., Baars, M., de Koning, B. B., & Paas, F. (2019). Exploring sequences of learner activities in relation to self-regulated learning in a massive open online course. Computers & Education, 140, 103595. https://doi.org/10.1016/j.compedu.2019.103595
Zheng, L. (2016). The effectiveness of self-regulated learning scaffolds on academic performance in computer-based learning environments: A meta-analysis. Asia Pacific Education Review, 17(2), 187–202. https://doi.org/10.1007/s12564-016-9426-9
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183. https://doi.org/10.3102/0002831207312909
We would like to thank Dr. Christoph Sonnenberg for his invaluable contributions in collecting, analyzing, and discussing the data as well as the student assistants Anna Horrer, Stefanie Beck, and Veronika Danner for their help in collecting the data, coding the data, and preparing the manuscript. This research was funded by the German Research Foundation (BA 2044/7-2).
After agreeing to fund the studies (before any data collection), the German Research Foundation had no role in the design of the study; in the collection, analysis, or interpretation of the data; or in the writing of the manuscript.
Ethics approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
Competing interests
The authors declare that they have no conflict of interest.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Engelmann, K., Bannert, M., & Melzner, N. Do self-created metacognitive prompts promote short- and long-term effects in computer-based learning environments? RPTEL 16, 3 (2021). https://doi.org/10.1186/s41039-021-00148-w
Keywords
- Self-created prompts
- Metacognitive prompts
- Self-regulated learning
- Knowledge acquisition
- Long-term effects