
Supporting knowledge monitoring ability: open learner modeling vs. open social learner modeling

Abstract

Research has demonstrated that people generally think both their knowledge and performance levels are greater than they are. Although several studies have suggested that knowledge and progress visualization offered by open learner modeling (OLM) technology might influence students’ self-awareness in a positive way, insufficient evidence exists to show that this is the case. This paper examines the effects of open learner modeling and its extension with social comparison features, known as open social learner modeling (OSLM), on students’ knowledge monitoring abilities. We report the results of two semester-long classroom studies, using subjects who were undergraduate and graduate students in Java Programming and Database Management courses at the University of Pittsburgh. During their studies, the students were able to use different versions of an online practice system equipped with both OLM and OSLM. The students’ knowledge monitoring abilities were examined in two ways: through absolute and relative assessments. According to the results, although in both OLM and OSLM groups the students’ absolute knowledge monitoring ability increased during the semester-long study, relative self-assessment ability (i.e., their ability to compare their own knowledge levels with the knowledge levels of their peers) only increased in the OSLM group. The authors also traced relationships between the students’ academic achievement and their absolute and relative knowledge monitoring abilities.

Introduction

Many studies have shown that people generally overestimate their knowledge, skills, and/or performance. The ability to estimate one’s own knowledge has been explored in different disciplines, using different terms such as knowledge monitoring ability, feeling of knowing, metamemory, and self-awareness (Tobias and Fletcher, 2000; Koriat, 1993; Nelson, 1990; Zimmerman, 2002). This ability is subject to a well-studied cognitive bias, known as overconfidence or the Dunning-Kruger effect, which is characterized by an overestimation of one’s actual abilities and chance of being successful, the belief that others are worse than oneself, and lack of hesitation in professing the correctness of personal beliefs (Moore and Healy, 2008). Knowledge monitoring ability has important implications in a range of educational contexts, from the schooling process to training at workplaces (Tobias and Fletcher, 2000), because it allows students to know what information and skills must be changed and/or improved (Clayson, 2005) to close the gap between the current and desired performance (Sadler, 1989).

The ability to estimate one’s own knowledge or performance may be evaluated using absolute and relative assessments. While an absolute assessment focuses on external criteria, such as task requirements, a relative assessment depends on the distribution of the scores of participants. An absolute assessment is very important to understand an individual’s knowledge/skill/performance levels that are necessary for successfully completing any task or job. Tobias and Everson (2002) developed a widely accepted absolute knowledge monitoring assessment (KMA) method, which has been used in several studies in various domains. This method is used to compute the differences between students’ actual performance and their own confidence. In addition to criterion-based absolute methods, people evaluate their knowledge and abilities by comparing themselves to other people (Suls, 1977). Festinger (1954) pioneered the use of social comparisons to accurately assess one’s abilities. Relative assessments are important to determine an individual’s eligibility for specific tasks/training and/or to discover who performs better. Comparing students’ actual performances relative to others with their self-reported judgments is also considered to be an important relative knowledge monitoring assessment method (Kruger & Dunning, 1999). The accuracy of both assessments is critical for students to be aware of their learning needs, to set more realistic goals, and to make better decisions about what topics to study (Somyürek & Çelik, 2018).

Since it is commonly believed that the ability to monitor one’s knowledge lies at the very heart of self-regulated learning, supporting this ability is crucial in the design of learning environments. This ability is even more important in e-learning environments than in face-to-face settings because, in e-learning, students must decide where to go next, how to learn, and which learning strategies to use (Williams, 1996). Open learner modeling (OLM), in which the learner’s model is visible and accessible (Bull & Kay, 2010; Baker, 2016; Jivet et al., 2018), is considered an important mechanism in e-learning environments for increasing students’ self-awareness. One of the recent extensions of OLM, known as open social learner modeling (OSLM), provides each student with the opportunity to examine their peers’ knowledge and progress in addition to their own (Hsiao et al., 2013; Loboda et al., 2014). Using OSLM, students can arrive at a better assessment of their performance or progress by comparing their capabilities with those of others.

Although some studies report encouraging results suggesting that OLM can positively affect students’ awareness, insufficient evidence exists to firmly draw this conclusion. For example, Govaerts, Verbert, and Duval (2011) designed a visualization tool intended to support awareness and self-monitoring and conducted two case studies. They collected data on the subjectively perceived usefulness of the tool from teachers, and their results indicate that the teachers reported being considerably more aware of what their students were doing. Kerly, Ellis, and Bull (2008); Mitrovic and Martin (2007); and Suleman, Mizoguchi, and Ikeda (2016) have focused on the effects of OLM on learners’ self-assessment accuracy. However, these studies include neither a KMA assessment nor a relative assessment. None of these studies examined an OSLM interface, and consequently, they could not compare the effects of OLM and OSLM on knowledge monitoring. The purpose of this study is to fill this gap by comparatively examining the effects of OLM and its extension, OSLM, on students’ knowledge monitoring abilities in two separate but similar contexts.

Theoretical background

Delusion in subjective judgments is one of the most significant problems concerning the reasoning process (Nickerson, 1998). This has been examined in many studies from the perspectives of disciplines such as psychology, finance, and education (Somyürek & Çelik, 2018). Psychology studies generally focus on why people overestimate their own abilities and which factors affect their delusions. Financial studies address the role of overconfidence in marketing, investing, or risky financial behaviors. In education, many researchers have focused on measuring this cognitive bias in a more specific manner, with an awareness of the extent and quality of the subjects’ knowledge, or investigated it in relation to academic achievement. This cognitive bias is closely associated with students’ ability to assess their own levels of knowledge; thus, understanding it is crucial to the promotion of effective learning (Mitrovic & Martin, 2007). Expert learners, who are characterized as strategic and self-regulated (Ertmer & Newby, 1996), are aware of their strengths and weaknesses in the context of specific task requirements; this helps them to choose and apply appropriate strategies to achieve their goals (Isaacson & Fujita, 2006). Effective learners may evaluate the quality of their work more frequently (Lan, 1998), which will generally lead them to become more successful. Kruger and Dunning (1999) demonstrated that in addition to the general tendency of people to be overly optimistic about themselves, their exaggerated perceptions of their own performance increase along with a decrease in actual performance. Kruger and Dunning hypothesized that less skilled individuals have weaker metacognitive abilities, and as a result, cannot realize the truly low levels of their actual knowledge and performance.

An important part of metacognitive knowledge is an individual’s awareness of her/his own ability levels (Stankov & Crawford, 1996; Pintrich, 2002). Today, due to the constant need to update knowledge and skills, it is even more important for individuals to be responsible for their own learning and to have a requisite knowledge monitoring ability (Somyürek & Çelik, 2018). This is also critical in distributed and open learning environments due to the effects on students’ decisions and behaviors. Steiner, Götz, and Stieglitz (2013) stated that students’ unrealistic optimism about their own knowledge leads to insufficient efforts and avoidance of using some e-learning components. Because students may believe they already understand the content (although this is not true), they might underestimate the time needed to study and ignore the self-assessment questions that are frequently available in e-learning environments. As a result, imperfect knowledge monitoring abilities may cause several problems in the learning process.

The applications or interfaces in e-learning environments that include personal informatics, such as users’ knowledge and progress, could enhance students’ self-knowledge (Jivet et al., 2018; Verbert et al., 2013). Adaptive e-learning systems are also important learning environments that could potentially increase the self-knowledge of their users (Somyürek & Brusilovsky, 2015). An essential part of every adaptive e-learning system is the student model, which typically is an internal representation of a student’s knowledge. The student model is maintained using up-to-date observation of a learner’s activities and performance and is used to provide various adaptation effects, for example, by adapting content and navigation support in a system to the user’s current knowledge levels (Kay, 2000). While the majority of adaptive systems hide student models from the users and employ them internally within the system for adaptive interventions, open learner modeling (OLM) refers to various approaches to make some parts of the student model visible (Weber & Brusilovsky, 2001; Bull, Brna, & Pain, 1995; Bull & Kay, 2007). The popularity of OLM is due to its strong pedagogical foundation and positive results reported by several education studies (Ferreira et al., 2019). Existing studies indicate that OLM could provide substantial metacognitive support (Bull & Kay, 2013; Bull & Wasson 2016; Hsiao & Brusilovsky, 2017). However, the effects of OLM on students’ knowledge monitoring ability have not been extensively examined.

There is a growing interest in open social learner modeling, which is used to make peer models visible to students as well as the student’s own model. OSLM is based upon Festinger’s Social Comparison Theory, which has been frequently researched in social psychology studies (Corcoran, Crusius, & Mussweiler, 2011, p. 119). According to the Social Comparison Theory, “self-knowledge is fulfilled [by] not just getting information about oneself but also comparing oneself to another” (Buunk & Gibbons, 2007). OSLM researchers use ideas from the Social Comparison Theory to design and develop e-learning systems and externalization methods for learner models (Guerra, Hosseini, Somyurek, & Brusilovsky, 2016; Hsiao & Brusilovsky, 2017).

The externalization of a learner’s own model, an aggregated group model, or peer models is also employed in learning analytics dashboards. This research area is mainly focused on visualizing (presenting) student information and sharing it with all stakeholders such as instructors, teachers, peers, and parents through dashboards (Bodily et al., 2018). These dashboards could contain various panels of visualized indicators for monitoring knowledge (Bodily et al., 2018; Majumdar et al., 2019; Jivet et al., 2018; Verbert et al., 2013). Although learning analytics do not necessarily feed a learner model, nor offer inferences about unobserved knowledge or skill levels, this research area provides useful empirical evidence for OLM/OSLM usage.

Method

We explored the effects of OLM/OSLM on students’ knowledge monitoring abilities in two different classroom studies. The studies were conducted in two domains (i.e., university courses), one in Java Programming and the other in Database Management. A pre-test/post-test control group design was used in both studies, and we collected the same data to compute both absolute and relative knowledge monitoring abilities. Both the Java Programming and Database Management classes were divided into two groups, and the students were randomly assigned to one of the groups. Students in the first group studied with an e-learning system that included only OLM functions, and the second group studied with an e-learning system that also included OSLM functions. The same instructors taught both groups within each class.

The participants

The participants in the first study (Java Programming) were undergraduate students, and the participants in the second study (Database Management) were Masters-level students, who were taking regular semester-long courses in the School of Information Sciences at the University of Pittsburgh. The e-learning system was introduced to them as a free practice system, i.e., its use was not mandatory. Some of the students never logged in, and some of them used the system only a few times. In study 1, we assigned 26 and 29 students to the OLM and OSLM groups, respectively. In study 2, we assigned 49 and 53 students to the OLM and OSLM groups, respectively. However, six students in study 1 and 14 students in study 2 never logged in to the system, so we excluded their data. Among the remaining 49 students in study 1 and 88 students in study 2, only those who solved at least five problems (which we considered to be a sufficient amount of practice with the system to be affected by its features) were included in the analysis. After discarding those students with no or very low activity in the system, there were 44 students in study 1 and 43 students in study 2. Table 1 displays the descriptive statistics of the study participants.

Table 1 Descriptive statistics of the participants

The e-learning system

Both studies used an e-learning practice system called Mastery Grids (Loboda et al., 2014), which was developed by the Personalized Adaptive Web Systems (PAWS) Lab in the School of Information Sciences at the University of Pittsburgh (Fig. 1). This system has been used and evaluated in several studies that have demonstrated its usability, efficiency, and effectiveness (Loboda, Guerra, Hosseini, & Brusilovsky, 2014; Brusilovsky et al., 2016; Guerra et al., 2016). The system offers access to practice-oriented learning content in the form of work examples and problems, and it includes adaptive navigation functions that can support learners by informing them about their learning process and performance. In study 1, the learning content covered the Java programming language. In study 2, the learning content was related to SQL programming, which constitutes a considerable part of the Database Management course.

Fig. 1 The e-learning system interface. To access a learning activity (a question or a work example), the students click on one of the content cells inside the selected topic. The figure shows a work example from a general constraints topic, which was opened by a student who is now examining comments in its second line. All activities open in a separate frame on top of the OLM/OSLM interface

Two versions of the Mastery Grids interface were used to study the effects of OLM/OSLM. The first version, referred to as OLM, provides a learning dashboard that includes only OLM functions, which visually present to a student a model of their corresponding Java or SQL knowledge. In addition to its role as an OLM, this visual representation (Fig. 2) is also used for navigation to the learning content. In this interface, each cell in the top row represents one of the content topics. The color of the cell indicates the current knowledge level of the target student for this topic. If the student has no knowledge of the topic, the cell color is grey. With increases in the student’s knowledge, the color becomes greener, progressing from light to dark green. Students can also view their progress in terms of percentiles for each topic by mousing over the grid cells. A click on a topic cell “opens” the topic and provides access to practice content for the topic, which is shown in two or more rows of slightly smaller cells grouped by content type (see the square with rows of “Examples” and “Quizzes” in Fig. 2). In each topic, there are several work examples that provide solutions for given problems and several problems to practice. The color of the content cells reflects the progression of the student’s knowledge for a specific portion of the content. A click on a content cell opens this item for practice.

Fig. 2 The OLM interface

The design of Mastery Grids is based on both self-regulated learning and the Social Comparison Theory. The OLM functions were designed to reflect the main ideas of the Self-regulated Learning Theory, which defines the learner as an active participant who can monitor her/his learning process and can find a way to succeed when s/he encounters obstacles (Zimmerman, 1990). The OLM interface includes self-monitoring tools to enable students to recognize when they have mastered content, and it shows the progress of the user across different learning content (solved examples and questions) in each topic (shown in Fig. 2).

The second version of the interface, called OSLM, provides access to the full Mastery Grids, which combine OLM with social comparison functions that are based on the Social Comparison Theory. According to the Social Comparison Theory, “people seek accurate knowledge of [the] self, and to find it they compare themselves with others” (Krueger, 2000, p.323). As seen in Fig. 3, the OSLM interface offers two additional rows in the topic grid. The third row is the Group row, which shows the average progress of the peer group (which in our studies was the whole class) for each topic. In this row, the topic color progresses from gray (no group progress for the topic) to blue. With increases in group progress, the topic colors become more intense (darker). The second row of the grid, called “Me vs. Group,” is used to compare a student’s own knowledge progress with the group’s progress. The color gradient in the second row represents the difference between the user and the group and ranges from dark blue to dark green. If the average progress of the class is higher than the student’s progress, the grid color becomes blue; the more intense it is, the more a student lags behind the class. If the user is ahead of the class average for this topic, the comparison color is shown as green; the more intense the color is, the further ahead is the student. This representation of the difference was designed to maintain consistency with the color coding of the first and third rows.
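To illustrate the color coding described above, the sketch below maps a progress level and the me-versus-group difference to colors. The RGB endpoints and the linear interpolation are illustrative assumptions only; they are not the actual color values used by Mastery Grids:

    def progress_color(level, base=(46, 125, 50)):
        """Map a progress level in [0, 1] to an RGB color.

        level 0 -> grey (no progress); level 1 -> full intensity of `base`
        (green for the learner's own row, blue for the group row).
        The grey and base RGB values are illustrative assumptions.
        """
        grey = (224, 224, 224)
        return tuple(round(g + (b - g) * level) for g, b in zip(grey, base))

    def comparison_color(me, group):
        """Color the 'Me vs. Group' row: green when the learner is ahead,
        blue when the class average is ahead, grey when they are equal."""
        diff = me - group                                  # in [-1, 1]
        if diff > 0:                                       # learner ahead -> green, darker with a larger lead
            return progress_color(diff, base=(46, 125, 50))
        if diff < 0:                                       # class ahead -> blue, darker with a larger lag
            return progress_color(-diff, base=(21, 101, 192))
        return progress_color(0)                           # equal progress -> grey

    # e.g. a learner at 0.8 progress in a topic where the class average is 0.5
    print(comparison_color(0.8, 0.5))   # a medium-intensity green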

Fig. 3 The OSLM interface

To offer a more detailed social comparison, the OSLM version offers a “Load the rest of the learners” button. A click on this button opens a detailed student-by-student progress visualization, in which the students are listed in descending progress order (see Fig. 4). Neither names nor identifiers are shown. The student’s exact position in the list is also shown in this interface in green.

Fig. 4 The student list in the OSLM interface

The data collection tools

An assessment tool was used to measure the students’ academic achievement and knowledge monitoring abilities in each study. In the first study, this tool consisted of 12 questions about Java Programming, which were administered as both a pre-test and a post-test. In these 12 questions, we asked the students to write short answers for given Java code, as shown in the following example:

    public class MyTester {
        public static void main(String[] args) {
            int i = 14;
            int j = 20;
            int k;
            k = j / i * 7 % 4;
        }
    }

What is the final value of the variable k: _________

For each question in the Java pre- and post-tests, the students were asked to report their confidence. This was done by asking them whether they were able to solve the question with a yes/no prompt. The last question in this tool was used to ask the students to estimate their test results percentile, which indicates how many students have scored lower than the target student on the given test. This tool was used in both the pre-test and the post-test. To evaluate the students’ knowledge monitoring ability scores, the knowledge monitoring assessment (KMA) method developed by Everson and Tobias (1998) was used. This measurement method can be used in various domains to evaluate the differences between a student’s actual performance and his/her confidence regarding each question. According to this method, the following four scores (a, b, c, d) were generated (Everson and Tobias, 1998).

In Table 2, the a and d scores count accurate judgments: a is the number of questions the student expected to answer correctly and did answer correctly, and d is the number of questions s/he expected to answer incorrectly and did answer incorrectly. The b and c scores count inaccurate judgments, in which the student’s confidence did not match the correctness of the answer (expecting to be correct but answering incorrectly, or expecting to be incorrect but answering correctly). With these four scores, the KMA score was computed using the following formula:

Table 2 Scores generated according to the KMA method

KMA = ((a + d) − (b + c))/total questions

The KMA score ranges from 1 to − 1, where 1 indicates perfect knowledge monitoring (every confidence judgment matched the actual outcome) and − 1 indicates that none of the student’s judgments matched her/his actual performance.
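For readers who wish to reproduce the scoring, the following sketch computes the four counts and the KMA value from per-question confidence and correctness. The variable names are ours, not part of the original instrument, and the split between b and c follows the common convention (only their sum enters the formula):

    def kma_score(confident, correct):
        """Compute the Tobias/Everson-style KMA score.

        confident[i] is True if the student said 'Yes, I can solve question i';
        correct[i] is True if the answer to question i was actually correct.
        a and d count accurate judgments; b and c count inaccurate ones.
        """
        assert len(confident) == len(correct)
        a = sum(c and k for c, k in zip(confident, correct))               # said yes, was right
        b = sum(c and not k for c, k in zip(confident, correct))           # said yes, was wrong
        c_ = sum((not c) and k for c, k in zip(confident, correct))        # said no, was right
        d = sum((not c) and (not k) for c, k in zip(confident, correct))   # said no, was wrong
        return ((a + d) - (b + c_)) / len(correct)

    # e.g. the study-2 example discussed later: all 'No' confidence, all answers wrong
    print(kma_score([False] * 10, [False] * 10))   # 1.0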

In addition to absolute knowledge monitoring with KMA scores, we also focused on relative knowledge monitoring. To assess their relative knowledge monitoring ability, we compared the actual positions of the students in the class (measured in percentiles and computed using the number of other students that performed lower in the pre- and post-tests) with the position estimated by the learners in the pre- and post-tests.
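As a minimal sketch, the actual percentile and the resulting placement bias could be computed as follows, assuming the strictly-lower-than convention described above (the function and variable names are ours):

    def actual_percentile(own_score, other_scores):
        """Percentage of the other students whose test score is strictly lower."""
        if not other_scores:
            return 0.0
        lower = sum(s < own_score for s in other_scores)
        return 100.0 * lower / len(other_scores)

    def placement_bias(estimated_percentile, own_score, other_scores):
        """Positive values mean the student over-placed her/himself relative to the class."""
        return estimated_percentile - actual_percentile(own_score, other_scores)

    # e.g. a student scoring 60 in a class where the other scores are 40, 55, 70, 80
    print(actual_percentile(60, [40, 55, 70, 80]))   # 50.0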

In the second study, the data collection tools and procedures were similar to those in the first study. The pre- and post-tests were similar in form but were composed of SQL programming questions. This test also included an estimation of correctness for each question and the students’ percentiles.

Procedure

The procedure was very similar in both studies. First, the e-learning system was introduced to both groups (assigned to either the OLM or the OSLM e-learning interfaces). In study 1, the procedure started in the first week of the Object-Oriented Java Programming course. In study 2, it began in the 3rd week of the Database Management course, right before the introduction of the SQL content. The research study was explained to the students, and then they signed consent forms if they wished to participate in the study. The participants were presented with an introduction to the e-learning system using a live demonstration accompanied by an explanation of how to use it. They were then given the pre-test. The use of the system was not mandatory in the course. However, one extra credit point was offered to students who solved at least 10 problems in the system, in order to motivate them to enter and explore it. All the user interactions with the system were recorded in a database. The descriptive statistics for the students’ interactions are presented for study 1 and study 2 in Tables 3 and 4. The post-test was administered at the end of the 8-week period of the study.

Table 3 System usage by the OLM and OSLM groups in study 1
Table 4 System usage by the OLM and OSLM groups in study 2

In the Java Programming context (study 1), the system included a total of 19 topics, 74 examples, and 94 questions. The log data for the OLM group shows that the students on average covered 55.74% of the topics, attempted 49.50 problems, explored examples 30.05 times, clicked the Mastery Grids interface 26.68 times, clicked content cells in Mastery Grids 109.23 times, and spent 12,281 s in the system. In the OSLM group, the log data shows averages of 52.90% of the topics covered, 77.09 problems attempted, examples explored 46.64 times, the Mastery Grids interface clicked 28.50 times, content cells in Mastery Grids clicked 124.23 times, and 9570 s spent in the system. These numbers show a very similar overall usage of the OLM and OSLM interfaces by the two groups in study 1.

In the Database Management context (study 2), the system included a total of 11 topics, 47 examples, and 64 questions. On average, the students in the OLM group covered 59.09% of the topics, attempted to solve problems 90.33 times, explored examples 48 times, clicked the Mastery Grids interface 55.75 times, clicked content cells in Mastery Grids 130.92 times, and spent 14,527 s in the system. In the OSLM group, the students on average covered 76.27% of the topics, attempted to solve problems 147.65 times, explored examples 54.13 times, clicked the Mastery Grids interface 88.77 times, clicked content cells in Mastery Grids 171.55 times, and spent 13,538 s in the system. These results show that the OSLM interface was more actively used by the participants in study 2.

To ensure that the students had ample opportunity to observe their absolute and relative (for the OSLM groups) progress, we explored their interactions with the content cells of the Mastery Grids interface. Mastery Grids was designed to serve as both a progress visualization tool and a content access tool. On the one hand, the main grid of the system displays the detailed progress of the learner over the whole course and for each topic. On the other hand, the students must click on the grid cells to access learning content for each topic and to select a question or an example to practice. As a result, the students are continually exposed to their progress information, even if their original motivation is merely to access the content. The number of clicks on the grid cells therefore provides an estimate of how frequently the students observed their progress. In study 1, the mean number of clicks was 118, the median was 92, and the minimum was 9. In study 2, the mean number of clicks was 160.21, the median was 151, and the minimum was 31. These data provide good evidence that the students had ample opportunity to view their progress visualizations.

Results

Study 1

Knowledge monitoring ability

Table 5 displays the mean, median, and standard deviations of the KMA scores for both the OLM and OSLM groups in the Java Programming course. Of the 22 students in the OLM group and the 22 students in the OSLM group who were sufficiently active in the system to be considered for this analysis, only 18 and 17 students respectively answered the confidence questions in both the pre- and post-tests. These were included in the following analyses. There was no difference in the KMA scores of the pre-test between the OLM and OSLM groups (t = .581, p > .05), which suggests that there was no selection effect regarding the dependent variable.

Table 5 KMA mean, median, and standard deviation scores for the OLM and OSLM groups

A mixed-design analysis of variance (ANOVA) was conducted to evaluate the effects of time (pre-KMA vs. post-KMA) and group (OLM vs. OSLM). According to the results, there was a significant main effect of time, F(1, 33) = 18.12, p < .001, r = 0.35. This effect shows that if we ignore the groups of participants, the absolute KMA scores differed between the pre- and post-measurements. However, there was no significant interaction between group and time, F(1, 33) = 0.03, p > .05, which shows that the increases in the KMA scores were not significantly different for the OLM and OSLM groups. There was also no significant main effect of group, F(1, 33) = 0.054, p > .05, which means that if we ignore the pre- and post-measurements, the KMA scores did not differ between the OLM and OSLM groups. These results reveal that the students’ post-absolute knowledge monitoring scores were significantly higher than their pre-absolute knowledge monitoring scores after they studied with the Mastery Grids system. The means and interaction graph are shown in Fig. 5.

Fig. 5 The means and interaction graph

After working with the system, the students’ knowledge monitoring abilities were significantly improved in both groups. To examine whether the KMA scores improved within each of the OLM and OSLM groups, paired sample t tests were conducted. The results indicate that knowledge monitoring ability increased from the pre-test to the post-test in both the OLM (t = − 2.510, p < .05) and OSLM (t = − 3.920, p < .01) groups.
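A minimal sketch of how such a 2 (group) x 2 (time) mixed ANOVA and the follow-up paired t tests can be run, assuming the pingouin and SciPy packages are available; the data frame below contains toy numbers and placeholder column names, not the study data:

    import pandas as pd
    import pingouin as pg
    from scipy.stats import ttest_rel

    # toy long-format data: one row per student per measurement occasion
    df = pd.DataFrame({
        "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        "group":   ["OLM"] * 6 + ["OSLM"] * 6,
        "time":    ["pre", "post"] * 6,
        "kma":     [0.2, 0.5, 0.1, 0.4, 0.3, 0.5, 0.0, 0.4, 0.2, 0.6, 0.3, 0.7],
    })

    # mixed ANOVA: time is the within-subject factor, group the between-subject factor
    aov = pg.mixed_anova(data=df, dv="kma", within="time", subject="student", between="group")
    print(aov[["Source", "F", "p-unc"]])

    # follow-up paired t tests of pre vs. post KMA within each group
    for name, g in df.groupby("group"):
        pre = g.loc[g["time"] == "pre", "kma"].to_numpy()
        post = g.loc[g["time"] == "post", "kma"].to_numpy()
        t, p = ttest_rel(pre, post)
        print(name, round(float(t), 3), round(float(p), 3))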

The ability to monitor knowledge level compared to others

To examine the students’ relative knowledge monitoring ability, we used the students’ comparisons of themselves with their classmates. For this purpose, the students’ self-reported percentiles (how they thought they ranked against others in both the pre- and post-tests) were compared to their actual percentiles (the position of the pre- and post-test scores within the whole group). Due to violation of the normality assumption, this comparison was conducted using a Wilcoxon signed-rank test for both the OLM and OSLM groups, and a separate analysis was done for the pre- and post-test. The results show that for the pre-test assessments, the students’ estimated percentile ranks were significantly higher than their actual percentile ranks in both the OLM (Z = − 2.857, p < .01) and OSLM (Z = − 3.300, p < .01) groups. In other words, the students overestimated their relative levels of knowledge (see Table 6). We also found significant differences between the students’ average actual and estimated percentiles in the post-test for the OLM group (Z = − 2.214, p < .05). However, there were no significant differences for the OSLM group (Z = − 1.065, p > .05), which means that the students in the OSLM group were much closer to reality when estimating their relative levels of knowledge. This result suggests that the OSLM interface, in which students can see the other students’ knowledge levels and their own position within the class, improved their ability to assess their relative knowledge, as expected.
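For reference, a minimal sketch of how the estimated-versus-actual percentile comparison can be reproduced with a paired Wilcoxon signed-rank test in SciPy; the arrays below are toy placeholders rather than study data:

    from scipy.stats import wilcoxon

    # each pair is one student's (estimated, actual) percentile on the same test
    estimated = [80, 70, 90, 60, 75, 85, 65, 95]
    actual    = [55, 60, 70, 40, 72, 66, 50, 88]

    stat, p = wilcoxon(estimated, actual)   # paired, non-parametric comparison
    print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")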

Fig. 6 Scatter plots showing the relationships between academic achievement measured by the post-test and absolute and relative knowledge monitoring abilities. Note that better abilities mean higher KMA scores (left) but lower percentile differences (right)

Table 6 The students’ actual percentiles and estimated percentiles for the OLM and OSLM groups

Knowledge monitoring ability and academic achievement

Prior research has demonstrated that successful knowledge monitoring seems to increase as the students’ competence levels increase (Kruger and Dunning, 1999). To examine whether this general tendency is valid in our case, a correlation analysis was conducted, and a significant positive correlation was found between the post-KMA scores and academic achievement, as measured by the post-test (R = .681, p < .001). This result shows that as the students’ knowledge increased, they became more successful in assessing their knowledge. Figure 6 (left) displays the relationship between the students’ academic achievement, as measured by the post-test, and their knowledge monitoring ability. A significant negative correlation (at the .01 level) was also found between academic achievement and the post-bias score (percentile difference) (R = − .542, p < .001). In other words, when the students’ knowledge increases, the difference between their estimated percentiles and real percentiles decreases.

Study 2

Knowledge monitoring ability

We also conducted a similar experiment in another context, a graduate class on Database Management Systems. Table 7 displays the mean, median, and standard deviations of the KMA scores for both the OLM and OSLM groups. Since the normality assumption was violated for the KMA scores in this context, we conducted non-parametric analyses. Before using the Mastery Grids interface, the two groups took a pre-test to examine whether they had equal knowledge monitoring abilities for SQL programming. According to the results of a Mann-Whitney U test (U = 163.500, p = 0.547), there were no significant differences in the KMA scores between the groups in the pre-test.

Table 7 KMA mean, median, and standard deviation scores for the OLM and OSLM groups

After the training, a Wilcoxon signed-rank test was conducted to evaluate the effects of repeated KMA measurements on both the OLM and OSLM groups. As shown in Table 8, the results reveal that there was no significant difference between the pre-KMA scores and the post-KMA scores for the OLM and OSLM groups. Though the improvement was not significant, in both groups, we could see increased KMA scores for the majority of the students.

Table 8 Wilcoxon signed-rank test results of the students’ repeated KMA scores for the OLM and OSLM groups

Another Mann-Whitney U test (U = 162.000, p = 0.631) showed that there was no significant difference between the post-KMA scores of the students in the OLM group and OSLM group. This effect tells us that the post-KMA scores in the OLM group were basically the same as those in the OSLM group.

According to these results, working with the system helped the students to improve their knowledge monitoring abilities, but this improvement was not statistically significant. Even though this finding was less satisfactory than our findings for the Java Programming course, the likely reason for this result is clear. When we were coding the data, we observed that some of the students checked “No” for confidence in all the questions or in several questions, and their pre-test scores were zero or very low, indicating that these students had no knowledge about the questions and gave random answers. For example, if a student had no prior knowledge about SQL programming, they would check all the confidence questions as “No” and simply choose any of the answers for these ten questions. The result of the four scores used to compute the KMA score for such a student is shown in Table 9.

Table 9 a, b, c, and d scores for a given example

KMA = ((0 + 10) − (0 + 0))/10 = 1

In this example, the student had a perfect knowledge monitoring ability score of 1, which means s/he was aware of what s/he did or did not know. However, this score reflects not only the person’s awareness but also his/her lack of prior knowledge. The KMA scores of the students who could not answer any question in the pre-test also confirm this observation. Their mean KMA scores were .45 in the pre-test and .31 in the post-test, which indicates that their knowledge monitoring scores decreased (because estimating the correctness of one’s answers is harder at post-test time, once some knowledge has been acquired, than at pre-test time). In the Database Management context, the mean pre-academic achievement score (measured by the pre-test) was 11.67 for the OLM group and 12.26 for the OSLM group. In other words, most of the students had no prior knowledge, and only a few had very limited knowledge. Taking these conditions into account, a general increase in the KMA scores is arguably noteworthy, even though this increase is not statistically significant.

The ability to monitor knowledge level compared to others

We also wanted to analyze how accurately the students could compare themselves with their classmates. For this purpose, the students’ actual percentiles and their estimated percentiles in both the OLM and OSLM groups were analyzed using the Wilcoxon signed-rank test, due to the violation of the normality assumption. The results show that, for the pre-test assessments, there were significant differences between the students’ average actual percentiles and their estimated percentiles in both the OLM (Z = − 2.666, p < .05) and OSLM (Z = − 3.457, p < .05) groups.

In the post-test assessments, we also found significant differences between the students’ average actual and estimated percentiles in the OLM group (t(11) = 2.227, p < .001). However, there were no significant differences in the OSLM group (t(26) = − .299, p > .05). This result shows that using an OSLM interface, in which the students can see the other students’ knowledge levels and their own position within the class, improved their relative knowledge monitoring ability, as in study 1. Table 10 displays the mean and standard deviations of the students’ actual and estimated percentiles for both OLM and OSLM groups.

Table 10 The students’ actual percentiles and their estimated percentiles for the OLM and OSLM groups

The percentile-based analysis provides sufficient evidence in most practical cases, when the scores have a reasonably broad and non-skewed distribution, but it does not work for narrow or skewed distributions. To examine its sufficiency, we checked the normality and breadth of the distribution of the test scores in all four cases. While in three cases the distribution was broad and close to normal, this was not true for the study 2 pre-test. These pre-test scores were clustered at certain points, such as 0, 10, 20, 30, 40, and 50 in the OSLM group and 0, 10, 20, and 30 in the OLM group, and they did not show a wide distribution. To add a more reliable analysis for this case, we decided to examine the “better-than-average” phenomenon related to placement, i.e., the belief that one’s relative knowledge/performance is better than that of others. To examine this effect, we reviewed the students’ estimated pre-test percentiles in the two groups, to separate those who believed their performance would be better than that of at least 50% of the class from those who did not. Then, we explored the relationship between the students’ pre-test scores and their estimations descriptively using cross tables (Table 11).

Table 11 The students’ pre-test scores and their estimations of their relative placement

As can be seen in Table 11, most of the students did not estimate themselves to be in the top half of the class. Though this finding was unexpected, it is still reasonable considering that the students’ general preliminary knowledge levels were very low. All the students who could not answer any question in the pre-test and scored zero out of 100 (eleven students in the OSLM group and four students in the OLM group), and most of the students who scored ten in the pre-test (eight students in the OSLM group and two students in the OLM group), estimated that they were in the bottom half of the percentile positions. In short, the vast majority of the students had low scores, and they generally did not think they were “better than average.”
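As a sketch, a cross table of this kind can be produced with pandas; the data frame below is a toy placeholder, and the 50% threshold follows the description above:

    import pandas as pd

    # toy placeholder: pre-test score (0-100) and self-estimated percentile for each student
    df = pd.DataFrame({
        "pretest":    [0, 0, 10, 10, 20, 30, 40, 50],
        "est_pctile": [20, 40, 30, 55, 45, 60, 70, 50],
    })
    # estimated to outperform at least 50% of the class
    df["believes_top_half"] = df["est_pctile"] >= 50

    print(pd.crosstab(df["pretest"], df["believes_top_half"]))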

Knowledge monitoring ability and academic achievement

Due to the violation of the normality assumption, the relationships were analyzed using Kendall’s tau-b correlation coefficient. According to the results, a significant positive correlation was found between the post-KMA scores and the academic achievement scores (rτ = .458, p < .001). These results show that when the students’ academic achievement (measured from the post-test scores) increases, their knowledge monitoring ability also improves, as was seen in study 1. Figure 7 shows the relationship between the students’ academic achievement scores and their knowledge monitoring abilities. A significant negative correlation was found (at the .05 level) between the academic achievement scores and the post-bias scores (percentile difference) (rτ = − .567, p < .05). In other words, when the students’ knowledge increased, the difference between their estimated percentiles and their real percentiles decreased. This suggests that better learners also are better at assessing their relative knowledge.
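Both correlation analyses (Pearson in study 1, Kendall’s tau-b in study 2) can be sketched with SciPy as follows; the score arrays are toy placeholders, not study data:

    from scipy.stats import pearsonr, kendalltau

    post_test = [35, 50, 62, 40, 75, 80, 55, 68]           # toy achievement scores
    post_kma  = [0.1, 0.3, 0.4, 0.2, 0.6, 0.7, 0.3, 0.5]   # toy KMA scores

    r, p = pearsonr(post_test, post_kma)            # parametric correlation, as in study 1
    tau, p_tau = kendalltau(post_test, post_kma)    # tau-b (ties handled by default), as in study 2
    print(f"Pearson r = {r:.3f} (p = {p:.3f}); Kendall tau-b = {tau:.3f} (p = {p_tau:.3f})")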

Fig. 7 Scatter plots showing the relationships between academic achievement (as measured by the post-test) and absolute and relative knowledge monitoring abilities. As in Fig. 6, better abilities mean higher KMA scores (left) but lower percentile differences (right)

Discussion and conclusion

Prior research has demonstrated that people generally think their knowledge levels are higher than they objectively are (Kahneman, Slovic, & Tversky, 1982). However, along with increases in expertise and ability, the gap between perceived and real levels of knowledge becomes smaller (Kruger & Dunning, 1999). In a learning context, this biased view of one’s own knowledge/ability may prevent learners from making adequate study choices and consequently can result in ineffective learning (Dunlosky & Rawson, 2012). Thus, accurate knowledge monitoring skills are considered a necessity for productive learning (Black & Wiliam, 1998).

E-learning environments, especially adaptive and intelligent ones, try to support learners not only in learning a particular domain, but also in becoming more effective learners (Mitrovic & Martin, 2002). Learning dashboards are an important tool in modern learning environments and can be used for this purpose (Verbert et al., 2013). Providing a visual overview of students’ activities and allowing them to compare their progress with that of their peers can support the students’ learning process and also inform teachers and other stakeholders about the teaching process (Duval, 2011). Bull (2016) suggested combining the power of OLMs with learning analytics dashboards.

The OLM and OSLM interfaces can be used as a powerful and detailed learning dashboard to visualize student models, which are a basic component of adaptive e-learning systems. However, a systematic literature review by Jivet et al. (2018) and a study by Verbert et al. (2013) both indicate that some important problems and limitations exist in extant learning dashboard studies. First, most learning dashboards have been designed without reference to any learning theory or concept. Second, most learning dashboards were developed to inform teachers; only a few have been developed to support learners. Third, evaluations of learning dashboards are often not consistent with their usage goals.

Our study addresses these problems/limitations and examines the effects of OLM/OSLM on knowledge monitoring abilities. Our Mastery Grids system is based on two important learning theories: Self-regulated Learning (SRL) Theory and Social Comparison Theory. To support SRL, the OLM interface includes self-monitoring tools which allow students to view their progress in different learning activities (work examples and problems) in each topic, so that they can recognize when they have mastered the activities. In addition, an OSLM interface based on the Social Comparison Theory allows students to view the progress of their peers and an overall model of the class (Guerra, Hosseini, Somyürek, & Brusilovsky, 2016). The Mastery Grids system specifically supports learners rather than teachers. Finally, since the aim of the system is to support SRL and social comparisons, its evaluation was conducted via absolute and relative knowledge monitoring assessments, which are crucial indicators of SRL and social comparisons. In other words, the aim of our system and its evaluation methodology are consistent.

The results of our two classroom studies indicate that OLM/OSLM interfaces can be used to support students’ knowledge monitoring abilities. According to the results, the Mastery Grids system helped the participating students to improve their knowledge monitoring abilities in both the Database Management and the Java Programming courses. However, this improvement was statistically significant only in the Java Programming course. The Java Programming students’ mean academic achievement scores were 31.17 for the OLM group and 29.87 for the OSLM group in the pre-test, which means that they had some prior knowledge about Java Programming before the treatment. In the Database Management course, the students’ mean academic achievement scores were 11.67 for the OLM group and 12.26 for the OSLM group in the pre-test, which indicate that most of the students had no prior pertinent knowledge and only a few of them had a small amount of prior topic knowledge. If the level of preliminary information is so low that a student will not even try to answer the test questions, it is not possible for him/her to be overconfident regarding those questions. In conclusion, while the students’ KMA increased in both groups, we assessed that these increases were not significant in the Database Management course study because these students’ average preliminary topic knowledge was very meager.

Our results support prior studies, which also show some positive results of OLM on students’ self-assessments. For example, Mitrovic and Martin (2002) analyzed the impact of OLM on learners’ self-assessments in terms of the number of abandoned problems in their system and the reasons for abandoning the problems. They assigned students to two versions of their constraint-based tutor, both with and without OLM. They split each group into two subgroups according to their knowledge levels. The findings were that learners who had a high level of knowledge in the experimental group abandoned significantly fewer problems than those in the control group. Regarding the reason for abandoning the problems, the students in the control group (especially those who had a high level of knowledge) said the problems were too easy (they stated this more often than students in the experimental group); however, their logs were generally not consistent with this reason. The researchers interpreted this as evidence that OLM can help students with higher levels of knowledge to complete problems and evaluate themselves more accurately. Similarly, Kerly, Ellis, and Bull (2008) designed and evaluated their CALMsystem with an open learner model. Students were asked to rate their confidence at one of four levels (low, moderate, good, or high) for each of the topics in the system. The OLM presented students with both the system’s belief about their level of topic knowledge and their own confidence ratings. One of the versions of the CALMsystem included OLM, while the other version included OLM and an additional chatbot. If the learner’s confidence rating and the system’s estimation were different, the chatbot was used for negotiation. The study revealed that mean self-assessment errors were reduced for the learners in both versions of the CALMsystem. Our previous study results (Somyürek & Brusilovsky, 2015) also demonstrate that OLM/OSLM interfaces help to improve students’ self-assessment skills. We used the Mastery Grids system and assigned students to two versions with OLM and OSLM. In pre- and post-tests, we asked the students to check “Yes” or “No” for each item in the tests to record whether they were confident that their answer was correct. Using their confidence data and their performance for each question, we computed several self-assessment metrics, such as total correct assessments, correctness ratio, and incorrectness ratio. According to the results, the metrics were higher in the post-test than in the pre-test, which indicates an improvement in the accuracy of the students’ self-assessments. Although the results of these three studies are consistent with our present results, the measurement method (KMA) of the present study differs from these earlier studies.

We also obtained evidence that studying with an OSLM interface improves students’ awareness about their relative knowledge assessment, which was expected. In both case studies, we found that there were no significant differences between the students’ post-average actual and estimated percentiles for the OSLM group, in which the students could see the other students’ knowledge levels and their own position within the class. We also found that studying with the traditional OLM-only interface is not sufficient to develop the students’ relative knowledge assessment abilities. This result is both novel (because the effects of OSLM have not been deeply explored in previous studies) and practically valuable because it shows that an OSLM interface can provide important information to help learners assess their relative levels of knowledge in a class. This result also supports the main principle of the Social Comparison Theory, which states the importance of social comparison to help people evaluate their abilities and reduce uncertainty (Buunk & Gibbons, 2007). Though an objective knowledge assessment criterion is most important for correct self-evaluation, it does not provide as much “get ahead” impetus as social comparisons with peers. These comparisons are valuable for people to understand whether they are superior or inferior in relation to others because people define “superior” in comparative terms. As a result, this information may be critical to goal setting, which can lead to self-enhancement (Collins, 1996). Regarding the value of social comparisons in an educational context, our results demonstrate that the OSLM interface seems to improve students’ relative knowledge monitoring abilities.

In study 2, because the students’ pre-test scores were clustered at certain points and did not show a wide distribution, we examined the “better-than-average” phenomenon to add a further analysis for this case. The “better-than-average” effect is an important indicator of the overestimation of one’s own standing. According to the literature, most people believe that even though their performance is not especially good, they are still better than the median (Moore & Healy, 2008). But the literature also emphasizes that this effect is limited to easy tasks; for difficult tasks, the effect may be reversed (Kruger, 1999). In our results, when the students’ prior knowledge was very low, no “better-than-average” effect was found.

Finally, we obtained evidence that the students’ academic achievement scores are correlated with their knowledge monitoring abilities. Better learners (i.e., those with higher academic achievement scores in the post-test) also seem to have better meta-cognitive skills (i.e., more accurate absolute and relative knowledge assessments). Similar results that indicate a relationship between the participants’ knowledge monitoring skills and academic achievement scores were also reported in several earlier studies (Kruger & Dunning, 1999; Somyürek & Çelik, 2018; Somyürek & Brusilovsky, 2015; Isaacson & Fujita, 2006). We would like to interpret this result together with our other two results to compare with previous studies. Prior studies have reported relationships between social comparisons, self-assessment, self-regulation, and academic performance. Festinger (1954) proposed that the most accurate evaluation of one’s abilities may be gained by making social comparisons with a similar target. Wheeler, Martin, and Suls (1997) developed a model to show how people use social comparisons to understand whether they might successfully accomplish a task. According to their model, people can predict their capability to succeed in a specific task if they know the performance of a similar individual as well as their own past experience, whether or not this experience reflects their maximum effort, and specifically their performance-related attributes. Thus, social comparisons can improve people’s self-assessments and can increase learning through self-regulation. Self-assessment is a self-regulatory strategy, and both self-regulation and academic performance are positively influenced by self-assessment (Panadero, Jonsson, & Botella, 2017). Several literature reviews and meta-analysis studies have demonstrated that self-regulation has a positive influence on academic performance (Panadero, 2017; Richardson, Abraham, & Bond, 2012). Thus, OLM and OSLM interfaces may be used to increase learning, due to their positive effects on learners’ knowledge monitoring abilities. However, because the relationships between self-assessment, self-regulation, and academic performance are considered both reciprocal and intricate (Panadero, Jonsson, & Botella, 2017), the opposite may also be true. Because the students’ Java and SQL knowledge increased in our post-tests, this improvement might have supported their knowledge monitoring abilities. In other words, the improvement may not be associated only with the use of the OLM/OSLM system, but also with learning the content. We need to underline that these are the cumulative effects of learning the content and using the system.

The study’s limitations and recommendations

The first limitation of this study is that the results were obtained from the cumulative effects of learning the content and using the systems with the OLM and OSLM interfaces. Considering this important limitation, this study could be repeated with an extra control group that would use the e-learning system without the OLM/OSLM features. In this way, the effects on the control group (only learning the content) and the experimental groups (learning the content and using the systems) could be compared in future studies. Unfortunately, it still would be impossible to separate the influence of OLM and OSLM on self-regulation or academic performance from their effects on knowledge monitoring skills.

From a methodological point of view, this study includes other important limitations associated with the samples. The sample sizes were not very large in either of the two studies, and in the second study, the numbers of students in the OLM and OSLM groups were not balanced. Sample size is closely related to the power of a study, which refers to its ability to detect an effect, and interpretation of results is more difficult in small-sample studies because only large samples can produce precise estimates. Therefore, when examining the results of this study, it should be taken into account that they were obtained from a relatively small sample. Another problem is the unbalanced sample sizes in study 2. At the beginning of that study, we used different sections of the same Database Management course, and the numbers and other preconditions were very similar in both groups. However, due to the social features of the OSLM interface, the students in that group were more engaged, which we discovered from their answers in the student questionnaires and their usage data. In other words, since more students preferred not to use the e-learning system in the OLM group, we had more missing data, which resulted in imbalances. Because the missing data was completely random, we still conducted our analysis and were able to analyze the data of 12 students in the OLM group and 31 students in the OSLM group. We also did not find any significant differences between the OLM and OSLM groups’ absolute knowledge monitoring abilities in study 2. The small sample in that study may also have contributed to these non-significant results. So, we suggest using larger sample sizes and more balanced groups in future studies.

The following suggestions may additionally be useful to other researchers and practitioners in the field of e-learning. The OLM features in this study show the progress of learners in two domains within a Computer Science context, Database Management and Java Programming. Each topic includes questions and work examples. The OLM displays the mastery levels of the students regarding the content. Visualizations are provided for the topics, each example, and each question. Students can also view their progress in the form of a percentile for each topic by mousing over the grid cells. The color of a cell represents the student’s activity and mastery level; darker green indicates more progress in a scheme ranging from grey to green. In this context, these OLM features can help to improve the learners’ absolute knowledge monitoring ability, especially in cases like study 1. Thus, they may be useful additions to e-learning systems, to provide students an opportunity to observe their knowledge/performance and progress.

OSLM adds an aggregated class model to display comparisons of a student’s personal progress with the progress of peers in the class. The progress of the aggregated class is similarly indicated with skill meters for each topic, example, and question; increasing class progress is indicated by a darker blue color in the grid, in a range from grey to blue. The comparison grids feature three color transitions: darker green means the student has made more progress than the group, darker blue means the class is ahead of the student, and grey means equal progress. Peer models are shown in separate rows, in which the student can view her/his ranking by clicking a button. In this context, OSLM can help to improve the students’ relative knowledge monitoring ability. Adding these features to e-learning systems would provide students an opportunity to compare their progress with that of others.

In this study, Mastery Grids was used both for visualization of the OLM/OSLM and for navigation. It is located on the main page, so students see their progress visualizations each time they log in. On the one hand, this design means the student constantly encounters progress information; on the other hand, the researchers cannot discern exactly why a student opened the tool: s/he may have used it only to navigate through the content, and/or to view her/his own progress, and/or to view the progress of others. For this reason, similar systems should be designed to distinguish access to the OLM and OSLM visualizations from navigation use, as sketched below.
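
One way to achieve this separation is to log an explicit access intent with every interaction, so that navigation clicks can later be distinguished from deliberate inspection of the OLM or OSLM views. The sketch below is a possible logging scheme, not part of Mastery Grids; the event names and fields are assumptions.

```python
import json
import time
from enum import Enum

class AccessIntent(Enum):
    NAVIGATION = "navigation"             # clicked a cell to open learning content
    VIEW_OWN_PROGRESS = "own_progress"    # inspected one's own skill meters
    VIEW_PEER_PROGRESS = "peer_progress"  # opened the class comparison or ranking

def log_access(student_id: str, intent: AccessIntent, target: str) -> str:
    """Serialise one interaction record; a real system would append it to
    its analytics store instead of returning the JSON string."""
    return json.dumps({
        "student": student_id,
        "intent": intent.value,
        "target": target,
        "timestamp": time.time(),
    })

print(log_access("s042", AccessIntent.VIEW_PEER_PROGRESS, "topic:sql-joins"))
```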

In future studies, students’ knowledge monitoring abilities can be assessed through simple pre- and post-test questions, as in this study; in addition, confidence data could be collected every time students answer questions within the system. Such frequent recording would yield more data and thus allow a more detailed examination of how a student’s knowledge monitoring ability develops throughout the learning process.
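
As a sketch of what such continuous recording might look like, the following Python fragment stores a confidence judgment with every graded answer and keeps a simple running indicator of monitoring accuracy (the proportion of answers where confidence matched the outcome); the class names and this simplified metric are our own illustrative assumptions rather than the KMA formula used in the studies.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnswerRecord:
    question_id: str
    confident: bool   # self-reported "I know this", collected with the answer
    correct: bool     # graded outcome

@dataclass
class StudentLog:
    records: List[AnswerRecord] = field(default_factory=list)

    def add(self, question_id: str, confident: bool, correct: bool) -> None:
        self.records.append(AnswerRecord(question_id, confident, correct))

    def running_accuracy(self) -> float:
        """Share of answers where the confidence judgment matched the outcome."""
        if not self.records:
            return 0.0
        hits = sum(r.confident == r.correct for r in self.records)
        return hits / len(self.records)

log = StudentLog()
log.add("q1", confident=True, correct=True)
log.add("q2", confident=True, correct=False)   # overconfident on q2
print(log.running_accuracy())                  # 0.5
```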

We discussed the limitations of KMA measurement related to the students’ prior knowledge in the context of study 2. As stated before, if students possess no preliminary information, or if that information is meager, KMA measurement cannot effectively capture their knowledge monitoring ability. Another important limitation of KMA measurement is that it can only be used with questions whose answers come from a limited set of possibilities. In its original formulation, KMA works for closed-ended questions that require either a short written answer, such as the output of a piece of program code, or selecting the correct option among multiple choices. However, the revised version of the KMA assessment developed by Gama (2004) provides a solution that allows questions with longer, free-form answers. This expanded version permits open-ended questions because the student can both partially solve the problem and rate their own performance as “partially correct.” In this revised formulation, students’ answers are grouped into three categories (correct, partially correct, or incorrect) rather than only correct or incorrect, and students’ confidence in solving a problem can be recorded in the same three categories. Using a third value to represent this intermediary state makes it possible to use different kinds of questions. Other metrics studied in judgment and decision-making research, such as overestimation or overconfidence (Moore and Healy, 2008), may provide yet another solution for longer and/or higher cognitive level questions. Researchers should plan what types of questions they need to ask and which metric best assesses knowledge monitoring for the selected question types.
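
To illustrate the three-category idea, the following sketch scores the agreement between self-reported confidence and graded performance when both are expressed as incorrect, partially correct, or correct; the particular weights (1 for a full match, 0.5 for adjacent categories, 0 for opposite extremes) are our own assumptions for illustration, not Gama’s exact scoring scheme.

```python
CATEGORIES = ("incorrect", "partial", "correct")

def agreement(confidence: str, performance: str) -> float:
    """Assumed weights: full match = 1.0, adjacent categories = 0.5,
    opposite extremes = 0.0 (not Gama's original scoring matrix)."""
    distance = abs(CATEGORIES.index(confidence) - CATEGORIES.index(performance))
    return {0: 1.0, 1: 0.5, 2: 0.0}[distance]

def kma_like_score(pairs) -> float:
    """Average agreement over a list of (confidence, performance) pairs."""
    return sum(agreement(c, p) for c, p in pairs) / len(pairs)

answers = [("correct", "correct"), ("correct", "partial"), ("partial", "incorrect")]
print(round(kma_like_score(answers), 2))   # (1.0 + 0.5 + 0.5) / 3 -> 0.67
```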

Availability of data and materials

Because the participants were informed in the consent document that, due to confidentiality regulations, no collected information or data would be accessible to anyone outside the research team, we are unable to share the study data.

Abbreviations

OLM: Open learner modeling

OSLM: Open social learner modeling

References

  • Baker, R. S. (2016). Stupid tutoring systems, intelligent humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614.

  • Brusilovsky, P., Somyürek, S., Guerra, J., Hosseini, R., Zadorozhny, V., & Durlach, P. (2016). Open social student modeling for personalized learning. IEEE Transactions on Emerging Topics in Computing, 4(3), 450–461.

  • Bull, S. (2016). Negotiated learner modelling to maintain today’s learner models. Research and practice in technology enhanced learning, 11(1), 10.

  • Bull, S., Brna, P., & Pain, H. (1995). Extending the scope of the student model. User Modeling and User-Adapted Interaction, 6(1), 45–65.

  • Bull, S., & Kay, J. (2007). Student models that invite the learner in: The SMILI:() Open Learner Modelling Framework. International Journal of Artificial Intelligence in Education, 17(2), 89–120.

  • Bull, S., & Kay, J. (2010). Open learner models. In Advances in intelligent tutoring systems (pp. 301–322). Berlin, Heidelberg: Springer.

  • Bull, S., & Kay, J. (2013). Open learner models as drivers for metacognitive processes. In International handbook of metacognition and learning technologies (pp. 349–365). New York, NY: Springer.

  • Bull, S., & Wasson, B. (2016). Competence visualisation: making sense of data from 21st-century technologies in language learning. ReCALL, 28(02), 147–165.

  • Buunk, A. P., & Gibbons, F. X. (2007). Social comparison: The end of a theory and the emergence of a field. Organizational Behavior and Human Decision Processes, 102(1), 3–21.

  • Clayson, D. E. (2005). Performance overconfidence: Metacognitive effects or misplaced student expectations? Journal of Marketing Education, 27(2), 122–129.

  • Collins, R. L. (1996). For better or worse: The impact of upward social comparison on self-evaluations. Psychological bulletin, 119(1), 51.

  • Corcoran, K., Crusius, J., & Mussweiler, T. (2011). Social comparison: Motives, standards, and mechanisms. In D. Chadee (Ed.), Theories in social psychology (pp. 119–139). Wiley-Blackwell.

  • Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention. Learning and Instruction, 22(4), 271–280.

  • Duval, E. (2011, February). Attention please!: Learning analytics for visualization and recommendation. In Proceedings of the 1st international conference on learning analytics and knowledge (pp. 9-17). ACM.

  • Ertmer, P. A., & Newby, T. J. (1996). The expert learner: Strategic, self-regulated, and reflective. Instructional science, 24(1), 1–24.

  • Everson, H. T., & Tobias, S. (1998). The ability to estimate knowledge and performance in college: A metacognitive analysis. Instructional Science, 26(1-2), 65–79.

  • Ferreira, H., de Oliveira, G. P., Araújo, R., Dorça, F., & Cattelan, R. (2019). Technology-enhanced assessment visualization for smart learning environments. Smart Learning Environments, 6(1), 14.

  • Govaerts, S., Verbert, K., & Duval, E. (2011, December). Evaluating the student activity meter: two case studies. In International Conference on Web-Based Learning (pp. 188-197). Springer, Berlin, Heidelberg.

  • Guerra, J., Hosseini, R., Somyürek, S., Brusilovsky, P. (2016). An intelligent interface for learning content: Combining open learner model and social comparison to support self-regulated learning and engagement, In Proceedings of the 21st International Conference on Intelligent User Interfaces, (pp. 152-163). ACM. Sonoma, California, USA, March, 2016.

  • Hsiao, I. H., Bakalov, F., Brusilovsky, P., & König-Ries, B. (2013). Progressor: Social navigation support through open social student modeling. New Review of Hypermedia and Multimedia, 19(2), 112–131.

  • Hsiao, I.-H., & Brusilovsky, P. (2017). Guiding and motivating students through open social student modeling: Lessons learned. Teachers College Record, 119(3). Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-85016572642&partnerID=40&md5=535ea4847007d651c86dd532b9a34c88

  • Isaacson, R., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55.

  • Jivet, I., Scheffel, M., Specht, M., & Drachsler, H. (2018). License to evaluate: Preparing learning analytics dashboards for educational practice. LAK ’18, March 5–9, 2018, Sydney, NSW, Australia

  • Kahneman, D., Slovic, P., & Tversky, A. (Eds.) (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.

  • Kay, J. (2000). User modeling for adaptation. In User interfaces for all (Human Factors Series, pp. 271–294). Lawrence Erlbaum Associates.

  • Kerly, A., Ellis, R., & Bull, S. (2008). CALMsystem: A conversational agent for learner modelling. Knowledge-Based Systems, 21(3), 238–246.

  • Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological review, 100(4), 609.

  • Krueger, J. (2000). The projective perception of the social world. In Handbook of social comparison (pp. 323–351). Boston, MA: Springer.

  • Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of personality and social psychology, 77(6), 1121.

  • Lan, W. Y. (1998). Teaching self-monitoring skills in statistics. In D. H. Schunk, & B. J. Zimmerman (Eds.), Self-regulated learning: from teaching to self-reflective practice, (pp. 86–105). New York: Guilford Press.

  • Loboda, T., Guerra, J., Hosseini, R., and Brusilovsky, P. (2014) Mastery grids: An open source social educational progress visualization. In: Proceedings of 9th European Conference on Technology Enhanced Learning (EC-TEL 2014), Graz, Austria, September 16-19, 2014, pp. 235-248.

  • Mitrovic, A., & Martin, B. (2002, May). Evaluating the effects of open student models on learning. In Proceedings of Second International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Malaga, Spain. pp. 296-305.

  • Mitrovic, A., & Martin, B. (2007). Evaluating the effect of open student models on self assessment. International Journal of Artificial Intelligence in Education, 17(2), 121–144.

  • Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological review, 115(2), 502–517.

  • Nelson, T. O. (1990). Metamemory: A theoretical framework and new findings. The Psychology of Learning and Motivation, 26, 125–173. Academic Press.

  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of general psychology, 2(2), 175–220.

  • Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory into Practice, 41(4), 219–225.

  • Somyürek, S., & Brusilovsky, P. (2015). Impact of open social student modeling on self-assessment of performance. In Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (E-Learn 2015), Kona, Hawaii, United States, October 19–22, 2015.

  • Somyürek, S., & Çelik, İ. (2018). Dunning-Kruger syndrome and subjective judgements. Educational Technology Theory and Practice, 8(1), 141–157.

  • Stankov, L., & Crawford, J. D. (1996). Confidence judgments in studies of individual differences. Personality and Individual Differences, 21(6), 971–986.

  • Steiner, M., Götz, O., & Stieglitz, S. (2013). The influence of learning management system components on learners’ motivation in a large-scale social learning environment. 34th International Conference on Information Systems (ICIS) 2013, 15-18 December, Milano, Italy.

  • Suleman, R. M., Mizoguchi, R., & Ikeda, M. (2016). A new perspective of negotiation-based dialog to enhance metacognitive skills in the context of open learner models. International Journal of Artificial Intelligence in Education, 26(4), 1069–1115.

  • Suls, J. M. (1977). Social comparison theory and research: An overview from 1954. In J. M. Suls, & R. L. Miller (Eds.), Social comparison processes: Theoretical and empirical perspectives, (pp. 1–19). Washington, DC: Hemisphere.

  • Tobias, S., & Everson, H. T. (2002). Knowing what you know and what you don’t: Further research on metacognitive knowledge monitoring. Research Report No. 2002-3. College Entrance Examination Board.

  • Tobias, S., & Fletcher, J. D. (Eds.) (2000). Training and retraining: A handbook for business, industry, government, and the military. New York: Macmillan Gale Group.

  • Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J. L. (2013). Learning analytics dashboard applications. American Behavioral Scientist, 57(10), 1500–1509.

  • Weber, G., & Brusilovsky, P. (2001). ELM-ART: An adaptive versatile system for Web-based instruction. International Journal of Artificial Intelligence in Education, 12(4), 351–384.

  • Williams, M. (1996). Learner control and instructional technologies. In D. Jonassen (Ed.), Handbook of research on educational communications and technology, (pp. 957–983). New York: Scholastic.

  • Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational psychologist, 25(1), 3–17.

  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into practice, 41(2), 64–70.

Acknowledgements

The first author is supported in part by grants from the Turkish Fulbright Commission and the 2219 Postdoctoral Research Fellowship Program of the Scientific and Technological Research Council of Turkey. This research was supported by the Advanced Distributed Learning Initiative under contract W911QY13C0032.

This submission extends the content of our previous conference paper: Somyürek, S., & Brusilovsky, P. (2015). Impact of open social student modeling on self-assessment of performance. In Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (E-Learn 2015), Kona, Hawaii, United States, October 19–22, 2015.

Funding

This research was supported by the Advanced Distributed Learning Initiative under contract W911QY13C0032; by the Turkish Fulbright Commission and the 2219 Postdoctoral Research Fellowship Program of the Scientific and Technological Research Council of Turkey; and by the National Commission for Science Research and Technology, Chile.

Author information

Contributions

SS designed the research model, collected the data, performed the statistical analysis, and drafted the manuscript.

PB coordinated the larger project that encompassed this study, and supervised all studies carried out within its scope.

JG participated in the research design, reviewed the literature, and helped with the data collection and statistical analysis.

Corresponding author

Correspondence to Sibel Somyürek.

Ethics declarations

Ethics approval and consent to participate

Permission was obtained from the participants via consent forms.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Somyürek, S., Brusilovsky, P. & Guerra, J. Supporting knowledge monitoring ability: open learner modeling vs. open social learner modeling. RPTEL 15, 17 (2020). https://doi.org/10.1186/s41039-020-00137-5

  • DOI: https://doi.org/10.1186/s41039-020-00137-5

Keywords