The effect on new knowledge and reviewed knowledge caused by the positioning task in closed concept maps
Research and Practice in Technology Enhanced Learning volume 14, Article number: 15 (2019)
The advancement of technology has made it possible for automated feedback to be added to learning activities such as the construction of concept maps. The addition of feedback allows learners to acquire new knowledge instead of only focusing on reviewed knowledge. The cognitive processes for acquiring new knowledge and reviewing knowledge are different, so the benefits of concept maps in past research may not apply to the acquisition of new knowledge. However, how concept map construction varies across these two aspects has not been investigated. This research starts this investigation by studying how the positioning task affects new knowledge and reviewed knowledge. The positioning task is the act of deciding and managing the positions of the elements of the concept map. In this paper, we study the differences in new knowledge and reviewed knowledge across two closed concept map interfaces by comparing test answers. One interface, Kit-build, includes the positioning task. The other interface, Airmap, does not include it. Results suggest that the interfaces only differ in retained reviewed knowledge, having similar performance in immediate new knowledge, immediate reviewed knowledge, and retained new knowledge. Results have potential implications for the general presence of the positioning task in learning interfaces.
A concept map is a graphical tool used to represent knowledge. It is composed of concepts and of links between the concepts (Cañas and Novak 2010). Concept maps are believed to improve reading comprehension (Usman et al. 2017; Riahi and Pourdana 2017; Chang et al. 2002; Sánchez et al. 2010; Guastello et al. 2000). It is theorized that concept maps improve reading comprehension through the continuous processing of content (Armbruster and Anderson 1984), by providing a template that helps structure and understand the information (Cañas and Novak 2010), and by being close to the macrostructure of the text (Van Dijk et al. 1983).
Computer-based tools for building concept maps are popular (Liu et al. 2010; Reader and Hammond 1994; Anderson-Inman and Zeitz 1993). Among these tools, some of them provide the nodes and links so that the user only has to assemble the map (Hirashima et al. 2011; Wu et al. 2012). They are called closed concept map tools because the set of possible maps is finite. This allows for automated scoring of the maps by comparing the built map to an expert map. This mechanism also allows for automated feedback (Pailai et al. 2018; Furtado et al. 2018). Since closed concept maps provide information to the user, they can go beyond the reviewing activities that traditional concept maps provide. Closed concept maps with automated feedback can help students acquire new knowledge (Pailai et al. 2018; Furtado et al. 2018). While there is research on the reviewing aspects of concept maps (Cañas and Novak 2010; Armbruster and Anderson 1984; Van Dijk et al. 1983), no research so far has investigated if the benefits of concept maps also transfer to the acquisition of new knowledge through automated feedback.
One matter of concern in closed concept maps is the effort of organizing the pieces in the concept map. We call this the positioning task. This is one portion of the activity that can be automated, thus lowering the effort of building the map. Past research has shown that automating the positioning task reduces the effort related to building the map (Furtado et al. 2018). Also, tests done after building the map show that immediate reading comprehension is not affected by automating the task. However, tests done after 2 weeks suggest that automating the task negatively affects the retention of the information.
It remains unknown how the positioning task affects reviewed knowledge and new knowledge. Reviewed knowledge is similar to how traditional concept maps work, with the user continuously processing knowledge they previously acquired. New knowledge, in contrast, is information the user attains while building the map. This can happen in closed concept maps because automated feedback is possible. It is a new aspect of learning by concept map building, and it is necessary to investigate how it differs from the traditional reviewing aspect of concept maps. It is important to know how positioning affects these different types of knowledge because of the following:
It can help to decide when the positioning task is desirable in activities using closed concept maps.
It can help explain the cognitive mechanisms behind the positioning task, leading to a better understanding of how concept maps help students learn.
It can inform designers of the potential benefits and drawbacks of designing activities which include similar tasks.
This study has four research questions:
How does the positioning task affect immediate new knowledge?
How does the positioning task affect immediate reviewed knowledge?
How does the positioning task affect new knowledge after a retention period?
How does the positioning task affect reviewed knowledge after a retention period?
To answer the research questions, we study how the Airmap interface and the Kit-build interface differ in knowledge retention as a reviewing tool and as an acquisition tool. To the best of our knowledge, this type of investigation has never been done before. To do so, we compare test scores at three points of time: before building a map, after building a map, and after a 2-week delay. As such, 2 weeks are used as the length of the retention period described in the research questions. Knowledge which the student had before building the map is identified as reviewed knowledge. Knowledge the student obtains during map construction is identified as new knowledge.
Computer-based concept map tools and closed concept maps
Computer-based concept mapping has been used to improve learning in general (Hwang et al. 2013; Kim and Olaciregui 2008; Willerman and Mac Harg 1991) and in reading comprehension (Morfidi et al. 2018; Omar 2015; Alkhateeb et al. 2015). It has been pointed out that the advantages of computer-based concept mapping are the ease of correction and construction (Liu et al. 2010), the capability to add behavior-guiding constraints (Reader and Hammond 1994), the personalization of the creation process, and the reduction of frustration (Anderson-Inman and Zeitz 1993). Computer-based concept mapping tools also make automated feedback possible. Past work has used semantic web technologies to make this feedback possible (Park and Calvo 2008). Another approach is using word proximity data to score the maps (Taricani and Clariana 2006). It is trivial to compare the student-built maps to the expert map when using closed concept maps. Multiple tools used this approach, such as Cmapanalysis (Cañas et al. 2013), Kit-build (Hirashima et al. 2011; Hirashima et al. 2015), CRESST (Herl et al. 1999), KAS (Tao 2015), and ICMLS (Wu et al. 2012). It is possible to display exactly in which ways the student map differs from the expert map because they are built from the same pieces. This type of automatic diagnosis was found to be reliable when compared to traditional map scoring approaches (Wunnasri et al. 2018; Wunnasri et al. 2017) and was found to correlate with standard science tests (Yoshida et al. 2013).
This type of automated measurement has been used to measure changes in interdisciplinary learning during high school (Reiska et al. 2018). Teachers can revise their lessons and feedback by using the diagnosis information. This approach has shown good results in retention when compared to traditional teaching, especially when the teacher uses the map to give the feedback (Sugihara et al. 2012). This type of automatic diagnosis also allows for different automated feedback schemes, which have been effective for improving reading comprehension (Pailai et al. 2018; Wu et al. 2012).
Closed concept map interfaces
This section describes both interfaces. Both were coded using the Unity engine. WebGL builds were then generated so that the interfaces could be used in web browsers.
A screenshot of Kit-build can be seen in Fig. 1. Examples of nodes on the screen are “Komodo dragon” and “large size.” Examples of links are “mates” and “can grow to.” At first, the nodes and links are displayed in columns. There is a column for the nodes and a column for the links.
There are gizmos to the side of the links. Users can drag and drop those gizmos. By overlapping a gizmo with a node, the user associates the link with the node. As each link has two gizmos, it can be associated with two nodes. When this happens, a proposition composed of the two nodes and the link is formed.
Dragging a gizmo away from a node undoes the association between the node and the link. This allows the user to undo the created propositions.
The user can also drag and drop nodes and links. This allows them to manage the layout. The nodes and links never move by themselves.
Figure 2 shows the Airmap interface. Concepts 1 to 3 are the nodes. Users can click on nodes to select them. Concept 1 and concept 2 are selected in the figure. Link 2-3 is a link. It is connecting concept 2 and concept 3. Thin lines connect the link to the concepts. There is a button menu on the left. It shows which links are available alongside their available quantities.
To connect nodes, it is necessary to select two of them. Afterward, it is necessary to select the link that will connect the two nodes. It is not possible for a link to be associated with a single node in Airmap. This marks a difference from Kit-build.
Users can click on links to disconnect nodes. Clicking on a link destroys it. After the destruction, the link becomes available once again.
Links and nodes move automatically. The user is unable to directly control the positions of the nodes and links. As such, users are not burdened with managing the layout.
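The paper does not specify which layout algorithm Airmap uses. A common way to realize this kind of automatic placement is a force-directed scheme, sketched minimally below under that assumption; the node names, function, and parameters are all illustrative.

```python
# Hedged sketch of automatic graph layout via a force-directed scheme.
# This is NOT Airmap's actual algorithm, which the paper leaves unspecified.
import math
import random

def auto_layout(nodes, edges, steps=200, k=0.5):
    """Return {node: (x, y)} after simple spring/repulsion updates."""
    random.seed(0)  # fixed seed for a reproducible initial placement
    pos = {n: (random.random(), random.random()) for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                       # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                force[a][0] += (k * k / d) * (dx / d)
                force[a][1] += (k * k / d) * (dy / d)
        for a, b in edges:                    # spring attraction along links
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            fx, fy = (d / k) * (dx / d), (d / k) * (dy / d)
            force[a][0] += fx; force[a][1] += fy
            force[b][0] -= fx; force[b][1] -= fy
        for n in nodes:                       # capped, damped position update
            fx, fy = force[n]
            mag = math.hypot(fx, fy) or 1e-9
            step = min(0.02, 0.01 * mag)      # cap movement per iteration
            pos[n] = (pos[n][0] + fx / mag * step,
                      pos[n][1] + fy / mag * step)
    return pos

positions = auto_layout(["Komodo dragon", "large size", "Indonesia"],
                        [("Komodo dragon", "large size"),
                         ("Komodo dragon", "Indonesia")])
```

From the user's perspective, such a scheme produces the behavior described above: positions settle on their own, so no layout management is required.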
The feedback system
Since Kit-build and Airmap both use closed concept maps, it is possible to compare the user made maps to the expert map. After the user builds the map, the system compares the built map to the expert map. It then displays to the user which parts of the expert map are missing in the user map. It also displays which parts of the user map do not exist in the expert map. The user can then use this information to modify their own map. The user continues to receive this information until they can build the expert map.
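The comparison behind this feedback amounts to a set difference over propositions. The sketch below is a minimal illustration of that idea, not the actual Kit-build or Airmap implementation; the example propositions reuse the Komodo dragon labels from Fig. 1.

```python
# Minimal sketch of closed-map feedback as set operations over
# propositions. Illustrative only; not the actual system code.

def diff_maps(student, expert):
    """Each map is a set of (concept, link, concept) propositions."""
    student, expert = set(student), set(expert)
    return {
        "correct": student & expert,    # confirmed propositions
        "excessive": student - expert,  # parts not in the expert map
        "missing": expert - student,    # expert propositions the user lacks
    }

expert = {("Komodo dragon", "known as", "Komodo monitor"),
          ("Komodo dragon", "can grow to", "large size")}
student = {("Komodo dragon", "known as", "Komodo monitor"),
           ("Komodo dragon", "mates", "large size")}

feedback = diff_maps(student, expert)
```

Because both maps are built from the same finite set of pieces, this diagnosis is exact, which is what makes closed concept maps amenable to automated feedback.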
The positioning task, reviewed knowledge, and new knowledge
The positioning task is the task of moving nodes and links while building a closed concept map. This is unnecessary in Airmap because of the automatic layout function. As such, the positioning task is only present in Kit-build. Closed concept maps involve finding three elements to build propositions. The positioning task in Kit-build may force the student to reorganize the map while searching for pieces, increasing the effort related to building the map. Furthermore, Airmap makes the search simpler by separating links and nodes into different portions of the screen. As such, the positioning task in Kit-build also includes the higher effort caused by the lack of separation between links and nodes. The positioning task is also believed to increase cognitive load (Furtado et al. 2018).
One important consideration is that the map creation effort should decrease as the user builds the map. This is because the number of free pieces shrinks, reducing the number of possible propositions the user could make. The activity should therefore become more manageable as the user completes the map, lowering the burden caused by the positioning task.
Reviewing in Kit-build and Airmap
Reviewing involves recalling previously learned information. As an example, let us say that the student answers that Komodo dragons are known as “Komodo monitor” in the pre-test. Then, when building the map, the student builds a proposition “Komodo dragon - known as - Komodo monitor.” That means that the student reviewed the information about the Komodo dragon while building the map. The ability of the users to keep this information in memory is compared across the two interfaces used in the experiment. This is a simplification of the actual process that occurs, as building the proposition is quite difficult given the number of pieces present.
The way information is recalled varies between traditional and closed concept maps. Closed concept map creation involves cued recall because the user has access to the labels on the nodes and links from the start. In contrast, a traditional concept map built without access to external material involves free recall. Past research has shown that cued recall has beneficial memory effects when compared to free recall (Paivio et al. 1994; Begg 1972).
Both closed concept maps and traditional concept maps involve summarizing existing knowledge into propositions (a trio of concept, link, and concept). The difference is that in traditional concept maps, the user can freely create the labels in the concepts and link. In a closed concept map, the user must find equivalently labeled links and concepts. Furthermore, there is the possibility that the idea the user wants to express is not viable with the provided pieces.
Cued recall and summarizing into propositions work similarly in Kit-build and Airmap. However, the burden caused by the positioning task in Kit-build might increase the frequency of recalls, thus increasing retention (Furtado et al. 2018). Retention impairment is also plausible, since the positioning task might distract the user from the reviewing process (Furtado et al. 2018).
As discussed above, the burden of the positioning task is theorized to be higher before the user first receives feedback. Because of this, most cued recall is believed to happen during this period of high burden. Furthermore, the positioning task burden might be increasing the amount of processing done on the macrostructure of the content (Schroeder et al. 2017). If this is true, there should be a gap between Kit-build and Airmap in reviewing performance.
Furthermore, in this study, users received feedback on which propositions are correct, incorrect, and missing. In this case, reviewing is also reinforced, since users are told that their constructed propositions are correct. Furthermore, users have to break down the incorrect propositions while creating the missing ones, a process which involves reviewing information, especially misunderstandings. This process of reconstructing the map is complicated by the positioning task of Kit-build, since the user also has to reposition the concepts.
New knowledge in closed concept map creation
In this study, new knowledge is related to a question which the user answers correctly after building the map but not before. If the user states that apples are blue before building the map and then states apples are red after building it, then the user has acquired new information while building the map.
In closed concept maps, the user can acquire new knowledge through the use of automated feedback. The way information is acquired depends on how the feedback is designed. A simple approach is to inform users of which propositions are correct, which are incorrect, and which are missing from their map. This is the approach used for the dataset analyzed in this study. Other types of feedback have been used in past studies, such as asking users to justify their incorrect propositions by using phrases of a text (Pailai et al. 2018).
As discussed above, the user has to reconstruct the map after receiving feedback. The complication caused by the positioning task of Kit-build also affects the acquisition process, as the user corrects their misunderstandings and builds new propositions. This could result in an increase in the retention of new knowledge when using Kit-build.
In contrast, the burden caused by the positioning task is believed to be lower during acquisition, since the search space gets smaller after the user receives feedback. The effect of the macrostructure of the map and positioning decisions could be diminished by the reduced burden. If this is true, the positioning task may not influence the retention of new knowledge.
Data analysis methods
Each research question needs a quantifiable metric. The metrics can be modeled after answer transitions between a test done before the concept map is built (pre-test), after the concept map is built (post-test), and after a delayed period (delayed post-test). Table 1 shows how each question can be classified. Each classification is related to one of the research questions.
“Review” is related to reviewed knowledge. “New” is related to new knowledge. “OnDelay” metrics are related to the 2-week retention period after building the map. Metrics which do not have the word “OnDelay” on them are related to the immediate measurements. Those immediate measurements are the pre-test and post-test performed minutes before and after building the map.
The questions are classified and then counted for each metric. This gives raw metrics for each user. Based on the raw metrics, normalized metrics are then calculated. The normalized metrics factor individual ceilings into the calculation of each metric and are more representative of each research question. The normalized metrics and their formulas can be seen in Table 2.
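As a sketch, the classification and normalization can be computed from per-question correctness vectors. The exact definitions live in Tables 1 and 2, which are not reproduced here, so the normalizations below are plausible reconstructions rather than the study's published formulas.

```python
# Hedged sketch of the transition-based metrics. The divisions by each
# user's individual ceiling are assumptions, not the paper's Table 2.

def normalized_metrics(pre, post, delayed):
    """Each argument is a list of booleans: was question i correct?"""
    review = sum(p and q for p, q in zip(pre, post))
    new = sum((not p) and q for p, q in zip(pre, post))
    review_delay = sum(p and q and d for p, q, d in zip(pre, post, delayed))
    new_delay = sum((not p) and q and d for p, q, d in zip(pre, post, delayed))
    return {
        # each raw count divided by that user's ceiling for the metric
        "norm_review": review / max(sum(pre), 1),
        "norm_new": new / max(len(pre) - sum(pre), 1),
        "retained_review": review_delay / max(review, 1),
        "retained_new": new_delay / max(new, 1),
    }

m = normalized_metrics(pre=[True, True, False, False],
                       post=[True, True, True, False],
                       delayed=[True, False, True, False])
```

In this made-up example, the user kept both pre-test answers in the post-test (norm_review = 1.0), newly answered one of two initially wrong questions (norm_new = 0.5), and retained half of the reviewed answers but all of the new ones after the delay.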
The collection of data had a main phase and an optional delayed phase. In the main phase, participants were required to do the following:
Read tool instructions
Build the correct training map using the tool
Read a text
Take the comprehension pre-test
Build the correct text map using the tool
Take the comprehension post-test
The only difference between the conditions was the interface used. Air used the Airmap interface without hideable links, and Kit used the Kit-build interface. The interfaces had feedback enabled and required participants to redo their maps until they built the correct map. Participants who completed the main phase were invited to participate in the delayed phase. The delayed phase consisted of the same comprehension post-test used in the main phase, but with a delay of 2 weeks.
The data was collected from users recruited through Amazon Mechanical Turk. Participants were required to be residents of the USA and were also required to have completed more than 5000 tasks on AMT with an approval rate above 97%. They were paid $3.10 upon completion of the activities. Participants who agreed to take the delayed post-test received an additional $0.80.
The tests are administered digitally through the system. As such, test answers are saved as log data. Table 3 shows an example of such data. The log data can then be used with the data analysis methods described above.
The text described various characteristics of the Komodo dragon. It is a modified, shorter version of a text found on Wikipedia. The comprehension pre-test and post-tests contained the same questions: ten multiple choice questions created to test the content of the text. A native English speaker who is a university teacher of English as a second language verified the test and found no problems with it. The map participants were requested to build was based on the text and on the reading comprehension exercises. The expert map used in this experiment had 17 concepts and 17 links. Since each link corresponds to a proposition, it contained 17 propositions. The expert map was built based on the text.
The data collection process was delivered through a website. Participants completed informed consent and then proceeded to read instructions on the map building tool they would use. Afterward, they built the training map to get used to the tool. The training map consisted of three concepts and three links. The content of this training map had no relation to the rest of the experiment. The tool instructions and the tool used to build the map were specific to each condition. After building the training map, participants read the text and answered the pre-test. After the pre-test, participants had to build the map using the tool respective to their condition. Participants then answered the post-test, ending the main phase of the experiment.
All activities in the main phase had a 5-min limit, with the exception of building the map, which had a 20-min limit.
During map construction, users were given automated feedback by the system and could only proceed to the next task after submitting the correct map.
Two weeks later, participants were contacted by email to take part in the optional delayed phase. The delayed phase consisted of the same comprehension test taken in the pre-test and post-test. Participants did nothing else other than answer the comprehension test.
Normalized values for review, new, retained review, and retained new were calculated for each participant using their answers for the pre-test, post-test, and delayed post-test from the dataset. Table 4 shows the number of participants of each condition, alongside the average and standard deviation of the relevant normalized metrics. Figure 3 shows box plot comparisons of the two conditions.
To address how the positioning task affects immediate reviewed knowledge, we compare the values of normalized review shown in the boxplots of Fig. 3 and the average values seen in Table 4. There is very little difference in normalized review between the two interfaces, with users remembering around 90% of their pre-test answers in the post-test.
To address how the positioning task affects immediate new knowledge, we compare the values of normalized new shown in the boxplots of Fig. 3 and the average values seen in Table 4. There is very little difference in normalized new between the two interfaces, with users from both interfaces correctly answering around 70% of the questions in the post-test that they could not answer correctly in the pre-test.
To address how the positioning task affects delayed reviewed knowledge, we performed a Mann-Whitney test with retained review as the dependent variable and condition as the predictor. The test revealed that retained review for the Kit condition (Mdn = 1) was significantly higher than retained review for the Air condition (Mdn = 0.62), U = 58, p = 0.002. Looking at Table 4, Airmap users remember around 60% of reviewed information. In contrast, Kit-build users remember 87% of the reviewed information. Moreover, the standard deviation is lower for Kit-build, suggesting more stable results. Both Air and Kit show similar transitions during map building, but as far as reviewing is concerned, Air drops steeply after the 2-week delay. In contrast, Kit shows little loss in reviewed knowledge after the 2-week delay.
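For readers unfamiliar with the statistic, the Mann-Whitney U used above can be computed from the rank-sum definition. The retained-review values below are invented for illustration and are not the study's data (the study's reported statistics were Mdn = 1 vs. Mdn = 0.62, U = 58, p = 0.002, with larger samples).

```python
# From-scratch sketch of the Mann-Whitney U statistic. The kit/air
# values are hypothetical, not the experiment's dataset.

def mann_whitney_u(a, b):
    """U for sample `a`, via the rank-sum definition with midranks."""
    combined = sorted(a + b)

    def midrank(v):
        lo = combined.index(v) + 1        # first 1-based position of v
        hi = lo + combined.count(v) - 1   # last position among ties
        return (lo + hi) / 2              # average rank over the ties

    rank_sum = sum(midrank(v) for v in a)
    return rank_sum - len(a) * (len(a) + 1) / 2

kit = [1.0, 1.0, 0.8, 1.0, 0.75, 1.0]       # hypothetical Kit condition
air = [0.60, 0.50, 0.70, 0.62, 0.40, 0.66]  # hypothetical Air condition

u = mann_whitney_u(kit, air)  # 36.0: every Kit value exceeds every Air value
```

The test is appropriate here because the normalized metrics are bounded proportions and not assumed to be normally distributed; in practice one would obtain the p-value from a statistics library rather than by hand.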
Looking at the scatter plot in Fig. 4, multiple Kit-build users forgot none of the test answers related to reviewed information after 2 weeks. In the worst case scenario, Kit-build users would forget two answers, while Airmap users could forget up to four answers.
To address how the positioning task affects delayed new knowledge, we compare the values of retained new in the boxplots of Fig. 3 and the average values seen in Table 4. There is very little difference in normalized retained new between the two interfaces. This suggests that the two interfaces do not differ in retention as an acquisition tool. Users of both interfaces remember around 60% of the acquired information. This value is similar to the normalized retained review users have during Airmap use. This suggests that users process reviewed information and new information at around the same level while using Airmap.
Results show that there was little difference in immediate new knowledge, in the retention of new knowledge, and in immediate reviewed knowledge. As such, we can assume that the differences between the interfaces are not associated with processing new knowledge. Thus, Airmap outperforms Kit-build when new knowledge is the concern, since users can build maps with less effort without decreasing immediate or delayed understanding of new knowledge. The reduction in effort, believed to also reduce cognitive load, is desirable because it has been associated with various benefits, such as reduced stress and higher satisfaction (Zhang et al. 2015; Ward and Mann 2000). Results are in line with past research stating that the positioning task does not affect immediate learning gains (Furtado et al. 2018), but go further, showing that it also does not affect the retention of new information.
This, however, did not hold true for delayed reviewed knowledge. Kit-build outperforms Airmap in retention of reviewed content after a 2-week period. As such, we have a trade-off between effort and reviewing retention. Cognitive load reduction leading to a reduction in general retention has been shown in other research as well (Kirschner et al. 2009; Schnotz and Rasch 2005), which is in line with the theory that Airmap has lower cognitive load than Kit-build (Furtado et al. 2018). Unlike past results, results suggest that this influence on retention is limited to reviewing activities. The reduction is believed to be mostly caused by the layout management burden, but there is also the visual load reduction factor. Isolating these two factors to see how they affect reviewing retention is a matter for future studies.
Results also further inform educators who use closed concept map building tools. Previously, it was stated that Kit-build should be used whenever retention is of concern (Furtado et al. 2018). However, current results suggest that Kit-build should be used as a tool for reviewing. If the user does not yet have a good grasp of the content, Airmap is better suited, since a good portion of the information will be new. Furthermore, developers of other closed concept map building tools now have more information when deciding whether or not to add automatic layout management and spatial separation to their tools.
Past research has also pointed out that learning applications, in general, could be made easier to use by applying automatic layout management and spatial separation when retention is not of concern (Furtado et al. 2018). The same work also pointed out that the reversed approach, adding the positioning task to the activity, could benefit retention gains in learning activities. Current results go further, suggesting that retention is only impaired for reviewing activities, so the number of activities that could benefit from removing the positioning task is larger than previously thought. Conversely, the reversed approach that was thought to benefit overall retention is suggested to only influence retention during reviewing. As such, only learning activities which focus on reviewing knowledge should consider this reversed approach. Fields in which retention plays a strong role, such as vocabulary learning (Folse 2006) and science classes (O’day 2007), could benefit from review activities focused on these reversed approaches.
This study showed the effect the positioning task has on reviewed knowledge and new knowledge when using closed concept maps. The positioning task did not affect new knowledge in any way, while affecting reviewed knowledge after a 2-week retention period. Thus, having to manage the layout during concept map creation and having to search for pieces in a complex space help students commit reviewed information more deeply to memory, without affecting new information attained during the process.
The obtained results inform educators and researchers of when the positioning task is desirable in the use of closed concept maps. They also help inform learning activity designers of the potential benefits of redesigning an activity to include or exclude the positioning task.
A limitation of this study is that the two factors related to cognitive load were not separated. Because of this, it is unknown how much the spatial separation of elements and the automatic layout management separately influence learning and cognitive load. Another limitation of the study is that it does not include how involved the students are in the positioning task when using Kit-build. Different students may be more or less involved in positioning. They might also give positioning different degrees of importance. This study does not take these elements into consideration. The fact that only one learning material was used in the experiment is also a limitation. The comprehension test used was verified by a university teacher of English who is also a native speaker, but it is a non-standardized test. Using standardized materials in a future experiment would improve the robustness of the results.
One matter for future studies is analyzing differences in knowledge acquisition without the use of feedback mechanisms. Another is investigating whether the time and energy saved by using Airmap can be invested in other activities that strengthen the retention of reviewed knowledge when using the tool. Finally, a mixed approach of first using Airmap to introduce the content and then Kit-build to review it could extend the retention-enhancing properties of Kit-build to a larger portion of the learning content.
Availability of data and materials
Requests for data and materials will be evaluated on a case-by-case basis.
Abbreviations
AMT: Amazon Mechanical Turk
WebGL: Web Graphics Library
Alkhateeb, M., Hayashi, Y., Rajab, T., Hirashima, T. (2015). Comparison between kit-build and scratch-build concept mapping methods in supporting EFL reading comprehension. The Journal of Information and Systems in Education, 14(1), 13–27.
Anderson-Inman, L., & Zeitz, L. (1993). Computer-based concept mapping: Active studying for active learners. The computing teacher, 21(1).
Armbruster, B.B., & Anderson, T.H. (1984). Mapping: Representing informative text diagrammatically. In Spatial learning strategies. https://doi.org/10.1016/b978-0-12-352620-5.50015-1. Elsevier, (pp. 189–209).
Begg, I. (1972). Recall of meaningful phrases. Journal of Memory and Language, 11(4), 431.
Cañas, A.J., & Novak, J.D. (2010). The theory underlying concept maps and how to construct and use them. Práxis Educativa, 5(1), 9–29.
Cañas, A.J., Bunch, L., Novak, J.D., Reiska, P. (2013). Concept mapping: An international outlook. Journal for Educators, Teachers and Trainers. Laboratorio de Investigación en Formación y Profesionalización, Universidad de Granada, Spain, (pp. 36–46).
Chang, K.-E., Sung, Y.-T., Chen, I.-D. (2002). The effect of concept mapping to enhance text comprehension and summarization. The Journal of Experimental Education, 71(1), 5–23.
Furtado, P.G.F., Hirashima, T., Hayashi, Y. (2018). Reducing cognitive load during closed concept map construction and consequences on reading comprehension and retention. IEEE Transactions on Learning Technologies, 12(3), 402–412. https://doi.org/10.1109/tlt.2018.2861744.
Folse, K.S. (2006). The effect of type of written exercise on L2 vocabulary retention. TESOL Quarterly, 40(2), 273–293.
Guastello, E.F., Beasley, T.M., Sinatra, R.C. (2000). Concept mapping effects on science content comprehension of low-achieving inner-city seventh graders. Remedial and special education, 21(6), 356–364.
Hirashima, T., Yamasaki, K., Fukuda, H., Funaoi, H. (2011). Kit-build concept map for automatic diagnosis. In International Conference on Artificial Intelligence in Education. Springer, New York, (pp. 466–468).
Hirashima, T., Yamasaki, K., Fukuda, H., Funaoi, H. (2015). Framework of kit-build concept map for automatic diagnosis and its preliminary use. Research and Practice in Technology Enhanced Learning, 10(1), 17.
Herl, H., O’Neil Jr, H., Chung, G., Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping system to measure content understanding. Computers in Human Behavior, 15(3-4), 315–333.
Hwang, G.-J., Yang, L.-H., Wang, S.-Y. (2013). A concept map-embedded educational computer game for improving students’ learning performance in natural science courses. Computers & Education, 69, 121–130.
Kim, P., & Olaciregui, C. (2008). The effects of a concept map-based information display in an electronic portfolio system on information processing and retention in a fifth-grade science class covering the earth’s atmosphere. British Journal of Educational Technology, 39(4), 700–714.
Kirschner, F., Paas, F., Kirschner, P.A. (2009). Individual and group-based learning from complex cognitive tasks: Effects on retention and transfer efficiency. Computers in Human Behavior, 25(2), 306–314.
Liu, P.-L., Chen, C.-J., Chang, Y.-J. (2010). Effects of a computer-assisted concept mapping learning strategy on EFL college students’ English reading comprehension. Computers & Education, 54(2), 436–445.
Morfidi, E., Mikropoulos, A., Rogdaki, A. (2018). Using concept mapping to improve poor readers’ understanding of expository text. Education and Information Technologies, 23(1), 271–286.
Omar, M. (2015). Improving reading comprehension by using computer-based concept maps: A case study of esp students at umm-alqura university. British Journal of Education, 3(4), 1–20.
O’Day, D.H. (2007). The value of animations in biology teaching: A study of long-term memory retention. CBE-Life Sciences Education, 6(3), 217–223.
Pailai, J., Wunnasri, W., Hayashi, Y., Hirashima, T. (2018). Correctness-and confidence-based adaptive feedback of kit-build concept map with confidence tagging. In International Conference on Artificial Intelligence in Education. Springer, New York, (pp. 395–408).
Paivio, A., Walsh, M., Bons, T. (1994). Concreteness effects on memory: When and why?. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(5), 1196.
Park, U., & Calvo, R.A. (2008). Automatic concept map scoring framework using the semantic web technologies. In 2008 Eighth IEEE International Conference on Advanced Learning Technologies (ICALT’08). IEEE, (pp. 238–240). https://doi.org/10.1109/icalt.2008.125.
Reader, W., & Hammond, N. (1994). Computer-based tools to support learning from hypertext: Concept mapping tools and beyond. In Computer assisted learning: Selected contributions from the CAL’93 Symposium. Elsevier, (pp. 99–106).
Reiska, P., Soika, K., Cañas, A.J. (2018). Using concept mapping to measure changes in interdisciplinary learning during high school. Knowledge Management & E-Learning: An International Journal (KM&EL), 10(1), 1–24.
Riahi, Z., & Pourdana, N. (2017). Effective reading comprehension in efl contexts: Individual and collaborative concept mapping strategies. Advances in Language and Literary Studies, 8(1), 51–59.
Sánchez, J., Cañas, A., Novak, J. (2010). Concept map: A strategy for enhancing reading comprehension in English as L2. CMC 2010, 1, 29.
Schroeder, N.L., Nesbit, J.C., Anguiano, C.J., Adesope, O.O. (2017). Studying and constructing concept maps: A meta-analysis. Educational Psychology Review. https://doi.org/10.1007/s10648-017-9403-9.
Schnotz, W., & Rasch, T. (2005). Enabling, facilitating, and inhibiting effects of animations in multimedia learning: Why reduction of cognitive load can have negative results on learning. Educational Technology Research and Development, 53(3), 47.
Sugihara, K., Osada, T., Nakata, S., Funaoi, H., Hirashima, T. (2012). Experimental evaluation of kit-build concept map for science classes in an elementary school. Proc. ICCE2012, 17–24.
Tao, C. (2015). Development of a knowledge assessment system based on concept maps and differential weighting approaches. PhD thesis. Virginia Tech.
Taricani, E.M., & Clariana, R.B. (2006). A technique for automatically scoring open-ended concept maps. Educational Technology Research and Development, 54(1), 65–82.
Usman, B., Mardatija, R., Fitriani, S.S. (2017). Using concept mapping to improve reading comprehension. English Education Journal, 8(3), 292–307.
Van Dijk, T.A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press. https://doi.org/10.2307/415483.
Ward, A., & Mann, T. (2000). Don’t mind if I do: Disinhibited eating under cognitive load. Journal of Personality and Social Psychology, 78(4), 753.
Willerman, M., & Mac Harg, R.A. (1991). The concept map as an advance organizer. Journal of Research in Science Teaching, 28(8), 705–711.
Wu, P.-H., Hwang, G.-J., Milrad, M., Ke, H.-R., Huang, Y.-M. (2012). An innovative concept map approach for improving students’ learning performance with an instant feedback mechanism. British Journal of Educational Technology, 43(2), 217–232.
Wunnasri, W., Pailai, J., Hayashi, Y., Hirashima, T. (2017). Reliability investigation of automatic assessment of learner-build concept map with kit-build method by comparing with manual methods. In International Conference on Artificial Intelligence in Education. Springer, New York, (pp. 418–429).
Wunnasri, W., Pailai, J., Hayashi, Y., Hirashima, T. (2018). Validity of kit-build method for assessment of learner-build map by comparing with manual methods. IEICE Transactions on Information and Systems, 101(4), 1141–1150.
Yoshida, K., Sugihara, K., Nino, Y., Shida, M., Hirashima, T. (2013). Practical use of kit-build concept map system for formative assessment of learners’ comprehension in a lecture. Proc. ICCE2013, 892–901.
Zhang, S., Zhao, L., Lu, Y., Yang, J. (2015). Get tired of socializing as social animal? An empirical explanation on discontinuous usage behavior in social network services. In PACIS, (p. 125).
This work was partially supported by JSPS KAKENHI grant numbers 17H01839 and 15H02931.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Fonteles Furtado, P.G., Hirashima, T. & Hayashi, Y. The effect on new knowledge and reviewed knowledge caused by the positioning task in closed concept maps. RPTEL 14, 15 (2019). https://doi.org/10.1186/s41039-019-0108-1
- Concept map
- Closed concept map
- Automated layout