
A framework to foster analysis skill for self-directed activities in data-rich environment

Abstract

Self-direction skill is considered a vital skill for twenty-first-century learners in both the learning and physical activity contexts. Analysis skill for self-directed activities requires students to analyze their own activity data to understand their status in those activities. It is an important phase that determines whether an appropriate plan can be set. This research presents a framework designed to foster students’ analysis skill in self-directed activities, which aims (1) to build a technology-enabled learning system that allows students to practice analyzing data from their own daily contexts, (2) to propose an approach to model students’ analysis skill acquisition level and process, and (3) to provide automated support and feedback for analysis skill development tasks. The analysis module based on the proposed framework was implemented in the GOAL system, which synchronizes data from learners’ physical and reading activities. A study was conducted with 51 undergraduate students to find reliable indicators for the model and then measure students’ analysis skills. By further analyzing students’ actual usage of the GOAL system, we found that actual activity levels and preferences regarding analysis varied across the chosen contexts (learning and physical activity). The groups with different context preferences were almost equal in size, highlighting the utility of a system that integrates data from multiple contexts. Such a system can potentially respond to students’ individual preferences in executing and acquiring self-direction skill.

Introduction

Self-directed learning (SDL) is a process in which individuals take the initiative, with or without the help of others, in diagnosing their learning needs, formulating goals, identifying human and material resources, choosing and implementing appropriate learning strategies, and evaluating learning outcomes (Knowles 1975). Self-direction skill (SDS) is considered a necessary skill for learners in the framework for twenty-first-century learning (P21-Framework 2009). To support the acquisition of SDS, questionnaire-based reflective practices are a common method, such as the Self-Rating Scale of Self-Directed Learning (SRSSDL) (Williamson 2007) and the Personal Responsibility Orientation to Self-Direction in Learning Scale (PRO-SDLS) (Stockdale and Brockett 2011). In classroom settings, human tutors can also facilitate this process (Walker and Lofton 2003; Fisher et al. 2001; Taylor and Burgess 1995). Overall, SDL has various meta-cognitive aspects that are supported by reflective evaluation of learners’ abilities through validated questionnaires. However, the results of such questionnaires rely mainly on the students’ self-perception and may carry their own biases. In our research, we try to facilitate students’ cognitive process of acquiring SDS while they participate in actual self-directed activities. Advanced technologies give us a chance to achieve this objective by capturing and utilizing log data generated during such activities.

A self-directed activity for students can happen not only in the learning context, but also in their daily physical activity context. For example, developing a habit of running 1 h every day or of reading 1 h every day both require individuals to choose appropriate strategies, such as monitoring the process and evaluating outcomes. Current e-learning tools and wearable devices make tracking and logging learning behaviors and physical activities, respectively, more affordable. Much recent research has shown great opportunities for applying multiple data sources in learning analytics (LA), such as arm tracking (Andrade 2017), step counts (Di Mitri et al. 2017), and heart rate (Spann et al. 2017). In our research, we utilize data synchronized from the learning context as well as from the physical activity context and provide it back to the students to promote their SDS.

In our previous work, we proposed DAPER (Data Collection–Data Analyze–Setting Goal and Plan–Execution and Monitoring–Reflect), a model of data-driven SDS execution and acquisition (Majumdar et al. 2018). The five phases in the model relate to the five sub-skills of being self-directed. This research focuses specifically on the Data Analysis phase, the second phase in the DAPER model, which is a precursor for setting meaningful goals and feasible plans. In the Data Analysis phase, learners analyze their self-data to understand their status in a self-directed activity. To support this phase, major challenges remain regarding learning design, student modeling, and adaptive feedback. Instead of asking students to recall their experience through questionnaires, how can we develop a learning design involving an interactive dashboard that helps students perform analysis tasks in multiple contexts? How can we model a student’s analysis skill? And what feedback should the system provide in response to different analysis skill levels?

In this paper, we propose our technology-enabled learning framework using the GOAL (Goal-Oriented Active Learner) system (Majumdar et al. 2018) to tackle the above challenges. Based on the implemented framework, we conducted a study to investigate:

  • i) How to choose reliable indicators for analysis in a system that provides self-data as a context for promoting analysis skill?

  • ii) Given such a system to analyze self-data from multiple contexts, what are the students’ analysis behaviors and their perception of the analysis task?

The paper is organized as follows: the “Theoretical and technical foundation” section presents the theoretical foundation of analysis skill in self-directed activities, previous methods for supporting it, and an overview of the GOAL system; the “Analysis module in GOAL: learning design and system implementation” section highlights the learning task design, the framework implemented in the GOAL system, and two working examples from the actual collected data; the “Research methods” section presents the study designed to answer the research questions and our experimental design. Next, we present the results of the study; the “Discussion” section discusses our findings regarding the research questions and the limitations of this work; the “Conclusion” section summarizes our findings from the pilot study, current contributions, and the future scope of this research.

Theoretical and technical foundation

Analysis skill in self-directed learning

Loyens et al. (2008) pointed out that analysis is the starting point of SDL. They stated that analysis in the practice of SDL is to determine the task (e.g., what is the task about?) and the personal features relevant to the task (e.g., what knowledge can I apply? Do I find the task interesting?). According to Thornton (2010)’s four phases of a self-directed learning cycle, analyzing task needs and current skill belongs to the planning phase. Noguchi and Mccarthy (2010) stated that analytical skill is the ability to examine what happened in one’s learning process in detail and discern the cause-and-effect relationships among the various elements involved in the process. Previous literature converges on the understanding that analysis skill is the ability to identify issues in self-directed activities with respect to one’s own learning characteristics. In our research, we support students in analyzing their own status by using the synchronized data and the affordances designed in the system.

Previous works to support analysis skill

To promote students’ analysis skill, interviews and questionnaires were widely used in previous research. Williamson (2007) designed the Self-Rating Scale of Self-Directed Learning (SRSSDL) as an instrument to measure the level of self-directedness in a learner’s learning process. For analysis skill, students rate items such as “I identify my learning needs,” “I am able to select the most suitable method for my learning,” and “I am able to identify my areas of strength and weakness.” Stockdale and Brockett (2011) used a similar method, designing a scale called the Personal Responsibility Orientation to Self-Direction in Learning Scale (PRO-SDLS). Noguchi and Mccarthy (2010) compiled grading criteria from records of learning advisors speaking their thinking process aloud while evaluating and deciding final grades for their students’ submitted module work.

These questionnaire-based evaluations rate students’ skill with a score based on their answers. Such support is often given after the execution of analysis skill during a task and requires students to recall their analysis behavior. It relies on students’ reflective narratives and might observe only a single instance of their analysis behavior. Furthermore, to support the acquisition of analysis skill, a human tutor was traditionally involved in guiding students, giving suggestions to students at different skill levels.

Our method is to observe students’ behavior itself, rating their analysis skill while they execute analysis using the system affordances, and to provide automated support for the acquisition of analysis skill.

GOAL system: data-driven support for self-direction skills

The GOAL system aims to support the DAPER model of SDS acquisition and execution by providing various contexts. Data from two domains were selected, one related to physical activity and the other to e-book-based learning. For the physical activity context, the GOAL client synchronizes data from HealthKit for iOS users and from Google Fit for Android users by using the APIs provided by Apple and Google, and then sends those datasets to the GOAL server. For the learning context, the GOAL server gets the data directly from BookRoll (Ogata et al. 2015), an e-book reading system that tracks users’ reading behaviors while they browse and annotate the uploaded content. Figure 1 illustrates the architecture of the GOAL system (Majumdar et al. 2018).

Fig. 1
figure1

Architecture of the GOAL system (Majumdar et al. 2018)

Analysis module in GOAL: learning design and system implementation

This section introduces the details of the learning design and the technology framework for fostering analysis skill within the GOAL system. We explain with two running examples created from actual data collected during this study.

Learning tasks for analysis skill development

The learning tasks are created for students to engage with data from self-directed activities and to develop their analysis skill. The first task is to analyze self-status in the selected context by checking self-data from the most recent days and related resources. For example, in the daily walking activity context, students can use the visualization tool displaying their own step data alongside the average/maximum/minimum data from their group. They can also check the criteria relating to the activity. The second task is to check the message from the instructor to help them do the analysis. The third task is to report their analysis result about the activity status: we ask students to predict their activity status for the next day. This helps us understand whether students can analyze their activity trends over the most recent days and successfully identify their status. The fourth task is to check the analysis result provided by the instructor to see whether their own analysis result is correct. The last task is to check the feedback given by the instructor for promoting analysis skill.

A framework to support analysis module in GOAL

We propose a framework to support students in conducting the learning tasks in the GOAL system (see Fig. 2). The framework has four components: (1) providing an interactive panel for analysis, (2) extracting analysis behavior, (3) measuring analysis skill level, and (4) providing adaptive support for different analysis skill levels. In the first component, the system provides a visualization tool with students’ self-data to analyze and a space to self-report their status. Other panels provide on-demand information regarding analysis criteria, system help regarding analysis, the status computed by the system, and task feedback. In the second component, the system extracts students’ interactions while analyzing, along with the status they report. In the third component, the system computes students’ activity status and rates their analysis skill levels based on a rubric covering interaction behaviors and reported status. The last component generates feedback related to the different skill levels. In the following sub-sections, we introduce the details of these components.

Fig. 2
figure2

Technology framework for fostering analysis skill

Interactive panel for analysis: features to support learning tasks

We developed features in the GOAL system to support the learning tasks; Fig. 3 illustrates them. The GOAL system provides learning scaffolds and system affordances for analysis.

Fig. 3
figure3

Analysis module in the GOAL system

To begin, students select a context from the activity list, for instance, a daily walking activity context related to maintaining a healthy lifestyle (see Fig. 3a). The system integrates activity data synchronized from multiple contexts into a common data structure. It then generates an interactive graph for students to analyze the data of those self-directed activities (see Fig. 3b). A combination of bar and line charts displays activity values (y-axis) over time (x-axis) for the most recent 7 days. Students’ self-data is shown as bars, and the cohort average and extremum values are shown as lines. Group comparisons can be made on demand by checking the group statistics—[Average]/[Max]/[Min]. When students move the mouse pointer over a line or a bar, the specific value appears. For instance, to analyze the number of pages in a reading activity, a student can read the bar chart to check self-data and check the average pages read by his/her class by clicking the [Average] option. Additionally, the system provides the criteria of the selected activity, which students can check interactively; further explanation of the criteria is given in a later sub-section. A self-report panel is provided for rating the activity status of the next day as the result of analyzing the data of the given days, supporting the third learning task (see Fig. 3c). Students have the option to request system prompts to help them analyze using the visualizations (see Fig. 3d). Those prompts are designed based on the five visualization tasks relating to line and bar charts: “Retrieve value,” “Find Extremum,” “Determine Range,” “Find Correlations/Trends,” and “Make Comparisons” (Lee et al. 2017), contextualized for each activity.

The system generates the analysis result by using prediction models, such as the linear regression model, and provides the result to students (see Fig. 3e). Lastly, the system rates students’ analysis skill level and offers students feedback regarding different levels (see Fig. 3f). The measurement of analysis skill level and the feedback generator are explained in the next sub-sections.

Interactions logging to extract analysis behavior in the GOAL system

The GOAL system captures users’ interactions while they perform the analysis tasks as Experience API (xAPI) statements (xAPI 2016). An action is defined by a verb and an object in the GOAL system. For instance, “Smith inputted his status is good.” is an action that contains a verb, INPUT, and an object, a report. Table 1 defines the extraction of analysis behavior and gives an instance of each.

Table 1 Extraction of analysis behavior in the GOAL system
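As an illustration, such an actor–verb–object action could be assembled as an xAPI-style statement in the following sketch. The field names follow the general xAPI statement structure (actor, verb, object, timestamp); the specific verb and object vocabulary shown here is hypothetical, not the GOAL system's actual schema.

```python
from datetime import datetime, timezone

def build_statement(user: str, verb: str, obj_type: str, obj_name: str) -> dict:
    """Build a minimal xAPI-style statement (actor-verb-object).

    The verb/object values are illustrative; the GOAL system defines
    its own verbs (e.g., INPUT) and objects (e.g., a report).
    """
    return {
        "actor": {"name": user, "objectType": "Agent"},
        "verb": {"display": {"en-US": verb}},
        "object": {"definition": {"type": obj_type,
                                  "name": {"en-US": obj_name}}},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# "Smith inputted his status is good." -> verb INPUT, object: a report
stmt = build_statement("Smith", "INPUT", "report", "status: good")
```

Statements of this shape can then be filtered by verb and object to count the analysis behaviors defined in Table 1.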

Analysis skill level measurement

Our previous work defined five levels of analysis skill (levels 0–4): level 4, check data and successfully identify status WITHOUT system support; level 3, check data and successfully identify status WITH system support; level 2, check data and PARTIALLY identify status; level 1, check data but DID NOT identify status; level 0, DID NOT check data (see Table 2) (Majumdar et al. 2019). Our approach to measuring a student’s analysis skill level is to count the interactions relating to analysis behavior and to compare the student’s analysis result with the system’s to check whether the student successfully identified the status.

Table 2 Scoring rubric for analysis skill

The task we designed is to predict the activity status of the next day as the analysis result. Students use the self-report panel to record their results. At the same time, the system obtains its result by analyzing the given activity data with prediction models and referring to the criteria. The prediction model and criteria vary across contexts.

In the learning context, various prediction models (Hasnine et al. 2018; Askinadze et al. 2018; Flanagan et al. 2018) have been proposed in previous work. In this pilot study, we introduce a simple time-copy model that considers data from the most recent week, taking 1 week as the time slice. The system then generates an analysis report by group-based classification: it classifies students by the predicted values of an indicator that is correlated with students’ learning outcomes. To find a reliable indicator, we collected students’ reading data and final exam scores from one course in our experiment. Details are given in the next section.
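Interpreting the description above, the time-copy model could be sketched as follows: the prediction for the next day copies the value observed one time slice (7 days) earlier, i.e., the same weekday of the previous week. This interpretation and the example values are ours, not taken from the GOAL implementation.

```python
def time_copy_predict(daily_values: list, slice_days: int = 7):
    """Predict the next day's value by copying the value observed one
    time slice (here, 7 days) earlier, i.e., the same weekday last week.

    `daily_values` is ordered oldest to newest; the prediction for
    day N+1 is the value observed at day N+1 - slice_days.
    """
    if len(daily_values) < slice_days:
        raise ValueError("need at least one full time slice of data")
    return daily_values[-slice_days]

# Illustrative reading-page counts for the most recent 7 days (oldest first);
# the predicted value for day 8 copies day 1 (7 days earlier).
pages = [126, 0, 0, 12, 0, 0, 30]
predicted = time_copy_predict(pages)
```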

In the physical activity context, we use linear regression as the prediction model, a model that has been studied rigorously and used extensively in practical applications (Xin and Xiaogang 2009). The system classifies students by the predicted values based on existing criteria, such as Tudor-Locke et al. (2008)’s zone-based hierarchy for computing the status of daily step activities.
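A minimal sketch of this pipeline, assuming an ordinary-least-squares fit over the daily step values and the Tudor-Locke et al. (2008) zone boundaries described later in this paper; the function names are illustrative, not the GOAL system's API.

```python
def predict_next_day(steps: list) -> float:
    """Fit y = a*x + b over the given days by ordinary least squares
    and extrapolate one day ahead (x = n)."""
    n = len(steps)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(steps) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, steps))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    a = sxy / sxx
    b = y_mean - a * x_mean
    return a * n + b

def step_status(steps_per_day: float) -> int:
    """Map a daily step count to the five Tudor-Locke zones:
    1 sedentary, 2 low active, 3 somewhat active, 4 active, 5 highly active."""
    for degree, upper in enumerate([5000, 7500, 10000, 12500], start=1):
        if steps_per_day < upper:
            return degree
    return 5
```

For example, a predicted value of 7097 steps falls in the 5000–7499 band and maps to degree 2 (low active).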

Through the approach given above, students’ analysis skills are divided into the five levels defined in our previous work (Majumdar et al. 2018). Table 2 gives the definition of the five levels, with the specific logic expressions shown in the fourth column. For level 4, the student checked his/her own activity data (N1>0) and reported activity status (N2>0) without the system’s help (!S), and the self-reported result matches the result reported by the system (R1==R2). For level 3, the student checked the data (N1>0) and reported status (N2>0) with the system’s help (S), and the self-reported result matches the system’s (R1==R2). For level 2, the student checked the data (N1>0) and reported status (N2>0), but the self-reported result differs from the system’s (R1!=R2), regardless of system help. For level 1, the student checked the data (N1>0) but did not report activity status (N2==0). For level 0, the student did not check his/her own activity data at all (N1==0).
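The logic expressions above collapse into a single rating function; the following sketch assumes the quantities N1, N2, S, R1, and R2 have already been extracted from the interaction logs.

```python
def analysis_skill_level(n1: int, n2: int, used_help: bool,
                         r_self, r_system) -> int:
    """Rate analysis skill per the Table 2 rubric.

    n1: count of interactions checking own activity data (N1)
    n2: count of status self-reports (N2)
    used_help: whether system analysis prompts were requested (S)
    r_self / r_system: status reported by the student (R1) / system (R2)
    """
    if n1 == 0:
        return 0                      # did not check data
    if n2 == 0:
        return 1                      # checked data, did not identify status
    if r_self != r_system:
        return 2                      # partially identified status
    return 3 if used_help else 4      # identified status with/without help
```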

Adaptive feedback design

To further support the acquisition of knowledge and skills, feedback is considered a critical factor (Shute 2008). In particular, automated feedback is of great value to teachers who have limited time to provide individualized guidance to their increasingly large classrooms (Gerard et al. 2015). In our system, feedback is automatically generated and delivered to students at different skill levels. Following the framework of automated feedback proposed by Serral et al. (2019), we designed the feedback considering three aspects: content design, delivery, and context (see Table 3). The GOAL system gives positive feedback at the task level for suggestive purposes. The feedback is delivered after the automated measurement of the student’s skill level, and students can choose to see the feedback messages anywhere and at any time (see Fig. 3f). Table 4 displays the message for each level.

Table 3 Adaptive feedback design (instantiated from Serral et al. (2019))
Table 4 Feedback message

Demonstration of analyzing an activity context

This section gives two examples to explain the whole flow of how the system supports analysis skill in the learning and physical contexts. Both examples come from actual data collected in the GOAL system during our pilot study, one from analyzing a learning activity and the other from analyzing a physical activity.

Support analysis skill in learning context

The data in the first example come from one undergraduate student who participated in a reading activity in a course. The number of pages read is treated as the indicator in this activity. The daily reading data from Nov 13, 2019, to Nov 19, 2019, were provided to the student. The course met at a fixed time every week during one semester. The system computed the predicted value with the time-copy model for each student in the class (size = 51). It then divided the set of predicted values into five similarly sized degrees as the criteria: (1) 0 pages, (2) 1–10 pages, (3) 11–50 pages, (4) 51–60 pages, and (5) ≥60 pages. The system reported a predicted value of 126 pages for the next day, whereas the student reported a degree of status of 4 for the next day. Hence, the system rated this student’s analysis skill at level 2 and informed the student what to do next to improve analysis skill. Figure 4 illustrates the whole flow.

Fig. 4
figure4

Example of supporting analysis skill in learning context
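The "five similarly sized degrees" could be derived by quantile binning of the class's predicted values; the following sketch illustrates one such scheme under that assumption (the resulting boundaries will generally differ from the page ranges listed above, which come from the actual class data).

```python
def derive_degree_bounds(predicted: list, k: int = 5) -> list:
    """Split the class's predicted values into k roughly equal-sized
    bins and return the k-1 inner boundaries (upper edge of each bin)."""
    vals = sorted(predicted)
    n = len(vals)
    return [vals[(i * n) // k - 1] for i in range(1, k)]

def degree_of(value, bounds: list) -> int:
    """Return the 1-based degree of `value` given the inner boundaries."""
    for i, upper in enumerate(bounds):
        if value <= upper:
            return i + 1
    return len(bounds) + 1

# Hypothetical class of 50 predicted page counts (1..50): the bounds split
# them into five bins of 10 students each.
bounds = derive_degree_bounds(list(range(1, 51)))
```

With such bounds, a predicted value above the top boundary (e.g., 126 pages here) maps to the highest degree, 5.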

Support analysis skill in physical context

The data in this example also come from one undergraduate student. The daily step counts from Nov 13, 2019, to Nov 19, 2019, synchronized from a smartphone, were provided to the student. The system adopts the criteria for adults’ daily steps proposed by Tudor-Locke, which classify adults into 5 degrees of status: (1) <5000 steps/day (sedentary), (2) 5000–7499 steps/day (low active), (3) 7500–9999 steps/day (somewhat active), (4) 10,000–12,499 steps/day (active), and (5) ≥12,500 steps/day (highly active) (Tudor-Locke et al. 2008). The student checked the 7-day data and predicted a degree of status of 2 for the next day. The system predicted a value of 7097 steps for the next day using the linear regression model, which also corresponds to a degree of status of 2 according to the criteria. The system then rated this student’s analysis skill level according to the rubric and informed the student what to do next to improve analysis skill. Figure 5 illustrates the whole flow.

Fig. 5
figure5

Example of supporting analysis skill in physical context

Research methods

The first challenge we face is to choose a reliable indicator for analysis in a system that provides self-data as a context for promoting analysis skill. For instance, “step count” can be considered a reliable indicator for physical activity, where the criteria for adults’ daily step counts proposed by Tudor-Locke et al. (2008) could be implemented in the system. Similarly, in the learning context, standardized achievement tests are the most commonly used means of assessing students’ learning achievement (Ansley 1996). However, criteria to directly evaluate students’ daily learning activity are still limited, and often attendance is the only proxy available. Hence, we ask the following research question in response to the first challenge:

RQ1: Which indicators are reliable to estimate students’ activity status in a learning activity?

Providing students analysis tasks with a measurable indicator can help model analysis skill more precisely.

The second challenge we face is to understand students’ analysis behaviors and their perception of the analysis task when given such a system to analyze self-data from multiple contexts. So, we put forward the following research questions:

RQ2: What was the learner’s perception regarding the analysis tasks and the affordances in the GOAL system?

RQ3: How did the students use the GOAL system during the pilot study to conduct analysis tasks?

The answers to these two research questions help us to improve the system for easier use.

Study design and participants

A single-group design was conducted in a computer science course at a public university. A total of fifty-one students aged 20 to 22 years participated in the experiment. The data were collected over one semester, from October 16, 2019, to January 8, 2020. In the first week, the project was introduced to all participants. They were told to read the course material on BookRoll and to use mobile phones or wearable devices to record step data. In the second week, the analysis features were introduced to the students, who were asked to use the analysis module in the GOAL system. In the last week, we conducted a questionnaire-based exit survey. The study followed the standard ethical considerations approved for the project by the institutional committee.

Data collected and analysis methods

Three types of datasets were collected in the GOAL system (see Table 5). All fifty-one students used the BookRoll system in this course, and the GOAL system synchronized their daily reading data from it. Twenty-one students synchronized daily steps from their smartphones to the GOAL system, and twenty-nine students generated analysis interactions in the GOAL system. In addition, we received forty-one students’ final exam scores in this course and twenty-three students’ exit-survey responses during the experiment.

Table 5 Data collected in the GOAL system

To answer RQ1, we computed the correlation between the reading behavior data and the final exam scores to find a reliable indicator. We selected three parameters from the reading data (reading time, number of pages read, average time spent on each page) and calculated them for each student along three time dimensions (reading during the semester, reading in class, reading out of class). A total of nine indicators were processed for the 41 students whose final exam scores were available. Regarding RQ2, we analyzed twenty-three students’ exit-survey responses to evaluate the validity of the features and feedback designed to foster analysis behavior in the system. Regarding RQ3, we examined twenty-nine students’ analysis interactions to observe their preferences for self-directed activities in multiple contexts. Additionally, we evaluated the usage of the GOAL system based on the twenty-three students’ exit-survey responses.
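The indicator screening for RQ1 amounts to computing a Pearson correlation between each of the nine reading indicators and the final exam scores. A minimal sketch of that computation follows (the significance test behind the reported p value is omitted, and the data shown are illustrative):

```python
import math

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Illustrative: correlate one indicator (e.g., time per page out of class)
# with final exam scores for a handful of students.
indicator = [12.0, 30.5, 8.2, 25.1, 40.3]
scores = [55, 78, 60, 72, 90]
r = pearson_r(indicator, scores)
```

Running this for each of the nine indicator series and ranking by |r| identifies the candidate indicator.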

Results

Correlation between learning activity data and final exam score

Figure 6 shows the correlation between the reading behavior indicators and the final score. In this dataset, there was a significant correlation between the time spent on each page out of class and the final exam score (r = 0.3212, p = 0.04).

Fig. 6
figure6

Indicators and correlation with the final exam score

Students’ use of the GOAL system for analysis

We analyzed the interaction logs (n = 153) generated by 29 students to find out what kind of activity they analyzed. Figure 7 gives the proportion of the different preferences. The 29 students fell into three distinct groups: one that analyzed only the physical activity, one that analyzed only the learning activity, and one that analyzed both. The groups were of nearly equal size. Figure 8 gives the interaction counts of analysis behavior in the learning and physical contexts; each column shows the interactions of one student. The blue bars show how many logs were generated in the physical activity, and the orange bars how many in the learning activity. Overall, the interaction count in the physical activity exceeded that in the learning activity.

Fig. 7
figure7

Proportion of different preferences

Fig. 8
figure8

Interaction count of analysis behavior in learning and physical activities

After 85 days of GOAL system use, the System Usability Scale (SUS) (Brooke 1996) score reported in the exit survey was 44.8. This score’s adjective rating is near “OK” (Bangor et al. 2008), but it still indicates scope for redesign. To approach the redesign from the perspective of students’ mental models of analysis, we conducted a further perception survey to investigate whether the analysis features in the system match those mental models. We report on that next.

Learner’s perception of analysis behavior on the GOAL system

As the GOAL interactions were voluntary for the students, the exit survey included a scenario walking through the analysis workflow, followed by a multiple-choice question. The given items related to the features designed in the system: “Check graphs to ANALYZE current status,” “Note current status,” “Check the status reported by system,” and “Compare to other data.” These questions help us understand students’ perceptions of the designed features. Figure 9 shows how many students selected each item. “Check graphs to ANALYZE current status” was selected by 14 students (60.9%), and “Note current status” by 10 students (43.5%). However, only 5 students (21.7%) selected “Check the status reported by system,” and 1 (4.3%) selected “Compare to other data.” Additionally, 17 students (73.9%) selected only one option, and 6 students (26.1%) selected two or three options. None of them selected all the options.

Fig. 9
figure9

Result of suggestion to identify activity status

Table 4 displays the feedback message designed for each skill level. We summed the numbers of students who gave the expected answer and the unexpected answer (see Fig. 10). For the feedback for levels 1 to 4, over half of the students selected the expected answer. However, for the feedback for level 0, over half of the students failed to select the expected answer.

Fig. 10
figure10

Result of selecting appropriate feedback for each level

Discussion

RQ1: Which indicators are reliable to estimate students’ activity status in a learning activity?

Our approach to the first research question was to analyze the correlation between the selected parameters and students’ final exam scores. The result indicates that the reading time students spend on each page out of class can be considered a reliable indicator for rating students’ learning status in the selected course. This is consistent with a previous study, which pointed out that time spent reading books out of class is the best predictor of several reading achievement measures (Anderson et al. 1988). Unlike traditional classroom learning, SDL pays more attention to students’ spontaneous learning progress (Williamson 2007), which includes learning activities both in and out of the classroom. Analysis skill in SDL requires students to analyze self-data from learning activities to identify their learning status, and indicators that correlate with academic performance are more convincing in reflecting that status. Providing such an indicator in analysis tasks can help students perform SDS and help us measure students’ analysis skills more precisely. However, we only analyzed learning data from one university course, so there is not yet sufficient evidence that this indicator applies to all courses. Still, our approach is novel and provides a proof of concept for using the linked LA platform to conceptualize learning status based on the various performance parameters in the system and relate it to the analysis indicator. In recent years, there has been considerable research on predictive models in LA (Chatti et al. 2012; Romero et al. 2013; Peña-Ayala 2014), introducing more features and high-accuracy algorithms, such as classification trees and neural networks, to predict students’ academic performance (Akçapınar et al. 2019). In future work, we consider introducing these methods to expand the set of possible indicators in our study.

RQ2: What was the learner’s perception regarding the analysis tasks and the affordances in the GOAL system?

The results show that most students agree with using a visualization tool to analyze, whereas they tend to complete the analysis process with a single action. This can be explained by convenience being a crucial factor in students’ choices (Jang 2015). The fact that students prefer simple processes when performing analysis skill in SDL suggests that we should simplify the analysis process when designing analysis features. We also need to consider incorporating techniques such as rewards (Michie et al. 2013) to motivate students to use the necessary features. Regarding the feedback designed for different analysis skill levels, the results reveal that more than half of the students could match the correct feedback designed for levels 1 to 4, but over half failed to select the correct feedback designed for level 0. To reach the highest level (level 4), learners at the lowest level (level 0) have to invest more effort than learners at other levels. The current support in the GOAL system is limited to sending messages regardless of the student’s level, and this finding suggests that is not enough for students with low-level skill. Mariani (1997) likewise indicates that high support is beneficial when students face challenging tasks, because they can achieve new learning, become more confident, and eventually tackle the tasks independently. In future work, we plan to build scaffolds that fade as students’ skill level increases.
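
The fading-scaffold idea mentioned above could be sketched as a simple mapping from the measured skill level (0 to 4) to progressively less detailed guidance. The messages and the clamping behavior below are hypothetical illustrations, not the actual GOAL system feedback.

```python
# Hypothetical level-dependent feedback: lower-level students receive more
# detailed step-by-step guidance, which fades as the measured analysis
# skill level (0-4) increases. Messages are illustrative only.
FEEDBACK = {
    0: "Step-by-step guide: open the activity chart, then compare your last 7 days.",
    1: "Try comparing your reading time with the class average.",
    2: "Which days deviate most from your target? Check the trend line.",
    3: "Summarize your current status in one sentence before planning.",
    4: "Well done - your analysis is self-sufficient.",
}

def select_feedback(skill_level: int) -> str:
    """Return the scaffold message for a measured skill level, clamped to 0-4."""
    level = max(0, min(4, skill_level))
    return FEEDBACK[level]

print(select_feedback(0))
```

A production version would re-estimate the level from fresh interaction logs before each message, so the scaffold fades automatically as the model detects skill growth.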

RQ3: How did the students use the GOAL system during the pilot study to conduct analysis tasks?

First, we observed the students’ preferences across multiple contexts. The GOAL system provides students with datasets for analyzing activity status in either the learning context or the physical activity context. The results show three groups of almost equal size: (1) students who preferred only the physical activity context, (2) students who preferred only the learning context, and (3) students who used both. Moreover, Fig. 8 shows more analysis interactions in the physical context than in the learning context, revealing that students were more active in physical activities. Existing research on self-direction, in contrast, is limited to a single context: Brockett and Hiemstra (2018), Hammond and Collins (2013), Candy (1991), and Knowles (1975) discussed the theory and practice of self-direction in the learning context, and Mirowsky and Ross (2010) noted that staying healthy requires self-direction skill, but none investigated self-direction across multiple contexts. Our study is an innovative attempt to utilize both the learning and physical activity contexts for training students’ SDS. However, further investigation is needed into students’ motivation for choosing specific contexts and into the particular effects of exercising analysis skill in different contexts. Additionally, we evaluated the usage of the GOAL system based on the survey results, which show that the interface was not easy for students to use. One possible reason is that we provided limited data (only the most recent 7 days) and only two kinds of activities, reading and walking, which may not have been enough to meet students’ actual needs. Future work should consider more types of data.

Conclusion

In this study, we aimed to foster learners’ analysis skill for self-directed activities in a data-rich environment. The GOAL system is an innovative platform that supports self-directed learning by using data from students’ own contexts. This paper described a framework of student modeling, user interface, and adaptive support for such a system. The answer to RQ1 helped us overcome the challenge of choosing a reliable indicator for analysis in a system that provides self-data as a context for promoting analysis skill; based on it, we established our model for measuring students’ analysis skill. Once students’ skill levels are precisely identified, the system can provide them with appropriate feedback to improve. The answers to RQ2 and RQ3 give us clues about students’ analysis behaviors and their perception of the analysis tasks when given such a system to analyze self-data from multiple contexts. The system usage statistics show that students chose multiple contexts to exercise their analysis skill, which supports our innovative platform that synthesizes data across students’ learning and physical activity contexts. Recent studies confirmed that multimodal data, such as heart rate, step count, weather conditions, and learning activity, have great potential for predicting learning performance (Di Mitri et al. 2017). Future work on the GOAL platform will also consider automating the process of finding reliable indicators from “beyond-LMS” multimodal data in the learners’ context of self-directed activities.

Availability of data and materials

The logs in the GOAL system analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. Akçapınar, G., Altun, A., Aşkar, P. (2019). Using learning analytics to develop early-warning system for at-risk students. International Journal of Educational Technology in Higher Education, 16(40). https://doi.org/10.1186/s41239-019-0172-z.

  2. Anderson, R.C., Wilson, P.T., Fielding, L.G. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 23(3), 285–303. https://doi.org/10.1598/rrq.23.3.2.

  3. Andrade, A. (2017). Understanding student learning trajectories using multimodal learning analytics within an embodied-interaction learning environment. ACM International Conference on LAK, 70–79. https://doi.org/10.1145/3027385.3027429.

  4. Ansley, T. (1996). The role of standardized achievement tests in grades k-12. In Educational Psychology, Handbook of Classroom Assessment. https://doi.org/10.1016/B978-012554155-8/50011-9. Academic Press, (pp. 265–285).

  5. Askinadze, A., Liebeck, M., Conrad, S. (2018). Predicting student test performance based on time series data of ebook reader behavior using the cluster-distance space transformation. In ICCE 2018 - 26th International Conference on Computers in Education, Workshop Proceedings, Manila, (pp. 430–439).

  6. Bangor, A., Kortum, P.T., Miller, J.T. (2008). An empirical evaluation of the system usability scale. International Journal of Human–Computer Interaction, 24(6), 574–594.

  7. Brockett, R.G., & Hiemstra, R. (2018). Self-direction in adult learning: Perspectives on theory, research and practice. New York: Routledge.

  8. Brooke, J. (1996). SUS: A “quick and dirty” usability scale. London: Taylor and Francis.

  9. Candy, P.C. (1991). Self-direction for lifelong learning. A comprehensive guide to theory and practice. San Francisco: Jossey-Bass.

  10. Chatti, M.A., Dyckhoff, A.L., Schroeder, U., Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning, 4(5-6), 318–331.

  11. Di Mitri, D., Börner, D., Scheffel, M., Ternier, S., Drachsler, H., Specht, M. (2017). Learning pulse: A machine learning approach for predicting performance in self-regulated learning using multimodal data. ACM International Conference on LAK, 188–197. https://doi.org/10.1145/3027385.3027447.

  12. Fisher, M., King, J., Tague, G. (2001). Development of a self-directed learning readiness scale for nursing education. Nurse Education Today, 21(7), 516–525.

  13. Flanagan, B., Chen, W., Ogata, H. (2018). Joint activity on learner performance prediction using the BookRoll dataset. In ICCE 2018 - 26th International Conference on Computers in Education, Workshop Proceedings, Manila, (pp. 487–492).

  14. Gerard, L., Matuk, C., McElhaney, K., Linn, M. (2015). Automated, adaptive guidance for k-12 education. Educational Research Review, 15, 41–58.

  15. Hammond, M., & Collins, R. (2013). Self-directed learning: Critical practice. New York: Routledge.

  16. Hasnine, M., Akçapınar, G., Flanagan, B., Majumdar, R., Ogata, H., Mouri, K. (2018). Towards final scores prediction over clickstream using machine learning methods. In ICCE 2018 - 26th International Conference on Computers in Education, Manila, (pp. 392–397).

  17. Jang, Y. (2015). Convenience matters: A qualitative study on the impact of use of social media and collaboration technologies on learning experience and performance in higher education. Education for Information, 31(1-2), 73–98.

  18. Knowles, M.S. (1975). Self-directed learning: A guide for learners and teachers. New York: Association Press.

  19. Lee, S., Kim, S.H., Kwon, B.C. (2017). VLAT: Development of a visualization literacy assessment test. IEEE Transactions on Visualization and Computer Graphics, 23(1), 551–560.

  20. Loyens, S.M.M., Magda, J., Rikers, R.M.J.P. (2008). Self-directed learning in problem-based learning and its relationships with self-regulated learning. Educational Psychology Review, 20(4), 411–427.

  21. Majumdar, R., Yang, Y.Y., Li, H., Akçapinar, G., Flanagan, B., Ogata, H. (2018). GOAL: Supporting learner’s development of self-direction skills using health and learning data. In ICCE 2018 - 26th International Conference on Computers in Education, Main Conference Proceedings, Manila, (pp. 406–415).

  22. Majumdar, R., Yang, Y.Y., Li, H., Akçapinar, G., Flanagan, B., Ogata, H. (2019). Adaptive support for acquisition of self-direction skills using learning and health data. In 2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT), volume 2161-377X. Curran Associates, Brazil, (pp. 54–56).

  23. Mariani, L. (1997). Teacher support and teacher challenge in promoting learner autonomy. Perspectives, a Journal of TESOL-Italy, 22, 5–19.

  24. Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M.P., Cane, J., Wood, C.E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46(1), 81–95.

  25. Mirowsky, J., & Ross, C.E. (2010). Self-direction toward Health: Overriding the Default American Lifestyle. In The Handbook of Health Psychology and Behavioral Medicine. Guilford Press, New York, (pp. 235–250).

  26. Noguchi, J., & McCarthy, T. (2010). Reflective self-study: Fostering learner autonomy. In Stoke, A.M. (Ed.), JALT2009 Conference Proceedings. Tokyo: JALT.

  27. Ogata, H., Yin, C., Oi, M., Okubo, F., Shimada, A., Kojima, K., Yamada, M. (2015). Ebook-based learning analytics in university education. In International Conference on Computer in Education (ICCE 2015), Saskatoon, (pp. 401–406).

  28. P21-Framework (2009). A framework for 21st century learning. http://www.p21.org/our-work/p21-framework. Accessed 30 July 2020.

  29. Peña-Ayala, A. (2014). Educational data mining: A survey and a data mining-based analysis of recent works. Expert Systems with Applications, 41(4 PART 1), 1432–1462.

  30. Romero, C., Espejo, P.G., Zafra, A., Romero, J.R., Ventura, S. (2013). Web usage mining for predicting final marks of students that use Moodle courses. Computer Applications in Engineering Education, 21(1), 135–146.

  31. Serral, E., Ruiz, J., Elen, J., Snoeck, M. (2019). Conceptualizing the domain of automated feedback for learners. In XXII Ibero-American Conference on Software Engineering, CIbSE 2019. Curran Associates, Cuba, (pp. 223–236).

  32. Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

  33. Spann, C.A., Schaeffer, J., Siemens, G. (2017). Expanding the scope of learning analytics data: Preliminary findings on attention and self-regulation using wearable technology. ACM International Conference on LAK, 203–207.

  34. Stockdale, S.L., & Brockett, R.G. (2011). Development of the PRO-SDLS: A measure of self-direction in learning based on the personal responsibility orientation model. Adult Education Quarterly, 61(2), 161–180.

  35. Taylor, I., & Burgess, H. (1995). Orientation to self-directed learning: Paradox or paradigm? Studies in Higher Education, 20(1), 87–98.

  36. Thornton, K. (2010). Supporting self-directed learning: A framework for teachers. Language Education in Asia, 1, 158–170.

  37. Tudor-Locke, C., Hatano, Y., Pangrazi, R.P., Kang, M. (2008). Revisiting “how many steps are enough?” Medicine and Science in Sports and Exercise, 40(7 SUPPL.1), 537–543.

  38. Walker, J.T., & Lofton, S.P. (2003). Effect of a problem based learning curriculum on students’ perceptions of self directed learning. Issues in Educational Research, 13, 71–100.

  39. Williamson, S.N. (2007). Development of a self-rating scale of self-directed learning. Nurse Researcher, 14(2), 66–83.

  40. xAPI (2016). Experience API (xAPI) specification. http://github.com/adlnet/xAPI-Spec. Accessed 30 July 2020.

  41. Yan, X., & Su, X. (2009). Linear regression analysis: Theory and computing. Singapore: World Scientific.

Funding

This work is partially funded by the following research grants:

Prof. Hiroaki Ogata

JSPS KAKENHI Grant-in-Aid for Scientific Research (S) 16H06304

NEDO Special Innovation Program on AI and Big Data 18102059-0

Dr. Rwitajit Majumdar

JSPS KAKENHI Grant-in-Aid for Early-Career Scientists 20K20131

JSPS KAKENHI Grant-in-Aid for Scientific Research (B) 20H01722 (co-PI)

SPIRITS 2020 of Kyoto University

Dr. Brendan Flanagan

JSPS KAKENHI Grant-in-Aid for Scientific Research (B) 20H01722

Author information

Affiliations

Authors

Contributions

YY drafted the initial manuscript and data analysis. RM designed the architecture of the system and provided insight and editing of the manuscript. YY and HL implemented the software system and supporting algorithms. RM, HL, and YY performed the experiments and analyzed the data. GA and BF provided technical resources and insights. HO, RM, and BF secured funding to partially support this project. HO provided supervision of the research. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yuanyuan Yang.

Ethics declarations

Consent for publication

All the participants took part in the course and gave consent to use their learning, physical activity, interaction logs, and survey responses for academic research reporting.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Yang, Y., Majumdar, R., Li, H. et al. A framework to foster analysis skill for self-directed activities in data-rich environment. RPTEL 16, 22 (2021). https://doi.org/10.1186/s41039-021-00170-y

Keywords

  • Data analysis
  • Self-directed learning
  • Self-direction skill
  • Skill measurement
  • Automated measurement
  • Student modeling
  • User interface design
  • Adaptive support