
Table 4: F-score of two raters for the authorship of questions

From: Evaluation of a question generation approach using semantic web for supporting argumentation

 

| Topic (inter-rater agreement) | Rater 1 F-score | Rater 1 Precision | Rater 1 Recall | Rater 2 F-score | Rater 2 Precision | Rater 2 Recall |
|---|---|---|---|---|---|---|
| Topic 1 (Kappa = 0.086) | 0.33 | 0.75 | 0.21 | 0.51 | 0.81 | 0.37 |
| Topic 2 (Kappa = 0.233) | 0.50 | 0.87 | 0.35 | 0.52 | 1.00 | 0.35 |
| Topic 3 (Kappa = 0.263) | 0.40 | 0.77 | 0.27 | 0.44 | 0.92 | 0.29 |
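The reported F-scores are consistent with the standard F1 measure, the harmonic mean of precision and recall, F1 = 2PR/(P + R). The following is a minimal consistency check, not part of the original table, assuming the paper uses this standard definition; recomputing F1 from each precision/recall pair reproduces the reported values to two decimals.

```python
def f1(precision: float, recall: float) -> float:
    """Standard F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (topic, rater, precision, recall, reported F-score) taken from Table 4
rows = [
    ("Topic 1", "Rater 1", 0.75, 0.21, 0.33),
    ("Topic 1", "Rater 2", 0.81, 0.37, 0.51),
    ("Topic 2", "Rater 1", 0.87, 0.35, 0.50),
    ("Topic 2", "Rater 2", 1.00, 0.35, 0.52),
    ("Topic 3", "Rater 1", 0.77, 0.27, 0.40),
    ("Topic 3", "Rater 2", 0.92, 0.29, 0.44),
]

for topic, rater, p, r, reported in rows:
    computed = f1(p, r)
    # Each computed value matches the reported F-score after rounding
    print(f"{topic} {rater}: computed F1 = {computed:.2f}, reported = {reported:.2f}")
```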