From: Beyond reliability: assessing rater competence when using a behavioural marker system
Type of ICC | Explanation of the difference | Type used in this study | Justification |
---|---|---|---|
One-way random (ICC 1), two-way random (ICC 2) or two-way mixed (ICC 3) | One-way random assumes that the same raters do not rate all ratees. Two-way random assumes consistent raters for all ratees, and that the raters are a sample from a larger population. Two-way mixed assumes consistent raters for all ratees, and that the raters are the entire population, not a sample | Two-way random | All raters rated the same ratees. Our raters were a sample from a larger population |
Correlation or absolute agreement | Absolute agreement is used when it is important for scores to be the same (such as in academic exams). Correlation is used if, for example, a mean of ratings will be used, and the absolute value is less important | Absolute agreement | Desire to know how well each rater would assess the ratee |
Single measures or average measures | The single-measures ICC determines the accuracy of a single rater when used alone. The average-measures ICC determines the accuracy if multiple raters are used | Single measures | Desire to understand the accuracy of a single rater when used alone |
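Taken together, these choices correspond to a two-way random, absolute-agreement, single-measures coefficient, often written ICC(2,1). As a minimal sketch of how such a coefficient can be computed, the snippet below uses the pingouin library, whose ICC2 row reports this form of the ICC; the column names and scores are illustrative, not data from the study.

```python
import pandas as pd
import pingouin as pg

# Illustrative long-format ratings: every rater scores every ratee
# (values are made up, not taken from the study).
df = pd.DataFrame({
    "ratee": [r for r in range(1, 6) for _ in range(3)],
    "rater": ["A", "B", "C"] * 5,
    "score": [3, 4, 3, 2, 2, 3, 4, 4, 5, 3, 3, 4, 5, 4, 5],
})

icc = pg.intraclass_corr(data=df, targets="ratee", raters="rater", ratings="score")

# pingouin returns all six Shrout & Fleiss coefficients; the ICC2 row is the
# two-way random, absolute-agreement, single-measures ICC described in the table.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```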