Identifying Sources of Difference in Reliability in Content Analysis


Bibliographic Details
Main Authors: Elizabeth Murphy, Justyna Ciszewska-Carr
Format: Article
Language: English
Published: Athabasca University Press, 2005
Online Access: https://doaj.org/article/a97620a42dc840259ec8a511b758675a
Description
Summary: This paper reports on a case study that identifies and illustrates sources of difference in inter-coder agreement as they relate to reliability in the quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks, and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding, the indicator, the overall level of reliability between coders was .591 when measured with Cohen's kappa. At that same level, kappa ranged from .349 to .664 across tasks and from .390 to .907 across participants. Implications for training and research are discussed.

Keywords: content analysis; online discussions; reliability; Cohen's kappa; sources of difference; coding
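For context, Cohen's kappa corrects the observed proportion of inter-coder agreement for the agreement expected by chance, given each coder's marginal label frequencies. A minimal sketch in Python; the indicator labels and codings below are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' label sequences of equal length."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of units both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical indicator codes assigned by two coders to ten message units.
a = ["PF1", "PF2", "PF1", "PR3", "PF2", "PF1", "PR3", "PF2", "PF1", "PR3"]
b = ["PF1", "PF2", "PF2", "PR3", "PF2", "PF1", "PF1", "PF2", "PF1", "PR3"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

Here the coders agree on 8 of 10 units (p_o = .80), but because chance agreement is .34, kappa is noticeably lower than raw agreement, which is precisely why the study reports kappa rather than simple percent agreement.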