Objective Measurement of Subjective Phenomena

7. Reliability

Reliability refers to the precision with which a scale or instrument assesses a dimension. If one were to administer a scale twice to a sample of participants, one would not expect to obtain precisely the same score for each participant on the two occasions. But the closer each person’s score at time 1 corresponds to his or her score at time 2, the higher the reliability of the measure. Thus, reliability refers to the reproducibility of scores across multiple, hypothetical administrations of the measuring instrument (McDonald, 1999).
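As a concrete illustration, the brief Python sketch below estimates this kind of reproducibility as the correlation between scores from two administrations of the same scale. The scores are invented for five hypothetical participants and are not taken from the source.

```python
import numpy as np

# Hypothetical scores for the same five participants at two administrations.
time1 = np.array([12, 18, 25, 30, 22], dtype=float)
time2 = np.array([14, 17, 27, 29, 20], dtype=float)

# Test-retest reliability is estimated by the correlation between the two
# sets of scores: the closer to 1, the more closely each person's time-2
# score reproduces his or her time-1 score.
reliability = np.corrcoef(time1, time2)[0, 1]
print(f"Estimated test-retest reliability: {reliability:.2f}")
```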

Reliability is defined as the proportion of variance in observed test scores that is attributable to true scores (Cronbach, 1951; McDonald, 1999). Under classical test theory (see above), we can distinguish three sources of variance: (a) true score variance, (b) error variance (or measurement error), and (c) total scale variance, which is the sum of true score variance and error variance.
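In symbols (the notation below is a conventional classical test theory rendering, not the source's own): the observed score X is the sum of the true score T and error E, and reliability is the ratio of true score variance to total score variance.

```latex
X = T + E, \qquad
\sigma^2_X = \sigma^2_T + \sigma^2_E, \qquad
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X}
           = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```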

Unfortunately, we typically do not have estimates of true score variance or error variance, only an estimate of the total scale variance. But by invoking simple assumptions about the true and error scores (see the preceding section on classical test theory), we can obtain estimates of the ratio of true score variance to total score variance, and these ratios serve as our estimates of the reliability of a scale.
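To illustrate this logic, the following Python sketch simulates item responses under assumed classical test theory conditions (each item is the true score plus independent error; the sample size, number of items, and variances are illustrative choices, not values from the source) and compares coefficient alpha (Cronbach, 1951), an internal consistency estimate, with the true ratio of true score variance to total score variance for the sum score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate classical test theory: each of k items is the (unobserved)
# true score plus independent random error. All settings are illustrative.
n_persons, n_items = 1000, 10
true_scores = rng.normal(0.0, 1.0, size=n_persons)          # true score variance = 1
errors = rng.normal(0.0, 1.0, size=(n_persons, n_items))    # error variance = 1 per item
items = true_scores[:, None] + errors                       # observed item scores

def cronbach_alpha(item_scores):
    """Coefficient alpha (Cronbach, 1951): an internal consistency
    estimate of the ratio of true score variance to total score variance."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Reliability of the sum score implied by the simulation settings:
# variance of the summed true scores divided by total variance of the sum.
true_reliability = (n_items**2 * 1.0) / (n_items**2 * 1.0 + n_items * 1.0)

print(f"True reliability of the sum score: {true_reliability:.3f}")
print(f"Coefficient alpha estimate:        {cronbach_alpha(items):.3f}")
```

With a large sample, the alpha estimate should fall close to the true value implied by these settings, k / (k + 1), or about .91 for ten items.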


TYPES OF RELIABILITY
Parallel Forms Reliability (Or Coefficient of Equivalence)
Split-Half Reliability
Internal Consistency Reliability
Test-Retest Reliability (Or Coefficient of Stability)
Coefficient of Stability and Equivalence
Interrater Reliability

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.