In statistics and research, internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same general construct produce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.
Cronbach's alpha
Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Internal consistency ranges between negative infinity and one. Coefficient alpha will be negative whenever there is greater within-subject variability than between-subject variability.[1]
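As an illustration of how the statistic is computed, here is a minimal Python sketch using the common variance form of coefficient alpha (algebraically related to the pairwise-correlation description above); the function name `cronbach_alpha` and the example bicycle-item responses are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix.

    scores: 2-D array, one row per respondent, one column per item.
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: three bicycle items scored 1-5, with the "I hate bicycles" item reverse-scored.
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [1, 2, 1],
    [3, 3, 4],
])
print(round(cronbach_alpha(responses), 2))  # ≈ 0.94 for this toy data
```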
A commonly accepted rule of thumb for describing internal consistency is as follows:[2]
| Cronbach's alpha | Internal consistency |
|---|---|
| 0.9 ≤ α | Excellent |
| 0.8 ≤ α < 0.9 | Good |
| 0.7 ≤ α < 0.8 | Acceptable |
| 0.6 ≤ α < 0.7 | Questionable |
| 0.5 ≤ α < 0.6 | Poor |
| α < 0.5 | Unacceptable |
Very high reliabilities (0.95 or higher) are not necessarily desirable, as this indicates that the items may be redundant.[3] The goal in designing a reliable instrument is for scores on similar items to be related (internally consistent), but for each item to contribute some unique information as well. Note further that Cronbach's alpha is necessarily higher for tests measuring narrower constructs, and lower when more generic, broad constructs are measured. This phenomenon, along with a number of other considerations, argues against using fixed cut-off values for internal consistency measures.[4] Alpha is also a function of the number of items, so shorter scales will often have lower reliability estimates yet still be preferable in many situations because they place a lower burden on respondents.
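The dependence of alpha on the number of items can be seen in the standardized form of alpha, which can be written as a function of the number of items k and the mean inter-item correlation r̄: α = k·r̄ / (1 + (k − 1)·r̄). The following hypothetical Python sketch of this identity shows alpha rising with scale length even when the mean inter-item correlation is held fixed.

```python
def standardized_alpha(n_items: int, mean_r: float) -> float:
    """Standardized alpha from the number of items and the mean inter-item correlation."""
    return n_items * mean_r / (1 + (n_items - 1) * mean_r)

# With the same mean inter-item correlation (0.3), longer scales yield higher alpha:
for k in (3, 5, 10, 20):
    print(k, round(standardized_alpha(k, 0.3), 2))  # 0.56, 0.68, 0.81, 0.90
```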
An alternative way of thinking about internal consistency is that it is the extent to which all of the items of a test measure the same latent variable. The advantage of this perspective over the notion of a high average correlation among the items of a test – the perspective underlying Cronbach's alpha – is that the average item correlation, like any other average, is affected by skewness in the distribution of item correlations. Thus, whereas the modal item correlation is zero when the items of a test measure several unrelated latent variables, the average item correlation in such cases will still be greater than zero. Consequently, although the ideal of measurement is for all items of a test to measure the same latent variable, alpha has been demonstrated many times to attain quite high values even when the set of items measures several unrelated latent variables.[5][6][7][8][9][10][11] The hierarchical "coefficient omega" may be a more appropriate index of the extent to which all of the items in a test measure the same latent variable.[12][13] Several different measures of internal consistency are reviewed by Revelle & Zinbarg (2009).[14][15]
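A small simulation can illustrate this point. The sketch below is hypothetical and assumes NumPy: it generates eight items that load on two unrelated latent variables, four items each, and still yields an alpha of roughly 0.8.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two unrelated latent variables, four items loading on each (plus noise).
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
items = np.column_stack(
    [0.8 * f1 + 0.4 * rng.normal(size=n) for _ in range(4)]
    + [0.8 * f2 + 0.4 * rng.normal(size=n) for _ in range(4)]
)

def cronbach_alpha(scores):
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / scores.sum(axis=1).var(ddof=1))

print(round(cronbach_alpha(items), 2))  # typically around 0.8, despite two unrelated factors
```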
References
- ^ Knapp, T. R. (1991). Coefficient alpha: Conceptualizations and anomalies. Research in Nursing & Health, 14, 457–480.
- ^ George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference. 11.0 update (4th ed.). Boston: Allyn & Bacon.
- ^ Streiner, D. L. (2003). Starting at the beginning: An introduction to coefficient alpha and internal consistency. Journal of Personality Assessment, 80, 99–103.
- ^ Peters, G.-J. Y. (2014). The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach’s alpha and the route towards more comprehensive assessment of scale quality. European Health Psychologist, 16(2). URL: http://ehps.net/ehp/index.php/contents/article/download/ehp.v16.i2.p56/1
- ^ Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
- ^ Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
- ^ Green, S. B., Lissitz, R.W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an index of test unidimensionality. Educational and Psychological Measurement, 37, 827–838.
- ^ Revelle, W. (1979). Hierarchical cluster analysis and the internal structure of tests. Multivariate Behavioral Research, 14, 57–74.
- ^ Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8, 350–353.
- ^ Zinbarg, R., Yovel, I., Revelle, W. & McDonald, R. (2006). Estimating generalizability to a universe of indicators that all have an attribute in common: A comparison of estimators for ωh. Applied Psychological Measurement, 30, 121–144.
- ^ Trippi, R. & Settle, R. (1976). A nonparametric coefficient of internal consistency. Multivariate Behavioral Research, 4, 419–424. URL: http://www.sigma-research.com/misc/Nonparametric%20Coefficient%20of%20Internal%20Consistency.htm
- ^ McDonald, R. P. (1999). Test theory: A unified treatment. Psychology Press. ISBN 0-8058-3075-8
- ^ Zinbarg, R., Revelle, W., Yovel, I. & Li, W. (2005). Cronbach’s α, Revelle’s β, and McDonald’s ωH: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123–133.
- ^ Revelle, W. & Zinbarg, R. (2009). Coefficients alpha, beta, omega, and the glb: Comments on Sijtsma. Psychometrika, 74(1), 145–154.
- ^ Dunn, T. J., Baguley, T. and Brunsden, V. (2013), From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology. doi: 10.1111/bjop.12046