1. Cronbach's alpha: Cronbach's alpha is a commonly used measure of internal consistency that assesses the reliability of a scale or test. It ranges from 0 to 1, with higher values indicating greater internal consistency.
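The standard computational formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch in Python with NumPy (the function name is my own):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

With perfectly correlated items the item variances sum to half the total variance (for k = 2), and alpha comes out to 1.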
2. Split-half reliability: Split-half reliability involves dividing a test into two halves and comparing the scores on each half. The correlation between the two halves is used as a measure of internal consistency.
3. Kuder-Richardson Formula 20 (KR-20): KR-20 is a measure of internal consistency specifically designed for dichotomous items (items with only two response options). It is similar to Cronbach's alpha but is more appropriate for tests with dichotomous items.
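KR-20 replaces each item's variance with p*q, where p is the proportion answering the item correctly. A sketch under the same matrix layout as above (population variance is used throughout; function name is mine):

```python
import numpy as np

def kr20(items):
    """KR-20 for an (n_respondents, k_items) matrix of 0/1 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)               # proportion scoring 1 on each item
    pq = (p * (1 - p)).sum()             # sum of Bernoulli item variances
    total_var = items.sum(axis=1).var()  # population variance of total scores
    return k / (k - 1) * (1 - pq / total_var)
```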
4. Spearman-Brown prophecy formula: The Spearman-Brown prophecy formula is used to estimate the reliability of a test if its length were to be increased or decreased. It is particularly useful for determining the optimal length of a test to maximize internal consistency.
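The formula itself is r_new = n*r / (1 + (n-1)*r), where n is the factor by which the test length changes. A one-line sketch (function name is mine):

```python
def spearman_brown(r, length_factor):
    """Predicted reliability when test length changes by `length_factor`.

    r: current reliability (e.g. the correlation between two half-tests)
    length_factor: new length / current length (2.0 doubles the test)
    """
    return length_factor * r / (1 + (length_factor - 1) * r)
```

For example, a half-test correlation of 0.6 steps up to 2*0.6/1.6 = 0.75 for the full-length test, which is how the formula is used in split-half reliability.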
5. Guttman split-half coefficient: The Guttman split-half coefficient is a variation of split-half reliability that does not assume the two halves have equal variances. Because of this, it can provide more accurate estimates than the traditional Spearman-Brown-corrected split-half when the halves are unbalanced.
Google Gemini AI
Jun 9
Measures of Internal Consistency:
1. Cronbach's Alpha:
- Most widely used measure
- A function of the number of items and their average inter-item covariance
- Ranges from 0 to 1, with higher values indicating higher internal consistency
2. Kuder-Richardson Formula 20 (KR-20):
- Similar to Cronbach's Alpha, but more suitable for dichotomous (yes/no) items
- Ranges from 0 to 1, with higher values indicating higher internal consistency
3. Split-Half Reliability:
- Calculates the correlation between two randomly split halves of a scale
- The Spearman-Brown prophecy formula is then applied to step the half-test correlation up to an estimate of full-length reliability
4. Guttman's Lambda-2 (λ2):
- A lower-bound estimate of reliability that is always at least as high as Cronbach's Alpha
- Ranges from 0 to 1, with higher values indicating higher internal consistency
5. Parallel-Form Reliability:
- Correlates two different forms of the same scale administered to the same sample
- Assesses consistency across different item sets measuring the same construct
6. Intraclass Correlation Coefficient (ICC):
- Used for assessing reliability of continuous measures in multi-rater or repeated measures designs
- Ranges from 0 to 1, with higher values indicating stronger agreement among raters or time points
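There are several ICC forms; a common one is ICC(1,1), the one-way random-effects, single-rater version, computed from the between-target and within-target mean squares of a one-way ANOVA. A minimal sketch (the choice of ICC form and the function name are my assumptions):

```python
import numpy as np

def icc_oneway(ratings):
    """ICC(1,1) for an (n_targets, k_raters) matrix of continuous ratings."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    grand = ratings.mean()
    # One-way ANOVA mean squares: between targets and within targets
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

When raters agree perfectly, the within-target mean square is 0 and the ICC is 1.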
7. Kappa Coefficient:
- Used for assessing reliability of categorical measures in multi-rater or repeated measures designs
- Ranges from -1 to 1, with values close to 1 indicating strong agreement
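For two raters, Cohen's kappa compares observed agreement with the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch (function name is mine):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = (r1 == r2).mean()  # observed proportion of agreement
    # chance agreement: product of each rater's marginal rate per category
    pe = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)
    return (po - pe) / (1 - pe)
```

For example, raters labeling [0, 1, 0, 1] and [0, 1, 1, 1] agree on 75% of items, but half that agreement is expected by chance, so kappa is 0.5.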
8. Omega Coefficient:
- A factor-model-based measure that drops Cronbach's Alpha's assumption that all items load equally on the construct (tau-equivalence)
- Ranges from 0 to 1, with higher values indicating higher internal consistency
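In practice omega is computed from a fitted one-factor model (typically via dedicated psychometrics software); once standardized loadings and error variances are estimated, omega total is (sum of loadings)^2 divided by that quantity plus the summed error variances. A sketch assuming the loadings are already in hand (function name is mine):

```python
import numpy as np

def omega_total(loadings, error_vars):
    """McDonald's omega total from one-factor standardized loadings
    and the items' error (unique) variances."""
    common = np.sum(loadings) ** 2      # variance due to the common factor
    return common / (common + np.sum(error_vars))
```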
9. McDonald's Omega Hierarchical (ωH):
- A variant of omega that estimates the proportion of total-score variance due to a single general factor when items also load on group factors (subdimensions)
- Ranges from 0 to 1, with higher values indicating a stronger general factor
10. Ebel's Reliability Estimate:
- An older measure that is similar to Kuder-Richardson Formula 20, but it also considers the number of items in the scale