Identifier
etd-10282009-144508
Degree
Doctor of Philosophy (PhD)
Department
Educational Theory, Policy, and Practice
Document Type
Dissertation
Abstract
Numerous procedures have been suggested for determining the number of factors to retain in factor analysis. However, previous studies have focused on comparing methods using normal data sets. This study had two phases. The first phase explored the Kaiser method, the Scree test, Bartlett's chi-square test, Minimum Average Partial (1976 & 2000), Horn's parallel analysis, and Longman's parallel analysis on normal data using the estimation methods of Maximum Likelihood (ML), Principal Component Analysis (PCA), and Principal Factor Analysis (PFA). The second phase explored the Kaiser method, the Scree test, Minimum Average Partial (1976 & 2000), Horn's parallel analysis, and Longman's parallel analysis on data that contained outliers, using the estimation methods of PCA and PFA.

In the first phase, sample correlation matrices were generated under varied conditions (sample size, number of variables, estimation method). Three hundred sample correlation matrices were generated for each condition, for a grand total of eighteen hundred. Parallel analysis and the Kaiser method generally performed best across all conditions. However, increasing the number of variables and the sample size under each condition revealed differences in accuracy among the methods, while increasing the sample size resulted in little difference between the PCA and PFA estimation methods. Recommendations concerning the accuracy of the methods under each condition are discussed.

In the second phase, fifty sample correlation matrices were randomly selected from the three hundred generated under each condition. An outlier was randomly incorporated into each of the fifty sample correlation matrices, and the squared Mahalanobis distance was recorded for each to determine the distance at which the methods begin to fail. The results indicate that Horn's parallel analysis and Longman's parallel analysis were very resistant to outliers in some specific cases. However, it was evident from the data that each method tended to retain an incorrect number of factors once the squared Mahalanobis distance reached a certain level. Method performance is discussed for each condition to help identify the most effective and useful combinations for dealing with outliers.
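The abstract does not reproduce the dissertation's simulation code, and the software used is not named here. The following is a minimal Python/NumPy sketch of two quantities the study relies on: a Horn-style parallel analysis criterion for the number of factors to retain, and the squared Mahalanobis distance used to characterize an outlier. The function names, the number of random data sets (100), and the two-factor example data are illustrative assumptions, not the author's design.

import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    # Horn-style parallel analysis (sketch): retain components whose observed
    # correlation-matrix eigenvalues exceed the mean eigenvalues obtained from
    # random normal data of the same dimensions.
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    return int(np.sum(obs_eig > rand_eig.mean(axis=0)))

def sq_mahalanobis(data):
    # Squared Mahalanobis distance of each observation from the sample mean,
    # the quantity recorded in the study to gauge how extreme an outlier is.
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# Illustrative two-factor structure: 200 observations on 8 variables
# (these values are assumptions, not the dissertation's design conditions).
rng = np.random.default_rng(1)
loadings = np.zeros((8, 2))
loadings[:4, 0] = 0.7
loadings[4:, 1] = 0.7
factors = rng.standard_normal((200, 2))
data = factors @ loadings.T + 0.5 * rng.standard_normal((200, 8))
print("Factors retained by parallel analysis:", parallel_analysis(data))
print("Largest squared Mahalanobis distance:", round(sq_mahalanobis(data).max(), 2))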
Date
2009
Document Availability at the Time of Submission
Release the entire work immediately for access worldwide.
Recommended Citation
Swaim, Victor Snipes, "Determining the number of factors in data containing a single outlier: a study of factor analysis of simulated data" (2009). LSU Doctoral Dissertations. 3044.
https://repository.lsu.edu/gradschool_dissertations/3044
Committee Chair
Kennedy, Eugene
DOI
10.31390/gradschool_dissertations.3044