Free download
Factor is a freeware program developed at the Rovira i Virgili University. Users are invited to download a DEMO and the program:
If you work with Excel, the following file can be used to preprocess the data file. Please note that you must allow macros when opening the preprocessing.xlsm file:
We would greatly appreciate any suggestions for future improvements. Detailed reports of failures are also welcome.
Version of the program: 12.04.05 (2nd October 2023)
This version implements:
-
The report of indices for detecting correlated residuals (doublets) has been improved. Now the variables involved in each pair are labeled according to the variables in the data file. Until now, the variables were labeled based on the variables included in the analysis (not the variables in the file). The new report makes the results easier to interpret.
-
The computation of bootstrap confidence intervals for goodness-of-fit indices has been refined so that the point estimate systematically falls inside the corresponding bootstrap confidence interval.
- FACTOR has been successfully tested on the recent release of Windows 11 Pro.
Version of the program: 12.04.04 (19th September 2023)
This version implements:
-
Until now, omega in the linear case was obtained using an empirical estimate of the variance of the raw scores in the denominator of the reliability formula. In other words, the variance was computed directly from the column of total scores. In the present version, we instead use a structural estimate of the total variance based on the fitted factor solution. The estimate we now use is the one in the denominator of formula (6.20b) in McDonald (1999).
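As a minimal illustration of the formula (a sketch in Python, not FACTOR's internal code), the structural omega for a unidimensional solution can be reconstructed from the standardized loadings alone; the total variance in the denominator comes from the fitted solution rather than from the column of raw total scores:

    import numpy as np

    def omega_structural(loadings):
        # Standardized loadings of a unidimensional factor solution.
        lam = np.asarray(loadings, dtype=float)
        common = np.sum(lam) ** 2          # variance explained by the factor
        unique = np.sum(1.0 - lam ** 2)    # structural unique variances
        return common / (common + unique)  # structural total variance

    print(omega_structural([0.7, 0.6, 0.8, 0.5]))  # about 0.75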
Version of the program: 12.04.03 (4th September 2023)
This version implements:
-
References related to the output have been updated.
-
This version corrects some internal bugs. These bugs were reported by some users when analyzing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 12.04.02 (31st July 2023)
This version implements:
-
The progress information shown while the GOLDEN index is computed has been improved to report the percentage of the computation already performed.
-
This version corrects some internal bugs. These bugs were reported by some users when analyzing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 12.04.01 (19th May 2023)
This version implements:
-
Seneca Estimate: a procedure to estimate an optimal sample size. The proposal is based on an intensive simulation process in which the sample correlation matrix is used as the basis for generating datasets from a pseudo-population in which the parent correlation holds exactly. The criterion for determining the required size is a threshold that quantifies the closeness between the pseudo-population and the sample reproduced correlation matrices (see the sketch at the end of this list).
-
Non-inferential goodness-of-fit indices for nested comparisons are implemented in FACTOR. The indices included are Raykov’s effect-size measure and the GOLDEN index. The results of the simulation study suggested that proposing a single, omnibus cut-off or reference value for both indices, although feasible, would be too simplistic. Instead, we propose using an empirical threshold that takes into account the characteristics of the solutions being compared.
- This version corrects some internal bugs. These bugs were reported by some users when analyzing their own data. We are grateful to these users for helping us to improve Factor.
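The simulation logic of the Seneca Estimate can be sketched as follows. This is only an illustration of the idea described above: the function name, the candidate sizes, the number of replications, and the RMSR-type closeness criterion on the raw sample correlations are all our assumptions, whereas FACTOR's actual criterion compares reproduced correlation matrices.

    import numpy as np

    def estimate_sample_size(R, candidates=(100, 200, 400, 800),
                             threshold=0.05, reps=200, seed=0):
        rng = np.random.default_rng(seed)
        p = R.shape[0]
        off = ~np.eye(p, dtype=bool)
        for n in candidates:
            rmsr = []
            for _ in range(reps):
                # Draw a sample of size n from the pseudo-population in
                # which the parent correlation matrix R holds exactly.
                X = rng.multivariate_normal(np.zeros(p), R, size=n)
                Rs = np.corrcoef(X, rowvar=False)
                rmsr.append(np.sqrt(np.mean((Rs[off] - R[off]) ** 2)))
            if np.mean(rmsr) <= threshold:   # close enough on average
                return n
        return None  # no candidate size met the threshold

    R = np.array([[1.0, 0.5, 0.4], [0.5, 1.0, 0.3], [0.4, 0.3, 1.0]])
    print(estimate_sample_size(R))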
Version of the program: 12.03.02 (20th October 2022)
This version implements:
Gulliksen’s pool: A quick factor-analytic tool for the preliminary detection of inappropriate items in item analysis. Exploratory factor analysis is widely used for item analysis in the early stages of test development, usually with large pools of items. In this scenario, the presence of inappropriate or ineffective items can hamper the analysis, making it very difficult to correctly assess dimensionality and structure. To avoid, or greatly minimize, this (quite frequent) problem, we propose and implement a simple procedure designed to flag potentially problematic items before any particular factorial solution is specified (see the sketch at the end of this list). The procedure defines regions of item appropriateness and efficiency based on the combined impact of two prior item features: extremeness and consistency. It can be computed using the “PreFactor” button in the “Configuration” menu.
The test statistics for assessing model-data fit (like RMSEA or CFI) cannot be derived when the minimum fit function value is not available (as happens in MRFA). To overcome this limitation, we propose a chi-square type goodness-of-fit test statistic intended for situations in which the minimum fit function value is not available. Our statistic can actually be computed for most extraction methods (for example, ML or Robust ULS). We labeled the new statistic LOSEFER (an acronym of the authors’ names). The statistic is obtained empirically via intensive simulation based on a two-stage approach. It can be configured from the “Bootstrap for Robust Analysis” menu.
This version corrects some internal bugs. These bugs were reported by some users when analyzing their own data. We are grateful to these users for helping us to improve Factor.
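The note above gives no formulas for extremeness and consistency, so the following Python sketch is only illustrative of this kind of pre-factorial screening: it flags items whose means are too extreme for the response range or whose corrected item-total correlations are too low. The definitions and thresholds are our assumptions, not FACTOR's actual PreFactor rules.

    import numpy as np

    def flag_items(X, min_consistency=0.20, max_extremeness=0.90):
        X = np.asarray(X, dtype=float)
        lo, hi = X.min(), X.max()          # assumed response range
        extremeness = np.abs(X.mean(axis=0) - (lo + hi) / 2) / ((hi - lo) / 2)
        total = X.sum(axis=1)
        consistency = np.array([
            np.corrcoef(X[:, j], total - X[:, j])[0, 1]   # item-rest r
            for j in range(X.shape[1])
        ])
        return (extremeness > max_extremeness) | (consistency < min_consistency)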
Version of the program: 12.01.02 (22nd December 2021)
This version implements:
External variables can be used to compute validity studies and UNIVAL assessments. A new video tutorial explains how to prepare the data and load it in Factor. In the validity study, zero-order and disattenuated correlations are computed (see the sketch at the end of this list). The aim of UNIVAL is to assess essential unidimensionality based on external sources of information.
Item positions are described using the QIM (Quartile Ipsative Mean) of the items. The aim is to provide information about how well the set of items represents the total distribution of persons’ scores.
SOLOMON reports the S index just after loading the data.
This version corrects some internal bugs. These bugs were reported by some users when analyzing their own data. We are grateful to these users for helping us to improve Factor.
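For the validity study mentioned above, the classical disattenuation formula divides the observed correlation by the square root of the product of the two reliabilities. A minimal sketch:

    def disattenuated(r_xy, rel_x, rel_y):
        # Correlation corrected for measurement error in both variables.
        return r_xy / (rel_x * rel_y) ** 0.5

    print(disattenuated(0.40, 0.80, 0.70))  # about 0.53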
Version of the program: 11.05.01 (7th July 2021)
This version implements:
Kaiser’s Measure of Sampling Adequacy (MSA) at the single-variable level (Kaiser, 1970; Kaiser & Rice, 1974) is implemented, together with the Relative Difficulty index of the variables. These indices are systematically computed when assessing the quality of the correlation matrix to be analyzed with factor analysis. MSA values below .50 suggest that the item does not measure the same domain as the remaining items in the pool, and that it should therefore be removed (see the sketch at the end of this list). At the same time, for a normal-range test, an optimal pool of items should have a large spread of Relative Difficulty indices. These aspects should be taken into account when removing items from the pool; sometimes the conclusion is that new items should be added to the pool.
In the classical exploratory factor analysis (EFA) model, residuals are constrained to be uncorrelated. This release of FACTOR implements two classical methods for detecting correlated residuals (doublets), and two new ones based on the MORGANA approach. The basic idea of the MORGANA approach concerns the potential propagating effects of substantial doublets constrained to be zero; its principle is to minimize these effects in order to obtain clear change estimates when each doublet is or is not constrained to be zero. The classical methods implemented are fitted residuals and partial correlations. The new ones are the Expected Residual correlation direct Change index (EREC index) and the Expected commuNalIty DirEct change Index (ENIDE index). Threshold values for the four indices are obtained based on parallel analysis.
This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
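Kaiser's MSA can be computed from the correlation matrix alone; a minimal sketch (not FACTOR's code) follows: partial (anti-image) correlations are obtained from the inverse of R, and each variable's MSA compares its squared zero-order correlations with its squared partial correlations.

    import numpy as np

    def msa(R):
        S = np.linalg.inv(R)
        d = np.sqrt(np.diag(S))
        Q = -S / np.outer(d, d)          # partial (anti-image) correlations
        np.fill_diagonal(Q, 0.0)
        R0 = R - np.eye(R.shape[0])      # R with a zeroed diagonal
        num = np.sum(R0 ** 2, axis=0)
        return num / (num + np.sum(Q ** 2, axis=0))

    R = np.array([[1.0, 0.6, 0.5], [0.6, 1.0, 0.4], [0.5, 0.4, 1.0]])
    print(msa(R))   # values below .50 flag a variable for removal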
Version of the program: 11.04.02 (4th June 2021)
This version implements:
When a single-group dataset is loaded and the sample has at least 400 observations (i.e., the number of rows is larger than 399), two subsamples are computed using the Solomon method. The Solomon algorithm optimally splits the data into two equivalent halves and improves the representativeness of the subsamples (i.e., all possible sources of variance are enclosed in the subsamples). The aim is to give the researcher two subsamples on which to run different analyses. For example, the first subsample could be used to run a fully exploratory analysis based on a rotation that maximizes factor simplicity (like Promin), and the second subsample could be used to run a second analysis with a confirmatory aim, based on an oblique Procrustean rotation using a target matrix built as suggested by the outcome of the first, fully exploratory analysis. FACTOR allows the researcher to save the new dataset that includes the group variable, so that new analyses can be started from this file.
FACTOR now checks whether it is placed in a folder where it can write the output files.
It checks that missing values account for less than 30% of all values, and that rows containing missing values account for less than 30% of the rows (see the sketch below).
It checks whether the variables have variance larger than zero in the sample, but also in the bootstrap samples.
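A small sketch of the two 30% checks described above, assuming the missing-value code has already been recoded to NaN in a numeric data matrix:

    import numpy as np

    def missing_checks(X, max_fraction=0.30):
        miss = np.isnan(X)
        cells_ok = miss.mean() < max_fraction                # share of missing cells
        rows_ok = np.mean(miss.any(axis=1)) < max_fraction   # share of affected rows
        return cells_ok and rows_ok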
Version of the program: 11.02.04 (22nd April 2021)
This version implements:
After almost 20 years of developing FACTOR, the internal menus and technical details have been updated to some extent. With this release, we have moved our C language code to a new Microsoft Visual Studio project, and we have redesigned all the menus from scratch. However, we aimed to keep the same design. Let’s see if the new project manages to last another 20 years!
New help buttons have been added throughout FACTOR’s menus. Over the years we have implemented so many new methods related to exploratory factor analysis that the utility of many of them may go unnoticed by applied researchers. Now FACTOR itself aims to explain every method implemented in plain language, so that researchers can understand the methods, when to use them, and how to interpret them without needing to read the original paper in which they were proposed. The help explanations also include information on the format of the files needed to run FACTOR.
Frequently, researchers analyze samples that include different groups of participants, such as women and men. Their aim is to compute a factor analysis for the overall sample, but it is also not unusual for them to need separate factor analyses for the different groups. The new release makes it possible to manage different groups of participants and to decide which groups are included in the factor analysis at hand. This feature is a first step towards much more interesting analyses, such as factor invariance analyses. We expect to be able to develop and implement such analyses in the near future.
DIANA scores are computed in the present release. The aim is to estimate participants’ scores on latent variables using an optimal addition of their item responses.
We have started a new series of short video tutorials that explain how to use FACTOR and how to carry out different analyses. At the moment, only two are ready, but we expect to add many more in the near future.
This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.10.03 (7th April 2020)
This version implements:
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.10.02 (2nd March 2020)
This version implements:
-
In the context of computing participants’ scores, a new procedure named DIANA is implemented. DIANA helps to select the optimal set of items that must be added in order to compute individuals' scores as unit-weight sum scores. The procedure aims to maximize fidelity and correlational accuracy in the context of multiple factor solutions intended for ordered-categorical responses. The Ordinal Coefficient of Fidelity (O-COF) is a direct index for assessing the extent to which the raw sum scores obtained according to DIANA are good proxies for the latent factors they intend to measure. When the factor model holds, the accuracy of the sum scores as measures of the true latent scores increases with the number of items and the signal/noise ratio (i.e., larger loadings and smaller residual variances). Both O-COF (for the ordinal model) and fidelity can be interpreted as correlations between the chosen scores and the factor they intend to measure, and their squared values can be interpreted as reliability coefficients (see the sketch at the end of this list). An O-COF (or fidelity) value equal to or larger than .90 suggests acceptable measurement accuracy; this cut-off is roughly equivalent to a reliability coefficient of about .80.
-
Sweet smoothing: The amount of variance destroyed per variable is reported.
-
Some bibliographical references have been updated.
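For the linear case, the fidelity of a unit-weight sum score can be sketched from the loadings and the inter-item correlation matrix; the ordinal O-COF additionally requires the graded model and is not reproduced here. The item selection below is hypothetical, standing in for the set that a DIANA-like procedure would choose.

    import numpy as np

    def fidelity(loadings, R, selected):
        idx = np.asarray(selected)
        lam = np.asarray(loadings)[idx]
        var_sum = np.sum(R[np.ix_(idx, idx)])    # variance of the sum score
        return np.sum(lam) / np.sqrt(var_sum)    # corr(sum score, factor)

    lam = [0.7, 0.6, 0.8, 0.3]
    R = np.array([[1.00, 0.42, 0.56, 0.21],
                  [0.42, 1.00, 0.48, 0.18],
                  [0.56, 0.48, 1.00, 0.24],
                  [0.21, 0.18, 0.24, 1.00]])
    print(fidelity(lam, R, [0, 1, 2]))   # >= .90 would suggest accuracy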
Version of the program: 10.10.01 (15th October 2019)
This version implements:
-
Least-squares exploratory factor analysis based on tetrachoric/polychoric correlations is a robust, defensible and widely used approach for performing item analysis. A relatively common problem in this scenario, however, is that the inter-item correlation matrix may fail to be positive definite. To correct non-positive definite correlation matrices, FACTOR implements smoothing methods. The basic principle of a smoothing correction is to change the relative weight of the diagonal elements of the correlation matrix with respect to the off-diagonal elements; the challenge is to do so while destroying as little variance as possible in the process. In the present release of FACTOR, Ridge and Sweet smoothing methods have been implemented. Note that Ridge smoothing is a linear smoothing method that impacts all the variables in the correlation matrix. To prevent this, we propose Sweet smoothing: the aim of this non-linear smoothing method is to focus the smoothing only on the problematic variables (see the sketch at the end of this list). The researcher can choose between these two smoothing methods when analyzing a dataset.
-
In previous releases of FACTOR, the KMO index was only reported based on Pearson correlations (even when analyzing ordinal data). Now that more careful smoothing methods are available in FACTOR, the KMO index is also reported based on tetrachoric/polychoric correlation matrices.
-
McDonald's linear and ordinal omega reliability coefficients are implemented.
-
When computing principal component analysis, participants’ scores on the components are handled carefully. For example, they can now be stored in a separate file, and they are reported as “component scores” rather than “factor scores”.
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
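The principle behind ridge smoothing can be sketched as follows (an illustration of the idea, not FACTOR's algorithm): the diagonal is weighted up until no eigenvalue is negative, and the matrix is then rescaled back to a correlation metric, which is why every variable is affected. Sweet smoothing, which targets only the problematic variables, is not reproduced here.

    import numpy as np

    def ridge_smooth(R, eps=1e-6):
        p = R.shape[0]
        k = 0.0
        while np.linalg.eigvalsh(R + k * np.eye(p)).min() < eps:
            k += 0.01                            # increase diagonal weight
        return (R + k * np.eye(p)) / (1.0 + k)   # rescale so diag == 1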
Version of the program: 10.9.02 (2nd May 2019)
This version implements:
-
The implementation of the bifactor model (PEBI) has been improved. Now the model can be computed without the rotation of the group factors. In addition, it has been observed that when a large number of group factors are defined by a low number of variables (which probably show low saturations on the group factors), the PEBI outcome can frequently converge on local minima. To avoid this, a large number of random starts (i.e., 100) is used. To decide on the final solution, the reported solution is the one in which the worst-defined group factor explains the largest amount of variance. This is done independently of the rotation criterion used, so that the bifactor solution based on PEBI is more consistent across different analyses of the same data.
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.9.01 (11th April 2019)
This version implements:
Diagonally-weighted factor rotation. As with weighted robust schemes in the extraction stage of factor analysis, robust rotation is expected to be particularly advantageous when the sampling errors of the bivariate correlations differ considerably and these errors can be estimated with reasonable accuracy. Different sampling errors are more likely to occur when the input correlations are tetrachoric or polychoric, because in this case the correlation matrix is estimated not jointly but pairwise. To compute a diagonally weighted factor rotation with FACTOR, the user has to select (1) the robust factor analysis option, and (2) one of these three rotation methods: Promin, Weighted Varimax, or Weighted Oblimin. The output of the program informs the researcher that a robust rotation has been computed.
-
Conditional reliability function based on polychoric correlations. It consists of a graphic display of the conditional reliabilities against the estimated factor levels, together with a minimally acceptable cut-off value (for example, a conditional reliability of 0.8). Graphically, this cut-off is a horizontal line parallel to the factor scores axis (see the sketch at the end of this list). In order to compute the reliability function with optimal precision, the number of nodes used to compute EAP scores must be large. In FACTOR, at least 20 nodes are recommended.
-
A frequent source of difficulty appears when the tetrachoric/polychoric correlation matrix turns out to be non-positive definite, which means that one or more of its eigenvalues are negative. To solve this, a smoothing procedure can be applied. However, if the negative values are large, most of the information in the correlation matrix is destroyed. When this is the case, the new release of Factor prints a detailed output so that the user has the maximum information available to try to solve the difficulty. In addition, Factor refuses to analyze smoothed correlation matrices if more than 60% of the information has been destroyed during the smoothing procedure. Finally, the smoothing algorithm now deletes smaller amounts of variance in each iteration (0.001/sqrt(N)) in order to preserve a bit more variance.
-
Some bibliographical references have been updated, and DOI numbers are included for all the references.
-
Confidence intervals are computed for the added-value indices.
-
This release of Factor corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor. For example, in this release the user has more efficient control of the variables to be included in (or excluded from) the analysis.
-
A literature corner has been added in the Documentation section.
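A common conversion (assumed here; it presupposes a latent trait with unit variance) takes the test information I(theta) to a conditional reliability I/(I+1), so the 0.8 cut-off corresponds to I(theta) = 4. A minimal sketch with illustrative information values:

    import numpy as np

    def conditional_reliability(information):
        info = np.asarray(information, dtype=float)
        return info / (info + 1.0)

    theta = np.linspace(-3, 3, 7)
    info = np.array([1.2, 2.5, 4.8, 6.0, 4.8, 2.5, 1.2])  # illustrative I(theta)
    rel = conditional_reliability(info)
    print(theta[rel >= 0.80])   # factor levels measured acceptably well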
Version of the program: 10.8.04 (22nd July 2018)
This version implements:
- This version corrects some internal bugs that appeared with the last update of the Windows operating system. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.8.03 (7th May 2018)
This version implements:
-
When polychoric correlations are computed, the user can decide whether to estimate EAP factor scores based on the linear model (faster, but less accurate) or on the graded model. In the case of the graded model, the user can decide the number of nodes to be used: the larger the number of nodes, the more precise (and time-consuming) the factor score estimates (see the sketch at the end of this list). The default number of nodes is 20, and a maximum of 100 is allowed. Please note that different estimates of ORION reliability can be obtained when using the linear and the graded model to estimate EAP factor scores. All these features can be configured in the “Other specifications of factor model” menu.
-
FACTOR expects the data file to be encoded as ANSI. In the present release, the user can indicate that UTF-8 encoding (the default encoding when exporting data from SPSS) has been used. Please note that UNICODE encoding is not (yet) allowed; maybe someday.
-
When computing person reliabilities, individuals with reliability estimates below the threshold value of .10 are marked as inconsistent responders.
-
The robust chi-square computed to assess the goodness-of-fit of models based on covariance dispersion matrices has been corrected.
-
This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
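EAP scoring by quadrature can be sketched for dichotomous items under a two-parameter logistic model (a simple stand-in for FACTOR's graded model; the item parameters below are hypothetical). The grid of nodes is where the precision/speed trade-off described above enters:

    import numpy as np

    def eap_score(responses, a, b, n_nodes=20):
        theta = np.linspace(-4, 4, n_nodes)        # quadrature nodes
        prior = np.exp(-0.5 * theta ** 2)          # standard normal prior
        # P(correct) for every node x item under the 2PL model.
        p = 1.0 / (1.0 + np.exp(-np.outer(theta, a) + a * b))
        like = np.prod(np.where(responses, p, 1 - p), axis=1)
        post = like * prior
        return np.sum(theta * post) / np.sum(post)

    a = np.array([1.2, 0.8, 1.5])   # discriminations (hypothetical)
    b = np.array([-0.5, 0.0, 0.7])  # difficulties (hypothetical)
    print(eap_score(np.array([1, 1, 0]), a, b, n_nodes=20))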
Version of the program: 10.8.01 (10th January 2018)
This version implements:
-
Measures initially designed to be single-trait often yield data that are compatible with both an essentially unidimensional factor-analysis solution and a correlated-factors solution. For these cases, new indices are implemented that aim to provide information for deciding which of the two solutions is the most appropriate and useful. The procedures implemented are a factor-analysis extension of the added-value procedures initially proposed for subscale scores in educational testing. They can be selected in FACTOR as "Added value of multiple factor score estimates" in the "Other specifications of factor model" menu.
Pratt's importance measures. Wu & Zumbo (2017) proposed computing importance measures that indicate the proportion of the variation in each observed indicator that is attributable to each factor (an interpretation analogous to the effect-size measure eta-squared). The importance measures can further be transformed into eta correlations: a measure of the unique directional correlation of each factor with an observed indicator (see the sketch at the end of this list). These indices can be selected in FACTOR as "Display eta-squared and Pratt's importance measures" in the "Other specifications of factor model" menu.
- Factor score estimates can be saved in a separate text file for further analyses.
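Pratt's importance measures for an oblique solution can be sketched from the pattern matrix and the factor correlations: for each indicator, the pattern loading times the structure loading, divided by the indicator's R-squared (its communality). A minimal illustration with hypothetical loadings; the transformation to eta correlations is omitted:

    import numpy as np

    def pratt_measures(P, Phi):
        S = P @ Phi                    # structure matrix
        R2 = np.sum(P * S, axis=1)     # communality of each indicator
        return (P * S) / R2[:, None]   # each row sums to 1

    P = np.array([[0.7, 0.1], [0.6, 0.2], [0.1, 0.8]])
    Phi = np.array([[1.0, 0.3], [0.3, 1.0]])
    print(pratt_measures(P, Phi))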
Version of the program: 10.7.01 (22nd November 2017)
This version implements:
-
Objectively Refined Target Matrix (RETAM). When a target matrix is proposed by the user, RETAM helps to refine it by freeing and fixing elements of the target matrix. RETAM risks capitalizing on chance (i.e., the factor solution is fitted to the sample at hand, not to the population). In the RETAM cross-validation study, the sample was split into two random halves: the RETAM procedure was applied in the first subsample to obtain a refined target matrix, and this refined target matrix was then taken as a fixed target (without further refinement) for the second subsample. If the rotated loading matrix in the second subsample is congruent with the rotated loading matrix in the first subsample, the researcher can be confident that the final solution has not merely been fitted to the sample data, but also reflects the population (see the sketch at the end of this list).
-
The conditional reliability function reports the statistical information corresponding to each score level of the latent trait; it is interpreted like the test information function in an IRT context. The graphic shows (1) the conditional reliabilities against the factor score estimates as '*' marks, and (2) the cut-off value of 0.80 as a horizontal dotted line.
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
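The congruence check in the cross-validation logic described above is typically Tucker's coefficient, computed column-wise between the two rotated loading matrices. A minimal sketch (the loading values are hypothetical):

    import numpy as np

    def tucker_congruence(A, B):
        num = np.sum(A * B, axis=0)
        den = np.sqrt(np.sum(A ** 2, axis=0) * np.sum(B ** 2, axis=0))
        return num / den   # values near 1 indicate factor equivalence

    A = np.array([[0.70, 0.10], [0.60, 0.00], [0.10, 0.80]])
    B = np.array([[0.68, 0.12], [0.63, 0.05], [0.15, 0.75]])
    print(tucker_congruence(A, B))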
Version of the program: 10.6.01 (13th November 2017)
This version implements:
-
Our new procedure for fitting a pure exploratory bifactor solution has been refined so that it can manage bifactor models with a single group factor. In our bifactor proposal the general factor is orthogonal to the group factors, but the loadings on the group factors can satisfy any orthogonal or oblique rotation criterion. The proposal combines Procrustes rotations with analytical rotations. The basic input is a semi-specified target matrix that can be (a) defined by the user, (b) obtained by using Schmid-Leiman orthogonalization, or (c) automatically built from a conventional unrestricted solution based on a prescribed number of factors. In order to compute an exploratory bifactor model, the user has to: (a) specify the number of group factors, (b) check “Exploratory Bifactor Model” in the “Other specifications of factor model” menu, and (c) select the rotation criterion for the group factors. In the output, the general factor is labeled GF.
Version of the program: 10.5.03 (22nd June 2017)
This version implements:
- Two new indices to assess the quality and effectiveness of factor score estimates: the sensitivity ratio and the expected percentage of true differences. The sensitivity ratio (SR) can be interpreted as the number of different factor levels that can be differentiated on the basis of the factor score estimates. The expected percentage of true differences (EPTD) is the estimated percentage of differences between the observed factor score estimates that are in the same direction as the corresponding true differences.
Version of the program: 10.5.02 (29th May 2017)
This version implements:
-
A new procedure for fitting a pure exploratory bifactor solution in which the general factor is orthogonal to the group factors, but the loadings on the group factors can satisfy any orthogonal or oblique rotation criterion. The proposal combines Procrustes rotations with analytical rotations. The basic input is a semi-specified target matrix that can be (a) defined by the user, (b) obtained by using Schmid-Leiman orthogonalization, or (c) automatically built from a conventional unrestricted solution based on a prescribed number of factors. In order to compute an exploratory bifactor model, the user has to: (a) specify the number of group factors, (b) check “Exploratory Bifactor Model” in the “Other specifications of factor model” menu, and (c) select the rotation criterion for the group factors. In the output, the general factor is labeled GF.
-
Confidence intervals for ORION reliabilities based on bootstrap sampling techniques.
-
Equivalence testing for linear ML factor analysis (Yuan, Chan, Marcoulides, & Bentler, 2016).
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.5.01 (20th April 2017)
This version implements:
- Robust goodness-of-fit indices are computed based on (1) the mean-corrected chi-square statistic, (2) the mean- and variance-corrected chi-square statistic (Satterthwaite, 1941), and (3) the mean- and variance-corrected chi-square statistic estimated as proposed by Asparouhov and Muthén (2010).
- Weighted Root Mean Square Residual (WRMR) index is computed in order to assess the model residuals.
- New person fit indices are implemented: the Personal Correlation (rp) and the Weighted Mean-Squared Index (WMSI) are computed using optimal threshold values to detect aberrant responses (Ferrando, Vigil-Colet, & Lorenzo-Seva, 2017).
- A new set of indices of factor determinacy, construct replicability and closeness to unidimensionality, aimed at assessing the strength and quality of the solution beyond pure model-data fit.
- A new menu to configure advanced indices and computations.
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 10.4.01 (21st October 2016)
This version implements:
- Bootstrap sampling in order to compute robust factor analysis. Bootstrap confidence intervals are computed for a large number of indices.
- Implementation of tetrachoric/polychoric correlations based on a unified Bayes modal estimation (MAP) approach.
- Robust exploratory factor analysis based on the asymptotic variance/covariance matrix of the correlation coefficients, computed using (a) analytical estimates or (b) bootstrap sampling.
- Implementation of Robust Unweighted Least Squares factor analysis, Robust exploratory Maximum Likelihood factor analysis, and Diagonally Weighted Least Squares factor analysis.
- The maximum number of factors that can be retained has been increased, up to the limit of at least two variables per factor.
- BIC dimensionality test: Schwarz’s Bayesian Information Criterion is computed for models with different numbers of factors, so that the model with the optimal number of factors (i.e., the model with the lowest BIC value) is detected (see the sketch at the end of this list).
- The user is allowed to disable all the procedures to assess the number of factors/components to be retained.
- New person fit indices are implemented: the Personal Correlation (rp) and the Weighted Mean-Squared Index (WMSI) are computed using optimal threshold values to detect aberrant responses (Ferrando, Vigil-Colet, & Lorenzo-Seva, 2017).
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor. In addition, the internal computing has been redesigned to increase computing speed: for example, the polychoric correlation matrix is only computed once in each analysis session (even if different analyses are carried out).
- Please note that Windows XP is no longer supported.
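The BIC scan can be sketched with any maximum-likelihood factor extraction; the version below uses scikit-learn's FactorAnalysis as a stand-in for FACTOR's extraction, and the parameter count (loadings plus uniquenesses, minus the rotational indeterminacy) is a standard textbook assumption:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def bic_scan(X, max_factors=5):
        n, p = X.shape
        bics = {}
        for k in range(1, max_factors + 1):
            fa = FactorAnalysis(n_components=k).fit(X)
            loglik = n * fa.score(X)                 # total log-likelihood
            n_params = p * k + p - k * (k - 1) // 2  # loadings + uniquenesses
            bics[k] = -2.0 * loglik + n_params * np.log(n)
        return min(bics, key=bics.get), bics         # lowest BIC wins

    X = np.random.default_rng(0).normal(size=(200, 6))
    print(bic_scan(X, max_factors=3))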
Version of the program: 10.3.01 (7th July 2015)
This version implements:
- FACTOR is now compiled to run on 64-bit Windows. This feature makes it possible to analyse large datasets. We successfully tested FACTOR with a dataset of 10,000 cases, 500 variables, and 3 extracted factors. The user can decide which release (32-bit or 64-bit) to download.
- Missing values in the dataset are allowed. Multiple imputation in exploratory factor analysis is implemented based on the Lorenzo-Seva & Van Ginkel (2015) proposal. Missing values must be identified using a numerical code.
- The implementation of polychoric correlation has been polished to allow convergence even when some categories of a particular variable are never used.
- This version corrects some internal bugs. These bugs were reported by some users when analysing their own data. We are grateful to these users for helping us to improve Factor.
Version of the program: 9.30.1 (January, 2015)
This version corrects an internal error in the management of computer memory. This error was observed by some users who were analyzing large datasets. We are grateful to these users for helping us to improve Factor.
Version of the program: 9.20 (February, 2013)
This version implements:
- Item Response Theory parameterization of factor solutions based on discrete variables.
- Expected a-posteriori (EAP) estimation of latent trait scores in IRT models.
- Semi-confirmatory factor analysis based on orthogonal and oblique rotation to a (partially) specified target.
- Assessment of the congruence between the target and the rotated loading matrix.
Version of the program: 8.10 (April, 2012)
This version implements:
- Greatest lower bound (glb) to reliability, and McDonald's Omega reliability index.
- GFI and AGFI are computed excluding the diagonal values of the variance/covariance matrix.
- Algorithm 462: Bivariate Normal Distribution by Donnelly (1973) is used to compute the polychoric correlation matrix. In addition, the polychoric correlation matrix is computed with more demanding convergence values.
- The tetrachoric correlation matrix is computed based on the AS116 algorithm. This algorithm is more accurate than the algorithm provided in previous versions of the program.
- Technical revisions to solve different errors that halted the analysis and that were reported by users.
Version of the program: 8.02 (March, 2011)
This version implements:
- A more user-friendly data reading implementation. ASCII data files can be separated using different characters, and missing values are eliminated from the data.
- Variable labels are allowed.
- The output data file can be specified.
- New analyses are implemented: Optimal Parallel Analysis, Hull method, and Person fit indices.
- Some analyses have been improved. For example, the polychoric correlation matrix is checked to be positive definite and smoothed (if necessary), and non-convergent coefficients are replaced by the corresponding Pearson coefficients.
- Technical revisions to solve different errors that halted the analysis and that were reported by users.
Version of the program: 7.00 (January, 2007)
This version implements:
- Univariate mean, variance, skewness, and kurtosis
- Multivariate skewness and kurtosis (Mardia, 1970)
- Var charts for ordinal variables
- Polychoric correlation matrix with optional Ridge estimates
- Structure matrix in oblique factor solutions
- Schmid-Leiman second-order solution (1957)
- Mean, variance and histogram of fitted and standardized residuals. Automatic detection of large standardized residuals.
In addition, a bug that halted the program during the execution has been detected and corrected.
Version of the program: 6.02 (June, 2006)
This version implements PA-MBS, an extension of Parallel Analysis that generates random correlation matrices using marginally bootstrapped samples (Lattin, Carroll, & Green, 2003); see the sketch below.
In addition, indices of asymmetry and kurtosis are computed for the variables. Inspecting these indices helps to decide whether polychoric correlations should be computed when ordinal variables are analyzed.
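The marginally bootstrapped variant of Parallel Analysis can be sketched as follows: each column of the data is resampled independently, which preserves the marginal distributions while destroying the correlations, and the observed eigenvalues are compared against a percentile of the random ones.

    import numpy as np

    def pa_mbs(X, reps=500, percentile=95, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        eig = np.empty((reps, p))
        for r in range(reps):
            # Resample each column independently (marginal bootstrap).
            Xb = np.column_stack([rng.choice(X[:, j], size=n)
                                  for j in range(p)])
            eig[r] = np.linalg.eigvalsh(np.corrcoef(Xb, rowvar=False))[::-1]
        observed = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        thresh = np.percentile(eig, percentile, axis=0)
        k = 0
        while k < p and observed[k] > thresh[k]:   # leading exceedances
            k += 1
        return k   # advised number of dimensions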
Version of the program: 6.01 (March, 2005)
This version implements the selection of variables to be included in or excluded from the analysis.