Factor Analysis - Medical University of South Carolina

Factor Analysis
Liz Garrett-Mayer, PhD
Department of Public Health Sciences, Division of Biostatistics & Bioinformatics
Biostatistics Shared Resource, Hollings Cancer Center
Cancer Control Journal Club, March 3, 2016

Motivating Example: Goals of the paper
1. See whether the previously defined measurement model of hopelessness in advanced cancer fits this sample. Confirmatory Factor Analysis.
2. Describe the factor structure in the two subpopulations (curative and palliative treatment). Exploratory Factor Analysis.
3. Evaluate the stability of the factor structure: does it stay the same after 12 months? Confirmatory Factor Analysis.

(Exploratory) Factor Analysis
- A data reduction tool.
- Removes redundancy or duplication from a set of correlated variables.
- Represents the correlated variables with a smaller set of derived variables (also called factors).

- Factors are formed so that they are relatively independent of one another.
- Two types of variables: latent variables (the factors) and observed variables (also called manifest variables, or items).

Examples of latent variables: diet, air pollution, personality, customer satisfaction, depression, quality of life.

Some Applications of Factor Analysis
1. Identification of underlying factors: clusters variables into homogeneous sets, creates new variables (i.e., factors), and allows us to gain insight into categories.
2. Screening of variables: identifies groupings that allow us to select one variable to represent many; useful in regression (recall collinearity).

3. Summary: allows us to describe many variables using a few factors.
4. Clustering of objects: helps us put objects (people) into categories depending on their factor scores.

"Perhaps the most widely used (and misused) multivariate [technique] is factor analysis. Few statisticians are neutral about this technique. Proponents feel that factor analysis is the greatest invention since the double bed, while its detractors feel it is a useless procedure that can be used to support nearly any desired interpretation of the data. The truth, as is usually the case, lies somewhere in between. Used properly, factor analysis can yield much useful information; when applied blindly, without regard for its limitations, it is about as useful and informative as Tarot cards. In particular, factor analysis can be used to explore the data for patterns, confirm our hypotheses, or reduce the many variables to a more manageable number." -- Norman & Streiner, PDQ Statistics

Exploratory Factor Analysis
Takes a set of variables thought to measure an underlying latent variable and:

- Determines which ones hang together.
- Identifies how many dimensions there are to the latent variable of interest.
- Determines which are the strongest variables (which ones contain most of the information), and which ones might be removed, either because they are redundant or because they don't hang with the other variables.

One of the primary goals of factor analysis is often to identify a measurement model for a latent variable. This includes:
- Identifying the items to include in the model.
- Identifying how many factors there are in the latent variable (i.e., the dimensionality of the latent variable).
- Identifying which items are associated with which factors.

In our example: hopelessness, as measured by the Beck Hopelessness Scale.

Goals of this EFA
- What structure is there to hopelessness in these patient populations?
- Can we come up with components of hopelessness? If so, what do they look like?
- How do we do this?

Factor analysis is based PURELY on the correlation matrix (or covariance matrix) of your items. It searches for commonality among the items based on their correlations. Think about it: if the items are all measuring the same trait, they should be correlated with each other. The correlation patterns can be used to see which of the items cluster together, which don't, and how much of each item is simply noise.

Note about Likert scale variables: they are very noisy! Correlation matrices are better suited to truly continuous variables.

Graphically, a two-factor EFA (with 7 items in the scale):

[Path diagram: two factors, F1 and F2, each with arrows pointing to the observed items y1 through y7.]

Mathematically, the two-factor EFA (with 20 items in the scale):

x_{1i} = λ_{11} F_{1i} + λ_{12} F_{2i} + e_{1i}
x_{2i} = λ_{21} F_{1i} + λ_{22} F_{2i} + e_{2i}
...
x_{20,i} = λ_{20,1} F_{1i} + λ_{20,2} F_{2i} + e_{20,i}

Interpretations:
- F1 and F2 are the latent variables (e.g., F_{1i} is the value of the ith person's F1).
- The λ's are called loadings, and each one represents, in a loose sense, the correlation between an item from the scale and the factor.
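To make these equations concrete, here is a minimal simulation sketch (not from the paper): it generates data from a two-factor model for 7 items under the usual assumptions described on the next slides (independent standard-normal factors, independent errors). The sample size and loading values are made up purely for illustration; Python with numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 500, 7, 2            # subjects, items, factors (toy sizes, not from the paper)

# Made-up loading matrix Lambda (p x k): row j holds (lambda_j1, lambda_j2)
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.1],
                   [0.6, 0.0],
                   [0.1, 0.7],
                   [0.0, 0.8],
                   [0.0, 0.6],
                   [0.2, 0.5]])

F = rng.standard_normal((n, k))           # latent factors: mean 0, variance 1, independent
psi = 1 - (Lambda ** 2).sum(axis=1)       # error variances chosen so each item has variance ~1
E = rng.standard_normal((n, p)) * np.sqrt(psi)

X = F @ Lambda.T + E                      # x_ji = lambda_j1*F_1i + lambda_j2*F_2i + e_ji
print(np.corrcoef(X, rowvar=False).round(2))   # items that share a factor are correlated
```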

Graphically, the two-factor EFA again (with 7 items in the scale):

[Path diagram: F1 and F2 with arrows to items y1 through y7; each arrow is labeled with its loading (λ11, λ21, λ31, ..., λ71 for F1 and λ12, λ22, ..., λ72 for F2), and each item has its own error term e1 through e7.]

Some statistical stuff
Without additional assumptions, the model would be unidentifiable and also hard to interpret.

Assumptions:
- F1 and F2 are statistically independent (uncorrelated) in most implementations.
- F1 and F2 are each normally distributed with mean 0 and variance 1.
- Conditional on the latent variables, the error terms are independent.

Spangenberg Paper
Recruited participants as part of a prospective observational study investigating meaning-focused coping and mental health in cancer patients.

732 eligible adult cancer patients receiving treatment with curative or palliative intention, in inpatient and outpatient cancer care facilities in Northern Germany, were asked to participate. At baseline, 315 patients participated. At follow-up, 158 could be reassessed at 12 months.

Results from the Beck Hopelessness Scale, curative treatment group (n=145)
Factor 1 comprises items reflecting mainly pessimistic/resigned beliefs (e.g. Item 12), whereas Factor 2 especially contains items reflecting positive beliefs explicitly referring to the future (e.g. Item 5). It is noteworthy that Factor 2 solely includes positively worded items, whereas Factor 1 includes negatively worded items only.
Factor 1 includes Items 2, 9, 12, 14, 16, 17, and 20 (Cronbach's alpha: curative sample, 0.88; palliative sample, 0.85). Factor 2 includes Items 5, 6, 8, 10, 15, and 19.

Let's reverse: why two factors?
How do you figure out how many factors there are? That is, what is the dimensionality of the latent variable? For a dataset with, let's say, 20 variables, fit a principal components analysis (PCA, which is, loosely, a kind of factor analysis).
a) The PCA creates a matrix of weights from the data, from which you can generate composite variables.
b) The weights are chosen so that the 1st component (i.e., a weighted average of the variables) explains the maximum amount of variance in the variables.

c) The weights for the 2nd component are chosen to explain the maximum amount of the variance remaining in the data AFTER the 1st component has been computed.
d) And so on for the remaining 18 (20 - 2 = 18) components.

What? This isn't as abstract as it sounds. There are scales out there that use this kind of component or composite-variable approach. Example: z = 0.5*x1 + 0.75*x2 + x3. The PCA approach just finds the optimal weights to maximize the variability explained.

Eigenvalues
Each of the principal components (i.e., eigenvectors) has a corresponding value called an eigenvalue, which represents the amount of variability explained by that component. The sum of the eigenvalues is equal to the number of variables in your analysis (e.g., for the Spangenberg paper, the sum of the eigenvalues is 20). There are several rules of thumb for using the eigenvalues to help determine the number of components or factors to keep:
1. Keep as many components as have eigenvalues greater than 1.
2. Use a scree plot to determine how many components to keep.
3. Preset a threshold for percent variance explained and keep enough components to explain a sufficient amount of variance.
Think about what it means to have an eigenvalue of 1 or greater: the component explains at least as much variance as a single original variable does.

Scree plot examples
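As a sketch of how the eigenvalues and a scree plot could be computed in practice (assuming Python with numpy, pandas, and matplotlib; the DataFrame of item responses is hypothetical, e.g. the simulated X from the earlier sketch or your own item data):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# df: one column per item (here built from the simulated X above; substitute real item data)
df = pd.DataFrame(X, columns=[f"item{j+1}" for j in range(X.shape[1])])

R = df.corr().to_numpy()                          # item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]    # eigenvalues, largest first

print("Eigenvalues:", eigvals.round(2))
print("Number > 1:", int((eigvals > 1).sum()))    # rule of thumb #1

# Rule of thumb #2: scree plot -- look for the "elbow"
plt.plot(range(1, len(eigvals) + 1), eigvals, marker="o")
plt.axhline(1, linestyle="--")
plt.xlabel("Component")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```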

From Spangenberg: Initially, five eigenvalues were >1 in the curative sample (7.42, 2.04, 1.53, 1.25, and 1.01). In the palliative sample, four eigenvalues were >1 (7.13, 1.96, 1.43 and 1.14). The scree plot suggested a two factor structure in both patient groups.

Note: THESE SCREE PLOTS ARE INCOMPLETE.

In terms of percent variance explained
Divide an eigenvalue by the number of variables in the model and you get the incremental variance explained by that component. You can also calculate the cumulative variance explained by, for example, the first 3 components (a rough calculation for the two-factor solution is sketched below).

Two-factor solution
Explains about 50% of the variance in the data. Thus, 50% of the information in the data is discarded when only two components are retained. However, there were TWENTY variables to start with, and it's quite possible that a lot of what is discarded is noise (these are Likert scale variables).
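A quick check of these numbers, using only the five curative-sample eigenvalues quoted above (the remaining eigenvalues are not reported in the excerpt):

```python
import numpy as np

# First five eigenvalues reported for the curative sample (of 20 total)
eigvals = np.array([7.42, 2.04, 1.53, 1.25, 1.01])
p = 20                                    # number of items in the BHS

incremental = eigvals / p                 # proportion of variance per component
cumulative = np.cumsum(incremental)

for k, (inc, cum) in enumerate(zip(incremental, cumulative), start=1):
    print(f"Component {k}: {inc:.1%} incremental, {cum:.1%} cumulative")
# The first two components give about 0.371 + 0.102 = 0.473, i.e. roughly 50% of the variance.
```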

How much is noise? How much is signal? Glass half-full: with only TWO components, you can explain as much variance as almost TEN of the original variables would (on average).

Pick the number of factors
Based on the examination of the eigenvalues, determine the number of factors you want to retain in a factor analysis, then fit a factor analysis where you pre-specify that number of factors.

Interpretable?
The problem with PCA and an unrotated factor analysis is that the factors are hard to interpret. The first component or factor usually has about equal loadings (+/- depending on the direction of the item) for all items. The second component may have some high and some low loadings, but it is usually not very interpretable. Solution? Factor rotation.

Rotation: More statistical stuff
In principal components, the first factor describes most of the variability. After choosing the number of factors to retain, we want to spread the variability more evenly among the factors. To do this we rotate the factors: redefine them so that loadings on the various factors tend to be either very high (-1 or 1) or very low (0);

intuitively, this makes sharper distinctions in the meanings of the factors. We use factor analysis for rotation, NOT principal components!

How can we do this? Doesn't it change our answer? Statistically, it doesn't: the percent variance explained is the same, etc. For a factor analysis solution to be calculated, there have to be constraints, or assumptions. Some of them are:
- Factors are normally distributed.
- Factors have mean 0 and variance 1.
- In the initial solution, another constraint is that the first factor explains the most variance, the 2nd factor explains the next most conditional on the first factor, etc.
What if we change this last constraint and instead impose a different one to make the model identifiable? That is, focus on shrinking the loadings toward 0 or +/-1?

Rotation types
- Orthogonal: maintains independent (i.e., uncorrelated) factors.
- Oblique: allows some dependence between factors. Results are usually not terribly different from an orthogonal rotation, but loadings are often shrunk more towards 0 or 1 (or -1). A sketch of both rotation types in code follows below.
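A sketch of fitting a pre-specified two-factor solution with each type of rotation, assuming the third-party factor_analyzer Python package (not mentioned in the slides) and the item DataFrame df from the earlier sketches:

```python
# Assumes the third-party `factor_analyzer` package; df holds the item responses.
from factor_analyzer import FactorAnalyzer

# Orthogonal rotation: factors kept uncorrelated
fa_orth = FactorAnalyzer(n_factors=2, rotation="varimax")
fa_orth.fit(df)

# Oblique rotation: factors allowed to correlate
fa_obl = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa_obl.fit(df)

print(fa_orth.loadings_.round(2))   # loadings after orthogonal rotation
print(fa_obl.loadings_.round(2))    # loadings after oblique rotation
```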

Spangenberg et al. assumed that the factors were likely to be correlated, so they used an oblique rotation.

Aside: Principal factors vs. principal components
"The defining characteristic that distinguishes between the two factor analytic models is that in principal components analysis we assume that all variability in an item should be used in the analysis, while in principal factors analysis we only use the variability in an item that it has in common with the other items. In most cases, these two methods usually yield very similar results. However, principal components analysis is often preferred as a method for data reduction, while principal factors analysis is often preferred when the goal of the analysis is to detect structure." (http://www.statsoft.com/textbook/stfacan.html)

Returning to the Spangenberg results: Factor 1 comprises items reflecting mainly pessimistic/resigned beliefs (e.g. Item 12), whereas Factor 2 especially contains items reflecting positive beliefs explicitly referring to the future (e.g. Item 5).

It is noteworthy that Factor 2 solely includes positively worded items, whereas Factor 1 includes negatively worded items only. Factor 1 includes Items 2, 9, 12, 14, 16, 17, and 20 (Cronbach's alpha: curative sample, 0.88; palliative sample, 0.85). Factor 2 includes Items 5, 6, 8, 10, 15, and 19.

A few more details: Keeping vs. dropping items
You should look at the fitted model (before or after rotation) to determine each variable's uniqueness.
- Communality = 1 - Uniqueness.
- Communality is a measure of how much an item shares with the latent variable structure.
- Uniqueness is what is left over (i.e., noise).
- Uniqueness DOES depend on the number of factors retained.

Example of uniquenesses
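As an illustration, communalities and uniquenesses can be read off a fitted model; a sketch assuming the factor_analyzer fits from the rotation example above:

```python
# Using the oblique two-factor fit from the earlier sketch
communality = fa_obl.get_communalities()     # per-item shared variance
uniqueness = fa_obl.get_uniquenesses()       # per-item leftover (noise) variance

# Equivalent by hand for an orthogonal solution: sum of squared loadings per item
communality_by_hand = (fa_orth.loadings_ ** 2).sum(axis=1)

for item, c, u in zip(df.columns, communality, uniqueness):
    print(f"{item}: communality={c:.2f}, uniqueness={u:.2f}")   # c + u = 1 for standardized items
```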

A few more details: Estimation / Next steps
You can use your model to calculate estimated factor scores for each subject in your dataset (a sketch of this step appears just below).
Confirmatory factor analysis (CFA): a restrictive approach which forces some of the arrows (i.e., loadings) to be zero.
- EFA: a descriptive approach to determine the structure.
- CFA: tests a particular structure. Applications?
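A sketch of the factor-score step mentioned above, again assuming the factor_analyzer fit from the earlier example:

```python
# Estimated factor scores: one row per subject, one column per factor
scores = fa_obl.transform(df)
print(scores[:5].round(2))   # scores for the first five subjects
```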

CFA: compare the structure to a fixed model
[Path diagram: a confirmatory two-factor model for items y1 through y7, with some loadings fixed to zero and error terms e1 through e7.]

Spangenberg: Used CFA
CFA was used to determine if the patients in this study demonstrated the same factor structure for the BHS as advanced cancer patients. They found that the structure in this sample of patients differed from previous studies.

CFA
CFA is often thought of as a special case of structural equation models. You make assumptions about how the variables are associated, using arrows to join variables (both latent and observed). A sketch of how such a model might be specified in code follows at the end of this section.

Stability
Spangenberg et al. also evaluated the structure over time. Did the factors stay the same in their structure (i.e., loadings, dimensionality)? Some measures said the fit was acceptable, others that it wasn't.
Problem with this paper: they provide the test statistics, but do not show us the descriptive statistics (i.e., what were the loadings in the fit on the 12-month data?).

More details on the process
And a lot more math, statistics, and matrices!
http://people.musc.edu/~elg26/teaching/psstats1.2006/factoranalysis.pdf
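Finally, a rough sketch of how a CFA like the one described here might be specified in code. This assumes the third-party semopy package (not used in the slides or the paper) and hypothetical column names for the BHS items; the factor-to-item assignments follow the EFA results quoted earlier.

```python
# Assumes the third-party `semopy` package and a DataFrame `bhs` whose columns
# item2, item5, ... hold the BHS item responses (hypothetical names).
import semopy

# Measurement model from the EFA: each item is forced to load on exactly one factor,
# i.e. all other loadings are fixed to zero.
model_desc = """
F1 =~ item2 + item9 + item12 + item14 + item16 + item17 + item20
F2 =~ item5 + item6 + item8 + item10 + item15 + item19
F1 ~~ F2
"""

model = semopy.Model(model_desc)
model.fit(bhs)                  # estimate the loadings and the factor covariance
print(model.inspect())          # parameter estimates and fit information
```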
