Statistical Analysis: Multiple Regression, MANOVA, Factor Analysis, and SEM

Multiple Regression (MR)

Standard MR is used when we want to know the proportion of variance in a dependent variable (DV) that can be predicted by a set of independent variables (IVs). It is used to examine the importance of IVs using standardized regression coefficients and semi-partial correlations. The semi-partial correlation is the unique variance in the DV predicted by an IV, interpreted in the context of the total amount of variance the IV shares with the DV (the square of the correlation between the IV and the DV). To answer this question, the computer enters all IVs into the regression equation at the same time; regression coefficients for each IV are calculated controlling for all other IVs. When an IV is highly correlated with the DV but has a small standardized regression coefficient, it means most of the variance it shares with the DV is also shared with other IVs. When an IV correlates moderately with the DV but does not correlate with other IVs, it will have a relatively large standardized regression coefficient, meaning that it predicts a unique portion of variance in the DV.

Interpretations of regression coefficients must be made in light of the correlations among the IVs and between the IVs and the DV. Regression coefficients are sometimes interpreted in terms of the proportion of unique variance in the DV predicted by the IV (the square of the semi-partial correlation coefficient). With large samples and reliable IVs, bias is small, but with smaller samples R2 is inflated, so adjusted R2 corrects this bias; adjusted R2 approaches R2 as sample size increases. Standardized and unstandardized regression coefficients and semi-partial correlations have the same significance test: whether the regression coefficients and semi-partial correlations are significantly different from 0. Tolerance is 1 - Ri2 (where Ri2 is the squared multiple correlation of IV i with the other IVs); the variance inflation factor (VIF = 1/tolerance) indexes how much the variance of a regression coefficient is inflated because the IVs are correlated. If tolerance becomes very small (multicollinearity: one variable is highly dependent on another), the variance of all regression coefficients becomes very large, making it unlikely that any will be significantly different from 0.
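The tolerance and VIF checks described above can also be run outside SPSS. Below is a minimal Python sketch; the file name lifesat.csv is hypothetical, and the variable names (age, sex, worksat, friends, lifesat) are borrowed from the worked example later in these notes.

```python
# Sketch: standard MR plus tolerance/VIF diagnostics with statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("lifesat.csv")                 # hypothetical file
X = sm.add_constant(df[["age", "sex", "worksat", "friends"]])
model = sm.OLS(df["lifesat"], X).fit()

print(model.summary())                          # R2, adjusted R2, b, t, p for each IV

# Tolerance and VIF for each IV (skip the constant column)
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    # VIF > 10 (i.e., tolerance < .10) is the usual multicollinearity flag
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.3f}")
```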

Mathematical Basis for Analysis for All MR

The fundamental equation that models a simple regression for a research participant is Y = a + bX + e, where Y is the person’s score on the dependent variable, X is the person’s score on the independent variable, a and b are constants that need to be estimated, and e is random error. A least squares solution to the problem of estimating a and b is achieved by requiring that Σe² is minimized. This is achieved mathematically using calculus (setting the partial derivatives of this expression to zero). This procedure results in two equations involving a and b: a = Ȳ − bX̄ (the intercept on the y-axis) and, remembering that x = X − X̄ and y = Y − Ȳ, b = Σxy / Σx² = β·(Sy / Sx) (the slope of the regression line). Here, b is the unstandardized regression coefficient, defined in the units of measurement, which specifies the slope of the regression line. (For example, if Y is salary and X is work experience, b is the increase in salary per year of job experience.) In contrast, β is the standardized regression coefficient (the slope of the line when Y and X are standardized). In simple regression, β is the correlation between the dependent variable and the independent variable, but in multiple regression, β is not the same as the semi-partial correlation, although it is tested for significance with the same t-test. Generalizing to k independent variables in a multiple regression, the regression coefficient for the ith independent variable can be written as bᵢ = Σxᵢy / Σxᵢ² = βᵢ·(Sy / Sxᵢ).
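A small numeric sketch of these formulas (the X and Y values are made up) shows that the deviation-score computations and the relationship between b and β behave as stated:

```python
# b = Σxy / Σx², a = Ȳ − bX̄, and β = b·Sx/Sy (x, y are deviation scores).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # e.g., years of job experience
Y = np.array([32.0, 35.0, 41.0, 44.0, 50.0])   # e.g., salary (made-up numbers)

x, y = X - X.mean(), Y - Y.mean()              # deviation scores
b = np.sum(x * y) / np.sum(x ** 2)             # unstandardized slope
a = Y.mean() - b * X.mean()                    # intercept
beta = b * X.std(ddof=1) / Y.std(ddof=1)       # standardized slope

print(a, b, beta)
print(np.corrcoef(X, Y)[0, 1])                 # in simple regression beta equals r(X, Y)
```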

Meaning of (and How to Interpret) Various Parts of Output

To check for specification error and the assumptions of normality and homoscedasticity, standardized residuals are plotted against the predicted values of the DV. In the scatterplot, no systematic pattern should be seen: the residuals should be symmetrically distributed around 0 and should scatter by the same amount around 0 across the range of predicted values (the plot should look like a rectangular box). In the histogram, check for a normal distribution (bell curve); outliers will be apparent. In the normal P-P (probability) plot, if the expected cumulative probabilities based on ranks equal the observed ones, the points fall on a roughly straight line (an increase in one is matched by a similar increase in the other).
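A hedged sketch of these three diagnostic displays, again assuming the hypothetical lifesat.csv data frame used above (scipy's probability plot stands in for SPSS's P-P plot):

```python
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("lifesat.csv")                      # hypothetical file, as above
X = sm.add_constant(df[["age", "sex", "worksat", "friends"]])
model = sm.OLS(df["lifesat"], X).fit()

fitted = model.fittedvalues
std_resid = model.get_influence().resid_studentized_internal

fig, axes = plt.subplots(1, 3, figsize=(12, 4))

# 1) residuals vs predicted: want a patternless, rectangular band around 0
axes[0].scatter(fitted, std_resid)
axes[0].axhline(0, color="grey")
axes[0].set(xlabel="Predicted lifesat", ylabel="Standardized residual")

# 2) histogram: roughly bell-shaped; outliers stand out
axes[1].hist(std_resid, bins=15)
axes[1].set(xlabel="Standardized residual")

# 3) probability plot: points should fall close to the straight line
stats.probplot(std_resid, dist="norm", plot=axes[2])

plt.tight_layout()
plt.show()
```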

Decision Points

Check the collinearity diagnostics in the output: a VIF > 10, a condition index > 30, and/or a very low tolerance (below about .10) indicate multicollinearity; high intercorrelations among the IVs (e.g., above about .50) also warrant attention.

Hierarchical MR

Meaning of statistics used: variables are entered in different blocks to see the unique variance, or what the later variables account for (a more sophisticated approach). This allows us to determine whether a set of theoretically interesting IVs adds to the variance accounted for in the DV over and above the variance accounted for by another set of IVs. The key statistic is R2 change and its test of significance. Three situations arise when using this type of MR. First, the researcher knows from past research or theory that a set of IVs accounts for some variance in the DV and now wants to know whether a new set of IVs accounts for some of the remaining variance in the DV. The second situation is to hold unwanted variables constant (e.g., covariates that are not theoretically interesting but do account for variance in the DV); to control for this variance, the covariates are entered first into the regression equation as a block, followed by a block containing the IVs of interest. The third situation is when one is interested in testing interactions among several of the IVs. Because these variables are correlated, the interactions are correlated with the main effects for these IVs; too high an intercorrelation creates severe multicollinearity, so each IV is centred by subtracting the mean of that variable from the raw scores (a difference score) to minimize the problem. Using centred IVs, the analysis strategy is to enter the main effects as a first block, then the two-way interactions as a second block, and the three-way interactions as a third block. This ensures that the interactions at every stage are not confounded with either main effects or earlier, simpler interactions. Researchers usually stop at the third block because of the complexity of interpretation. The researcher is usually interested in whether a particular interaction is significant and so examines the sign of its regression coefficient. If it is significant, the form of the interaction is examined using the raw data to see whether it is consistent with the hypothesis; the hypothesis is then examined directly in whatever way seems most meaningful, given that the interaction terms based on the centred IVs are significant. The overall significance test for R2 is the same as for standard MR, but the significance tests for the regression weights are not the same as for the semi-partial correlation coefficients. This is because the final regression coefficients, after all IVs have been entered, are usually reported, and these do not depend on the order in which the IVs were entered into the regression equation. However, semi-partial correlation coefficients are often reported at the step when the IV is entered (and so depend on the order of entry). Thus, early entry of an IV implies that the common variance shared by this variable and other IVs not yet in the regression equation will be included in its semi-partial correlation, making it larger than it would be if all IVs were entered at the same time. SPSS calculates R2 change when a number of IVs are entered into the regression equation as a block, testing the general hypothesis that a block of IVs adds significantly to the variance accounted for by the previous blocks of IVs already in the regression equation. Meaning and interpretation of output: always report R2 and then R2 change.
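Before the SPSS output below, here is a minimal Python sketch of a two-block hierarchy and the R2 change test, assuming a pandas data frame with the same (hypothetical) variable names as in the example output:

```python
# Block 1: age, sex.  Block 2: add worksat, friends.  Test R2 change.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("lifesat.csv")                    # hypothetical file
y = df["lifesat"]
block1 = sm.OLS(y, sm.add_constant(df[["age", "sex"]])).fit()
block2 = sm.OLS(y, sm.add_constant(df[["age", "sex", "worksat", "friends"]])).fit()

r2_change = block2.rsquared - block1.rsquared
df1 = block2.df_model - block1.df_model            # number of IVs added in block 2
df2 = block2.df_resid                              # residual df for the larger model
f_change = (r2_change / df1) / ((1 - block2.rsquared) / df2)
p_change = stats.f.sf(f_change, df1, df2)

print(f"R2 block 1 = {block1.rsquared:.3f}, R2 block 2 = {block2.rsquared:.3f}")
print(f"R2 change = {r2_change:.3f}, "
      f"F({int(df1)}, {int(df2)}) = {f_change:.3f}, p = {p_change:.4f}")
```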

Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change
1       .600a   .360       .332                .7512                        .360              12.954     2     46    .000
2       .889b   .791       .772                .4394                        .430              45.225     2     44    .000

a  Predictors: (Constant), sex, age
b  Predictors: (Constant), sex, age, worksat, friends
Sex and age accounted for 36% of the variance in lifesat in Step 1, F(2, 46) = 12.954, p < .001. Both age and sex were unique predictors in the model: age, β = .473, t = 4.012, p < .001; sex, β = .368, t = 3.118, p = .003. The addition of worksat and friends on Step 2 significantly improved the fit of the model, accounting for an additional 43% of the variance in lifesat, Fchange(2, 44) = 45.225, p < .001. Once again, age was a unique predictor in the model, β = .217, t = 2.889, p = .006. As well, worksat and friends were unique predictors, β = .483, t = 5.310, p < .001 and β = .341, t = 3.739, p = .001, respectively. Overall, these predictors account for 79% of the variance in lifesat; age, worksat, and friends are unique predictors of life satisfaction when all variables are entered into the model.

ANOVA

Model            Sum of Squares   df   Mean Square   F        Sig.
1  Regression    14.620           2    7.310         12.954   .000
   Residual      25.957           46   .564
   Total         40.577           48
2  Regression    32.082           4    8.021         41.544   .000
   Residual      8.495            44   .193
   Total         40.577           48

Coefficients

                 Unstandardized Coefficients        Standardized Coefficients                     Correlations
Model            B            Std. Error            Beta                        t        Sig.     Zero-order   Partial   Part
1  (Constant)    .680         .451                                              1.507    .139
   age           .446         .111                  .473                        4.012    .000     .474         .509      .473
   sex           .670         .215                  .368                        3.118    .003     .369         .418      .368
2  (Constant)    -5.276E-02   .284                                              -.186    .854
   age           .204         .071                  .217                        2.889    .006     .474         .399      .199
   sex           .267         .137                  .146                        1.952    .057     .369         .282      .135
   worksat       .459         .087                  .483                        5.310    .000     .797         .625      .366
   friends       .350         .094                  .341                        3.739    .001     .729         .491      .258

a  Dependent variable: lifesat

Stepwise regression: a post hoc approach; it does not test theory and does not advance knowledge much. The key distinction from the two other types of MR is that there is no theory guiding which IVs to include in the analysis, nor is there a rationale for determining the order in which variables are entered into the regression equation. The analysis proceeds purely on a statistical basis, entering and omitting IVs solely in terms of their ability to add to the percentage of variance accounted for by those IVs already in the regression equation (SPSS does this for you). This exploratory technique depends on having a large sample, because otherwise chance fluctuations in the correlation matrix may result in important variables being excluded by others even though they may be more reliable predictors of the DV. This is why replication of results from stepwise regression is so important. The first IV entered is the one that correlates most highly with the DV; if two or more IVs are highly correlated, the other one will not come into the equation, which can cause misleading results. IVs that do not contribute to the variance accounted for are deleted at each step. The criterion for deleting IVs is less strict than the criterion for adding variables, to ensure that good predictors are not lost from the analysis too soon. The SPSS default is that an IV must significantly increase the percentage of variance accounted for at p < .05 to enter, but once in the equation it only has to increase the percentage of variance accounted for at p < .10 to stay in the analysis for the next step. The analysis stops when the IVs not in the equation do not add significantly to the variance accounted for. Thus, the result of a stepwise regression is a set of good predictors that, if the results replicate, can be used to predict important criteria; the results may also suggest theoretical directions. The sum of the squared semi-partial correlations for each variable, taken at the stage it enters, equals the final R2. (A sketch of the entry/removal logic appears below.)

Suppressor variables: sometimes an IV can improve the prediction of the DV when it correlates with some of the other IVs, even though it does not correlate with the DV. Including this variable in the regression equation suppresses variance in the other IVs that does not correlate with the DV, so the predictive power of these IVs is increased. Suppressors can be identified by comparing each IV's regression coefficient to its correlation with the DV. In classic suppression, a suppressor variable will not correlate with the DV but will have a significant regression coefficient. Suppressor variables can also have a positive correlation with the DV but a negative regression coefficient, and vice versa. In such complex cases, interpretation is hard, and re-analysis without the suspected suppressors is needed. Look for a much smaller (possibly non-significant) correlation with the DV in comparison with the size of the standardized regression coefficient, or an incongruity between the sign of the correlation and the regression coefficient. Then leave out the suspected suppressor, repeat the analysis, and see whether further problems with the IVs remain.
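The entry/removal logic described above can be sketched as a small loop around statsmodels OLS fits. This illustrates the p-in/p-out idea only (entry at p < .05, removal at p ≥ .10), not SPSS's exact output; the data frame and variable names are the hypothetical ones used earlier.

```python
import pandas as pd
import statsmodels.api as sm

def stepwise(df, dv, candidates, p_enter=0.05, p_remove=0.10):
    """Simple forward-stepwise selection with a removal check at each step."""
    included = []
    while True:
        changed = False
        # Entry step: try each excluded IV, enter the best one if p < p_enter.
        excluded = [c for c in candidates if c not in included]
        pvals = {}
        for c in excluded:
            m = sm.OLS(df[dv], sm.add_constant(df[included + [c]])).fit()
            pvals[c] = m.pvalues[c]
        if pvals and min(pvals.values()) < p_enter:
            included.append(min(pvals, key=pvals.get))
            changed = True
        # Removal step: drop any entered IV whose p-value has risen to p_remove.
        if included:
            m = sm.OLS(df[dv], sm.add_constant(df[included])).fit()
            worst = m.pvalues[included].idxmax()
            if m.pvalues[worst] >= p_remove:
                included.remove(worst)
                changed = True
        if not changed:
            return included

df = pd.read_csv("lifesat.csv")                    # hypothetical file
print(stepwise(df, "lifesat", ["age", "sex", "worksat", "friends"]))
```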

MANOVA is used in cases where there is a complex relationship between a set of interacting independent variables and a set of dependent variables. It addresses the same questions as ANOVA: it allows us to examine the causal influence of the IVs on a set of DVs, both alone (main effects) and in interaction. With equal n's, main effects and interactions are independent, meaning the results are clear. The amount of variance in the DVs explained by the IVs is important to know and can be determined using MANOVA. The analysis involves testing the influence of the IVs upon a set of DVs; it is possible to determine which of the DVs is most affected by the IVs, and the relative impact of the IVs on each DV can be estimated. MANOVA can be extended to include covariates, so the influence of the IVs on the DVs, controlling for important covariates, can be assessed. MANOVA allows tests of specific hypotheses (specifying particular contrasts within an overall interaction, such as comparing a treatment group with a placebo control group and a no-treatment control group at post-test, as well as overall main effects and interactions; for this reason separate t-tests are not needed). The MANOVA program does not give partial eta squared. In terms of assumptions: multivariate normality and detection of outliers (i.e., the analysis assumes that the various means in each cell of the design, and any linear combination of these means, are normally distributed). Provided there are no outliers in any cell of the design, the analysis is robust to this assumption. Rule of thumb: achieve fairly equal n's within each cell of at least 20 + k, where k is the number of DVs, to ensure the assumption is not seriously violated. Homogeneity of the variance-covariance matrix: the analysis assumes that the variance-covariance matrix for the DVs within each cell is equal. It is important to remove outliers before checking this assumption because they greatly influence the values in the variance-covariance matrix. If cell sizes are relatively equal and outliers have been dealt with, the analysis is robust to this assumption. If cell sizes are unequal, use Box's M test of homogeneity of variance-covariance matrices to check; there is a problem only if it is significant at p < .001 and the cell sizes are unequal. If larger cells are associated with larger variances and covariances, the significance levels are conservative; only when the smaller cells have the larger variances and covariances are the tests too liberal, indicating that some effects are really not significant (an inflated Type I error rate). Fmax is the ratio of the largest to the smallest variance; by the usual rule of thumb the analysis is acceptable provided the within-cell sample sizes are within a reasonable ratio and Fmax is not extreme. GLM and MANOVA calculate approximate power, which is the probability of obtaining the F value for a particular multivariate main effect or interaction if the differences among the means in the population were identical to those in the sample; they also estimate the effect size (partial eta squared) for each effect. Linearity: the analysis assumes that all DVs and covariates are linearly related within each cell of the design. Scatterplots are used to check this, but they give only a rough guide because the sample size within each cell is quite small.
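For readers working outside SPSS, statsmodels can run the same kind of factorial MANOVA. A minimal sketch, assuming a hypothetical data frame whose DV and factor names loosely follow the examples later in this section:

```python
# Two DVs (speed, accuracy) crossed with two between-subjects factors.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("typing.csv")                     # hypothetical file
mv = MANOVA.from_formula("speed + accuracy ~ teacher * city", data=df)
print(mv.mv_test())                                # Wilks' lambda, Pillai's trace, etc. per effect
```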
Homogeneity of regression and reliability of covariates: when covariates are used, or the researcher wants to use the Roy-Bargmann stepdown procedure to examine the relative importance of individual DVs, the relationships between the DVs and the covariates must be the same in every group of the design (i.e., the slope of the regression line must be the same in every experimental condition). Before running a MANCOVA or the Roy-Bargmann procedure, the pooled interactions among the covariates and the IVs must be shown to be non-significant (usually p < .01 is used to detect significance of these pooled interaction terms). Covariates must also be reasonably reliable (reliability greater than .80); using unreliable covariates can result in the effects of the IVs on the DVs being under- or over-adjusted, making the results of the MANCOVA suspect.

Practical issues and limitations: if the DVs are highly correlated, it is better to combine them into a composite and use a simpler analysis; usually the DVs are moderately correlated. A common research strategy is to enter sets of DVs bracketing a general but loose construct into separate MANOVAs. Homogeneity of the variance-covariance matrices cannot be tested if the number of DVs is greater than or equal to the number of research participants in any cell of the design. The power of the analysis is low when the number of respondents is only slightly larger than the number of DVs; this can result in a non-significant MANOVA but significant individual ANOVAs, which is very undesirable (hence the rule of thumb of 20 + k participants per cell and the strategy of analyzing small sets of DVs that measure a general, loosely defined construct). Power for MANOVA is complex because it depends on the relationships among the DVs. When cell sizes are unequal, main effects correlate with interactions, and adjustments have to be made so that interpretations of the effects are clear. Method 1 (unique; SSTYPE(3)), used when designs have equally important cells, calculates all effects partialling out every other effect in the design (similar to standard MR). Method 2 (sequential; SSTYPE(1)), used when sample sizes reflect the relative sizes of the populations from which they are drawn, applies when there is a hierarchy for testing effects, starting with the main effects (and covariates), which are not adjusted, then the two-way interaction terms, which are adjusted for the main effects, and so on. Method 3 (also sequential; SSTYPE(1)) is used when the researcher wants to specify a particular sequence for the hierarchy in which main effects and interactions are tested. Although the DVs are intercorrelated, it is not desirable to have redundancy among them. The GLM and MANOVA procedures output the pooled within-cell correlations among the DVs, and MANOVA prints the determinant of the within-cell variance-covariance matrix, which should be greater than .0001. If these indices suggest multicollinearity is a problem, a redundant DV can be deleted from the analysis, or a PCA can be done on the pooled within-cell correlation matrix and the factor scores used as DVs in the MANOVA. GLM guards against extreme multicollinearity by calculating a tolerance for each DV and comparing it to 10^-8; the analysis will not run if the tolerance is less than this value.

Mathematical basis for the analysis: given that there is more than one DV in MANOVA, this analysis uses the sums-of-squares-and-cross-products (SSCP) matrix, S, among the DVs. The significance tests for the main effects and interactions obtained through the MANOVA procedure compare ratios of determinants of the SSCP matrices calculated from between-group differences and pooled within-group variability. The key point to grasp is that the determinant of an SSCP matrix can be conceptualized as an estimate of the generalized variance minus the generalized covariance in that matrix. A MANOVA of a between-subjects factorial design uses the SSCP matrix, S, for each effect in the design, which is derived by post-multiplying the matrix of difference scores for the effect (cell mean minus grand mean) by its transpose. The determinants (the generalized variance) of these matrices can then be used to test whether the effect is significant or not. The computer creates a series of such matrices, summing over the levels of the factors in the design, with the order of each matrix always matching the number of DVs; finally, the average within-cell SSCP matrix is estimated. All these matrices are symmetrical square matrices with an order determined by the number of DVs in the analysis, so they can be added to one another. In particular, the within-cell error SSCP matrix can be added to each of the matrices associated with the main effects and interactions (or, more generally, to any SSCP matrix derived from a contrast). When this is done, a statistic called Wilks' lambda, Λ, can be calculated as follows:

Λ = |S_error| / |S_effect + S_error|

This statistic can be converted into an approximate F test, which the computer outputs along with its degrees of freedom and statistical significance. In analysis of variance, the proportion of variance accounted for by each main effect and interaction can be calculated (the computer linearly weights the means). Similarly, in MANOVA the proportion of variance accounted for by the linear combination of the DVs that maximizes the separation of the groups specified by a main effect or an interaction is simply:

η² (effect size) = 1 − Λ

remembering that Wilks' lambda is an index of the proportion of the total variance associated with the within-cell error on the DVs. However, the η² values for any given MANOVA tend to be high and can sum to a value greater than 1.

Meaning of statistics used: sometimes a researcher wants to test specific hypothesized contrasts; this is most easily done through MANOVA by specifying (in the commands) a set of orthogonal contrasts equal in number to the degrees of freedom of the overall main effect or interaction. Before running a MANCOVA, the homogeneity of regression assumption must be checked. To do this, specify the variable (sex, for example) as a covariate, then re-run a MANOVA in which sex is one of the IVs. If the assumption does not hold within each cell of the design, sex will interact with the IVs in the design. MANOVA is therefore used to test the pooled effect of all these interactions (after the main effect for sex is entered first). If Wilks' lambda for the pooled interactions is not significant, the assumption of homogeneity of regression has been met (i.e., the pooled interactions between the covariate and the remaining factors in the design are non-significant); if it is significant, you cannot run the MANCOVA because the data are distorted. If two or more covariates are used, their effects are pooled and the pooled effects are examined in interaction with the IVs. In the syntax, the covariate is added on the first line with the term "WITH x" after the variables are listed. The terms in the residual SSCP matrix for a MANCOVA are reduced because we are adjusting for the covariates; an effective covariate reduces their values. If Wilks' lambda or Pillai's trace suggests the covariate is significant, it is an effective covariate and the MANCOVA should be performed.

Meaning of various parts of the output: report Λ, the F test for Λ, and η². Box's M test: if Box's M is not significant (and the analysis tends to be robust to this assumption anyway), the homogeneity of the variance-covariance matrix assumption is met, and researchers would use Wilks' lambda rather than Pillai's trace to examine the significance of the effects.
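A tiny numeric illustration of the Wilks' lambda and η² formulas above, using made-up SSCP matrices for two DVs:

```python
# Λ = |S_error| / |S_effect + S_error| and η² = 1 − Λ
import numpy as np

S_effect = np.array([[30.0, 12.0],
                     [12.0, 20.0]])        # between-groups SSCP for one effect (made up)
S_error = np.array([[80.0, 25.0],
                    [25.0, 60.0]])         # pooled within-cell (error) SSCP (made up)

wilks = np.linalg.det(S_error) / np.linalg.det(S_effect + S_error)
eta_sq = 1 - wilks
print(f"Wilks' lambda = {wilks:.3f}, eta squared = {eta_sq:.3f}")
```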
Mauchly's sphericity test can be ignored; even if it is significant, it only shows that the DVs correlate, and that is acceptable because, in a between-subjects design, there is no sphericity assumption (unlike a design with a within-subjects factor). The descriptive statistics show the cell means, marginal means, and grand mean. Multivariate tests: don't worry about the intercept. The observed power column is the power of the design to detect population differences among the means in the main effect or interaction that are identical to the differences among the means found in the sample.

Levene's Test of Equality of Error Variances

            F       df1   df2   Sig.
speed       .456    5     54    .807
accuracy    1.908   5     54    .108

Note: this is the test of the homogeneity of variance assumption for each dependent variable separately; the analysis is robust to this assumption. Example interpretation: the results of the main analysis (using Wilks' lambda because the homogeneity of the variance-covariance matrix assumption is not violated) show that the main effect for teacher is significant, Λ = 0.776, F(2, 113) = 16.27, p < .001, η² = .224. The interaction between teacher and city is also significant, Λ = 0.781, F(4, 226) = 7.43, p < .001, η² = .116. The main effects for teacher and the interaction are also significant in the univariate tests of each DV. Examining the means shows that the number of reported family emergencies went up in Saskatoon and Regina for the teachers teaching academic subjects who were affected by the cuts, but not for the high school teachers teaching technical subjects/trades. Thus, the cuts did have a negative impact on the morale of the affected high school teachers in Saskatoon and Regina. The tests of between-subjects effects table summarizes the univariate analyses of variance on each DV. Residual SSCP matrix (pooled within-cells matrices): the GLM program does not give the determinant, but it shows that the pooled within-cell correlation between the two DVs is r = 0.426, meaning the DVs correlate quite highly across the cells of the design. When interpreting MANOVA output for univariate tests of specific contrasts, for example: the difference is significantly different from 0, showing that typing speed is faster in the 3-hours-for-4-weeks condition; the output indicates that this other contrast is also significant, t(54) = 5.43, p < .001, showing that typing speed is greater in the low-intensity condition than in the other conditions. The determinant of the pooled covariance matrix of the DVs shows whether multicollinearity is a problem or not. Do not interpret the univariate analyses unless the multivariate test is significant and homogeneity of variance holds.

Decision points (pros and cons of options): using a MANOVA circumvents a high experiment-wise error rate (i.e., it is better than running 21 separate t-tests). MANOVA combines the DVs in different ways to maximize the separation of the conditions specified by each comparison in the design (not available through ANOVA); this can be used to identify those DVs that clearly separate important social groups (known as discriminant function analysis). If heterogeneity of regression is found, the slopes of the regression lines differ, meaning there is an interaction between the covariates and the IVs; if this occurs, MANCOVA is an inappropriate analysis to use. Criteria for statistical significance: researchers usually use the approximate F statistic from Wilks' lambda as the criterion for whether an effect in MANOVA is significant or not, assuming homogeneity of the variance-covariance matrices. There are three other criteria (equivalent only when the effect being tested has 1 df); they differ slightly because they create linear combinations of the DVs that maximize the separation of the groups in slightly different ways. Pillai's trace is robust to the assumption of homogeneity of the variance-covariance matrix, especially with unequal n's in the cells of the design; it is more conservative and is best used when there are problems with the research design. These statistics are derived by extracting the eigenvalues associated with each main effect and interaction in the design; larger eigenvalues correspond to larger percentages of variance accounted for by these effects. MANOVA allows you to test smaller effects because it is more sensitive.
Once MANOVA has identified significant main effects and/or interactions, you want to know which of the DVs is most affected by the IVs. Researchers usually look for significant univariate tests of these effects for each DV, using a Bonferroni adjustment so that the Type I error rate is not inflated. However, the Bonferroni adjustment is not completely accurate because it assumes the effects obtained are independent of one another, which is not the case when the DVs are intercorrelated. The Roy-Bargmann stepdown procedure is another way to overcome this problem: the researcher specifies a sequence of DVs in order of importance, conducts an ANOVA on the most important DV, then an ANCOVA on the second most important DV adjusting for the effects of the first DV, and so on. We never use this because we never know which DV is higher in the order. A third approach is to use the loading matrix (raw discriminant function coefficients) from a discriminant function analysis (a specialized use of MANOVA never used in psychology). Important: the MANOVA procedure is more powerful and versatile than GLM because (1) the determinant of the pooled variance-covariance matrix is in the output; (2) simple main effects analysis is more powerful and uses the pooled error term from the whole design; (3) it is the only program that allows pooling of the interaction terms with the covariate to test the homogeneity of regression assumption necessary before running a MANCOVA; (4) it is the only program that provides the option of Roy-Bargmann's stepdown analysis; (5) it is the only program that allows researchers to conduct a PCA on the pooled correlation matrix specifying the relationships among the DVs; and (6) it is easier to use its syntax when specifying special contrasts. However, GLM results are displayed more clearly, and GLM provides partial eta squared, power estimates, and adjusted means for MANCOVA more easily.
Split-plot design: sometimes we use a split-plot design (containing between-subjects and within-subjects factors; e.g., treatment and control groups compared over time from pre-test to post-test to follow-up). The problem with analyzing this using ANOVA is that the sphericity assumption is often not met, but with a special form of MANOVA, called profile analysis, you do not need to meet the sphericity assumption. Profile analysis tests whether the treatment and control groups differ (the levels hypothesis, tested by the main effect for the between-subjects factor), whether the intervention affects the treatment group (the parallelism hypothesis, tested by the interaction between the between-subjects factor and the within-subjects factor), and whether there are historical, maturational, or other systematic trends over time for the treatment and control groups (the flatness hypothesis, tested by the main effect for the within-subjects factor). To avoid the sphericity problem, scores on adjacent levels of the within-subjects IV are subtracted from one another, creating segment scores, which are then used as multiple, correlated DVs in a one-way MANOVA (see the sketch after this paragraph). If the multivariate test is significant, the slopes representing the relationship between the DV and the within-subjects IV are not parallel (the parallelism hypothesis). There is only one DV for the levels hypothesis, so the between-subjects factor is tested using a univariate F test. The MANOVA analysis uses the segment scores rather than the raw original scores in the data set. The flatness test examines whether the average slope of the segments across groups is significantly different from 0. Different linear combinations of the DVs defined by the segments are used to test the main effect (flatness) and the interaction (parallelism): linear combinations that maximize the separation of the groups involved and that correlate with one another. Output: you can tell whether the sphericity assumption is violated by looking at how much the factors correlate (so if the pre- and post-test correlate more strongly with each other than with the follow-up, there is a violation). The determinant will show whether there are problems with multicollinearity among the DVs. There is a multivariate test for homogeneity of the dispersion (variance-covariance) matrices; remember that there have to be unequal n's and Box's M significant at p < .001 for it to be considered violated. The transformation matrix shows how the computer formed the orthogonal contrasts. The average within-cell correlations are given in tables; they show the DVs used by the MANOVA to test the within-subjects main effect and the interaction. Mauchly's sphericity assumption is violated if p < .05, indicating that the researcher should interpret the results of the MANOVA tests rather than the split-plot ANOVA tests. The F score is exact when there are two levels, but it is estimated when there are more levels. The univariate main effect is then listed with the interaction; usually this test is more powerful if its assumptions are met. It is best to use the Greenhouse-Geisser or Huynh-Feldt corrections in this section of GLM. Partial eta squared is used for estimating effect sizes from the multivariate analysis, though it may be better to use omega squared.
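A minimal sketch of the segment-score step, assuming a hypothetical data frame with columns pre, post, followup, and group; the group effect in the MANOVA output corresponds to the parallelism test, and the intercept effect to the flatness test:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("intervention.csv")         # hypothetical file
df["seg1"] = df["post"] - df["pre"]          # change from pre-test to post-test
df["seg2"] = df["followup"] - df["post"]     # change from post-test to follow-up

# One-way MANOVA on the segment scores: group = parallelism, intercept = flatness.
print(MANOVA.from_formula("seg1 + seg2 ~ group", data=df).mv_test())
```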
Factor analysis: FA is a statistical procedure that allows us to identify sets of items that reflect constructs. It is not an exact statistical procedure; it provides several plausible alternative solutions that cluster the items entered into the analysis somewhat differently. If theory and research suggest p underlying constructs, researchers may bracket this number of factors in the analysis and interpret the final solution as reflecting some or all of these constructs. FA can be used to explore relationships among variables (exploratory FA) or to confirm a theory that posits that certain constructs are interrelated (confirmatory FA). SEM is now used for confirmatory FA because theory is usually tested by examining relationships among theoretical constructs, so FA is more confined to the exploratory stage of research (e.g., developing measures). Another use is to create relatively independent composites of items that can be used to predict practically important outcomes. A factor loading is the correlation between an original item and a factor. The factor loading matrix shows these correlations for each factor, usually in order of size. The name of each factor is chosen by the researchers from those items that correlate highly with that factor and do not correlate with the other factors. FA's goal is to produce linear combinations of variables that account for the variance shared among variables (common variance), excluding unique variance and random error, trying to maximize common variance. The idea is that the common variance reflects psychological constructs that cause research participants to respond the way they do to each item; thus, FA excludes from consideration the unique variance associated with each variable in the analysis. The principal axis factoring (PAF) method for extracting factors is iterative; it maximizes the amount of common variance accounted for by each factor in turn. Choosing factors: the default is to retain linear combinations of the original p variables that have eigenvalues greater than 1. The PAF solution can also be examined to see whether a certain number of factors accounts for most of the variance; i.e., look at the scree plot, which plots the eigenvalues against the number of factors (look for a clear change in slope from the cliff; the exact place where the slope changes is up for debate, so this is a subjective interpretation; see the sketch after this paragraph). The scree plot allows a person to choose the two or three solutions that seem most likely and then to rotate them to achieve as much simple structure as possible. Inspect the residual correlations, because residual correlations in the .05 to .10 range, or a few greater than .10, suggest the presence of another factor. Orthogonal rotation for simple structure: after extraction, and once candidate solutions are chosen from the scree test, each solution is rotated so that the factors have simple structure, where most variables are associated with one and only one factor; rotation rearranges the variance among the factors so that simple structure is enhanced. Varimax rotation is often used to maximize the variance of the loadings within each factor: it makes low loadings lower and high loadings higher while still explaining the same variance, increasing simple structure within each factor. For FA, the sum of squared loadings (SSL) for a variable, before rotation to achieve simple structure, is equal to the communality for that variable. The sum of the communalities is equal to the total common variance accounted for by the solution. The sum of the SSLs within a factor is the variance accounted for by that factor in the solution. The total variance in the data set is the number of variables in that data set. These values allow one to calculate the proportion of common variance accounted for by each factor, the proportion of total variance accounted for by each factor, and the proportion of total variance accounted for by the solution as a whole.
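A short numpy sketch of the eigenvalue-based decisions just described (the eigenvalue > 1 rule and the scree plot), assuming a hypothetical DataFrame of questionnaire items:

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

items = pd.read_csv("items.csv")                   # hypothetical item responses
R = np.corrcoef(items.values, rowvar=False)        # p x p correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, largest first

print("Eigenvalues:", np.round(eigvals, 3))
print("Factors with eigenvalue > 1:", int(np.sum(eigvals > 1)))
print("Proportion of total variance per factor:", np.round(eigvals / len(eigvals), 3))

# Scree plot: look for the clear change in slope ("the cliff")
plt.plot(range(1, len(eigvals) + 1), eigvals, marker="o")
plt.xlabel("Factor number"); plt.ylabel("Eigenvalue"); plt.title("Scree plot")
plt.show()
```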
Principal components analysis (PCA) produces linear combinations of variables that account for all of the variance. It is used to produce linear combinations of variables that have predictive power but do not necessarily reflect underlying psychological constructs. This iterative procedure maximizes the total variance accounted for by each component. PCA is good for predictive purposes but not for theoretical purposes. In terms of assumptions: the variables should be linearly related; it is good but not essential to have fairly normal distributions; examining the items to ensure there is no extreme skewness or kurtosis is sufficient. Multicollinearity: check for severe multicollinearity by examining the squared multiple correlations obtained when each variable is the DV and all the other variables are the IVs in a regression equation (i.e., look at the initial communalities). Practical issues and limitations: (1) the correlation matrix must contain reasonable estimates of the actual relationships among the variables being analyzed (i.e., you need variability in the variables); you cannot pool samples from different studies to increase the variability of scores. (2) Detect both univariate and multivariate outliers. (3) Consider sample size before conducting the analysis; rules of thumb: Tabachnick and Fidell consider n = 100 poor, n = 200 fair, and n = 300 good, and the ratio of the number of respondents to the number of variables should be at least 5:1 and preferably 10:1 to achieve a stable solution. (4) Do not perform an FA when the correlation matrix contains mostly non-significant correlations. Refer to the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy for an overall index of the size of the partial correlations; it is the ratio of the sum of the squared correlations divided by the sum of the squared correlations plus the sum of the squared partial correlations. Rule of thumb: the KMO should be greater than .6 in order to say it is worthwhile doing an FA. Mathematical basis: the analysis takes a correlation matrix or a variance-covariance matrix and creates linear combinations of variables, called factors, that best satisfy a criterion. The goal of the analysis is to find a small number of factors that can reproduce the original correlation matrix fairly well. However, we also want to interpret the factors psychologically, which requires a rotation that re-apportions the variance among the factors, resulting in simple structure. Simple structure is when most variables are associated with only one factor. Researchers often also require that the factors be orthogonal, to further increase the solution's interpretability. In FA there are no IVs and DVs, only a matrix containing the correlations among p variables (a p x p square matrix). The mathematical problem is to create another, smaller p x m matrix composed of a small number of factors m, called a factor loading matrix, which, when post-multiplied by its transpose, re-creates the correlation matrix. Thus, the fundamental equation of FA is R = A·Aᵀ, where A is the factor loading matrix containing the correlations between the variables and each factor (the columns of the matrix). To understand intuitively how this is done, consider the responses of each research participant as a point defined by co-ordinates in p-dimensional space. If these points actually cluster within a space defined by m of these dimensions (where m is smaller than p), then m factors can account for most of the common variance. In principal axis factoring, the communalities for all the variables in the analysis are estimated through an iterative procedure. The initial values that the computer uses as communality estimates are the squared multiple correlations found by regressing each variable on all the others in the analysis; these initial estimates are then placed in the diagonal of the correlation matrix.
The procedure then iterates several times, estimating the communalities each time, until the solution stabilizes. The final communalities are the squared multiple correlations found by regressing each variable on the factors. Clearly, all p factors are needed to reproduce the correlation matrix exactly; however, the researcher is not interested in this solution. Rather, he or she expects (hopes!) that the p variables are to some degree redundant, so that m factors underlying the p variables can account for the majority of the variance in the data set. The advantage of producing an eigenvalue matrix is that the proportion of variance accounted for by each factor is known. Therefore, all the researcher need do is inspect the eigenvalues and then delete those that are small, creating a new m x m diagonal matrix. This smaller matrix can then be used to reproduce the original correlation matrix (or the original variance-covariance matrix). Essentially, reducing the eigenvalue matrix in rank to an m x m matrix reduces the factor loading matrix, A, to a p x m matrix. This matrix is then used to reproduce the original correlation matrix using the equation R_r = A·Aᵀ, where R_r is the reproduced correlation matrix of the same rank as the original correlation matrix. The residual correlation matrix, R_res, can then be calculated using the simple equation R_res = R − R_r. When the solution is adequate (the researcher has chosen the "correct" number of factors and reproduces the original correlation matrix well), the values in this residual matrix are small (SPSS prints out the number of residual correlations greater than 0.05). Orthogonal rotation is accomplished by multiplying the factor loading matrix by an m x m transformation matrix. The procedure rotates every pair of factors in turn, maximizing the variance of the squared factor loadings for each pair of factors at each step; the iterations converge on a solution that maximizes the variance of the squared loadings. Factor loadings tell us about the relative contribution that a variable makes to a factor. Oblique or non-orthogonal rotations: sometimes simple structure is not achieved through orthogonal rotation because the clusters of points in m-dimensional space are not orthogonal to one another. An oblique rotation is then used, which maximizes simple structure by using non-orthogonal co-ordinates (the factors are correlated with one another). Meaning of, and how to interpret, various parts of the output: the initial values in the diagonal of the correlation matrix are not set to 1 but rather are the squared multiple correlations of each variable with all the other variables. The iterative procedure then converges on a final solution in which the communalities are the squared multiple correlations between each variable and the factors. For data sets with more than 20 variables, the communality estimates are reasonably good and do not influence the solution very much, because there are many more correlations than communalities upon which the solution is based. Variables with low communalities are unrelated to the other variables in the data set; with fewer than 20 variables, the communality estimates will influence the solution. To force n factors, use the subcommand CRITERIA FACTOR(n). Correlation matrix: it does not matter how many rotations it takes, but rather whether the rotations converge, meaning the solution is stable. For the correlation matrix, describe whether any correlations are above .90 and any that are low, the overall range, and the direction of the relationships. Even a low determinant (e.g., 0.01770) is not very close to 0, so multicollinearity is still not a problem at that level. The anti-image correlations (do not look at the diagonal) are the negatives of the partial correlations between pairs of variables, partialling out all the other variables in the analysis; if they are below .1 they are small, which suggests a smaller number of factors underlies the variables (with larger data sets, all the partial correlations tend to be small).
The KMO should be greater than .6, meaning sufficient sampling adequacy and therefore that it is worthwhile doing an FA. In the communalities section of the PAF output, the initial column gives the percentage of variance in each variable accounted for by the other variables (e.g., 52% of age's variance is accounted for by the other variables); the extraction column is the percentage of variance in the variable accounted for by the factors (e.g., the two factors account for 45% of the variance in age). So the extraction (final) communalities are the squared multiple correlations of the variables when regressed on the factors, and the sum of the extraction column is the total amount of common variance extracted by the analysis. Total variance explained through PAF, for example: the eigenvalue for factor 1 is 4.941, and it explains 26% of the total variance before extraction. After factor 3, the changes in successive eigenvalues are relatively small (< .275), and this is taken as evidence that there are probably between two and four factors; together, the first factors account for a substantial proportion of the total variance before extraction. (Note: the initial eigenvalues side of the table is calculated using PCA to save computing time, so those values differ from the ones obtained when PAF is done on the same variables.) How to report the common variance after rotation: divide the SSL for that factor by the sum of the communalities (the total SSL for the entire column), e.g., 2.903 / 4.465 = 65.02%. Scree plot, for example: the scree plot suggests 2 or 3 factors, and the number of factors with eigenvalues greater than 1 is approximately 5. Factor loading matrix / pattern matrix: in the pattern matrix it is evident that the majority of the variables are at least fair to excellent measures of the factors; more specifically, it is clear that variables ss3 and ss14 are excellent measures of factor 1, or purer measures of factor 1 than the other variables, because they both have factor loadings above .71 (accounting for approximately 50% of overlapping variance). You hope the residuals are small, because then you have reproduced the original correlation matrix; it is acceptable even if 10% of the residuals are greater than .05. In this example there are 31 (18%) non-redundant residuals with absolute values greater than .05, which indicates that the 3-factor solution is "good" in the sense that it reproduces 82% of the correlations in the correlation matrix well.
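A small numpy illustration of the reproduced and residual correlation matrices (R_r = A·Aᵀ and R_res = R − R_r) with made-up loadings and correlations; in this toy example all residuals are below .05, the pattern you hope to see:

```python
import numpy as np

# Made-up observed correlation matrix for four items
R = np.array([[1.00, 0.62, 0.15, 0.10],
              [0.62, 1.00, 0.12, 0.08],
              [0.15, 0.12, 1.00, 0.48],
              [0.10, 0.08, 0.48, 1.00]])

# Made-up 4 x 2 factor loading matrix A (item-factor correlations)
A = np.array([[0.80, 0.10],
              [0.78, 0.05],
              [0.10, 0.70],
              [0.05, 0.68]])

R_r = A @ A.T                                # reproduced correlation matrix
R_res = R - R_r                              # residual correlation matrix
communalities = np.sum(A ** 2, axis=1)       # SSL per variable = communality

off = R_res[~np.eye(4, dtype=bool)]          # off-diagonal residuals only
print("Communalities:", np.round(communalities, 3))
print("Largest absolute residual:", round(float(np.max(np.abs(off))), 3))
print("Non-redundant residuals > .05:", int((np.abs(off) > 0.05).sum() // 2))
```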
PCA output: the difference is that the initial communalities are estimated to be 1, and the final communalities are the maximum variance that can be accounted for in the data set by linear combinations of the variables (called components). Component matrix: shows the factor loadings (components) for the analysis; the scree plot is the same. FA and PCA account for different percentages of the total variance; for both, the rotation takes the solution from each method and tries to create simple structure by re-apportioning this variance among the factors. You can compare FA with PCA in terms of reproducing the correlations and accounting for the total variance (see the sketch after this paragraph). After varimax rotation you get improved factor loadings; the final two tables show the factor loadings for the rotated components and the amount of rotation needed to achieve greater simple structure. Factor loading rule of thumb: a loading must be greater than .45 (20% shared variance with the factor) before a variable is said to load on a factor; poor = .32 (10%), fair = .45 (20%), good = .55 (30%), very good = .63 (40%), excellent = .71 (50%). After deciding on a solution with a certain number of factors, it is possible to compute scores for each participant on each factor. Many researchers prefer to select the variables that load strongly on one factor and no others; the respondents' scores on these variables are then averaged to generate an overall score on the dimension defined by the items in the factor (the start of the development of a new scale). The advantage is that this approach does not rely on particularly strong correlations among sets of variables that occur by chance and are unlikely to replicate in the same way; factor loadings that are poor to fair could be better with another sample and could be low because of idiosyncrasies of the sample. Factor scores are computed using a regression approach: regression coefficients are computed, factor scores for the participants are then computed by weighting their standardized scores by these regression coefficients, and the scores can be used as predictor variables (IVs) in future analyses. Oblique rotation: in the pattern matrix, the numbers are regression coefficients for each variable on factor 1, controlling for factors 2, 3, and so on; variables with high loadings in this matrix are normally interpreted as representing the factor uncontaminated by the other factors in the solution. Structure matrix: the squared loadings can sum to more than the total variance because the factors are correlated; the structure matrix contains the correlation of each variable with the factor (as in the orthogonal case), but you cannot add the squared loadings to obtain the total variance accounted for because the factors are correlated. The pattern matrix shows the factor loadings partialling out the remaining factors (standardized regression coefficients). If the factor correlation matrix says the factors correlate weakly or not at all, stick with an orthogonal rotation. Decision points (pros and cons of various options): orthogonal rotation is usually used because it is easier to interpret, but if theory and past research suggest the factors should be correlated, an oblique rotation will be chosen (most use oblimin). The degree to which the factors are allowed to correlate can be set by the delta value; as the value of delta approaches 0.8, the factors are allowed to correlate very highly, causing interpretation problems because the two factors may really be one. Oblique rotation does the same extraction as before, but instead of rotating orthogonal axes in multi-dimensional space to create simple structure, the axes are rotated away from 90 degrees so as to maximize simple structure. The advantage is that clusters of data points in the multi-dimensional space defined by the factors are clearly identified with one dimension, although the dimensions are inter-correlated. Oblique rotation gives a factor pattern matrix, a factor structure matrix (instead of a factor loading matrix), and the correlations among the factors (the factor correlation matrix) in the output. A problem with FA or PCA is that there are no readily available criteria against which to test the solution.
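A hedged scikit-learn sketch of the FA-versus-PCA comparison; note that scikit-learn's FactorAnalysis fits a maximum-likelihood common-factor model rather than SPSS's principal axis factoring, so the loadings will differ somewhat, and the items DataFrame is the same hypothetical one used above:

```python
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

items = pd.read_csv("items.csv")                     # hypothetical item responses
Z = StandardScaler().fit_transform(items)

pca = PCA(n_components=2).fit(Z)
pca_loadings = pca.components_.T * np.sqrt(pca.explained_variance_)  # item-component correlations

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(Z)       # common-variance solution

print("PCA: proportion of total variance:", np.round(pca.explained_variance_ratio_, 3))
print("PCA loadings:\n", np.round(pca_loadings, 2))
print("FA loadings (varimax):\n", np.round(fa.components_.T, 2))
# Factor scores for use as IVs in later analyses: fa.transform(Z)
```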

SEM: allows us to test a theory more fully, because we are typically able to manipulate only some of the pertinent variables that influence program participants, and testing a theory is not addressed adequately by conventional hypothesis-testing research methods, even though they provide strong support for central parts of the theory. SEM allows us to examine an entire set of variables together in a field setting that includes important social outcomes. The extent to which the solution can reproduce the variance-covariance matrix among the variables in the data set becomes a test of the "fit" of the theoretical model (similar to the FA criterion that the factor solution should reproduce the correlation matrix among the variables entered). It is possible to compare different theories in terms of their relative goodness of fit. The degree to which the model fits the data is one important outcome of interest, but so are the values of the path coefficients, which estimate the strength and direction of both the direct and the mediated relationships among the variables in the model. Path coefficients are unbiased estimates of population parameters and can be tested to see whether they are significantly different from 0. The relative value of the standardized path coefficients (which range from -1 through 0 to +1) indicates their importance in predicting specific outcomes. SEM is also used when a researcher is developing tests, to check that they measure the construct adequately. Psychologists use SEM techniques to test the adequacy of a theoretical model and to estimate the strength of causal paths using path coefficients. SEM allows the reliability of the measuring instruments to be estimated by specifying a measurement model and a structural model and testing the viability of both types of model simultaneously. SEM also assesses the plausibility of a theoretical model with recursive paths (paths that go in only one direction; this is limiting, since people react to events with a sense of agency, so complex feedback loops or reciprocal interactions are more realistic). If only one measured (manifest) variable is used to index each underlying construct in a theoretical model, the SEM procedure conducts a path analysis. The Bentler-Weeks model takes a regression approach to SEM; in this model, both latent and manifest variables can be exogenous or endogenous. Why is it possible to estimate the regression coefficients? Because it is assumed that (1) the IVs are measured without error, (2) the IVs have a direct causal influence on the DV and no other variables, and (3) the residuals associated with the DV are not correlated with the IVs (no misspecification). These assumptions allow us to fix the values of many parameters that could theoretically vary, and it is this specification of parameters that allows the computer to estimate unique values for the regression coefficients using a least squares solution. Unlike MR, SEM asks us to choose which parameters to fix and which to estimate. We cannot allow all possible parameters to vary freely, because this would always result in an under-identified model (not enough degrees of freedom available to estimate the parameters in the model). Thus the researcher must define both the measurement model and the structural model in a way that fixes enough parameters in value so that there are degrees of freedom available to test the plausibility of the model in its entirety; that is, the model must be over-identified (see the mathematical basis for the Bentler-Weeks regression equation below). There is no consensus on which goodness-of-fit index is best, but Hu and Bentler suggest that one residual-based index and one comparative fit index should be reported. Rule of thumb: if there is one latent factor in the model, there must be at least three variables measuring this construct and their errors must be uncorrelated. The same applies for two or more latent factors, provided each set of measured variables loads on only one factor and the factors are allowed to covary; sometimes two indicators are enough, provided none of the variances and covariances among the factors is 0. In order for a factor to have meaning, one of the measured variables is used to scale the latent variable by setting its path coefficient to 1; this is called a marker variable. If this is not done, identification problems result. Establishing whether a particular model can be tested (is over-identified) is complex, so the best strategy is to apply the guidelines and run the analysis; if the program signals a problem suggesting linear dependence between parameters, fix more values and re-run the analysis. When SEM is used purely for confirmatory FA, the theory that defines a construct in a certain way is tested, but the relationship of that construct to other constructs is not explored.

Terms of assumptions: (1) all variables must have linear relationships with each other; (2) outliers must be identified and dealt with prior to the main analysis; (3) the variables in the analysis should be normally distributed (multivariate normality); (4) absence of multicollinearity is necessary because the computer executes matrix inversions in each iteration; check the determinant of the variance-covariance matrix to examine this assumption.
Practical issues and limitations: (1) With data sets that contain variables measured on very different scales, the programs have difficulty with the analysis because the covariances differ tremendously in size, so rescale some of the variables before running the analysis. (2) The possibility of a specification error (when we have not included a variable that should be included, or when we leave out a causal path) haunts any researcher using SEM. The solution is to examine the residual variance-covariance matrix: the residuals should be small and centred around zero. Non-symmetrical residuals (some small and some large) suggest that the model estimates some parameters well and others poorly. One reason for this is that a causal path between variables in the model has been mistakenly set to zero (the theory is wrong); if true, post hoc procedures can be used that suggest how the model can be "improved" by adding paths, and replication using another sample is then required. The other reason why residuals are large and non-symmetrical is that the model is misspecified; there is no easy solution to this problem, but at least the analysis pushes the researcher to examine the theory more critically. (3) Large sample sizes are required in order to run modern SEM programs. Rule of thumb: the minimum sample size for all SEM programs can be estimated by multiplying the number of parameters the program is estimating by 10. This means that EQS usually requires a sample size of at least 200 research respondents, and other programs require more. However, experienced applied researchers with messy data say that even that number may not be enough for the program to converge on a final solution; that is, with smaller sample sizes and therefore more unstable estimates, the program simply may not be able to find an optimal solution. Part of this problem can be caused by the default start values used by the SEM program being very different from the actual values of the parameters; therefore, if estimates of these parameters can be obtained from past research, they should be specified as the initial start values in the analysis. (4) SEM is based on a mathematical procedure that tests the ability of a theoretically derived model to reproduce the variance-covariance matrix among the measured variables in a data set. The use of the variance-covariance matrix preserves the scale of the original variables; rescaling these variables by, for example, adding or subtracting a constant does not change the results of the analysis. However, rescaling variables through the use of sample statistics is more problematic, because it alters the value of the χ² statistic that is the basis for testing the goodness of fit of the model. When variables are standardized, the rescaling involves sample statistics because deviations from the mean are divided by the sample's standard deviation.
Hence, the developer of EQS, Peter Bentler, warns researchers not to use correlations whenever possible. Thus, after the analysis on the variance-covariance matrix has been done, the computer calculates the standardized path coefficients from the unstandardized path coefficients and their standard errors. Analyze the variance-covariance matrix, not the correlation matrix, whenever possible, because the correlation matrix involves standardization. One limitation of SEM is that there must be at least one fixed marker variable in the diagram. Mathematical basis: SEM uses matrix algebra to solve a set of simultaneous equations that are specified by a particular theory. The equations are derived directly from a path diagram that shows the relationships among the theoretical constructs (the structural model) and the relationships between the theoretical constructs and the measures of those constructs (the measurement model). Using these equations and initial start values, SEM programs follow an iterative algorithm that converges on an optimal solution (it is the different iterative procedures that distinguish the various SEM programs). Bearing this in mind, the fundamental Bentler-Weeks regression equation can be expressed as:

η = Β·η + Γ·ξ

where η (eta) is a q x 1 vector of the q endogenous (dependent) variables; Β (beta) is a q x q square matrix of path (regression) coefficients that are estimates of the relationships among the endogenous variables; Γ (gamma) is a q x r matrix of path coefficients that are estimates of the relationships between the endogenous variables and the exogenous (independent) variables; and ξ (xi) is an r x 1 vector of the r exogenous variables. Notice that this method involves solving q equations: there is an equation for each of the q endogenous variables, and there are no equations for the exogenous variables because their variability is explained by variables outside the model. However, the r exogenous variables have variances and covariances that need to be estimated; these variances and covariances are in an r x r variance-covariance matrix called Φ (phi). Altogether, then, the parameters that need to be estimated are in the Β, Γ, and Φ matrices, and the path diagram is used to set some of these parameters to fixed values (usually 0 or 1) so that there are enough degrees of freedom available to test the goodness of fit of the model to the data. Start values for the parameters are then entered into the matrices; these start values can be set by the computer or they can be estimated and entered by the researcher. The computer then estimates the variance-covariance matrix among all the measured variables, using the criterion for convergence specified by the researcher, and compares it to the actual variance-covariance matrix. New parameter estimates are calculated and entered as start values for the next iteration, and the computer stops the iterations when the estimate of the variance-covariance matrix cannot be improved.
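A toy numeric illustration of the Bentler-Weeks equation: once Β and Γ are filled in, the implied endogenous values are η = (I − Β)⁻¹Γξ (the coefficients below are arbitrary):

```python
import numpy as np

B = np.array([[0.0, 0.0],                 # q x q paths among endogenous variables
              [0.4, 0.0]])                # here, eta2 is predicted by eta1
G = np.array([[0.7, 0.2],                 # q x r paths from exogenous to endogenous variables
              [0.0, 0.5]])
xi = np.array([1.0, -0.5])                # r x 1 vector of exogenous scores

eta = np.linalg.solve(np.eye(2) - B, G @ xi)   # solve (I - B)·eta = G·xi
print(eta)                                     # model-implied endogenous values for this case
```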
Notice that in this form of structural equation modelling (1) the Β, Γ, and Φ matrices contain parameter estimates for both the measured and the latent variables (factors), and (2) the η and ξ vectors are not estimated but are derived directly from the data set. There are two common ways of achieving convergence: the maximum likelihood (ML) method and the generalized least squares (GLS) method. ML converges on an estimated variance-covariance matrix that maximizes the probability that the difference between the estimated and the sample's variance-covariance matrices occurred by chance. In contrast, GLS converges on an estimated variance-covariance matrix that minimizes the sum of the squared differences between the elements of the estimated and the sample's variance-covariance matrices. The two methods yield similar solutions, and both perform well when the variables are normally distributed and the sample size is adequate.
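As a hedged illustration of the ML and GLS criteria just described, the sketch below evaluates the standard forms of the two discrepancy functions for a toy sample matrix S and a toy model-implied matrix Sigma (both invented); actual programs differ in implementation details, but each iterates on the free parameters to make its function as small as possible.

```python
# Sketch only: toy 3 x 3 matrices, numpy only.
import numpy as np

S = np.array([[2.0, 0.8, 0.6],       # sample variance-covariance matrix
              [0.8, 1.5, 0.5],
              [0.6, 0.5, 1.2]])
Sigma = np.array([[2.0, 0.7, 0.6],   # model-implied variance-covariance matrix
                  [0.7, 1.5, 0.4],
                  [0.6, 0.4, 1.2]])
p = S.shape[0]

# Maximum likelihood discrepancy: zero only when Sigma reproduces S exactly.
f_ml = (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
        + np.trace(S @ np.linalg.inv(Sigma)) - p)

# Generalized least squares discrepancy: weighted sum of squared residuals.
resid = (S - Sigma) @ np.linalg.inv(S)
f_gls = 0.5 * np.trace(resid @ resid)

print(f_ml, f_gls)   # the chi-square statistic is (N - 1) times the minimized value
```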
Meaning and Interpretation of the Various Parts of the Output

In a path diagram, ovals are constructs (latent variables or factors) and rectangles are measured (manifest) variables. An exogenous variable (only arrows leading away from it) is an IV; an endogenous variable (with arrows pointing to it) is a DV, though it may also serve as an IV. A single-headed arrow is a causal path; a double-headed arrow is a correlation. Latent factors have disturbances (error in prediction), and endogenous manifest variables have error terms (all errors are exogenous variables). Note that latent factors or constructs influence how we respond to measures, so the arrows point to the manifest variables; manifest variables are measured, latent variables are not. Example of explaining a path diagram: set the path from the latent factor (sports motivation) to interest (a measured variable reflecting one of the components of sports motivation) to 1.0. This is the marker variable. Any of the three components (interest, involvement, and importance) could have been set as the marker variable and the program would run. (If this is not done, the output gives an error message indicating a linear dependency in the model.) Link this to the syntax, i.e., V1 = 1 F1 + E1;. The path diagram simultaneously tests the measurement and theoretical models.

An asterisk (*) indicates a path coefficient or variance that needs to be estimated; the total number of asterisks equals the number of parameters to be estimated. Errors are not correlated when there are no double-headed arrows between them. Paths from errors and disturbances are usually fixed at 1 while their variances are estimated, because most researchers are not interested in estimating path coefficients from unknown variables outside the model. The variances of latent variables can instead be set to 1, but it is better to fix the path coefficient from the latent variable to one of its measured indicators at 1. The number of equations to write equals the number of endogenous variables; for each endogenous variable the equation can be written out by looking at the model, e.g., V1 = *F1 + 1E1.

Output: we want the chi-square not to be significant. The basic chi-square test of goodness of fit (very dependent on sample size, and therefore often ignored) tests how well the estimated variance-covariance matrix fits the actual variance-covariance matrix among the measured variables. The degrees of freedom equal the amount of unique information in the sample variance-covariance matrix, p* = p(p + 1) / 2 where p is the number of measured variables, minus the number of parameters that need to be estimated. EQS will not run if too many parameters are being estimated. Hope for a non-significant chi-square; even small differences are significant when the sample size is large, so other goodness-of-fit indices have been developed to correct for this. Standardized path coefficients (ranging from -1 through 0 to +1) from latent variables to their measured counterparts are factor loadings; report standardized rather than unstandardized coefficients, because unstandardized path coefficients are on different scales. In the example output, multicollinearity is clearly not a problem because the determinant is not extremely small or close to 0. Rule of thumb: Tabachnick and Fidell have suggested that a good-fitting model should have a ratio of chi-square to degrees of freedom of less than 2 (see the sketch below).
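A small, hedged helper (the numbers passed to it are hypothetical) for the two bookkeeping checks above: the degrees of freedom available to test the model and the chi-square/df rule of thumb.

```python
# Sketch only: illustrative numbers, plain Python.
def sem_degrees_of_freedom(p_measured: int, n_free_params: int) -> int:
    """Unique elements in the sample covariance matrix minus free parameters."""
    unique_elements = p_measured * (p_measured + 1) // 2   # p* = p(p + 1) / 2
    return unique_elements - n_free_params

def chi_square_ratio_ok(chi_square: float, df: int) -> bool:
    """Tabachnick & Fidell rule of thumb: chi-square / df below 2 suggests good fit."""
    return df > 0 and chi_square / df < 2

df = sem_degrees_of_freedom(p_measured=6, n_free_params=13)   # 21 - 13 = 8
print(df, chi_square_ratio_ok(chi_square=14.2, df=df))        # 8 True
```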
The Bentler-Bonett normed fit index (NFI) is an often-used comparative goodness-of-fit index; values greater than .9 indicate a good fit. However, the NFI underestimates the fit of a model when relatively small samples (fewer than 200) are used. The comparative fit index (CFI) has largely replaced it: it uses a noncentrality parameter, varies from 0 to 1, and is a better estimate of goodness of fit for smaller samples (rule of thumb: good if greater than .9). Another popular comparative goodness-of-fit index is the root mean square error of approximation (RMSEA), which compares the model to a just-identified model; smaller values are desired, and a value of less than 0.06 indicates a good model, although the RMSEA tends to reject models that fit well when the sample size is small. Among the residual goodness-of-fit indices, the root mean square residual (RMR) is based on the average of the squared differences between each element of the sample variance-covariance matrix and the corresponding element of the estimated variance-covariance matrix, but its values depend on the scales of the original measured variables in the model. Use the standardized version (SRMR) to determine fit: small values (small residuals) indicate a good fit (rule of thumb: less than .08 indicates a good fit). In order to test any model, it has to be over-identified. This means that there is a unique solution to the mathematical procedure which yields estimates for all parameters that are allowed to vary freely in the model, and that at least one degree of freedom remains available to test the model with chi-square; the number of parameters to be estimated must therefore be smaller than the number of unique elements in the sample variance-covariance matrix.

Decision Points (Pros and Cons of Various Options)

When dealing with a small sample (fewer than 200), you must balance the need to include enough measured variables to adequately specify the measurement model against the need to restrict the number of parameters estimated by the model as a whole. The solution is to create a small number of parcels, each made up by averaging responses to several of the original measured variables (questionnaire items), with a minimum of three parcels per latent variable; parcels reduce the number of parameters that have to be estimated. To do parcelling, a factor analysis is done on all items measuring the construct, and the three items with the highest factor loadings are used to anchor the three parcels. The three items with the next highest loadings are then added to the anchors in reverse order, and so on. Together these three parcels, rather than the original items, are used as the manifest variables in the SEM analysis.
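A hedged sketch of the item-to-parcel assignment just described: items are ranked by their factor loadings, the three highest-loading items anchor the three parcels, and each subsequent trio of items is dealt out in alternating order. The item names and loadings below are invented for illustration.

```python
# Sketch only: invented loadings for nine items measuring one construct.
loadings = {"item1": .81, "item2": .78, "item3": .74, "item4": .70, "item5": .66,
            "item6": .63, "item7": .59, "item8": .55, "item9": .52}

ranked = sorted(loadings, key=loadings.get, reverse=True)   # highest loading first
parcels = {0: [], 1: [], 2: []}
for start in range(0, len(ranked), 3):
    trio = ranked[start:start + 3]
    order = (0, 1, 2) if (start // 3) % 2 == 0 else (2, 1, 0)  # forward, then reverse
    for parcel_id, item in zip(order, trio):
        parcels[parcel_id].append(item)

print(parcels)   # each parcel is then scored as the mean of its assigned items
```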
Output walk-through: the program prints the entire variance-covariance matrix among the variables. Take the number of unique elements in that matrix and subtract the number of parameters being estimated; the result (the degrees of freedom) has to be at least 1. The average absolute standardized residuals and the average off-diagonal standardized residuals should be quite small (less than 0.05) for the model to be a good fit. The largest standardized residuals section tells you whether individual relationships are modelled well or not; rule of thumb: residuals greater than .100 indicate a relationship that is not modelled well, and residuals below .100 indicate one that is. The histogram of residuals shows whether they are centred on 0 and whether they are symmetrical; if they are not centred on 0, are not symmetrical, or show a tail, there may be a specification error. (For example, a histogram might show that not all residuals are centred on zero (80% lie between -0.1 and 0.1), that they are not completely symmetrical, and that there is a tail in the positive range; this indicates that the model contains a specification error, suggesting that either a variable or a path is missing.) The iterative summary section shows how the function specified by the estimation method converges and stabilizes on a minimum value (in the example, .70260 in 7 iterations). The output then gives the equations with fixed paths, followed by the unstandardized path coefficients for each endogenous variable. In the example, the unstandardized path coefficient between sports motivation (F1) and involvement (V2) is .600, the standard error for this statistic is .032, and the z score (the estimate divided by its standard error) is reported as 18.465. As this z score is greater than 1.96, the coefficient is highly significant at the 5% level, as indicated by the @ symbol.

Sometimes one estimated parameter is linearly dependent upon others; the computer gives an error message about this, and the output should not be trusted when it is present. Consider eliminating the offending variable and rerunning the analysis. Next, the estimated variances and covariances of the exogenous variables are printed with their standard errors and z scores. Inspect these to see whether they are reasonable; knowledge from past research is needed to determine this. For example, the variance for sports-related values (V6) is 13.47, and it is difficult to judge the reasonableness of this estimate because variances and covariances from past research are unknown. If negative variance estimates appear, the computation has broken down, the analysis is seriously flawed, and one must reassess whether to use SEM at all. The standardized solution comes next: the standardized path coefficients are the factor loadings of the measured variables on the latent factors, and the squared multiple correlations (the square of each path coefficient) estimate the proportion of variance in a measured variable that is shared with the underlying factor. This is a communality estimate for the variable on the factor and, equivalently, an estimate of its reliability. For example, the reliability of the math self-concept scale (V12) is .773; this number indicates the proportion of variance in the measured variable that reflects the underlying math self-concept construct (F2). Marker variables and error terms have standardized coefficients different from 1 because of the standardization procedure. If you want to know the statistical significance of the marker variable's path, set another variable as the marker and rerun the analysis; it will give the same standardized path coefficients because the two solutions are equivalent. The correlations among the latent factors are given in the last table and indicate whether the constructs are independent of each other or not.

If the solution can be improved, add or remove one path at a time (Tabachnick and Fidell). The Lagrange multiplier test is a post hoc test which shows which parameters could be added to the model to improve goodness of fit; both multivariate and univariate tests are conducted, but the multivariate test is more important because it identifies parameters that can be added to the model in a stepwise fashion. Here the chi-square should be significant, which means the model can be improved, and the test shows which parameter paths should be added; only make changes that make sense theoretically, and rerun the new model to check and examine the impact of the changes on all parameter estimates. The Wald test is used to delete parameters (set them to 0) and so make the model more restrictive; it is usually done after the Lagrange multiplier test. Because parameters are being set to 0, the chi-square results should be non-significant. Its output is similar to that of the Lagrange multiplier test.
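To keep the (easily confused) decision logic of these two post hoc tests straight, here is a hedged sketch; the chi-square values passed in are hypothetical, and a one-degree-of-freedom test at the 5% level is assumed.

```python
# Sketch only: the LM test flags fixed parameters worth ADDING (a significant
# chi-square means fit would improve); the Wald test flags free parameters that
# can be DROPPED (a non-significant chi-square means fixing them to 0 costs little).
from scipy.stats import chi2

def lm_suggests_adding(lm_chi_sq: float, df: int = 1, alpha: float = 0.05) -> bool:
    return lm_chi_sq > chi2.ppf(1 - alpha, df)    # significant -> add the path

def wald_suggests_dropping(wald_chi_sq: float, df: int = 1, alpha: float = 0.05) -> bool:
    return wald_chi_sq < chi2.ppf(1 - alpha, df)  # non-significant -> safe to drop

print(lm_suggests_adding(7.9), wald_suggests_dropping(0.6))   # True True
```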