The flashcards below were created by user pzazz on FreezingBlue Flashcards.

What is a factorial design?
Any study with more than one independent variable (factor)

What are the benefits of factorial design?
ESP:
 Economic
 Scientific
 Practical

What is the E benefit of factorial design?
1. Economical: Can look at several IVs simultaneously

What is the S benefit of factorial design?
2. Scientific: Allows us to examine:
 Effects of each IV on the DV
 Effect of the COMBINATION of the IVs on the DV

What is the P benefit of the factorial design?
3. Practical: More likely publishable

Example of factorial design
Cell: represents the data from subjects who get a particular combination of the IVs

What is a matrix?
A matrix represents the "cell means" for each group

Example of a matrix
 M =
             Drug 1   Drug 2
 Therapy A:    10       8
 Therapy B:     2      10

Example naming conventions
We call this a 2x2 design because there are two IVs, each with 2 levels

What is the naming convention for:
1. Drug Type (Type 1, Type 2, Type 3)
2. Therapy Type (Type A, Type B)
3 x 2 = 2 IVs: 1 w/ 3 levels, 1 w/ 2 levels

What is the naming convention for:
1. Drug Type (Type 1, Type 2, Type 3)
2. Drug Dose (Dose 1, Dose 2, Dose 3, Dose 4)
3. Therapy Type (Type A, Type B)
 3 x 4 x 2 = 3 IVs
 1 w/ 3 levels, 1 w/ 4 levels, 1 w/ 2 levels
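The naming convention also tells you how many cells the design has: multiply the levels of each IV. A minimal sketch in plain Python, using the 3 x 4 x 2 example above:

```python
# Number of cells = product of the number of levels of each IV.
# Levels for the 3 x 4 x 2 example: Drug Type, Drug Dose, Therapy Type.
levels = [3, 4, 2]

n_cells = 1
for k in levels:
    n_cells *= k  # each cell is one unique combination of conditions
```

So a 3 x 4 x 2 design needs subjects for 24 distinct condition combinations.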

What is another name for a test with 2 IVs?
Two-way ANOVA

What is another term for a test with 3 IVs?
Three-way ANOVA

What are the concerns of having too many variables?
 Statistical concerns
 more variables
 less power
 need more subjects
 Practical

What information does a factorial design give us?
 Tests for the effects of each independent variable separately
 And the effects from combining independent variables (interaction)

What is a marginal mean?
The separate effect of each IV:
M of 1 level of an IV averaged across all levels of the other IV.
*Average the cells for each IV:

             Drug 1   Drug 2    M
 Therapy A:    10       8       9
 Therapy B:     2      10       6
 M:             6       9

Calculate the overall effect of each IV
 M of Therapy A = 9
 M of Therapy B = 6
 M of Drug 1 = 6
 M of Drug 2 = 9
There is a significant effect of each IV = Main Effect for both Drug Type and Therapy Type
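A minimal sketch in plain Python of how those marginal means come out of the example cell means (the row/column layout is an assumption consistent with the other cards):

```python
# 2x2 cell means: rows = Therapy (A, B), columns = Drug (1, 2)
cells = {"A": [10, 8],   # Therapy A with Drug 1, Drug 2
         "B": [2, 10]}   # Therapy B with Drug 1, Drug 2

# Marginal mean for a therapy level: average that row across both drugs
m_therapy_a = sum(cells["A"]) / 2
m_therapy_b = sum(cells["B"]) / 2

# Marginal mean for a drug level: average that column across both therapies
m_drug_1 = (cells["A"][0] + cells["B"][0]) / 2
m_drug_2 = (cells["A"][1] + cells["B"][1]) / 2
```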

Explain combinations:
 Different levels of different independent variables
The effects of Drug 1 are best when combined w/ Therapy B

What is an interaction?
 When the effects of an IV are different at the different levels of another IV
  the combined (joint) effects of a pair of IVs on the DV

The effects of 1 IV must be interpreted in terms of the levels of the other IV
 Does not mean that one IV depends on (influences) the other IV
 They are still independent of one another (drug type cannot influence therapy type)

What ways can you describe interactions?
 Words
 Numbers: the differences in cell means across one row (or column) will differ from the differences in cell means across another row (or column)
 Graphs: when a departure from parallelism exists, a significant interaction might also exist

How to describe interactions using words:
 The effect of Therapy Type is different for the different types of drugs.

How to describe interactions with numbers:
 Compare the differences (subtract) in cell means from one row to another...
 Therapy A: 10 − 8 = 2
 Therapy B: 2 − 10 = −8
If the pattern of differences is not the same across rows, then there might be an interaction...
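The row-difference check can be sketched directly (hypothetical cell means matching the example above):

```python
# Cell means: rows = Therapy (A, B), columns = Drug (1, 2)
rows = {"A": (10, 8), "B": (2, 10)}

diff_a = rows["A"][0] - rows["A"][1]   # difference across row A
diff_b = rows["B"][0] - rows["B"][1]   # difference across row B

# Unequal row differences suggest a possible interaction
possible_interaction = diff_a != diff_b
```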

How to describe interactions using graphs:
 Technically, the correct way to display a factorial design is with a bar graph...
 If the bars from the two levels of an IV aren't parallel across the levels of the other IV, there is an interaction

 Typically, people present a line graph
 It doesn't matter which variable is on the X axis

What are the possible outcomes?
 1. Only 1 Main Effect is significant
 2. Both Main Effects, but not the Interaction
 3. Both Main Effects AND the Interaction
 4. Only the Interaction is Significant
 5. Nothing Significant

Example of 1 Significant Main Effect
10 − 10 = 0, 5 − 5 = 0

Example of 2 Main Effects, But No Interaction
10 − 5 = 5, 20 − 15 = 5

Example of Significant Interaction w/ No Main Effects
 Chart

How do we interpret studies with Interactions?
Quantitative (noncrossover) vs. Qualitative (crossover) interactions
 Be very cautious in interpreting the main effects if a study has a significant Interaction
 sometimes the main effects are important above and beyond the interaction
 sometimes not

Quantitative Interaction
The interaction doesn't cross over
The interaction affects the magnitude of the results, but not the overall pattern

Qualitative Interaction
 Can't interpret the main effects without understanding the interaction...
The interaction is driving the data

What is an Interaction Effect?
 A situation in the factorial ANOVA in which a combination of variables has an effect that could not be predicted from the effects of the two variables individually
 e.g. the combination of sensitivity and test difficulty

What is a two-way factorial design?
Factorial research design in analysis of variance with two variables that each divide the groups.

What is two-way analysis of variance?
Analysis of variance for a two-way factorial design.

What is a grouping variable?
A variable that separates groups in analysis of variance.

What is a one-way analysis of variance?
Analysis of variance in which there is only one grouping variable.

What is a main effect?
 The difference between groups on one grouping variable in a factorial design in analysis of variance; the result for a grouping variable, averaging across the levels of the other grouping variable(s)
 e.g. one for sensitivity and one for test difficulty

What is the cell mean?
It's the number inside the box, duh

What are marginal means?
The mean at each level of an IV: M of (cell 1 + cell 2), M of (cell 1 + cell 3), and so forth.

Interaction Effect get it straight now
The impact of one grouping variable depends on the level of another grouping variable.

What happens if there is an interaction?
The pattern of differences in cell means across one row will not be the same as the patterns of differences in cell means across another row.

Describing an interaction with a graph
Whenever there is an interaction, the pattern of bars on one section of the graph is different from the pattern on the other section of the graph

Relation between Interaction and Main Effects
A study can have any combination of interaction and main effects.

Even when there is an interaction, sometimes the main effect holds up over and above the interaction. That is, the main effect may be there at every level of the other grouping variable, but even more strongly at some points than at others.
In this result, the main effect for arousal holds up over and above the interaction. The effect for arousal is there for both easy and hard tasks; in both cases, low arousal produces the least performance, moderate the next most, and high arousal the most. There is still an interaction because how much high arousal produces better performance than moderate arousal is more for hard than for easy tasks.

What is a correlation?
An association between scores on two variables.

What is linear correlation?
A relation between two variables that shows up on a scatter diagram as the dots roughly following a straight line.

What is a curvilinear correlation?
A relation between two variables that shows up on a scatter diagram as dots following a systematic pattern that is not a straight line.

What would no correlation look like?
If you plotted income against shoe size: random dots everywhere.

What is a positive correlation?
 A positive slope. A relation between two variables in which high scores on one go with high scores on the other, mediums with mediums, and lows with lows.
 On a scatter diagram, the dots roughly follow a straight line sloping up and to the right.

What is a negative correlation?
 A negative slope. A relation between two variables in which high scores on one go with low scores on the other, mediums with mediums, and lows with highs.
 On a scatter diagram, the dots roughly follow a straight line sloping down and to the right.

What determines the strength of correlation?
How much there is a clear pattern of some particular relationship between two variables.

What is the correlation coefficient?
Looking at a scatter diagram gives you a rough understanding of the relationship between two variables. A correlation coefficient gives you the exact number that determines the direction and strength of this relationship.

Logic of the linear correlation:
 Need to determine what is a high score and what is a low score. + deviation = raw score above the mean. − deviation = raw score below the mean.
 X − M_{x}
 Y − M_{y}

What is a product of deviation score?
When you multiply the deviation score of one variable with that of another. If the product is positive (+ × + = +; − × − = +) then the correlation is positive. Vice versa.

How does the product of deviation scores show us that there is not a linear correlation?
Add them up, if they are close to zero, then there is not a linear correlation.

Determining strength of correlation with the sum of the products of deviation scores:
The larger the sum, the stronger the correlation. But this can be misleading: how large is "large"? The raw sum also grows with N and with score variability, which is why it needs a correction.

Determining the direction of the correlation with the sum of the product of deviation scores:
 + = positive correlation
 − = negative correlation
...straightforward enough for you, eh?

What are the properties of the correction number?
 The sum of the products of deviation scores has two problems:
 1. It gets larger with more people.
 2. It gets larger as the scores for each variable have more variation.
[(SS_{X})(SS_{Y})]^{1/2}: divide the sum of the products of deviation scores by this correction number to control for both.

What is the Pearson's correlation coefficient?
The result of dividing the sum of the products of deviation scores by the correction number.
 + or −: tells you the direction
 0 to 1: tells you the strength of the correlation

Formula
Sum of the Products of Deviation Scores
Σ[(X − M_{X}) * (Y − M_{Y})]

Formula
Pearson's r
 r = Σ[(X − M_{X}) * (Y − M_{Y})] / √(SS_{X} * SS_{Y})
*The correction number controls for N and SD
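Putting the two formula cards together, Pearson's r can be computed step by step. A minimal sketch in plain Python with made-up illustrative scores:

```python
from math import sqrt

# Hypothetical scores on two variables
X = [1, 2, 3, 4, 5]
Y = [2, 4, 5, 4, 5]

mx = sum(X) / len(X)
my = sum(Y) / len(Y)

# Sum of the products of deviation scores
sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))

# Correction number: sqrt(SSx * SSy), which controls for N and spread
ssx = sum((x - mx) ** 2 for x in X)
ssy = sum((y - my) ** 2 for y in Y)

r = sp / sqrt(ssx * ssy)
```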

Testing for statistical significance in a correlation:
Is it significantly different than zero?
Null = In the population, the true relation between the two variables is no correlation (r = 0)

Formula
Cutoff for Significance on a Distribution of Correlation Coefficients
 t = r * √(N − 2) / √(1 − r^{2}), evaluated on the t distribution with df = N − 2

What is the df in a t test for a correlation?
df = N − 2
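A sketch of the t test for a correlation, assuming the standard formula t = r·√(N − 2)/√(1 − r²), with hypothetical values for r and N:

```python
from math import sqrt

def t_for_r(r, n):
    """t statistic for testing whether r differs from 0; df = n - 2."""
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

# Hypothetical: r = .50 from a sample of N = 30
t = t_for_r(0.50, 30)
df = 30 - 2
```

Compare t to the cutoff from a t table at df = N − 2.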

What is direction of causality?
The path of causal effect:
If X is thought to cause Y, then the direction of causality is from X to Y

What are the three possible directions of causality?
 1. X could be causing Y
 2. Y could be causing X
 3. Z could be causing both X and Y

Correlation v. Correlational*
*not hormotional
 Correlational research design: any design other than a true experiment.
 A correlational research design is not necessarily statistically analyzed using the correlation coefficient, and some studies using the experimental research designs are most appropriately analyzed using a correlation coefficient.

How do you compare correlations with each other?
Square the correlations = r^{2}
This is the proportionate reduction in error.

What is the restriction in range?
 Like age in our lab.
 Situation in which you figure your correlation but only a limited range of the possible values on one of the variables is included in the group studied.

What is unreliability of measurement?
One of the reasons why the dots may not fall close to the line is inaccurate measurement.

How do outliers influence the interpretation of correlations?
They fuck up most statistical analyses: a single extreme point can drastically change r.

What is the influence of a nonlinear pattern on the interpretation of correlations?
The correlation coefficient only works if the relationship is linear. This is why it is important to use a scatterplot before you calculate Pearson's r.

What is Spearman's rho?
The equivalent of a correlation coefficient for rank-ordered scores.
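Spearman's rho is just Pearson's r computed on ranks. A minimal sketch with made-up, tie-free scores (the simple ranking helper assumes no ties):

```python
from math import sqrt

def to_ranks(scores):
    """Rank scores from 1 (lowest) upward; assumes no ties."""
    order = sorted(scores)
    return [order.index(s) + 1 for s in scores]

def pearson(X, Y):
    """Pearson's r: sum of deviation products over sqrt(SSx * SSy)."""
    mx, my = sum(X) / len(X), sum(Y) / len(Y)
    sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))
    ssx = sum((x - mx) ** 2 for x in X)
    ssy = sum((y - my) ** 2 for y in Y)
    return sp / sqrt(ssx * ssy)

# Hypothetical scores; rho = Pearson's r on their ranks
X = [10, 20, 30, 40]
Y = [1, 3, 2, 4]
rho = pearson(to_ranks(X), to_ranks(Y))
```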

What problems can affect the interpretation of correlations?
 1. Outliers
 2. Nonlinear patterns
 3. Restrictions in range
 4. Inaccurate measurements

What is a predictor variable?
*usually X
Variable that is used to predict scores of individuals on other variables

What is a criterion variable?
*usually Y
A variable that is predicted

What is a linear prediction rule?
Formula for predicting a person's score on a criterion variable based on the person's score on one or more predictor variables

What is the regression constant (a)?
Particular fixed number added into the prediction.

What is the regression coefficient (b)?
 Number multiplied by a person's score on a predictor variable as part of a linear prediction rule.
 The slope of the regression line.

Formula
Linear Prediction Rule
Ŷ = a + (b)(X)
 Ŷ: person's predicted score on the criterion variable
 X: person's score on the predictor variable
 a: regression constant
 b: regression coefficient

What is the least squared error principle?
 The difference between a prediction rule's predicted score on the criterion variable and a person's actual score on the criterion variable is called error. Take each error and square it.
 Minimizes distance between actual data and predicted values.

What is the sum of the squared errors?
Sum of the squared differences between each predicted score and actual score on the criterion variable.

How do we find a for the least-squares linear prediction rule?
a = M_{Y} − (b)(M_{X})

What are the assumptions of Pearson's r?
 Normality
 Homoscedasticity
 Relationship is linear

Experimental v. Correlational
 Experimental: manipulation (only of IV) ; control (all other variables constant)
 Correlational: anything that is not a true experiment; when we don't directly manipulate the IV

How do we find b for the least-squares linear prediction rule?
 b = Σ[(X − M_{x}) * (Y − M_{Y})] / SS_{x}
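Combining the b and a cards gives the full least-squares rule. A minimal sketch with made-up illustrative scores:

```python
# Hypothetical predictor (X) and criterion (Y) scores
X = [1, 2, 3, 4, 5]
Y = [2, 4, 5, 4, 5]

mx = sum(X) / len(X)
my = sum(Y) / len(Y)

sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))  # sum of products of deviations
ssx = sum((x - mx) ** 2 for x in X)

b = sp / ssx        # regression coefficient (slope)
a = my - b * mx     # regression constant (intercept)

y_hat = a + b * 4   # predicted criterion score for a predictor score of 4
```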

How would a bivariate regression be a hypothesis test?
 To see if X predicts Y (relationship between these variables)
 Testing to compare slopes across studies

What is the standardized regression coefficient?
 Regression coefficient in standard deviation units.
 It shows the predicted amount of change in standard deviation units of the criterion variable if the value of the predictor variable increase by one standard deviation.

Formula
Standardized Regression Coefficient
β = b * (√SSX /√SSY)

β and r?
In bivariate regression, they are identical (β = r).
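A numeric check that β equals r in the bivariate case, using made-up illustrative scores:

```python
from math import sqrt

# Hypothetical predictor and criterion scores
X = [1, 2, 3, 4, 5]
Y = [2, 4, 5, 4, 5]

mx, my = sum(X) / len(X), sum(Y) / len(Y)
sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))
ssx = sum((x - mx) ** 2 for x in X)
ssy = sum((y - my) ** 2 for y in Y)

b = sp / ssx
beta = b * sqrt(ssx) / sqrt(ssy)   # standardized regression coefficient
r = sp / sqrt(ssx * ssy)           # Pearson's r

same = abs(beta - r) < 1e-12       # beta equals r in bivariate regression
```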

Hypothesis testing bivariate regression:
 β acts as r, so it can be tested on the t distribution.
 t test for the correlation applies to both types of regression coefficients.

Prediction of hypothesis test β:
The hypothesis test for a regression coefficient (for both b and β) tests whether the regression coefficient is significantly different from 0.

What are chi-square tests?
 Hypothesis-testing procedures used when the variables of interest are nominal variables.
 Comparing an observed frequency distribution to an expected frequency distribution.

What is a chi-square test for goodness of fit?
Examines how well an observed frequency distribution of a nominal variable fits some expected pattern of frequencies.

What is a chi-square test for independence?
Examines whether the distribution of frequencies over the categories of one nominal variable is unrelated to the distribution of frequencies over the categories of a second nominal variable.

What is the df for a chi-square distribution?
df = N_{categories} − 1

Formula
Chi-Square Goodness of Fit
χ^{2} = Σ[(Observed − Expected)^{2} / Expected]
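A sketch of the goodness-of-fit computation with hypothetical counts, assuming equal expected proportions under the null:

```python
# Observed counts across 3 categories; expected = equal split under the null
observed = [30, 10, 20]
n = sum(observed)                              # 60 subjects
expected = [n / len(observed)] * len(observed)  # 20 expected per category

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # number of categories - 1
```

Compare chi_sq to the chi-square cutoff at that df.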

What are the problems of regression and correlation?
 1. Outliers
 2. Unreliable Measurement
 3. Restriction of range
 4. Heteroscedasticity

When do we use the chi-square test for goodness of fit?
When we have one nominal variable

When do we use the chi-square test for independence?
When we have > one nominal variable.

What happens if null is true and we had 100 subjects?
 Proportion of smart/dumb people will be equal (even if we don't know what those proportions will be)
 The two variables will be independent

Formula
Expected Value
[Total for that row / total # of subjects ] * total for that column

What is the df for the chi-square test of independence?
df = (# of rows − 1) * (# of columns − 1)
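The expected-value and df formulas for the test of independence, applied to a hypothetical 2x2 table of counts:

```python
# Observed frequencies: rows = one nominal variable, columns = the other
table = [[30, 10],
         [20, 40]]

n = sum(sum(row) for row in table)          # total subjects = 100
row_totals = [sum(row) for row in table]    # [40, 60]
col_totals = [sum(c) for c in zip(*table)]  # [50, 50]

# Expected value = (row total / N) * column total
expected = [[row_totals[i] / n * col_totals[j] for j in range(2)]
            for i in range(2)]

chi_sq = sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
             for i in range(2) for j in range(2))
df = (len(table) - 1) * (len(table[0]) - 1)
```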

What are conditional proportions?
The proportion of subjects in each category of one variable, figured separately within each level of the other variable.

What are the measures of effect size for chi-square tests?
 Phi (φ = √(χ^{2}/N)) for 2×2 tables
 Cramér's phi (√(χ^{2}/[N * df_{smaller}])) for larger tables
What are the assumptions of chi-square tests?
 1. Independence: each observation is independent of all other observations
 2. Minimum expected frequency size: depends on who you ask (a common rule of thumb is an expected frequency of at least 5 per cell)
 3. NOT NORMALITY: this is a nonparametric test

What do we do with "real" data?
 1. Run with the data as is
 2. Delete the case
 3. Transform the data
 4. Do something else (nonparametric)

What types of transformations are there?
 + Square Root Transformation: mild positive skew
 ++ Logarithm Transformation: moderate positive skew
 +++ Inverse Transformation: severe positive skew
 − Reflect scores: for negatively skewed data, reflect the scores first, then transform
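A sketch of the transformations applied to made-up positively skewed scores (reflection shown for the negative-skew case):

```python
from math import sqrt, log10

# Hypothetical positively skewed scores (long right tail)
scores = [1, 2, 2, 3, 4, 9, 16, 25]

sqrt_scores = [sqrt(s) for s in scores]   # square root: mild skew
log_scores = [log10(s) for s in scores]   # logarithm: stronger pull (needs s > 0)
inv_scores = [1 / s for s in scores]      # inverse: strongest (reverses the order)

# Reflection for negatively skewed data: (max + 1) - score, then transform as above
reflected = [(max(scores) + 1) - s for s in scores]
```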

When all else fails...
NONPARAMETRIC

Parametric → Nonparametric
 Dependent t test → Wilcoxon Signed-Rank Test
 Independent t test → Mann-Whitney U Test
 One-way ANOVA → Kruskal-Wallis Test
 Pearson's r → Spearman's rank order

What will happen in a nonparametric test if the null hypothesis is true?
 High and low scores will be spread ≈ evenly across groups
 Group medians will be ≈ equal

What is a NONPARAMETRIC test?
A test that makes no assumptions about the shape of the population distribution; used to analyze ordinal (rank-ordered) data

What is the hypothesis testing in chi-squares?
 Goodness of Fit
  Null: the distribution of people across categories matches the expected distribution.
 Test for Independence
  Null: the two populations are the same; the proportions are the same for both groups.

What is the thing to fear?
Itself.

