Statistics Final Exam

Card Set Information

Author:
pzazz
ID:
56137
Filename:
Statistics Final Exam
Updated:
2010-12-15 06:17:44
Tags:
Statistics Psychology
Folders:

Description:
Ch. 10-15
The flashcards below were created by user pzazz on FreezingBlue Flashcards.


  1. What is a factorial design?
    Any study with more than one independent variable (factor)
  2. What are the benefits of factorial design?
    ESP:

    • Economic
    • Scientific
    • Practical
  3. What is the E benefit of factorial design?
    1. Economical: Can look at several IVs simultaneously
  4. What is the S benefit of factorial design?
    2. Scientific: Allows us to examine:

    • -Effects of each IV on the DV
    • -Effect of the COMBINATION of the IVs on the DV
  5. What is the P benefit of the factorial design?
    3. Practical: More likely publishable
  6. Example of factorial design

    Cell: represents the data from subjects who get a particular combination of the IVs
  7. What is a matrix?
    A matrix represents the "cell means" for each group
  8. Example of a matrix
    • The four cell means arranged in a 2 x 2 grid, e.g. M = 10, 2, 8, 10
  9. Example naming conventions
    We call this a 2x2 design because there are two IVs, each with 2 levels
  10. What is the naming convention for:

    1. Drug Type (Type 1, Type 2, Type 3)
    2. Therapy Type (Type A, Type B)
    3 x 2 = 2 IVs, one w/ 3 levels, one w/ 2 levels
  11. What is the naming convention for:

    1. Drug Type (Type 1, Type 2, Type 3)
    2. Drug Dose (Dose 1, Dose 2, Dose 3)
    3. Therapy Type (Type A, Type B)
    • 3 x 3 x 2 = 3 IVs
    • 1 w/ 3 levels, 1 w/ 3 levels, 1 w/ 2 levels
  12. What is another name for a test with 2IVs?
    Two-way ANOVA
  13. What is another term for a test with 3 IVs?
    Three-way ANOVA
  14. What are the concerns of having too many variables?
    • Statistical concerns
    • -more variables
    • -less power
    • -need more subjects
    • Practical
  15. What information does a factorial design give us?
    • -Tests for the effects of each independent variable separately
    • -And the effects from combining independent variables (interaction)
  16. What is a marginal mean?

    The separate effect of each IV:
    The M of one level of an IV averaged across the levels of the other IV.

    *Average the cells for each IV
  17. Calculate the overall effect of each IV
    • M of Therapy A = 9
    • M of Therapy B = 6
    • M of Drug 1 = 6
    • M of Drug 2 = 9

    There is a significant effect of each IV = Main Effect for both Drug Type and Therapy Type
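The marginal-mean arithmetic above can be sketched in Python, using the cell means implied by cards 17 and 23 (Therapy A: 10 and 8; Therapy B: 2 and 10). The dictionary layout and function name are illustrative, not part of the original cards.

```python
# Cell means for a 2x2 factorial design (Therapy A/B x Drug 1/2),
# taken from the worked numbers on these cards.
cells = {
    ("A", "drug1"): 10, ("A", "drug2"): 8,
    ("B", "drug1"): 2,  ("B", "drug2"): 10,
}

def marginal_mean(level, position):
    """Mean of one level of an IV, averaged across the other IV's levels."""
    vals = [m for key, m in cells.items() if key[position] == level]
    return sum(vals) / len(vals)

# Therapy A: (10 + 8) / 2 = 9 ; Drug 1: (10 + 2) / 2 = 6
```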
  18. Explain combinations:
    • Different levels of different independent variables

    The effects of Drug 1 are best when combined w/ Therapy B
  19. What is an interaction?
    • When the effects of an IV are different at the different levels of another IV
    • - the combined (joint) effects of a pair of IVs on the DV
  20. The effects of 1 IV must be interpreted in terms of the levels of the other IV
    • -Does not mean that one IV depends on (influences) the other IV
    • -They are still independent of one another (drug type cannot influence therapy type)
  21. What ways can you describe interactions?
    • -Words
    • -Numbers: the differences in cell means across one row (or column) will be different than the other differences in cell means across another
    • -Graphs: when a departure from parallelism exists, a significant interaction might also exist
  22. How to describe interactions using words:
    • The effect of Therapy Type is different for different types of drugs.
  23. How to describe interactions with numbers:
    • Compare the differences (subtract) in cell means from one row to another...
    • Therapy A: 10 - 8 = 2
    • Therapy B: 2 - 10 = -8

    If the pattern of differences is not the same across rows, then there might be an interaction...
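Card 23's row-subtraction check can be written as a short Python sketch (the helper name is mine, not from the cards):

```python
# Row differences in cell means: unequal differences across rows
# suggest a possible interaction (card 23's numbers).
rows = {"Therapy A": (10, 8), "Therapy B": (2, 10)}

def row_difference(cell_pair):
    """Subtract the second cell mean from the first within one row."""
    first, second = cell_pair
    return first - second

diffs = {name: row_difference(pair) for name, pair in rows.items()}
# Therapy A: 10 - 8 = 2 ; Therapy B: 2 - 10 = -8
interaction_suggested = len(set(diffs.values())) > 1
```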
  24. How to describe interactions using graphs:
    • Technically, the correct way to display a factorial design is with a bar graph...
    • -If the bars from the two levels of an IV aren't parallel across the levels of the other IV, there is an interaction
    • Typically, people present a line graph
    • It doesn't matter which variable is on the X axis
  25. What are the possible outcomes?
    • 1. Only 1 Main Effect is significant
    • 2. Both Main Effects, but not the Interaction
    • 3. Both Main Effects AND the Interaction
    • 4. Only the Interaction is Significant
    • 5. Nothing Significant
  26. Example of 1 Significant Main Effect
    10 - 10 = 0, 5 - 5 = 0
  27. Example of 2 Main Effects, But No Interaction


    10 - 5 = 5, 20 - 15 = 5
  28. Example of Significant Interaction w/ No Main Effects
    • 10 - 1 = 9
    • 1 - 10 = -9
  29. How do we interpret studies with Interactions?

    Quantitative (non-crossover) vs. Qualitative (crossover) interactions
    • Be very cautious in interpreting the main effects if a study has a significant Interaction
    • -sometimes the main effects are important above and beyond the interaction
    • -sometimes not
  30. Quantitative Interaction
    The interaction doesn't cross over

    The interaction affects the magnitude of the results, but not the overall pattern
  31. Qualitative Interaction
    • Can't interpret the main effects without understanding the interaction...

    The interaction is driving the data
  32. What is an Interaction Effect?
    • A situation in the factorial ANOVA in which a combination of variables has an effect that could not be predicted from the effects of the two variables individually
    • i.e. combination of sensitivity and test difficulty
  33. What is a two-way factorial design?
    Factorial research design in analysis of variance with two variables that each divide the groups.
  34. What is two-way analysis of variance?
    Analysis of variance for a two-way factorial design.
  35. What is a grouping variable?
    A variable that separates groups in analysis of variance.
  36. What is a one-way analysis of variance?
    Analysis of variance in which there is only one grouping variable.
  37. What is a main effect?
    • The difference between groups on one grouping variable in a factorial design in analysis of variance; result for a grouping variable, averaging across the levels of the other grouping variable(s)
    • i.e. one for sensitivity and one for test difficulty
  38. What is the cell mean?
    It's the number inside the box, duh (the mean for one particular combination of IV levels)
  39. What are marginal means?
    The mean of each particular level of an IV: M of cells 1 and 2, M of cells 1 and 3, and so forth.
  40. Interaction Effect- get it straight now
    The impact of one grouping variable depends on the level of another grouping variable.
  41. What happens if there is an interaction?
    The pattern of differences in cell means across one row will not be the same as the patterns of differences in cell means across another row.
  42. Describing an interaction with a graph
    Whenever there is an interaction, the pattern of bars on one section of the graph is different from the pattern on the other section of the graph
  43. Relation between Interaction & Main effects
    -A study can have any combination of interaction and main effects.
  44. Even when there is an interaction, sometimes the main effect holds up over and aboce the interaction. That is, the main effect may be there at every level of the other grouping variable, but even more strongly at some points than at others.
    In this result, the main effect for arousal holds up over and above the interaction. The effect for arousal is there for both easy and hard tasks; in both cases, low arousal produces the least performance, moderate the next most, and high arousal the most. -There is still an interaction because how much high arousal produces better performance than moderate arousal is more for hard than for easy tasks.
  45. What is a correlation?
    An association between scores on two variables.
  46. What is linear correlation?
    A relation between two variables that shows up on a scatter diagram as the dots roughly following a straight line.
  47. What is a curvilinear correlation?
    A relation between two variables that shows up on a scatter diagram as dots following a systematic pattern that is not a straight line.
  48. What would no correlation look like?
    If you plotted income and shoe size- random dots everywhere.
  49. What is a positive correlation?
    • A positive slope. A relation between two variables in which high scores on one go with high scores on the other, mediums with mediums, and lows with lows.
    • On a scatter diagram, the dots roughly follow a straight line sloping up and to the right.
  50. What is a negative correlation?
    • A negative slope. A relation between two variables in which high scores on one go with low scores on the other, mediums with mediums, and lows with highs.
    • On a scatter diagram, the dots roughly follow a straight line sloping down and to the right.
  51. What determines the strength of correlation?
    How much there is a clear pattern of some particular relationship between two variables.
  52. What is the correlation coefficient?
    Looking at a scatter diagram gives you a rough understanding of the relationship between two variables. A correlation coefficient gives you the exact number that determines the direction and strength of this relationship.
  53. Logic of the linear correlation:
    • Need to determine what is a high score and what is a low score. + deviation = raw score above the mean. - deviation = raw score below the mean.
    • X - Mx
    • Y - My
  54. What is a product of deviation scores?
    When you multiply the deviation score of one variable with that of another. If the product is positive (+ x + = + ; - x - = +) then the correlation is positive. Vice versa.
  55. How does the product of deviation scores show us that there is not a linear correlation?
    Add them up, if they are close to zero, then there is not a linear correlation.
  56. Determining strength of correlation with the sum of the products of deviation scores:
    The larger the number, the stronger the correlation. But this can be misleading: how large is "large"? The raw sum also grows with the number of people and the variability of the scores, which is why it must be divided by a correction number.
  57. Determining the direction of the correlation with the sum of the product of deviation scores:
    • + = positive correlation
    • - = negative correlation

    ...straightforward enough for you, eh?
  58. What are the properties of the correction number?
    • 1. It gets larger with more people.
    • 2. It gets larger as the scores for each variable have more variation.

    √[(SSx)(SSy)]. Divide the sum of the products of deviation scores by this correction number.
  59. What is the Pearson's correlation coefficient?
    The result of dividing the sum of the products of deviation scores by the correction number.

    • + or - : tells you the direction
    • 0 - 1 : tells you the strength of the correlation
  60. Formula
    Sum of the Products of Deviation Scores
    Σ[(X – Mx) * (Y- My)]
  61. Formula
    Pearson's r
    • r = Σ[(X – Mx) * (Y – My)] / √(SSx * SSy)

    *The correction number controls for N and SD
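Card 61's formula translates almost line for line into Python. A minimal sketch; the function name and the sample data used below are illustrative:

```python
from math import sqrt

def pearson_r(xs, ys):
    """r = sum of products of deviation scores / sqrt(SSx * SSy)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    ssy = sum((y - my) ** 2 for y in ys)
    return sp / sqrt(ssx * ssy)
```

A perfectly linear increasing pair of variables gives r = 1; a perfectly decreasing pair gives r = -1.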
  62. Testing for statistical significance in a correlation:
    Is it significantly different from zero?

    Null = In the population, the true relation between the two variables is no correlation (r = 0)
  63. Formula
    Cutoff for Significance on a Distribution of Correlation Coefficients
    • t = r / √[(1 – r²) / (N – 2)]
  64. What is the df in a t test for a correlation?
    df = N - 2
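Cards 63 and 64 combine into one small sketch; the numbers in the comments are illustrative, not from the cards:

```python
from math import sqrt

def t_for_correlation(r, n):
    """t = r / sqrt((1 - r^2) / (N - 2)), evaluated on df = N - 2."""
    return r / sqrt((1 - r ** 2) / (n - 2))

def df_for_correlation(n):
    """df for the t test of a correlation (card 64)."""
    return n - 2

# e.g. r = 0.5 with N = 6 gives t of roughly 1.15 on df = 4
```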
  65. What is direction of causality?
    The path of causal effect:

    If X is thought to cause Y, then the direction of causality is from X to Y
  66. What are the three possible directions of causality?
    • 1. X could be causing Y
    • 2. Y could be causing X
    • 3. Z could be causing both X and Y
  67. Correlation v. Correlational*


    *not hormotional
    • Correlational research design- any design other than a true experiment.
    • A correlational research design is not necessarily statistically analyzed using the correlation coefficient, and some studies using the experimental research designs are most appropriately analyzed using a correlation coefficient.
  68. How do you compare correlations with each other?
    Square the correlations = r2

    This is the proportionate reduction in error.
  69. What is the restriction in range?
    • Like age in our lab.
    • Situation in which you figure your correlation but only a limited range of the possible values on one of the variables is included in the group studied.
  70. What is unreliability of measurement?
    One of the reasons why the dots may not fall close to the line is inaccurate measurement.
  71. How do outliers influence the interpretation of correlations?
    They fuck up most statistical analysis.
  72. What is the influence of a non-linear pattern on the interpretation of correlations?
    The correlation coefficient only works if the relationship is linear. This is why it is important to use a scatterplot before you calculate Pearson's r.
  73. What is Spearman's rho?
    The equivalent of a correlation coefficient for rank-ordered scores.
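Card 73's definition (Pearson's r computed on ranks) can be sketched as follows; the rank helper is my own and assumes no tied scores, which the card doesn't address:

```python
from math import sqrt

def spearman_rho(xs, ys):
    """Pearson's r computed on rank-ordered scores (no ties assumed)."""
    def rank(vals):
        # Position of each score in sorted order, counted from 1.
        ordered = sorted(vals)
        return [ordered.index(v) + 1 for v in vals]

    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    sp = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    ssx = sum((a - mx) ** 2 for a in rx)
    ssy = sum((b - my) ** 2 for b in ry)
    return sp / sqrt(ssx * ssy)
```

A monotonic but curved relation (e.g. y = x² for positive x) still gives rho = 1, which is exactly why ranks help with the non-linear patterns mentioned in card 72.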
  74. What problems can affect the interpretation of correlations?
    • 1. Outliers
    • 2. Non-linear patterns
    • 3. Restrictions in range
    • 4. Inaccurate measurements
  75. What is a predictor variable?

    *usually X
    Variable that is used to predict scores of individuals on other variables
  76. What is a criterion variable?

    *usually Y
    A variable that is predicted
  77. What is a linear prediction rule?
    Formula for predicting a person's score on a criterion variable based on the person's score on one or more predictor variables
  78. What is the regression constant (a)?
    Particular fixed number added into the prediction.
  79. What is the regression coefficient (b)?
    • Number multiplied by a person's score on a predictor variable as part of a linear prediction rule.
    • The slope of the regression line.
  80. Formula
    Linear Prediction Rule
    Y = a + (b)(X)

    • Y: person's predicted score on the criterion variable
    • X: person's score on the predictor variable
    • a: regression constant
    • b: regression coefficient
  81. What is the least squared error principle?
    • The difference between a prediction rule's predicted score on the criterion variable and a person's actual score on the criterion variable is called error. Take each error and square it.
    • Minimizes distance between actual data and predicted values.
  82. What is the sum of the squared errors?
    Sum of the squared differences between each predicted score and actual score on the criterion variable.
  83. How do we find a for the least squares linear prediction rule?
    a = MY – (b)(MX)
  84. What are the assumptions of Pearson's r?
    • Normality
    • Homoscedasticity
    • Relationship is linear
  85. Experimental v. Correlational
    • Experimental: manipulation (only of IV) ; control (all other variables constant)
    • Correlational: anything that is not a true experiment; when we don't directly manipulate the IV
  86. How do we find b for the least squares linear prediction rule?
    • b = Σ[(X – Mx) * (Y – MY)] / SSx
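Cards 83 and 86 together give the least-squares line, and card 80 gives the prediction rule. A sketch with made-up data; the function names are mine:

```python
def least_squares(xs, ys):
    """b = SP / SSx, then a = My - b * Mx (cards 83 and 86)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    b = sp / ssx
    a = my - b * mx
    return a, b

def predict(a, b, x):
    """Linear prediction rule from card 80: Y = a + b * X."""
    return a + b * x
```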
  87. How would a bivariate regression be a hypothesis test?
    • To see if X predicts Y (relationship between these variables)
    • Testing to compare slopes across studies
  88. What is the standardized regression coefficient?
    • Regression coefficient in standard deviation units.
    • It shows the predicted amount of change in standard deviation units of the criterion variable if the value of the predictor variable increases by one standard deviation.
  89. Formula
    Standardized Regression Coefficient
    β = b * (√SSX /√SSY)
  90. β and r?
    They are the same.
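Card 90's claim (β and r are the same in bivariate regression) follows from the formulas on cards 61, 86, and 89: β = (SP/SSx)·√SSx/√SSy = SP/(√SSx·√SSy) = r. It can also be checked numerically; the data below are arbitrary:

```python
from math import sqrt

def beta_and_r(xs, ys):
    """Standardized coefficient beta = b * sqrt(SSx) / sqrt(SSy); compare to r."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    ssy = sum((y - my) ** 2 for y in ys)
    b = sp / ssx
    beta = b * sqrt(ssx) / sqrt(ssy)
    r = sp / sqrt(ssx * ssy)
    return beta, r
```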
  91. Hypothesis testing bivariate regression:
    • β acts as r so can be used on t distribution.
    • t test for the correlation applies to both types of regression coefficients.
  92. Prediction of hypothesis test β:
    The hypothesis test for a regression coefficient (for both b and β) tests whether the regression coefficient is significantly different from 0.
  93. What are chi-square tests?
    • Hypothesis-testing procedures used when the variables of interest are nominal variables.
    • Comparing an observed frequency distribution to an expected frequency distribution.
  94. What is a chi-square test for goodness of fit?
    Examines how well an observed frequency distribution of a nominal variable fits some expected pattern of frequencies.
  95. What is a chi-square test for independence?
    Examines whether the distribution of frequencies over the categories of one nominal variable is unrelated to the distribution of frequencies over the categories of a second nominal variable.
  96. What is the df for a chi-square distribution?
    df = # of categories - 1
  97. Formula
    Chi-Square Goodness of Fit
    χ² = Σ[(Observed - Expected)² / Expected]
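Card 97's formula in Python; the observed and expected counts used in the comment are illustrative:

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness of fit: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# e.g. observed [10, 20, 30] vs. a uniform expectation [20, 20, 20]
```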
  98. What are the problems of regression and correlation?
    • 1. Outliers
    • 2. Unreliable Measurement
    • 3. Reliability
    • 4. Heteroscedasticity
  99. When do we use chi-squares for goodness of fit?
    When we have one nominal variable
  100. When do we use chi-squares test for independence?
    When we have more than one nominal variable.
  101. What happens if null is true and we had 100 subjects?
    • Proportion of smart/dumb people will be equal (even if we don't know what those proportions will be)
    • The two variables will be independent
  102. Formula
    Expected Value
    [Total for that row / total # of subjects ] * total for that column
  103. What is the df for chi-square test of independence?
    df = (# of rows - 1) * (# of columns - 1)
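Cards 102 and 103 can be sketched together; the 2x2 table in the test is made up for illustration:

```python
def expected_counts(table):
    """Expected cell count = (row total / N) * column total (card 102)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    return [[rt * ct / n for ct in col_totals] for rt in row_totals]

def df_independence(table):
    """df = (# of rows - 1) * (# of columns - 1) (card 103)."""
    return (len(table) - 1) * (len(table[0]) - 1)
```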
  104. What are conditional proportions?
    The proportion of subjects in each category of one variable, figured separately within each level of the other variable.
  105. What are the measures of effect size for chi-square tests?
    • Phi Coefficient
    • Cramér's V
  106. What are the assumptions of chi-square tests?
    • 1. Independence: each observation is independent of all other observations
    • 2. Minimum expected frequency size: depends on who you ask
    • 3. NOT NORMALITY: this is a nonparametric test
  107. What do we do with "real" data?
    • 1. Run with the data as is
    • 2. Delete the case
    • 3. Transform the data
    • 4. Do something else (nonparametric)
  108. What types of transformations are there?
    • Square Root Transformation: positively skewed data (+)
    • Logarithm Transformation: more severely skewed data (++)
    • Inverse Transformation: most severely skewed data (+++)
    • Reflect scores: negatively skewed data (-)
  109. When all else fails...
    NONPARAMETRIC
  110. Parametric ---------- Nonparametric
    • Dep t test ----------- Wilcoxon Signed-Rank Test
    • Ind t test ------------- Mann-Whitney U Test
    • One way ANOVA -- Kruskal-Wallis Test
    • Pearson's r ----------- Spearman's rank order
  111. What will happen in a nonparametric test if the null hypothesis is true?
    • ≈ High and low scores
    • Medians will be ≈
  112. What is a NONPARAMETRIC test?
    Used to analyze ordinal data
  113. What is the hypothesis testing in chi squares?
    • Goodness of Fit
    • Null: the distribution of people across categories matches the expected distribution.
    • Test for Independence
    • Null: the two populations are the same; the proportions are the same for both.
  114. What is the thing to fear?
    Itself.
