Research Methods


  1. Analysis of variance (ANOVA):
    • Estimate of systematic variance
    • Estimate of error variance
    • Calculate a ratio
  2. Simple, one way analysis of variance (ANOVA)

    • F = (systematic variance + error variance) / error variance
    • F = between-groups variance / within-groups variance
  3. Between groups variance=
    Within groups variance=
    • Between = variance of the group means
    • Within = average of the variances found within each group (see the sketch below)
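
    A minimal Python sketch of the F ratio defined in the last two cards, with made-up scores. Note that the cards give the conceptual ratio; the textbook mean-square version also multiplies the between term by n, the per-group sample size (see card 17):

        from statistics import mean, variance

        # Hypothetical scores for three groups (one group per level of the IV)
        groups = [
            [3, 4, 5, 4, 4],   # level 1
            [6, 7, 6, 8, 7],   # level 2
            [9, 8, 10, 9, 9],  # level 3
        ]

        group_means = [mean(g) for g in groups]

        between = variance(group_means)             # variance of the group means
        within = mean(variance(g) for g in groups)  # average within-group variance

        # Conceptual F from the cards; the textbook mean-square F multiplies the
        # numerator by n (subjects per group), which is how sample size affects
        # significance without appearing in this conceptual ratio (card 17)
        print("F =", between / within)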
  4. Simple analysis of variance (ANOVA)
    Scores almost always vary (differ) from subject to subject, for 2 reasons:
    • 1. Systematic variance
    • 2. Error or unsystematic variance
  5. 1. Systematic Variance
    Differences due to the different levels of the I.V.
  6. 2. Error or unsystematic variance
    Differences due to uncontrolled, random factors (not confounding)
  7. Error or unsystematic variance-uncontrolled variance:
    • 1. individual differences
    • 2. random variations in testing
    • 3. measurement error
    • 4. experimental treatment not the same for each subject
  8. If IV influences DV:
    systematic variation of i.v. should cause systematic variation of the d.v. = systematic variance
  9. Differences in iv should cause:
    differences in group means
  10. If IV doesn't have an effect:
    group means are still not likely to be exactly the same (chance differences remain)

    No systematic var.
  11. An effect means:
    that there is a real difference in means or that IV and DV are correlated
  12. If an effect, variations in the IV:
    would not change within-groups variance (all subjects in a group get the same level of the IV)
  13. If an effect:
    Assume that differences in group means should be:
    greater than differences within any single group
  14. If an effect:
    Difference within a group :
    just due to chance or uncontrolled factors
  15. If an effect:
    Differences between groups due to:
    controlled and uncontrolled factors
  16. Understanding the basics of ANOVA tells you how to design and run experiments, aiming for:
    • 1. large differences between groups
    • 2. small differences within groups
  17. Would size of sample influence F ratio?
    • No, not directly-not in the equation
    • Influences whether F ratio is statistically significant
  18. For a significant effect:
    • 1. large differences among group means
    • 2. small differences within each group (not confound idea)
    • small individual differences
    • reliable tests or measures
    • consistent application of each level of iv
  19. Effect =
    difference (or variability) in means
  20. Possible to see an effect if the study includes:
    a comparison of the influence of different levels of a predictor or independent variable
  21. Factorial design- more than 1 IV:
    • Each level of each IV occurs with every other level of all the independent variables
    • Uses all possible combinations of levels of 2 or more independent variables
  22. Main effect:
    the effect (differences in means) of one IV, averaging, ignoring, or collapsing over the other IVs
  23. Factorial Designs
    • 1. Economy
    • 2. Experimental control
    • 3. Check generality of the effect of an IV
  24. Factorial designs:
    1. Experimental control-
    reduce error variance
  25. Interaction:
    the effect of one iv depends on the level of another iv
  26. Factorial design:
    Look for three things if there are two i.v.:
    • main effect of iv A on dv
    • main effect of iv B on dv
    • Interaction between A and B
  27. Describing the results of factorial designs:
    • 1. Set up a table of means for each condition
    • 2. Draw a figure
    • 3. Describe the means for each main effect, followed by a description of the ANOVA results (see the sketch below)
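
    A minimal Python sketch of the table-of-means step, using hypothetical 2x2 cell means: each main effect compares marginal means (collapsing over the other IV), and unequal simple effects (non-parallel lines in the figure) signal an interaction:

        # Hypothetical cell means for a 2x2 factorial design (IV A x IV B)
        means = {
            ("A1", "B1"): 10, ("A1", "B2"): 14,
            ("A2", "B1"): 12, ("A2", "B2"): 22,
        }

        def marginal(level, pos):
            """Mean of the cells at one level of an IV, collapsing over the other."""
            cells = [m for key, m in means.items() if key[pos] == level]
            return sum(cells) / len(cells)

        # Main effect of A (ignoring B) and main effect of B (ignoring A)
        print("A1:", marginal("A1", 0), "vs A2:", marginal("A2", 0))
        print("B1:", marginal("B1", 1), "vs B2:", marginal("B2", 1))

        # Interaction: does the effect of B depend on the level of A?
        print("Effect of B at A1:", means[("A1", "B2")] - means[("A1", "B1")])
        print("Effect of B at A2:", means[("A2", "B2")] - means[("A2", "B1")])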
  28. Statistical Hypothesis Testing:
    • 1. Any t, F, U, or r could be due to chance/luck
    • 2. If a result (r, t, F...) occurs by chance infrequently, decide the result is statistically significant
    • 3. Typical reasoning
    • 4. examples
  29. 3. Typical reasoning:
    often decide that if an outcome is unusual/rare, it must be due to more than chance.
  30. Null hypothesis:
    presumed true unless statistical evidence suggests otherwise
  31. The null hypothesis is tested because:
    we have information about chance, not how the independent variable works.
  32. Type I error:
    Reject Ho, but Ho is actually true (see the simulation below)
  33. Type II error:
    Fail to reject Ho, but Ho is actually false
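
    A small simulation (made-up population parameters) of the Type I error rate: when Ho is true, roughly 5% of t tests still cross the two-tailed .05 criterion:

        import random
        from statistics import mean, stdev

        random.seed(1)

        def t_stat(a, b):
            """Independent-groups t with pooled variance (equal n assumed)."""
            n = len(a)
            pooled = (stdev(a) ** 2 + stdev(b) ** 2) / 2
            return (mean(a) - mean(b)) / (2 * pooled / n) ** 0.5

        crit = 2.101   # two-tailed .05 critical t for df = 18 (n = 10 per group)
        errors, trials = 0, 2000
        for _ in range(trials):
            # Ho is true: both groups drawn from the same population
            a = [random.gauss(50, 10) for _ in range(10)]
            b = [random.gauss(50, 10) for _ in range(10)]
            if abs(t_stat(a, b)) > crit:
                errors += 1   # rejecting a true Ho = a Type I error

        print("Type I error rate:", errors / trials)   # close to .05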
  34. What evidence must you collect in order to conclude that X is a cause of Y?
    • 1. Show that changes in Y didn't occur until the change in X (temporal precedence rule)
    • 2. Show that X and Y are related (covariation rule)
    • 3. Rule out other explanations for the relationship between X and Y (internal validity rule)
  35. Characteristics of a true experiment:
    • 1. manipulation of X
    • 2. Comparison of the effects of various levels of X on Y
    • 3. Subjects begin the experiment equivalent on all relevant variables
    • 4. control over all other important variables so that all subjects are treated the same except for X (random assignment)
  36. Program Evaluation Basic Strategy:
    -To show the program caused a change in client
    • a. Change in client occurred after the introduction of the program
    • b. Participating or not participating in the program covaries with client success
    • c. Rule out other explanations for the relationship between the program and client success
    • d. plausible causal mechanisms linking program to client success
  37. If there is an effect: F =
    greater than 1
  38. If no effect F=
    around 1
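
    A quick simulation (arbitrary parameters) of the last two cards: when the IV has no effect, the mean-square F ratio averages about 1:

        import random
        from statistics import mean, variance

        random.seed(2)

        def f_ratio(groups):
            n = len(groups[0])
            between = n * variance([mean(g) for g in groups])  # mean-square between
            within = mean(variance(g) for g in groups)         # mean-square within
            return between / within

        # No effect: all three groups sampled from the same population
        fs = [f_ratio([[random.gauss(0, 1) for _ in range(20)] for _ in range(3)])
              for _ in range(1000)]
        print("average F with no effect:", mean(fs))   # hovers around 1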
  39. More diversity =
    more error variance
  40. Interrupted time series design
    • often encountered in quasi experimental research
    • Have a single experimental group for which we have multiple observations before and after a naturally occurring treatment.
    • Instead of observing one or two 3rd grade classes, we could observe 3rd grade classes over several years
    • We need to know when the time series is interrupted by some treatment
    • Then we compare observations before and after the treatment to see whether it had any effect.
  41. Threat to internal validity in case studies:
    • Source of causation
    • baseline condition/maturation
    • history
    • selection bias
  42. Ways to enhance internal validity in case studies
    • Deviant case analysis (a non equivalent control)
    • detective work
  43. Threat to internal validity for interrupted time series:
    • changes in participants and environment
    • delayed effects
  44. Ways to enhance internal validity in interrupted time series:
    • nonequivalent control group
    • detective work
  45. Threats to internal validity for subject variables
    • dimensions on which to match
    • regression artifacts
  46. Ways to enhance internal validity for subject variables:
    • matching
    • include true independent variable
    • see interactions
  47. Threats to internal validity in Age as a variable:
    • Confounding with time of testing
    • generation of birth (cohort)
  48. Ways to enhance internal validity in Age as a variable:
    • Cross sequential design
    • include a true i.v. & seek interaction
    • Converging operations
  49. ______ _________ masks true behavior in quasi experiments and matching studies.
    regression artifacts
  50. All non-parallel lines =
    an interaction (the effect of one IV depends on the level of the other)
  51. p =
    • Significance level
    • it does not indicate repeatability
  52. one way anova- Error Variance (denominator) estimated by:
    Calculating within groups variance (variance within each group pooled)
  53. Krauter pseudo F ratio =
    range of group means divided by the average within-group range (see the sketch below):
    • numerator: range of the group means
    • denominator: (range of scores in gp 1 + range in gp 2 + range in gp 3 + ...) / # of groups
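
    A direct translation of that formula into Python, with made-up scores:

        # Hypothetical scores for three groups
        groups = [[3, 4, 5], [6, 7, 8], [9, 8, 10]]

        means = [sum(g) / len(g) for g in groups]
        range_of_means = max(means) - min(means)

        # Denominator: average of the within-group ranges
        avg_within_range = sum(max(g) - min(g) for g in groups) / len(groups)

        print("pseudo-F =", range_of_means / avg_within_range)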
  54. 2 reasons why scores in an experiment almost always vary:
    • 1. Systematic variance
    • 2. Error or unsystematic variance
  55. Mann-Whitney U test
    • a simple inferential statistic that can be used in place of a t test.
    • It can be used in many instances in which you have tested two independent groups of subjects
  56. How to do mann whitney u test:
    • 1. Put the scores in order from smallest to largest
    • 2. For each score in the group with the lower mean, count the # of scores that are smaller in the other group. Thus you'll have one number for each score in the group with the lower mean, telling you how many scores in the other group are smaller than that score
    • 3. Add these numbers together
    • 4. For each score in the group with the lower mean, count the # of scores in the other group that are tied with it. Add these ties together and divide by 2
    • 5. Add the results of steps 3 and 4 together to obtain the Mann-Whitney U (see the sketch below)
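
    A minimal Python sketch of these five steps, with hypothetical scores:

        def mann_whitney_u(g1, g2):
            """U via the counting method in the card above."""
            # Work from the group with the lower mean
            lower, higher = sorted([g1, g2], key=lambda g: sum(g) / len(g))
            smaller = sum(sum(h < x for h in higher) for x in lower)  # steps 2-3
            ties = sum(sum(h == x for h in higher) for x in lower)    # step 4
            return smaller + ties / 2                                 # step 5

        # Hypothetical scores for two independent groups
        print("U =", mann_whitney_u([2, 4, 5, 9], [7, 8, 10, 12]))   # U = 2.0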
  57. Program Evaluation:
    • Is a particular program actually delivering the services it is designed to deliver?
    • Is a law having the desired effect?
  58. Program Evaluation
    Basic Strategy-
    To show the program caused a change in the client:
    • a. Change in client occurred after the introduction of the program
    • b. Participating or not participating in the program covaries with client success
    • c. Rule out other explanations for the relationship between the program and client success
    • d. Plausible causal mechanisms linking the program to client success
  59. What kind of design is:
    After-school program and social skills... everyone gets the same treatment and all are measured the same way
    • One shot case study
    • There is no comparison
    • Descriptive research
  60. If there is success in a one shot case study design, does it show the program caused the success?
    • Does it answer...
    • Change in the client occurred after intro to program
    • Having or not having program covaries with client success
    • Rules out other explanations for relationship between program and client success

    • Extremely low internal validity
    • Can't even show that the client changed during the program
  61. Ongoing flow of events interrupted by the introduction of treatment-
    interrupted time series
  62. Pre or non experimental design
    answers the questions...
    • Pretest-posttest design (before-after): O X O
    • Potentially suffers from all the threats of a within-subjects design
    • Change occurred after intro to the program
    • Program/no program covaries with client success
    • Rule out other explanations for the correlation between program/no program and client success
  63. X O
    one shot case study
  64. O X O
    Pretest-posttest
  65. X O
      - O   (plus a nonequivalent, untreated group)
    Static Group Comparison (ex post facto)
  66. r O X O
    r O - O
    Pretest-posttest control group design (r = random assignment)
  67. Quasi Experimental Design
    Might manipulate X, but doesn't compare groups formed by random assignment

    Leaves more opportunity for alternative interpretations of the data, which must be discredited
  68. Quasi experimental designs: To change non-experimental (pretest-posttest) to quasi exp. design-
    • Add observations
    • 1. Additional observation times before and after the program is introduced (time series studies)
    • 2. Additional people who haven't received the program (nonequivalent control group, no random assignment: STATIC GROUP COMPARISON)
    • O X O -> O X O
    •          O - O  (nonequivalent control group)
  69. Time Series Design
    Get many measures before and after some natural or planned intervention
  70. Pretest-posttest design also becomes a quasi experimental design if:
    add a comparison group
  71. To improve a non experimental design:
    Increase the number of observations, add a comparison group, or both
  72. Program evaluation-quasi experimental designs

    Can distinguish between effect of program and many other variables:
    • 1. comparison and "program" group have same amount of time to mature
    • 2. history should influence both groups equally
    • 3. testing should influence both groups equally
  73. problem with program evaluation-quasi experimental designs:
    • Finding a good comparison group...
    • Can't randomly assign (a between-groups variable: ex post facto)
  74. Possible selection bias problem:
    groups are different because of the way they were selected or assigned to groups
  75. Program evaluation:
    Quasi experiments summary
    • 1. no random assignment
    • 2. quasi-comparison using additional measures before and after or non equivalent control groups
    • 3. moderate control
    • 4. often field based
    • 5. may never be able to eliminate confounding variables