Research Methods

2011-10-12
Research Methods Fall Midterm 2011

  1. What are the 3 assumptions we make about scientific theories?
    General Scientific -- at some point we accept that the phenomenon under study is real.

    Paradigmatic -- which methods make sense and which don't; these may evolve over time.

    Domain -- how generalizable is it?
  2. What is a variable?
    any thing or concept that can take on more than one value
  3. What is a concrete variable? Give an example.
    A concrete variable is something that you can take a direct measure of.

    e.g. how tall are you
  4. What is an abstract variable?
    An abstract variable is something that cannot be directly measured; you must measure other things that correlate with the variable you are interested in.

    e.g. how smart are you
  5. What is a hypothetical construct?
    A hypothetical construct is an abstract concept invented to refer to a variable that cannot be directly observed.

    e.g. Intelligence
  6. Why do we use operational definitions?
    To make abstract concepts concrete enough to study.
  7. What is a mediating variable?
    a variable that must be present for a relationship between two variables to occur.

    e.g. if parents are home, kids won't get into the liquor cabinet, but if parents are not home they will.
  8. What is a moderating variable?
    There is a relationship between the two variables, but it changes depending on whether a moderator is present.

    e.g. more births in Stockholm during the fall, probably because more people stay in during the winter.
  9. What are Mill's 3 rules for causality?
    • Cause must precede effect
    • Cause and effect must covary
    • There must be no other plausible explanation for the effect.
  10. What is quantitative research?
    • Data in number form
    • Often indicates quantity or frequency
  11. What is qualitative research?
    • Data reflects categories
    • Phenomenological research that focuses on what things mean.
  12. What are the 6 research strategies frequently used?
    • Experimental
    • Field Experiments
    • Quasi-experimental
    • Correlational
    • Case Study
    • Single Subject
  13. What is an experimental research design?
    • Researcher is in control and manipulating the independent variable.
    • Attempt to determine causality
    • e.g. irritate people, then ask them to deliver a shock to their partner; sometimes have a gun present near the shock switch, and measure whether shocks are bigger with the gun present.
  14. What are field experiments?
    • experiments done in the real world instead of the lab.
    • harder to control some things
    • e.g. cover someone in blood, send them out to a park, and watch to see how many people are willing to help them
  15. What is quasi-experimental research?
    • Nonrandom assignment of participants
    • Still try to determine causality, but not as strongly as with experimental research.
    • e.g. divide a classroom up by kids' last names and complete an education study.
  16. What is correlational research?
    • also called naturalistic
    • looking at relationships between one or more predictors and one or more criteria.
    • Do not manipulate an IV
    • cannot determine causality
    • e.g. looking at the correlation between SAT scores and college GPA
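A correlation like the SAT/GPA example can be computed directly. This is a minimal sketch of the Pearson correlation coefficient; the scores below are made-up numbers, not real data:

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical SAT scores and college GPAs for five students.
sat = [1100, 1250, 1300, 1450, 1500]
gpa = [2.8, 3.0, 3.2, 3.6, 3.7]
r = pearson_r(sat, gpa)  # close to +1: strong positive relationship
```

Note that r only describes a relationship; as the card says, correlational research cannot determine causality.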
  17. What is a case study?
    • looking at one case or one individual
    • may be difficult to generalize unless you have multiple case studies
    • cannot attribute causality
  18. What is single subject research?
    • Single individual is the focus, but it can be experimental if you manipulate an IV
    • Good to use in private practice
  19. What are the 6 types of research defined by time?
    • Cross sectional
    • Longitudinal
    • Cohort Sequential
    • Time Series
    • Prospective
    • Retrospective
  20. What is cross sectional research?
    • Look at a variable at a particular point in time, but across groups
    • We can show that there are relationships, but cannot show causality
    • e.g. looking at word knowledge using groups at different ages.
  21. What is longitudinal research?
    • Look at a variable in the same subjects over a period of time.
    • can show a causal relationship
    • e.g. measure attachment in babies, then again at 5, 10, 15..etc.
  22. What is cohort sequential research?
    Try to speed up longitudinal research by getting groups at different ages that have common factors and looking at the variables of interest
  23. What is retrospective research?
    • Look at things after the fact to see if we can tell what caused them.
    • e.g. food poisoning: if a bunch of people got sick, what do they have in common?
  24. What are the two utilization criteria for research findings?
    • Truth test -- is it believable
    • Utility test -- how useful is it
  25. What are the 3 strategies for determining the validity of measurements?
    • Content
    • Criterion
    • Construct
  26. What is content validity?
    • Does this cover what we think it should cover?
    • Do the things we are looking at logically get to the thing we are trying to study?
    • e.g. if we want to know about intelligence does measuring math ability get to it?
    • often demonstrated through agreement between experts.
  27. What is criterion validity?
    • Does this measure correlate with other measures that we think it should?
    • must be calibrated against a known measure or against itself over time
  28. What is predictive criterion validity?
    Testing the measure over time to see if the same results are found
  29. what is concurrent criterion validity?
    • comparing the measure to another established measure.
    • e.g. comparing the results of your intelligence test to the results of an established intelligence test. If you see similar scores, you have concurrent criterion validity.
  30. What is discriminant validity?
    • The degree to which two things are not similar
    • e.g. math knowledge and reading knowledge may not correlate.
  31. What is Convergent validity?
    • The degree to which things that should be similar are.
    • e.g. two different measures of math knowledge should correlate
  32. What is reliability in regards to measurement?
    • Deals with the consistency of a measure
    • How well will we be able to reproduce a score?
  33. What are the 4 types of reliability?
    • Test-retest
    • Alternate forms
    • Internal consistency
    • Inter-rater
  34. What is test-retest reliability?
    • repeat measures twice and see if we get similar results
    • e.g. measure people's height twice
    • e.g. give people a word knowledge test today and then tomorrow
    • must be careful that subjects may remember some of what was on the first test and this could affect their scores on the second
  35. What are alternate forms?
    • Make up two tests that are supposed to measure the same thing and give the subject both of them.
    • may be difficult to create two tests that are really the same but different
  36. What is internal consistency?
    • measure the correlation between different items on the same test.
    • split-half test: write one test, split it in two, and see if the halves correlate
  37. What is Cronbach's coefficient alpha?
    • a statistical analysis that tells us how well items agree with each other.
    • If they agree we have high reliability
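The idea behind coefficient alpha can be sketched in code. This is a simplified from-scratch version of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the item data are hypothetical:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha. items: one list per test item, each
    holding that item's score for every subject."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-subject totals
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Three hypothetical items that agree perfectly -> alpha of 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(items)
```

When the items agree less, the sum of item variances grows relative to the variance of the totals and alpha drops, matching the card: high agreement means high reliability.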
  38. What is inter-rater reliability?
    • If we get measures from different sources, how much do they agree?
    • e.g. if you have two people observing the same behaviors, are their ratings similar?
  39. What are populations?
    • Who do we expect to be able to generalize the findings to?
    • e.g. findings about reinforcement can be generalized across a large population, but findings about Tourette's may not be.
  40. What is the sample?
    The subset of people from the population who are actually being studied
  41. What is a random sample?
    everyone in the population has the same probability of being in the sample
  42. What is a representative sample?
    Sample parallels the population on important things
  43. What are the 4 questions you should ask with respect to sampling?
    • Who should we observe
    • When are we observing them
    • What are we going to observe
    • Where are we going to observe them
  44. How do you get a random sample?
    • Find the population
    • Determine sampling frame
  45. What is a sampling frame?
    a list of all members of the population
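Given a sampling frame, drawing a random sample is straightforward. A minimal sketch with a hypothetical roster:

```python
import random

# Hypothetical sampling frame: a list of every member of the population.
sampling_frame = [f"student_{i}" for i in range(500)]

# random.sample draws without replacement, so every member of the
# frame has the same probability of ending up in the sample.
sample = random.sample(sampling_frame, k=25)
```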
  46. What is stratified random sampling?
    • Split the population into groups based on attributes, then sample randomly from the groups
    • Difficult to get a truly random sample, and the population you want to generalize to may be incredibly large.
    • e.g. divide the population into male and female and then randomly sample
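The male/female example can be sketched as code; the strata and sample sizes here are hypothetical:

```python
import random

def stratified_sample(strata, n_per_stratum):
    """Draw a random sample of the same size from each stratum.

    strata: dict mapping a group name to the list of its members.
    """
    return {name: random.sample(members, n_per_stratum)
            for name, members in strata.items()}

strata = {"male": [f"m{i}" for i in range(100)],
          "female": [f"f{i}" for i in range(100)]}
picked = stratified_sample(strata, 10)  # 10 randomly chosen from each group
```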
  47. What is Cluster sampling?
    • Natural groups exist, so we take a random sample from that.
    • e.g. middle school kids may be questioned for marketing research
  48. What is systematic sampling?
    • take every nth person in the sampling frame.
    • may run into bias if there is periodic or cyclic ordering in the sampling frame
    • e.g. take every third patient who signs in at a doctor's office.
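Taking every nth person amounts to a slice of the sampling frame. A minimal sketch with a random starting offset (the patient list is hypothetical):

```python
import random

def systematic_sample(frame, step):
    """Take every `step`-th member of the sampling frame,
    starting from a random offset within the first interval."""
    start = random.randrange(step)
    return frame[start::step]

patients = [f"patient_{i}" for i in range(30)]
every_third = systematic_sample(patients, 3)  # 10 of the 30 patients
```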
  49. What is convenience sampling?
    • subjects are picked because they are easy to get.
    • may leave out important groups, or get sampling bias because certain people tend to participate
    • e.g. psychology students at a university
  50. What is snowball sampling?
    • see if participants can put you in contact with other people who might be good participants.
    • e.g. people being treated for a rare disease may know other people with the same disease.
  51. What is purposive sampling?
    • You want to look at a specific group so you purposely select people from that group.
    • e.g. asking to use schizophrenics at a state hospital.
  52. What is the power of a study?
    the ability to detect an effect
  53. What 4 factors affect power?
    • the size of the effect
    • the size of the type I error
    • the variability
    • reliability of the measures used
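How those factors trade off can be sketched with a textbook normal-approximation power calculation for a one-sided one-sample z-test (this formula is standard statistics, not from the card set; the effect size and n below are hypothetical):

```python
from statistics import NormalDist

def power_one_sided(effect_size, n, alpha=0.05):
    """Approximate power of a one-sided one-sample z-test.

    effect_size: standardized effect (mean shift / SD), so greater
    variability shrinks effect_size and lowers power.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # critical value for alpha
    noncentrality = effect_size * n ** 0.5     # shift in standard-error units
    return 1 - NormalDist().cdf(z_alpha - noncentrality)
```

Plugging in numbers shows the pattern the card lists: a bigger effect, a bigger sample, or a more lenient alpha (larger type I error) all raise power.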
  54. What is a type 1 error?
    • False positive result
    • denoted by alpha
    • equals the significance level
  55. What is a type 2 error?
    • False negative
    • beta
    • 1-beta is the power
  56. What is the standard deviation squared?
    • The variance: the variability of the thing being studied
    • e.g. look at the difference in the size of coins. Pennies and nickels are easy to tell apart because each is minted to the same size, with little deviation from the norm.
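The relationship between standard deviation and variance is easy to verify directly; the coin diameters below are made-up numbers:

```python
from statistics import pstdev, pvariance

# Hypothetical penny diameters in mm: minted to a tight spec,
# so there is very little deviation from the norm.
penny_mm = [19.00, 19.05, 18.95, 19.02, 18.98]

sd = pstdev(penny_mm)      # standard deviation
var = pvariance(penny_mm)  # variance = standard deviation squared
```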
  57. What are Huff's five questions?
    • Who says so
    • How do they know
    • What's missing
    • Did someone change the subject
    • Does this make sense
  58. What are confounds?
    two or more variables are combined so their effects cannot be separated.
  59. What are 3 types of confounds?
    • Natural -- age, race, cohort
    • Treatment -- some don't want to take drugs
    • Measurement -- if we give a questionnaire out that is higher than participants reading level we may not be measuring what we think we are.
  60. What is internal validity?
    the ability to make causal attribution about variables based on the design and results of the study
  61. What is external validity?
    The ability to generalize the results from the study sample to another population or situation
  62. What are the 7 threats to internal validity?
    • History
    • Maturation
    • Testing
    • Instrumentation change
    • Statistical regression
    • Selection effects
    • Reactivity
  63. How is history a threat to internal validity?
    • During an experiment subjects may be exposed to life events that impact the outcome.
    • e.g. 9/11, or starting school for little kids
  64. How is maturation a threat to internal validity?
    • Maturation effects occur over the normal passage of time. normal growth.
    • e.g. kids may have more word knowledge at the end of an experiment than they did at the beginning simply because they are maturing.
  65. how is testing a threat to internal validity?
    • completing pretests may affect outcome.
    • e.g. if we look at word knowledge subjects may remember some of the words from the pretest during the posttest
  66. how is instrumentation change a threat to internal validity?
    • changing what you are using to measure, instructions, or methods can all affect the outcome of a study.
    • e.g. observers may not agree over time; you need to recalibrate their definitions of behaviors
  67. How is statistical regression a threat to internal validity?
    • group mean tends to get closer to the norm upon retesting
    • e.g. if you have a group that is on the extreme end of anxiety, they will tend to score closer to the norm when retested.
  68. how is selection a threat to internal validity?
    • the sample may be biased.
    • nonrandom assignment can affect outcome
    • random assignment may not be as representative as we think.
    • e.g. age, race, and SES can affect outcome
  69. how is reactivity a threat to internal validity?
    • evaluation apprehension -- e.g. subjects don't want to look bad in front of their wife
    • Novelty effects -- subjects may not behave the same after weeks of study.
  70. How do you control experimenter bias?
    • blind or masked experiment.
    • This means that the person doing the research doesn't know group assignments.
  71. What are the 4 subject roles?
    • Good subject
    • Negativistic subject
    • Faithful subject
    • Apprehensive subject
  72. What is the good subject role
    subject wants to give information consistent with what they perceive the goals of the study to be.
  73. What is the negativistic subject role?
    subject provides information contrary to their perceived goal of the study.
  74. What is the faithful subject role?
    subject tries to follow the instructions to the letter
  75. What is the apprehensive subject role?
    subject is concerned about performance and being judged.
  76. What are demand characteristics?
    • cues that are inadvertently part of the experimental situation that influence how subjects react.
    • e.g. a baseline measure of PTSD symptoms based on self-report alerts the participant that we are trying to reduce PTSD symptoms
  77. What is a double blind experiment?
    both the frontline researcher and the participants are unaware of group assignments
  78. What are experimental expectancies?
    • experimenter's expectations regarding the results may influence the outcome
    • e.g. if experimenters are aware of what group subjects are in, they may inadvertently treat them differently.
    • e.g. Clever Hans: the horse kept stomping until the experimenter reacted, not because it could count
  79. How can we control demand characteristics?
    • reduce the cues you give people
    • figure out how to motivate them to give you a more realistic answer
    • role play control groups
  80. What are the 3 components of external validity?
    • Structural -- what are our methods
    • Functional -- real world vs. experiment
    • Conceptual -- does it matter to people
  81. What 4 things characterize experimental research?
    • Use of control group
    • Random assignment to groups
    • Researcher manipulates the IV and observes impact on DV
    • maximum possible attribution of causality
  82. What is efficacy?
    proves a causal relationship between what we are doing and what is happening
  83. What is effectiveness?
    Causality is already established, but we want to know if it will work for our particular client.
  84. When should you use test-retest?
    to evaluate reliability over time.
  85. when should you use internal consistency?
    when items are taken from a theoretical population of items
  86. When shouldn't you use internal consistency?
    when symptoms, behaviors etc. are not causally related to each other
  87. how do you evaluate content validity?
    • are the items in a test consistent with the definition
    • agreement between experts
    • alternative method
  88. how do you evaluate criterion validity?
    • do the scores correlate with other scores that it should
    • do the scores correlate with other scores that they should
    • does an ability measure correlate with performance
  89. how do you evaluate construct validity?
    do items converge or diverge?