Research Methods

Card Set Information

Author:
faulkebr
ID:
103033
Filename:
Research Methods
Updated:
2011-09-26 22:57:45
Tags:
Lecture
Folders:

Description:
Research Hypothesis and Variables
The flashcards below were created by user faulkebr on FreezingBlue Flashcards.


  1. Most important reasons for doing research:
    • 1. to solve practical problems - applied research
    • 2. To satisfy curiosity and gain scientific understanding - Basic Research
  2. Reasons for doing research:
    To solve practical problems - Applied Research
    • -Motivation: SPECIFIC need
    • -Purpose: knowledge applied to a very specific situation
    • -Application: narrow
  3. Reasons for doing Research
    To satisfy curiosity and gain scientific understanding: Basic Research
    • -Motivation: curiosity - desire for explanation
    • -Purpose: basic principles
    • -Application: broad
  4. Research Hypothesis
    Version 1: Somewhat abstract
    • 1. Theoretical constructs (concepts or conceptual variables, likely not directly observable)
    • 2. State (make an educated guess):
    • a. a relationship between theoretical constructs, OR
    • b. that one theoretical construct causes another
    • Ex: violent TV content - aggression
    • Dreaming in color - creativity
    • Bumper stickers - road rage
  5. Research Hypothesis
    Version 2: (More specific and testable):
    convert abstract concepts (theoretical constructs) into specific (testable) variables
  6. Variable -
    -has a range of values (levels)
  7. Force =
    mass X acceleration
  8. Weight =
    mass X gravitational field strength
  9. Canadian dollar =
    1.18 X US dollar
  10. Fine motor skills are associated with:
    Gender
  11. Quantitative
    continuous numerical scale
  12. Categorical
    Discrete
  13. Kinds of Variables:
    Experiments (cause)
    • -Independent Variable
    • -Dependent Variable
    • -Control Variable
  14. Independent Variable
    E changes (manipulates) (possible cause)

    first in time: independent of the results of the study.
  15. A level of a variable -
    one value of an I.V.

    An independent variable has at least 2 levels of a treatment. If not, it is NOT a variable.
  16. Dependent Variable
    measured by E (possible effect)

    Second in time: depends on I.V.
  17. Control Variable
    could be independent variable but held constant (not the same as control group or condition)
  18. Kinds of variables:
    Non-experiments (no manipulation)
    (relationships)
    • -Predictor variable
    • -Outcome or Criterion variable
  19. Predictor variable-
    • No manipulation
    • Possible cause; first in time
  20. Outcome or Criterion Variable
    Possible EFFECT; second in time
  21. I.V. before ___
    P.V. before ___
    D.V.; C.V. (outcome/criterion variable)
  22. Comparing effects of different amounts of variable; one amount for each subject;
    Between subjects comparison
  23. Comparing effects of different amounts of variable; all amounts for each subject:
    within subjects comparison
  24. Operational Definitions:
    Exact description of procedure used to generate independent variable: how to produce each level of the independent variable

    Exact description of how to "obtain" dependent variable: how to measure or recognize a particular thing or characteristic
  25. Operational definitions for independent variables:
    Statement describing what to do to produce (create, construct, generate) different amounts or levels of the variable

    Example: fear vs no fear / frown vs no frown (categories)
  26. operational definitions for dependent variables (& predictor and outcome variables):
    statement of what to do to measure (or how to measure) a particular thing or characteristic

    ex: fear; smile vs frown; neuroticism
  27. O.D. for dependent variables must be:
    • -Clear, precise, objective...so repeatable by others
    • -Captures at least some part of the concept you are trying to measure (valid)
    • -Can be done consistently (reliably); clarifies the meaning of the concept

    Practical
  28. Research Hypothesis: Watching violent cartoons increases aggression in children

    I.V.?
    D.V.?
    I.V. - Cartoons with violence and Cartoons without violence (Describe criteria for violence)

    D.V. - Number of times child hits BoBo doll (criteria for hit?)
  29. Purposes of an operational definition:
    • 1. Public verification
    • 2. Blueprint for experiment
    • (ex: fear causes thigmotaxis) - agreed-upon methodology, systematic observation
    • 3. Evaluation of research
    • captures the ideas that the researcher claims to test; quality of research
  30. examples of established relationships
    • gas, pressure, temperature
    • crowds and helping
    • noise and performance
    • weight loss and activity
  31. Good theory if:
    it explains a variety of facts
  32. If it's not a risky prediction, then:
    poor predictive power
  33. Testing research hypotheses and theories:
    Develop a number of different hypotheses/theories

    design and conduct "critical tests" of each

    Gather (1) falsifying and (2) confirming (supporting) evidence
  34. Theory/Hypothesis/Explanation:
    Lots of supporting evidence and little or no falsifying evidence and closest to the truth

    (at the moment)
  35. Induction:
    Collect specific observations to infer general principles
  36. Inductions are not:
    logically valid
  37. Research hypotheses can be written in _____ ____ format.
    "if-then"
  38. Research hyp. in "if-then" format

    If the hypothesis that the full moon increases aggression is true (antecedent), then there should be more fights in bars around town on nights of full moons.

    Consequent?
    Consequent: there should be more fights.
  39. Fallacy of affirming the consequent
    • 1. If P, then Q
    • 2. Q
    • 3. Therefore P

    • If Smith is a mother, then Smith is female
    • Smith is female
    • Therefore Smith is a mother
  40. Results NEVER prove a hypothesis to be correct
    (fallacy of affirming the consequent)
    It is always possible that new data will be collected.
  41. Modus Tollens Argument
    Logically valid

    • if p then q
    • not q
    • therefore not p
  42. Measurement
    the act of characterizing observations
  43. Measurement

    1.
    Develop o.d. - clear procedures for measuring or classifying observations and producing variables: publicly verifiable, reliable

    a.) Decide in advance what you are going to measure (esp. important in naturalistic observation)

    b.) Decide how to assign numbers (or categories) to observations
  44. Confirmation Bias:
    tend to notice and look for evidence that confirms expectations
  45. Measurement:

    2.
    Collect data
  46. Measurement
    3.
    Summarize data

    Data from individuals not representative

    a.) Measures of central tendency

    • Mode
    • Median
    • Arithmetic mean
    • Geometric mean
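The four measures of central tendency above can be sketched with Python's standard `statistics` module (the scores below are hypothetical, not from the cards):

```python
import statistics

# Hypothetical set of scores for illustration.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mode(scores))            # most frequent value: 4
print(statistics.median(scores))          # middle value: 4.5
print(statistics.mean(scores))            # arithmetic mean: 5.0
print(statistics.geometric_mean(scores))  # nth root of the product of n scores
```

The geometric mean is mainly used when scores are ratios or growth rates; the mode, median, and arithmetic mean are the ones most often reported.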
  47. Different Indices of Variability
    All measures of variability show how much scores deviate from the standard, baseline, or reference.
  48. Different indices of variability

    Variance or Mean Square (MS):
    the average of the squared distances each score is from the mean
  49. How do you change to a measure that is similar to the average distance a set of scores is from the mean?
    Take the square root: standard deviation
  50. Finding variance:
    • 1. find mean
    • 2. subtract mean from each score
    • 3. square resulting number
    • 4. add together
    • 5. divide by N
  51. Standard Deviation
    proportional to the average distance a set of scores is from the mean
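The five variance steps above, plus the square-root step for the standard deviation, can be sketched in Python (scores are hypothetical):

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data

mean = sum(scores) / len(scores)         # 1. find mean
deviations = [x - mean for x in scores]  # 2. subtract mean from each score
squared = [d ** 2 for d in deviations]   # 3. square each resulting number
variance = sum(squared) / len(scores)    # 4.-5. add together, divide by N
std_dev = math.sqrt(variance)            # take the square root for the SD

print(variance, std_dev)  # 4.0 2.0
```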
  52. Characteristics of measurement:
    • Sensitivity
    • Reliability
    • validity of a test/measurement
  53. Sensitivity of test/measure

    Sensitivity:
    increases with the number of possible values that can be consistently used.

    • differentiate
    • discriminate
    • distinguish
  54. Reliability of a test/measure
    stability or consistency - sometimes over time; repeatable, reproducible

    precise in sense that random, UNSYSTEMATIC errors of measurement are minimal/nonexistent
  55. Reliability of a test/measure

    Classic approach to reliability:
    Obtained Score = true score + unsystematic measurement error

    (look at many scores from many individuals)
  56. Unsystematic measurement error influences:
    variability of scores - reduces precision
  57. Unsystematic (random) measurement errors should:
    balance out.
  58. How do you determine true score?
    use arithmetic mean
  59. If you use the arithmetic mean, the average error should:
    equal 0
  60. Systematic (measurement) variance is:
    "true" variance
  61. Reliability =
    • systematic variance / (systematic + error variance)
    • = systematic variance / total variance

    • (will get a number between 0 and 1)
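The ratio above can be sketched numerically; the variance values here are made up purely for illustration:

```python
# Hypothetical variance components (not from the cards).
systematic_variance = 8.0  # "true" variance
error_variance = 2.0       # unsystematic measurement error

total_variance = systematic_variance + error_variance
reliability = systematic_variance / total_variance
print(reliability)  # 0.8 -- always falls between 0 and 1
```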
  62. Test-Retest Reliability
    Consistency of a measure from one time to the next
  63. Systematic vs. unsystematic (random) measurement error ....
    • Unsystematic errors - cancel out
    • Systematic errors - do NOT cancel out
    • Unsystematic errors - influence variability
    • Systematic errors - influence mean (poor validity)
  64. Sources that influence test-retest reliability:
    • Time:influenced by time between tests
    • Change
    • Carry over
  65. Parallel Forms Reliability
    Consistency of the results of a test constructed in the same way and from the same content area.
  66. Interitem (internal consistency) Reliability:
    consistency of results across items within a test designed to measure same construct.
  67. Two ways to assess interitem reliability:
    • 1. Average inter-item correlation
    • 2. Split-half reliability
  68. Coefficient alpha:
    Average of all possible split-half reliabilities
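A common computational formula for coefficient (Cronbach's) alpha is k/(k-1) × (1 − Σ item variances / variance of total scores). The sketch below uses that formula rather than literally averaging split halves, with a hypothetical 4-respondent, 3-item data set:

```python
def pvar(xs):
    """Population variance: mean squared deviation from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical item scores: one row per respondent, one column per item.
responses = [
    [3, 4, 3],
    [2, 2, 1],
    [5, 4, 5],
    [4, 5, 4],
]
k = len(responses[0])  # number of items

item_variances = [pvar([row[i] for row in responses]) for i in range(k)]
total_scores = [sum(row) for row in responses]
alpha = k / (k - 1) * (1 - sum(item_variances) / pvar(total_scores))
print(round(alpha, 3))  # 0.934 -- high: these items hang together
```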
  69. Inter-rater or inter-observer reliability
    degree to which different raters or observers give consistent estimates of the same phenomenon
  70. Validity of a test/measure

    Measures or test measures what;
    • it is supposed to
    • it is designed to
    • purports to measure
  71. Content validity of test/measure
    extent to which it uses items or content representative of the area (concept) you are trying to measure
  72. Content validity of test/measure
    -Starts with -
    Construction of test--theory and research

    • 1. Items truly from content area
    • 2. Items are a representative sample of the content area
  73. Criterion Validity of test/measure
    Degree to which a test correlates with some direct and independent measure of what you are trying to measure.

    • IQ
    • boss's ratings of salesmanship
    • weights? crimes?
  74. Direct (usually behavior) and independent measure:
    a single criterion/operational definition
  75. Predictive validity-
    Future
  76. Concurrent Validity -
    Same time
  77. Construct validity of a test/measure
    Extent to which a test or measure can be shown to measure a particular theoretical construct-conceptual variable (unobservable abstract trait or feature)

    • 1. test produces "numbers" distinct from that produced by a measure of another construct
    • 2. Based on accumulated evidence
  78. Construct validity of test/measure

    1. Define clearly;
    characteristic or trait(construct) to be measured-theoretical relationships etc.

    Sensation seeking: a trait describing the tendency to seek novel, varied, complex, and intense sensations and experiences, and the willingness to take risks for the sake of the experience
  79. Construct validity of a test/measure

    2.
    Correlate test with a variety of measures that should be positively, negatively, or not correlated with characteristic (construct)
  80. Construct validity of a test/measure

    3.
    Examine the pattern of results using a diverse body of evidence.

    • Convergent validity
    • Discriminant Validity
    • Criterion Validity
  81. Convergent validity correlations-
    strong cor. between test and conceptually similar measures
  82. Discriminant Validity Correlations-
    low cor. between test and measures of different theoretical constructs
  83. Criterion Validity correlations
    strong cor. between test and direct and independent measure (ex. behavior)
  84. Often reliability means:
    that test is correlated with itself-reproducible
  85. Campbell and Stanley: two criteria regarding experimentation
    • internal validity
    • external validity
  86. Internal Validity:
    degree to which a study measures the effect of hypothesized cause or independent variable

    -Did different levels of the treatment (IV) cause the change in the outcome(DV)?
  87. External Validity:
    Degree to which the findings can be generalized to other subjects and to other situations (settings, levels of variables, ways of measurement, etc.)
  88. Response Acquiescence:
    yea-saying (or the opposite: response deviation--nay saying)
  89. Types of external validity
    • ecological validity
    • population validity
  90. Population validity
    the degree to which sample scores can be generalized to the target population
  91. False Consensus Effect
    tend to overestimate the extent to which other people share our behaviors, attitudes, and beliefs
  92. Construct Validity
    the degree to which the independent and dependent variables accurately reflect or measure what they are intended to.
  93. External Validity
    the extent to which one can generalize from the research setting and participant population to other settings and populations
  94. Internal Validity
    refers to whether one can make causal statements about the relationship between variables
  95. Reliability
    refers to the consistency of behavioral measures
  96. Split-half Reliability
    involves dividing the test items into 2 arbitrary groups and correlating the scores obtained in the 2 halves of the test
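That procedure can be sketched in Python: odd items form one arbitrary half, even items the other, and the two half-scores are correlated (all response data hypothetical):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical responses: one row per person, items 1-6 of a test.
responses = [
    [4, 3, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3, 3],
]
odd_half = [r[0] + r[2] + r[4] for r in responses]   # items 1, 3, 5
even_half = [r[1] + r[3] + r[5] for r in responses]  # items 2, 4, 6
print(round(pearson(odd_half, even_half), 3))        # 0.98
```

Because each half has only half the items, the split-half correlation understates full-test reliability; the Spearman-Brown correction is often applied to adjust for that.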
  97. Stratified Sample
    divides the population into smaller units (strata) and samples from each
  98. Power
    the ability of a statistical test to detect effects that are actually present
  99. Meta-Analysis
    relatively objective technique for summarizing across many studies investigating a single topic

    example: lunar-lunacy hypothesis - full moon makes for more aggression
  100. Psychophysical Scaling
    scaling of concepts such as brightness
  101. Psychometric Scaling
    applies when concepts, such as depression, are measured but usually do not have clearly specified inputs
  102. Weber's Law states:
    For a particular sensory modality, the size of the difference threshold relative to the standard stimulus is a constant.
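A small numeric sketch of Weber's Law: the difference threshold ΔI grows in proportion to the standard stimulus I, so ΔI/I stays constant. The Weber fraction below (k = 0.02, often cited for lifted weights) is an assumed illustrative value, not from the cards:

```python
k = 0.02  # illustrative Weber fraction (assumed value)

for standard in [100, 200, 400]:  # standard stimulus, e.g. grams
    difference_threshold = k * standard
    # The just-noticeable difference grows with the standard...
    print(standard, difference_threshold)
    # ...but the ratio to the standard stays constant at k.
    assert difference_threshold / standard == k
```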
  103. Summated Rating Scale
    A popular way of assessing psychological traits that do not seem to lie on a known physical scale.

    Provides a score for a psychometric property of a person that derives from how that person responds to several statements about a topic that clearly are favorable or unfavorable
  104. Standard Error of the Mean
    is the standard deviation of a distribution of sample means
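That definition can be checked by simulation: draw many samples, take each sample's mean, and compare the standard deviation of those means with sd/√n (all parameters below are hypothetical):

```python
import math
import random

random.seed(0)
n = 25                # sample size
population_sd = 10.0  # hypothetical population standard deviation

# Standard deviation of a distribution of 2000 sample means.
sample_means = [
    sum(random.gauss(50, population_sd) for _ in range(n)) / n
    for _ in range(2000)
]
m = sum(sample_means) / len(sample_means)
sem_empirical = math.sqrt(
    sum((x - m) ** 2 for x in sample_means) / len(sample_means)
)

print(round(population_sd / math.sqrt(n), 2))  # formula: sd / sqrt(n) = 2.0
print(round(sem_empirical, 2))                 # simulated value, close to 2.0
```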
