CHFD 5110 Exam 2

Card Set Information

Author:
jacquiroxx
ID:
71304
Filename:
CHFD 5110 Exam 2
Updated:
2011-03-07 19:51:54
Tags:
Research
Description:
Research Methods


  1. Measurement
    The process of observing and recording the observations that are collected as part of a research effort
  2. Construct Validity
    Are you measuring what you intended to measure?
  3. Translation Validity
    • Under the umbrella of construct validity
    • Focuses on whether the operationalization (ie- measure) is a good translation of the construct
  4. Face Validity
    • On its face, does the operationalization look like a good translation of the construct?
    • Does it seem like it fits for this?
  5. Content Validity
    Operationalization is checked against the relevant content domain for the construct
  6. Criterion-Related Validity
    • The performance of your operationalization (ie- measure) is checked against some other criterion
    • A prediction of how the operationalization will perform on some other measure based on your theory or construct
  7. Predictive Validity
    • Under criterion-related validity
    • Operationalization's ability to predict something it should theoretically be able to predict
    • ie- Is your SAT score able to predict your success in college?
  8. Concurrent Validity
    • Under criterion-related validity
    • Operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between
    • ie- Can a measure of depression distinguish between people who are depressed and those who aren't?
  9. Convergent Validity
    • Under criterion-related validity
    • Degree to which the operationalization is similar to other operationalizations to which it should theoretically be similar
    • You'll see this one the most
    • ie- Does your measure of depression correlate highly with another measure of depression?
    • You want high correlations, but not TOO high :)
  10. Discriminant Validity
    • Under criterion-related validity
    • Degree to which the operationalization is not similar to other operationalizations that it theoretically shouldn't be similar to
    • ie- "Do you like your therapist?" & "How well do you cope in your family?"... they SHOULDN'T correlate
    • You want low correlations
  11. Inadequate Preoperational Explication of Constructs
    • Threat to construct validity
    • A really long way to say that people didn't do their jobs: they didn't define their constructs well enough before operationalizing them
  12. Mono-Operation Bias
    • Threat to construct validity
    • Pertains to treatment or program
    • Used only one version of the treatment or program
    • ie- Trying only one dosage of a drug in a drug trial instead of trying multiple dosages
  13. Mono-Method Bias
    • Threat to construct validity
    • Pertains to the measures or outcomes
    • Only used one type of measure instead of several
    • This is a problem because of accuracy- we need multiple ways to look at results!!
  14. Hypothesis Guessing
    • Threat to construct validity
    • People guess the hypothesis and respond to it rather than respond naturally
    • This could cause you to mislabel the "cause." You'll attribute the effect to the treatment rather than to good guessing
  15. Evaluation Apprehension
    • Threat to construct validity
    • Ppl make themselves look good just because they're in a study
  16. Experimenter Expectancies
    • Threat to construct validity
    • The experimenter can bias results consciously or unconsciously
  17. Confounding Constructs & Levels of Constructs
    • Threat to construct validity
    • Conclude that the treatment has no effect when it's only that level of the treatment which has none (might need more or less to see a result)
  18. Interaction of Different Treatments
    • Threat to construct validity
    • Ppl get more than 1 treatment
    • Labeling issue
    • ie- Participants don't only get tutoring from your program; they also get help from family at home
  19. Interaction of Testing & Treatment
    • Threat to construct validity
    • Does the testing itself make the groups more sensitive or receptive to the treatment?
    • Labeling issue
  20. Restricted Generalizability Across Constructs
    • Threat to construct validity
    • You didn't measure your outcomes completely
    • You didn't measure some key affected constructs at all
    • ie- Viagra was supposed to help with blood pressure; its best-known effect wasn't among the planned outcome measures
  21. Reliability
    • Consistency or repeatability of a measure
    • Getting the same result every time you do the study
    • Based on True Score Theory of Measurement
  22. True Score Theory of Measurement
    • Observed score = True ability + Random error
    • We never really have the true score!
  23. Random Error
    • Caused by any factors that randomly affect measurement of a variable across the sample
    • Things that happen to you on a daily basis but don't change the dynamics of the entire group
    • ie- One person is having a bad day the day you administer a depression measure
  24. Systematic Error
    • Caused by factors that systematically affect measurement of a variable across the sample
    • Does affect the group average
    • ie- Temperature in the room
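
A quick simulation of cards 22-24 (a Python/NumPy sketch with made-up numbers, not part of the original cards): random error leaves the group average alone, while systematic error shifts it.

```python
import numpy as np

rng = np.random.default_rng(0)

true_scores = rng.normal(50, 10, size=1000)   # each person's true score
random_error = rng.normal(0, 5, size=1000)    # e.g., one person's bad day
systematic_shift = 3.0                        # e.g., a hot room affecting everyone

observed = true_scores + random_error + systematic_shift  # True Score Theory

print(true_scores.mean())                    # ~50
print((true_scores + random_error).mean())   # still ~50: random error cancels out
print(observed.mean())                       # ~53: systematic error shifts the group average
```
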
  25. Error Component
    • The part of the observed score that isn't true score
    • Made up of random error and systematic error
  26. Ways to Reduce Measurement Error
    • Pilot test the measures
    • Train interviewers/observers
    • Double-check data
    • Statistically adjust for error
    • Use multiple measures of the same construct
  27. Reliability Ratio
    • Reliability ranges from 0 to 1
    • It's the proportion of the observed score that is true score (true-score variance / observed-score variance)
    • Shows how much true score we have and how much error we have
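
A sketch of the reliability ratio on simulated scores (the variances and the 0.8 figure are made-up assumptions, not from the cards):

```python
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(50, 10, size=10_000)
observed = true_scores + rng.normal(0, 5, size=10_000)  # true score + random error

# Reliability ratio = true-score variance / observed-score variance.
# With sd(true) = 10 and sd(error) = 5, we expect 100 / (100 + 25) = 0.8.
print(round(true_scores.var() / observed.var(), 2))  # ~0.8
```
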
  28. Inter-Rater Reliability
    • One type of reliability
    • Assesses the degree to which different raters/observers give consistent estimates of the same phenomenon (ie- agree in their estimates of the true score)
    • Percent agreement
    • Usually the cut-off for reliability is .80 (about 80% agreement)
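
Percent agreement is just matches divided by total cases. A minimal sketch with hypothetical codes:

```python
def percent_agreement(rater_a, rater_b):
    """Fraction of cases where two raters gave the same rating."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two raters coding the same 10 observations (hypothetical codes)
a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
b = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2]
print(percent_agreement(a, b))   # 0.9 -> clears the usual .80 cut-off
```
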
  29. Test-Retest
    • One type of reliability
    • Correlation btw 2 observations on the same test administered to the same (or similar) sample on 2 diff occasions
    • Time btw observations is crucial
    • Looking for stability over time
  30. Parallel-Forms
    • One type of reliability
    • Correlation btw 2 observations on parallel forms of a test administered to the same sample
    • 2 forms of the "test" can be used independently but it requires that you generate a lot of items (b/c you need to create 2 diff versions)
    • Assumes randomly divided halves are equivalent
    • Looking for stability across forms
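
Both test-retest and parallel-forms reliability come down to correlating two sets of scores from the same people. A sketch with made-up scores (NumPy's corrcoef gives the Pearson correlation):

```python
import numpy as np

# Same 8 people tested on two occasions (hypothetical scores);
# for parallel forms, time_2 would be scores on the alternate form instead
time_1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time_2 = np.array([13, 14, 12, 19, 13, 17, 12, 18])

r = np.corrcoef(time_1, time_2)[0, 1]   # Pearson correlation
print(round(r, 2))   # a high r means stable scores over time (or across forms)
```
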
  31. Internal Consistency
    • One type of reliability
    • Single test administered to a sample on one occasion
    • Assesses the consistency of the results for diff items for the same construct w/in the measure
    • Average item-total correlation
    • How consistent is the measure inside itself?
    • Probably see Cronbach's alpha (α) the most (the average of all possible split-half correlations)
  32. Cronbach's Alpha (α)
    • Part of internal consistency reliability
    • The average of all possible split-half correlations
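
A sketch of the usual Cronbach's alpha computation, α = k/(k-1) · (1 - Σ item variances / variance of total score), on hypothetical scale data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores on one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# 5 respondents answering a 4-item depression scale (hypothetical data)
data = [[3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 5, 4],
        [1, 2, 1, 2],
        [3, 3, 4, 3]]
print(round(cronbach_alpha(data), 2))   # ~0.94 for this toy matrix
```
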
  33. Reliable, Not Valid
    • Target analogy: shots tightly clustered, but away from the bull's-eye (consistent, yet consistently off)
  34. Valid, Not Reliable
    • Shots spread widely but centered on the bull's-eye on average (right on average, yet inconsistent)
  35. Neither Valid nor Reliable
    • Shots scattered and off-center
  36. Both Reliable and Valid
    • Shots tightly clustered on the bull's-eye (consistent and correct)
  37. Relationship of Validity & Reliability
    • Both are important, but they're distinct qualities
    • You can have one without the other
    • You want it to be reliable AND valid if you can
  38. Nominal Level of Measurement
    • Numerical values simply name the attribute
    • No ordering of values is implied (the number is a place holder, not an indication of how good or bad something is)
    • Numerical values are simply "short codes" for longer names
    • ie- "1" for male and "2" for female
  39. Ordinal Level of Measurement
    • Attributes can be rank-ordered
    • Distances/intervals between attributes have no meaning (you can rank who's higher and who's lower, but you don't know the gap between them)
    • ie- Degrees of agreement (strongly agree/agree/neutral...)
  40. Interval Level of Measurement
    • Attributes can be rank-ordered
    • Distances/intervals between attributes do have meaning
    • ie- Temperature on Fahrenheit scales- if it's 65 today and 75 tomorrow, we can say tomorrow will be 10 degrees warmer
    • Ratios don't make sense (80° isn't twice as hot as 40°)
    • No meaningful absolute zero (0° is not the absence of heat)
  41. Ratio Level of Measurement
    • Always a meaningful absolute zero
    • ie- Anything you count, or income in dollars
    • Ratios make sense (If you made $2 and I made $4, I made twice as much as you did)
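
A quick numeric check of why ratios need a true zero: the same two temperatures give different ratios in Fahrenheit and Celsius, while a money ratio survives a change of units.

```python
# Interval scale: Fahrenheit ratios change with the (arbitrary) zero point
f_cold, f_warm = 40, 80
c_cold, c_warm = (f_cold - 32) * 5 / 9, (f_warm - 32) * 5 / 9
print(f_warm / f_cold)   # 2.0 in Fahrenheit...
print(c_warm / c_cold)   # ~6.0 in Celsius: "twice as hot" isn't a real claim

# Ratio scale: dollars have a true zero, so ratios survive a unit change
print(4 / 2)         # $4 vs $2 -> 2.0
print(400 / 200)     # same incomes in cents -> still 2.0
```
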
  42. Survey
    • Frequently used method in our field
    • Questionnaires & interviews
  43. Structured Questions
    • Dichotomous response format (ie- "yes" or "no")
    • Questions based on level of measurement (ie- rank order of preferences)
    • Filter or contingency questions (1st question determines qualification to answer the next one)
  44. Filter or Contingency Questions
    • First question determines qualification or necessary experience to answer the next question
    • ie- "If no, skip to question 4."
  45. Double-Barrelled Questions
    A single question that actually asks about more than one thing at once, so one response can't answer it cleanly (ie- "Is the program fun and educational?")
  46. Structured Response Formats
    • Fill-in-the-Blank
    • Check the Answer
    • Circle the Answer
  47. Unstructured Response Formats
    • Written text
    • ie- "Tell me more about your experience with..."
  48. Indexes*
    • When you combine 2+ variables to reflect a more general construct
    • ie- SES Index
  49. Constructing an Index
    • Conceptualization
    • Operationalization & Measurement
    • Development of rules for calculating the score
    • Validation of the index score
    • CODV!!
    • Consider- does it match what I'm trying to measure??**
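
One possible scoring rule for an SES-style index (a sketch; the components and the z-score-average rule are assumptions, since the cards don't fix a formula): standardize each variable, then average.

```python
import numpy as np

# Hypothetical SES components for 4 people
income = np.array([30_000, 55_000, 90_000, 42_000])
education_years = np.array([12, 16, 20, 14])
occupation_prestige = np.array([35, 60, 80, 45])

def z(x):
    """Standardize so each component contributes on the same scale."""
    return (x - x.mean()) / x.std(ddof=1)

ses_index = (z(income) + z(education_years) + z(occupation_prestige)) / 3
print(ses_index.round(2))   # one combined SES score per person
```
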
  50. Scaling
    • Take qualitative ideas (like willingness to have immigrants in your country) and come up with a way to measure them quantitatively
    • Where a lot of error occurs
    • Typically yields a single numerical score that represents the construct of interest
    • ie- Degree of depression
    • Is a process, not a response scale... so it requires more development
    • Seen more than indexing
  51. Purpose of Scaling
    • Hypothesis Testing: Is the construct or concept a single dimensional one?
    • Exploration: What dimensions underlie some ratings?
    • Scoring: For assigning values to responses
  52. One-Dimensional Constructs
    • Higher/Lower
    • ie- Age, Height, GPA
  53. Two-Dimensional Constructs
    • Constructs must be related
    • ie- Shared activity & communication to measure marital satisfaction
  54. Likert Scaling
    • A unidimensional scaling method
    • Summative scale
    • Scaling process, not response-format
    • Start w/ large set of items you think all reflect the same construct
    • Have a group of judges rate the items on how much they think it relates to the overall concept
  55. Item-Total Correlations
    • Likert scaling
    • Correlate each item with the total (summed) score across all items
    • Throw out items with low item-total correlations
  56. Internal Consistency
    • Likert scaling
    • For each item, compare the average rating from the top 1/4 of judges with the average from the bottom 1/4
    • Items with a bigger gap between the two groups give better discrimination
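
A sketch of both Likert item checks on one made-up judges-by-items matrix: item-total correlations flag items to throw out, and the top-quarter vs. bottom-quarter comparison shows discrimination.

```python
import numpy as np

# Judges x items matrix of ratings (hypothetical)
ratings = np.array([[3, 4, 3, 4],
                    [2, 2, 5, 2],
                    [4, 4, 2, 4],
                    [1, 2, 4, 2],
                    [3, 3, 1, 3],
                    [5, 4, 3, 5],
                    [2, 1, 4, 1],
                    [4, 5, 2, 4]])
total = ratings.sum(axis=1)

# Item-total correlations: keep items that track the summed score
for i in range(ratings.shape[1]):
    r = np.corrcoef(ratings[:, i], total)[0, 1]
    print(f"item {i}: r = {r:.2f}")   # item 2 correlates poorly -> throw it out

# Discrimination: top 1/4 vs bottom 1/4 of judges (2 of 8) on each item
order = np.argsort(total)
top, bottom = ratings[order[-2:]], ratings[order[:2]]
print(top.mean(axis=0) - bottom.mean(axis=0))  # bigger gap = better discrimination
```
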
  57. Qualitative Measures
    • If it's not numbers, it's qualitative
    • Consists of words
    • Used to generate new theories/hypotheses, achieve deep understanding of an issue, and develop detailed stories to describe a phenomenon
    • More about understanding something, not proving something
  58. Quantitative Data
    Consists of numbers
  59. Qualitative Traditions
    • A big umbrella, kind of like a theory
    • Includes: Ethnography, Phenomenology, Field research, and Grounded theory
  60. Ethnography
    • Qualitative tradition
    • Study within the context of a culture
    • Watch only, no interaction
    • Concerned with cause & effect
    • Sometimes called "naturalistic research"
    • Used mostly in anthropological research
    • Most common approach is participant observation
    • No limits on what will be observed and no real ending point
  61. Phenomenology*
    • Qualitative tradition
    • Interested in the phenomenon from the perspective of the participants
    • Acknowledges that objectivity is impossible to achieve; researchers often incl. a section in their report about themselves to acknowledge biases
  62. Field Research
    • Qualitative tradition
    • Researcher observes a phenomenon in its natural state (in situ)
    • An observation in the environment where they'd normally be
    • No implemented controls or experimental conditions to speak of (nature is emphasized over culture here!)
    • Only concerned with observing. Not concerned with cause & effect
    • Especially useful in observing social phenomena over time
  63. Grounded Theory
    • Qualitative tradition
    • To develop a theory, grounded in observation, about a phenomenon of interest
    • Build a theory from the ground up rather than going in with preconceived notions
    • Interview changes as you learn things
    • Eventually get to a place where you've fully explored the phenomenon & have an idea what's going on
  64. Qualitative Methods
    Think of these not as the umbrella, but as separate umbrellas under the main umbrella of traditions
  65. Participant Observation
    • Qualitative Method
    • Researcher becomes a participant in the culture being observed
    • Takes some time
  66. Direct Observation
    • Qualitative Method
    • Researcher not a member of the culture being studied but remains unobtrusive
    • Just observing- not getting involved
    • Don't get as much info as you would if you were participating
  67. Unstructured Interviewing
    • Qualitative Method
    • Direct interaction btw. the researcher and respondent
    • No set direction- just kind of see where it goes
    • There's always a little bit of structure, though, since you're interviewing this person for a reason
  68. Case Studies
    • Qualitative Method
    • Intensive study of a specific individual or specific context
  69. Traditional Criteria for Judging Qualitative Research
    • Internal validity
    • External validity
    • Reliability
    • Objectivity
  70. Alternative Criteria for Judging Qualitative Research
    • Credibility
    • Transferability
    • Dependability
    • Confirmability
  71. Credibility
    Establishing that the results are credible from the perspective of the participant
  72. Transferability
    Degree to which results can be generalized to other contexts
  73. Dependability*
    • Description by the researcher of changes w/in the context & how these might affect conclusions
  74. Confirmability
    Degree to which others can confirm or corroborate the results
  75. Indirect Measures
    • Unobtrusive measure
    • The researcher collects data w/out the participant being aware of it
  76. Content Analysis
    • Unobtrusive measure
    • Systematic analysis of text in order to identify patterns
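
A toy content analysis (hypothetical transcripts and codes): count how often coded terms turn up across texts to surface patterns.

```python
from collections import Counter
import re

# Hypothetical interview transcripts; real coding schemes are set up beforehand
transcripts = [
    "I felt supported by my family but stressed about money.",
    "Money worries came up again; family helped me cope.",
]
codes = ["family", "money", "stress"]

words = re.findall(r"[a-z]+", " ".join(transcripts).lower())
counts = Counter(words)
# Tally words that start with each code stem (so "stressed" counts for "stress")
print({c: sum(n for w, n in counts.items() if w.startswith(c)) for c in codes})
```
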
  77. Secondary Analysis*
    When you use data that's already been collected (ie- census report) for new research
