measurement and data quality

The flashcards below were created by user jam110007 on FreezingBlue Flashcards.

  1. measurement
    the assignment of numbers to represent the amount of an attribute present in an object or person, using specific rules
  2. advantages of measurement
    • removes guesswork
    • provides precise information
    • less vague than words
  3. levels of measurement [4 levels/classes]
    • nominal
    • ordinal
    • interval
    • ratio
  4. nominal
    the lowest level; involves using numbers simply to categorize attributes, e.g., gender and blood type. These numbers DO NOT have quantitative meaning
  5. ordinal
    ranks people based on their relative standing on an attribute, e.g., the Braden scale. It DOES NOT tell us how much greater one level is than another
  6. interval
    objects ordered on a scale that has equal distances between points on the scale

    occur when researchers can rank people on an attribute and specify the distance between them, e.g., psychological tests such as an IQ score. Measures CAN BE averaged, and many statistical procedures require interval data
  7. ratio
    equal distances between score units; there is a rational, meaningful zero

    the highest level of measurement. Ratio measures have a meaningful zero and thus provide information about the absolute magnitude of the attribute, e.g., weight, height, distance
  8. which is the highest level of measurement?
    ratio
  9. we can think of quantitative data as consisting of two parts. what are they?
    • true component
    • error component
  10. obtained score
    • the observed score, an actual data value for a participant 
    • e.g., a patient's heart rate or a score on an anxiety scale
  11. true score
    is what we would get if we had an infallible measure – it's what we would obtain if everything [environment, etc.] were exact
  12. error of measurement
    the difference between an obtained score and the true score, caused by factors that distort measurement 

    some errors are random, while others are systematic, representing a source of bias
  13. obtained score equation
    Obtained score = True score ± Error
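The equation on this card can be sketched in Python. The function name and the Gaussian error distribution are illustrative assumptions, not part of the card:

```python
import random

def simulate_obtained_score(true_score, error_sd=2.0):
    """Obtained score = true score plus/minus a random error of measurement."""
    return true_score + random.gauss(0, error_sd)

# hypothetical participant whose true [infallible] score is 100
obtained = simulate_obtained_score(100)
# with no error [error_sd=0], the obtained score equals the true score
perfect = simulate_obtained_score(100, error_sd=0.0)
```

With a fallible measure the obtained score scatters around the true score; a systematic [biased] error could be modeled by adding a constant offset instead.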
  14. factors contributing to measurement errors
    • situational contaminants
    • transitory personal factors [e.g., fatigue]
    • response-set bias
    • administration variations
    • item sampling
  15. situational contaminants
    • scores can be affected by the conditions under which they are produced 
    • i.e. environmental factors
  16. transitory personal factors
    temporary states such as fatigue, hunger, or mood can influence people's motivation or ability to cooperate, act naturally, or do their best
  17. response-set bias
    enduring characteristics of respondents can interfere with accurate measures
  18. administration variation
    variations in how an instrument is administered from one occasion or person to the next can distort scores
  19. item sampling
    errors can reflect the sampling of items used to measure an attribute
  20. psychometric assessment
    is an evaluation of the quality of a measuring instrument
  21. key criteria in a psychometric assessment
    • reliability 
    • validity [of an instrument]
  22. reliability
    the consistency and accuracy with which an instrument measures the target attribute
  23. reliability assessments involve computing a _?_
    reliability coefficient
  24. reliability coefficients can range from ...?
    • from .00 to 1.00
    • coefficients below .70 are considered unsatisfactory 
    • coefficients of .80 or higher are desirable
  25. what are the three aspects of reliability of interest to quantitative researchers
    • stability 
    • internal consistency 
    • equivalence
  26. stability
    the extent/degree to which scores are similar on two separate administrations of an instrument
  27. stability is evaluated [assessed] by what?
    test-retest reliability
  28. what does test-retest reliability require?
    • requires participants to complete the same instrument on two occasions 
    • appropriate for relatively enduring attributes [e.g, creativity]
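Test-retest reliability is typically quantified as the correlation between the two sets of scores. A minimal sketch in Python, with a hand-rolled Pearson r and hypothetical scores [the function name and data are assumptions]:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical scores from two administrations of the same scale
time1 = [12, 15, 9, 20, 17, 11]
time2 = [13, 14, 10, 19, 18, 12]
r = pearson_r(time1, time2)  # close to 1.0, indicating good stability
```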
  29. internal consistency
    the extent to which all the items on an instrument measure the same unitary attribute [i.e., measure the same trait]

    appropriate for most multi-item instruments

    the most widely used approach to assessing reliability [in nursing research]

    assessed by computing coefficient alpha [Cronbach's alpha]. Alphas ≥ .80 are highly desirable; the higher the coefficient, the more internally consistent the measure
  30. internal consistency is evaluated by what?
    by administering the instrument on one occasion
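Coefficient alpha from a single administration can be computed directly from its standard formula; a minimal sketch with hypothetical item scores [function names and data are assumptions]:

```python
def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned across participants."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    sum_item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# hypothetical 3-item scale answered by 5 participants
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
]
alpha = cronbach_alpha(items)  # about 0.89, above the .80 threshold
```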
  31. equivalence
    • the degree of similarity between alternative forms of an instrument, or between multiple raters/observers using an instrument 
    • most relevant for structured observations
    • in other words, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument
  32. equivalence is assessed by?
    assessed by comparing agreement between the observations or ratings of two or more observers [interobserver/interrater reliability]
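Interrater agreement can be quantified several ways; the simplest is percent agreement. A minimal sketch with hypothetical observer codes [names and data are assumptions; coefficients such as Cohen's kappa additionally correct for chance agreement]:

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of observations that two raters coded identically."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# hypothetical behavior codes from two independent observers
rater_a = ["agitated", "calm", "calm", "agitated", "calm"]
rater_b = ["agitated", "calm", "agitated", "agitated", "calm"]
agreement = percent_agreement(rater_a, rater_b)  # 4 of 5 codes match: 0.8
```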
  33. reliability principles
    • low reliability can undermine adequate testing of hypotheses 
    • reliability estimates vary depending on procedure used to obtain them
    • reliability is lower in homogeneous than in heterogeneous samples 
    • reliability is lower in shorter than in longer multi-item scales
  34. validity
    the degree to which an instrument measures what it is supposed to measure

    remember that reliability and validity are NOT independent qualities of an instrument; however, an instrument CAN be reliable without being valid
  35. four aspects of validity
    • face validity 
    • content validity 
    • criterion-related validity 
    • construct validity
  36. an instrument's _?_ reliability does not provide evidence for its _?_, but _?_  _?_ of a measure is evidence of _?_  _?_
    an instrument's _high_ reliability does not provide evidence for its _validity_, but _low_ _reliability_ of a measure is evidence of _low_ _validity_
  37. face validity
    • refers to whether the instrument looks as though it is an appropriate measure of the construct 
    • based on judgement; no objective criteria for assessment
  38. content validity
    the degree to which an instrument has an adequate sample of items for the construct being measured

    An instrument's content validity is based on judgment. There are no totally objective methods for ensuring adequate content coverage
  39. content validity is evaluated by what?
    by expert evaluation, often via a quantitative measure -> the content validity index [CVI]
  40. content validity index [CVI]
    • indicates the extent of expert agreement
    • value of 0.90 or higher is the standard for establishing excellence in a scale's content validity
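One common way to compute a CVI [the averaging approach for the scale-level index] is to take each item's proportion of experts rating it relevant [3 or 4 on a 4-point scale], then average across items. A minimal sketch with hypothetical expert ratings [function names and data are assumptions]:

```python
def item_cvi(ratings):
    """Proportion of experts rating an item 3 or 4 on a 4-point relevance scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """Scale-level CVI, averaging approach: mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in all_ratings]
    return sum(cvis) / len(cvis)

# hypothetical relevance ratings from 5 experts for a 3-item instrument
ratings = [
    [4, 4, 3, 4, 4],  # item 1: I-CVI = 1.0
    [3, 4, 4, 4, 3],  # item 2: I-CVI = 1.0
    [4, 3, 4, 2, 4],  # item 3: I-CVI = 0.8
]
cvi = scale_cvi_ave(ratings)  # about 0.93, meeting the 0.90 standard
```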
  41. criterion-related validity
    • the degree to which the instrument is related to an external criterion
    • It assists decision-makers by giving them some assurance that their decisions will be fair, appropriate, and, in short, valid.
  42. validity coefficient is calculated by what?
    by analyzing the relationship between scores on the instrument and criterion
  43. two types of criterion-related validity
    • predictive validity
    • concurrent validity
  44. predictive validity
    the instrument's ability to distinguish people whose performance differs on a future criterion
  45. concurrent validity
    the instrument's ability to distinguish individuals who differ on a present criterion
  46. what is a desirable coefficient value for criterion-related validity
    0.70 or higher
  47. construct validity is concerned with what types of questions?
    • what is this instrument really measuring?
    • does it adequately measure the construct of interest?
  48. methods for assessing construct validity
    • known-groups technique
    • testing relationship based on theoretical predictions
    • factor analysis
  49. construct validity is a key criterion for assessing what?
    assessing research quality, and it is most often linked to measurement. It is essentially a hypothesis-testing endeavor, typically guided by theoretical conceptualizations
  50. known-group technique
    groups expected to differ on the target attribute are administered the instrument, and group scores are compared
  51. testing relationships based on theoretical predictions
    can be fallible but offers supporting evidence
  52. factor analysis
    a method for identifying clusters of related items on a scale. It identifies and groups different measures into a unitary scale based on how participants responded to the items, rather than on the researcher's preconceptions
  53. what needs to be evaluated for screening and diagnostic instruments?
    • specificity 
    • sensitivity
  54. specificity
    the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition [yielding true negatives]
  55. sensitivity
    the instrument's ability to correctly identify a “case” [yielding true positives], i.e., to diagnose a condition
  56. likelihood ratio
    Summarizes the relationship between sensitivity and specificity in a single number

    LR+: the ratio of the true-positive rate to the false-positive rate

    LR-: the ratio of the false-negative rate to the true-negative rate
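The last few cards can be worked through from a 2x2 screening table. A minimal sketch in Python with hypothetical counts [here LR+ is computed as sensitivity / (1 - specificity), the true-positive rate over the false-positive rate, and LR- as (1 - sensitivity) / specificity]:

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)              # true positives among cases
    specificity = tn / (tn + fp)              # true negatives among noncases
    lr_pos = sensitivity / (1 - specificity)  # true-positive vs false-positive rate
    lr_neg = (1 - sensitivity) / specificity  # false-negative vs true-negative rate
    return sensitivity, specificity, lr_pos, lr_neg

# hypothetical screening results: 90 TP, 10 FP, 10 FN, 90 TN
sens, spec, lr_pos, lr_neg = screening_stats(tp=90, fp=10, fn=10, tn=90)
# sens = 0.9, spec = 0.9, LR+ is about 9, LR- is about 0.11
```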
Card Set: measurement and data quality [3215 final, 2013-12-05]