Research methods weeks 5-8

Card Set Information

Author: komail
ID: 288626
Updated: 2014-11-10 00:40:50
Tags: Research methods weeks
Description: Research methods weeks 5-8

  1. What percentage of clinicians say they understand the research?
    What percent of articles are considered accessible by those with basic stats training?
    • 33% of clinicians say they understand the research
    • 21% of articles are considered accessible
  2. What were the main points about the ELITE I and ELITE II trials?
    ELITE I showed losartan had a 50% lower risk of mortality, but mortality was only a secondary endpoint.

    In ELITE II, the primary endpoint was mortality, and it showed that mortality was unchanged with losartan.

    Thus, don't use secondary endpoints to make judgements.
  3. What are some stats on highly positive results that end up being contradicted?
    • analyzed 115 articles published from 1990-2003 in major journals and specialty journals that had over 1000 citations
    • 49 reported evaluations of health care interventions
    • 45 claimed the interventions were effective
    • by 2004, 5/6 non-randomized studies and 9/39 randomized trials had been contradicted or had shown exaggerated effects
  4. How much of research may not be randomized?
    80%
  5. What was a problem with the RALES study?
    • spironolactone vs placebo
    • very stringent inclusion/exclusion criteria
    • Spironolactone was protective in the study

    In real life, the patients had a higher risk of hyperkalemia

    • Why? 
    • Pts not monitored as carefully, lower compliance, interaction with other drugs (ACE inhibitors)
    • 24% developed hyperkalemia in real-world use, compared to 1.7% in the trial
  6. What is the main recommendation of the RALES study?
    spironolactone is protective in heart failure, but only in a select few patients who meet inclusion criteria.
  7. What were the main limitations of the study that looked at the risks for new users of NSAIDs?
    • Confounding [non-randomized, lack of detailed clinical info]
    • Availability and use of drugs [COX-2 inhibitors have restricted status, non-selective NSAIDs are available OTC]
    • Duration of use was short-term
    • Generalizability of patient population
  8. What is a time-series? What is the major goal?
    A set of observations on the values that a variable takes at different times

    Goal: to develop a mathematical model that describes the pattern and allows future prediction.
  9. How can you figure out random fluctuations in a time series?
    Take the series, then subtract the trend and the cyclical components; the remaining portion is the random fluctuation.
  10. What is autocorrelation? What is an example?
    error term associated with any observation is related to the error terms of other observations.

    • ex. tests for it: Durbin-Watson statistic
    • Ljung-Box statistic
  11. What is stationarity?
    mean and variance of the stochastic process are constant over time, and the covariance depends only on the distance or lag between two time periods.

    ex. Augmented Dickey-Fuller test
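    A minimal Python sketch of the time-series tools named in cards 8-11: decomposition into trend/cyclical/random parts, the Durbin-Watson and Ljung-Box autocorrelation checks, and the Augmented Dickey-Fuller stationarity test. The simulated monthly series and all variable names are illustrative assumptions, not data from the lecture.

        # Illustrative only: simulated monthly series = trend + seasonal cycle + noise.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import seasonal_decompose
        from statsmodels.stats.stattools import durbin_watson
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from statsmodels.tsa.stattools import adfuller

        idx = pd.date_range("2000-01-31", periods=120, freq="M")
        y = pd.Series(0.05 * np.arange(120)                      # trend
                      + np.sin(np.arange(120) * 2 * np.pi / 12)  # cyclical (seasonal)
                      + np.random.default_rng(0).normal(0, 0.3, 120),
                      index=idx)

        # Subtract trend and cyclical components; what remains is the random fluctuation.
        parts = seasonal_decompose(y, model="additive")
        random_part = parts.resid.dropna()

        # Autocorrelation checks on the remaining fluctuations.
        print("Durbin-Watson:", durbin_watson(random_part))   # near 2 => little autocorrelation
        print(acorr_ljungbox(random_part, lags=[12]))          # Ljung-Box test at lag 12

        # Stationarity: Augmented Dickey-Fuller test (null hypothesis: non-stationary).
        adf_stat, p_value, *_ = adfuller(y)
        print("ADF p-value:", p_value)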
  12. What is an inception cohort?
    Excludes patients with history of disease, just recent diagnosis/onset
  13. Strengths of a cohort study
    • Time sequence; exposure precedes outcome
    • collect info on all relevant predictors/confounders
    • minimizes recall bias
    • estimates incidence
    • can assess multiple outcomes
  14. Weaknesses of cohort studies
    • Large # of people needed
    • loss to follow up
    • expensive and resource-intensive
    • status of subject may change (switch between control and exposure)
    • may miss subclinical forms of outcome
    • inefficient for studying rare outcomes
    • Prospective: time consuming
    • Retrospective: limited control over sampling and choice/quality of predictor variables
  15. What are four aspects in which cohorts and RCTs differ? and how so?
    • Population: diverse vs. highly selected
    • Intervention: patient/provider vs randomized
    • Follow-up: longer followup vs short follow-up
    • Analysis: sophisticated multivariate techniques for confounding adjustments vs. simple-to-sophisticated
  16. How can you tell if an exposure is a cause of an outcome?
    Exposure is a cause of an outcome if exposure at a given level results in a different outcome than would have occurred without that level of exposure
  17. Why establish causality?
    • Guide to predict, prevent, diagnose, treat
    • Treatable or reversible cause
    • Clinical and research
  18. What are the Bradford Hill criteria for establishing causality?
    • Temporality
    • strength of association
    • dose-response
    • reversibility
    • consistency
    • biological plausibility
    • specificity: causation is likely if a very specific population at a specific site develops a specific disease with no other likely explanation
    • analogy
  19. How might you alter your cohort study research design to help establish causality?
    • Ensure the entire group has not already experienced the outcome
    • observed time period needs to be meaningful in disease context
    • complete follow up
  20. What are some ways you can distinguish causality from association?
    • chance (random error)
    • bias (in selection or measurement)-> systematic error
    • confounding (another type of bias)
    • effect modifier (interactions)
  21. Selection bias (according to cohort study lecture)
    • systematic error in creating intervention groups, causing them to differ with respect to prognosis
    • groups differ in baseline characteristics due to ways in which participants were selected for study or assigned to study groups
    • occurs at design phase, may impact external or internal validity
  22. How can you control selection bias?
    • randomization
    • restriction
    • matching
    • stratification
    • adjustment/standardization
    • sensitivity analysis
  23. What is sampling bias?
    Occurs when the study sample taken from the population of interest is not representative of that population

    • inclusion/exclusion criteria
    • volunteer bias
    • referral centre bias
  24. What is assembly bias?
    • one group is more susceptible to outcome than another (aka susceptibility bias)
    • due to differences in extent of disease, presence of co-morbidity, prior treatments.
  25. What is migration bias?
    • patients drop out or move from one group to another 
    • loss-to-follow up
    • cross-over
  26. When can measurement bias occur? 
    What are some types of measurement bias?
    Can occur at design, conduct, or analysis phase

    • observation
    • classification
    • information
    • detection
    • ascertainment: differential way data are collected
    • recall
    • screening bias (aka lead-time bias)
  27. How can you control measurement bias?
    • blinding
    • exposure and outcome data should be collected similarly
    • standard outcome definition
    • choose an objective/hard outcome
  28. What is confounding?
    Estimated intervention effect is biased because of some difference between comparison groups apart from the planned intervention, such as baseline characteristics, prognostic factors, or concomitant interventions
  29. How to control for confounding?
    • Actively exclude or control for confounding variables
    • cohort studies: matching
    • RCT: stratification
    • controlling for confounding by measuring known confounders and including them as covariates in multivariate analysis.
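    A hedged sketch of the last point above (measuring known confounders and including them as covariates in a multivariable model): the Python/statsmodels code below simulates data in which age confounds an exposure-outcome relationship and compares crude and adjusted odds ratios. The variable names and data are invented for illustration, not from the lecture.

        # Illustrative only: age drives both exposure and outcome (a confounder).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 5000
        age = rng.normal(60, 10, n)
        exposure = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 10)))
        outcome = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.05 * age + 0.4 * exposure))))
        df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "age": age})

        # Crude model ignores the confounder; adjusted model includes it as a covariate.
        crude = smf.logit("outcome ~ exposure", data=df).fit(disp=False)
        adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)
        print("crude OR:   ", np.exp(crude.params["exposure"]))     # inflated by confounding
        print("adjusted OR:", np.exp(adjusted.params["exposure"]))  # closer to the true effect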
  30. How can you determine the importance of a bias?
    Strength and direction

    • strength: robustness of findings when bias is controlled
    • direction: bias less important if association is significant despite bias in opposite direction
  31. What are the implications of bias?
    • the observed association may be real, spurious, or indirect
    • spurious: due to selection bias, measurement bias, or chance
    • indirect: due to confounding
  32. How can you improve follow-up for cohort studies?
    • exclude those likely to be lost (moving, unwilling)
    • obtain info to allow future tracking
    • during follow-up: periodic contact with subjects
    • checking vital stats from OHIP/registries
  33. What is a hazard ratio?
    • Similar to RR, but comes from a time-to-event analysis, so it has an associated time value.
    • Usually shown as a survival rate

    "Twice as many people developed HT in 6 months"
  34. What are some critical appraisal tools for cohort studies?
    • STROBE statements
    • NOS
  35. Random info about STROBE statements
    • Strengthening the Reporting of Observational Studies in Epidemiology
    • an international collaborative initiative of epidemiologists, methodologists, and statisticians involved in the conduct and dissemination of observational studies
  36. Random info about NOS for cohort studies
    aim is to assess the quality of non-randomized studies through its design, content, and ease-of-use

    • uses a star system which judges a study in 3 perspectives:
    • selection of study groups
    • comparability of groups
    • ascertainment of either exposure or outcome of interest, for case-control or cohort studies, respectively.
  37. What is the goal of the NOS?
    to develop an instrument providing an easy and convenient tool for quality assessment of non-RCTs to be used in systematic reviews
  38. Who is Richard Doll?
    Epidemiologist who linked smoking to lung cancer using a case-control study
  39. If you get similar results/estimates for OR when you compare your cases to different controls, then this is evidence _____ _____
    against bias
  40. What is a nested-case control study? What are its advantages? What is an example?
    • Case-control nested within a cohort study
    • most data are collected before outcome occurrence, so less likely to be impacted by recall bias
    • ex. peanut allergy study by the Avon Longitudinal Study group
  41. What are some advantages of case-control studies?
    • study a risk factor/multiple risk factors
    • disease of interest is rare (impractical for cohort)
    • long latency period between exposure and disease
    • random assignment is unethical or impossible
    • need an answer quickly (bypasses the need to collect info on a large # of people who won't get the outcome)
  42. Case-control lets you study ___ while cohort lets you study ____
    exposures; diseases
  43. Which type of study is more susceptible to recall bias? Case-control or cohort?
    Case-control
  44. When does OR approximate the RR?
    When the baseline probabilities of the outcome are low (<0.1-0.2)
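    A small worked check of this rule in plain Python; the 2x2 counts are invented. With a ~2% baseline risk the OR and RR nearly coincide, while with a ~40% baseline risk the OR overstates the RR.

        # 2x2 table: a = exposed with outcome, b = exposed without,
        #            c = unexposed with outcome, d = unexposed without.
        def odds_ratio_and_risk_ratio(a, b, c, d):
            rr = (a / (a + b)) / (c / (c + d))
            orr = (a / b) / (c / d)
            return orr, rr

        # Rare outcome (baseline risk 2%): OR ≈ 2.04, RR = 2.00
        print(odds_ratio_and_risk_ratio(40, 960, 20, 980))

        # Common outcome (baseline risk 40%): OR = 2.25, RR = 1.50
        print(odds_ratio_and_risk_ratio(600, 400, 400, 600))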
  45. What is a primary base?
    adv?
    disadv?
    • The population that the investigator wishes to target, with the cases being the subjects who develop the disease within the base. The population is defined geographically and temporally
    • adv: easier to sample for controls from a primary base
    • disadv: it is challenging/impractical to ascertain all cases in a primary base
  46. What is a challenge in using secondary bases?
    determination of the study base
  47. What is the ideal way of selecting cases?
    • choose all incident cases in the source population
    • recall of past exposures may be more accurate
    • temporal sequence is easier to assess.
  48. What is a problem with using prevalent cases?
    • biases towards longer survival among participants 
    • differential recall of risks
    • exposure status may change after onset of the disease
  49. What is the ideal way to select controls?
    Direct random sampling from source population
  50. What are some different types of controls?
    • Population-based
    • hospital/medical practice controls
    • neighbourhood/friends controls
    • relatives
    • proxy respondents
    • deceased controls
  51. When might it be important to include hospital/medical practice controls?
    in cases where the probability of disease diagnosis depends on access to medical care, then it would be appropriate to include hospital controls
  52. Population based controls
    • Random sample of population from which cases came
    • usually defined by geographic borders
    • valid if cases consist of all individuals who developed disease of interest in a defined population
  53. Hospital controls
    • usually patients seeking care for other conditions (controls should have a variety of conditions)
    • disadv: not always possible to identify source population of the cases
    • additional biases of specialized medical facilities
  54. Neighbourhood controls
    • source population of cases poorly defined and presumably healthy controls are desirable 
    • neighbours tend to seek care at similar places
    • neighbours are similar with respect to socioeconomic status and determinants of health
  55. What is the comparable accuracy principle?
    measurement of exposure should be comparable in cases and controls
  56. What is the deconfounding principle? 
    How can you deconfound?
    • confounding should not be allowed to distort estimation of effect
    • restriction, matching, adjustment
  57. What is restriction?
    what does it help control?
    what is it susceptible to?
    disadvantages?
    • Limits # of eligible subjects (ex. only males aged 40-50)
    • helps control selection bias
    • susceptible to residual confounding effect (risk still varies within age 40-50)
    • limits generalizability and does not allow evaluation of the restricted factors
  58. What is matching?
    what does it help control?
    advantages?
    disadvantages?
    • Ensure that groups do not differ for confounders (ex. for every active male aged 40-50, there will be an inactive male aged 40-50)
    • helps control selection bias
    • requires control of confounding at both design and analysis of study. 
    • advantages: allows matching geographically to control socioeconomic/ethnic factors
    • disadvantages: overmatching: matching on factors that may themselves be related to exposures.
  59. What is recall bias?
    • Occurs when a survey respondent's answer is influenced not only by the correct answer but also by the respondent's memory
    • ex. response bias: giving the socially acceptable response
  60. What is misclassification bias?
    • A type of information bias that arises when the sensitivity and specificity of the procedure used to detect exposure and/or effect are not perfect.
    • i.e. exposed/diseased classified as non-exposed/non-diseased or vice-versa
  61. What are some sample size considerations in case-control studies?
    very little improvement in precision beyond a control:case ratio of 4:1

    remember that increasing sample size only improves precision, not validity.
  62. What are some advantages of case-control studies?
    • Good for studying rare condition/disease
    • Good for long latency between predictor and outcome
    • Useful for hypothesis generation since relatively inexpensive and fast as disease already developed
    • can evaluate multiple etiological factors/exposures at once
  63. Weaknesses of case-control studies?
    • cannot estimate incidence/prevalence
    • can only study one outcome, as that is how popn is sampled
    • prone to various errors and biases
  64. Define survey. What determines whether it's descriptive or analytic?
    • detailed and quantified description of a population
    • systematic collection of data
    • The research question does
  65. Descriptive surveys
    Measure characteristics of a particular population, either at a fixed point in time, or comparatively over time
  66. Analytic survey
    attempt to test a theory; i.e. explore and test associations between variables.
  67. for surveys: define independent, dependent, and uncontrolled variables:
    • dependent: the subject of study; the gains or losses produced by the impact of the research study
    • independent: the 'cause' of changes in the dependent variable; manipulated or observed, then measured through the dependent variable
    • uncontrolled: includes error variables that may confound the results of the study (ideally you want these variables to be randomly distributed)
  68. how can you control extraneous variables in surveys?
    • hold them constant
    • exclusion: only use females to diminish effects of gender
  69. Stages in survey process
    Research question -> decide on information needed -> decide on preliminary analysis [examine resources, review existing lit.] -> decide on sample + choose survey method -> design questionnaire -> pilot survey -> amend questionnaire and sample -> main survey -> edit, code and tabulate -> analyze -> final report
  70. What are self-administered surveys? What are interviewer administered?
    SA: postal, online, delivery and collection

    I: structured interview, telephone questionnaire, focus group
  71. When are postal surveys good?
    • sample widely distributed geographically
    • subjects need to be given time to think
    • subjects have moderate/high interest in subject
    • questions are close-ended
  72. When are Delivery and collection surveys good?
    • Delivered by hand to each respondent and collected later
    • allows direct contact with potential respondents, which may lead to higher response
  73. Why are online surveys good? Bad?
    Adv: low cost, easy to design and administer, anonymous-> honesty

    • Disadv: volunteer sample (little control over who responds)
    • sampling error (non-internet users)
  74. Structured interviews: advantages and disadvantages
    adv: higher response rates, can ask open ended questions for detailed responses, additional probes can be asked

    disadv: time consuming, expensive, sensitive topics-> less accurate
  75. Focus groups: advantages and disadvantages
    • allow for a variety of views to emerge
    • group dynamics can often allow for stimulation of new perspective (allows for the basis of a survey)
  76. telephone surveys, advantages and disadvantages
    • most people have a phone
    • higher response rates

    disadv: questions need to be short
  77. what is a probability sample, and examples of probability samples:
    • each member of population has known non-zero probability of being selected
    • allows calculation of sampling error
    • ex. random, systematic, stratified
  78. what is a non-probability sample, and examples of non-probability samples:
    • members selected in non-random way
    • sampling error unknown
    • ex. convenience, judgement, quota, snowball
  79. Describe each of the probability samples
    • Random: each member has equal and known chance of selection
    • Systematic: also called nth name selection [as long as there is no hidden order in the list of eligible participants, just as good as random]
  80. Stratified sampling in surveys:
    • reduces sampling error even compared to random sampling
    • identify relevant strata, then random sampling to select subjects from each stratum until # of participants is proportional to its frequency in the population
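    A minimal pandas sketch of proportional stratified sampling; the population frame and the urban/rural strata are invented for illustration.

        # Illustrative only: draw the same fraction within each stratum so the
        # sample's stratum mix matches the population's.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        population = pd.DataFrame({
            "id": np.arange(10_000),
            "stratum": rng.choice(["urban", "rural"], size=10_000, p=[0.8, 0.2]),
        })

        sample = population.groupby("stratum").sample(frac=0.05, random_state=0)
        print(sample["stratum"].value_counts(normalize=True))   # ~0.8 urban, ~0.2 rural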
  81. What is a stratum?
    subset of the population that shares at least one common characteristic
  82. Explain the non-probability sampling techniques
    • convenience: mostly during preliminary phases (ex. friends and family)
    • judgement: an extension of convenience; researcher makes a judgement that this one sample is representative of the entire population (ex. use one city to sample the country)
    • quota: non-probability equivalent of stratified sampling: identify your strata and proportions as represented in population, then use judgement sampling to select required number from each stratum
    • snowball: used when desired sample characteristics are rare; relies on referrals from initial participants to find new participants
  83. Reducing sampling error in surveys
    • contact members of sampling frame and ascertain whether they belong to the required sample
    • design questionnaire/interview in such a way that ineligible respondents are identified early and screened out
  84. When do you need a scale?
    • If there is no easily applied "objective" measurement tool
    • ex. determining a patient's status, assessing quality of life, enhancing educational programs
  85. What are 4 question types in surveys?
    • Objective: can be verified, real answer exists
    • Subjective: ask about personal perception, no factual answer. 
    • Open-ended: allows information gathering, tough to analyze and time consuming
    • Close-ended: useful for hypothesis testing, but may lose important info; time efficient, lots of response formats
  86. Response formats for close-ended questions
    • dichotomous
    • multichotomous
    • verbal frequency scale (all the time, fairly often, never etc. )
    • list
    • ranking
    • likert scale
    • graphical rating (continuous scale on a line)
    • non-verbal
  87. What are some random facts about Likert scales?
    • Measures strength of feeling or perception
    • optimal number is 7+/- 2 
    • Often analyzed as interval data, but is actually ordinal
  88. What type of wording should you avoid in questions?
    • Double negatives
    • Double-barrelled: ask for thoughts about 2 things at once
    • Leading: suggests the answer the research is looking for (may end in weren't you, don't you, etc. )
    • Loaded: contains a controversial assumption
  89. Who do you pre-test on? Who do you pilot test on?
    • Pre-test on convenient other
    • Pilot on small sample from target popn.
  90. What are the four dimensions of a good scale?
    • reliable 
    • valid
    • feasible
    • acceptable
  91. Lack of reliability in a tool does what?
    • produces error variance
    • places an upper limit on validity (validity requires reliability, but not the reverse)
  92. What is classic reliability theory?
    • raw score = true score + error
    • sources of error: misinterpretation, biases, inexperience, inter-rater differences
  93. Why are reliability tests constructed? What is the prof's watered down equation for a reliability test?
    Tests constructed to differentiate between objects

    • reliability = σ²_true / (σ²_true + σ²_error / n)


  94. Implications of reliability based on her equation
    • reliability is not a fixed property of the scale
    • change population -> change reliability
    • more items/raters increase reliability
    • need repeat measures to estimate error
    • test that doesn't discriminate is useless!
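    A tiny sketch of the formula reconstructed in card 93, showing numerically why more items or raters raise reliability; the variance values are invented.

        # reliability = true variance / (true variance + error variance / n)
        def reliability(var_true, var_error, n_items):
            return var_true / (var_true + var_error / n_items)

        for n in (1, 5, 10, 20):
            print(n, round(reliability(var_true=1.0, var_error=2.0, n_items=n), 2))
        # 1 -> 0.33, 5 -> 0.71, 10 -> 0.83, 20 -> 0.91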
  95. What is the goal for internal consistency in reliability? For stability?
    >0.8: based on a one-time administration of the survey, is there good correlation between related domains? - does not take day-to-day or observer variation into account, so usually an optimistic reliability estimate

    • Stability >0.5: 
    • reproducibility when administered on different occasions:
    • ex. test-retest: same result on 2 occasions
    • intra-rater: agreement between ratings made by same rater on diff. occasions
    • inter-rater: agreement between 2 different raters
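    The card above gives only the >0.8 internal-consistency target; Cronbach's alpha is one common statistic used for it. A hedged numpy sketch on invented 5-item responses (the choice of statistic and the data are assumptions, not named in the lecture):

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1).sum()
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        # Invented Likert responses: 6 respondents x 5 items.
        scores = np.array([[4, 5, 4, 4, 5],
                           [2, 2, 3, 2, 2],
                           [5, 5, 4, 5, 5],
                           [3, 3, 3, 2, 3],
                           [1, 2, 1, 2, 1],
                           [4, 4, 5, 4, 4]])
        print(round(cronbach_alpha(scores), 2))   # well above the 0.8 target for these data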
  96. What's problematic about invalid tools? List some types of validity
    • Wrong conclusions, unethical
    • Face
    • Content: does it tap relevant + disregard irrelevant ideas (based on expert opinion, existing tools, etc.)
    • Criterion: are results consistent with other measures [divided into concurrent and predictive]
    • Construct: can we predict differences based on constructs? requires first 3 types of validity
  97. Criterion validity types
    • Concurrent: examine relationship between criterion measure and scale at time of administration
    • predictive: examines relationship between scale and future outcomes (ex. compared to a gold standard)
  98. What are some ways to test validity?
    • extreme groups: t-tests
    • change: 2-way ANOVA
    • criterion: Pearson's correlation
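    A brief scipy sketch of two of the analyses named above (extreme-groups t-test and criterion correlation); all data are invented. A responsiveness-to-change analysis would typically use a two-way ANOVA (e.g. via statsmodels), omitted here for brevity.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Extreme groups: do known-high and known-low groups score differently? (t-test)
        high = rng.normal(70, 10, 30)
        low = rng.normal(50, 10, 30)
        print(stats.ttest_ind(high, low))

        # Criterion validity: correlation between the new scale and an established measure.
        scale = rng.normal(50, 10, 100)
        criterion = 0.8 * scale + rng.normal(0, 5, 100)
        print(stats.pearsonr(scale, criterion))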
  99. What are some concerns with feasibility?
    • Avoid ambiguity
    • Consider reading level
    • How much training required?
    • How easy to score (avoid weighting responses, be aware of unintentional weighting)
    • Cost
  100. What are some concerns with acceptability?
    • be brief (but remember reliability proportional to # of items)
    • use only items for which there is a variety of responses
    • be aware of social desirability bias
    • are those who administer the scale willing to do so?
  101. How to reduce item non-response
    • Avoid intrusive questions
    • emphasize confidentiality
  102. How to reduce error associated with poor response rate
    • identify most appropriate respondents
    • use multiple forms of contact
    • develop easy to complete questionnaire with instructions
    • conduct on-site interviews to tailor questions to participants' cognitive ability
    • be cautious about financial incentives
  103. What are the cognitive requirements of responding:
    • understand question
    • recall relevant attitude, belief, behaviour
    • inference and estimation (decomposition and extrapolation; end-digit bias)
    • Map answer on response alternatives
    • edit the answer (what people think and what they tell you)
  104. Types of response bias:
    • Social desirability
    • Deviation
    • Hello-goodbye
    • End aversion: no extreme responses
    • Positive skew: favourable response
    • Halo effect
    • framing: choice between 2 alternatives depends on how they're framed
  105. Minimizing response bias:
    • keep task simple
    • maintain motivation of respondents (choose only those interested, motivation higher earlier so keep it short, ask people to explain their answers)
