Research Methods Weeks 1-4

The flashcards below were created by user komail on FreezingBlue Flashcards.

  1. EBM consists of
    • Using best available clinical evidence
    • individual clinical expertise
    • patient's values and expectations
  2. What is the hierarchy of evidence
    • CPG
    • Systematic review/meta analyses
    • RCT
    • cohort
    • case-control
    • case report
    • opinion
  3. What is the hallmark of an experimental study?
    • Researcher assigns exposures 
    • ex. RCT or non-randomized CT
  4. What is the hallmark of an observational study? Observational studies are further split into two groups: what are they, and what do they consist of?
    Exposures not assigned by researcher

    Analytical study (if it has a comparison group, ex. cohort, case-control, cross-sectional)

    Descriptive (no comparison group)
  5. all study designs have ____ and ____. 

    experimental and analytic studies also have _____ and _____ and ______
    • a defined population from which groups of subjects are studied
    • and outcomes that are measured

    intervention, exposure, comparison
  6. What are the elements of a good research question?

    • population
    • intervention, exposure
    • comparison/control
    • outcome
    • timeframe
  7. What are some types of research questions
    intervention, diagnosis, prognosis, etiology, meaning [how do (P) with (I) perceive (O) during (T)?]
  8. What are 3 questions to determine study design?
    • what was the objective (descriptive or analytic)
    • if the objective was analytic, was it an intervention or an exposure?
    • if observational analytic, when were outcomes determined?
  9. Why might descriptive studies be helpful?
    Answer questions about frequency, natural history of condition, and generate hypotheses about it.
  10. Define internal validity
    Degree to which conclusions are correct about samples in the study (low systematic error, including bias and confounding)
  11. Define confidence/precision
    Extent to which results are free from random variation, i.e. error equally likely to distort estimates in either direction, reducing precision (low random or chance error)

    Confidence = (signal / noise) * sqrt(sample size)

    • Signal = intervention - control
    • Noise = chance variation
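The signal-to-noise relationship above can be sketched numerically. This is a toy helper, not from the course material; the function name and example numbers are illustrative:

```python
import math

def confidence_index(intervention_mean, control_mean, noise_sd, n):
    """Heuristic confidence index: (signal / noise) * sqrt(sample size).

    Signal is the intervention-minus-control difference; noise is the
    chance (standard-deviation) variation. Illustrative sketch only.
    """
    signal = intervention_mean - control_mean
    return (signal / noise_sd) * math.sqrt(n)

# A larger sample or a bigger treatment effect raises the index.
low_n = confidence_index(12.0, 10.0, 4.0, 25)    # (2/4)*5  = 2.5
high_n = confidence_index(12.0, 10.0, 4.0, 100)  # (2/4)*10 = 5.0
```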
  12. In RCTs, randomization ensures
    • observed/unobserved covariates are balanced
    • only difference is treatment
  13. Design features of RCTs that increase internal validity (minimize bias)
    • randomization
    • blinding
    • follow-up
  14. What is randomization, and how is it achieved?
    allocating treatment by chance rather than by choice (helps to reduce selection bias)

    achieved by: random allocation, concealed allocation
  15. what is blinding, and why should it be undertaken?
    Act of disguising therapy so individuals do not know what is being given

    Reduces performance and detection bias (both are measurement biases)
  16. what is Follow-up, and why do it?
    ensure inclusion and follow-up of all participants in the assigned group (i.e. intention to treat)

    reduces attrition bias
  17. Why is most research not RCT?
    unnecessary, inadequate, inappropriate, impractical (see observational study notes from week 1 reading)
  18. What is confounding?
    • Error in interpretation of results
    • due to differences in variables between exposure and control groups that are also related to outcome.

    i.e. unbalanced factors are associated with outcome of interest [note that the confounder is not an intermediate step in the causal pathway]
  19. How to control confounding?
    • Design phase:
    • randomization
    • restriction/exclusion [limit study to certain people, e.g. women]
    • matching [ produce groups that are similar in characteristics, and analyzed in pairs]

    • Analysis phase: 
    • statistical methods (stratification, regression etc. )
  20. When can bias occur in studies?
    in any stage: design, conduct, analysis, reporting
  21. How to minimize threats to validity?
    • Pick an appropriate design
    • standardize study conditions (way intervention is given, way data is collected)
    • Collect and use relevant subject info
    • collect and use study details
  22. Supporting causation in the absence of experimentation
    • Strong association
    • association consistent across studies
    • dose-response
    • exposure precedes the effect *most important*
    • biologically plausible.
  23. What is quasi-experimental
    not truly randomized, falls between observational and experimental approaches
  24. What is the Hawthorne effect
    positive impact from being observed (people perform better because they know they are being tested)
  25. What is a systematic review?
    structured review of primary studies, integrating statistical analysis of the results, if appropriate

    has a scientific protocol, with objectives and methods
  26. What is a meta-analysis?
    Is a quantitative synthesis of >1 study to produce a single estimate of effect of exposure.
  27. What is a narrative review?
    qualitative, narrative summary of evidence on a given topic

    involves informal and subjective methods, lots of potential for bias. 

    could be written by field experts
  28. What are some problems with narrative reviews?
    • Subjective and prone to bias:
    • -no description of methods used by review
    • -literature search may be incomplete or selective
    • - can't distinguish research from opinion

    • Usually not quantitative
    • -no estimate of overall effect
    • -cannot detect small effects
  29. What is the trend of RCTs over the past couple decades?
    Keeps increasing, up to 30,000/year now
  30. What are some reasons to perform systematic reviews?
    • manage health information efficiently
    • distinguish sense from nonsense
    • provide EBM [summary of results, and determining if new studies are needed]
  31. Who is Archie Cochrane, and what did he create?
    developed the Oxford Database of Perinatal Trials, which led to the opening of the first Cochrane Centre
  32. What is the trend in systematic reviews over the past couple decades? Are systematic reviews limited to RCTs?
    increasing every year, up to 2500/year now.

    No, they can be on any study.
  33. What are the steps of a systematic review? [OLDADSC]
    • Objectives and protocol
    • Literature Search
    • Quality Assessment
    • Assemble Dataset
    • Data Synthesis
    • Sensitivity Analysis
    • Conclusions
  34. Which steps of a systematic review should have at least 2 people working on them to avoid bias?
    • Literature Search
    • Quality Assessment 
    • Assemble Data Set
  35. What are the 3 main sources of information in a Literature Search [i.e. to identify all relevant studies]?
    • Bibliographic Databases 
    • Hand Searches (reference lists, select Journals)
    • Contact with researchers/Organizations
  36. Why is it important to use multiple databases for literature searches?
    Can miss up to 50% of relevant studies if you only use one database
  37. What is the purpose of the quality assessment step of systematic reviews?
    It is to judge internal validity of the studies, and analyze the study characteristics
  38. What can you use to guide quality assessment of RCTs? of Non-randomized control trials?
    Cochrane risk of bias tool

    Newcastle-Ottawa scale (for case-control or cohort studies)
  39. What is the purpose of the Assemble Data Set step of creating a systematic review?
    • Full text review
    • Data extraction:
    • -discrepancies resolved by discussion/external referee
    • -write to researchers to get additional data
    • -impute data
  40. What is the purpose of the Data Synthesis step of Systematic reviews? What are the two types of data syntheses?
    To combine the studies into one effect size. 

    • Qualitative data synthesis: summary information based on words not statistics
    • Quantitative data synthesis: (meta analysis): statistical analysis combining results of independent trials, considered by analyst to be combinable.
  41. What can you calculate with discrete data for a quantitative data synthesis? What about continuous data?
    Discrete: relative risk, odds ratio, risk difference

    • continuous: 
    • -same units: weighted mean difference, standardized mean difference
    • -diff. units: standardized mean difference
  42. What is the purpose of a qualitative data synthesis?
    used to reveal patterns in included studies, and to identify between-trial differences.

    i.e. study A used a surrogate outcome, while study B used morbidity.
  43. What are the two models of quantitative data synthesis?
    fixed effects model: assumes a single fixed effect under different conditions/studies

    random effects model: assumes different treatment effects under different conditions/studies (wider CI: less likely to be significant)
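As a sketch of how a fixed effects pooled estimate is computed, here is minimal inverse-variance weighting in Python. The function name and example values are illustrative; real meta-analysis software (e.g. RevMan, metafor) handles far more:

```python
def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed effects pooling of study effect sizes.

    effects: per-study effect estimates (e.g. log odds ratios);
    ses: their standard errors. Returns (pooled effect, pooled SE).
    """
    weights = [1 / se ** 2 for se in ses]  # more precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two equally precise studies: the pooled effect is their average.
pooled, se = fixed_effect_pool([1.0, 2.0], [1.0, 1.0])  # pooled = 1.5
```

A random effects model would add a between-study variance (tau²) to each study's variance before weighting, which widens the pooled SE and CI as the card notes.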
  44. Define relative risk, risk difference, and odds ratio (not mathematically)
    • Relative risk: relative event rate [ risk in exposure/risk in non-exposure]
    • Risk difference: absolute difference in event rate [risk in exposure- risk in non-exposure]
    • Odds ratio: relative # of individuals who have an outcome, divided by those who do not [odds in exposure / odds in non-exposure]
  45. Define relative risk, risk difference, NNT, Odds ratio
    • RR: (a/a+b)/(c/c+d)
    • RD: (a/a+b)-(c/c+d)
    • NNT= 1/RD
    • OR: (a/b)/(c/d)
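The four formulas can be written directly from a 2x2 table. A minimal sketch (the function name and example numbers are illustrative):

```python
def effect_measures(a, b, c, d):
    """Effect measures from a 2x2 table.

    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without.
    """
    risk_exp = a / (a + b)
    risk_unexp = c / (c + d)
    rr = risk_exp / risk_unexp                 # relative risk
    rd = risk_exp - risk_unexp                 # risk difference
    nnt = 1 / rd if rd != 0 else float("inf")  # number needed to treat
    odds_ratio = (a / b) / (c / d)
    return rr, rd, nnt, odds_ratio

# Example: 10/100 events on treatment vs 20/100 on control.
rr, rd, nnt, odds_ratio = effect_measures(10, 90, 20, 80)
# rr = 0.5, rd ≈ -0.1, nnt ≈ -10 (treat 10 to prevent one event)
```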
  46. When is an OR used? When does it approximate an RR?
    used in case control, as true denominator is not known

    OR approximates RR when baseline probability of the outcome is low (<0.2)
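A quick numeric check of the rare-outcome rule (values are illustrative):

```python
def rr_and_or(risk_exp, risk_unexp):
    """Return (relative risk, odds ratio) for two event probabilities."""
    rr = risk_exp / risk_unexp
    odds_exp = risk_exp / (1 - risk_exp)
    odds_unexp = risk_unexp / (1 - risk_unexp)
    return rr, odds_exp / odds_unexp

# Rare outcome (<0.2): OR tracks RR closely.
rare = rr_and_or(0.02, 0.01)    # RR = 2.0, OR ≈ 2.02
# Common outcome: OR exaggerates RR.
common = rr_and_or(0.6, 0.3)    # RR = 2.0, OR ≈ 3.5
```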
  47. Define mean difference , weighted mean difference, and standardized mean difference
    mean difference: mean (treatment) - mean (control)

    weighted mean difference: weighted difference in mean values between treatment and control (same units as original studies)

    • Standardized mean difference: absolute difference in mean values between intervention and control groups divided by SD (used when the same outcome is assessed using different tools)
    • -scale-free estimate
    • <0.4 = small effect
    • 0.4-0.7 = moderate effect
    • >0.7 = large effect
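The standardized mean difference can be sketched as a Cohen's-d-style calculation, using the pooled SD of the two groups. This is a simplified version; real meta-analysis tools apply small-sample corrections such as Hedges' g:

```python
import statistics

def standardized_mean_difference(treatment, control):
    """Mean difference divided by pooled SD (Cohen's d style sketch)."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * statistics.variance(treatment)
                  + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    return (statistics.mean(treatment)
            - statistics.mean(control)) / pooled_var ** 0.5

smd = standardized_mean_difference([12, 14, 16], [10, 12, 14])  # 1.0: large
```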
  48. What is a forest plot? what is it used for?
    • Plots each study's effect estimate (e.g. RR) with its confidence interval
    • the centre of the diamond (overall) is the pooled RR; its width is the CI
  49. How can you test for heterogeneity in systematic reviews?
    inspect scatter in data points and overlap in CIs

    check results of I²

    check results of χ²
  50. What does I² do? What is considered high heterogeneity? What does χ² do, and what counts as high heterogeneity?
    • I² estimates the percentage of variability in results across studies that is likely due to true differences in treatment effects, as opposed to chance
    • >75% = high heterogeneity

    • χ² estimates whether variation between trial results is due to chance
    • if P < 0.05 -> significant heterogeneity.
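I² can be computed from Cochran's Q statistic. A minimal sketch reusing inverse-variance weights (illustrative only; real software adds CIs and corrections):

```python
def i_squared(effects, ses):
    """I² heterogeneity statistic from Cochran's Q.

    I² = max(0, (Q - df) / Q) * 100: the percent of variability across
    studies attributed to true differences rather than chance.
    """
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Two equally precise studies with very different effects: high I².
high = i_squared([0.0, 4.0], [1.0, 1.0])   # 87.5 (> 75%: high)
none = i_squared([1.0, 1.0], [1.0, 1.0])   # 0.0 (identical results)
```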
  51. What is the point of a sensitivity analysis?
    • involves repeating analysis in different subgroups of studies (between-study comparisons)
    • ex. methodological quality, publication status, language
    • if overall results do not change w/ diff. approaches, then the result is robust.
  52. What is subgroup analysis?
    • statistical analysis focusing on a particular sub-group of participants across studies, not a sub-group of studies
    • i.e. within study comparisons
    • ex. look at different ages, sex, etc. 

    • planned in advance according to protocol
    • usually supported by biological hypothesis
  53. How can you identify publication bias?
    • Use a funnel plot
    • Sample size vs RR

    inverted funnel if no publication bias. 

    • difficult to use if few studies
    • can estimate how many trials needed to reverse results.
  54. What is the point of the conclusions section of the systematic review?
    • Summarize and publish systematic review
    • -comply w/ reporting standards, recommend clear description of all critical steps
    • make clinical and scientific recommendations
  55. What are some tools assessing systematic reviews?
    PRISMA: (Preferred reporting items for systematic reviews and meta-analyses) 27 items that cover 7 topics

    Meta-analyses of observational studies (MOOSE)
  56. What are the 7 topics of PRISMA
    Title, abstract, introduction, methods, results, discussion, funding
  57. What are some weakness of systematic reviews?
    Authors decide:

    • inputs (which studies to include/quality)
    • processes ( search strategy, judgments about studies)
    • results (# of studies, heterogeneity)

    • Judgements are prone to random and systematic errors
    • (having 2 or more reviewers reduces errors and biases)
  58. What are the key points for appraising a systematic review?
    • Are results valid
    • what are the results?
    • can I apply the results?
  59. What percentage of pts do not receive proven treatments?
    What percentage receive unneeded or harmful care?

  60. What is a care gap?
    discrepancy between evidence-based knowledge and day-to-day clinical practice.
  61. It takes ___ years for ___% of research to make its way in practice
    • 17 years
    • 14%
  62. What is KT type 1? Type 2?
    Lab to clinical research

    Clinical research to health care
  63. What is a CPG?
    systematically developed statements that assist clinicians and patients in making decisions about appropriate treatments for specific conditions/circumstances

    Consists of Evidence  and Values and Preferences
  64. What is the flow of knowledge creation?
    • Knowledge inquiry
    • knowledge synthesis (systematic reviews)
    • knowledge tools/products (CPGs)
  65. What are some purposes of a CPG?
    • distil large amounts of knowledge into usable format
    • assist patients and clinicians in making decisions
    • improve patient/health care outcomes
    • reduce practice variation
    • optimize resource utilization
    • expose gaps in knowledge
  66. Benefits of CPG
    • enhance quality of care
    • provide guidance by which to hold health care accountable
    • empower patients to advocate for appropriate care
    • contribute to public policy goals
  67. What is Health Quality Ontario? What is it mandated by, and what is its purpose?
    Agency mandated by the Excellent Care for All Act to advise the government and health care providers on the evidence supporting high-quality care, to support improvements in quality, and to monitor and report to the public on the quality of health care in Ontario

    • -evaluate health care technologies
    • -report to public on quality of health care system
    • -support quality improvement
    • -make evidence-based recommendations for funding
  68. Who makes CPG?
    • professional/specialty societies
    • institutions/academic centers
    • governments and other payers (insurers/hmo)
  69. What are the key steps to CPG development
    • 1) prepare for CPG development
    • 2) systematically review the evidence
    • 3) draft the CPG
    • 4) review the CPG
  70. General steps to develop CPG:
    • 1) agree on the process
    • 2) convene a multi-disciplinary team
    • 3) identify conflicts of interest
    • 4) select the topic (clinical questions)
  71. What are AGREE-II statements?
    Appraisal of Guidelines for Research and Evaluation (evaluates the PROCESS of practice guidelines)
  72. Give some examples of knowledge transfer groups
    • HCP
    • Gvt. 
    • Academia
    • Society
    • Institutions
  73. How to make a topic for CPG? (i.e what are the components of a CPG topic)
    Use the PICO format
  74. Process for developing recommendations
    • 1) conduct knowledge synthesis of benefits and harms of interventions (systematic reviews)
    • 2) draft guideline recommendations that align w/ research evidence and values/preferences
    • -levels of evidence (number)
    • -strength of recommendations (letter grade)
  75. Canadian task force- classification of study designs for CPGs
    • I- evidence from RCT
    • II-1: non-randomized controlled trial
    • II-2: cohort/case-control
    • II-3: everything else but opinions
    • III: opinions
  76. Canadian task force- classification of recommendations
    • A: good evidence to recommend the action
    • B: fair evidence to recommend the action
    • C: conflicting evidence to recommend the action
    • D: fair evidence to recommend against the action
    • E: good evidence to recommend against the action
    • I: insufficient evidence
  77. What is the GRADE method for CPG?
    Quality of Evidence: High, Moderate, Low, Very Low

    Strength of Recommendations: Strong, Weak

    Considerations for these judgements must be explicitly stated and include: quality of the evidence base, trade-offs, values/preferences, economic considerations
  78. Once a CPG is completed, what are the subsequent steps?
    • -peer review and pre-testing (external review among intended users and stakeholders)
    • -revisions to address external review 
    • - final report dissemination
    • -strategy for implementation, outcome assessment and updating
  79. Parts of a CPG:
    • Structured abstract
    • introduction/burden of illness
    • methods
    • results
    • recommendations [recommendations that are clearly worded/formal; level of evidence and grade of recommendation; clinical considerations/values; recommendation table]
    • Limitations/future research
    • References
    • Tools, resources, appendices
  80. How can you disseminate CPGs?
    • peer-reviewed journals
    • electronic copies
    • short manuals/summaries
    • clinical practice tools
    • local educational interventions
  81. What are some ineffective, mixed, and effective ways to change professional behaviour?

    How effective are these?
    • Ineffective: didactic, printed material
    • Mixed: audit+ feedback, local opinion leaders
    • Effective: reminders, educational outreach, multi-faceted interventions

    Mixed + effective: mean absolute effect of only 10%!
  82. What are some implementation considerations for CPGs?
    • strength of evidence
    • impact of message
    • coordination across activities and audiences
    • effectiveness of interventions
    • sustainability of interventions
    • resource needs
  83. What are some barriers to uptake of CPGs?
    • physician knowledge gaps
    • patient attitudes
    • organizational factors
    • resource constraints
    • legislative restrictions
  84. Additional challenges of CPGS (other than barriers to uptake)
    • -definitions of quality of care may vary for different stakeholders
    • -complexity of clinical practice may be difficult to capture in CPGs
    • -existence of conflicting CPGs
    • -CPGs may not be easy to implement
  85. What are some stats about biases in CPG due to conflicts of interest
    • -87% of guideline authors have pharma interactions
    • 58% received financial support for research from pharma
    • 38% had been pharma employees

    • 7% stated their own relationships influenced recommendations
    • 19% thought co-authors recommendations were influenced (more likely to think it doesn't affect you)
  86. What are some biases due to publication bias in CPGs
    • 1/2 of clin. trials are unpublished
    • some have altered primary outcome
    • selective outcome reporting
    • lack of transparency in trial registries
  87. CPG process methodology flaws
    1. When evidence is strong: reflects different values/views about health outcomes, treatment options, economic issues

    2. When evidence is not strong: reflects biased evidence selection, inadequate consideration of relevant issues, undue influence of chair/panel members

    Thus value judgements and rationale should be explicitly stated
  88. What are guidelines for CPGs (i.e. what are they meant to do)
    • provide a checklist of items, resources for CPG developers
    • ensure critical aspects are addressed
    • dozens of guidelines in use world wide
    • ex. AGREE-II
  89. AGREE-II
    • assesses the quality of the process, not the quality of the evidence
    • i.e. how well the actual clinical practice guidelines were developed
  90. AGREE-II quality indicators
    • scope and purpose
    • stakeholder involvement
    • rigor of development
    • clarity and presentation
    • applicability
    • Editorial independence
  91. What is a quick way to judge a CPG?
    • 1. Systematic Search Strategy?
    • -no systematic search -> biased selection of evidence

    • 2. Do recommendations have attached level of evidence?
    • -helps to assess whether to accept recommendation or not
  92. % share of generic prescriptions
    • 2000: 49%
    • 2012: 84%
  93. How many drugs return revenues that match or exceed R&D costs?
  94. What is the major purpose of each stage of drug development?
    • Pre-clinical: designed for assessment of safety, toxicity, PD, PK
    • Phase 1: drug given to a small # of people (20-100) to evaluate safety and identify side effects and a safe dose range
    • Phase 2: increase sample size for further evaluation (100-500)
    • Phase 3: increase sample size to evaluate efficacy and safety (300-3000)
    • Phase 4: post-marketing studies to assess additional info
  95. What is the point of PK studies in drug development?
    • Objective: evaluate ADME
    • Primary measure: Drug concentrations
  96. What is power in terms of statistical testing? How can you decide the number of subjects that you will need?
    Probability that a trial detects a true difference when one exists (used to calculate minimum sample size)

    Effect size, inter-intra-subject variability, and desired statistical power
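Those ingredients (effect size, variability, power) combine in the standard normal-approximation formula for a two-group sample size. A sketch; the function name and defaults are illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2.
    Normal-approximation sketch; real planning tools refine this.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = z.inv_cdf(power)            # desired power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
    return math.ceil(n)

# Detect a 5-unit difference with SD 10 at 80% power: 63 per group.
n = n_per_group(5, 10)
```

Note how halving the detectable effect size (or doubling the SD) quadruples the required sample size.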
  97. What is blocking, in terms of PK studies (or studies in general, really)
    randomization in blocks, to ensure equal numbers of each treatment throughout enrolment
  98. What is Stratification in terms of PK studies
    attempt to ensure all groups of subjects are represented in each sequence or block based on some baseline characteristic, like sex (ex. equal numbers of women in Drug A followed by Drug B as in Drug B followed by Drug A)
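Blocking (card 97) can be sketched as permuted-block randomization. This toy version is illustrative only:

```python
import random

def block_randomize(n_blocks, block=("A", "A", "B", "B")):
    """Permuted-block randomization: within each block of 4, treatments
    A and B appear equally often, keeping group sizes balanced as
    subjects enrol. Illustrative sketch, not production code.
    """
    sequence = []
    for _ in range(n_blocks):
        b = list(block)
        random.shuffle(b)  # random order within the block
        sequence.extend(b)
    return sequence

seq = block_randomize(3)  # 12 assignments: exactly 6 A's and 6 B's
```

Stratification would simply run this separately within each stratum (e.g. one block sequence for women, one for men).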
  99. What are three different types of trials you can do when comparing treatments?
    • Superiority
    • Equivalence
    • Non-inferiority
  100. Prior to starting a clinical trial, the sponsor will submit study info to health canada. What does this proposal contain, and what is health canada's main concern? What do you get if health canada is ok to let you proceed
    • Product formulation, protocol, informed consent form
    • Health canada's concern is with subject/patient safety
    • 30-day review period, 7 days if conducted in healthy volunteers
    • if you pass you get a NOL (no-objection letter)
  101. What are some different ways of picking a starting dose?
    • NOAEL: no observable adverse event level; use tox from most sensitive species and least sensitive pharmacological study
    • allometric scaling: scaled based on body weight from other animals
    • mixed-effect modelling/simulation: usually for 2nd generation drugs
  102. What was the TGN1412 incident, and what did it change in legislation?
    first-in-human trial gone bad; led to more stringent guidelines for FIH studies in 2007
  103. Who assesses whether it's fine to proceed to the next dose?
    DSMB: data safety monitoring board (the single ascending dose (SAD) and MAD studies are usually blinded)

    remember that stopping criteria should be written into the protocol beforehand.
  104. What is the main purpose of mass balance studies?
    Characterize metabolites, excretion, and protein binding (use a radioisotope, usually C-14)
  105. % of people using at least 1 prescription drug in the past 30 days? 3 or more? 5 or more? Avg # of drugs in the elderly?
    • 48.5%
    • 21.7%
    • 10 %
    • 8-13
  106. Reasons for performing DDI studies
    • need for dosage adjustments
    • additional therapeutic monitoring
    • contraindication to concomitant use
    • other measures to mitigate risk
    • risk for drug approval
    • (ex. you must evaluate CYP enzyme induction)
  107. How are DDI studies often performed?
    • In cross-over design, with healthy volunteers. 
    • Study should be designed to challenge the drugs and maximize the possibility of detecting an interaction. 
    • Also important to distinguish between interactions affecting the PK of the new drug vs the effect of the new drug on an old drug.
  108. When is population PK used?
    Used when it is unethical or infeasible to sample frequently in the population (geriatrics, pediatrics) 

    See slide 52 for a list of comparisons
  109. What are the uses of a comparative bioavailability study?
    • generic testing
    • site manufacturing changes
    • production scale up or minor tweaks in formulation
    • comparing different formulations for literature
  110. When should you sample in bioequivalence studies
    • before Cmax
    • at Cmax
    • 3 points in terminal elimination
    • AUC 0-t should capture at least 80% of AUC 0->inf
  111. What two things have to be within 80-125%?
    The ratio (point estimate) and confidence interval of Cmax and AUC
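A sketch of the 80-125% check and the geometric mean ratio it applies to. The names and the simplified GMR calculation are illustrative; a real bioequivalence analysis fits a crossover ANOVA on log-transformed data to obtain the 90% CI:

```python
import math

def geometric_mean_ratio(test_values, ref_values):
    """Geometric mean ratio of test vs reference AUC or Cmax values.
    Simplified sketch: difference of mean log values, exponentiated.
    """
    log_diff = (sum(map(math.log, test_values)) / len(test_values)
                - sum(map(math.log, ref_values)) / len(ref_values))
    return math.exp(log_diff)

def within_be_limits(gmr, ci_low, ci_high):
    """Standard bioequivalence criterion: point estimate and 90% CI
    must all lie within 0.80-1.25."""
    return all(0.80 <= x <= 1.25 for x in (gmr, ci_low, ci_high))

ok = within_be_limits(0.95, 0.88, 1.03)     # True: passes
fails = within_be_limits(0.95, 0.78, 1.10)  # False: CI dips below 80%
```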
  112. Compare and contrast bioavailability and bioequivalence
    bioavailability: rate and extent to which active ingredient is absorbed from drug product and becomes available at site of action

    bioequivalence: absence of significant difference in rate and extent to which active ingredient in pharmaceutical alternatives become available at site of action when administered at same molar dose, under similar conditions
  113. How can you estimate sample size in bioequivalence studies? Do these studies have to be blinded in Canada?
    Geometric mean ratio, intra-subject coefficient of variation (ISCV), and power (usually 12 or more subjects).

  114. Why are parallel food studies rarely done?
    Want to compare absorption differences due to food, not inter-person variability. Thus it makes more sense for each subject to serve as their own control.
Card Set: Research Methods Weeks 1-4 (2014-10-12)