Clinical trials all 6 weeks

  1. 5 ways of knowing
    • sensory
    • common knowledge
    • expert opinion
    • logic
    • scientific reasoning
  2. what is research? what makes it clinical?
    Diligent search aimed at discovery, interpretation and application of new or revised facts.

Clinical if it is patient-oriented
  3. What's the difference between research and practice?
    • research is directed at populations
    • practice is directed at individuals
  4. Define Evidence-Based Medicine:
    • use of current best evidence in making decisions about care of individual patients
    • integrates clinical expertise with best available external evidence from systematic research and patient's values and expectations
  5. Steps to providing EBM
    • Ask
    • Acquire
    • Appraise
    • Apply
    • Assess
  6. Difference between experimental and observational quantitative studies?
    • Experimental: intervention under control of experimenter
    • Observational: intervention not under control of experimenter
  7. If you assess exposure and then follow to find outcome what kind of study is this?
    Cohort study (observational)
  8. If you assess outcome, and check to see if they had the exposure, what kind of study is it?
    Case-control (observational)
  9. If you assess both exposure and outcome at the same time, what kind of study is it?
    Cross-sectional study
  10. Define clinical trial
    Clinical research that evaluates more than 1 health-related intervention
  11. hallmark of RCT
    Treatment is assigned by chance, rather than by the subjects or researchers
  12. What is the purpose of each phase in a clinical trial?
    • Phase 1- first in man (safety)
    • Phase 2- safety/efficacy
    • Phase 3- large scale comparative study
    • Phase 4- post-market surveillance
  13. Hawthorne-effect
    Subjects improve/modify an aspect of their behaviour because they know they are being studied
  14. What are the 3 components of high-quality research?
    Internal validity (IV), external validity (EV), high precision
  15. How to Achieve Internal Validity ?
    Low systematic error: results are free of systematic deviation from the truth at any stage (no biases)
  16. Precision
    • Low random error
    • extent to which results are free from sources of variation equally likely to distort estimation in either direction
  17. External Validity
    Extent to which results can be applied to other individuals or settings (a key for practicing EBM)
  18. What is the purpose of random allocation in RCTs?
    • Balance known and unknown confounders at baseline.
    • Reduces selection bias
    • This is why the RCT is the only study design that can establish causality (the groups are otherwise theoretically identical)
  19. Allocation concealment
    • The researcher allocating patients to treatments cannot know the allocation sequence until it is time to allocate
    • reduces selection bias
  20. What purpose does Blinding serve
    • avoids unequal co-intervention among treatments
    • avoids unequal ascertainment of outcome
    • thus reduces measurement bias
  21. Double Dummy
    Used when the two interventions cannot be made identical (e.g., one is a drug and the other is a device); each group receives one active treatment plus a placebo matching the other, preserving blinding
  22. Attrition Bias
    differences that arise in study groups due to exclusion of participants after randomization
  23. Difference between prospective and retrospective cohort study?
    both trace participants from exposure toward outcome

    • prospective: measures exposure now and follows participants into the future for outcomes
    • retrospective: measures exposure from past records and checks whether outcomes have already occurred
  24. Key elements of a good RCT
    • randomization
    • blinding
    • allocation concealment
    • complete follow-up
  25. Efficacy vs effectiveness
    • efficacy: does the intervention work under ideal conditions? (aims to establish causality; high internal validity)
    • effectiveness: does it really work under usual, real-world conditions? (high external validity)
  26. What is equipoise?
    What happens if a trial isn't in clinical equipoise?
    • reasonable doubt about the effectiveness of an intervention
    • Important for equipoise to be present when beginning a RCT

    Will cause clinicians to break allocation concealment
  27. What are two groups that regulate clinical trials on behalf of Health Canada Food and Drug Regulations?
    • TCPS-2: Tri-Council Policy Statement 2
    • ICH-GCP: International Conference on Harmonisation, Good Clinical Practice
  28. 3 different review boards for clinical trials in Canada
    • Pharmaceuticals and Devices (therapeutic products directorate)
    • Biologics and radiopharmaceuticals (Biologics and Genetic Therapies Directorate)
    • Natural health products (Natural health products directorate)
  29. Which drug trials are reviewed by Health Canada? Which are not?
    All phase 1, 2, and 3 trials are reviewed and approved by Health Canada unless the drug is being used for an already approved indication
  30. What are two most important aspects of ICH-GCP?
    • Patient safety is being considered
    • study is credible
  31. What is a research ethics board? and what are its core principles?
    Independent peer review board with 5 or more members (a mix of medical professionals and lay persons)

    • Core principles:
    • 1) Respect for persons (autonomy + informed consent)
    • 2) Concern for welfare (beneficence; maximize benefit, minimize risk)
    • 3) Justice (fairness: all people who could benefit have access to the trial)
  32. What is informed consent?
    • Process by which a subject voluntarily confirms willingness to participate
    • involves prior discussion of the study
    • can be obtained orally or implied
    • requires special provisions in vulnerable groups
    • may be delegated to authorized third party
  33. What are some threats to voluntary participation?
    • Undue influence (participant recruited by person in position of authority) 
    • Coercion: more extreme version of undue influence; involves risk of harm/punishment
    • Incentives: anything offered to participants to encourage participation beyond normal compensation
  34. What is capacity?
    Participant is able to understand the research information and adequately appreciate the consequences of participating or not (research on those who are incapacitated is still allowed, but consent is needed from an authorized third party)
  35. When can you stop or withdraw from a study?
    • Stopping rules: as established in the protocol, based on safety/efficacy
    • Researcher can remove somebody from a study for worsening health, availability of better therapy elsewhere, or non-adherence
    • Patient can remove themselves from the study at any time, no questions asked
  36. When can research be done without consent?
    • If it involves no more than minimal risk
    • the research could not be carried out properly if consent were required
    • it won't affect their welfare
    • is not a therapeutic intervention
    • consent may be obtained later if appropriate
  37. When is deception allowed in research
    • If full disclosure will bias the results
    • must debrief participants after the trial
  38. What does Justice entail in terms of research ethics board principles?
    Particular individuals, groups, or communities should not bear an unfair share of the direct burdens of participation or be unfairly excluded from the potential benefits of research participation
  39. When are placebo controlled trials acceptable?
    • Its use is scientifically and methodologically sound in establishing safety/efficacy of intervention
    • does not compromise safety/health of participants
    • compelling scientific justification
  40. Adverse events
    Any untoward medical occurrence; need not be causal to the treatment
  41. Serious adverse event
    • any untoward medical occurrence that:
    • results in death or a life-threatening event
    • requires hospitalization or prolongs existing hospitalization
    • results in persistent disability
    • results in a birth defect

    Reporting of SAE varies by REB authority
  42. Adverse drug reaction
    adverse event with a reasonable possibility of being related to treatment
  43. Clinical trial protocol (consists of)
    • Research Problem (background+significance)
    • Research Question (hypothesis + objectives)
    • Design (procedures; participants; interventions; outcomes)
    • Statistical Issues (sample size; analytical approach)
  44. Hypothesis vs objectives
    Testable statements vs questions being answered
  45. Elements of the research question
    • Population
    • Intervention
    • Comparison
    • Outcome
    • Timeframe
  46. Null hypothesis
    There is no difference between treatments
  47. Different types of hypothesis tests
    • Superiority
    • Equivalence
    • Non-inferiority
  48. Study sample vs Target population
    • Subset of target population who will be part of study
    • Group that the study could be generalized to
  49. Health care settings are primary, secondary, tertiary; what does each setting consist of
    • primary: outpatient
    • secondary: specialist
    • tertiary: hospital
  50. What are two broad rationales for selection criteria?
    • Ethical rationale: can't deny treatment or impose a contraindicated treatment
    • Scientific rationale: groups under treatment should have the same admission criteria
  51. Inclusion criteria serve to maximize
    • rate of outcome
    • likely benefit of trial
    • generalizability 
    • ease of recruitment
  52. Exclusion criteria serve to minimize
    • harm
    • ineffectiveness
    • non-adherence
    • loss-to-followup
    • practical problems

    Researchers may use a run-in phase to assess participants
  53. Representative sample
    a sample that is similar to the target population in all characteristics
  54. Probability/Random sampling method; what it is, and how it can be further broken down into subgroups
    • All members of the population have a known (often equal) chance of being selected
    • can be further divided into (see the sketch below):
    • simple (every member equally likely to be picked)
    • stratified (sampled within groups organized according to a certain characteristic)
    • cluster (whole groups of people are sampled)
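A minimal sketch of the three probability-sampling schemes using only Python's standard library; the population, clinic sites, and sample sizes below are hypothetical.

```python
import random

# Hypothetical sampling frame: 1,000 patients tagged with clinic site and sex.
population = [{"id": i, "site": random.choice("ABCDE"), "sex": random.choice("MF")}
              for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, k=50)

# Stratified sampling: sample separately within each stratum (here, sex).
strata = {}
for person in population:
    strata.setdefault(person["sex"], []).append(person)
stratified = [p for group in strata.values() for p in random.sample(group, k=25)]

# Cluster sampling: randomly pick whole groups (clinic sites) and take everyone in them.
chosen_sites = random.sample("ABCDE", k=2)
cluster = [p for p in population if p["site"] in chosen_sites]

print(len(simple), len(stratified), len(cluster))
```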
  55. Examples of Non-probability/Non-random sampling
    • Systematic (pick every 10th person who applies)
    • convenience (literally because it is convenient to pick people; not very representative)
    • purposive (pick people because they have a certain characteristic)
  56. 2 possible meanings of selection bias
    Not all individuals in a population have equal chance at being selected to participate

    Intervention and control groups differ from each other
  57. Ways of exposing participants to interventions
    • parallel (subjects receive only one treatment)
    • crossover (each subject receives every treatment)
    • cluster (groups of individuals are randomized)
  58. Allocation ratio
    ratio of participants intended for each study group
  59. Name two examples of negative control groups
    • placebo
    • absence of treatment
  60. What is a head to head trial?
    What is an add-on trial? When are add-on trials beneficial?
    • Drug X vs Drug Y
    • Drug X + placebo vs Drug X + Drug Y
    • add-on trials are beneficial when it is not ethical to simply give placebo alone
  61. 3 methods of randomization (into treatments/control)
    • Simple- randomize all subjects independently
    • Blocked- randomize subjects within blocks of certain characteristics (say sex; an example of stratification), keeping arm sizes balanced
    • Cluster- randomly allocate a group rather than an individual (see the sketch below)
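A sketch of the three allocation schemes as they might be coded for a trial; the arm labels and block size are illustrative assumptions.

```python
import random

def simple_randomization(n):
    """Each subject is independently assigned by the equivalent of a coin flip."""
    return [random.choice(["treatment", "control"]) for _ in range(n)]

def blocked_randomization(n, block_size=4):
    """Within each block exactly half go to each arm, keeping group sizes balanced."""
    assignments = []
    while len(assignments) < n:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

def cluster_randomization(clusters):
    """Whole groups (e.g., clinics), rather than individuals, are allocated to an arm."""
    return {cluster: random.choice(["treatment", "control"]) for cluster in clusters}

print(simple_randomization(6))
print(blocked_randomization(6))
print(cluster_randomization(["clinic_A", "clinic_B", "clinic_C"]))
```

For stratified randomization, blocked_randomization would simply be run separately within each stratum (e.g., once per sex).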
  62. An acronym to help remember what allocation concealment entails is SNOSE:
    • Sequential
    • Numbered
    • Opaque
    • Sealed
    • Envelope
  63. Difference between a blinded trial and an open trial?
    • In a blinded trial, one or more groups do not know treatment assignment
    • In an open trial everyone knows treatment assignments
  64. How can you help control measurement bias in unblinded trials?
    • Standardize procedures
    • minimize co-intervention
    • blind outcome assessor 
    • choose objective (hard) outcomes
  65. What are some desired features of primary and secondary outcomes?
    • Easy to record
    • free of measurement error
    • clinically relevant
    • chosen before study initiation
    • can be observed independent of treatment assignment
  66. 6 Outcomes of Disease
    • Death
    • Disability 
    • Disease
    • Discomfort
    • Destitution
    • Dissatisfaction
  67. What are composite outcomes? What are surrogate outcomes?
    • Composite outcomes combine several individual outcomes into one; this reduces the sample size needed to show an association
    • Surrogate outcomes are substitute outcomes that are associated with a relevant clinical outcome but are not, in themselves, clinically relevant.
  68. Explanatory/efficacy trials using per-protocol analysis will be similar to intention-to-treat (management/pragmatic trials) if there are low levels of
    • dropouts
    • non-compliance
    • co-intervention
    • contamination
    • development of co-morbidity
  69. What are some ways studies determine their sample size?
    • Fixed size: based on a priori sample size calculation
    • mega trial: very large sample size
    • sequential: variable size, not known at the outset (interim analyses continue until significance or a stopping rule is met)
    • N of 1: single subject
  70. Give an appropriate order for operational definition, variable, and concept
    Give an example
    • Concept-> operational definition-> variable
    • e.g., concept: cannulation difficulty → operational definition: operation success → variable: # of pokes
  71. What are two broad categories in which variables can be measured?
    • Categorical/qualitative: not variable in degree, but in type
    • Continuous/quantitative: exists in some degree along a continuum
  72. What are the four types of data?
    • Nominal: categorical, no order
    • Ordinal: categorical, there is an order, though the intervals are not necessarily equal (e.g., best to worst)
    • Interval: continuous, no true zero
    • Ratio: continuous , true zero exists
  73. What is discrete data?
    units of the data are limited to integers (ex. # of people)
  74. moderator variables
    special type of independent variable; selected to see if it affects/modifies relationship between IV and DV
  75. What is reliability? 
    What are some causes?
    • Precision/consistency/reproducibility of a measure
    • poor precision is due to random error
    • could be from: observers, instruments, subjects
  76. How can you increase reliability?
    • Standardize measurements+methods
    • Train observer
    • Use reliable instruments
    • Take repeated measurements
  77. What is validity? Poor validity is due to? Could be from?
    What are some strategies for enhancing validity?
    • Accuracy of a measure
    • Poor validity is due to systematic errors (bias)
    • could be from observers, instruments, subjects

    • Standardize measurement methods
    • Train/certify observers
    • use calibrated instruments
    • blind individuals
    • use objective variables
  78. Reliability testing
    • test-retest: apply the same test to the same group after a pre-specified time interval
    • observer: compare scores from 2 or more observers
    • Equivalent forms: give two alternate forms of a tool to the same group during the same period and see if they match
    • internal consistency: determine how components of the tool score relative to one another (see the sketch below)
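Internal consistency is commonly quantified with Cronbach's alpha, which compares the variance of each item's scores to the variance of the total score. A minimal stdlib sketch with hypothetical questionnaire data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]           # per-respondent total score
    item_variance = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Three hypothetical items answered by five respondents:
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(items), 2))   # ~0.89: items score consistently with one another
```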
  79. Validity testing
    • Content- extent to which measure contains all dimensions of a construct (expert judgement)
    • Criterion- relate scores obtained with a different measure of the same variable
    • Construct- assess predictions made from theory
    • Face validity: does it seem like it measures what it intends to?
  80. Describe descriptive vs inferential statistics
    descriptive: summaries of information gathered from samples of a population

    inferential: captures data from 2 or more groups; used to draw inferences from the sample to the population and to see how likely it is that the effect is due to chance (two types of statistics in this category: parametric, non-parametric)
  81. Describe parametric vs non-parametric stats
    • part of inferential statistics
    • parametric: assumptions are made about the nature of the population (normally distributed, etc.); can't be used for categorical data or with small sample sizes
    • non-parametric: minimal assumptions made about the nature of the population; less statistical power
  82. Example of parametric and non parametric tests
    • Parametric: t-test
    • Non-parametric: chi-squared, Mann-Whitney U test (see the sketch below)
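A quick illustration of running both kinds of test on the same two hypothetical samples (assumes the scipy package is available):

```python
from scipy import stats

treatment = [5.1, 4.8, 6.2, 5.9, 5.4, 6.0, 5.7]
control = [4.2, 4.9, 4.5, 5.0, 4.3, 4.7, 4.4]

# Parametric: independent-samples t-test (assumes roughly normal data).
t_stat, t_p = stats.ttest_ind(treatment, control)

# Non-parametric: Mann-Whitney U test (rank-based, minimal assumptions).
u_stat, u_p = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```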
  83. Positive study
    Study where the null is rejected (if you can't reject the null, the study is inconclusive or indeterminate)
  84. Qualitative data is to _____
    as _______ is to Mean difference
    Relative risk; quantitative
  85. Calculate RR, ARR, RRR, NNT, MD (Ie, Ic = incidence in experimental and control groups)
    • RR = Ie/Ic
    • ARR = Ic - Ie
    • RRR = (Ic - Ie)/Ic = 1 - RR
    • NNT = 1/ARR
    • MD = difference in group means (continuous outcomes); see the worked example below
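A worked example of these formulas with hypothetical incidences (10% of treated vs. 20% of control participants experience the outcome):

```python
Ie, Ic = 0.10, 0.20            # incidence: experimental vs control

RR = Ie / Ic                   # relative risk            -> 0.50
ARR = Ic - Ie                  # absolute risk reduction  -> 0.10
RRR = (Ic - Ie) / Ic           # relative risk reduction  -> 0.50 (same as 1 - RR)
NNT = 1 / ARR                  # number needed to treat   -> 10
print(RR, ARR, RRR, NNT)

# MD applies to continuous outcomes instead (hypothetical means):
mean_treatment, mean_control = 5.4, 4.6
MD = mean_treatment - mean_control      # -> 0.8
```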
  86. What confidence interval implies significance for RR? What confidence interval implies significance for MD?

    What's the meaning of a confidence interval?
    Why are confidence intervals being used increasingly in statistical reporting?
    • RR: the 95% CI must exclude 1
    • MD: the 95% CI must exclude 0

    If the study were repeated many times, 95% of the intervals computed would contain the true value (loosely: the true value lies within the interval with 95% confidence).

    They provide both precision and significance (see the sketch below).
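A sketch of how the 95% CI for an RR is typically obtained: compute it on the log scale, then back-transform. The 2x2 counts below are hypothetical.

```python
import math

a, n1 = 10, 100    # experimental arm: 10 events among 100 participants
c, n2 = 20, 100    # control arm:      20 events among 100 participants

RR = (a / n1) / (c / n2)

# Standard error of ln(RR), then back-transform the interval limits.
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
lower = math.exp(math.log(RR) - 1.96 * se)
upper = math.exp(math.log(RR) + 1.96 * se)

print(f"RR = {RR:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
print("significant:", not (lower <= 1 <= upper))   # significant only if CI excludes 1
```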
  87. What is the equation for confidence?
    How does this relate to confidence intervals?
    Confidence = (signal/noise) * sqrt(sample size)

    • the higher the confidence, the narrower the confidence interval
    • signal is the difference between treatment and control
    • noise is chance variability (a numeric illustration follows)
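A numeric illustration of the heuristic: with signal and noise held fixed, quadrupling the sample size halves the width of a 95% confidence interval (the standard deviation below is hypothetical).

```python
import math

sd = 2.0                                     # "noise": chance variability of the outcome
for n in (25, 100, 400):
    half_width = 1.96 * sd / math.sqrt(n)    # 95% CI half-width for a sample mean
    print(f"n = {n:4d}  ->  CI half-width = {half_width:.2f}")
# n=25 -> 0.78, n=100 -> 0.39, n=400 -> 0.20: each 4x increase in n halves the interval.
```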
  88. How can you minimize random error? Systematic error?
    • random error: reduce random variation, increase sample size
    • systematic error: combat by using study designs that reduce the sizes of various biases
  89. What factors will help you determine an effective sample size (estimating sample size)? (see the sketch below)
    • Characteristics of the data (variability or proportion)
    • Estimation of effect (magnitude of differences to be detected)
    • Type 1 error (where you set it)
    • Type 2 error (where you set it)
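A sketch of a standard two-proportion sample size calculation built from exactly these ingredients: the event rates (characteristics of the data), the difference to detect, and the chosen type 1 and type 2 error rates. Stdlib only; the rates are hypothetical.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants per arm to compare two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # set by the type 1 error rate
    z_beta = NormalDist().inv_cdf(power)            # power = 1 - type 2 error rate
    variability = p1 * (1 - p1) + p2 * (1 - p2)     # characteristics of the data
    effect = (p1 - p2) ** 2                         # magnitude of difference to detect
    return math.ceil((z_alpha + z_beta) ** 2 * variability / effect)

# To detect a drop in event rate from 20% to 10% at alpha=0.05 with 80% power:
print(n_per_group(0.20, 0.10))   # roughly 200 per arm
```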
  90. Define type 1 errors
    Define type 2 errors
    Define statistical power
    • Type 1: false positive; rejecting the null when the null is actually true
    • Type 2: false negative; failing to reject the null when the null is false
    • Power: probability that a trial will find a statistically significant difference when a difference truly exists (1 - B); the sketch below estimates it by simulation
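Power can also be estimated empirically: simulate many trials under the assumed effect and count how often the null is rejected. A stdlib sketch using a two-proportion z-test (all parameters hypothetical):

```python
import random
from statistics import NormalDist

def simulated_power(p_control=0.20, p_treatment=0.10, n=200, trials=2000):
    """Fraction of simulated trials whose two-proportion z-test rejects the null."""
    z_crit = NormalDist().inv_cdf(0.975)            # two-sided alpha = 0.05
    rejections = 0
    for _ in range(trials):
        e_t = sum(random.random() < p_treatment for _ in range(n))
        e_c = sum(random.random() < p_control for _ in range(n))
        p_t, p_c = e_t / n, e_c / n
        p_pool = (e_t + e_c) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        if se > 0 and abs(p_t - p_c) / se > z_crit:
            rejections += 1
    return rejections / trials                      # estimate of power, 1 - B

print(simulated_power())   # lands near the ~80% the sample size above was chosen for
```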
  91. What is fishing?
    Choosing to report only significant outcomes
  92. Define publication
    Fulfillment of the responsibility to communicate results publicly for external evaluation (authors must contribute sufficiently to take responsibility for the work)
  93. What are the "Guidelines for reporting Health research" (very generally)
    advice about how to report research methods and findings (specifies minimum set of items required for a transparent account of methods and results; serves to particularly elucidate how studies may be biased)
  94. What are the basic requirements for reporting health research?
    • Journals require authors to comply with the "Uniform requirements for manuscripts submitted to biomedical journals"
    • prepared by the International Committee of Medical Journal Editors
    • includes requirements like ethical considerations (e.g. informed consent)
  95. What is the standard format for publishing? What reporting guideline is specific to RCT?
    • Introduction
    • Methods
    • Results
    • Discussion

    CONSORT is the guideline specific to RCT
  96. What is peer-review, and what purpose does it serve?
    Articles are evaluated by experts in the same field before publication; adds credibility
  97. What are the 3 different styles of titles?
    Descriptive, interrogative, affirmative
  98. What are two different types of abstracts?
    Structured (w/ subheadings) or unstructured (no subheadings)
  99. How does CONSORT help to increase transparency?
    • Requires RCT have a registration number (with a clinical trials registry)
    • Requires a link to where the full protocol was published (to elucidate if fishing occurred)
    • Requires disclosure of sources of funding
  100. What is Publication Bias
    Trials are published selectively based on magnitude and direction of study results (studies without significant results are less likely to be published)
  101. What 3 things increased treatment effect in RCT, and by how much?
    • 18% increased by lack of allocation concealment
    • 12% increase by lack of randomization
    • 9% increase due to lack of blinding
    • ??? for attrition bias
  102. Define critical appraisal
    process of assessing research by considering validity, results and relevance
  103. What questions might you ask to assess internal validity?
    • what factors are known to affect the dependent variable?
    • what is the likelihood of comparison groups differing on each factor 
    • evaluate treatments on how likely they will have an effect
  104. How can you evaluate internal validity?
    • minimize systematic errors (bias)
    • selection bias
    • performance bias
    • detection bias: outcome more likely to be reported in a certain subset of patients
    • attrition bias
  105. 3 types of Critical appraisal tools
    • scale-based
    • check-list based
    • domain-based
  106. What is the number of RCTs per year?
    What percentage of patients don't receive treatments proven to be effective
    What percentage of patients receive unneeded or harmful care?
    • 26,000
    • 30-40%
    • 20-25%
  107. What are clinical practice guidelines?
    Systematically developed statements that assist clinicians and patients in making decisions about appropriate treatment for a specific condition or circumstance