5 ways of knowing
- common knowledge
- expert opinion
- scientific reasoning
what is research? what makes it clinical?
Diligent search aimed at discovery, interpretation and application of new or revised facts.
Clinical if it is patient-oriented
What's the difference between research and practice?
- research is directed at populations
- practice is directed at individuals
Define Evidence-Based Medicine:
- use of current best evidence in making decisions about care of individual patients
- integrates clinical expertise with best available external evidence from systematic research and patient's values and expectations
Steps to providing EBM
Difference between experimental and observational quantitative studies?
- Experimental: intervention under control of experimenter
- Observational: intervention not under control of experimenter
If you assess exposure and then follow to find outcome what kind of study is this?
Cohort study (observational)
If you assess outcome, and check to see if they had the exposure, what kind of study is it?
If you assess both exposure and outcome at the same time, what kind of study is it?
Define clinical trial
Clinical research that evaluates more than one health-related intervention
Hallmark of an RCT
Treatment is assigned by chance, rather than by subjects or researchers
What is the purpose of each phase in a clinical trial?
- Phase 1- first in man (safety)
- Phase 2- safety/efficacy
- Phase 3- large scale comparative study
- Phase 4- post-market surveillance
What is the Hawthorne effect?
Subjects improve/modify an aspect of their behaviour because they know they are being studied
What are the 3 components of high-quality research?
Internal validity (IV), external validity (EV), and high precision
How do you achieve internal validity?
Low systematic error (results free of systematic deviation from the truth at any stage (no biases))
How do you achieve high precision?
- Low random error
- extent to which results are free from sources of variation equally likely to distort the estimate in either direction
What is external validity?
Extent to which results can be applied to other individuals or settings (a key for practicing EBM)
What is the purpose of random allocation in RCTs?
- Balances known and unknown confounders at baseline
- Reduces selection bias
- This is why the RCT is the only study design that can establish causality (the groups are otherwise comparable)
What is allocation concealment?
- The researcher allocating patients to treatments cannot know the order of allocation until it is time to allocate
- -reduces selection bias
What purpose does blinding serve?
- avoids unequal co-intervention among treatments
- avoids unequal ascertainment of outcomes
- -thus reduces measurement bias
Used when one intervention is a drug and the other is a device
What is attrition bias?
Differences that arise in study groups due to exclusion of participants after randomization
Difference between prospective and retrospective cohort study?
Both follow participants forward in time, from exposure to outcome
- prospective measures exposure now, and goes to the future
- retrospective measures exposure in the past, and checks to see if outcomes were present.
Key elements of a good RCT
- allocation concealment
- complete follow-up
Efficacy vs effectiveness
- efficacy: does the intervention work under ideal conditions? (establishes causality; high internal validity)
- effectiveness: does it work under usual, real-world conditions? (high external validity)
What is equipoise?
What happens if clinical equipoise is absent?
- Reasonable doubt about the effectiveness of an intervention
- Equipoise must be present when beginning an RCT
- Without it, clinicians may break allocation concealment
What are two sets of guidelines that regulate clinical trials on behalf of Health Canada's Food and Drug Regulations?
- TCPS-2: Tri-Council Policy Statement 2
- ICH-GCP: International Conference on Harmonisation, Good Clinical Practice
3 different review bodies (directorates) for clinical trials in Canada
- Pharmaceuticals and devices (Therapeutic Products Directorate)
- Biologics and radiopharmaceuticals (Biologics and Genetic Therapies Directorate)
- Natural health products (Natural Health Products Directorate)
Which drug trials are reviewed by Health Canada? Which are not?
All phase 1, 2, and 3 trials are reviewed and approved by Health Canada, unless the drug is being used for an already-approved indication
What are two most important aspects of ICH-GCP?
- Patient safety is being considered
- study is credible
What is a research ethics board? and what are its core principles?
Independent peer review board with 5 or more members (medical professionals and lay persons)
- Core principles:
- 1) Respect for persons (autonomy + informed consent)
- 2) Concern for welfare (beneficence: maximize benefit, minimize risk)
- 3) Justice (fairness: all people who could benefit have access to the trial)
What is informed consent?
- Process by which a subject voluntarily confirms willingness to participate
- -involves prior discussion of study
- -can be obtained orally/implied
- requires special provisions in vulnerable groups
- may be delegated to authorized third party
What are some threats to voluntary participation?
- Undue influence (participant recruited by person in position of authority)
- Coercion: more extreme version of undue influence; involves risk of harm/punishment
- Incentives: anything offered to participants to encourage participation beyond normal compensation
What is capacity?
Participant is able to understand the study information and appreciate the consequences of participating or not (research on those who lack capacity is still allowed, but consent must be obtained from an authorized third party)
When can you stop/withdraw from a study?
- Stopping rules: as established in the protocol, based on safety/efficacy
- Researcher can remove somebody from a study for worsening health, availability of better therapy elsewhere, or non-adherence
- Patient can remove themselves from the study at any time, no questions asked
When can research be done without consent?
- If it involves no more than minimal risk
- it is impossible to carry out the research properly if consent is required
- it wont affect their welfare
- is not a therapeutic intervention
- consent may be obtained later if appropriate
When is deception allowed in research?
- If full disclosure would bias the results
- must debrief participants after the trial
What does Justice entail in terms of research ethics board principles?
Particular individuals, groups, or communities should not bear an unfair share of the direct burdens of participation, or be unfairly excluded from the potential benefits of research participation
When are placebo controlled trials acceptable?
- Its use is scientifically and methodologically sound in establishing safety/efficacy of intervention
- does not compromise safety/health of participants
- compelling scientific justification
What is an adverse event?
Any untoward medical occurrence; need not be causally related to the treatment
Serious adverse event
- any untoward medical occurrence that:
- results in death or a life-threatening event
- requires hospitalization or prolongs existing hospitalization
- results in persistent disability
- results in a congenital anomaly/birth defect
Reporting of SAE varies by REB authority
Adverse drug reaction
adverse event with a reasonable possibility of being related to treatment
Clinical trial protocol (consists of )
- Research Problem (background+significance)
- Research Question (hypothesis + objectives)
- Design (procedures; participants; interventions; outcomes)
- Statistical Issues (sample size; analytical approach)
Hypothesis vs objectives
Hypotheses are testable statements; objectives are the questions being answered
Elements of the research question
What is the null hypothesis?
There is no difference between the treatments
Different types of hypothesis tests
Study sample vs Target population
- Subset of target population who will be part of study
- Group that the study could be generalized to
Health care settings are primary, secondary, tertiary; what does each setting consist of
- primary: outpatient
- secondary: specialist
- tertiary: hospital
What are two broad rationale for selection criteria
- Ethical rationale: can't deny treatment or impose a contraindicated treatment
- Scientific rationale: groups under treatment should have the same admission criteria
Inclusion criteria serves to maximize
- rate of outcome
- likely benefit of trial
- ease of recruitment
Exclusion criteria serves to minimize
- practical problems
Researchers may use a run-in phase to assess participants
What is a representative sample?
A sample that is similar to the target population in all characteristics
Probability/Random sampling method; what it is, and how it can be further broken down into subgroups
- All members of the population have a known (typically equal) chance of being selected
- can be further divided to:
- simple (normal)
- stratified (groups organized according to certain characteristic)
- cluster (group of people are randomized)
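As a rough illustration of the three probability-sampling variants above, here is a minimal Python sketch; the population, the sex characteristic used for strata, and the cluster size are all invented for the example:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: 100 people, each with a sex characteristic
population = [{"id": i, "sex": "F" if i % 2 == 0 else "M"} for i in range(100)]

# Simple: draw directly from the whole population
simple = random.sample(population, 10)

# Stratified: draw separately within each stratum (here, sex)
females = [p for p in population if p["sex"] == "F"]
males = [p for p in population if p["sex"] == "M"]
stratified = random.sample(females, 5) + random.sample(males, 5)

# Cluster: randomly pick whole groups (e.g. clinics of 10 people each)
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [p for c in random.sample(clusters, 2) for p in c]

print(len(simple), len(stratified), len(cluster_sample))  # 10 10 20
```

Note how stratification guarantees the sample's sex split, while cluster sampling trades precision for the practicality of recruiting whole groups.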
Examples of Non-probability/Non-random sampling
- Systematic (pick every 10th person who applies)
- Convenience (chosen simply because they are easy to reach; not very representative)
- Purposive (chosen because they have a certain characteristic)
2 possible meanings of selection bias
Not all individuals in a population have equal chance at being selected to participate
Intervention and control groups differ from each other
Various ways of exposing participants to interventions
- Parallel (subjects only receive one treatment)
- Crossover (each subject receives every treatment)
- Cluster (groups of individuals are randomized)
What is the allocation ratio?
The ratio of participants intended for each study group
Name two examples of negative control groups
- absence of treatment
- placebo
What is a head to head trial?
What is an add-on trial? When are add-on trials beneficial?
- Drug X vs Drug Y
- Drug X + placebo vs Drug X + Drug Y
- add-on trials are beneficial when it is not ethical to simply give placebo alone
3 methods of randomization (into treatments/control)
- Simple- randomize all subjects
- Blocked- randomize subjects within blocks defined by a certain characteristic (e.g. sex; an example of stratification)
- Cluster- randomly allocate a group rather than an individual
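The contrast between simple and blocked randomization can be sketched in Python; this is an illustrative toy, not trial software, and the arm names and block size are assumptions (cluster randomization would allocate whole groups in the same way):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simple_randomize(n):
    """Simple randomization: an independent coin flip for each subject."""
    return [random.choice(["treatment", "control"]) for _ in range(n)]

def block_randomize(n, block_size=4):
    """Blocked randomization: within each block, exactly half go to each
    arm, so the arms stay balanced after every completed block."""
    assignments = []
    while len(assignments) < n:
        block = (["treatment"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

simple = simple_randomize(20)   # may be unbalanced by chance
blocked = block_randomize(20)   # guaranteed 10 treatment / 10 control
print(blocked.count("treatment"), blocked.count("control"))  # 10 10
```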
An acronym to help remember what Allocation concealment entails is:
Difference between a blinded trial and an open trial?
- In a blinded trial, one or more groups do not know the treatment assignment
- In an open trial, everyone knows the treatment assignments
How can you help control measurement bias in unblinded trials?
- Standardize procedures
- minimize co-intervention
- blind outcome assessor
- choose objective (hard) outcomes
What are some desired features of primary and secondary outcomes?
- Easy to record
- free of measurement error
- clinically relevant
- chosen before study initiation
- can be observed independent of treatment assignment
6 Outcomes of Disease
What are composite outcomes? What are surrogate outcomes?
- Composite outcomes combine several individual outcomes into one; this helps reduce the sample size needed to show an association
- Surrogate outcomes are substitute outcomes that are associated with a relevant clinical outcome but are not, in themselves, clinically relevant
Explanatory/efficacy trials using per-protocol analysis will give similar results to intention-to-treat (management/pragmatic) trials if there are low levels of
- development of co-morbidity
What are some ways studies determine their sample size?
- Fixed size: based on a priori sample size calculation
- mega trial: very large sample size
- sequential: variable size, not known at the outset (analyses repeated until significance is reached)
- N of 1: single subject
Give an appropriate order for operational definition, variable, and concept
Give an example
- Concept -> operational definition -> variable
- Cannulation difficulty -> operation success -> # of pokes
What are two broad categories in which variables can be measured?
- Categorical/qualitative: varies in type, not in degree
- Continuous/quantitative: exists in some degree along a continuum
What are the four types of data?
- Nominal: categorical, no order
- Ordinal: categorical, there is an order, though the intervals are not necessarily equal (ex. best to worst)
- Interval: continuous, no true zero
- Ratio: continuous , true zero exists
What is discrete data?
units of the data are limited to integers (ex. # of people)
What is a moderating variable?
A special type of independent variable, selected to see if it affects/modifies the relationship between the IV and DV
What is reliability? What causes poor reliability?
- Precision/consistency/reproducibility of a measure
- poor precision is due to random error
- sources: observers, instruments, subjects
How can you increase reliability?
- Standardize measurements+methods
- Train observer
- Use reliable instruments
- Take repeated measurements
What is validity? Poor validity is due to? Could be from?
What are some strategies for enhancing validity?
- Accuracy of a measure
- Poor validity is due to systematic errors (bias)
- could be from observers, instruments, subjects
- Standardize measurement methods
- Train/certify observers
- use calibrated instruments
- blind individuals
- use objective variables
What are some ways to assess reliability?
- Test-retest: apply the same test to the same group after a pre-specified time interval
- Observer: compare scores from 2 or more observers
- Equivalent forms: apply two alternate forms of a tool to the same group during the same period and see if they match
- Internal consistency: determine how components of the tool score relative to each other
What are the types of validity?
- Content: extent to which the measure contains all dimensions of a construct (expert judgement)
- Criterion: relate scores obtained to a different measure of the same variable
- Construct: assess predictions made from theory
- Face validity: does it seem like it measures what it intends to?
Describe descriptive vs inferential statistics
descriptive: summaries of information gathered from samples of a population
inferential: captures data from 2 or more groups; used to draw inferences between sample and population, and to see how likely it is that an effect is due to chance (two types of statistics in this category: parametric, non-parametric)
Describe parametric vs non-parametric stats
- part of inferential statistics
- parametric: assumptions made about nature of population (normally distributed etc. ); can't be used for categorical data or with small sample size
- non-parametric: minimal assumptions made about the nature of the population; less statistical power
Example of parametric and non parametric tests
- Parametric: t-test
- Non-parametric: chi-squared, Mann-Whitney U test
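To make the parametric/non-parametric contrast concrete, here is a hand-rolled Python sketch of both statistics; the two groups' values are invented, and a real analysis would use a statistics library rather than these bare formulas:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: a parametric comparison of two group means,
    assuming roughly normally distributed data."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a)
                                      + variance(b) / len(b))

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: a rank-based, non-parametric alternative
    that makes minimal assumptions about the population."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

# Hypothetical outcome scores for two trial arms
treatment = [5.1, 6.0, 5.8, 6.3, 5.5]
control = [4.2, 4.8, 5.0, 4.5, 4.9]
print(round(welch_t(treatment, control), 2))  # 4.19
print(mann_whitney_u(treatment, control))     # 25.0
```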
What is a positive (conclusive) study?
A study where the null is rejected (if you can't reject the null, the study is inconclusive or indeterminate)
Qualitative data is to _____
as _______ is to Mean difference
Relative risk; quantitative
Calculate RR, ARR, RRR, NNT, MD
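The card above can be worked through with a small Python sketch; the event counts and means below are hypothetical numbers, not taken from any study:

```python
# Hypothetical 2x2 trial: 9/100 events on treatment, 15/100 on control
events_t, n_t = 9, 100
events_c, n_c = 15, 100

risk_t = events_t / n_t      # risk in treatment arm = 0.09
risk_c = events_c / n_c      # risk in control arm = 0.15

rr = risk_t / risk_c         # relative risk = 0.60
arr = risk_c - risk_t        # absolute risk reduction = 0.06
rrr = arr / risk_c           # relative risk reduction = 0.40
nnt = 1 / arr                # number needed to treat, about 16.7

# Mean difference applies to a continuous outcome (e.g. blood pressure)
mean_t, mean_c = 128.0, 134.0
md = mean_t - mean_c         # mean difference = -6.0

print(round(rr, 2), round(arr, 2), round(rrr, 2), round(nnt, 1), md)
```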
What confidence interval implies significance for RR? What confidence interval implies significance for MD?
What's the meaning of a confidence interval?
Why are confidence intervals being used increasingly in statistical reporting?
- RR= CI must exclude 1
- MD= CI must exclude 0
If the study were repeated many times, 95% of the resulting confidence intervals would contain the true value.
Provide precision and significance
What is the equation for confidence?
How does this relate to confidence intervals?
Confidence= (signal/noise) * sqrt(sample size)
- higher the confidence, the smaller the confidence interval
- signal is the difference between treatment and control
- noise is chance variability.
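A tiny Python sketch of the confidence equation above shows how a larger sample raises confidence (and so narrows the confidence interval) for the same signal and noise; the numbers are arbitrary:

```python
from math import sqrt

def confidence(signal, noise, n):
    """Confidence = (signal / noise) * sqrt(sample size), per the card above."""
    return (signal / noise) * sqrt(n)

# Same signal (treatment-control difference) and noise (chance variability),
# but quadrupling the sample size doubles the confidence
print(confidence(2.0, 5.0, 25))   # 2.0
print(confidence(2.0, 5.0, 100))  # 4.0
```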
How can you minimize random error? Systematic error?
- random error: reduce random variation, increase sample size
- systematic error: combat by using study designs that reduce the sizes of various biases
What factors will help you determine an effective sample size (estimating sample size)
- Characteristics of the data (variability or proportion)
- Estimation of effect (magnitude of differences to be detected)
- Type 1 error (where you set it)
- Type 2 error (where you set it)
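These four ingredients feed directly into a standard sample-size formula for comparing two proportions. The sketch below assumes a two-sided type 1 error of 0.05 and 80% power (z values 1.96 and 0.84), with made-up event rates:

```python
from math import ceil, sqrt

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for comparing two proportions:
    n = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2."""
    variability = p1 * (1 - p1) + p2 * (1 - p2)   # characteristics of the data
    effect = (p1 - p2) ** 2                       # magnitude to be detected
    return ceil((z_alpha + z_beta) ** 2 * variability / effect)

# Detecting a drop in event rate from 15% to 9%
print(n_per_group(0.15, 0.09))  # 457
```

Shrinking the difference to be detected, or tightening either error rate, drives the required sample size up sharply.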
Define type 1 errors
Define type 2 errors
Define statistical power
- 1: false positive; rejecting the null when the null is true
- 2: false negative; failing to reject the null when the null is false
- 3: probability that a trial will find a statistically significant difference when a difference truly exists (1-B)
What is fishing?
Testing many outcomes and reporting only the significant ones
Fulfillment of the responsibility to communicate results publicly for external evaluation (authors must contribute sufficiently in order to take responsibility for the work)
What are the "Guidelines for reporting Health research" (very generally)
advice about how to report research methods and findings (specifies minimum set of items required for a transparent account of methods and results; serves to particularly elucidate how studies may be biased)
What are the basic requirements for reporting health research?
- Journals require authors to comply with : "Uniform requirements for manuscripts submitted to biomedical journals"
- -prepared by the International Committee of Medical Journal Editors
- -includes requirements like ethical considerations (e.g. informed consent)
What is the standard format for publishing? What reporting guideline is specific to RCTs?
- IMRaD (Introduction, Methods, Results, and Discussion) is the standard format
- CONSORT is the reporting guideline specific to RCTs
What is peer-review, and what purpose does it serve?
Articles evaluated by experts in the same field before it's published; adds credibility
What are the 3 different styles of titles?
Descriptive, interrogative, affirmative
What are two different types of abstracts?
Structured (w/ subheadings) or unstructured (no subheadings)
How does CONSORT help to increase transparency?
- Requires RCTs to have a registration number (with a clinical trials registry)
- Requires a link to where the full protocol was published (to reveal whether fishing occurred)
- Requires disclosure of sources of funding
What is Publication Bias
Trials are published selectively based on magnitude and direction of study results (studies without significant results are less likely to be published)
What 3 things increased treatment effect in RCT, and by how much?
- 18% increase from lack of allocation concealment
- 12% increase from lack of randomization
- 9% increase from lack of blinding
- ??? for attrition bias
Define critical appraisal
process of assessing research by considering validity, results and relevance
What questions might you ask to assess internal validity?
- what factors are known to affect the dependent variable?
- what is the likelihood of the comparison groups differing on each factor?
- evaluate treatments on how likely they are to have an effect
How can you evaluate internal validity?
- minimize systematic errors (bias)
- selection bias
- performance bias
- detection bias: outcome more likely to be reported in a certain subset of patients
- attrition bias
3 types of Critical appraisal tools
- check-list based
What is the number of RCTs per year?
What percentage of patients don't receive treatments proven to be effective
What percentage of patients receive unneeded or harmful care?
What are clinical practice guidelines?
Systematically developed statements that assist clinicians and patients in making decisions about appropriate treatment for a specific condition/circumstance