Epi Quals

Card Set Information

Author:
lisas_pad
ID:
87318
Filename:
Epi Quals
Updated:
2011-12-17 16:55:31
Tags:
epi qual
Folders:

Description:
Study for epidemiology qual exam

  1. Beta Error
    AKA type-II error: the probability of concluding that no difference between treatment groups exists when, in fact, there is a difference. Epi Dict: the error of failing to reject a false hypothesis, i.e., declaring that a difference does not exist when in fact it does.
  2. Counterfactual ideal
    The ideal comparison group consists of exactly the same individuals in the exposed group had they not been exposed. Since it is impossible for the same person to be exposed and unexposed simultaneously, epidemiologists must select different sets of people who are as similar as possible.
  3. Healthy worker effect
    Healthy people become, or are more likely to remain, active workers, whereas the general population is more likely to reflect the full spectrum (healthy and not healthy). So if the general population is used as the comparison group for workers, a study may be less likely to find an adverse health outcome in the workers. Solved by using the correct comparison group.
  4. Alpha error
    AKA type-I error: the error of rejecting a true hypothesis, i.e., declaring that a difference exists when it does not.

    Probability of concluding there is a difference when, in reality, there is not (p-value)
  5. Case fatality
    • case fatality rate -the ratio of the number of deaths caused by a specified disease to the number of diagnosed cases of that disease in a specified time period.
    • If occurring over long period of time- best to use survival rate or survivorship table
    • Modern Epi: incidence proportion of death among those in whom an illness develops; the time period is often unstated, but it is better to specify it.
  6. Molecular epidemiology
    The use in epi studies of techniques of molecular biology. Techniques such as DNA typing can be used to detect, identify, and measure molecular structures, which may be normal variants or damaged by disease or environment. A branch of medical science that focuses on the contribution of potential genetic and environmental risk factors, identified at the molecular level, to the etiology, distribution, and prevention of disease within families and across populations. This field has emerged from the integration of molecular biology into traditional epidemiologic research. Molecular epidemiology improves our understanding of the pathogenesis of disease by identifying specific pathways, molecules, and genes that influence the risk of developing disease.
  7. Bias of an Estimator
    Difference between the expected value of an estimator of a parameter and the true value of the parameter. Compare to an unbiased estimator: an estimator that, for all sample sizes, has an expected value equal to the parameter being estimated.
  8. Bernoulli Distribution
    • The probability distribution associated with two mutually exclusive and exhaustive outcomes (e.g., death or survival).
    • A model used for an experiment with only 2 outcomes: success and failure
  10. Antigenic Shift
    A mutation, shift, or sudden change in the molecular structure of the RNA/DNA of micro-organisms, especially viruses, that produces new strains of the micro-organism to which hosts previously exposed to other strains have little or no immunity. Often associated with influenza A viruses and may be associated with influenza epidemics and pandemics.
  11. Attack rate
    • Traditional term for incidence proportion.
    • The proportion of a population affected by the disease during a prescribed, usually short, period of time. (# exposed and ill/# exposed)
    • Epi Dict: the cumulative incidence of infection in a group observed over a period during an epidemic. This “rate” can be determined empirically by identifying clinical cases. Because its time dimension is uncertain or arbitrarily decided, it should probably not be described as a rate (it is really a proportion).
  12. Confidence interval
    The computed interval with a given probability (e.g., 95%) that the true value of a variable such as a mean, proportion, or rate is contained within the interval

    • 100(1-alpha)% of the time, the computed CI will include the true value of the parameter of interest; gives the range of values of the parameter of interest that is consistent with the data (Rich notes)
    • CI takes into account the magnitude and precision of the effect estimate simultaneously.
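A minimal sketch of card 12 with hypothetical numbers: a 95% CI for a mean using the normal approximation, where z = 1.96 corresponds to alpha = 0.05. The data values are invented for illustration.

```python
# 95% CI for a mean, normal approximation (hypothetical data)
import math

data = [12, 15, 11, 14, 13, 16, 12, 15]  # hypothetical measurements
n = len(data)
mean = sum(data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample SD
se = s / math.sqrt(n)  # standard error of the mean
z = 1.96               # z-value for alpha = 0.05 (95% confidence)
ci = (mean - z * se, mean + z * se)
```

The interval (roughly 12.3 to 14.7 here) is the range of parameter values consistent with the data; its width reflects both the spread of the data and the sample size.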
  13. Ecologic fallacy
    Results from the bias that may occur because an association observed between variables at an aggregate level does not necessarily represent the association that exists at an individual level.
  14. Hawthorne effect
    The tendency of study participants to change their behavior based on their knowledge that they are being observed.
  15. Herd Immunity
    The immunity of a group or community. The resistance of a group to the invasion and spread of an infectious disease based on the high proportion of individuals in the group who are resistant to the infection. The resistance is a product of the number of susceptibles and the probability that those who are susceptible will come in contact with infected persons. The proportion of the population needed to reach this varies with the agent.
  16. Placebo Effect
    Tendency for people to report a response to a treatment regardless of the actual effect (Marcella intro notes)
  17. Length bias
    • Cases are more likely to be detected by screening if they are slowly progressive and have a long pre-clinical phase. So, again, even if therapy is ineffective, screen detected cases are likely to have longer survival.
    • Bias due to the tendency of screening to detect a larger number of cases of slowly progressing disease and miss aggressive disease due to its rapid progression; a situation where mostly indolent (slow to develop) disease is picked up by the screen. This makes the screen appear more effective than it truly is. Ex: slow-growing cancer is most likely to be found on screening; since these cases are less likely to cause death, they are associated with better prognosis (Marcella intro notes)
  18. Bills of Mortality
    Weekly and annual abstracts of christenings and burials compiled from parish registries in England, dating back to 1538. Basis for the earliest English vital stats.
  19. Harmonic Mean
    A measure of central tendency calculated by summing the reciprocals of individual values and dividing the number of values by the resulting sum. Used when looking for the average of rates.
  20. Geometric Mean
    A measure of central tendency calculated by adding the logarithms of individual values, calculating the arithmetic mean, and then converting back by taking the antilog. Can only be calculated with positive values. Used with highly skewed data.
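The two means in cards 19 and 20 can be sketched side by side with hypothetical rates; note the geometric mean of 10, 20, 40 is the cube root of their product (8000), i.e., exactly 20.

```python
import math

rates = [10.0, 20.0, 40.0]  # hypothetical rates

# Harmonic mean: number of values divided by the sum of reciprocals
hm = len(rates) / sum(1 / r for r in rates)

# Geometric mean: antilog of the arithmetic mean of the logs (positive values only)
gm = math.exp(sum(math.log(r) for r in rates) / len(rates))
```

For right-skewed data like these, harmonic mean < geometric mean < arithmetic mean.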
  21. Risk
    Probability of an individual developing disease in a specified time period (Advance – lec1)
  22. Lead time bias
    • Cases detected by screening are diagnosed earlier and would have longer apparent survival than clinically diagnosed cases, even if treatment is ineffective.
    • Apparent lengthening of survival due to earlier diagnosis in the course of disease without any actual prolongation of life.
    • Refers to the extra time one has in knowing they have disease, which then contributes to a longer apparent “survival time”. This occurs even though a person does not, in fact, live any longer. One can get around this by using mortality as a measure of outcome rather than “5-year survival” (Marcella intro notes)
  23. Sensitivity
    • Proportion of subjects correctly classified as having disease
    • Proportion of those w/ D who test +
    • a/(a+c)
  24. Specificity
    • Proportion of subjects correctly classified as not having disease
    • proportion of those w/o D who test neg
    • d/(b+d)
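The a/(a+c) and d/(b+d) formulas in cards 23 and 24 refer to the standard 2x2 screening table. A minimal sketch with hypothetical counts:

```python
# Hypothetical screening results as a 2x2 table:
#                 disease   no disease
# test positive    a = 90     b = 40
# test negative    c = 10     d = 860
a, b, c, d = 90, 40, 10, 860

sensitivity = a / (a + c)  # proportion of diseased who test positive = 0.90
specificity = d / (b + d)  # proportion of non-diseased who test negative ≈ 0.956
```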
  25. Reliability of a measurement
    Reproducibility of a measurement. (Marcella intro epi lec 1) Extent to which repeated measures by the same instrument yield consistent results. If there is a large amount of random error, then the measure will not be reliable.
  26. Census Tract
    • An area for which details of a population structure are separately tabulated at a periodic census; normally it is the smallest unit of analysis of census tabulations. Tracts have well-defined boundaries. In urban areas they may be further subdivided into blocks, but published tables do not contain this level of detail. Tracts are usually relatively homogeneous in demographic, SES, and ethnic composition.
    • A geographic region defined for the purpose of taking a census. Census tracts represent the smallest territorial unit for which population data are available in many countries. In the United States, census tracts are subdivided into block groups and census blocks. In the U.S., census tracts are designed to be relatively homogeneous units with respect to population characteristics, economic status, and living conditions; census tracts average about 4,000 inhabitants.
  27. Credible interval
    • Used in Bayesian statistics; it is the posterior probability interval used for interval estimation, in contrast to point estimation.
    • Used for purposes similar to those of confidence intervals in frequentist statistics; an alternative terminology is to use "Bayesian confidence interval" instead of "credible interval".
  28. Bayesian Statistics
    • An alternative approach to probability and inference which makes use of a prior and a posterior probability. The prior probability is determined by a best guess of the observer of what the probability of the event is, usually in the absence of data. The posterior probability is the probability of the same event based on some data and integrates the prior probability.
    • ISSUES: difficulty in specifying the prior probability, as it is often not well defined or may differ between groups.

    • From likelihood notes
    • prior = prevalence
    • PVP = posterior
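A minimal sketch of the prior-to-posterior update in card 28, with hypothetical numbers: the prior is the prevalence, and the posterior (PVP) is the probability of disease given a positive test, obtained by Bayes' theorem from the prevalence and assumed test sensitivity and specificity.

```python
# Hypothetical inputs
prevalence = 0.01   # prior probability of disease
sensitivity = 0.95
specificity = 0.90

# Probability of a positive test (true positives + false positives)
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior: probability of disease given a positive test (PVP)
pvp = sensitivity * prevalence / p_pos  # ≈ 0.088 despite the fairly accurate test
```

At low prevalence, most positives are false positives, so the posterior stays low even with good sensitivity and specificity.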
  29. Collinearity
    • Exists when there is a strong linear relationship among independent (predictor) variables in a regression model. (Kleinbaum)

    • Causes instability in parameter estimation due to inflation of standard errors for estimated regression coefficients.
    • Practical def: there is some degree of redundancy or overlap among variables.
    • Results in loss of power (b/c it takes more data to disentangle the individual effects) and can make interpretation difficult.
  30. Coefficient of variation
    • The ratio of the standard deviation to the mean (single-variable setting). It aims to describe the dispersion of the variable in a way that does not depend on the variable's measurement unit. The higher the CV, the greater the dispersion in the variable.
    • EQUATION: CV = (s / x̄) × 100%

    • Modeling setting: CV is the ratio of the root mean squared error (RMSE) to the mean of the dependent variable. In both, CV is often presented as the given ratio multiplied by 100. The CV for a model aims to describe the model fit in terms of the relative sizes of the squared residuals and outcome values. The lower the CV, the smaller the residuals relative to the predicted values. This is suggestive of a good model fit.
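The single-variable equation above, CV = (s / x̄) × 100%, as a minimal sketch with hypothetical values:

```python
import math

values = [4.0, 5.0, 6.0, 5.0]  # hypothetical single-variable data
mean = sum(values) / len(values)
s = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))  # sample SD
cv = s / mean * 100  # ≈ 16.3%
```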
  31. Number needed to treat (NNT) in clinical trial
    Quantitative expression of the practical magnitude of a statistical effect. Reciprocal of the absolute risk difference. It is the number needed (followed with a regimen for x time) in each group in order to obtain 1 additional success.

    • Ex: % of pts who benefit: tx A = 75%, tx B = 65%; absolute diff = 0.75 - 0.65 = 0.1
    • 1/0.1 = 10
    • For every 10 pts given A vs. B, 1 additional pt benefits; for the other 9, A vs. B makes no difference
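The worked example above as a sketch (same numbers as the card):

```python
p_benefit_a = 0.75  # proportion of patients who benefit on treatment A
p_benefit_b = 0.65  # proportion who benefit on treatment B
arr = p_benefit_a - p_benefit_b  # absolute risk difference = 0.1
nnt = 1 / arr  # 10: treat 10 patients with A instead of B for 1 additional success
```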
  32. Recall bias
    • Info bias: systematic tendency for individuals selected for the study to be erroneously placed in different exposure or outcome categories due to differences in accuracy or completeness of recall of past events or experiences. For example, a mom whose child has died of leukemia is more likely than a mom of a healthy child to remember details of past experiences such as x-rays done when the child was in utero.
  33. Dynamic population study
    • A dynamic population is one which gains and loses members (open cohort); all natural populations are dynamic. A dynamic population study is a cohort study using a dynamic population.
  34. Confounding by indication
    A distortion of the effect of a treatment on the outcome that is caused by the presence of a sign or symptom that is associated both with treatment and outcome; or a distortion of the effect of a treatment that is caused by the presence of an indication or contraindication for the treatment that is also associated with the outcome.

    • DN: refers to an extraneous determinant of the outcome parameter that is present if a perceived high risk or poor prognosis is an indication for an intervention and is a risk indicator for illness. Thus it produces an imbalance in prognostic factors between compared treatment groups.
  35. Internal validity
    Ability of the study to show correct characterization of the association(s) within the population studied. This means there is a lack of bias and confounding is minimized or corrected. (Marcella intro)

    • Internal validity refers both to how well a study was run (research design, operational definitions used, how variables were measured, what was/wasn't measured, etc.), and how confidently one can conclude that the observed effect(s) were produced solely by the independent variable and not extraneous ones.
  36. Bonferroni Correction
    • The Bonferroni correction is a multiple-comparison correction used when several dependent or independent statistical tests are being performed simultaneously (since while a given alpha value may be appropriate for each individual comparison, it is not for the set of all comparisons).
    • In order to avoid a lot of spurious positives, the alpha value needs to be lowered to account for the number of comparisons being performed.

    • The simplest and most conservative approach is the Bonferroni correction, which keeps the alpha value for the entire set of n comparisons at alpha by setting the alpha value for each individual comparison equal to alpha/n.
    • Divide alpha by the number of hypotheses being tested (Advance lect 1)
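The alpha/n rule above as a minimal sketch; the number of tests is hypothetical:

```python
alpha = 0.05   # overall (family-wise) alpha for the whole set of comparisons
n_tests = 10   # hypothetical number of comparisons being performed

alpha_per_test = alpha / n_tests  # 0.005: threshold applied to each individual test
```

Each individual p-value is then compared against 0.005 rather than 0.05, which keeps the chance of any spurious positive across all 10 tests at roughly 5%.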
  37. Hills Criteria for Causality
    • (Carl Barbara And Teddy Can Play Soccer Every Sunday)
    • Coherence- no conflict with natural history
    • Bio-Gradient
    • Analogy - one drug causes a defect, perhaps another can also; air pollution and smoking have similar outcomes
    • Temporality- E b/4 D
    • Consistency- can results be replicated
    • Plausibility-makes sense biologically
    • Strength of Assoc
    • Experimental evidence - RCT
    • Specificity- cause lead to single effect
  38. Etiologic Fraction
    • fraction of disease that could be prevented by eliminating an exposure
    • (CIe - CIu) / CIe
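The (CIe - CIu) / CIe formula with hypothetical cumulative incidences:

```python
ci_exposed = 0.20    # hypothetical cumulative incidence among the exposed (CIe)
ci_unexposed = 0.05  # hypothetical cumulative incidence among the unexposed (CIu)

# Etiologic fraction: share of disease among the exposed attributable to the exposure
ef = (ci_exposed - ci_unexposed) / ci_exposed  # 0.75
```

Here 75% of the disease occurring in the exposed group could be prevented by eliminating the exposure.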
  39. Synergy
    • relationship of factors which exhibit a joint effect that exceeds the sum of their separate effects
    • (advanced lec2 slide 28)
  40. Effect Modification
    • Change in the magnitude of effect according to the value of some third variable.
    • (advanced - lec2- slide 28)
  41. Validity
    Extent to which an instrument measures what it is supposed to measure
  42. Reliability
    Extent to which repeated measures by the same instrument yield consistent results (on different occasions or by different observers). A tool has to be reliable to be valid; a reliable tool is not always valid.
  43. Reliability coefficient
    Intraclass correlation - quantitative expression of an instrument's reliability
  44. Positive predictive value
    • Proportion of persons with a positive test who have disease
    • a/(a+b)
  45. Negative Predictive Value
    • Proportion of persons with a negative test who do not have disease
    • d/(c+d)
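The a/(a+b) and d/(c+d) formulas in cards 44 and 45 use the same 2x2 layout as the sensitivity/specificity cards. A minimal sketch with hypothetical counts:

```python
# Hypothetical 2x2 screening table:
#                 disease   no disease
# test positive    a = 90     b = 40
# test negative    c = 10     d = 860
a, b, c, d = 90, 40, 10, 860

ppv = a / (a + b)  # ≈ 0.692: probability of disease given a positive test
npv = d / (c + d)  # ≈ 0.989: probability of no disease given a negative test
```

Unlike sensitivity and specificity, PPV and NPV depend on the prevalence of disease in the tested population.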
  46. Base population
    • The set of persons or person-time eligible to be cases.
    • Study base - the population experience that you want to capture for association studies
    • Demissie (lec 11)
  47. Factorial design
    • Type of RCT in which multiple factors can be evaluated simultaneously. Good for combo therapies (tx of congestion and cough).
    • Subjects are randomized, then assigned tx A, tx B, tx A and B, or no treatment.
    • Can have an incomplete factorial, when a combination cannot be given.
  48. Intent to treat analysis
    AKA effectiveness analysis - often done with RCTs, in which subjects are compared in the analysis based on treatment assigned (non-adherence is ignored)

    demissie notes lect7
  49. Efficacy
    • Potential of treatment under optimal circumstances; compares subjects based on treatment received (rather than the one assigned)
    • AKA explanatory trial analysis
    • demissie lect 7
  50. Central limit theorem (CLT)
    Given a population of any non-normal functional form with mean mu and finite variance sigma squared, the sampling distribution of x-bar, computed from samples of size n from this population, will have mean mu and variance sigma squared/n and will be approximately normally distributed when the sample size is large.
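The CLT can be seen in a small simulation (hypothetical setup): repeatedly draw samples of size n = 50 from a skewed exponential population with mu = 1 and sigma squared = 1, and check that the sample means cluster around mu with variance near sigma squared / n = 0.02.

```python
import random
import statistics

random.seed(1)  # deterministic draws for this sketch

n = 50       # sample size
reps = 2000  # number of samples drawn

# Each entry is x-bar from one sample of size n drawn from Exponential(1)
means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

# Sampling distribution of x-bar: mean ≈ mu = 1, variance ≈ sigma^2 / n = 0.02
```

Even though the exponential population is strongly right-skewed, a histogram of `means` would look approximately normal.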
