The flashcards below were created by user deshea on FreezingBlue Flashcards.

Definition of science
An area of study that objectively, systematically and impartially weighs empirical evidence for the purpose of discovering and verifying order within and between phenomena

Definition of research
Scientific structured problem solving

Definition of qualitative research
An approach to research involving in-depth description of a topic with the goal of obtaining a rich understanding of some phenomenon

Definition of quantitative research
An approach to research that relies on prior research to generate research questions and often to make predictions, with the results being analyzed mathematically

Definition of data
Information collected by researchers for later analysis to answer research questions

Does having non-numeric data mean the research is qualitative?
No, quantitative researchers may collect non-numeric data, such as gender, diagnosis, medications being prescribed, etc. Qualitative research is an entirely different approach to studying phenomena, with the goal of obtaining nuanced understanding of the topic

Definition of mixed-methods research
Research that involves two parts: a qualitative study and a quantitative study

Definition of statistics
Numerical summary measures computed on data

Definition of a mean
An arithmetic average

Definition of literature review
Process of identifying and reading articles and books about a topic on which subsequent research may be conducted

Definition of theory
An organized, research-supported explanation and prediction of phenomena

Meaning of 'designing a study'
Process of making decisions about the plans for a study, such as whether repeated measures or different groups of participants are needed

Definition of population
Larger group of people to whom we would like to generalize our results, or the larger group of entities that share a characteristic of interest to researchers

Definition of unit of analysis
The entity that is measured in a study. The unit of analysis often is the participant

Definition of sample
Subgroup of a population

Characteristics of a population
Large, usually unobtainable, sometimes hypothetical

Why do we need samples?
Because we usually cannot measure an entire population

Definition of epidemiology
The study of the distribution and spread of health conditions and diseases in human populations

Definition of a variable
A quantity or property that is free to vary or take on different values

Definition of discrete variable
A variable with discrete categories or values (example: gender)

Definition of continuous variable
A variable that theoretically could take on an infinite number of values between any two points (example: weight)

Definition of bias
A systematic influence on a study that makes the results inaccurate

Definition of simple random sampling
Process of obtaining a sample such that each participant is independently selected from the population

Definition of sample size
The number of people in a study (symbol is N)

Definition of external validity
Quality of our generalization from the sample back to the population

Does simple random sampling require equal probability of selection?
No. A random process involving independent selection of participants is required.

Definition of convenience sample
Groups of participants conveniently available to researchers

Definition of judgment sample
Same thing as a convenience sample

Are magazine articles the same thing as journal articles?
No, journal articles are published in scientific journals, such as the New England Journal of Medicine

Definition of random assignment
Process of placing participants into groups such that the placement of each participant is independent of the placement of any other participant

Another term for random assignment
Randomization

Definition of manipulation or intervention
The researchers' act of changing the experience of different groups of participants

Definition of control
Researchers' efforts to limit the effect of variables that may interfere with the ability to detect a relationship between other variables

Definition of statistical replication
Having a sample size greater than 1, or observing a phenomenon across multiple people or units of analysis

Definition of experimental research
Research which is characterized by random assignment of participants to groups, the researchers' manipulation of the participants' experiences, and statistical replication

Definition of experiment
Same as experimental research

Definition of an independent variable
A variable that is manipulated by the researcher. It comes first in time and is independent of the results

Definition of levels
The possible conditions or categories within an independent variable

Definition of a dependent variable
A variable that is measured as an outcome in experimental studies. It comes second in time

Definition of an extraneous variable
A variable that potentially interferes with the researchers' ability to detect the effect of the independent variable on the dependent variable

Other terms for extraneous variable
Confounding variable or lurking variable

Definition of randomized controlled trials
Studies in health sciences in which participants are randomized to groups and typically observed across time, with one group receiving a sham condition that mimics a real treatment but has no effect

Definition of placebo
A sham condition that is identical to the treatment condition except having no effect

Definition of control group
The group that receives a placebo or no intervention

Definition of experimental group
The group that receives an intervention

Definition of treatment arm
A term sometimes used to refer to an experimental condition in a study

Definition of attention-control group
A control group that receives some attention from the researcher but no intervention

Definition of double-blinding
A research method that keeps participants and the researchers who directly deal with them from knowing who is in which experimental condition

Definition of randomized block design
A study that contains at least one variable to which the participants cannot be randomly assigned, such as gender

Definition of blocking variable
A categorical variable with participants in naturally occurring or predefined groups. The variable is taken into account in the statistical analysis of the results

Definition of blocking
A method of incorporating a predictor variable (consisting of naturally occurring groups) into a study's design and statistically analyzing differences among levels of the predictor variable

Definition of non-experimental research
Research that lacks both random assignment to groups and manipulation of an independent variable

Other terms for non-experimental research
Observational research or descriptive research

Definition of predictor variable
A variable in descriptive/observational research that comes first in time and is thought to influence an outcome variable. It is analogous to the independent variable in an experiment, but researchers cannot randomly assign participants to conditions (example: gender)

Definition of explanatory variable
Another term used to describe a predictor variable

Definition of a criterion variable
In descriptive/observational research, it is the outcome variable that comes second in time. It is analogous to the dependent variable in an experiment

Definition of response variable or outcome variable
In non-experimental research, it is the same thing as the criterion variable

Definition of case-control study
A study in which people with a condition (the cases) are compared with otherwise-similar people who do not have the condition (the controls), then risk factors are assessed in an attempt to explain why some people have the condition and others don't

Definition of cohort study
A study in which people exposed or not exposed to a potential risk factor are compared by examining data across time, either retrospectively or prospectively

Definition of cohort
A group of people who share one or more characteristics and who are studied to assess eventual disease incidence or mortality

Definition of quasi-experimental research
Research characterized by manipulation of an independent variable, but in the absence of randomization

Definition of an inference
A conclusion drawn from information and reasoning

Definition of internal validity
Quality of inference about whether a causal relationship exists between variables

Characteristics of experimental research
Participants are randomly assigned to groups, an independent variable is manipulated, and there is statistical replication. This combination means we can make causal conclusions, thus we have good internal validity

Characteristics of observational research
Lack of randomization or manipulation, which means we can make only predictive conclusions, thus we have poor internal validity

Characteristics of quasi-experimental research
Lack of randomization, but presence of manipulation and statistical replication. The manipulated independent variable is one possible explanation for any observed differences, but internal validity is weak because of the lack of randomization

Three ways of controlling extraneous variables associated with participants
1) randomization 2) include the variable in the design as a factor to be studied 3) limit the study to make everyone the same on that variable (e.g., only females)

What improves internal validity
Randomization, because it controls extraneous variables that could interfere with the causal relationship between a manipulated independent variable and a dependent variable

What improves external validity
Random sampling from the population of interest

Definition of descriptive statistics
Statistics that summarize or describe information about a sample, such as the sample average blood pressure

Another definition of statistic
A numerical characteristic of a sample

Definition of parameter
A numerical characteristic of a population, such as the population average blood pressure

What is an estimate of a parameter?
A descriptive statistic estimates a parameter

Definition of distribution
A set of scores arranged on a number line

Advantages of using the mean
People understand it, and all scores in the sample go into its computation

Disadvantage of using the mean
One or more extreme scores can pull the mean in that direction

Definition of the median
Middle score in a distribution, or the score with the same number of scores below it and above it

Advantage of using the median
It is unaffected by extreme scores

Definition of trimmed mean
A mean computed on a data set from which some of the highest and lowest scores have been dropped. Also called a truncated mean
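These three centers can be compared with a short Python sketch (made-up scores; the trimmed mean here drops just the single highest and lowest score):

```python
import statistics

scores = [2, 3, 3, 4, 5, 6, 40]  # one extreme high score

mean = statistics.mean(scores)      # pulled toward the extreme score: 9
median = statistics.median(scores)  # unaffected by the extreme score: 4

# Trimmed (truncated) mean: drop the lowest and highest score, then average
trimmed = statistics.mean(sorted(scores)[1:-1])  # mean of [3, 3, 4, 5, 6] = 4.2
```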

Definition of the mode
Most frequently occurring value or response (like the most commonly reported marital status in a sample)

Disadvantages of using the mode
There may be no mode if every value occurs equally often. There may be more than one mode. If numeric values are being analyzed, the mode may not be in the middle (e.g., if a lot of babies are at a family reunion, the most frequently occurring age may be 1, which doesn't describe the middle location of a distribution of the ages)

Symbol for the sample mean
M

Definition of inferential statistics
Statistics that are used for decision-making

Symbol for the population mean
Lowercase Greek letter mu

Definition of variability
The amount of spread or variation within a set of scores

Definition of the range
High score minus low score

Why is the average distance from the mean a useless measure of variability?
The distances below the mean balance out the distances above the mean, so the average distance from the mean is always zero
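A tiny sketch of that balancing act (made-up scores):

```python
scores = [1, 4, 7, 10, 13]
mean = sum(scores) / len(scores)         # 7.0
deviations = [x - mean for x in scores]  # [-6.0, -3.0, 0.0, 3.0, 6.0]
balance = sum(deviations)                # the deviations always sum to 0
```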

Definition of sample variance
The average squared distance from the mean

Definition of sum of squares
The process of squaring some numbers and adding them up

How do we get the standard deviation based on the sample variance?
Take the square root of the sample variance

What does the sample variance estimate?
Population variance

Symbol for the population variance
Lowercase Greek letter sigma, squared

Symbol for the population standard deviation
Lowercase Greek letter sigma

Definition of unbiased variance
A statistic computed by taking the sum of squared distances from the mean and dividing by N − 1

How do we get the standard deviation based on the unbiased variance?
Take the square root of the unbiased variance
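The two variances and their standard deviations differ only in the divisor, as this sketch with made-up scores shows:

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(scores)
mean = sum(scores) / n                     # 5.0
ss = sum((x - mean) ** 2 for x in scores)  # sum of squares: 32.0

sample_var = ss / n          # divide by N: 4.0
unbiased_var = ss / (n - 1)  # divide by N - 1: about 4.57

sd_sample = math.sqrt(sample_var)      # 2.0
sd_unbiased = math.sqrt(unbiased_var)  # about 2.14
```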

What do we get when you take the square root of a variance?
A standard deviation

What do we get when you square a standard deviation?
A variance

Name five measures of variability
Range, sample variance, standard deviation based on sample variance, unbiased variance, standard deviation based on unbiased variance

What numeric values can a standard deviation or variance have?
Zero or any positive number

Disadvantages of the sample variance and the unbiased variance
They are both in squared units of measure. They are influenced by extreme scores. They are not intuitive

What does it mean if a variance or standard deviation equals zero?
All of the scores are the same. There is no variability

Definition of skewness
Degree of departure from symmetry

A positively skewed distribution has only a few scores in which direction?
The positive end of the distribution. The skew is named after the few extreme scores

Meaning of skewness statistic = 0
The distribution is symmetric. There is no skewness

Meaning of a negative number for the skewness statistic
The distribution has some negative skewness or skewness to the left (a few extreme scores on the lower end of the number line)

Meaning of a positive number for the skewness statistic
The distribution has some positive skewness or skewness to the right (a few extreme scores on the higher end of the number line)
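One common form of the skewness statistic, the Fisher-Pearson coefficient (the mean of the cubed z scores), can be sketched as follows; note that textbooks and software packages vary in the exact formula:

```python
import math

def skewness(scores):
    """Fisher-Pearson skewness: the mean of the cubed z scores."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)  # SD based on sample variance
    return sum(((x - mean) / sd) ** 3 for x in scores) / n

symmetric = skewness([1, 2, 3, 4, 5])      # 0: no skewness
positive = skewness([1, 1, 2, 2, 3, 10])   # > 0: a few extreme high scores
negative = skewness([1, 8, 9, 9, 10, 10])  # < 0: a few extreme low scores
```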

Why should we graph our data?
To understand research results, to communicate quickly and accurately with others about research results, to summarize the data, to look for anomalies like outliers, gaps in the distribution, data entry errors, etc., and to see patterns that can't be seen with statistics

Definition of frequencies
Number of occurrences within categories

A bar graph is used with what kind of data?
Non-numeric or categorical data, like gender. Frequencies within categories are displayed

A pie chart is used with what kind of data?
Non-numeric or categorical data, like gender. Frequencies within categories and/or percentages are displayed

Advantage of a pie chart
It can show how much of the whole is contained in each category

Disadvantage of a pie chart
Can be hard to judge the relative sizes of the wedges

A simple dot plot is used with what kind of data?
Numeric data, such as systolic blood pressure. Each dot represents one score

A multiway dot plot is used with what kind of data?
A multiway dot plot usually has categories or non-numeric data, such as locations, combined with a frequency or percentage. The variable of location is non-numeric or categorical

Advantage of a simple dot plot
All scores are shown

Advantage of a multiway dot plot
Frequencies can be placed in rank order, so locations or entities can be compared

A scatterplot is used with what kind of data?
Numeric or quantitative data for two variables, like height and weight

What is the square root of pi?
Just kidding! Hello from your first author, Lise DeShea!

Advantage of a scatterplot
Allows us to see whether two numeric variables appear to have a relationship with each other

Definition of a point cloud
Collection of dots on the scatterplot

A histogram is used with what kind of data?
Quantitative or numeric data, like ages

Two differences between a histogram and a bar graph
1) With a histogram the bars touch, but in a bar graph there are gaps between bars (so we have to bar hop). 2) A histogram uses numeric data, such as systolic blood pressure, and a bar graph uses non-numeric data, such as diagnosis

Advantage of a histogram
We can see gaps in the distribution of scores

Disadvantage of a histogram
Multiple scores may be combined in one bar, which could change our understanding of the data, depending on how scores are clumped together

A time plot is used with what kind of data?
Quantitative or numeric data, often used to connect observations or means for different occasions in time

Another term for a time plot
Line graph

Advantage of a time plot
Allows us to see whether there are trends across time

Potential disadvantage of a time plot
If means are being graphed for different points in time, we lose the ability to see how much variability is present at each occasion in time

A boxplot is used with what kind of data?
Quantitative or numeric data, like ages

Advantage of a boxplot
Ability to define outliers in a way that many researchers can agree upon

Another term for a boxplot
Box-and-whisker plot

Definition of prevalence
Proportion of people with a condition, usually expressed as a percentage or a rate

What is represented by the length of the whiskers in a boxplot?
The spread of scores in approximately the top 25% and bottom 25% of the distribution

If one whisker is longer than the other whisker, does the longer whisker represent more scores?
Generally, no. Approximately the same number of scores is represented by each whisker and each half of the box. The length represents the amount of spread in those scores

Definition of an outlier
An extreme score that meets certain criteria to be defined as notably different from the rest of the data. Belongs to either the top 25% or bottom 25% of the distribution

Disadvantages of a boxplot
The definition of an outlier differs across statistical software programs. A boxplot does not show gaps in the distribution that are not associated with outliers

How many of the scores in a data set are represented by the box in a boxplot?
About half. The line that divides the box itself is the median, so the middle two quarters of the data set are represented by the box

Definition of a percentile
A score that has a certain percentage of the distribution below it

What is the interquartile range?
A term sometimes used to describe the distance between the ends of the box in a boxplot
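Quartiles and the interquartile range can be computed with Python's statistics module (made-up scores; note that software packages compute quartiles, and therefore outliers, slightly differently):

```python
import statistics

scores = [2, 4, 5, 7, 8, 9, 11, 12, 13, 15, 40]

q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartiles: 5, 9, 13
iqr = q3 - q1                                   # interquartile range: 8 (length of the box)

# One common boxplot rule: scores beyond 1.5 * IQR past the box are flagged as outliers
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in scores if x < low_fence or x > high_fence]  # [40]
```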

Disadvantage of graphs created by statistical software
The software sometimes zooms in to enhance any apparent difference or trend, which can be misleading

Definition of relative location
A score's position on the number line, in comparison with the mean

Verbal definition of a z score
(something minus its mean) divided by its standard deviation

A more general term for a z score
It is one example of a standard score. The z comes from the word standardize

Meaning of a positive z score
The score is greater than the mean

Meaning of a negative z score
The score is less than the mean

Meaning of z = 0
The score equals the mean

What does z = −0.5 mean?
The score is one-half of a standard deviation below the mean

What do z scores measure?
The relative position or location of a score within a distribution, compared with the mean

If every score in a sample were transformed into z scores, then the z scores were graphed, what would be the shape of the distribution?
The distribution of z scores would look like the distribution of the original scores

If every score in a sample were transformed into z scores, what would be the mean and variance of the set of z scores?
The mean of the z's would be zero, and the variance (and standard deviation) of the z's would be 1
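A sketch of that fact with made-up blood pressure readings:

```python
import statistics

scores = [110, 118, 124, 130, 142, 156]  # made-up systolic blood pressures
mean = statistics.mean(scores)           # 130
sd = statistics.pstdev(scores)           # SD based on the sample variance (divide by N)

z = [(x - mean) / sd for x in scores]

z_mean = statistics.mean(z)      # essentially 0
z_var = statistics.pvariance(z)  # essentially 1
```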

Suppose we had a sample of systolic blood pressure readings. We compute each person's z score. If we graph the z scores, will we find a standard shape to the distribution?
No. Computing z scores standardizes the distribution by making it have a mean of zero and a standard deviation of 1, but the shape is not standard. The shape is the same as the original distribution of scores

What does a statistician's cat say?
Mu

How does the formula for a z score change if we want to compare a person's score with a population mean?
The formula becomes: (something minus its population mean) divided by its population standard deviation

What is a T score?
A standard score that is computed so that the mean for the set of T scores equals 50 and the standard deviation equals 10
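So a T score is just a rescaled z score: multiply z by 10 and add 50. A minimal sketch with made-up norms:

```python
def t_score(x, mean, sd):
    """Convert a raw score to a T score (mean 50, SD 10)."""
    z = (x - mean) / sd
    return 50 + 10 * z

at_mean = t_score(70, mean=70, sd=15)    # 50.0: a score at the mean becomes T = 50
one_sd_up = t_score(85, mean=70, sd=15)  # 60.0: one SD above the mean becomes T = 60
```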

Definition of norming
Process of gathering scores and assessing the numerical results from a large reference group

Definition of norms
Usually the mean and standard deviation for the large reference group used in the norming process

How do T scores for bone mineral density tests differ from most other T scores?
The T scores for bone mineral density tests are scaled like z scores, with a mean = 0 and standard deviation = 1

Definition of proportion
A fraction expressed in decimals

Definition of a normal distribution
One distribution in a family of mathematically defined curves that are bell shaped and have a complex formula specifying the exact location and spread. Not all bellshaped curves are normal distributions

Definition of a theoretical reference distribution
A distribution that is defined by a mathematical formula and describes the relative frequency of occurrence of all possible values for a variable. Normal distributions are considered one family of theoretical reference distributions

How much of a normal distribution is contained between a score that is one standard deviation below the mean and another score that is one standard deviation above the mean?
About 68%. To contain about 95% of scores in a normal distribution, we would draw vertical lines through the scores that are two standard deviations above the mean and two standard deviations below the mean
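Those proportions can be checked with the error function from Python's math module (the exact areas round to about .6827 and .9545):

```python
import math

def normal_area_between(z_low, z_high):
    """Proportion of a normal distribution between two z scores."""
    cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF via erf
    return cdf(z_high) - cdf(z_low)

within_1_sd = normal_area_between(-1, 1)  # about 0.68
within_2_sd = normal_area_between(-2, 2)  # about 0.95
```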

Are samples normally distributed?
No. We may hazard a guess that a variable like adult male height is normally distributed, but even a large sample will be lumpy, not a smooth curve

Definition of the standard normal distribution
The normal distribution that has a mean = 0 and a standard deviation = 1

When can z scores be used in conjunction with a standard normal distribution?
ONLY when the original scores are normally distributed. Computing z scores does not change the shape of the distribution.

Characteristics of a normal distribution
1) mean = median = mode. 2) symmetric (i.e., skewness = 0). 3) All of the scores are under the curve. 4) The total proportion (area) under the curve = 1 (that is, 100% of scores)

What is a linear relationship?
It is an association between variables that can be described with a straight line

What does Pearson's correlation coefficient measure?
The degree of linear relationship between two variables

Other names for Pearson's r
A zero-order correlation or product-moment correlation

What does Pearson's r estimate?
The population parameter rho, which represents the correlation between the two variables in the population

Range of values for Pearson's r
It can be as small as −1, and it can be as large as +1

Meaning of r = 0
There is no linear relationship between the two variables

Comparing r = −.5 and r = +.5: Which is stronger?
Neither; they are equally strong

What does bivariate mean?
Related to two variables. Pearson's r measures bivariate correlation (two variables at a time)

Meaning of r = −1
There is a perfect negative linear relationship between the two variables

Meaning of r = +1
There is a perfect positive linear relationship between the two variables

Does Pearson's r have units of measure?
No, it is an index. It exists on a continuum from −1 (strongest negative linear relationship) to +1 (strongest positive linear relationship).

Definition of covariance
The shared corresponding variation between a pair of variables. When two variables covary, then the variation in one variable corresponds to variation in another variable

How is the covariance statistic related to Pearson's r?
It is the numerator of r. The denominator of r functions to standardize the covariance, taking away the units of measure

Verbal definition of Pearson's r
Average product of z scores for the two variables

What is the coefficient of determination?
It is r-squared, or Pearson's r times itself

What is the purpose of the coefficient of determination?
It is used to judge the strength of the correlation

What is the meaning of a coefficient of determination = .49?
It means almost half of the variance in Y is explained by X (or vice versa). We would say that 49% of the variance in Y is accounted for by its relationship with X
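Pearson's r as the average product of z scores, along with the coefficient of determination, can be sketched with made-up data (Python 3.10+ also offers statistics.correlation as a shortcut):

```python
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sdx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)  # SDs based on the sample variance
sdy = math.sqrt(sum((v - my) ** 2 for v in y) / n)

# Pearson's r: the average product of z scores for the two variables
r = sum(((a - mx) / sdx) * ((b - my) / sdy) for a, b in zip(x, y)) / n

r_squared = r ** 2  # coefficient of determination: here 0.6, i.e., 60% of variance explained
```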

If r = .9, can we say that one variable causes changes in the other variable?
No. Correlation does not imply causation

What kinds of relationships can be assessed with Pearson's r?
Only linear relationships between two quantitative variables

What effect can an outlier have on Pearson's r?
An outlier can make Pearson's r seem stronger (closer to +1 or −1), or an outlier can dampen Pearson's r (make it closer to zero). An outlier even can reverse the sign of r (e.g., making it negative when most of the data showed a positive linear relationship)

Effect of restriction of range on Pearson's r
Same as outliers

Effect of combined groups on Pearson's r
Same as outliers and restriction of range

Effect of linear transformations (e.g., switching from measuring height in inches to measuring it in cm) on Pearson's r
None; Pearson's r is unaffected by linear transformations on the data

How do missing data affect Pearson's r?
Any participant with missing data on one variable is excluded from the computation of Pearson's r, so the correlation will reflect only the participants with complete data on both variables

How do we know which is the predictor variable and which is the criterion variable in Pearson's r?
We don't. Pearson's r does not make the distinction between a predictor variable and a criterion variable

Definition of probability
Relative frequency of occurrence

What is the numerator of a probability?
The number of outcomes that specifically interest us at the moment

What is the denominator of a probability?
The total number of options available, or the pool from which we are choosing

Range of a probability (numerically)
Can be as small as zero, can be as big as 1

What is a conditional probability?
A relative frequency based on a reduced number of possible options. The condition that we place on the probability limits the number of people who can be counted in the denominator

What is a gold standard in health care?
It is the best, most widely accepted diagnostic tool

What is sensitivity?
A conditional probability of a positive diagnosis by a new test, given that the gold standard gave a positive diagnosis. Usually expressed as a percentage (i.e., the probability times 100)

Meaning of the mnemonic SnNout
Sensitivity: Negative test rules out a possible diagnosis

Meaning of sensitivity = 100%
The new test was positive for 100% of the tests that the gold standard said were positive

Meaning of sensitivity = 25%
The new test was positive for 25% of the tests that the gold standard said were positive (so the new test is missing a lot of positive cases)

What is specificity?
A conditional probability of a negative diagnosis by the new test, given that the gold standard gave a negative diagnosis. Usually expressed as a percentage (i.e., the probability times 100)

Meaning of the mnemonic SpPin
Specificity: Positive test rules in a possible diagnosis

Meaning of specificity = 100%
The new test was negative for 100% of the tests that the gold standard said were negative

Meaning of specificity = 25%
The new test was negative for 25% of the tests that the gold standard said were negative (so the new test's negative results are incorrect 75% of the time)

What is positive predictive value?
A conditional probability of a positive diagnosis by the gold standard, given that the new test gave a positive diagnosis. Expressed as the percentage of the new test's positive diagnoses that were confirmed by the gold standard

Meaning of positive predictive value = 100%
All of the new test's positive diagnoses were confirmed by the gold standard

Meaning of positive predictive value = 25%
One out of four positive diagnoses by the new test was confirmed by the gold standard (so the new test is giving false positive results three-fourths of the time)

What is negative predictive value?
A conditional probability of a negative diagnosis by the gold standard, given that the new test gave a negative diagnosis. Expressed as the percentage of the new test's negative diagnoses that were found to be truly negative according to the gold standard

Meaning of a negative predictive value = 100%
All of the new test's negative diagnoses were confirmed as negative by the gold standard

Meaning of a negative predictive value = 25%
One out of four negative diagnoses by the new test was found to be negative according to the gold standard (so three-fourths of the time that the new test is negative, the gold standard said the results should have been positive)
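All four screening statistics come from the same 2x2 table comparing the new test with the gold standard. A sketch with made-up counts:

```python
# Made-up 2x2 screening counts: new test result vs. gold standard result
tp = 90   # new test positive, gold standard positive (true positives)
fn = 10   # new test negative, gold standard positive (false negatives)
fp = 30   # new test positive, gold standard negative (false positives)
tn = 870  # new test negative, gold standard negative (true negatives)

sensitivity = tp / (tp + fn)  # 0.90: share of gold-standard positives the new test caught
specificity = tn / (tn + fp)  # about 0.97: share of gold-standard negatives the new test cleared
ppv = tp / (tp + fp)          # 0.75: share of the new test's positives the gold standard confirmed
npv = tn / (tn + fn)          # about 0.99: share of the new test's negatives that were truly negative
```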

What is a joint probability?
An 'and' probability: two facts must be true at the same time in order to count the people in the numerator

What is an 'or' probability?
Only one of the two facts must be true in order to count the people in the numerator. We would count everyone for whom Fact A is true, everyone for whom Fact B is true, and everyone for whom both facts are true
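A counting sketch of joint ('and') versus 'or' probabilities, using a made-up sample of ten patients:

```python
# Each made-up patient: (has_diabetes, has_hypertension)
patients = [(True, True), (True, False), (False, True), (False, True),
            (False, False), (True, True), (False, False), (False, True),
            (True, False), (False, False)]

n = len(patients)
p_and = sum(1 for d, h in patients if d and h) / n  # both facts true: 2/10
p_or = sum(1 for d, h in patients if d or h) / n    # at least one fact true: 7/10
```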

Two definitions of risk
1) Probability of an undesired health outcome 2) "Uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value" (Aven & Renn, 2009)

Definition of disease surveillance
Monitoring of disease incidence and trends for entire populations

Definition of risk factor
A variable that affects the chances of a disease

Definition of relative risk
A statistic that quantifies how people with a risk factor differ from people without the risk factor

What is a hazard ratio?
A complex statistic that is interpreted like a relative risk

Definition of odds
The probability of something happening divided by the probability of that same thing not happening
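Risk, relative risk, and odds can be computed from made-up cohort counts:

```python
# Made-up counts from a hypothetical cohort study
exposed_sick, exposed_total = 30, 200
unexposed_sick, unexposed_total = 10, 200

risk_exposed = exposed_sick / exposed_total        # 0.15
risk_unexposed = unexposed_sick / unexposed_total  # 0.05

relative_risk = risk_exposed / risk_unexposed      # 3: the exposed group has triple the risk

# Odds: probability of the event divided by probability of that event not happening
odds_exposed = risk_exposed / (1 - risk_exposed)   # 0.15 / 0.85, about 0.18
```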

Definition of sampling variability
The tendency for a statistic to vary when computed on different samples from the same population

How is sampling variability different from a sample variance?
Sample variance is a measure of the spread of a sample's scores. Sampling variability is variation that we could expect in numeric values of a statistic that could be computed on repeated samples from the same population

What is contained in a sample distribution?
Scores

What is contained in a population distribution?
Scores

Define sampling distribution
A distribution of a statistic that could be formed by taking all possible samples of the same size from the same population, computing the same statistic on each sample, then arranging the numeric values of the statistic in a distribution

What is contained in a sampling distribution?
Values of some statistic

Why we need sampling distributions
To compute probabilities so we can test hypotheses and then make inferences from a sample to a population

Theoretically, how would we get a sampling distribution of the mean?
Decide on a sample size. Then repeatedly draw samples of that size from the same population. For each sample, compute the sample mean. Then arrange the pile of sample means along a number line

Definition of point estimate
A single number or point on the number line being used to estimate the parameter

Definition of hypothesis
A testable guess

What does the Central Limit Theorem say?
1) With a large enough N & independent observations, the sample mean's sampling distribution will have a normal shape, 2) the 'mean of the means' will equal the mean of the population from which we sampled, and 3) these means will have a variance equal to the population variance divided by N

Why is the Central Limit Theorem a gift?
It saves us from having to create a sampling distribution for one statistic: sample mean. We want to generalize from M to mu, so we need to know how likely it is to get a sample mean at least as extreme as ours, but we don't want to take all possible samples needed to create a sampling distribution of M
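A simulation sketch of the Central Limit Theorem, using a made-up, clearly non-normal population:

```python
import random
import statistics

random.seed(42)  # reproducible results

# Made-up skewed population of 1,000 scores (clearly non-normal)
population = [1] * 500 + [2] * 300 + [5] * 150 + [20] * 50
pop_mean = statistics.mean(population)      # 2.85
pop_var = statistics.pvariance(population)  # 17.3275

n = 30  # sample size
means = [statistics.mean(random.choices(population, k=n)) for _ in range(5000)]

mean_of_means = statistics.mean(means)     # close to the population mean
var_of_means = statistics.variance(means)  # close to pop_var / n
```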

What makes a point estimate (like the sample mean) unbiased?
A statistic is unbiased if the mean of its sampling distribution equals the parameter estimated by the statistic

Now that unbiased has been defined, what can we say about the unbiased variance?
The unbiased variance has a sampling distribution made up of the unbiased variance statistics for all possible samples of the same size from the same population. The average of those statistics will be the population variance: the parameter estimated by the unbiased variance

If the square root of the unbiased variance is a standard deviation, is that standard deviation unbiased?
No, standard deviations are biased, but the square root of the unbiased variance generally is judged to be good enough in estimating the population standard deviation

How do we compute the z test statistic?
We take the sample mean and subtract the population mean, then this difference is divided by the square root of (sigma squared divided by N)

What is the standard error of the mean?
It is the denominator of the z test statistic: the square root of (sigma squared divided by N). It also can be written as the population standard deviation divided by the square root of N. It is the standard deviation of the sample mean's sampling distribution
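
A minimal sketch with made-up numbers (the sample mean, mu, sigma, and N below are all hypothetical):

```python
import math

# Hypothetical numbers: H0 says mu = 100, sigma = 15 is known, N = 36
M, mu, sigma, N = 104.5, 100.0, 15.0, 36

# Standard error of the mean: sigma / sqrt(N)
standard_error = sigma / math.sqrt(N)

# z = (sample mean - population mean) / standard error of the mean
z = (M - mu) / standard_error
print(standard_error, round(z, 2))  # 2.5 1.8
```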

Which statistics have sampling distributions?
All of them. We could choose any statistic and imagine drawing all possible samples of the same size and computing that statistic on each sample

Definition of interval estimate
A pair of numbers defining a range of values that is more likely than a point estimate to contain the true value of the parameter being estimated

Definition of interval estimation
An approach that quantifies the sampling variability by specifying a range of values in the estimation of a parameter

Definition of confidence interval
An interval estimate that could be expected to contain the true value of the parameter for a certain percentage of repeated samples from the same population

Definition of margin of error
A measure of spread that is used to define a confidence interval. It is generally computed by multiplying a critical value by a standard error of a statistic

How do we get the two numbers that define a confidence interval?
We take a point estimate and subtract the margin of error to get the lower limit. To get the upper limit, we add the margin of error to the point estimate
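
A quick sketch with made-up numbers, assuming a z-based 95% interval (critical value 1.96):

```python
import math

# Hypothetical 95% CI for a mean with known sigma (z-based interval)
M = 104.5                   # point estimate (sample mean)
sigma, N = 15.0, 36
critical_value = 1.96       # two-tailed z critical value for 95% confidence

# Margin of error = critical value * standard error
margin_of_error = critical_value * (sigma / math.sqrt(N))

lower = M - margin_of_error   # lower limit
upper = M + margin_of_error   # upper limit
print(round(lower, 1), round(upper, 1))  # 99.6 109.4
```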

Name three characteristics of a hypothesis that can make it testable
Specific, objective and nonjudgmental

Definition of hypothesis testing
Process of setting up two competing statements or hypotheses that describe two possible realities, then using probability to decide whether a study's results are typical or unusual for one of those realities

What is a null hypothesis?
A statement of what we don't really believe, but that we're setting up tentatively as a possible reality. The idea to be tested

What is an alternative hypothesis?
A statement of what we do believe. An idea opposite of the null hypothesis

What are statistical hypotheses?
DeShea & Toothaker's term for symbolic representations of the null and alternative hypotheses

Symbol for the alternative hypothesis in DeShea & Toothaker's book
H with a subscript 1

Symbol for the null hypothesis in DeShea & Toothaker's book
H with a subscript 0

What is a nondirectional alternative hypothesis?
A statement of what we do believe, but not specifying or predicting an outcome in a particular direction (e.g., the rats will differ on average from healthy rats in their maze completion time)

What is a directional alternative hypothesis?
A statement of what we do believe, specifying or predicting an outcome in a particular direction (e.g., the rats will take longer on average to complete the maze)

What is a significance level?
A small probability chosen in advance by the researcher as a standard for how unlikely the results must be in order to declare that the results are evidence against the null hypothesis

Symbol for the significance level
Lowercase Greek letter alpha

How do we know where to put alpha in a distribution?
The alternative hypothesis tells us where we expect to find our results. If no direction is predicted, then for the z test statistic, alpha is split between the two tails of the distribution. If a direction is predicted, alpha goes in the predicted tail

What are critical values?
Values of the test statistic that cut off a total tail area equal to alpha

Two-tailed critical value decision rule
If the observed test statistic is equal to or more extreme than a critical value, reject the null hypothesis

Two-tailed p value decision rule
If p is less than or equal to alpha, reject the null hypothesis

When to use 'prove' in statistics
Never! Slap your own hand if you ever say 'prove' in conjunction with statistics! We can only say what is likely or unlikely

Meaning of 'significant' in statistics
Saying something is significant implies that a test statistic has been computed and a null hypothesis has been rejected

Definition of p value
Probability of observing a test statistic at least as extreme as the one computed on our sample data, given that the null hypothesis is true

Reject and retain are actions taken on which hypothesis?
Only the null. There is no action taken on the alternative hypothesis.

Another way to say that we retain the null hypothesis
Fail to reject the null hypothesis

Can we say that we accept the null hypothesis?
Please don't use the word accept in hypothesis testing! It implies that we are embracing the null hypothesis as Truth, but in fact we never really believed the null hypothesis.

What are decision rules?
Requirements for taking the action to either reject or retain the null hypothesis

One-tailed critical value decision rule
If the observed test statistic is in the direction predicted by the alternative hypothesis AND the observed test statistic is equal to or more extreme than the critical value, reject the null hypothesis

One-tailed p value decision rule
If the results are in the predicted direction AND if the one-tailed p value is less than or equal to alpha, reject the null hypothesis

How is a p value a conditional probability?
It is conditional on the idea that the null hypothesis is true. The null hypothesis determines how we draw the distribution that is used in hypothesis testing, as if the null were true

Where to look to determine which tail to place alpha, if we have a directional hypothesis
The alternative hypothesis. If it says mu < 50, then the directional arrow is pointing toward the lower tail, and that's where we put alpha

How awesome are you for working through these flashcards?
Just about as awesome as any student possibly could be. Keep up the good work!

Definition of assumptions
Statements about the data or population that allow us to know the distribution of the test statistic and to compute p values

What does it mean for an assumption to be met?
A condition that is described in the assumption has been achieved

What does it mean for an assumption to be violated?
A condition that is described in the assumption has not been achieved

What are the assumptions of the z test statistic?
Independence of scores, and a normally distributed population of scores is sampled

What does it mean if a confidence interval that estimates the population mean does not contain the value of mu that is stated in the null hypothesis?
It means we reject the null hypothesis and conclude it is unlikely that we have sampled from the population with that value of the population mean

Definition of a Type I error
Rejecting the null hypothesis given that it's actually true in the population

Definition of a Type II error
Retaining the null hypothesis given that it's actually false in the population

What is the only kind of error that can be made if we reject the null hypothesis?
Type I error

What is the only kind of error that can be made if we retain the null hypothesis?
Type II error

What are the two correct decisions that can be made in hypothesis testing?
1) Rejecting the null hypothesis given that it's actually false in the population, and 2) retaining the null hypothesis given that it's actually true in the population

What is the probability of a Type I error?
Alpha

How is alpha a conditional probability?
It is the probability of rejecting the null, given that the null is true in the population

Why is the tail area for alpha in the drawing of the standard normal distribution representative of the probability of a Type I error?
The distribution is drawn as if the null hypothesis is true. If the observed test statistic goes beyond the critical value that defines the border of alpha's area, then the decision would be to reject the null. But that tail probability exists within the distribution that reflects the idea that the null is true.

What is the probability of correctly retaining the null hypothesis?
1 minus alpha

What is the probability of a Type II error?
Beta

What is power?
The probability of rejecting the null, given that the null is false in the population. Rejecting a false null hypothesis would be a correct decision

What is the correct decision that could be made when the null hypothesis is true in the population?
If the null is true in the population, we would hope the sample data would lead us to the correct decision to retain the null.

What is the correct decision that could be made when the null hypothesis is false in the population?
If the null is false in the population, we would hope the sample data would lead us to the correct decision to reject the null.

How can the probability of a Type II error be used to define power?
Power can be defined as 1 minus the probability of a Type II error (i.e., 1 - beta)

If alpha = .01, what is the probability of correctly retaining the null hypothesis?
This correct retention of the null hypothesis would occur with a probability equal to 1 minus alpha = .99

If beta = .15, what is power?
Power = 1 - beta = .85, which would be the probability of correctly rejecting the null hypothesis

Two definitions of effect size
1) The magnitude of the impact of an independent variable on a dependent variable, or 2) the strength of an observed relationship between variables

All else being equal, will a study need more power to detect a small effect size or more power to detect a large effect size?
More power is needed to detect smaller effect sizes, just as a magnifying glass or microscope must be stronger to look at smaller objects

All else being equal, will adding participants to a study tend to increase or decrease power?
Adding participants should increase power

All else being equal, which is associated with more power  alpha = .05 or alpha = .01?
Alpha = .05

All else being equal, do we have more power or less power with a onetailed test, compared with a twotailed test?
We have more power with a onetailed test, unless we don't predict the correct direction. In that case, we have no power

What does power = 0 mean?
No probability of finding statistical significance, which can happen if the wrong direction is predicted for the results

All else being equal, if we improve the control of extraneous variables, will we have generally more power or less power?
More power. With less extraneous variability, the test statistics will be more sensitive to detecting actual effects in the population

What is the relationship between variability and the analogy of signal and noise?
Variability is like static (noise) on a radio. If we reduce the static, we tend to be able to hear a signal better. The signal is like the effect of one variable on another variable

Why do we need the one-sample t test?
Sometimes we want to test a null hypothesis about a single population mean, but we don't always know a population standard deviation or variance. We must replace that parameter with a sample statistic, giving us a new test statistic (no longer a z test statistic)

Verbal definition of a one-sample t test
(Sample mean minus population mean) divided by the estimated standard error of the mean

What is the estimated standard error of the mean in the denominator of the one-sample t test?
Sample standard deviation divided by the square root of N
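
A minimal sketch with made-up scores, testing the hypothetical null hypothesis mu = 100:

```python
import math
import statistics

# Hypothetical data, testing H0: mu = 100
scores = [112, 98, 105, 107, 101, 110, 96, 104, 109, 103]
N = len(scores)

M = statistics.mean(scores)
s = statistics.stdev(scores)   # square root of the unbiased variance

# Estimated standard error of the mean: s / sqrt(N)
est_standard_error = s / math.sqrt(N)

# One-sample t = (M - mu) / estimated standard error, with df = N - 1
t = (M - 100) / est_standard_error
print(round(t, 2))  # 2.74
```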

How do the hypotheses for the one-sample t test differ from the hypotheses for the z test statistic?
They don't differ. The same hypotheses can be tested with both statistics

In practical terms, what are degrees of freedom, df?
df are needed so that we know which t distribution to use to find critical values and p values. Each t distribution is defined by df. Different df lead to slightly different shapes of t distributions

df for the one-sample t test
N - 1

Example of a null hypothesis for a one-sample t test
Mu = 100, or our sample comes from a population where the mean equals 100

When to use the one-sample t test
When we're interested in testing a null hypothesis about a single known (or hypothesized) population mean, but we don't know the population variance or population standard deviation

Critical value decision rule for a one-sample t test
If the observed one-sample t test is equal to or more extreme than a critical value, reject the null hypothesis

p value decision rule for a directional hypothesis for the one-sample t test
If the results are in the predicted direction AND if the one-tailed p value is less than or equal to alpha, reject the null hypothesis

p value decision rule for a nondirectional hypothesis for the one-sample t test
If the two-tailed p value is less than or equal to alpha, reject the null hypothesis

When can we say that a one-sample t test is significant?
When we have rejected the null hypothesis

If we reject the null hypothesis for a one-sample t test, which hypothesis should we restate to describe the significant outcome?
If we reject the null hypothesis for a one-sample t test, we restate the alternative hypothesis and say there is a significant difference between the sample mean and population mean

When can we reject the alternative hypothesis?
NEVER! We either reject or retain the null hypothesis and take no action on the alternative hypothesis

If a tree falls in a forest and no humans are there to hear it, did it happen?
Yes, p < .05. :)

What are the assumptions for the one-sample t test?
The scores are independent of each other and the scores in the population are normally distributed (the same assumptions as for the z test statistic)

For a single mean, how do we interpret a 95% CI computed using a critical value from a one-sample t test?
If we computed 100 confidence intervals like ours, we could expect 95% of them to bracket the true population mean

When computing a 95% CI around a sample mean, will our CI encompass the population mean?
With any particular CI, it's impossible to know. We only can say that in the long run, 95% of the time we will get CIs that bracket the true mu
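
That long-run claim can be checked by simulation. The population parameters below are made up; the critical value 2.064 is the two-tailed .05 value for df = 24:

```python
import math
import random
import statistics

random.seed(1)

mu, sigma, N = 100, 15, 25
t_crit = 2.064            # two-tailed .05 critical value for df = 24
trials = 2_000

hits = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    m = statistics.mean(sample)
    margin = t_crit * statistics.stdev(sample) / math.sqrt(N)
    # Does this CI bracket the true population mean?
    if m - margin <= mu <= m + margin:
        hits += 1

print(hits / trials)  # close to 0.95 in the long run
```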

If we have a null hypothesis that mu = 100 and we compute a 95% CI to estimate the population mean, how do we know whether there is a significant difference between the sample mean and population mean?
If the 95% CI does not straddle the hypothesized value of mu (here, 100), then we can say the sample mean is significantly different from the population mean

How do we interpret a 95% CI that brackets the value of mu in the null hypothesis (such as mu = 100)?
If the 95% CI brackets the hypothesized mu = 100, then there is not a statistically significant difference between the sample mean and the population mean

Why must we be cautious about interpreting histograms of means that show error bars for the confidence intervals?
Usually the confidence intervals were computed using a one-sample t test for each group separately. But the difference between two means has a different computation for the confidence interval

What is a limitation for using the z test statistic or the one-sample t test?
In both cases we must know or hypothesize a value for a population mean. For the z test statistic we also must have a numeric value for the population variance or standard deviation. We rarely know these parameters

What is a difference score?
When two scores have a link to each other, such as a pretest score and posttest score for the same person, the difference score would be computed by subtracting one of the scores from the other

What are three ways that scores can be paired?
Pairs of scores are created when people are measured twice on the same variable, when naturally occurring pairs (like left-arm and right-arm blood pressure) are compared, or when a researcher creates pairs by matching people on extraneous variables

What does it mean for participants to act as their own controls?
When the same people are measured repeatedly on the same variable, each person is like his/her own little control group for comparison across time or conditions

Describe the paired t test
It is a one-sample t test computed on difference scores
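
A minimal sketch with made-up pretest and posttest scores:

```python
import math
import statistics

# Hypothetical pretest and posttest scores for five people
pretest  = [88, 92, 75, 81, 90]
posttest = [84, 89, 70, 80, 85]

# Same direction of subtraction for everyone: pretest minus posttest
d = [pre - post for pre, post in zip(pretest, posttest)]
n = len(d)

# Paired t = one-sample t test on the d's, H0: mean difference = 0
t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
print(round(t, 2))  # df = n - 1 = 4
```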

What is an order effect?
An extraneous variable associated with the order in which conditions are presented to participants and the influence of that order on the outcome variable

How can researchers combat order effects?
The order of conditions can be randomized

Why must difference scores be computed with the same direction of subtraction for all participants (such as pretest minus posttest)?
So that all participants' difference scores are comparable

List other names for the paired t test
Dependent-samples t test, matched-pairs t test, Student's t test for paired samples, t test for related samples, etc.

Does the paired t test require two samples?
The paired t test can be computed when one sample is measured twice on the same variable, or it can be computed when two samples involve pairs of participants (such as doctor-patient pairs who are both measured on their satisfaction with their interaction)

Explain our fun fact associated with paired means
When dealing with pairs of scores, the difference scores will have a mean that equals the difference in each sample mean (such as the pretest mean minus the posttest mean)

What is a mean difference?
It is the difference in two means: one mean minus the other mean

If we have a null hypothesis that says two population means are equal, what would the null hypothesis say is the mean difference?
If the two population means are equal, then the null hypothesis can say that the mean difference (one mu minus the other mu) is zero

Give an example of a null hypothesis for a paired t test
Two population means are equal, or our samples come from populations where the means are equal (with the understanding that there is a pairwise link between the means)

What is the estimated standard error of the difference scores in the denominator of the paired t test?
The standard deviation of the d's divided by the square root of the number of d's

What are the assumptions of the paired t test?
Normality of the d's and independence of the d's

What are the degrees of freedom for the paired t test?
Number of d's minus 1 (or the number of pairs minus 1)

If we have a nondirectional alternative hypothesis for the paired t test and we reject the null hypothesis, what conclusion can we draw?
Our samples come from populations where the paired means are different. There is a statistically significant difference in the paired means

If we have a nondirectional alternative hypothesis for the paired t test and we retain the null hypothesis using a paired t test, what conclusion can we draw?
There is no statistically significant difference in the paired means

What does the confidence interval associated with the paired t test estimate?
It is an interval estimate of the difference in two paired population means (such as the difference in the population pretest mean and the population posttest mean)

If a 95% confidence interval for the paired mean difference brackets zero, what can we conclude?
There is no significant difference in the paired means. The paired means are statistically indistinguishable (their difference is essentially zero)

When do we use an independent-samples t test?
When we're interested in testing a null hypothesis about whether two independent means are equal: we have two independent groups, we're interested in means, and we have equal sample sizes with at least 15 people per group

List other names for the independent-samples t test
Student's t test, t test for unpaired samples, the independent t test, etc.

What is an example of a null hypothesis for an independent-samples t test?
Our samples come from populations where the means are equal

If we have a directional alternative hypothesis for an independent-samples t test, why can the directional sign be confusing?
The same idea can be expressed with the symbol ">" or the symbol "<". If we think the first group will have a bigger mean, we can list it first and use >. Or we can list the bigger mean second and use <.

What is the numerator of the independent-samples t test?
The mean difference, or one sample mean minus the other sample mean

What is in the denominator of the independent-samples t test?
A big ugly estimated standard deviation

Knowing the numerator and denominator of the independent-samples t test, how can we interpret an independent-samples t test = 3?
The two means are three estimated standard deviations apart

Give the df for the independent-samples t test
Sample size for the first group plus sample size for the second group, minus 2
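
A sketch with made-up scores for two groups, using the pooled-variance form of the denominator (one common way to write the "big ugly" estimated standard error):

```python
import math
import statistics

# Hypothetical scores for two independent groups with equal n's
group1 = [23, 27, 31, 25, 29, 26]
group2 = [20, 22, 25, 21, 24, 23]
n1, n2 = len(group1), len(group2)

# Numerator: the mean difference
mean_diff = statistics.mean(group1) - statistics.mean(group2)

# Denominator: pooled-variance estimated standard error (one common form)
v1, v2 = statistics.variance(group1), statistics.variance(group2)
pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
est_standard_error = math.sqrt(pooled * (1 / n1 + 1 / n2))

t = mean_diff / est_standard_error   # df = n1 + n2 - 2 = 10
print(round(t, 2))  # 3.11
```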

If we reject the null hypothesis using an independent-samples t test, what conclusion can we draw?
Our samples come from populations where the means are different. There is a statistically significant difference in the means

If we retain the null hypothesis using an independent-samples t test, what conclusion can we draw?
The two means are statistically indistinguishable

If we have a directional alternative hypothesis for an independent-samples t test, how do we use the p value decision rule?
First, we look at the sample means to see if we correctly predicted which sample mean would be bigger. If not, retain the null. If so, then we ask if the one-tailed p value is less than or equal to .05. If so, we reject the null hypothesis. Otherwise, we retain the null

What are the assumptions of the independent-samples t test?
That the scores are normally distributed in the populations that were sampled, that all scores are independent of each other, and that the two sampled populations of scores have equal variances

Are we likely to violate the normality assumption of the independent-samples t test? If so, is that a problem?
Yes, we are likely to violate it, but in many cases it is not a problem: the theoretical t distribution will still match the sampling distribution of the independent-samples t test

What does it mean for the theoretical t distribution to match the sampling distribution of the independent-samples t test?
It means the p value for our observed independent-samples t test will be trustworthy for hypothesis testing

Are we likely to violate the independence assumption of the independent-samples t test? If so, is that a problem?
No, we are not likely to violate it, as long as we use careful research methods. If we do violate it, our study may be seriously compromised

Are we likely to violate the equal variances assumption of the independent-samples t test? If so, is that a problem?
Yes, we commonly do violate the equal variances assumption. The effect of violating it depends on sample sizes: If both samples have 15+ participants and are equal in size, then our p value will be trustworthy. Otherwise, we need to use a different test statistic

Definition of robustness
The ability of the sampling distribution of a test statistic to resist the effects of a violation of an assumption

What does it mean if an assumption has been violated but the test statistic is robust to that violation?
It means the statistic's sampling distribution will still look like the theoretical distribution that we want to use to compute p values

Explain the cold analogy for the independent-samples t test
Non-normality in the data is like a cold virus. Most of the time, the independent-samples t test can resist the effects of non-normality, and its sampling distribution still matches the theoretical t distribution

Explain the analogy of the nuclear meltdown for the independent-samples t test
Violating the independence assumption is like a nuclear meltdown. The independent-samples t test (and most other test statistics) cannot survive

Explain the measles analogy for the independent-samples t test
Robustness to the measles virus depends on whether we have had a measles shot. The measles virus is like having unequal variances in the populations being sampled. If the independent-samples t test has its shot (equal and large n's), then it is robust to the measles (unequal variances)

When do we use the AWS t test?
If we are interested in comparing two independent means and we have unequal n's, or samples with fewer than 15 people each, or we have both unequal and small n's

If we sample from two populations and one of the populations is not normally distributed, will we know it?
No, we rarely know whether we're violating the normality assumption, but in most cases it won't make a difference. If many outliers in one tail are expected, then there might be a problem

What does it mean to say that the independent-samples t test is robust to most violations of normality?
It means that the p value that we would have gotten from the statistic's sampling distribution (if it had been created) will roughly equal the p value that we actually get from the theoretical t distribution

If we sample from two populations with unequal variances, will the independent-samples t test be robust?
It depends. If the sample sizes are equal with at least 15 people per group, then the independent-samples t test will be robust to the unequal variances

When would the independent-samples t test not be robust to unequal variances?
When we have 1) unequal n's, 2) small n's, or 3) small-and-unequal n's

If we want to use the independent-samples t test and some people who were in the control group later participated in the treatment group, will the independent-samples t test be robust?
No, because the independence assumption is violated. If those participants were dropped from the study, in this case we might be able to rescue the research

What is independent in the independent-samples t test?
The groups are independent (and all participants are independent of each other)

What are the assumptions of the AWS t test?
Normality and independence

Describe the robustness of the AWS t test
The AWS test is generally robust to violations of normality, but it is NOT robust to violations of independence

If we have a nondirectional alternative hypothesis and we reject the null hypothesis using the AWS t test, what conclusion can we draw?
Our samples come from populations where the means are different. There is a statistically significant difference in the means

If we have a nondirectional alternative hypothesis and we retain the null hypothesis using the AWS t test, what conclusion can we draw?
The two means are statistically indistinguishable

Give an example of a null hypothesis for the AWS test
Two population means are equal, or our samples come from populations where the means are equal

If we compute a 95% confidence interval to estimate the difference in two independent population means, what does it mean if the interval brackets zero?
If the 95% CI brackets zero, then the difference in means is statistically indistinguishable from zero

If we compute a 95% CI to estimate the difference in two population means, what does it mean if the CI does not straddle zero?
It means the mean difference is statistically different from zero. Our samples most likely come from populations that have different means

What is referenced by the term 'analysis of variance'?
Analysis of variance is a family of statistics. These statistics have two estimates of variability, one in the numerator and one in the denominator

What is a level?
A level is one value of an independent variable. If the independent variable is the method of soothing babies, one level might be 'bottle-feeding'

When do we use the one-way ANOVA F test?
When we have two or more independent groups and we're interested in comparing their means

What does 'one-way' mean?
It means there is one grouping variable (either an independent variable or predictor variable)

Give an example of a null hypothesis for the one-way ANOVA F test
Our samples come from populations with equal means

Give an example of an alternative hypothesis for the one-way ANOVA F test
There is some difference in the population means

If we want to detect some difference in means for four independent groups, must all the means be different from each other?
No, only one of them must differ from the others, which is why the alternative hypothesis for the one-way ANOVA says 'some difference in the population means'

What are fixed effects?
The levels of the independent variable can be replicated in another study, based on a specific definition of each level

What does 'variability between groups' mean?
In a one-way ANOVA F test, variability between groups refers to differences in the group averages, with at least one group differing from the others

What does 'variability within groups' mean?
People's scores will differ from each other for many reasons, even if they are in the same group. Variability within groups refers to this variation in the scores within the groups in a one-way ANOVA design

Explain the logic of the one-way ANOVA F test
1) Compute 2 estimates of variability: between and within. 2) Compute one-way ANOVA F = (between variability)/(within variability). 3) If the 2 estimates are about the same, F will be around 1. 4) As group means differ, F gets bigger. 5) At some point, F will exceed a critical value, indicating a significant difference somewhere in the means
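
That logic can be sketched with made-up scores for three groups:

```python
import statistics

# Hypothetical scores for three independent groups
groups = [
    [4, 5, 6, 5],
    [7, 8, 6, 7],
    [9, 8, 10, 9],
]
k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = statistics.mean(x for g in groups for x in g)

# Between-groups estimate: variability of group means around the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)          # df between = k - 1

# Within-groups estimate: variability of scores around their own group mean
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
ms_within = ss_within / (N - k)            # df within = N - k

F = ms_between / ms_within
print(round(F, 2))  # 24.0 for these made-up scores
```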

Numerator of the one-way ANOVA F
Mean square between

Formula for the mean square between for the one-way ANOVA F
Between sum of squares divided by df between

What is the meaning of the between sum of squares for the one-way ANOVA F?
It is a measure of variability of the sample means

What is df between for the one-way ANOVA F?
Number of groups minus 1

Denominator of the one-way ANOVA F
Mean square within

Formula for mean square within for the one-way ANOVA F
Within sum of squares divided by df within

What is the meaning of the within sum of squares for the one-way ANOVA F?
It is a measure of variability of scores within groups

What is another phrase that means the same thing as mean square within?
The error term, or mean square error

What is df within for the one-way ANOVA F?
Total sample size minus the number of groups

What is df total for the one-way ANOVA F?
Total sample size minus one, which equals the sum of df between and df within

Critical value decision rule for the one-way ANOVA F
If the observed one-way ANOVA F is equal to or more extreme than the critical value, reject the null hypothesis

p value decision rule for the one-way ANOVA F
If p is less than or equal to alpha, reject the null hypothesis

If we reject the null hypothesis for the one-way ANOVA F, what can we conclude?
Our samples come from populations where there is some difference among the population means

Name each assumption of the one-way ANOVA F test and state whether the statistic is usually robust in the face of a violation of that assumption
Normality: yes. Independence: no. Equal variances: no

Does the one-way ANOVA F test have an inoculation against unequal variances?
No. Even with large and equal n's, the one-way ANOVA F's p value can become untrustworthy

Why isn't the one-way ANOVA F test's lack of robustness to unequal variances a bigger problem?
Because most researchers really want to know more than whether there is some difference in the means; they want to know which means differ. Multiple comparison procedures can answer these research questions

What are multiple comparisons?
The process of comparing all possible combinations of two means at a time (pairwise comparisons of means)

What are the statistics that look at the combinations of two means at a time?
Multiple comparison procedures

How are multiple comparison procedures different from doing all possible independent-samples t tests in a one-way ANOVA design?
Multiple comparison procedures have the goal of controlling the risk of making at least one Type I error for the entire set of pairwise comparisons

What is the typical null hypothesis for almost any multiple comparison procedure?
For each pair of means, the null hypothesis is that the population means are equal

If we have four independent groups and we want to run a multiple comparison procedure to identify any statistically significant differences in means, how many null hypotheses will we test?
If we have four independent groups, there are six possible pairs of means to compare, so we would have six null hypotheses to test

What is a Bonferroni correction?
An approach to controlling the probability of a Type I error for a set of comparisons. The most basic Bonferroni correction involves dividing alpha by the number of pairwise comparisons to be made, then applying that portion of alpha to each comparison's hypothesis test
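
The counting of pairwise comparisons and the basic Bonferroni correction described above can be sketched as follows, using a hypothetical four-group design and an overall alpha of .05:

```python
# Sketch of the basic Bonferroni correction for pairwise comparisons
from math import comb

k = 4                        # hypothetical number of independent groups
m = comb(k, 2)               # number of possible pairs of means to compare
alpha = 0.05                 # overall Type I error risk for the whole set
alpha_per_test = alpha / m   # portion of alpha applied to each pairwise test
```

With four groups there are six pairs of means, so each comparison is tested at alpha = .05 / 6.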

What is Tukey's Honestly Significant Difference (HSD)?
A multiple comparison procedure that tests for differences in all pairs of means within a one-way ANOVA F design when n's are equal

Why is the Ryan-Einot-Gabriel-Welsch Q statistic better than Tukey's HSD?
It tends to have slightly more power, making it a more sensitive statistic for detecting differences in pairs of means when n's are equal

When do we use Tukey's HSD or the REGWQ multiple comparison procedure?
When we have a one-way ANOVA situation with equal sample sizes

When do we use the Games-Howell multiple comparison procedure?
When we have a one-way ANOVA situation with unequal sample sizes

What does the term 'bivariate linear relationship' mean?
It means two quantitative variables will be examined to determine whether their relationship can be described with a straight line

When do we use Pearson's r for testing a hypothesis about a correlation?
When we want to determine whether there is a statistically significant linear relationship between two continuous variables

What does Pearson's r estimate?
Rho, the population correlation between two variables

Example of a null hypothesis for a test of correlation when no direction is predicted in the alternative hypothesis
Rho = 0, or our sample comes from a population where there is no linear relationship between X and Y

Example of an alternative hypothesis for Pearson's r when a positive correlation is predicted
Rho > 0, or our sample comes from a population where there is a positive linear relationship between X and Y

df for the correlation test
N - 2, where N is the number of XY pairs of scores or the number of participants
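
Pearson's r and its t test can be sketched from the definitions above; the paired scores below are hypothetical:

```python
# Sketch of Pearson's r and the t test for a correlation, on made-up pairs
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
n = len(x)                      # number of XY pairs
mx, my = sum(x) / n, sum(y) / n

sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / sqrt(sxx * syy)       # sample correlation, estimating rho

df = n - 2                      # df for the correlation test
t = r * sqrt(df) / sqrt(1 - r ** 2)
```

The resulting t would be compared with a critical value (or converted to a p value) with df = N - 2.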

What are the assumptions of Pearson's r?
The pairs of scores are independent of each other and the scores have a bivariate normal distribution in the population

Is Pearson's r robust if the independence assumption is violated?
No, having dependence between the participants' observations on either variable would make the study's results untrustworthy

Is Pearson's r robust if the assumption of bivariate normality is violated?
Usually yes, as long as there is no extreme skewness in one or both variables

What is the purpose of regression?
Straight-line prediction. We use data from one sample to create a prediction equation. When someone new comes along who has a score on only one of the two variables, we can predict a value on the second variable

What is the problem with recruiting participants for a treatment after observing a one-time reading that indicated high blood pressure?
No matter whether the treatment is effective or not, chances are that the person's blood pressure will be lower next time. This example demonstrates regression to the mean

What does it mean to compare the rise and run in regression?
The rise is the change in the Y score. The run is the corresponding change in the X score. The slope of the regression line is an expression of the change in Y (rise) relative to the change in X (run)

What does a negative slope mean?
As we read the graph from left to right, the line goes downhill, meaning that the change in Y is negative as the X variable increases

What is a Y-intercept?
The point where the regression line crosses the Y axis. That is, it is the predicted value of Y when X = 0, based on the regression equation

What two bits of information are needed to graph a regression line?
The slope and Y-intercept

Explain two ways that regression analysis is different from correlation
1) We can't make predictions with correlation, but we can with regression. 2) With correlation, it doesn't matter which variable is X and which variable is Y. In regression, one variable must be considered X (predictor variable) and the other variable is designated Y (criterion variable)

How is simple regression similar to and different from multiple regression?
They are similar because both involve a single outcome variable. But simple regression involves only one predictor, while multiple regression involves more than one predictor

Explain the Predicted Y formula in words
This is the regression line: Predicted Y equals the Y-intercept plus the product of two things: the slope and some value of the predictor variable

What is a regression coefficient?
This is a general term for both the slope and the Y-intercept, which are the two numeric elements that define any simple regression line

Generally speaking, what is the ordinary least squares criterion?
It is a standard for determining mathematically what is the best-fitting regression line

What do we call the difference between an actual Y score and a predicted Y score, given a particular value of X?
An error or residual = actual Y minus predicted Y (with both of these numbers being paired with the same given value of X)

Explain the ordinary least squares criterion
If we have an OLS regression line, we can find all of the errors for the scatterplot, square every error and add up the squared errors. The result is smaller than the same sum that could be computed for any other line drawn through that scatterplot

Explain the ordinary least squares criterion in a less wordy way!
OLS regression defines a regression line so that the sum of squared errors is minimized

What does the slope b estimate?
b estimates the population slope, beta

If we expect two variables to have a positive linear relationship, what will our alternative hypothesis be for the slope?
The alternative hypothesis will say beta > 0, or our sample comes from a population in which the slope for the regression line is positive

How is the t test for the slope computed?
The slope, b, is divided by the standard error of its sampling distribution
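
The regression cards above can be pulled together in one sketch: the OLS slope and Y-intercept, the residuals whose squared sum OLS minimizes, and the t test for the slope. The data are made-up numbers, and the standard-error formula used for the slope is the usual OLS one (an assumption, since the cards do not spell it out):

```python
# Sketch of simple OLS regression and the t test for the slope
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

sxx = sum((a - mx) ** 2 for a in x)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))

b = sxy / sxx          # slope: change in Y (rise) relative to change in X (run)
a = my - b * mx        # Y-intercept: predicted Y when X = 0

predicted = [a + b * xi for xi in x]
residuals = [yi - yhat for yi, yhat in zip(y, predicted)]  # actual minus predicted
sse = sum(e ** 2 for e in residuals)  # OLS chooses a and b to minimize this sum

se_b = sqrt(sse / (n - 2) / sxx)      # standard error of the slope (OLS formula)
t = b / se_b                          # t test for the slope, df = n - 2
```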

We have stated a nondirectional alternative hypothesis and computed a 95% confidence interval to estimate the population slope, and this interval does not bracket zero. What does that mean?
There is a significant linear relationship between the two variables. We can look at whether the slope statistic is positive or negative to determine the direction of the linear relationship

Why must we consider the range of the data in regression?
Mathematical predictions can be made using any numbers, but our predictions can be trusted only within the range of numbers for which we have evidence (data)

What is categorical data analysis?
A family of statistics that analyze the frequencies for non-numeric variables (like type of obesity surgery and whether blood sugar is controlled a year later)

What are rank tests?
Statistics that involve computations performed on ranks instead of the original scores

What is the relationship between a proportion and a percentage?
If we multiply a proportion by 100, we get a percentage. If we divide a percentage by 100, we get a proportion

What is a sample proportion?
The number of people who share an attribute divided by the total number of people in the sample

What does it mean if a null hypothesis says a population proportion = .6?
The null hypothesis is saying that our sample comes from a population where 6 out of 10 people share some attribute

If the null hypothesis says the population proportion = .6 and a 95% confidence interval for the proportion does not bracket .6, what does that mean?
It means we should reject the null hypothesis and conclude that our sample most likely comes from a population where the population proportion is not .6

When do we use a chi square for goodness of fit?
When we have one categorical variable (e.g., days of the week) and we are interested in frequency of occurrence of some event (e.g., heart attacks)

What kind of null hypothesis is tested by the chi square test for goodness of fit?
A null hypothesis that focuses on the proportions for all levels of a categorical variable. The null hypothesis may say that all of the proportions are equal (like the example of sudden cardiac deaths across days of the week), or it may specify the proportions (such as the population distribution of people in blood-type groups)

What is another term for the chi square test for goodness of fit?
One-way chi square test

What are the expected frequencies in a chi square test for goodness of fit?
The number of occurrences that we predict in each category based on theory or previous research

What are the observed frequencies in a chi square test for goodness of fit?
The number of occurrences that actually are found in the sample data for each level of the categorical variable being analyzed

Describe the computation of the chi square test for goodness of fit
Subtract the expected frequency from the observed frequency for each category. Square the differences. Divide each squared difference by its expected frequency. Add up the results

df for the chi square for goodness of fit
Number of categories minus 1
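
The computation described on the cards above (subtract, square, divide by expected, sum) can be sketched on hypothetical counts across four categories with equal expected proportions:

```python
# Sketch of the chi square test for goodness of fit on made-up counts
observed = [30, 20, 25, 25]          # hypothetical observed frequencies
n = sum(observed)
expected = [n / 4] * 4               # null hypothesis: equal proportions

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1               # number of categories minus 1
```

The resulting chi square would be compared with a critical value (or its p value with alpha) using df = number of categories minus 1.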

p value decision rule for the one-way chi square
If p is less than or equal to alpha, reject the null hypothesis

What conclusion can be drawn if the chi square for goodness of fit is significant?
The observed distribution across the categories differs from the distribution specified by the null hypothesis. A difference in proportions lies somewhere among the categories

How do you feel?
Hey, if this question was good enough for a computer to ask Spock in a Star Trek movie, it's good enough for this set of flashcards!

Name the three assumptions of the chi square test for goodness of fit
Independence of observations, categories are mutually exclusive, and categories are exhaustive

Why is it a problem if expected frequencies are too small in a chi square test for goodness of fit?
The p value for the chi square test for goodness of fit may not be trustworthy

Give an example of a null hypothesis for the chi square for independence
There is no relationship between the two categorical variables, or Variable 1 is independent of Variable 2

What are two other terms for the chi square test for independence?
Two-way chi square, or chi square for contingency tables

Assumptions of the chi square test for independence
Same as the chi square test for goodness of fit: independence of observations, categories are mutually exclusive, and categories are exhaustive

df for the chi square for independence
(number of rows minus 1) times (number of columns minus 1)
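
A sketch of the chi square test for independence on a hypothetical 2x2 contingency table, with expected frequencies computed from the row and column totals (the standard expected-frequency formula, an assumption not spelled out on the cards):

```python
# Sketch of the two-way chi square on a made-up 2x2 contingency table
table = [[30, 20],    # hypothetical counts: row = exposure yes/no,
         [10, 40]]    # column = outcome yes/no

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n  # expected under independence
        chi_sq += (observed - expected) ** 2 / expected

df = (len(table) - 1) * (len(table[0]) - 1)  # (rows - 1) times (columns - 1)
```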

p value decision rule for the chi square for independence
If p is less than or equal to alpha, reject the null hypothesis

What conclusion can be drawn if the chi square for independence is significant?
The two categorical variables are significantly related

Why is relative risk usually associated with very large samples?
Because epidemiologists studying disease risk want good estimates of actual risk in populations

Why is relative risk usually associated with cohort studies?
A cohort study is research that identifies people exposed or not exposed to a potential risk factor, then compares these people by observing them across time for the development of disease. Relative risk compares the disease risk for people who were exposed or not exposed to the potential risk factor

Describe the two risks involved in the computation of a relative risk
The numerator tells about the risk of disease given exposure to the risk factor, and the denominator tells about the risk of disease given the absence of exposure to the risk factor

Describe the computation of a relative risk in terms of probabilities
Relative risk is the probability of disease given exposure to the risk factor, divided by the probability of disease given no exposure to the risk factor
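
The two risks on the cards above can be sketched from a hypothetical cohort-study 2x2 table:

```python
# Sketch of relative risk from made-up cohort counts
#               disease, no disease
exposed     = [40, 60]   # hypothetical counts for the exposed group
not_exposed = [10, 90]   # hypothetical counts for the unexposed group

risk_exposed = exposed[0] / sum(exposed)           # P(disease | exposed)
risk_unexposed = not_exposed[0] / sum(not_exposed) # P(disease | not exposed)
rr = risk_exposed / risk_unexposed                 # relative risk
```

Here the exposed group's risk is .40 and the unexposed group's risk is .10, so RR = 4: the exposed group's disease risk is four times that of the unexposed group.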

What is the meaning of RR = 1?
It means the probability of disease is the same for those exposed to the risk factor and those not exposed to the risk factor

If a journal article reports that the 95% confidence interval for RR is [0.9, 1.1], what can we conclude?
There is no significant difference in the risks for those exposed and not exposed because the interval contains 1

Why is relative risk the wrong analysis for a case-control study?
In case-control studies, researchers identify people with a condition (the cases), then they find people who are similar to the cases, except without the condition (i.e., they find controls). The researchers in case-control studies haven't watched exposed and unexposed groups over time

How do we compute odds?
By dividing the probability of something happening by the probability of that something not happening

What goes into an odds ratio?
Two odds: one odds computation is divided by another odds computation

What is the meaning of OR = 1?
The odds of getting the disease or condition are the same for those with and without exposure to the risk factor. There is no relationship between the risk factor and getting the disease or condition

What is the meaning of OR = 7?
The odds of the disease or condition for those with the risk factor are 7 times greater than the odds of the disease for those without the risk factor
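
The odds and odds-ratio cards can be sketched from a hypothetical case-control 2x2 table; dividing counts (cases / controls) is equivalent to dividing the probability of being a case by the probability of not being a case:

```python
# Sketch of an odds ratio from made-up case-control counts
#             cases, controls
exposed   = [40, 20]   # hypothetical counts for those exposed to the risk factor
unexposed = [60, 80]   # hypothetical counts for those not exposed

odds_exposed = exposed[0] / exposed[1]       # odds of being a case given exposure
odds_unexposed = unexposed[0] / unexposed[1] # odds of being a case given no exposure
odds_ratio = odds_exposed / odds_unexposed   # one odds divided by another odds
```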

What is the meaning of the following 95% confidence interval for an odds ratio? [1.7, 3.2]
People exposed to the risk factor have greater odds of getting the disease than those not exposed because the CI is greater than 1. The spread of people across categories of cases and controls is tilted toward cases for those who were exposed, compared with those not exposed to the risk factor

What is the meaning of the term 'nonparametric statistics'?
Nonparametric statistics test null hypotheses that do not contain parameters. In contrast, a parametric statistic will have a null hypothesis containing something like mu or rho

How do nonparametric statistics' assumptions differ from the assumptions for parametric statistics?
Nonparametric statistics generally free researchers from the assumption of normality

Why would a researcher choose a rank test?
If researchers expect to violate the normality assumption in a way that would make their parametric statistics untrustworthy, they may choose a nonparametric rank test instead

Why not use rank tests all of the time?
The hypotheses may not reflect the ideas that the researchers wish to test, a violated assumption may trigger a statistically significant result when the null is true, and rank tests may not have as much power as parametric tests in some cases

