are you able to repeat the experiment to get the same results
statement or expectation of what you think will happen
you must be able to prove a theory or experiment wrong
psychologists ensure that they can replicate their own and others' research
the simplest most logically economical explanation
states exactly what a variable is and how it will be measured within the context of your study. like a recipe
a standardized way of making observations, gathering data, forming theories, testing predictions, and interpreting results
an explanation that organizes separate pieces of information in a coherent way
in-depth examinations of a single person. strength: it can highlight individuality. weakness: there is nothing to compare the results to, researcher bias can creep in, and it is very unlikely that this one person actually represents a large population
observe organisms in their natural setting. strength: the behavior of the subject is likely to be most accurate. weakness: researcher has no control over the setting, subjects may not have the opportunity to display the behavior the researcher is looking for. *cannot study topics like attitudes or thoughts using this method
study that asks a large number of people questions about their behaviors. strength: allows us to gather a large amount of information, can study things that can't be studied in naturalistic observation (sexual behavior). weakness: subjects may not understand the language, social desirability effect.
looking for relationships between variables, only tells us if there is a relationship, not which variable caused the other. less control over subjects' environment (hard to rule out alternatives)
a statistic that shows the strength of the relationship. the closer to 1 or -1, the stronger the relationship.
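The coefficient can be computed by hand from the deviations of each variable from its mean. A minimal Python sketch, using made-up study-time/grade numbers purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: ranges from -1 to +1;
    the closer to either end, the stronger the relationship."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical data: study hours vs. exam grades (illustrative only)
study_hours = [1, 2, 3, 4, 5]
grades = [60, 65, 70, 80, 85]
r = pearson_r(study_hours, grades)  # near +1: strong positive correlation
```

With the variables moving in the same direction, r lands near +1 (a positive correlation); if the grade list were reversed, r would land near -1 (a negative correlation).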
there is a direct relationship, the variables are varying in the same direction. ex: amount of study time and grades
the variables are inversely related. ex: as the number of children increases, the IQ scores of the children decrease. hands move in opposite directions
direct causal relationship between IV and DV, being positive that the manipulation of the IV caused the DV.
generalizability of our results to the general population. expect other similar groups to react the same way.
Sample (selection) Bias
when random sampling is not used. ex: taking the first 30 volunteers.
tendency for results to conform to the experimenter's expectations. ex: treating subjects differently depending on what he/she wants from them
when the participants' expectations about the effect of an experimental manipulation have an influence on the DV. expectations make behavior change. ex: telling someone they are drinking alcohol when they actually aren't
subtle bias that is produced by participants trying to be good subjects and behave in a manner that helps the experimenter.
any variable not intentionally included in the research design that may affect the DV. extra variables. ex: sickness, distractions.
variables other than the IV that participants in one experiment may get that participants in the other experiment don't get. ex: time of day, sunlight.
how to assert more control and higher experimental validity
subjects aren't aware of whether they're in the experimental group or control group.
neither subjects nor experimental assistants measuring the DV are aware of which groups subjects are assigned to. reduces experimenter bias and demand characteristics.
reducing "order" effects. ex: testing some subjects from group A and some subjects from group B both in the morning.
HOPS IV DV EAP
hypothesis, operationalize your population, sample, independent and dependent variables, expose, analyze, publish
Within Subjects Design
subjects serve as both the experimental and control group (pre test, post test)
Between Subjects Design
the DV is compared between two different experimental and control groups
variables that do not change or can't be manipulated during the experiment. ex: gender, height.
The assumption that the IV will have no effect on the DV
-if we notice a difference in results between the exp and control groups we reject the null and give support to the hypothesis
-if we fail to notice a difference in results between the two groups then we fail to reject the null hypothesis and DO NOT support the hypothesis
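The two outcomes above reduce to a simple decision rule once a statistical test has produced a p-value. A minimal sketch (the .05 cutoff is a conventional assumption, not stated in these notes):

```python
def decide(p_value, alpha=0.05):
    """Decision rule for null-hypothesis testing.

    p_value: probability of seeing a difference this large if the null is true.
    alpha: significance cutoff (conventionally .05).
    """
    if p_value < alpha:
        return "reject the null"      # difference found: supports the hypothesis
    return "fail to reject the null"  # no difference found: no support

decide(0.01)  # clear difference between groups: reject the null
decide(0.40)  # no clear difference: fail to reject
```

Note the asymmetry the notes describe: we either reject the null or fail to reject it; we never "accept" the null.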
Type I Error
rejecting the null hypothesis when in fact it is true. ex: we reject the null that a patient is healthy and admit them to the hospital for observation, later finding out that we made an error and the patient is not sick, "so sorry, you can go home now"
Type II Error
we fail to reject the null hypothesis and assume that it is true. ex: assuming the patient is not sick, we send him/her home, where they later die.
3 Main Research Tools
descriptive studies, correlation studies and experimental designs.
no control group, just a single participant being studied, no comparison. ex: pre-test, post-test. one group is tested, given the treatment, and then retested
no control group, design doesn't include randomization
True Experimental Designs
control groups and random assignment of groups
the variable that the researcher thinks has an effect and would like to manipulate. what we think will have an effect.
variable that is observed or measured by the experimenter to determine the effect of the IV (must be measurable and observable). the outcome variable
the ability of the experimenter to remove factors that might cause or affect the results even though they aren't the IV
Sample / Sampling
the population you want to make conclusions about / how you select your participants
wanting your sample to be similar to the population in the key characteristics relevant to your research question
Simple Random Sample
randomly selecting a number of participants from a group (equal chance)
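In code, a simple random sample is just drawing without replacement, so every member has an equal chance of being picked. A minimal sketch with a hypothetical pool of 100 participant IDs:

```python
import random

population = list(range(1, 101))        # hypothetical pool of 100 participant IDs
sample = random.sample(population, 30)  # 30 picks, equal chance, no repeats
```

Contrast this with taking the first 30 volunteers, which is the sample-bias example above.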
Stratified Random Sample
randomly selecting participants from different subsets of the population
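A minimal sketch of the same idea: split the population into subsets first, then sample randomly within each. The stratum names and sizes here are invented for illustration:

```python
import random

# hypothetical strata: participants grouped by class year
strata = {
    "freshman": [f"F{i}" for i in range(50)],
    "senior":   [f"S{i}" for i in range(50)],
}

def stratified_sample(strata, n_per_stratum):
    """Randomly select n participants from each subset of the population."""
    picked = []
    for group in strata.values():
        picked.extend(random.sample(group, n_per_stratum))
    return picked

chosen = stratified_sample(strata, 5)  # 5 freshmen + 5 seniors
```

Unlike a simple random sample, this guarantees each subset is represented in the final sample.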
allows the researcher to have control over chance variables. both groups should be relatively the same except for the exposure to the IV
the subjects receiving the IV
not exposed to the IV, the subjects the experimental group is compared to.
the results of the study are unlikely to have occurred simply by chance.
extent to which the researcher can claim that what was found was actually the result of something the researcher did. the fewer the alternative explanations that can be offered, the higher the validity
refers to the consistency and repeatability of the scores of an experiment. use the test-retest method.
taking a bunch of different studies and analyzing them as a whole