Card Set Information

2011-10-24 20:45:18

Handouts for test #2

  1. Measures of spread or variation give an idea about;
    the consistency of a set of scores
  2. Measures of spread or variation suggest;
    whether the measure of central tendency used indeed depicts what scores in the distribution are like
  3. Measures of spread or variation indicate;
    whether the mean or median accurately predicts the scores of individuals in a group
  4. Measures of spread or variation are important for judging;
    whether a generalization about a population based on a sample is likely to be a good guess
  5. All measures of spread establish a;
    reference point from which the amount of spread in a distribution of scores is measured
  6. Examples of measures of variation or spread
    • 1. Range
    • 2. Mean absolute deviation or average absolute deviation (AAD)
    • 3. Median absolute deviation (MAD)
    • 4. Variance and standard deviation
  7. Range=
    largest score - smallest score
  8. Median Absolute Deviation
    • 1. Find median of set of scores
    • 2. Find deviation from median by subtracting median from each score
    • 3. Take absolute value of each deviation
    • 4. Find the median of the absolute deviations (see the sketch below)
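A minimal Python sketch of cards 6-8, using a small hypothetical set of scores (the numbers and variable names are illustrative only; the statistics module is from the standard library):

```python
import statistics

scores = [4, 8, 6, 5, 3, 9, 7]  # hypothetical set of scores

# Card 7: range = largest score - smallest score
score_range = max(scores) - min(scores)

# Average absolute deviation: mean distance of the scores from their mean
mean = statistics.mean(scores)
aad = sum(abs(x - mean) for x in scores) / len(scores)

# Card 8: median absolute deviation, following the four steps above
median = statistics.median(scores)             # 1. median of the scores
deviations = [x - median for x in scores]      # 2. deviation of each score from the median
abs_deviations = [abs(d) for d in deviations]  # 3. absolute value of each deviation
mad = statistics.median(abs_deviations)        # 4. median of the absolute deviations

# Sample variance and standard deviation (n - 1 in the denominator)
variance = statistics.variance(scores)
sd = statistics.stdev(scores)

print(score_range, aad, mad, variance, sd)
```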
  9. Sensitivity
    Sensitivity increases with the number of possible values that can be reliably used
  10. Content validity
    refers to the extent to which a test uses items representative of the area you are trying to measure
  11. Criterion Validity
    refers to the degree to which test scores correlate with some criterion of interest (some direct and independent measure of what you are trying to measure)
  12. Transform data for a number of reasons
    • 1. to obtain more convenient numbers
    • 2. to meet the assumptions of certain stats
    • 3. to obtain pretty graphs
    • 4. to meet the assumptions of some theory
    • 5. to minimize the influence of extreme scores
    • 6. to allow for comparison across different data sets such as college exams
  13. 2 transformations which allow easy comparison by expressing scores relative to some standard are;
    • percentages
    • z scores
  14. Z scores tell you;
    • how far you are from the mean in standard deviation units
    • Allow you to compare people even if measured using different scales
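A minimal sketch of card 14, assuming two hypothetical exams scored on different scales; the z-score formula z = (score - mean) / SD is standard, but the data below are made up:

```python
import statistics

def z_score(x, scores):
    """Distance of x from the mean of scores, in standard deviation units."""
    return (x - statistics.mean(scores)) / statistics.stdev(scores)

# Hypothetical scores on two exams that use different scales
exam_a = [55, 60, 65, 70, 75]   # out of 100
exam_b = [12, 15, 18, 21, 24]   # out of 30

# The same person can be compared across the two exams via z scores
print(z_score(70, exam_a))  # SDs above the mean on exam A
print(z_score(21, exam_b))  # SDs above the mean on exam B
```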
  15. Calculation of Pearson r using z score formula
    • 1. Find the z score for each number (use the sample standard deviation)
    • 2. Multiply z scores for each pair together
    • 3. Add the products of the multiplications together
    • 4. Divide by the number of pairs -1
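A sketch of card 15's z-score formula for Pearson r, with hypothetical paired data; the sample standard deviation (n - 1) is used, as the card indicates:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson r computed from z scores, following the four steps in card 15."""
    # 1. Find the z score for each number (sample standard deviation)
    zx = [(x - statistics.mean(xs)) / statistics.stdev(xs) for x in xs]
    zy = [(y - statistics.mean(ys)) / statistics.stdev(ys) for y in ys]
    # 2. Multiply the z scores for each pair together
    products = [a * b for a, b in zip(zx, zy)]
    # 3. Add the products together; 4. divide by the number of pairs minus 1
    return sum(products) / (len(xs) - 1)

# Hypothetical paired data
hours_studied = [2, 4, 6, 8, 10]
exam_scores = [65, 70, 72, 80, 88]
print(pearson_r(hours_studied, exam_scores))
```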
  16. Psychophysics
    concerns the quantitative relationships between physical stimuli and the psychological experience of them
  17. Fechner's law
    • S = k log R
    • Sensation (psychological intensity) increases as the logarithm of the physical intensity of a stimulus
  18. Stevens' power law
    • S = k R^n
    • Sensation (psychological intensity) increases as the nth power of the physical intensity of a stimulus
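An illustrative sketch of cards 17-18; the constant k and the exponent n are arbitrary placeholder values here, since in practice they depend on the stimulus and the sensory modality:

```python
import math

def fechner(R, k=1.0):
    """Fechner's law: S = k * log(R); sensation grows as the log of stimulus intensity."""
    return k * math.log(R)

def stevens(R, k=1.0, n=0.5):
    """Stevens' power law: S = k * R**n; sensation grows as the nth power of intensity."""
    return k * R ** n

# Doubling the physical intensity does not double the sensation under either law
for R in (10, 20, 40, 80):
    print(R, fechner(R), stevens(R))
```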
  19. Psychometrics
    is a field of study having to do with the measurement of personality "traits", intelligence, knowledge, attitudes, etc.
  20. Psychometric scaling has to do with;
    • assigning numbers to the psychological experiences caused by a set of stimuli that may not vary along a simple physical dimension
    • However, the set of stimuli does vary along some psychological dimension
  21. To conclude that X is a cause of Y one must be able to;
    • 1. show that changes in Y did not occur until after the changes in X (Temporal precedence rule)
    • 2. show that X and Y are related (covariation rule)
    • 3. show that other explanations for the relationship between X and Y can be ruled out (internal validity rule)
  22. temporal precedence rule
    show that changes in Y did not occur until after the changes in X
  23. Covariation rule
    show that X and Y are related
  24. Internal validity rule
    show that other explanations for the relationship between X and Y can be ruled out
  25. All true experiments have 4 characteristics
    • 1. manipulation of X
    • 2. comparison of the effects of various levels of X on Y
    • 3. Subjects who begin the experiment equivalent on all important characteristics for each level of X (one method of doing this is random assignment)
    • 4. control over all other important variables so that all subjects are treated exactly the same except for X (the I.V.)
  26. Between Subject Comparison
    • each subject contributes a score (or a mean) for just one level (or amount) of the predictor or independent variable (X)
    • there are at least 2 levels
  27. Between subject comparison:
    Some possible confounding variables = threats to internal validity = rival hypotheses for b/w subjects comparisons include
    • selection bias
    • differential attrition or mortality
    • interaction of selection bias with treatment
  28. Within Subjects Comparison
    Each subject contributes a score (or a mean) for all levels (or amounts) of the predictor or independent variable (X) and there are at least 2 levels
  29. Some possible confounding variables = threats to internal validity = rival hypotheses for w/in subjects comparison designs:
    • maturation
    • instrument decay
    • history
    • testing
    • statistical regression
  30. One-shot case study or post-test only study;
    • No comparison
    • 1 level of the variable of interest may be introduced by the researcher or simply identified by the researcher
    • Since there is no comparison, obviously comparisons are neither b/w nor w/in
    • Few if any alternative explanations for the outcome (Y) can be eliminated
  31. No comparison of the influence of X on Y;
    • X is selected
    • X is "manipulated" (introduced by researcher)
  32. Comparison of the influence of X on Y
    -Comparisons are b/w subjects:
    • 1. X is simply measured
    • 2. X is selected
    • 3. X is manipulated
  33. Comparison of the influence of X on Y
    -Comparisons are w/in subjects;
    • 1. X is simply measured
    • 2. X is selected
    • 3. X is manipulated
  34. Differential Attrition
    the differential loss of subjects from the various comparison groups over the course of a study
  35. Interaction of selection bias with the treatment
    • the treatment produces selection bias
    • Thus one or more levels of the I.V. cause some subjects to drop out, making the groups unequal
  36. Diffusion
    the subjects in the various treatment groups communicate with each other so that the participants in one group learn something about the experiment that was intended for other groups, not theirs
  37. Compensatory Equalization
    refers to the situation in which untreated individuals or groups learn of the treatment received by others and demand the same or an equally good treatment
  38. Resentful Demoralization
    involves the situation in which individuals in an untreated or control group learn that others are receiving special treatment and become less productive, efficient, or motivated than they would have been because of feelings of resentment
  39. Maturation
    refers to the possibility that the observed effect in a w/in subjects design is due to changes in internal conditions over time rather than the variable of interest
  40. Testing
    • refers to the possible effects of already having taken a test on a participant's score when the participant takes the test again.
    • Also called a carry-over effect
  41. Differential carry-over effects
    occur when performance in a particular condition depends partly on the particular condition that preceded it
  42. History
    refers to a threat to internal validity that occurs when an effect may be due to external (outside/environmental) events that occur b/w tests rather than the variable of interest
  43. Instrument Decay
    occurs when a measurement device changes calibration or an observer changes criteria over time
  44. Statistical regression
    • refers to the fact that extreme scores in a distribution tend to move toward the mean of the distribution with repeated testing.
    • A function of the reliability of the test
  45. As reliability decreases or measurement error increases;
    statistical regression is more likely to occur
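A small simulation sketch of cards 44-45, under the assumption that an observed score is a true score plus random measurement error; the means, SDs, and cutoff are arbitrary illustrative values. Larger error (lower reliability) makes the extreme scorers' retest mean fall further back toward the grand mean:

```python
import random

random.seed(0)

def retest_mean_of_top_scorers(error_sd, n=10_000):
    """Observed score = true score + measurement error; larger error_sd means lower reliability."""
    true_scores = [random.gauss(100, 15) for _ in range(n)]
    test1 = [t + random.gauss(0, error_sd) for t in true_scores]
    test2 = [t + random.gauss(0, error_sd) for t in true_scores]

    # Select the extreme (top 5%) scorers on the first test ...
    cutoff = sorted(test1)[int(0.95 * n)]
    retest = [t2 for t1, t2 in zip(test1, test2) if t1 >= cutoff]

    # ... and return their mean on the second test
    return sum(retest) / len(retest)

print(retest_mean_of_top_scorers(error_sd=5))   # high reliability: little regression toward 100
print(retest_mean_of_top_scorers(error_sd=20))  # low reliability: more regression toward 100
```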