Study Guide: Psychological Testing

  1. What is the definition of a test?
     A standardized procedure for sampling behavior and describing it with categories or scores.
  2. What are the common characteristics that most tests share?
     Standardized procedure, behavior sample, scores or categories, norms or standards, prediction of nontest behavior.
  3. What is a norm-referenced test?
     A test that uses a well-defined population of persons as its interpretive framework.
  4. What is a criterion-referenced test?
     A test that measures what a person can do rather than comparing results to the performance levels of others.

  5. What are the different types of psychological tests?
     • Intelligence tests: measure an individual's ability in relatively global areas such as verbal comprehension, perceptual organization, or reasoning, and thereby help determine potential for scholastic work or certain occupations.
     • Aptitude tests: measure the capability for a skill.
     • Achievement tests: measure a person's degree of learning, success, or accomplishment in a subject or task.
     • Personality tests: measure the traits, qualities, or behaviors that determine a person's individuality.
     • Interest inventories: measure an individual's preference for certain activities.
     • Behavioral procedures: objectively describe and count the frequency of a behavior, identifying the antecedents and consequences of the behavior.
     • Neuropsychological tests: measure cognitive, sensory, perceptual, and motor performance to determine the extent, locus, and behavioral consequences of brain damage.

  6. What are psychological tests primarily used to do?
     • Classification
     • Diagnosis and treatment planning
     • Self-knowledge
     • Program evaluation
     • Research
  7. What are the common uses of tests?
     Make decisions about persons.

  8. What are desirable test administration procedures?
     Examiners must be familiar with the materials and directions before administering a test.
  9. What are the primary responsibilities of test publishers?
     • Publication: do not release tests prematurely.
     • Marketing: advertise tests honestly.
     • Distribution: sell only to trained people, according to qualification levels A, B (BA), and C (master's degree or higher).
  10. What are the three levels of qualifications test users must meet for purchasing tests? (i.e., what kinds of tests can be bought at each level?)
      • Level A: nonpsychologists, such as business executives or educational administrators.
      • Level B: those who have completed an advanced course in testing in college (e.g., aptitude and personality tests).
      • Level C: master's degree minimum.
  11. What is meant when we say testing should be in the "best interest of the client"?
      Tests should be given to benefit the client, not harm them.
  12. What is the Tarasoff case? What is meant by "duty to warn," and when does a psychologist have a duty to warn?
      • In the Tarasoff case, a student stabbed another student to death. A campus therapist knew of the threat but reported it only to campus police and never warned the woman who was killed.
      • Duty to warn: you must warn the persons in danger, and you must notify the authorities if a client is abusing children, the elderly, themselves, or others.
  13. What is informed consent? When, in regard to testing, is it not required?
      Test takers or their representatives are made aware, in plain English, of the reasons for testing, the types of tests being used, and how the results will be measured and used. Informed consent has three elements:
      • Disclosure: the client receives sufficient information.
      • Competency: the client is mentally able to give consent.
      • Voluntariness: consent is given freely.
      Informed consent is not required when testing is court-ordered.
  14. What is meant when we say test results must be given "in a language the test taker can understand"?
      Account for linguistic barriers (e.g., ESL) and use vocabulary appropriate to the test taker's age and mental ability.
  15. What does the book recommend for how to consider the impact of cultural background on test results?
      • Avoid stereotype threat.
      • Adopt a frame of reference.

  16. What was the first use of organized testing?
      Chinese civil service testing, circa 2200 BC.

  17. What does the "brass instruments" era of testing refer to?
      • Tools used to measure sensory thresholds and reaction times, which were erroneously thought to measure intelligence.
      • The 1800s, in Europe and Great Britain.

  18. Who developed the 1st intelligence test & why?
      Binet and Simon, 1905. The goal was to identify which kids could or could not learn in a typical classroom environment.

  19. When did intelligence testing make it to the U.S.? Why was this important?
      1916. The test was translated and made culturally relevant to the USA. Goddard, who brought the test to America, was a staunch nativist.

  20. What tests were developed for use with Army recruits & what were the positive & negative results of these tests?
      Subtests:
      • Following oral directions
      • Math
      • Judgment
      • Synonym-antonym pairs
      • Sentence restructuring
      • Number series completion
      • Analogies
      • Information
      Positive effects:
      • Psychologists gained experience in the psychometrics of test construction.
      • Test construction became a science.
      Negative effects:
      • The Army spent money and didn't really use the test scores.
      • Recruits often couldn't understand the directions or fell asleep.

  21. What are projective tests designed to measure?
      Responses to ambiguous stimuli that disclose innermost needs, fantasies, and conflicts.

  22. What is the MMPI? When was the most recent version published?
      The Minnesota Multiphasic Personality Inventory; 2003.
  • How does test interpretation work for norm-referenced tests?
    One's score is compared to a standardization sample.
  • How does test interpretation work for criterion-referenced tests?
    Compare the raw score to a set standard.
  • How can we summarize & pictorially represent a distribution of scores?
    Histogram, frequency polygon, bell-shaped curve.
  • Which measure of central tendency is strongly influenced by outliers?
    The mean.
  • What is the standard deviation (SD)?
    The degree of dispersion in a group of scores.
  • What percent of scores fall within 1 SD in a normal distribution?
    68%
  • What percent of scores fall within 2 SD in a normal distribution?
    95%
  • What percent of scores fall within 3 SD in a normal distribution?
    99.7%
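
A quick check of these figures, computed from the standard normal CDF (a minimal Python sketch; the exact values are 68.3%, 95.4%, and 99.7%):

```python
import math

# P(|z| < k) for a standard normal distribution, via the error function
def within_k_sd(k):
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} SD: {within_k_sd(k):.1%}")
# within 1 SD: 68.3%
# within 2 SD: 95.4%
# within 3 SD: 99.7%
```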

  • What is a percentile rank?
    The percentage of persons in the standardization sample who scored below a specific raw score.

  • How do you calculate a percentile rank?
    Number of scores below the target raw score, divided by the total number of participants, multiplied by 100.
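
A worked example of that formula (a minimal Python sketch; the sample scores are invented):

```python
scores = [55, 60, 62, 67, 70, 74, 78, 81, 85, 90]  # hypothetical standardization sample
target = 74

# percentile rank = (scores below target / total N) * 100
below = sum(1 for s in scores if s < target)
print(below / len(scores) * 100)  # 50.0 -> a raw score of 74 exceeds 50% of the sample
```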

  • What are the benefits in using percentiles?
    Easy to obtain and understand.
  • What is a standardized score?
    Expresses the distance from the mean in standard deviation units.
  • What is a z score?
    A computation of an examinee's standard score.
  • How is a standardized score calculated?
    z = (X - M) / SD: subtract the mean from the raw score and divide by the standard deviation.
  • What are the M & SD of z scores?
    M = 0, SD = 1.
  • What are the M & SD of T scores?
    M = 50, SD = 10.
  • What are the M & SD of IQ scores?
    M = 100, SD = 15.
  • What are the M & SD of CEEB scores?
    M = 500, SD = 100.
  • How can we calculate the various standardized scores once we have the z scores?
    X' = (z)(SD) + M, using the target scale's SD and M.
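
A short sketch applying X' = (z)(SD) + M to the scales above (the raw score and test statistics are hypothetical):

```python
raw, mean, sd = 82, 70, 8      # hypothetical test: raw score 82, M = 70, SD = 8
z = (raw - mean) / sd          # z = 1.5

# target scales: (M, SD)
scales = {"T": (50, 10), "IQ": (100, 15), "CEEB": (500, 100)}
for name, (m, s) in scales.items():
    print(name, z * s + m)     # X' = (z)(SD) + M
# T 65.0, IQ 122.5, CEEB 650.0
```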
  • What is meant by test standardization?
    Test results are compared to norms.
  • What are norms?
    A statistical summary of scores from the norm group or standardization sample.
  • What types of scores can be used as norms?
    Scores can serve as norms when the sample is large and representative and the raw score distribution is only mildly nonnormal.
  • What are the important issues to consider when developing test norms?
    If the norms of a norm-referenced test don't represent the population for whom the test is intended, they are useless, and all comparisons made against them will be useless.
  • What factors should be considered in selecting a norm group?
    Age, grade, sex, education, SES, ethnic group, geographic region.
  • What is random sampling?
    Every person in the target population has an equal chance of being selected.
  • What is stratified random sampling?
    Putting constraints on the randomness: people are chosen randomly from within each stratum.
  • What is cluster sampling?
    Divide the population into geographical clusters, then randomly sample from each cluster.
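
A minimal sketch contrasting the three sampling methods above (the population and strata are made up):

```python
import random

population = [{"id": i, "region": r} for r in ("N", "S") for i in range(100)]

# random sampling: every person has an equal chance of selection
simple = random.sample(population, 20)

# stratified random sampling: sample randomly within each stratum (here, region)
stratified = [p for region in ("N", "S")
              for p in random.sample([q for q in population if q["region"] == region], 10)]

# cluster sampling: divide the population into clusters, pick clusters at random,
# then sample within each chosen cluster
clusters = [population[i:i + 20] for i in range(0, len(population), 20)]
cluster_sample = [p for c in random.sample(clusters, 2) for p in random.sample(c, 10)]
```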
  • What is classical test theory?
    The view that test scores result from the influence of two factors: consistency (the true score) and inconsistency (measurement error).
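
In the usual notation, classical test theory writes this as an observed score decomposed into a true score plus random error:

    X = T + E

where X is the observed score, T is the true score (the consistency), and E is measurement error (the inconsistency).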
  • What are the four assumptions of classical test theory?
    • Measurement errors are random.
    • The mean error of measurement is zero.
    • True scores and error scores are uncorrelated.
    • Errors on different tests are uncorrelated.
  • What are the two types of error?
    Unsystematic & systematic.
  • Reliability is concerned with what type of error?
    Unsystematic (random) error.
  • What is the range of a reliability coefficient? What is considered high?
    -1 to +1; coefficients of about .90 or higher are considered high.
  • What are the types of reliability that consider temporal stability?
    Test-retest and alternate-form reliability.
  • What are the types of reliability that consider internal consistency?
    • Split-half reliability
    • Spearman-Brown formula
    • Coefficient alpha
    • Kuder-Richardson estimate of reliability
    • Interscorer reliability
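
A worked example of the split-half approach with the Spearman-Brown correction, rSB = 2r / (1 + r), which projects the correlation between two half-tests up to the full-length test (the half-test correlation is invented):

```python
r_half = 0.70                    # hypothetical correlation between the two half-tests

# Spearman-Brown correction: estimated reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)
print(round(r_full, 3))          # 0.824 -> the full test is more reliable than either half
```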
  • What are the benefits and drawbacks of the different types of reliability?
    The split-half approach is not precise; coefficient alpha reliability is higher.
  • What is considered acceptable reliability for research purposes?
    About .70 or higher.
  • What is acceptable for tests used to make important decisions about individuals?
    About .90 or higher.
  • What is the standard error of measurement (SEM)?
    An index of how much, on average, an individual's score might vary if they were to take the test repeatedly.
  • What is the relationship between reliability and the SEM?
    The more reliable the test, the less error there is on average.
  • How do we calculate confidence intervals?
    First compute SEM = SDx * sqrt(1 - r11); the confidence interval is then the obtained score +/- z * SEM (e.g., z = 1.96 for 95% confidence).
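
A worked example of those two steps (the test statistics are hypothetical):

```python
import math

sd_x, r11 = 15, 0.91               # hypothetical test SD and reliability coefficient
sem = sd_x * math.sqrt(1 - r11)    # SEM = 15 * sqrt(0.09) = 4.5

obtained, z = 110, 1.96            # z = 1.96 gives a 95% confidence interval
print(round(obtained - z * sem, 2), round(obtained + z * sem, 2))  # 101.18 118.82
```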
  • What does a confidence interval tell us?
    How accurately we can predict the range of the person's next test score if they were to retake the exam.
  • What is Item Response Theory?
    The idea that we can use fancy math to develop scales that contain highly discriminating items and thereby increase the reliability of tests; error is eliminated in the test development phase.
  • What is validity?
    Whether a test measures what it claims to measure.
  • What is the relationship between reliability and validity?
    A test must be reliable before it can be valid.
  • Can a test be valid if it is not reliable?
    No.
  • What type of error affects validity?
    Systematic and unsystematic error.
  • What is the Trinitarian model?
    A three-part model describing validation procedures.
  • What types of validity does the Trinitarian model encompass?
    Content validity, criterion-related validity (concurrent/predictive validity), and construct validity.
  • What is face validity?
    The appearance of appropriateness from the test taker's perspective.
  • How is face validity different from other types of validity?
    It is not an actual measure of validity.
  • What is content validity and how is it assessed?
    A) Whether the items on the test are a good representative sample of the domain being measured. B) It is assessed by experts.
  • What is criterion-related validity? What types of validity are encompassed under it?
    A) The extent to which a test correlates with nontest behaviors, called criteria.
    B) Concurrent validity and predictive validity.
  • What are the differences between concurrent & predictive validity?
    Concurrent validity: the test is correlated with a criterion measure that is available at the time of testing. Predictive validity: the test is correlated with a criterion that becomes available in the future.
  • What is the standard error of the estimate (SEE)?
    The margin of error expected in the predicted criterion score; it tells us how accurately test scores can predict performance on the criterion.
  • How is the SEE related to predictive validity?
    The higher the correlation between test and criterion, the less error there is in the predictions made from the test.
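
The standard formula is SEE = SDy * sqrt(1 - r^2); a quick sketch showing how a higher validity coefficient shrinks the SEE (the numbers are invented):

```python
import math

sd_criterion = 10                  # hypothetical SD of the criterion measure

# SEE = SD_y * sqrt(1 - r^2): higher test-criterion correlation -> smaller SEE
for r in (0.30, 0.60, 0.90):
    print(r, round(sd_criterion * math.sqrt(1 - r**2), 2))
# 0.3 9.54 | 0.6 8.0 | 0.9 4.36
```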
  • What is a typical validity coefficient for predictive validity?
    Rarely greater than .60-.70.

  • Predictive validity is still considered useful if it is between...

  • What is decision theory used for?
    It involves the use of test scores as a decision-making tool.
  • What is an expectancy table?
    A visual tool that helps decision makers chart data and set cutoff points.
  • How does an expectancy table relate to predictive validity?
    If we use tests to make decisions, then those tests must have strong predictive validity.
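
A hypothetical expectancy table of the kind described above (all numbers invented for illustration): each row links a test-score range to the observed proportion of people who later succeed on the criterion, which is where cutoff points get drawn.

    Test-score range    % succeeding on criterion
    90-100              85%
    80-89               70%
    70-79               50%
    below 70            25%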
  • In decision theory, what is considered a hit?
    A correct prediction.
  • In decision theory, what is considered a miss?
    An incorrect prediction; a false positive is a miss in which someone was predicted to succeed but failed.
  • In decision theory, what is considered a false negative?
    When someone was predicted to fail but actually succeeded.
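
A small sketch tallying hits, false positives, and false negatives from hypothetical predictions:

```python
# (predicted_success, actual_success) for ten hypothetical examinees
outcomes = [(True, True), (True, False), (False, False), (False, True),
            (True, True), (False, False), (True, True), (False, True),
            (True, False), (False, False)]

hits = sum(p == a for p, a in outcomes)            # correct predictions
false_pos = sum(p and not a for p, a in outcomes)  # predicted success, actually failed
false_neg = sum(a and not p for p, a in outcomes)  # predicted failure, actually succeeded
print(hits, false_pos, false_neg)                  # 6 2 2
```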
  • What is construct validity?
    Tests designed to measure constructs (e.g., personality) must estimate the existence of an inferred, underlying characteristic based on a limited sample of behavior. Construct validity is good for tests that don't have a well-defined domain of content.
  • What does construct validity involve?
    Construct validity involves the theoretical meaning of test scores.
  • What are the ways we can demonstrate a test has construct validity?
    • Expert opinion.
    • Test homogeneity: see whether items intercorrelate with one another.
    • Developmental change: whether the test measures something that changes with age.
    • Theory-consistent group differences: do people with different characteristics score differently, in a way we would expect?
    • Theory-consistent intervention effects: do test scores change as expected based on an intervention?
    • Classification accuracy: how well a test can classify people on the construct being measured (see the sketch after this list).
      • Sensitivity: accurately identify those with the trait.
      • Specificity: accurately identify those without the trait.
    • Intercorrelations among tests: looking for similarities or differences with scores on other tests.
      • Convergent validity / discriminant validity.
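
The sketch referenced in the list above: a minimal Python example computing sensitivity and specificity from hypothetical classification results.

```python
# (test_flags_trait, actually_has_trait) for 100 hypothetical examinees
results = [(True, True)] * 18 + [(False, True)] * 2 \
        + [(True, False)] * 5 + [(False, False)] * 75

tp = sum(t and a for t, a in results)          # trait present, correctly flagged
fn = sum(a and not t for t, a in results)      # trait present, missed
fp = sum(t and not a for t, a in results)      # trait absent, falsely flagged
tn = sum(not t and not a for t, a in results)  # trait absent, correctly cleared

print(tp / (tp + fn))  # sensitivity = 0.9
print(tn / (tn + fp))  # specificity = 0.9375
```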
  • What is convergent validity?
    It is supported when tests measuring the same construct are found to correlate. For example, tests that measure depression should correlate with one another.
  • What is discriminant validity?
    It is supported when tests measuring different or unrelated constructs are found NOT to correlate with one another.
  • What does discriminant validity tell us?
    Whether you're comparing apples to oranges or not.