social psych


  1. what is assessment?
    it involves some type of measurement,

    it involves gathering samples of behavior, making inferences

    it is objective & systematic
  2. definition of assessment?
    formal and informal, used to quantify a behavior
  3. definition of appraisal?
    formal and informal
  4. what is the purpose of tests?
    to evaluate
  5. what is the purpose of an instrument?
    to provide greater knowledge to context
  6. what are the essential steps in counseling?
    • 1. assessing the client's problems
    • 2. conceptualizing and defining the client's problems
    • 3. selecting and implementing effective treatments
    • 4. evaluating the counseling
  7. what is assessing the client's problems?
    • -collaborative relationship
    • -confirmatory bias
  8. what is confirmatory bias?
    a tendency to search for or interpret information in a way that confirms one's own beliefs
  9. what is conceptualizing the client's problems?
    goal agreement
  10. what is selecting and implementing effective treatments?
    educating clients about purpose of assessment
  11. what is evaluating the counseling?
    impacts legislation, public perception, professional identity
  12. assessment can be therapeutic in that . . .
    it can be used to further build rapport with clients

    therapeutic assessment model

    assisting clients in decision making
  13. what is therapeutic assessment model?
    better outcomes, improved perception of counselor

    shift away from traditional thinking of role of assessment
  14. what are types of assessment tools?
    • standardized vs non-standardized
    • individual vs group
    • objective vs subjective
    • verbal vs nonverbal
    • speed vs power
    • cognitive vs affective
  15. what is individual vs group?
    quantity vs quality
  16. what is objective vs subjective?
    manualized administration vs judgment/bias of the scorer
  17. what is verbal vs nonverbal?
    language barriers
  18. what is speed vs power?
    # of items completed vs difficulty of items completed
  19. what are cognitive instruments?
    cognition, perceiving, processing, concrete & abstract thinking, remembering

    • -intelligence/general ability tests
    • - achievement tests
    • -aptitude tests
  20. what are affective instruments?
    interest, attitudes, values, motives, temperament, non-cognitive aspects of personality

    • -structured personality instruments
    • -projective techniques
  21. when was early testing done?
    • greeks - 2500 years ago
    • chinese - 2000 years ago
  22. who was credited with launching the testing movement - emphasis on sensory perception and intelligence?
    francis galton
  23. who was credited with founding the science of psychology - furthered intelligence testing?
    wilhelm wundt
  24. who expanded testing to include memory and other sample mental processes?
    james mckeen cattell
  25. what is binet-simon scale (1905)?
    • -assessed judgement, comprehension & reasoning
    • -ratio of mental age to chronological age (IQ)
  26. what edition is the stanford-binet scale on?
    fifth edition
  27. world war 1 - group testing (army alpha & army beta)?
    used to assign work roles/tasks
  28. frank parson's "father of guidance"?
    promoted more systemic views of assessment and career
  29. theoretical basis became a debate concerning the definition of?
    intelligence
  30. interest in assessment spread beyond intelligence - led to development of personality instruments such as?
    the rorschach inkblot test, developed in 1921
  31. aptitude tests developed for ?
    selecting and classifying industrial personnel
  32. stanford achievement test (1923) was?
    the first standardized achievement battery
  33. the first edition of the mental measurements yearbook was published when?
    1938
  34. dissatisfaction with existing personality instruments led to:
    • projective techniques became popular
    • MMPI developed (early 1940s)
    • prevented individuals from "faking" results
  35. standardized achievement tests well-established in public schools?
    multiple aptitude batteries appeared after 1940
  36. criticisms of assessment began to emerge:
    • need for standards (APA)
    • need for centralized test publication
    • consolidation of testing agencies/publishers allowed for electronic scoring
    • increased accessibility of tests
  37. examination and evaluation of testing and assessment - widespread public concern:
    • misuse of testing instruments
    • cultural biases
  38. family educational rights and privacy act (1974):
    • mandated right to view educational records, including testing
    • required permission of parents for many types of testing
  39. use of computers blossomed : administration, scoring, interpretation, computer-adapted testing, report-writing-
    customized testing based on test-taker
  40. revision of instruments in response to criticism
    increase in cultural diversity and awareness of test bias
  41. increasing use of authentic and portfolio assessment
    educational settings - evaluation through multiple tests, instruments, assessments
  42. in 1900's what happened?
    • binet-simon scale
    • stanford-binet scale
    • world war 1 - group testing
    • frank parsons "father of guidance"
  43. in 1920's and 1930's what happened?
    • theoretical basis became a debate concerning the definition of intelligence
    • interest in assessment spread beyond intelligence - led to development of self-report personality inventories
    • aptitude tests developed for selecting and classifying industrial personnel
    • development of vocational counseling instruments
    • stanford achievement test
    • first edition of mental measurements yearbook
  44. what happened in 1940s and 1950s?
    • dissatisfaction with existing personality instruments
    • standardized achievement tests well-established in public schools
    • criticisms of assessment began to emerge
  45. what happened in 1960s and 1970s?
    • examination and evaluation of testing and assessment - widespread public concern
    • 1970's grassroots movement for "minimum competency" testing for high school graduates
    • family educational rights and privacy act
    • increased use of computers in assessment
  46. what happened in 1980's and 1990s?
    • use of computers blossomed: administration, scoring, interpretation, computer-adapted testing, report-writing
    • revision of instruments in response to criticism
    • increasing use of authentic and portfolio assessment
  47. what happened in 2000s to the present?
    • influences on technology and the internet
    • research on multicultural issues
    • achievement testing & no child left behind
    • increased interest in accountability and effectiveness data
    • revision of standards and DSM
  48. measurement scales?
    • nominal
    • ordinal
    • interval
    • ratio
  49. nominal is?
    measurement scale in which numbers are assigned to attributes of objects or classes of objects solely for the purpose of identifying the objects
  50. ordinal is?
    a scale on which data is shown simply in order of magnitude since there is no standard of measurement of differences
  51. what is interval?
    scale of measurement for which the difference between two points on the scale is meaningful.
  52. what is ratio ?
    measurement scale similar to the interval scale in that it represents quantity and has equality of units; one difference is that this scale also has an absolute zero, with no numbers existing below the zero
  53. norm-referenced instruments-
    • individual's score is compared to performance of others who have taken the same instrument (norming group)
    • example : personality inventory
    • evaluating the norming group by size, sampling, and representation
  54. criterion referenced instruments -
    • individual's performance is compared to specific criterion or standard
    • example : third-grade spelling tests
    • how are standards determined? common practice, professional organizations or experts, empirically-determined
  55. what is frequency distribution?
    mathematical function showing the number of instances in which a variable takes each of its possible values
  56. what is frequency polygon?
    A graph made by joining the middle-top points of the columns of a frequency histogram
  57. what is histogram?
    diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval
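A minimal Python sketch (hypothetical raw scores) of the frequency distribution idea above, printed as a crude text tally; note a true histogram would group scores into class intervals rather than tally every distinct value:

```python
# A minimal sketch (hypothetical raw scores): a frequency distribution built
# with collections.Counter and printed as a crude text tally of * characters.
from collections import Counter

scores = [72, 85, 85, 90, 72, 64, 85, 78, 90, 72]  # made-up raw scores

freq = Counter(scores)                 # frequency distribution: value -> count
for value in sorted(freq):
    print(value, "*" * freq[value])    # bar length proportional to frequency
```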
  59. what is mode?
    most frequent score
  60. what is median?
    evenly divides scores into two halves (50% of scores fall above, 50% fall below)
  61. what is mean?
    arithmetic average of the scores
  62. what are measures of variability?
    range, variance, standard deviation
  63. what are measures of central tendency?
    mean, median, mode
  64. what is range?
    highest score minus lowest score
  65. what is variance?
    average of the squared deviations from the mean
  66. what is standard deviation?
    square root of variance
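A minimal Python sketch of the central tendency and variability measures above, using the standard library's statistics module on hypothetical scores:

```python
# A minimal sketch (hypothetical scores) of mode, median, mean, range,
# variance, and standard deviation as defined in the cards above.
import statistics

scores = [4, 8, 6, 5, 3, 8, 9, 5, 8]     # made-up raw scores

print(statistics.mode(scores))           # most frequent score -> 8
print(statistics.median(scores))         # middle score -> 6
print(statistics.mean(scores))           # arithmetic average -> 6.22...
print(max(scores) - min(scores))         # range: highest minus lowest -> 6
print(statistics.pvariance(scores))      # variance: average squared deviation from the mean
print(statistics.pstdev(scores))         # standard deviation: square root of variance
```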
  67. what is normal distribution?
    a function that represents the distribution of many random variables as a symmetrical bell-shaped graph
  68. what is skewed distribution?
    a distribution in which most scores fall toward either the high or low end rather than in the middle, unlike the symmetrical bell-shaped curve, which peaks in the middle and tapers toward both ends
  69. what are types of scores?
    • raw scores
    • percentile scores/percentile ranks
    • standard scores
  70. what are standard scores?
    z scores, t scores, stanines, age/grade-equivalent scores
  71. how can we interpret percentiles?
    98th percentile - 98% of the group had a score at or below this individual's score

    • 32nd percentile - 32% of the group had a score at or below this individual's score
    • if there were 100 people taking the assessment, 32 of them would have a score at or below this individual's score

    • units are not equal
    • useful for providing information about relative position in normative sample
    • not useful for indicating amount of difference between scores
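A minimal sketch of the percentile-rank interpretation above (the percentage of the group scoring at or below a given score); the helper function and norming data are hypothetical:

```python
# A minimal sketch: percentile rank as the percentage of the norming group
# scoring at or below a given raw score (hypothetical data).
def percentile_rank(score, group_scores):
    at_or_below = sum(1 for s in group_scores if s <= score)
    return 100 * at_or_below / len(group_scores)

group = [55, 60, 62, 64, 68, 70, 71, 75, 80, 92]  # ten made-up scores
print(percentile_rank(70, group))  # 60.0 -> 60% of the group scored at or below 70
```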
  72. what is z score?
    how many standard deviations from the mean a value is
  73. what is t score?
    scores scaled to have a mean of 50 and a standard deviation of 10
  74. what is stanines?
    method of scaling test scores on a nine-point standard scale with a mean of five and a standard deviation of two
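A minimal sketch converting a raw score into the standard scores defined above; the norming mean and SD are hypothetical, and the stanine conversion shown is one common approximation (5 + 2z, clipped to the 1-9 range):

```python
# A minimal sketch of z scores, T scores, and stanines (hypothetical norms).
def z_score(raw, mean, sd):
    return (raw - mean) / sd           # standard deviations from the mean

def t_score(z):
    return 50 + 10 * z                 # scaled to mean 50, SD 10

def stanine(z):
    # nine-point scale with mean 5 and SD 2, truncated to 1-9
    return max(1, min(9, round(5 + 2 * z)))

z = z_score(raw=115, mean=100, sd=15)  # hypothetical raw score and norms
print(z, t_score(z), stanine(z))       # 1.0, 60.0, 7
```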
  75. what are problematic scores?
    • age-equivalent scores
    • grade-equivalent scores

    • problematic because they do not reflect precise performance on an instrument
    • learning does not always occur in equal developmental levels
    • instruments vary in scoring
  76. adequacy of norming group depends on:
    • clients being assessed
    • purpose of the assessment
    • how information will be used
    • examine methods used for selecting group
    • examine characteristics of norming group
  77. methods for selecting norming group:
    • simple random sample
    • stratified sample
    • cluster sample
  78. what is stratified sample?
    sampling method in which the population is divided into subgroups (strata) and members are drawn from each subgroup
  79. what is cluster sample?
    sampling method in which naturally occurring groups (clusters) are randomly selected and members of the chosen clusters are assessed
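A minimal sketch contrasting the three sampling methods above on a hypothetical norming population; the grade variable, group sizes, and sample sizes are invented purely for illustration:

```python
# A minimal sketch (hypothetical norming population) of simple random,
# stratified, and cluster sampling using Python's random module.
import random

population = [{"id": i, "grade": random.choice([3, 4, 5])} for i in range(1000)]

# simple random sample: every member has an equal chance of selection
simple = random.sample(population, 100)

# stratified sample: divide into strata (here, grade) and sample within each
stratified = []
for grade in (3, 4, 5):
    stratum = [p for p in population if p["grade"] == grade]
    stratified += random.sample(stratum, min(34, len(stratum)))

# cluster sample: randomly select whole groups (e.g. "classrooms") and keep everyone in them
clusters = [population[i:i + 25] for i in range(0, 1000, 25)]   # 40 clusters of 25
cluster_sample = [p for c in random.sample(clusters, 4) for p in c]
```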
  80. norming group characteristics -
    • size
    • gender
    • race/ethnicity
    • educational background
    • socioeconomic status
  81. assessment report -
    goal is to gain experience giving a formalized assessment in a safe situation and then using the results to decide treatment interventions and therapeutic goals - less emphasis on the construction of the instrument
  82. instrument review -
    goal is to understand more about how a particular instrument is created - by whom, how, its psychometric properties, the intended use, and your best clinical opinion of the quality of the instrument based on what you know thus far
  83. classical test theory
    every observed score is a combination of true score/ability and error
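A minimal sketch of classical test theory's observed = true + error idea, simulating repeated administrations; the true score and the normally distributed error are assumptions for illustration only:

```python
# A minimal sketch of classical test theory: observed score = true score + error.
import random

true_score = 100                        # hypothetical true ability
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]  # error ~ N(0, 5)

# across many hypothetical administrations the random error averages out,
# so the mean observed score approaches the true score
print(sum(observed) / len(observed))    # close to 100
```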
  84. reliability -
    • reliability coefficient (ex 0.80) - observed variance vs error variance
    • systematic versus unsystematic error
    • reliability only takes unsystematic error into account
  85. reliability -
    often based on consistency between two sets of scores
  86. correlation -
    • statistical technique used to examine consistency (relationship between scores)
    • correlation coefficient: ranges from -1.00 to +1.00 (.00 indicates lack of a correlation)
  87. positive correlation
    is a relationship between two variables such that their values increase or decrease together
  88. negative correlation
    A relationship between two variables in which one variable increases as the other decreases
  89. pearson product-moment correlation coefficient -
    • most common (most practical sense, +, -, large, small)
    • examines the products of paired z scores between administrations, averaged over the number of individuals
  90. coefficient of determination -
    the percentage of shared variance between two sets of data
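A minimal sketch of the Pearson product-moment correlation and the coefficient of determination on hypothetical paired scores; statistics.correlation requires Python 3.10+:

```python
# A minimal sketch (hypothetical paired scores): Pearson r and r-squared,
# the coefficient of determination (percentage of shared variance).
import statistics

x = [2, 4, 5, 7, 9, 10]    # e.g. first administration
y = [3, 5, 4, 8, 9, 11]    # e.g. second administration

r = statistics.correlation(x, y)   # Pearson r, between -1.00 and +1.00
print(round(r, 2))                 # positive: the values rise and fall together
print(round(r ** 2, 2))            # coefficient of determination (shared variance)
```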
  91. types of reliability -
    • test-retest
    • alternate/parallel forms
    • internal consistency measures
  92. what is test/retest?
    variation between first and second administration
  93. what is alternate/parallel forms?
    two versions of test, can be administered closer together, eliminates memorized responses
  94. what are internal consistency measures?
    • split half - spearman-brown formula
    • kuder-richardson formulas
    • coefficient alpha (cronbach's alpha) for non-dichotomous test items
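A minimal sketch of coefficient alpha (Cronbach's alpha) computed from its standard formula on a small, made-up item-by-person matrix; the helper name and data are hypothetical:

```python
# A minimal sketch of coefficient alpha: alpha = (k/(k-1)) * (1 - sum of item
# variances / variance of total scores), for rows = people, columns = items.
import statistics

def cronbach_alpha(rows):
    k = len(rows[0])                                     # number of items
    items = list(zip(*rows))                             # columns = items
    item_vars = sum(statistics.pvariance(i) for i in items)
    total_var = statistics.pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

data = [[3, 4, 3, 5],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [1, 2, 2, 1]]                # made-up ratings from four respondents
print(round(cronbach_alpha(data), 2))  # ~0.94 -> high internal consistency
```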
  95. nontypical situations -
    typical methods for determining reliability may not be suitable for :

    • speed tests
    • criterion-referenced tests
    • subjectively-scored instruments (inter-rater reliability)
  96. evaluating reliability coefficients -
    • examine purpose for using instrument
    • be knowledgeable about reliability coefficients of other instruments in that area
    • examine characteristics of particular clients against reliability coefficients
    • number of times of reliability testing and type of reliability measures
  97. standard error of measurement -
    provides estimate of range of scores if someone were to take instrument repeatedly (attempt to indicate an individual's 'true score')
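A minimal sketch using the common formula SEM = SD * sqrt(1 - reliability); the SD, reliability coefficient, and observed score are hypothetical:

```python
# A minimal sketch of the standard error of measurement and the band it
# places around an observed score (hypothetical values).
import math

sd = 15             # standard deviation of the instrument's scores
reliability = 0.91  # reliability coefficient
sem = sd * math.sqrt(1 - reliability)

observed = 110
print(round(sem, 1))                   # ~4.5
print(observed - sem, observed + sem)  # band likely to contain the 'true score'
```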
  98. validity -
    • concerns what instrument measures and how well it does so
    • not something instrument "has or does not have"
    • informs counselor when it is appropriate to use instrument and what can be inferred from results
    • reliability is a prerequisite for validity
  99. traditional categories of validity -
    • content related - items represent intended behavior
    • criterion - related - how instrument relates to an outcome (SAT)
    • construct - extent to which instrument measures theoretical or hypothetical construct
  100. evidence based on test content -
    • degree to which the evidence indicates that items, questions, or tasks adequately represent intended behavior domain
    • central focus is typically on how the instrument's content was determined
    • content-related validation evidence should not be confused with face validity (instrument appears to be a good measure)
  101. evidence based on response processes -
    • concerns whether individuals respond in a manner consistent with construct measured
    • e.g., having individuals think aloud while responding
    • may also examine information processing differences by subgroup
  102. evidence based on internal structure -
    • measuring one construct or the use of scales and subscales?
    • examining internal structure using factor analysis (comparison of subscale sets)
    • can also examine internal structure of instrument for different subgroups (differential item functioning) - (example: women consistently answer a particular item correctly while men consistently answer it incorrectly)
  103. evidence based on relations to other variables -
    • correlational method
    • convergent evidence
    • related to other variable which it should in theory be related to
    • discriminant evidence
    • not related to other variables from which it should differ
  104. prediction/instrument-criterion relationship -
    • concurrent validity - assesses current context, no gap in time - useful for diagnosing in session
    • predictive validity - gap in time between administration and collecting criterion evidence
    • decision theory
  105. regression -
    • cousin of correlation
    • used to determine usefulness of variable or a set of variables in relation to other meaningful variable we are trying to predict
    • regression line - instrument scores vs criterion - line of best fit
  106. standard error of estimate -
    • no perfect reliability or content-related validity
    • the margin of expected error in the individual's predicted criterion score as a result of imperfect validity
    • used to give clinicians a range for practical significance by accounting for error
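A minimal sketch fitting a regression line (line of best fit) from instrument scores to a criterion and computing the standard error of estimate with the common formula SEE = SD_y * sqrt(1 - r^2); the data are hypothetical and statistics.linear_regression/correlation require Python 3.10+:

```python
# A minimal sketch (hypothetical data): regression line for prediction and the
# standard error of estimate around the predicted criterion score.
import math
import statistics

instrument = [45, 50, 55, 60, 65, 70]          # predictor (instrument scores)
criterion  = [2.1, 2.4, 2.9, 3.0, 3.4, 3.6]    # criterion (e.g. later GPA)

slope, intercept = statistics.linear_regression(instrument, criterion)
print(round(slope * 62 + intercept, 2))        # predicted criterion for a score of 62

r = statistics.correlation(instrument, criterion)
see = statistics.pstdev(criterion) * math.sqrt(1 - r ** 2)
print(round(see, 2))                           # margin of expected error in the prediction
```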
  107. decision theory -
    • "group separation" or "expectancy tables"
    • do the scores of the instrument correctly differentiate into performance or diagnostic realms
    • gathering of how often the instrument is right, and how often it is incorrect
  108. validity generalization -
    • method of combining validation studies to determine if validity evidence can be generalized to new situations
    • must have a substantial number of studies
    • must use meta-analytic procedures (exploration of many studies and different research factors)
  109. evidence based on consequences of testing -
    • examples of social consequences :
    • group differences on tests used for employment selection
    • group differences in placement in special education

    • desire to see context and system in which instrument was administered
    • counselors should consider both validation evidence and social implications
  110. conclusions on different types of validation evidence :
    • gradual accumulation of evidence
    • no clear-cut decision that a construct is or is not present
    • consult validity prior to interpreting results
    • counselor must evaluate the information to determine if it is appropriate for which client under what circumstance
    • validation evidence should also be considered in informal assessments
  111. item analysis -
    • examining and evaluating each item in instrument (validity refers to instrument as a whole)
    • useful in development and revision process
    • item difficulty (must consider other variables)
    • item discrimination - does the item differentiate among responders on behavior domain
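A minimal sketch of item difficulty (proportion answering correctly) and one simple discrimination index (upper-group minus lower-group proportion correct) on made-up dichotomous item data; both helper names are hypothetical:

```python
# A minimal sketch of item analysis: difficulty (p) and a simple
# upper-minus-lower discrimination index for a single item (0/1 responses).
def item_difficulty(item_responses):
    return sum(item_responses) / len(item_responses)    # proportion correct

def discrimination(item_responses, total_scores):
    # rank examinees by total test score, then compare top and bottom thirds on this item
    ranked = [r for _, r in sorted(zip(total_scores, item_responses))]
    n = len(ranked) // 3
    low, high = ranked[:n], ranked[-n:]
    return sum(high) / n - sum(low) / n

item = [1, 0, 1, 1, 0, 1, 1, 0, 1]            # 1 = answered this item correctly
totals = [20, 8, 15, 22, 9, 18, 25, 7, 16]    # total test scores
print(round(item_difficulty(item), 2))        # ~0.67
print(discrimination(item, totals))           # positive -> item separates high/low scorers
```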
  112. item response theory
    • focus is on each item and establishing items that measure ability or level of a latent trait
    • emphasis is on each individual item rather than the sum of the whole
    • particular interest in which items the person answers correctly or affirmatively
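A minimal sketch of a one-parameter (Rasch-type) item response model, in which the probability of a correct or affirmative response depends on the person's latent trait level (theta) and the item's difficulty (b); the theta and b values are hypothetical:

```python
# A minimal sketch of a one-parameter logistic (Rasch-type) item response model.
import math

def p_correct(theta, b):
    # probability of a correct/affirmative response given trait level theta
    # and item difficulty b
    return 1 / (1 + math.exp(-(theta - b)))

# same person (theta = 0.5) on an easy item vs a hard item (hypothetical b values)
print(round(p_correct(0.5, b=-1.0), 2))   # easy item -> high probability (~0.82)
print(round(p_correct(0.5, b=2.0), 2))    # hard item -> low probability (~0.18)
```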
  113. selection of assessment instruments/strategies
    • determine what information is needed
    • analyze strategies for obtaining information
    • search assessment resources
    • evaluate assessment strategies
    • select an assessment instrument or strategy
  114. determine information needed -
    • identify - information needed for specific client, general information clinicians in an organization need about clients
    • consider information already available
  115. analyze strategies for obtaining information -
    • formal or informal techniques
    • consider which assessment method would be best suited to clients
    • consider professional limitations and which instruments counselor can ethically administer and interpret
  116. search assessment resources
    • mental measurements yearbook
    • educational testing services
    • test in print
    • tests: a comprehensive reference for assessments in psychology, education and business
    • directory of unpublished experimental mental measures
  117. evaluate assessment strategies
    • test purpose
    • instrument development
    • appropriate selection of norming group or criterion
    • reliability
    • validity
  118. evaluate assessment strategies
    • interpretation of scoring materials
    • user qualifications (level a, b, c)
    • practical issues
  119. administering assessment instruments
    • read administration materials ahead of time
    • attend to time limits
    • know boundaries of what is acceptable
    • use administration checklist, if helpful
  120. scoring
    • hand, computer, or internet scored (some can be self-scored)
    • before using computer scoring, investigate integrity of scoring service and steps used to develop program
    • some assessments require clinician judgement as part of scoring
  121. scoring authentic / performance assessments
    • involve performance of "real" authentic applications, rather than proxies
    • objectivity in scoring is more difficult to achieve
    • scoring is enhanced if: assessment has specific focus, scoring plan is based on qualities that can be directly observed, scoring is designed to reflect intended target, the setting for assessment is appropriate, observers use checklists or rating scales, scoring procedures have been field-tested before use
  122. communicating results
    • often one of the most important parts of assessment process
    • clients who receive test interpretation have greater gains than those who don't
    • tentative interpretations more helpful than absolute
    • clients prefer individual interpretation
  123. guidelines for communicating results -
    • know information in manual (especially validity information)
    • optimize the power of the test
    • use effective general counseling skills
    • develop multiple methods of explaining results
    • use visual aids to explain technical terms
    • use descriptive terms rather than numerical
    • provide range of scores and rationale for assessment
    • use probabilities rather than certainties, tentative interpretations rather than absolutes
    • discuss results in context of other information
    • involve clients in interpretation
    • monitor client reactions during interpretation
    • encourage client to ask questions
    • discuss limitations
    • do not leave the client confused
  124. communicating results to parents -
    • be prepared to answer questions and explain results
    • should understand testing used and symptoms of child's disorder
    • help parents adjust to diagnosis
    • be prepared to use variety of techniques
    • focus on active, coping approach
    • acknowledge parents' emotions
  125. psychological reports:
    • purpose: to disseminate assessment information to parents or other professionals
    • evaluate quality of reports before implementing suggested interventions
    • expect a comprehensive overview of client and interpretation of results in contextual manner
    • should be carefully crafted with attention to detail
    • be alert for typographical errors, use of vague jargon, careless mistakes
  126. psychological reports-
    • identifying information
    • reason for referral
    • background info
    • behavioral observations
    • assessment results and interpretations
    • diagnostic impressions and summary
    • recommendations
    • signature