Student Assessment Test 2

  1. Where do these tests come from
    Some are developed by states, some are bought from developers, some are national, most are externally mandated
  2. How are tests used
    • 1. To document failed education
    • 2. Used for education reform
  3. Content standards
    Performance standards
    Authentic assessment
    Alternative assessment
    Performance assessment
    • Content standards – the what
    • Performance standards – the how well
    • Authentic assessment – real world testing
    • Alternative assessment – no multiple choice
    • Performance assessment – actually doing a task
  4. 4 criticisms of testing
    • 1. Test Anxiety 
    • 2. Labeling
    • 3. Damage to student self-concept
    • 4. Pygmalion effect
  5. Why so much standardized testing
    • 1. Cheap
    • 2. Drive the content
    • 3. Easy to Implement
    • 4. Easy answers, easy to document
  6. Measurement vs Assessment
    Measurement is quantitative (uses #’s)

    Assessment is qualitative and quantitative

    • (Qualitative assessment always involves some value judgment)
  7. 4 types of Assessment
    Placement – determines where a student should begin

    Formative – testing along the way

    Diagnostic – identifies specific learning difficulties

    • Summative – total assessment at the end of instruction
  8. Norm vs. criterion referenced
    Norm – compared against peers

    Criterion – compared against a standard
  9. How to choose objectives
    • 1. Do they specify important parts of the course?
    • 2. Are they politically feasible?
    • 3. Are they realistic?
    • 4. Are you being rational?
  10. 2 types of Objectives
    • 1. General
    • 2. Specific
  11. Things that hinder validity
    • unclear directions
    • big words
    • ambiguity
    • bad table of specifications
    • poorly drawn inferences of results
    • bad questions 
    • short tests
    • poorly constructed tests
  12. 2 problems with test constructs
    • construct under-representation
    • construct-irrelevant variance
  13. 4 Big Areas in Validity
    • Content: Do test items adequately represent the universe of possible questions?

    Construct: a construct is the characteristic or trait we are trying to measure, so construct validity is how well the test actually measures that trait

    Criterion: How well does a test predict future performance, or estimate a student’s ability on a similar construct?

    Consequences: the effects of how the test results are used
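Predictive criterion validity is usually reported as a correlation between test scores and a later criterion measure. A minimal Python sketch (the scores and GPA figures below are made-up examples):

```python
import math

# Hypothetical data: a test score for each student, and the same
# students' later GPA (the criterion we hope the test predicts).
test_scores = [82, 75, 91, 68, 88, 79]
later_gpa   = [3.4, 2.9, 3.8, 2.5, 3.6, 3.1]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The closer r is to 1, the better the test predicts the criterion.
print(round(pearson_r(test_scores, later_gpa), 2))
```

A correlation near 1 would suggest strong predictive validity; near 0, none.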
  14. Reliability
    The consistency of a measurement
  15. X = T + E
    • X = Obtained or observed score (fallible)
    • T = True score (reflects stable characteristics)
    • E = Measurement error (reflects random error)
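The true-score model can be illustrated with a quick simulation (a sketch; the mean and the variance values are made-up):

```python
import random

random.seed(1)

# Classical test theory: X = T + E.
# True scores T are stable; errors E are random noise centered on 0.
true_scores = [random.gauss(75, 10) for _ in range(10000)]  # T
errors      = [random.gauss(0, 5)   for _ in range(10000)]  # E
observed    = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability is the share of observed-score variance due to true
# scores: var(T) / var(X).  With SDs of 10 and 5 it should come out
# near 100 / (100 + 25) = 0.8.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

The point of the model: error inflates the spread of observed scores, so the observed score X is a fallible estimate of the true score T.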
  16. 6 ways to Estimate Reliability
    • 1. Test-retest
    • 2. Equivalent forms
    • 3. Test-retest with equivalent forms
    • 4. Split Halves
    • 5. Coefficient Alpha
    • 6. Interrater Consistency
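Coefficient alpha (method 5) can be computed directly from an items-by-students score table. A minimal sketch, using made-up right/wrong item scores:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def coefficient_alpha(scores):
    """Cronbach's alpha: scores is a list of students, each a list
    of k item scores.  alpha = k/(k-1) * (1 - sum(item variances)
    / variance of total scores)."""
    k = len(scores[0])
    item_vars = [variance([s[i] for s in scores]) for i in range(k)]
    total_var = variance([sum(s) for s in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five students, four items (1 = correct, 0 = incorrect)
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(coefficient_alpha(scores), 2))  # 0.8
```

Unlike test-retest or equivalent forms, alpha needs only a single administration of a single test, which is why it is so widely reported.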
  17. Factors that influence reliability
    • 1. Number of items or assessment tasks
    • 2. Spread of Scores
    • 3. Objectivity
Author
cbraswe1
ID
279360
Card Set
Student Assessment Test 2
Description
Terms dealing with Student assessment
Updated