Testing and Assessment (Chapters 1-5)

Card Set Information

Author:
clkottke
ID:
129963
Filename:
Testing and Assessment (chapt. 1-5)
Updated:
2012-01-23 12:45:33
Tags:
testing assessment education
Description:
Education Testing and Assessment

The flashcards below were created by user clkottke on FreezingBlue Flashcards.


  1. Minimum-Competency Testing (MCT)
    MCT programs focused on basic skills that were considered to be the minimal essentials for the next grade level or a high school diploma.
  2. 4 Effects of Testing on Students
    • 1. Tests create anxiety.
    • 2. Tests categorize and label students.
    • 3. Tests damage students' self-concepts.
    • 4. Tests create self-fulfilling prophecies.
  3. Assessment
    a general term that includes the full range of procedures used to gain information about student learning (observations, ratings of performances or projects, paper-and-pencil tests) and the formation of value judgments concerning learning progress.
  4. A test is...
    a particular type of assessment that typically consists of a set of questions administered during a fixed period of time under reasonably comparable conditions for all students.
  5. Measurement
    is the assigning of numbers to the results of a test or other type of assessment according to a specific rule (e.g., counting the number of items answered correctly).
  6. General principles of assessment
    • 1. Clearly specifying what is to be assessed has priority in the assessment process.
    • 2. An assessment procedure should be selected because of its relevance to the characteristics or performance to be measured.
    • 3. Comprehensive assessment requires a variety of procedures (multiple-choice, essay, projects, short-answer, fill-in-the-blank, etc.).
    • 4. Proper use of assessment procedures requires an awareness of their limitations.
  7. What is the assessment and instructional process? (5 steps)
    • 1. Identifying instructional goals.
    • 2. Preassessing the learners' needs.
    • 3. Providing relevant instruction.
    • 4. Assessing the intended learning outcomes.
    • 5. Using the results.
  8. What is involved in Providing Relevant Instruction? (2 steps)
    • 1. Monitor learning progress.
    • 2. Diagnose learning difficulties.
  9. What is included under Assessing Intended Outcomes? (3 steps)
    • 1. Improvement of learning and instruction.
    • 2. Marking and reporting to parents.
    • 3. Use of results for other school purposes.
  10. What is Placement assessment?
    To determine student performance at the beginning of instruction.
  11. What is Formative assessment?
    To monitor learning progress during instruction.
  12. What is diagnostic assessment?
    To diagnose learning difficulties during instruction.
  13. What is summative assessment?
    To assess achievement at the end of a course or unit of instruction.
  14. What is the purpose of Placement Assessment?
    • 1. Does the student possess the knowledge and skills needed to begin the planned instruction?
    • 2. To what extent has the student already developed the understanding and skills that are the goals of the planned instruction?
    • 3. To what extent do the student's interests, work habits, and personality characteristics indicate that one mode of instruction might be better than another?
  15. What is norm-referenced assessment?
    A test or other type of assessment designed to provide a measure of performance that is interpretable in terms of an individual's relative standing in some known group.
  16. What is Criterion-referenced assessment?
    A test or other type of assessment designed to provide a measure of performance that is interpretable in terms of a clearly defined and delimited domain of learning tasks. Terms similar to criterion-referenced: standards-based, domain-referenced, objective-referenced, content-referenced, and universe-referenced.
  17. Informal tests are...
    those constructed by classroom teachers.
  18. Standardized Tests are...
    designed by test specialists and administered, scored, and interpreted under standard conditions.
  19. Supply versus Fixed-Response Tests...
    Some tests require examinees to supply the answer (e.g., essay), whereas others require them to select one of two or more fixed-response options (e.g., multiple-choice test).
  20. Objective Versus Subjective Tests.
    An objective test is one on which equally competent examinees will obtain the same scores (e.g., multiple-choice test), whereas a subjective test is one on which the scores are influenced by the opinion or judgment of the person doing the scoring (e.g., essay).
  21. Nature of Assessment (2 types)
    • 1. Maximum performance (What a person can do)
    • 2. Typical performance (What a person will do).
  22. Two types of Assessment Formats.
    • 1. Selected-response test (student selects a response from available options).
    • 2. Complex-performance assessment (student constructs an extended response or performs in response to a complex task).
  23. Instructional Goals and Objectives provide/convey what? (3)
    • 1. Provide direction for the instructional process.
    • 2. Convey instructional intent to others (students, parents, school personnel, public).
    • 3. Provide a basis for assessing student learning by describing the performance to be measured.
  24. What are the three domains in Taxonomy of Educational Objectives?
    • 1. Cognitive Domain
    • 2. Affective Domain
    • 3. Psychomotor Domain.
  25. Cognitive Domain
    Knowledge outcomes and intellectual abilities and skills.
  26. Affective Domain
    Attitudes, interests, appreciation, and modes of adjustment.
  27. Psychomotor Domain
    Perceptual and motor skills.
  28. The Taxonomy of Educational Objectives is primarily useful in...
    identifying the types of learning outcomes that should be considered when developing a comprehensive list of objectives for classroom instruction.
  29. What is the simple framework to move from factual information to more complex learning outcomes?
    • K = Knowledge (knows)
    • U = Understanding (understands)
    • Ap = Application (uses)
    • An = Analysis (shows, analyzes)
    • S = Synthesis (derives proofs, conducts, reports)
    • E = Evaluation (critiques)
  30. Summary of criteria for selecting final list of objectives.
    • 1. Completeness: are all important outcomes included?
    • 2. Appropriateness: Are outcomes related to school goals?
    • 3. Soundness: Are outcomes in harmony with sound principles of learning?
    • 4. Feasibility: Are outcomes realistic in terms of student abilities, time available, and facilities?
    • 5. Applying these criteria yields the final list of instructional objectives.
  31. Objectives should NOT be stated in terms of the following: (4)
    • 1. Teacher performance (e.g., "teach students...").
    • 2. Learning process (e.g., "the student learns the meaning...").
    • 3. Course content (e.g., "the student studies...").
    • 4. Two objectives in one (e.g., "knows and understands concepts").
  32. Validity
    Refers to the meaningfulness and appropriateness of the uses and interpretations to be made of assessment results.
  33. Construct validation is
    the process of determining the extent to which performance on an assessment can be interpreted in terms of one or more constructs.
  34. Construct validation typically includes...
    consideration of content and may include assessment-criterion relationships as well as several other types of information.
  35. Reliability
    refers to the consistency of assessment results.
  36. What is the nature of Validity?
    • 1. refers to the appropriateness of the interpretation and use made of the results of an assessment procedure for a given group of individuals, not to the procedure itself.
    • 2. is a matter of degree; it does not exist on an all-or-none basis.
    • 3. is always specific to some particular use or interpretation for a specific population of test takers.
    • 4. is a unitary concept.
    • 5. involves an overall evaluative judgment.
  37. What is the essence of the content consideration in validation?
    The goal of the content consideration in validation is to determine the extent to which a set of assessment tasks provides a relevant and representative sample of the domain of tasks about which interpretations of assessment results are made.
  38. Table of Specifications
    In its simplest form, a two-way chart in which percentages indicate the relative degree of emphasis that each content area and each instructional objective is to be given in a test; see the sketch below.
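    A minimal Python sketch of how such a chart can be used, assuming hypothetical content areas, objectives, and percentages (none of these values come from the source):

      # Hypothetical table of specifications: rows are content areas, columns
      # are instructional objectives, and cells are percentage emphasis.
      spec = {
          "Vocabulary":    {"Knowledge": 10, "Understanding": 5,  "Application": 5},
          "Comprehension": {"Knowledge": 10, "Understanding": 20, "Application": 10},
          "Computation":   {"Knowledge": 10, "Understanding": 10, "Application": 20},
      }
      total_items = 40  # planned test length

      # The percentages across all cells should sum to 100.
      assert sum(sum(row.values()) for row in spec.values()) == 100

      # Convert each cell's percentage emphasis into a number of test items.
      for area, row in spec.items():
          for objective, pct in row.items():
              n = round(total_items * pct / 100)
              print(f"{area:13s} | {objective:13s} | {pct:3d}% -> {n:2d} items")
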
  39. Methods used in Construct Validation
    • 1. Defining the domain or tasks to be measured.
    • 2. Analyzing the response process required by the assessment tasks.
    • 3. Comparing the scores of known groups.
    • 4. Comparing scores before and after a particular learning experience or experimental treatment.
    • 5. Correlating the scores with other measures of the same or similar construct.
  40. Predicting Future Performance by using...
    • 1. Aptitute and departmental test scores.
    • 2. Scatter plots
    • 3. Correlation Coefficients
    • 4. Expectancy Grid/table
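    A minimal Python sketch of an expectancy table, using made-up predictor scores, outcomes, and band cutoffs (all illustrative assumptions): for each band of predictor scores it reports the percentage of students who later succeeded on the criterion.

      # Hypothetical (aptitude score, succeeded later?) pairs; not real data.
      records = [(85, True), (78, True), (72, False), (90, True),
                 (60, False), (65, True), (55, False), (70, True),
                 (95, True), (50, False), (68, False), (80, True)]

      bands = [(80, 100, "High"), (65, 79, "Middle"), (0, 64, "Low")]

      # Cross-tabulate predictor-score bands against later success.
      for low, high, label in bands:
          group = [ok for score, ok in records if low <= score <= high]
          pct = 100 * sum(group) / len(group)
          print(f"{label:6s} ({low}-{high}): {pct:3.0f}% succeeded (n={len(group)})")
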
  41. What are the factors in the test or assessment itself that can lower validity?
    • 1. Unclear directions.
    • 2. Reading vocabulary and sentence structure too difficult.
    • 3. Ambiguity.
    • 4. Inadequate time limits.
    • 5. Overemphasis of easy-to-assess aspects of domain at the expense of important but difficult-to-assess aspects. (construct underrepresentation).
    • 6. Test items inappropriate for the outcomes being measured.
    • 7. Poorly constructed test items.
    • 8. Test too short.
    • 9. Improper arrangement of items.
    • 10. Identifiable pattern of answers (e.g., on T/F or multiple-choice items).
  42. Correlation coefficient
    A statistic that indicates the degree of relationship between any two sets of scores obtained from the same group of individuals.
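    A minimal Python sketch of computing this statistic (Pearson r); the two score lists are made up for illustration.

      import statistics

      def pearson_r(x, y):
          """Pearson correlation between two equal-length lists of scores."""
          mx, my = statistics.mean(x), statistics.mean(y)
          sx, sy = statistics.stdev(x), statistics.stdev(y)
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
          return cov / (sx * sy)

      # Hypothetical scores for the same ten students on two measures.
      test_scores = [55, 62, 70, 71, 75, 78, 82, 85, 90, 94]
      criterion   = [50, 60, 65, 72, 70, 80, 79, 88, 86, 95]
      print(round(pearson_r(test_scores, criterion), 2))  # a high positive r
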
  43. Validity Coefficient
    A correlation coefficient that indicates the degree to which a measure predicts or estimates performance on some criterion measure.
  44. Reliability coefficient
    A correlation coefficient that indicates the degree of relationship between two sets of scores intended to be measures of the same characteristic.
  45. Reliability Coefficient
    is determined, in the split-half method, by correlating the scores on the two halves of an assessment.
  46. Spearman-Brown formula for estimating full-test reliability from the correlation between scores on the assessment's two halves.
    Reliability of full assessment = (2 × correlation between half assessments) / (1 + correlation between half assessments).
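    A minimal Python sketch of the formula just stated, with a hypothetical half-test correlation of .60:

      def spearman_brown(r_halves):
          """Step up the half-test correlation to full-test reliability."""
          return 2 * r_halves / (1 + r_halves)

      # If scores on the two halves of an assessment correlate .60, the
      # estimated reliability of the full-length assessment is:
      print(spearman_brown(0.60))  # -> 0.75
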
  47. High reliability is demanded when the decision:
    • is important
    • is final
    • is irreversible
    • is unconfirmable
    • concerns individuals
    • has lasting consequences (e.g., selecting or rejecting college applicants).
  48. Low reliability is tolerable when the decision...
    • is of minor importance.
    • is in its early stages.
    • is reversible.
    • is confirmable by other data.
    • concerns groups
    • has temporary effects (e.g., whether to review a classroom lesson).
  49. Reliability estimates are typically reported in terms of
    a reliability coefficient or the standard error of measurement.
  50. Methods of determining Reliability coefficients
    • 1. Interrater method
    • 2. Test-retest method
    • 3. Equivalent-forms method
  51. Interrater method of determining reliability coefficients
    • 1. requires the same set of student performances be scored by two or more raters.
    • 2. It provides an indication of the consistency of scoring across raters.
  52. Test-retest method involves
    • 1. giving the same assessment twice to the same group with an intervening interval.
    • 2. Resulting coefficient provides a measure of stability.
  53. The equivalent-forms method involves:
    • 1. giving two forms of an assessment to the same group, either in close succession or with a time interval between forms.
    • 2. Given in close succession, the two forms yield a measure of equivalence.
    • 3. Given with a time interval between them, they yield a measure of stability and equivalence.
    • 4. This method provides a rigorous evaluation of reliability because it includes multiple sources of variation in the assessment results.
  54. Reliability can also be estimated from a single administration of an assessment either by:
    • 1. correlating the scores on two halves of the assessment (the split-half method)
    • 2. or by applying the coefficient alpha formula.
  55. Coefficient alpha provides
    • 1. a measure of internal consistency that is easy to apply.
    • 2. It is not applicable to speeded tests, and it provides no information concerning the stability of assessment scores from day to day.
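    A minimal Python sketch of the coefficient alpha computation on a hypothetical matrix of right/wrong item scores (one row per student, one column per item); the data are made up for illustration.

      import statistics

      def coefficient_alpha(item_scores):
          """Coefficient (Cronbach's) alpha from per-student lists of item scores."""
          k = len(item_scores[0])                        # number of items
          totals = [sum(student) for student in item_scores]
          item_vars = [statistics.variance([s[i] for s in item_scores])
                       for i in range(k)]                # variance of each item
          return k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))

      # Hypothetical 1/0 (right/wrong) scores: five students, four items.
      scores = [[1, 1, 1, 0],
                [1, 0, 1, 1],
                [0, 0, 1, 0],
                [1, 1, 1, 1],
                [0, 0, 0, 0]]
      print(round(coefficient_alpha(scores), 2))  # -> 0.79
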
  56. Standard error of measurement (indicates..., can be computed..., is frequently reported..., is especially useful in...)
    • 1. Indicates reliability in terms of the amount of variation to be expected in individual scores.
    • 2. Can be computed from the reliability coefficient and the standard deviation; see the sketch below.
    • 3. Is frequently reported directly in test manuals.
    • 4. Is especially useful in interpreting test scores by the band of error surrounding each score.
    • 5. Also remains fairly constant from one group to another.
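    A minimal Python sketch of points 2 and 4 above, using the standard relation SEM = SD * sqrt(1 - reliability); the standard deviation, reliability, and observed score are illustrative numbers.

      import math

      def standard_error_of_measurement(sd, reliability):
          """SEM from the score standard deviation and the reliability coefficient."""
          return sd * math.sqrt(1 - reliability)

      sem = standard_error_of_measurement(sd=10, reliability=0.91)
      print(round(sem, 1))  # -> 3.0

      # Band of error around an observed score of 75 (score +/- 1 SEM,
      # roughly a 68% confidence band).
      score = 75
      print(f"{score - sem:.0f} to {score + sem:.0f}")  # -> 72 to 78
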
  57. It is important to consider the usability of tests and other assessment procedures, such as:
    • 1. ease of administration.
    • 2. time required.
    • 3. ease of interpretation and application.
    • 4. availability of equivalent or comparable forms.
    • 5. cost of testing.
