4130 Chapter 10: Experimental & Quasiexperimental Designs

Card Set Information

2012-06-22 18:48:03

  1. Chapter 10: Experimental & Quasiexperimental Designs
  2. Review: purpose of research designs
    *To provide the plan for testing the hypothesis about the independent & dependent variables.

    *Experimental & quasiexperimental designs differ from nonexperimental in that the researcher actively seeks to bring about the desired effect & does not passively observe behaviors & actions.

    *They test cause & effect relationships.

    *They eliminate potential alternative explanations (threats to validity) for the findings.
  3. Criteria for inferring causality: 3
    *To infer that some variable (IV) has an effect on, or causes a change in, a DV, 3 criteria must be present:

    1. There must be an association between the causal variable & the effect variable.

    2. The cause must precede the effect.

    3. The relationship must not be explainable by another variable (extraneous variable).

    Most nursing research is not experimental; most is nonexperimental, examining relationships of one variable to another.

    These types of studies are required before an experimental study can be done.
  4. Experimental Properties:
    experiment definition
    3 properties of a true experiment
    gold standard
    *An experiment is a scientific investigation that makes observations & collects data according to explicit criteria.

    *True experiment or randomized control trial (RCT): 3 properties:

    1. Randomization: random assignment of subjects to control or treatment groups.

    2. Control of the effect of extraneous variables on the DV.

    3. Manipulation of the IV.

    *The gold standard

    *1 RCT = Level 2 evidence
  5. A. Experimental design types
    *3 types of experimental design:

    1. True experiment

    2. Solomon four

    3. After only

    *The researcher uses the design that:

    •Is appropriate to the research question.

    •Maximizes control.

    •Holds the conditions of the study constant.

    •Establishes specific sampling criteria.

    •Maximizes the level of evidence.
  6. How do you maximize control? (4)
    *Ruling out extraneous variables through:

    •Homogeneous sampling (homogeneity).

    •Constancy in data collection.

    •Manipulation of the independent variable.

    •Randomization.

  7. 1. True experimental design
  8. 2. Solomon four
  9. 3. After only: NO pretests
  10. Experimental designs: advantages & disadvantages

    Advantages:
    • Most appropriate for testing cause-and-effect relationships.
    • Provide the highest level of evidence for single studies.

    Disadvantages:
    • Subject mortality, especially among control group subjects.
    • Difficult logistics (planning) in field settings; may be disruptive.
    • Hawthorne effect.
    • Not all research questions are amenable to experimental manipulation or randomization.
  11. B. Quasiexperimental Designs
    *1 quasiexperimental study = Level 3 evidence.

    *They also test cause and effect relationships. 

    *It lacks full experimental control.

    *Lacks randomization of participants.

    *It is weaker than the true experiment in making causal assertions.

    *Usually contaminated by threats to internal & external validity.


    1. Nonequivalent control group design.

    2. After-only nonequivalent control group design.

    3. One-group (pretest–post-test) design.

    4. Time series design.
  12. 1. Nonequivalent control group design
  13. 2. After only nonequivalent control group design
  14. 3. One group pretest/posttest design 
  15. 4. Time series design
  16. Quasiexperimental design advantages & disadvantages

    Advantages: practical, feasible (especially in clinical settings), & generalizable.

    Disadvantages: no randomization; weak at testing causal relationships (difficult to make clear cause-and-effect statements).
  17. Evaluation Research
    *Evaluation research is the use of scientific methods & procedures to evaluate a program, treatment, practice, or policy.

    *Experimental & quasiexperimental designs are used to evaluate the outcomes of a program.  This is called evaluation research or program evaluation. Much evaluation is done using a mixed method (qualitative & quantitative).

    This research focuses on the evaluation of:

    * the effectiveness of clinical interventions.

    *structured programs delivered to specific populations.

    *the quality of service.  

    *the impact of new ways of health care delivery.

    The research is either:

    *formative: evaluates a program as it is being implemented.

    *summative: assesses the results of a program or its outcomes.
  18. General Critiquing Criteria
    *What design is used?

    *Is the design experimental or quasiexperimental?

    *Is the problem one of a cause-and-effect relationship?

    *Is the method used appropriate for the problem?

    *Is the design suited to the study setting?

    *What experimental design is used? Is it appropriate?

    *How are randomization, control, and manipulation applied?

    *Are there reasons to believe that alternative explanations exist for the findings?

    *Are all threats to validity, including mortality, addressed in the report?
  19. Quasi-experimental critiquing criteria
    *What quasiexperimental design is used? Is it appropriate?

    *What are the most common threats to the validity of the findings?

    *What are the plausible alternative explanations for the findings? Are they addressed?

    *Does the author address threats to validity acceptably?

    *Are limitations addressed?
  20. Evaluation research critiquing criteria
    *Is the specific problem, practice, policy, or treatment being evaluated identified?

    *Are the outcomes to be evaluated identified?

    *Is the problem analyzed & described?

    *Is the program involved described & standardized?

    *Are the measurements of change identified?

    *Are the observed outcomes related to the activity or to other causes?
  21. Review
  22. After-only experiment
    this design is also known as the post-test-only control group design, in which neither the experimental group nor the control group is pretested
  23. Experimental
    -particularly suitable for testing cause-and-effect relationships because they help eliminate potential alternative explanations (threats to validity) for the findings

    • -includes 3 properties: randomization, control, & manipulation
  24. Solomon 4
    has two groups identical to the true experimental design plus an experimental after-group and a control after-group 
  25. Time series
    a research approach used when only 1 group is available; the group is studied over a longer period to examine trends
  27. After only nonequivalent control group
    used when a researcher wants to compare the results obtained from an experimental group with those obtained from a control group but is unable to conduct pretests or randomly assign subjects to groups
  28. Nonequivalent control group
    used when subjects cannot be randomly assigned to experimental and control groups but can be pretested and post-tested