# 4130 Chapter 9: Intro to Quantitative Research

### Card Set Information

• Author: hmijares
• ID: 159230
• Updated: 2012-06-25 14:56:59
• Tags: 4130 chapter
• Description: Introduction to Quantitative Research


The flashcards below were created by user hmijares on FreezingBlue Flashcards.

1. Chapter 9
2. Purpose of a research design
*To aid in the solution of research problems, answering questions & testing hypotheses.

*To maintain control.

*It allows the researcher to apply different levels of control, so that it can be argued that the independent variable, & not something else, really influenced the dependent variable.

• Research design has 3 parts:
• *A plan.
• *A structure.
• *A strategy.
3. From Chapter 3 slide 2: the research process (8 steps)
• The research process:
• problem statement
• research question
• literature review
• hypothesis
• sample design: plan, structure, strategy
• data collection & analysis
• interpretation
• dissemination

4. Sample & Population
*A researcher can't test the whole population, SO a sample must be used to represent the larger population.

Population = well defined set that has certain specified properties

Sample = subset of sampling units from a population

*Individuals within a sample are called subjects or participants.

5. Design considerations that affect a study
• 1. objectivity
• 2. accuracy
• 3. feasibility
• 4. control
• 5. homogenous sample
• 6. constancy
• 7. manipulation
• 8. randomization
6. 1. Objectivity: assessing this aspect of the design is asking what the researcher is basing the study on, or where he/she is coming from.
This is accomplished by researcher conceptualizing the problem from literature and developing a theoretical framework (concepts that exist in the literature, which may provide theoretical rationale for the development of hypotheses)

• As the consumer, you should see that a literature review was accomplished and addressed:
• *Participants (subjects) or who was studied.
• *observations or what was studied.
• *measurement of time or when the problem was studied.
• *selection of subjects or where the problem was investigated.
• *role of investigator or who investigated the problem.

*If the above is present, the study has high objectivity & the design matches the problem being studied.
7. 2. Accuracy
*This means that all aspects of a study systematically & logically follow from the research problem.

*Ask if the researcher chose a design that is consistent with the research problem & offers the maximum amount of control.

*The researcher achieves accuracy through the literature review & the theoretical framework.

*Remember that a theoretical framework is a structure of concepts that exists (has been tested) & is a ready-made map for a study.

*Ex: Albert Bandura's self-efficacy theory: people make judgments to do certain behaviors according to past experiences, modelling of others, their emotional state & the support they have to do the task.

How to achieve accuracy in a design:

*Have a literature review.

*Have a theoretical framework.

*Do a pilot study or a smaller version of the study before undergoing a larger study. This tests the accuracy of a chosen study design.
8. 3. Feasibility or "do-ability/capability for study to be completed"
*This refers to the capability of the study to be successfully completed.

• *Some of the factors that affect feasibility are:
• time
• subject availability
• facility & equipment availability
• money
• researcher experience
• ethics
9. 4. Control
*Defined as the measures that the researcher uses to hold the conditions of the study uniform & avoid possible impingement of bias on the DV or outcome.

*The aim is to have the highest degree of control possible in a study.

• *It involves holding the conditions (or extraneous variables) in check or constant. This is done by:
• *Use of a homogeneous sample (same characteristics).
• *Use of consistent data collection procedures.
• *Manipulation of the IV.
• *Randomization.
10. 5. Homogeneity = similarity of sample characteristics
*Extraneous variables (other than the IV) may affect the DV.

*Homogeneity means that the researcher chooses a sample in which the participants' characteristics are similar on the extraneous variables that may affect the DV.

*Example: a study is looking at smoking cessation. The researcher wants to know if a type of support given to a treatment group (X or IV) will influence the outcome of smoking cessation (Y or DV). Age, gender, length of time smoked & amount smoked may all be variables that affect smoking cessation besides the treatment support. To control the study, the researcher ensures that the study participants are similar in relation to age, gender, length of time & amount smoked.

*A researcher CANNOT control or predict every extraneous variable. It is impossible.

*Extraneous variables should be controlled for BEFORE (not after) data is collected.

11. 6. Constancy: consistent means of data collection
*Refers to the ability of the data collection design to hold the conditions of the study to a cookbook-like recipe.

*Each subject is exposed to the same environmental conditions, timing of data collection & data collection instruments.

*This comes into play when data is collected by a few people. The same processes must be adhered to by each person.

*This may mean the researcher training each data collector to a standard.

*As the consumer look for a clear, consistent means of data collection.
12. 7. Manipulation of the IV: admin of program, intervention, or treatment to one study group but not to another study group
*This means the administration of a program, intervention or treatment to one study group but not to another group of the study.

*This makes it an experimental or quasi-experimental design.

*Nonexperimental studies DO NOT manipulate an IV.

*One design is not better than the other.

*Whether an experimental or nonexperimental design is used depends on the research question.

*Blinding is a technique used in which participants DO NOT know whether they are receiving the IV or intervention or not.

*Double blinding means that the researcher & participants DO NOT know who is receiving the IV & who is not.

*The double blind adds more control than the blind.
13. Placebo Effect
*In an open trial, both the researchers & the subjects know the full details of the treatment. These trials are vulnerable to the placebo effect; that is, because the subjects believe that the stuff they're testing should work, it does work.

*A single blind design is used when the experimenters must know the full facts in order to carry out the experiment.

*With a double blind design the influence of the scientists' expectations & unintentional physical cues on the subjects is lessened. This helps to eliminate the placebo effect & experimenter's bias.
14. Gold Standard Experimental Design = randomized, double-blind, crossover study
*In a crossover study, each subject is given the intervention for a time & then the placebo for a time, in random order. A crossover study minimizes the variability between subjects because each subject crossing over in effect serves as his or her own control.

*A randomized, double-blind, crossover study is the best bet for achieving results that are untainted by the preconceptions & biases of both experimenters & subjects. That's why it's known as the gold standard of experimental design.
15. Crossover Design
*An example of a crossover design for a medication as an intervention (IV).

*2 periods with a washout period between them.
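The two-period crossover schedule above can be sketched in a few lines of Python. This is a hypothetical illustration only; the subject names, period labels, and function name are not from the chapter:

```python
import random

def assign_crossover_order(subjects, seed=None):
    """Give each subject a random order of the two periods,
    separated by a washout period: either drug -> washout -> placebo
    or placebo -> washout -> drug. Each subject serves as his or
    her own control across the two periods."""
    rng = random.Random(seed)
    orders = [("drug", "washout", "placebo"),
              ("placebo", "washout", "drug")]
    return {subject: rng.choice(orders) for subject in subjects}

schedule = assign_crossover_order(["s1", "s2", "s3", "s4"])
for subject, order in schedule.items():
    print(subject, "->", " / ".join(order))
```

Because each subject receives both the intervention and the placebo, between-subject variability drops out of the comparison.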

16. 8. Randomization
*When the required number of subjects from a population is obtained in such a manner that each subject has an equal chance of being selected.

*It eliminates bias, aids in obtaining a representative sample & can be used in various research designs.

How to randomize subjects?

*Draw names from a hat or a computer software method.

*For effective randomization:

• Researchers must be unable to predict the group to which a subject will be allocated.
• Researchers must be unable to change a subject's group assignment once it is allocated.
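The "draw names from a hat or a computer software method" idea can be sketched as a seeded shuffle in Python. A minimal sketch; the subject names and group labels are illustrative assumptions, not from the text:

```python
import random

def randomize_groups(subjects, seed=None):
    """Shuffle the whole subject list, then split it in half:
    every subject has an equal chance of landing in either group,
    and the allocation cannot be predicted in advance."""
    rng = random.Random(seed)
    shuffled = list(subjects)      # copy so the original order is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

groups = randomize_groups(["s1", "s2", "s3", "s4", "s5", "s6"])
print(groups["treatment"], groups["control"])
```

In practice the seed would be kept away from the researchers doing recruitment, so a subject's future assignment cannot be predicted or altered.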
17. How do I know a study's findings are solid or dependable?
2 evaluation criteria: internal validity & external validity

Internal Validity:

*Definition: Internal validity asks if it's the independent variable (IV) (or something else) that caused or resulted in the change in the dependent variable.

*It refers to INSIDE of the study.

*To establish internal validity the researcher rules out other variables that threaten the relationship between the IV and DV. These factors or threats are:

• History
• Maturation
• Testing
• Instrumentation
• Mortality
• Selection bias

In depth explanation to threats of internal validity:

History: events that occur over time or the course of a study.

Selection bias: a lack of randomization with sampling.

Maturation: events that occur within participants over the study's time.

Diffusion: a subject from one group talks to a subject from another group, so both groups end up being affected by the treatment.

Testing: taking the same test more than once such as in a pre-test & post-test.

Mortality: a loss of subjects between points of data collection (drop out).

Instrumentation: faulty equipment or a lack of consistency in observation or data collection.

External Validity:

Definition: Questions the conditions under which the findings can be generalized (generalizability).

Deals with the ability to generalize the findings outside the study to the larger population & to other contexts.

Threats to external validity:

• Selection of subjects.
• Reactive effects or study conditions.
• Measurement or testing effects.

In-depth explanation:

*Selection effects: Who is being studied? Has a representative sample of the larger population been obtained? How has the sample been obtained? Is the sample size large enough?

*Reactive effects: a participant's reaction to being studied. The Hawthorne Effect: the participant reacts in a certain way not to the IV but because they are in a study.

*Measurement effects: an administered pre-test affects the post-test results & limits the ability to generalize the study findings to other populations. The pre-test primes the participants.


18. Study Design Critique Questions:
*Study design appropriate.

*Control measures match design.

*Design reflects feasibility.

*Design flows from research question, framework, literature review, hypothesis.

*Control of threats to internal validity.

*Control of threats to external validity.

*Design linked to levels of evidence hierarchy.
19. Review
20. A sample of subjects similar to one another
homogeneous sampling
21. subject's responses to being studied
reactivity
22. methods to keep study conditions constant during the study
control
23. consideration of whether the study is possible and practical
feasibility
24. vehicle for testing hypotheses or answering research questions
research design
25. process to ensure every subject has an equal chance of being selected
random sampling
26. degree to which a research study is consistent within itself
internal validity
27. degree to which the study's results can be applied to the larger population
external validity
28. all parts of a study follow logically from the problem statement
accuracy
29. for each scenario below, pick: history, instrumentation, maturation, mortality, selection bias, or testing
30. researcher tested effectiveness of a new method of teaching drug dosage and solution calculations to nursing students, using a standardized calculation examination at the beginning, midpoint, and end of a 2 week course
testing
31. in a study of the results of a HTN teaching program conducted at a seniors centre, the BP measurements taken by volunteers using their personal equipment were compared before and after the program
instrumentation
32. a major increase in cigarette taxes occurs during a 1 year follow up study of the impact of a smoking cessation program
history
33. the smoking cessation rates of an experimental group of volunteers for a smoking cessation program were compared with the results of a control group of people who wanted to quit on their own without a special program
selection bias
34. 30% of subjects dropped out of an experimental study on the effect of a job training program on employment for homeless women. More than 90% of the dropouts were single homeless women with at least 2 preschool children, whereas the majority of subjects who successfully completed the program had no preschool children
mortality
35. nurses on a maternity unit want to study the effect of a new hospital-based program on mothers' confidence in caring for their newborn infants. The researchers mail a survey to each mother 1 month after her d/c from hospital

maturation
