The flashcards below were created by user on FreezingBlue Flashcards.
Where do beliefs come from?
- Empirical Evidence
- Systematic or formal observation to obtain objective, reliable, valid, and quantitative measures of the matter of interest.
- By itself, empiricism CANNOT explain WHY
Example: All As are Bs. C is an A, therefore C is a B.
A syllogism is only valid if it obeys the rules of logic. Valid logic enables very strong truth claims GIVEN the empirical validity of the premises.
Common logical reasoning errors
- belief bias
- conversion errors
- confirming evidence bias
Science combines Rationalism and Empiricism. Define.
Rationalism: used to develop theories and hypotheses and ways to test them.
Empiricism: the means of conducting the tests.
What are the 4 features of Scientific Method?
- 1. Objectivity
- 2. Replication
- 3. Self-Correction
- 4. Control
Define Control as related to the Scientific Method
- Two meanings:
- 1. Directly manipulating the variable of interest
- 2. Controlling for unwanted variables that could influence the results.
This is essential to draw conclusions about cause and effect.
List and define the two "other" variables.
Subject Variables: individual differences in participants such as age, gender, IQ, ethnicity.
Quasi-independent variables: variables outside of the participant that the researcher cannot manipulate, such as weather, laws, geographical location.
What is an Extraneous Variable?
Other unwanted, uncontrolled factors that could influence the dependent variable. Confounds, which invalidate the experiment.
What are statistics?
- Set of procedures for reducing large masses of data to manageable proportions in order to draw conclusions from those data.
- Two types: Descriptive and Inferential.
Descriptive: Numbers that summarize a set of data.
Inferential: Calculations that determine whether an IV has a significant effect; they allow us to draw inferences.
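The distinction can be sketched with Python's standard library: descriptive statistics summarize each group, while an inferential statistic (here a Welch t, computed by hand) is compared against a critical value to judge whether the IV had a significant effect. The data values below are made up for the example.

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical scores for two groups (illustrative data)
group_a = [1, 2, 3, 4, 5]
group_b = [3, 4, 5, 6, 7]

# Descriptive statistics: numbers that summarize a set of data
mean_a, mean_b = mean(group_a), mean(group_b)

# Inferential statistic: Welch's t; |t| is compared to a critical
# value to decide whether the difference is significant
t = (mean_a - mean_b) / sqrt(
    variance(group_a) / len(group_a) + variance(group_b) / len(group_b)
)
```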
Define Population.
The complete set of events being studied. It is the entire group to which we want to generalize the results.
Define Parameters.
Numerical values summarizing population data.
Define Sample.
The subgroup of the population from which we collect data.
Random Sample / Random Selection
A sample in which each member of the population has an equal chance of inclusion in the study.
Convenience Sample
Participants selected for their accessibility or ease of testing.
Random Assignment
Everyone in the study has an equal chance of being assigned to each of the study groups.
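Random assignment can be sketched as a shuffle followed by a split, which gives every participant an equal chance of landing in either group. The participant labels are illustrative.

```python
import random

# Hypothetical participant pool (labels are illustrative)
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split: each participant has an equal chance
# of being assigned to either group
random.shuffle(participants)
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]
```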
What are the 2 types of data?
Continuous data (aka "measurement" or "quantitative" data). A mean can be determined.
Categorical (aka "frequency" or "count" data). No mean is possible.
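The two data types behave differently in code: a mean is meaningful for continuous data, while categorical data supports only counts. A minimal sketch with illustrative values:

```python
from collections import Counter
from statistics import mean

# Continuous ("measurement"/"quantitative") data: a mean can be determined
reaction_times_ms = [512, 498, 530, 505, 521]   # illustrative values
avg_rt = mean(reaction_times_ms)

# Categorical ("frequency"/"count") data: no mean, only frequencies
handedness = ["right", "right", "left", "right", "left"]
counts = Counter(handedness)
```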
List the 4 scales of measurement.
(from simple to complex)
- Nominal
- Ordinal
- Interval
- Ratio
N = ?
n = ?
X = ?
- N = total sample size
- n = number of participants per group
X = set of scores for one variable
List the four principles of the CPA code of ethics, from most to least important.
- Respect for the Dignity of Persons
- Responsible Caring (competence)
- Integrity in Relationships (honesty)
- Responsibility to Society
Respect for Dignity of Persons entails...
- Informed Consent
- No harassment or degrading comments
- No unjust discrimination
- Fair compensation
Responsible Caring entails...
- Protect welfare of others, avoid harm
- Take responsibility for actions
- Keep up to date
- Referral for help
- Maintain appropriate relationships
- Pilot studies
Integrity in Relationships entails...
- No dishonesty, fraud, or misrepresentation in reporting results
- Do not suppress disconfirming evidence
- Acknowledge limitations of findings
- Do not deceive if not necessary
- No coercive enticement to participate
Responsibility to Society entails...
- Contribute to discipline and state of knowledge
- Keep informed
- Critical self-evaluation
- Educate & promote scientific growth of others
- Respect for social customs, cultural expectations
- Sensitive to needs of society when designing research (hot topics)
Tri-Council Ethical principles (for all disciplines)
- Respect for Human Dignity (cardinal principle)
- Respect for Free and Informed Consent
- Respect for Vulnerable Persons
- Respect for Privacy and Confidentiality
- Respect for Justice and Inclusiveness
- Balance of Harms and Benefits
What does debriefing involve?
Explanation of the purpose of the study and the methods used; correct misconceptions; ask if there are any questions.
- Education: What was the study about?
- Dehoaxing: Describe any deceptions and explain why they were used.
- Desensitizing: Address any psychological discomfort.
Why is it important to survey the literature?
- To determine the current state of the knowledge
- Provide a basis for hypotheses
- Guide you in selecting paradigm, operational definitions
List the steps to beginning research.
- Step 1: Develop a research question
- Step 2: Survey the literature
- Step 3: Build a hypothesis
What are the characteristics of the research hypothesis?
- Synthetic statement: is either true or false
- Falsifiable: can be shown to be wrong
- Can be stated in "General Implication Form" (if...then...)
- Can be directional (more/less than) or non-directional (different from)
Inductive vs. Deductive logic
Deductive: General to specific; How we form our research hypotheses
Inductive: Specific to general; Combining the results of several studies into a theory
List 4 types of independent variables
- physiological (manipulation of biological state)
- experience (manipulation of amount/type of training/learning)
- stimulus/environmental (manipulation of the environment)
- participant (manipulation of aspects of participant)
List the types of dependent variables
- degree or amount
- latency or duration
What are nuisance variables?
- Unwanted variables that increase the variability of all scores within groups
- Affect ALL groups
- Make the effect harder to see
Confounders (aka Extraneous variables)
- Unintended influences on the DV
- Bias the results in a particular direction
- Render findings MEANINGLESS
Developing good controls for extraneous variables
- Step 1: Randomization
- Step 2: Elimination (of extraneous variables)
- Step 3: Constancy (across all groups of participants)
- Step 4: Balancing (equal distribution of extraneous variables)
What is an order effect?
- When the position in a series affects how participants respond.
- Doesn't depend on the EVENT but on the POSITION
- think fatigue/practice/learning
What is the carryover effect?
- When the effects of one event influence responses to the next event.
- Depends on the EVENT not the POSITION
- (i.e., previous drug intake)
Counterbalancing works to counteract carryover and order effects:
- 1: Each event must be presented to each participant an equal number of times.
- 2: Each event must occur an equal number of times at each session.
- 3: Each event must precede and follow each of the other events an equal number of times.
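For a small number of conditions, the three rules above are satisfied by complete counterbalancing: present every possible order of the conditions. The sketch below generates all orders for three illustrative condition labels and counts positions and adjacent pairs so the rules can be checked.

```python
from itertools import permutations
from collections import Counter

conditions = ["A", "B", "C"]          # illustrative condition labels
orders = list(permutations(conditions))  # all 3! = 6 orders, one per participant

# Rules 1 and 2: each condition appears equally often in each serial position
position_counts = Counter(
    (pos, c) for order in orders for pos, c in enumerate(order)
)

# Rule 3: each condition immediately precedes each other condition
# an equal number of times
pair_counts = Counter(
    (a, b) for order in orders for a, b in zip(order, order[1:])
)
```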
Every measure consists of two elements:
- True score (hypothetical concept); and
- Error (bias and random error)
Observed score =
Observed score = True Score + Error (bias + random)
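A minimal simulation of this equation (all numbers are hypothetical) shows why averaging many observations cancels random error but leaves bias untouched:

```python
import random

random.seed(42)  # reproducible illustration

TRUE_SCORE = 100.0   # hypothetical true score
BIAS = 5.0           # constant error, e.g. a miscalibrated instrument

# Observed score = True score + Error (bias + random)
observed = [TRUE_SCORE + BIAS + random.gauss(0, 10) for _ in range(10_000)]

# The average converges on true score + bias: random error washes out,
# but the bias remains
avg = sum(observed) / len(observed)
```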
Experimenter error may be...
- Random error: noise, temp., time of day.
- Bias error: experimenter characteristics or experimenter expectancies (Rosenthal)
How do we control for experimenter characteristics?
- Use standardized methods
How do we control for experimenter expectancies?
- Single-blind research
Describe participant error.
- Random participant error: carelessness, distraction
- Participant bias: demand characteristics, Good Participant Effect, response bias
Define demand characteristics
Features of an experiment that inadvertently cue participants to act in a particular way.
Define Good Participant Effect
Tendency for participants to behave as they think the researcher wants them to behave.
How to control for Demand Characteristics?
- Conduct double-blind research
- Use deception
What is response bias?
- Yea- and Nay-sayers.
- When the context affects participant response
- Can be a factor of the experimental setting or the questions
- Social desirability can be an issue
How to control for Response Bias?
- Include "agree" and "disagree" items
- Randomize question presentation.
- Pilot testing
Describe Observer Error.
- Random observer error: carelessness, distraction
- Observer/scorer bias: confirmatory bias
More important to reduce observer bias (confound) than random error (nuisance).
How to control for Observer Error?
- Eliminate human observer (use mechanical measure to reduce random and bias errors)
- Limit observer subjectivity (focus on observable behavior, standardized coding)
- Make observer blind
What is construct validity, and what are its components?
- Does the manipulation or measure ACTUALLY represent the claimed construct?
- Content validity
- Convergent validity
- Discriminant or divergent validity
How to establish Reliability?
- assess random error
- reliability is a prerequisite for validity
- Test-retest reliability
- Inter-rater reliability
- Internal consistency
Describe Internal Consistency
- measure of participant random error
- variability across items = random error
- Index calculated using Cronbach's Alpha, split-half correlation and average inter-item correlation
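The alpha index can be sketched directly from its item-variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using only the standard library. The helper name and data below are illustrative.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from the item-variance formula.

    `items` is a list of per-item score lists, with participants
    in the same order in every list.
    """
    k = len(items)  # number of items
    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    return k / (k - 1) * (
        1 - sum(variance(item) for item in items) / variance(totals)
    )
```

With two perfectly correlated items the items share all their variance and alpha is 1.0; less consistent items pull alpha down.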
What is content validity?
- Is the measure's content relevant to the concept?
- Does it clearly relate to the concept?
- Does it cover all aspects of the concept?
What is convergent validity?
Does measure correlate with other indicators of the same construct?
What is Discriminant Validity?
Is the measure distinguishable from other constructs?
What is Sensitivity?
- Sensitivity is the ability of measures to detect effects.
- "does your measure minimize the influence of error?"
- Use measures with maximal validity and maximal reliability
- Avoid restriction of range and all-or-nothing measures.
- Add scale points to a rating scale
- Pilot test measure
Why conduct non-manipulation studies?
- Naturalistic research settings;
- manipulation not possible
- natural variation
- prediction and selection
- temporal change
- comparing size of associations
List the types of descriptive studies
- Archival research
- Observational techniques, such as: case studies, naturalistic observation, participant observation, and the clinical perspective
What are some issues with Archival Studies?
- Limits generalization
- may have missing data values
- may not be ideal to your research question
- cannot show causation
What are the differences between Clinical Perspective and Participant Observation?
- client chooses clinician, whereas participant observer chooses others to study
- clinicians cannot be unobtrusive or passive
- Participant observer's goal is understanding whereas clinician's goal is helping
What are some issues with Observational Techniques?
- Reactivity: when the knowledge of being watched affects behavior, aka "The Hawthorne Effect"
- High on external validity but low on internal validity
- Cannot make cause-effect statements
What are descriptive surveys?
Descriptive surveys seek to determine what % of the population have particular characteristics, beliefs or behaviors.
What are analytic surveys?
Analytic surveys seek to determine the relevant variables and how they are related.
What is Cronbach's Coefficient Alpha?
- Most common estimate of test reliability
- measures how well a group of items measure one uni-dimensional construct
- should be at least 0.7
What is the difference between tests/inventories and surveys/questionnaires?
- surveys/questionnaires: examine an opinion
- tests/inventories: assess a specific attribute, characteristic or ability of the subject
A good test should have VALIDITY which is established by:
- content validity
- concurrent validity
- criterion validity: can the test predict future behavior?
A good test should have RELIABILITY which is established by:
- test-retest consistency
- split half consistency
Samples can be collected by:
- Random sampling; or
- Stratified random sampling: the population is separated into different subgroups and a random sample is taken from each
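Stratified random sampling can be sketched as grouping the population by stratum and drawing an independent random sample from each subgroup. The names and strata below are illustrative.

```python
import random
from collections import defaultdict

random.seed(1)  # reproducible illustration

# Hypothetical population tagged with a stratum (e.g., year of study)
population = [
    (f"S{i:03d}", random.choice(["1st", "2nd", "3rd"])) for i in range(300)
]

# Separate the population into subgroups...
strata = defaultdict(list)
for person, stratum in population:
    strata[stratum].append(person)

# ...then take a random sample from each subgroup
sample = {s: random.sample(members, 10) for s, members in strata.items()}
```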
What are 3 research strategies?
- Single-strata: select from a subgroup of the population
- Cross-sectional: multiple subgroups at the same time
- Longitudinal: one cohort over an extended period of time
What is qualitative research?
- An attempt to capture the complexity of human behaviour in its natural environment.
What is Positivism?
philosophical position that stresses observable facts and seeks universal laws
What is "post-positivism"?
Both quantitative and qualitative research are concerned with collecting observed information, but NOT with universal laws.
What is Grounded Theory?
Attempts to use qualitative methods to identify themes and build a theory.
What is correlational research?
- both a statistical technique and a research method
- research designed to determine whether an association exists between 2 variables