The flashcards below were created by user faulkebr on FreezingBlue Flashcards.

Most important reasons for doing research:
 1. To solve practical problems → Applied Research
 2. To satisfy curiosity and gain scientific understanding → Basic Research

Reasons for doing research:
To solve practical problems  Applied Research
 Motivation: SPECIFIC need
 Purpose: knowledge applied to a very specific situation
 Application: narrow

Reasons for doing Research
To satisfy curiosity and gain scientific understanding: Basic Research
 Motivation: curiosity, desire for explanation
 Purpose: basic principles
 Application: broad

Research Hypothesis
Version 1: Somewhat abstract
 1. Theoretical constructs (concepts or conceptual variables, likely not directly observable)
 2. State (make an educated guess):
 a. a relationship between theoretical constructs, OR
 b. that one theoretical construct causes another
 Ex: violent TV content → aggression
 Dreaming in color → creativity
 Bumper stickers → road rage

Research Hypothesis
Version 2: (More specific and testable):
convert abstract concepts (theoretical constructs) into specific (testable) variables

Variable 
has a range of values (levels)

Force =
mass × acceleration

Weight =
mass × gravitational field strength

Canadian dollar =
1.18 × US dollar

Fine motor skills are associated with:
Gender

Quantitative
continuous numerical scale


Kinds of Variables:
Experiments (cause)
 Independent Variable
 Dependent Variable
 Control Variable

Independent Variable
The experimenter (E) changes (manipulates) it (possible cause)
First in time: independent of the results of the study.

A level of a variable 
one value of an I.V.
An independent variable has at least 2 levels of a treatment. If not, it is NOT a variable.

Dependent Variable
Measured by the experimenter (E) (possible effect)
Second in time: depends on I.V.

Control Variable
could be independent variable but held constant (not the same as control group or condition)

Kinds of variables:
Nonexperiments (no manipulation)
(relationships)
 Predictor variable
 Outcome or Criterion variable

Predictor variable
 No manipulation
 Possible cause; first in time

Outcome or Criterion Variable
Possible EFFECT; second in time

I.V. before ___
P.V. before ___
D.V.; C.V.

Comparing effects of different amounts of variable; one amount for each subject;
Between subjects comparison

Comparing effects of different amounts of variable; all amounts for each subject:
within subjects comparison

Operational Definitions:
Exact description of procedure used to generate independent variable: how to produce each level of the independent variable
Exact description of how to "obtain" dependent variable: how to measure or recognize a particular thing or characteristic

Operational definitions for independent variables:
Statement describing what to do (produce, create, construct, generate) different amounts or levels of the variable
Example: fear vs no fear / frown vs no frown (categories)

operational definitions for dependent variables (& predictor and outcome variables):
statement of what to do to measure (or how to measure) a particular thing or characteristic
ex: fear; smile vs frown; neuroticism

O.D. for dependent variables must be:
 Clear, precise, objective...so repeatable by others
 Captures at least some part of the concept you are trying to measure (valid)
 Can be done consistently (reliably); clarifies the meaning of the concept
Practical

Research Hypothesis: Watching violent cartoons increases aggression in children
I.V.?
D.V.?
I.V. → cartoons with violence vs. cartoons without violence (describe criteria for violence)
D.V. → number of times child hits the Bobo doll (criteria for a hit?)

Purposes of an operational definition:
 1. Public verification
 2. Blueprint for the experiment
 (e.g., fear causes thigmotaxis: agreed-upon methodology, systematic observation)
 3. Evaluation of research
 (captures the ideas the researcher claims to test; quality of research)

examples of established relationships
 gas, pressure, temperature
 crowds and helping
 noise and performance
 weight loss and activity

Good theory if:
it explains a variety of facts

If it's not a risky prediction, then:
poor predictive power

Testing research hypotheses and theories:
Develop a number of different hypotheses/theories
design and conduct "critical tests" of each
Gather (1) falsifying and (2) confirming (supporting) evidence

Theory/Hypothesis/Explanation:
The one with lots of supporting evidence and little or no falsifying evidence is closest to the truth
(at the moment)

Induction:
Collect specific observations to infer general principles

Inductions are not:
logically valid

Research hypotheses can be written in _____ ____ format.
"if-then"

research hypothesis in "if-then" format
If the hypothesis that the full moon increases aggression is true, then there should be more fights in bars around town on nights of full moons. (antecedent: the hypothesis is true)
consequent?
Consequent: there should be more fights

Fallacy of affirming the consequent
 1. If P, then Q
 2. Q
 3. Therefore P
 If Smith is a mother, then she is female.
 Smith is female.
 Therefore Smith is a mother.

Results NEVER prove a hypothesis to be correct
(fallacy of affirming the consequent)
It is always possible that new data will be collected.

Modus Tollens Argument
Logically valid
 If P, then Q
 Not Q
 Therefore not P
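The two argument forms above can be checked mechanically by enumerating every truth assignment; a form is valid only if no assignment makes all premises true while the conclusion is false. A sketch in Python (the function name is my own):

```python
from itertools import product

def is_valid(premises, conclusion):
    """Valid iff no truth assignment makes every premise true
    while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False
    return True

# Modus tollens: if P then Q; not Q; therefore not P
modus_tollens = is_valid(
    [lambda p, q: (not p) or q, lambda p, q: not q],
    lambda p, q: not p)

# Affirming the consequent: if P then Q; Q; therefore P
affirming = is_valid(
    [lambda p, q: (not p) or q, lambda p, q: q],
    lambda p, q: p)

print(modus_tollens, affirming)  # True False
```

The counterexample for affirming the consequent is P false, Q true: both premises hold, yet the conclusion P fails (exactly the Smith case above).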

Measurement
the act of characterizing observations

Measurement
1. Develop an operational definition (O.D.): clear procedures for measuring or classifying observations and producing variables; publicly verifiable, reliable
a.) Decide in advance what you are going to measure (esp. important in naturalistic observation)
b.) Decide how to assign numbers (or categories) to observations

Confirmation Bias:
the tendency to notice and look for evidence that confirms expectations

Measurement:
2.
Collect data

Measurement
3.
Summarize data
Data from individuals are not representative
a.) Measures of central tendency
 Mode
 Median
 Arithmetic mean
 Geometric mean

Different Indices of Variability
All measures of variability show how much scores deviate from a standard, baseline, or reference point.

Different indices of variability
Variance or Mean Square (MS):
the average of the squared distances each score is from the mean

How do you change to a measure that is similar to the average distance a set of scores is from the mean?
Take the square root: standard deviation

Finding variance:
 1. find mean
 2. subtract mean from each score
 3. square resulting number
 4. add together
 5. divide by N
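The five steps above, plus the square-root step that gives the standard deviation, can be sketched in Python (dividing by N as listed, i.e. the population variance; the sample scores are made up):

```python
import math

def variance(scores):
    """Population variance: average squared distance from the mean."""
    n = len(scores)
    mean = sum(scores) / n                   # 1. find the mean
    deviations = [x - mean for x in scores]  # 2. subtract mean from each score
    squared = [d ** 2 for d in deviations]   # 3. square each result
    return sum(squared) / n                  # 4-5. add together, divide by N

def standard_deviation(scores):
    """Square root of the variance."""
    return math.sqrt(variance(scores))

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # mean = 5
print(variance(scores), standard_deviation(scores))  # 4.0 2.0
```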

Standard Deviation
proportional to the average distance a set of scores is from the mean

Characteristics of measurement:
 Sensitivity
 Reliability
 validity of a test/measurement

Sensitivity of test/measure
Sensitivity:
increases with the number of possible values that can be consistently used.
 differentiate
 discriminate
 distinguish

Reliability of a test/measure
stability or consistency, sometimes over time; repeatable, reproducible
Precise in the sense that random, UNSYSTEMATIC errors of measurement are minimal/nonexistent

Reliability of a test/measure
Classic approach to reliability:
Obtained Score = true score + unsystematic measurement error
(look at many scores from many individuals)

Unsystematic measurement error influences:
variability of scores → reduces precision

Unsystematic measurement errors, being random, should:
balance out.

How do you determine true score?
use the arithmetic mean

If you use the arithmetic mean, the average of the errors should:
equal 0
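A quick simulation of the classic model (Obtained score = true score + unsystematic error) shows why averaging works; the true score and error spread here are made-up values:

```python
import random

random.seed(42)       # reproducible run
true_score = 100      # assumed true score
error_sd = 5          # assumed spread of unsystematic (random) error

# Obtained score = true score + unsystematic measurement error
obtained = [true_score + random.gauss(0, error_sd) for _ in range(100_000)]

mean_obtained = sum(obtained) / len(obtained)
# Random errors balance out, so the mean lands close to the true score
print(round(mean_obtained, 1))
```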

Systematic measurement variance is:
"true" variance

Reliability =
 Systematic variance / (Systematic variance + error variance)
 = Systematic variance / Total variance
 (yields a number between 0 and 1)
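The ratio above, as a minimal sketch with made-up variance components:

```python
def reliability(systematic_variance, error_variance):
    """Reliability = systematic ("true") variance / total variance."""
    total_variance = systematic_variance + error_variance
    return systematic_variance / total_variance

# e.g. 8 units of true variance, 2 of error variance
print(reliability(8.0, 2.0))  # 0.8 -- always between 0 and 1
```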

Test-Retest Reliability
Consistency of a measure from one time to the next

Systematic vs. unsystematic (random) measurement error...
 Unsystematic errors → cancel out
 Systematic errors → do NOT cancel out
 Unsystematic errors → influence variability
 Systematic errors → influence mean (poor validity)

Sources that influence test-retest reliability:
 Time: influenced by time between tests
 Change
 Carry over

Parallel Forms Reliability
Consistency of the results of a test constructed in the same way and from the same content area.

Inter-item (internal consistency) Reliability:
consistency of results across items within a test designed to measure the same construct.

1. Average inter-item correlation
2. Split-half reliability

Coefficient alpha:
average of all possible split-half reliabilities
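Coefficient (Cronbach's) alpha is usually computed directly from item and total-score variances; the standard formula below is equivalent to the average of all possible split-half reliabilities under certain assumptions. A sketch with invented item data:

```python
def cronbach_alpha(items):
    """items: one list of scores per test item, respondents in the same order."""
    k = len(items)      # number of items
    n = len(items[0])   # number of respondents

    def var(xs):        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Three perfectly consistent items: alpha is (approximately) 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
print(alpha)
```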

Interrater or interobserver reliability
degree to which different raters or observers give consistent estimates of the same phenomenon

Validity of a test/measure
A measure or test measures what:
 it is supposed to
 it is designed to
 purports to measure

Content validity of test/measure
extent to which it uses items or content representative of the area (concept) you are trying to measure

Content validity of test/measure
Starts with:
Construction of the test (theory and research)
 1. Items truly from the content area
 2. Items are a representative sample of the content area

Criterion Validity of test/measure
Degree to which a test correlates with some direct and independent measure of what you are trying to measure.
 IQ
 boss's ratings of salesmanship
 weights? crimes?

Direct (usually behavior) and independent measure:
a single criterion/operational definition

Predictive validity
Future

Concurrent Validity 
Same time

Construct validity of a test/measure
Extent to which a test or measure can be shown to measure a particular theoretical construct (conceptual variable): an unobservable, abstract trait or feature
 1. Test produces "numbers" distinct from those produced by a measure of another construct
 2. Based on accumulated evidence

Construct validity of test/measure
1. Define clearly:
the characteristic or trait (construct) to be measured, theoretical relationships, etc.
Sensation seeking: a trait describing the tendency to seek novel, varied, complex, and intense sensations and experiences, and the willingness to take risks for the sake of the experience

Construct validity of a test/measure
2.
Correlate test with a variety of measures that should be positively, negatively, or not correlated with characteristic (construct)

Construct validity of a test/measure
3.
Examine the pattern of results using a diverse body of evidence.
 Convergent validity
 Discriminant Validity
 Criterion Validity

Convergent validity correlations
strong correlation between test and conceptually similar measures

Discriminant Validity Correlations
low correlation between test and measures of different theoretical constructs

Criterion Validity correlations
strong correlation between test and a direct and independent measure (ex. behavior)

Often reliability means:
that the test is correlated with itself; reproducible

Campbell and Stanley: two criteria regarding experimentation
 internal validity
 external validity

Internal Validity:
degree to which a study measures the effect of the hypothesized cause or independent variable
Did different levels of the treatment (IV) cause the change in the outcome(DV)?

External Validity:
Degree to which the findings can be generalized to other subjects and to other situations (settings, levels of variables, ways of measurement, etc.)

Response Acquiescence:
yea-saying (or the opposite: response deviation, nay-saying)

Types of external validity
 ecological validity
 population validity

Population validity
the degree to which sample scores can be generalized to the target population

False Consensus Effect
the tendency to overestimate the extent to which other people share our behaviors, attitudes, and beliefs

Construct Validity
the degree to which the independent and dependent variables accurately reflect or measure what they are intended to.

External Validity
the extent to which one can generalize from the research setting and participant population to other settings and populations

Internal Validity
refers to whether one can make causal statements about the relationship between variables

Reliability
refers to the consistency of behavioral measures

Split-half Reliability
involves dividing the test items into 2 arbitrary groups and correlating the scores obtained in the 2 halves of the test
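A sketch of the procedure: split the items into two arbitrary halves (odd- vs. even-numbered here), score each half per respondent, and correlate the two half-test scores. The response data are invented:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(responses):
    """responses: one list of item scores per respondent."""
    half_a = [sum(r[0::2]) for r in responses]  # odd-numbered items
    half_b = [sum(r[1::2]) for r in responses]  # even-numbered items
    return pearson_r(half_a, half_b)

responses = [[5, 4, 5, 4], [2, 3, 2, 3], [4, 4, 5, 5], [1, 2, 1, 1]]
r = split_half_reliability(responses)
print(round(r, 2))
```

In practice the half-test correlation is usually stepped up to full-test length with the Spearman-Brown formula, 2r / (1 + r), since each half is only half as long as the real test.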

Stratified Sample
divides the population into smaller units (strata) and samples from each

Power
the ability of a statistical test to detect effects that are actually present

MetaAnalysis
relatively objective technique for summarizing across many studies investigating a single topic
example: lunar-lunacy hypothesis; full moon makes for more aggression

Psychophysical Scaling
scaling of concepts such as brightness

Psychometric Scaling
applies when concepts, such as depression, are measured but usually do not have clearly specified inputs

Weber's Law states:
For a particular sensory modality, the size of the difference threshold relative to the standard stimulus is a constant.
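In symbols, ΔI / I = k: the just-noticeable difference ΔI grows in proportion to the standard stimulus I. A sketch with a made-up Weber fraction:

```python
def difference_threshold(standard, weber_fraction):
    """Weber's Law: the just-noticeable difference is a constant
    fraction of the standard stimulus; delta_I = k * I."""
    return weber_fraction * standard

# hypothetical Weber fraction of 0.02 (2%), e.g. for lifted weight:
# a 100-unit standard needs ~2 units of change to be noticed,
# a 500-unit standard needs ~10 units
print(difference_threshold(100, 0.02), difference_threshold(500, 0.02))
```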

Summated Rating Scale
A popular way of assessing psychological traits that do not seem to lie on a known physical scale.
Provides a score for a psychometric property of a person that derives from how that person responds to several statements about a topic that are clearly favorable or unfavorable

Standard Error of the Mean
is the standard deviation of a distribution of sample means
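Equivalently, SEM = SD / √n. A sketch using the population SD and a made-up set of scores:

```python
import math

def standard_error_of_mean(scores):
    """Standard deviation of the distribution of sample means: sd / sqrt(n)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)  # population SD
    return sd / math.sqrt(n)

sem = standard_error_of_mean([2, 4, 4, 4, 5, 5, 7, 9])  # sd = 2.0, n = 8
print(round(sem, 3))
```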

