Psyc312 Exam1

2010-06-03 09:29:02
psychology statistics

Stats 312 Exam 1

  1. Where do beliefs come from?
    • Tenacity
    • Authority
    • Experience
    • Empirical Evidence
  2. Empirical Evidence
    • Systematic or formal observation to obtain objective, reliable, valid and quantitative measures of the matter of interest.
    • By itself, empiricism CANNOT explain WHY
  3. Syllogisms
    Example: All As are Bs. C is an A, therefore C is a B.

    A syllogism is only valid if it obeys the rules of logic. Valid logic enables very strong truth claims GIVEN the empirical validity of the premises.
  4. Common logical reasoning errors
    • belief bias
    • conversion errors
    • confirming evidence bias
  5. Science combines Rationalism and Empiricism. Define.
    Rationalism: used to develop theories and hypotheses and ways to test them.

    Empiricism: the means of conducting the tests.
  6. What are the 4 features of Scientific Method?
    • 1. Objectivity
    • 2. Replication
    • 3. Self-Correction
    • 4. Control
  7. Define Control as related to the Scientific Method
    • Two meanings:
    • 1. Directly manipulating the variable of interest
    • 2. Controlling for unwanted variables that could influence the results.

    This is essential to draw conclusions about cause and effect.
  8. List and define the two "other" variables.
    Subject Variables: individual differences in participants such as age, gender, IQ, ethnicity.

    Quasi-independent variables: variables outside of the participant that the researcher cannot manipulate, such as weather, laws, geographical location.
  9. What is an Extraneous Variable?
    Other unwanted, uncontrolled factors that could influence the dependent variable; such confounds invalidate the experiment.
  10. Statistics
    • Set of procedures for reducing large masses of data to manageable proportions in order to draw conclusions from those data.
    • Two types: Descriptive and Inferential.
  11. Descriptive vs. Inferential Statistics
    Descriptive: Numbers that summarize a set of data.

    Inferential: Calculations that determine whether an IV has a significant effect; they allow us to draw inferences.
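    As an illustrative sketch (Python's standard statistics module; the scores are made up), descriptive statistics reduce each group's raw scores to a few summary numbers, which an inferential test would then compare:

```python
import statistics

# Hypothetical raw scores for two groups (made-up data for illustration)
control = [12, 15, 11, 14, 13]
treatment = [18, 16, 19, 17, 20]

# Descriptive statistics: numbers that summarize each set of data
control_mean = statistics.mean(control)      # 13
treatment_mean = statistics.mean(treatment)  # 18
control_sd = statistics.stdev(control)

# An inferential test (e.g., a t-test) would then ask whether the
# difference between the two group means is statistically significant.
```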
  12. Population
    The complete set of events being studied. It is the entire group to which we want to generalize the results.
  13. Parameters
    Numerical values summarizing population data.
  14. Sample
    The subgroup of the population that we collect data from.
  15. Random Sample / Random Selection
    A sample in which each member of the population has an equal chance of inclusion in the study.
  16. Convenience Sample
    Participants selected for their accessibility or ease of testing.
  17. Random Assignment
    Everyone in the study has an equal chance of being assigned to each of the study groups.
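    A minimal sketch of random assignment (hypothetical participant IDs, using Python's random module): shuffling the full list gives every participant an equal chance of landing in either group:

```python
import random

participants = list(range(1, 21))  # 20 hypothetical participant IDs

random.shuffle(participants)       # every ordering is equally likely
group_a = participants[:10]        # first half assigned to group A
group_b = participants[10:]        # second half assigned to group B
```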
  18. What are the 2 types of data?
    Continuous data (aka "measurement" or "quantitative" data). A mean can be determined.

    Categorical (aka "frequency" or "count" data). No mean is possible.
  19. List the 4 scales of measurement.
    • Nominal
    • Ordinal
    • Interval
    • Ratio

    (they go from simple to complex)
  20. Notation
    N = ?
    n = ?
    X = ?
    • N = total sample size
    • n = number of participants per group
    • X = set of scores for one variable
  21. Four principles from most to least important of the CPA
    • Respect for the Dignity of Persons
    • Responsible Caring (competence)
    • Integrity in Relationships (honesty)
    • Responsibility to Society
  22. Respect for Dignity of Persons entails...
    • Privacy
    • Confidentiality
    • Informed Consent
    • No harassment or degrading comments
    • No unjust discrimination
    • Fair compensation
  23. Responsible Caring entails...
    • Protect welfare of others, avoid harm
    • Take responsibility for actions
    • Keep up to date
    • Referral for help
    • Maintain appropriate relationships
    • Pilot studies
  24. Integrity in Relationships entails...
    • No dishonesty, fraud or misrepresentation in reporting results
    • Do not suppress disconfirming evidence
    • Acknowledge limitations of findings
    • Do not deceive if not necessary
    • Debriefing
    • No coercive enticement to participate
  25. Responsibility to Society entails...
    • Contribute to discipline and state of knowledge
    • Keep informed
    • Critical self-evaluation
    • Educate & promote scientific growth of others
    • Respect for social customs, cultural expectations
    • Sensitive to needs of society when designing research (hot topics)
  26. Tri-Council Ethical principles (for all disciplines)
    • Respect for Human Dignity (cardinal principle)
    • Respect for Free and Informed Consent
    • Respect for Vulnerable Persons
    • Respect for Privacy and Confidentiality
    • Respect for Justice and Inclusiveness
    • Balance of Harms and Benefits
  27. Explain Debriefing
    Explanation of the purpose of the study and the methods used; correct misconceptions and ask whether participants have any questions

    • Education: What was the study about?
    • Dehoaxing: Describe any deceptions and why they were used
    • Desensitizing: Address any psychological discomfort
  28. Why is it important to survey the literature?
    • To determine the current state of the knowledge
    • Provide a basis for hypotheses
    • Guide you in selecting paradigm, operational definitions
  29. List the steps to beginning research.
    • Step 1: Develop a research question
    • Step 2: Survey the literature
    • Step 3: Build a hypothesis
  30. What are the characteristics of the research hypothesis?
    • Synthetic statement: is either true or false
    • Falsifiable: can be shown to be wrong
    • Can be stated in "General Implication Form" (if...then...)
    • Can be directional (more/less than) or non-directional (different from)
  31. Inductive vs. Deductive logic
    Deductive: General to specific; How we form our research hypotheses

    Inductive: Specific to general; Combining the results of several studies into a theory
  32. List 4 types of independent variables
    • physiological (manipulation of biological state)
    • experience (manipulation of amount/type of training/learning)
    • stimulus/environmental (manipulation of the environment)
    • participant (manipulation of aspects of participant)
  33. List the types of dependent variables
    • correctness
    • rate/frequency
    • degree or amount
    • latency or duration
  34. Nuisance variables
    • Unwanted variables that increase the variability of all scores within groups
    • Affects ALL groups
    • Makes the effect harder to see
  35. Confounders (aka Extraneous variables)
    • Unintended influences on the DV
    • biases result in a particular direction
    • renders findings MEANINGLESS
  36. Developing good controls for extraneous variables
    • Step 1: Randomization
    • Step 2: Elimination (of extraneous variables)
    • Step 3: Constancy (across all groups of participants)
    • Step 4: Balancing (equal distribution of extraneous variables)
  37. What is an order effect?
    • When the position in a series affects how participants respond.
    • Doesn't depend on the EVENT but on the POSITION
    • think fatigue/practice/learning
  38. What is the carryover effect?
    • When the effects of one event influence responses to the next event.
    • Depends on the EVENT not the POSITION
    • (i.e., previous drug intake)
  39. Complete Counterbalancing
    works to counteract carryover and order effects:

    • 1: Each event must be presented to each participant an equal number of times.
    • 2: Each event must occur an equal number of times at each ordinal position.
    • 3: Each event must precede and follow each of the other events an equal number of times.
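    The three rules above can be met by running an equal number of participants through every possible order of events. A sketch (illustrative events A, B, C; itertools is from Python's standard library):

```python
from itertools import permutations

events = ["A", "B", "C"]
orders = list(permutations(events))  # all 3! = 6 possible orders

# Assigning an equal number of participants to each order means:
#  - each event is presented to each participant once,
#  - each event occupies each ordinal position equally often,
#  - each event precedes and follows every other event equally often.
```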
  40. Every measure consists of two elements:
    • True score (hypothetical concept); and
    • Error (bias and random error)
  41. Observed score =
    Observed score = True Score + Error (bias + random)
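    A simulation sketch of this formula (all numbers are made up): random error averages out over many observations, but bias does not:

```python
import random

random.seed(1)  # reproducible illustration

true_score = 100  # hypothetical true score
bias = 2          # systematic (bias) error, e.g., a miscalibrated instrument

# Observed score = True score + Error (bias + random)
observed = [true_score + bias + random.gauss(0, 5) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
# mean_observed lands near 102: the random component cancels out on
# average, but the +2 bias remains no matter how many observations we take.
```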
  42. Experimenter error may be...
    • Random error: noise, temperature, time of day.
    • Bias error: experimenter characteristics or experimenter expectancies (Rosenthal)
  43. How do we control for experimenter characteristics?
    • Use standardized methods
    • Replication
  44. How do we control for experimenter expectancies?
    • Standardization
    • Objectivity
    • Single-blind research
  45. Participant Error
    Random participant error: carelessness, distraction

    Participant Bias: Demand characteristics, Good Participant effect, response bias
  46. Define demand characteristics
    Features of an experiment that inadvertently cue participants to act in a particular way.
  47. Define Good Participant Effect
    Tendency for participants to behave as they think the researcher wants them to behave.
  48. How to control for Demand Characteristics?
    • Conduct double-blind research
    • Use deception
  49. What is response bias?
    • Yea- and Nay-sayers.
    • When the context affects participant response
    • Can be a factor of the experimental setting or the questions
    • Social desirability can be an issue
  50. How to control for Response Bias?
    • Include "agree" and "disagree" items
    • Randomize question presentation.
    • Pilot testing
  51. Describe Observer Error.
    • Random observer error: carelessness, distraction
    • Observer/scorer bias: confirmatory bias

    More important to reduce observer bias (confound) than random error (nuisance).
  52. How to control for Observer Error?
    • Eliminate human observer (use mechanical measure to reduce random and bias errors)
    • Limit observer subjectivity (focus on observable behavior, standardized coding)
    • Make observer blind
  53. What is construct validity and list 4 components.
    • Does the manipulation or measure ACTUALLY represent the claimed construct?
    • Reliability
    • Content validity
    • Convergent validity
    • Discriminant or divergent validity
  54. How to establish Reliability?
    • assess random error
    • reliability is a prerequisite for validity
    • Test-retest reliability
    • Inter-rater reliability
    • Internal consistency
  55. Describe Internal Consistency
    • measure of participant random error
    • variability across items = random error
    • Index calculated using Cronbach's Alpha, split-half correlation and average inter-item correlation
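    As an illustration (made-up item scores for a hypothetical 3-item scale), Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
import statistics

def cronbach_alpha(items):
    """items: one inner list of scores per item (respondents in the same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = len(items)
    item_vars = sum(statistics.variance(scores) for scores in items)
    totals = [sum(person) for person in zip(*items)]
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (illustrative data)
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 2, 4, 2],
]
alpha = cronbach_alpha(items)  # ~0.93, above the 0.7 rule of thumb
```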
  56. What is content validity?
    • Is the measure's content relevant to the concept?
    • Does it clearly relate to the concept?
    • Does it cover all aspects of the concept?
  57. What is convergent validity?
    Does measure correlate with other indicators of the same construct?
  58. What is Discriminant Validity?
    Is the measure distinguishable from other constructs?
  59. What is Sensitivity?
    • Sensitivity is the ability of measures to detect effects.
    • "does your measure minimize the influence of error?"
    • Use measures with maximal validity and maximal reliability
    • Avoid restriction of range and all-or-nothing measures.
    • Add scale points to a rating scale
    • Pilot test measure
  60. Why conduct non-manipulation studies?
    • Naturalistic research settings
    • manipulation not possible
    • natural variation
    • prediction and selection
    • temporal change
    • comparing size of associations
  61. List the types of descriptive studies
    • Archival research
    • Observational techniques: case studies, naturalistic observation, participant observation
    • Clinical perspective
  62. What are some issues with Archival Studies?
    • Limits generalization
    • may have missing data values
    • may not be ideal to your research question
    • cannot show causation
  63. What are the differences between Clinical Perspective and Participant Observation?
    • client chooses clinician, whereas participant observer chooses others to study
    • clinicians cannot be unobtrusive or passive
    • Participant observer's goal is understanding whereas clinician's goal is helping
  64. What are some issues with Observational Techniques?
    • Reactivity: when the knowledge of being watched affects behavior, aka "The Hawthorne Effect"
    • High on external validity but low on internal validity
    • Cannot make cause-effect statements
    • objectivity
  65. What are descriptive surveys?
    Descriptive surveys seek to determine what percentage of the population has particular characteristics, beliefs or behaviors.
  66. What are analytic surveys?
    Analytic surveys seek to determine the relevant variables and how they are related.
  67. What is Cronbach's Coefficient Alpha?
    • Most common estimate of test reliability
    • measures how well a group of items measures a single, unidimensional construct
    • should be at least 0.7
  68. What is the difference between tests/inventories and surveys/questionnaires?
    • surveys/questionnaires: examine an opinion
    • tests/inventories: assess a specific attribute, characteristic or ability of the subject
  69. A good test should have VALIDITY which is established by:
    • content validity
    • concurrent validity
    • criterion validity: can the test predict future behavior?
  70. A good test should have RELIABILITY which is established by:
    • test-retest consistency
    • split half consistency
  71. Samples can be collected by:
    • Random sampling; or
    • Stratified random sampling: the population is separated into different subgroups and a random sample is taken from each
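    A sketch of stratified random sampling (hypothetical population with a "year of study" subgroup label): the population is split into strata, then a random sample is drawn from each:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population: 25 members in each of 4 year-of-study subgroups
population = [(f"p{i}", year) for year in (1, 2, 3, 4) for i in range(25)]

def stratified_sample(pop, stratum_of, n_per_stratum):
    strata = {}
    for member in pop:                   # separate population into subgroups
        strata.setdefault(stratum_of(member), []).append(member)
    sample = []
    for members in strata.values():      # random sample from each subgroup
        sample.extend(random.sample(members, n_per_stratum))
    return sample

sample = stratified_sample(population, lambda m: m[1], 5)  # 5 per year, 20 total
```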
  72. What are 3 research strategies?
    • Single-strata: select from a subgroup of the population
    • Cross-sectional: multiple subgroups at the same time
    • Longitudinal: one cohort over an extended period of time
  73. What is qualitative research?
    • An attempt to capture the complexity of human behaviour in its natural environment.
  74. What is Positivism?
    The philosophical position that stresses observable facts and seeks universal laws.
  75. What is "post-positivism"?
    The position that both quantitative and qualitative research are concerned with collecting observed information, but not with universal laws.
  76. What is Grounded Theory?
    Attempts to use qualitative methods to identify themes and build a theory.
  77. What is correlational research?
    • both a statistical technique and a research method
    • research designed to determine whether an association exists between 2 variables.