# Research Methods

## Reasons for Doing Research

- **Most important reasons for doing research:** (1) to solve practical problems (applied research); (2) to satisfy curiosity and gain scientific understanding (basic research).
- **Applied research** (to solve practical problems). Motivation: a specific need. Purpose: knowledge applied to a very specific situation. Application: narrow.
- **Basic research** (to satisfy curiosity and gain scientific understanding). Motivation: curiosity and the desire for explanation. Purpose: basic principles. Application: broad.

## Research Hypotheses

- **Research hypothesis, version 1 (somewhat abstract):** (1) involves theoretical constructs (concepts or conceptual variables that are likely not directly observable); (2) states (makes an educated guess about) either (a) a relationship between theoretical constructs or (b) that one theoretical construct causes another. Examples: violent TV content and aggression; dreaming in color and creativity; bumper stickers and road rage.
- **Research hypothesis, version 2 (more specific and testable):** converts the abstract concepts (theoretical constructs) into specific, testable variables.

## Variables

- **Variable:** has a range of values (levels). Examples: force = mass x acceleration; weight = mass x gravitational field strength; Canadian dollar = 1.18 x US dollar; fine motor skill is associated with gender. Variables can be quantitative (continuous, on a numerical scale) or categorical (discrete).
- **Kinds of variables in experiments (cause):** independent variable, dependent variable, control variable.
- **Independent variable (IV):** changed (manipulated) by the experimenter; the possible cause; first in time; independent of the results of the study. A level of a variable is one value of the IV. An independent variable must have at least two levels (treatments); otherwise it is not a variable.
- **Dependent variable (DV):** measured by the experimenter; the possible effect; second in time; depends on the IV.
- **Control variable:** could be an independent variable but is held constant (not the same as a control group or condition).
- **Kinds of variables in non-experiments (no manipulation; relationships):** predictor variable and outcome (criterion) variable.
- **Predictor variable:** no manipulation; the possible cause; first in time.
- **Outcome (criterion) variable:** the possible effect; second in time.
- **Temporal order:** the IV comes before the DV; the predictor variable comes before the criterion variable.
- **Between-subjects comparison:** comparing the effects of different amounts of a variable, with one amount for each subject.
- **Within-subjects comparison:** comparing the effects of different amounts of a variable, with all amounts for each subject.

## Operational Definitions

- **Operational definition:** an exact description of the procedure used to generate the independent variable (how to produce each level of the IV), and an exact description of how to "obtain" the dependent variable (how to measure or recognize a particular thing or characteristic).
- **Operational definitions for independent variables:** a statement describing what to do to produce (create, construct, generate) the different amounts or levels of the variable. Example: fear vs. no fear; frown vs. no frown (categories).
- **Operational definitions for dependent variables (and for predictor and outcome variables):** a statement of what to do to measure (or how to measure) a particular thing or characteristic. Examples: fear; smile vs. frown; neuroticism.
- **An operational definition for a dependent variable must be:** clear, precise, and objective, so it is repeatable by others; it must capture at least some part of the concept you are trying to measure (valid); and it must be usable consistently (reliably). It clarifies the meaning of the concept.
- **Practical research hypothesis:** Watching violent cartoons increases aggression in children. IV? DV? The IV is cartoons with violence vs. cartoons without violence (describe the criteria for violence); the DV is the number of times a child hits a Bobo doll (what counts as a hit?). See the sketch at the end of this section.
- **Purposes of an operational definition:** (1) public verification; (2) a blueprint for the experiment (e.g., "fear causes thigmotaxis"): an agreed-upon methodology and systematic observation; (3) evaluation of research: it captures the ideas the researcher claims to test and signals the quality of the research.
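One way to picture the cartoon example is as a tiny data layout. The sketch below is a hypothetical Python illustration, not part of the card set: the IV (cartoon condition) has two levels, the DV is the operationally defined count of Bobo-doll hits, each child contributes one score (a between-subjects comparison), and all numbers are invented.

```python
# Hypothetical sketch of the violent-cartoon example as a two-level,
# between-subjects design. The IV (cartoon condition) has two levels;
# the DV is the count of Bobo-doll hits per child. Numbers are invented.
from statistics import mean

bobo_hits = {
    "violent_cartoon":    [7, 5, 9, 6, 8],   # one DV score per child
    "nonviolent_cartoon": [2, 3, 1, 4, 2],
}

for level, scores in bobo_hits.items():
    print(f"{level}: mean hits = {mean(scores):.1f}")
```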
## Theories and Hypothesis Testing

- **Examples of established relationships:** gas pressure and temperature; crowds and helping; noise and performance; weight loss and activity.
- **A theory is good if** it explains a variety of facts. If it does not make risky predictions, it has poor predictive power.
- **Testing research hypotheses and theories:** develop a number of different hypotheses/theories; design and conduct "critical tests" of each; gather (1) falsifying and (2) confirming (supporting) evidence. The theory, hypothesis, or explanation with lots of supporting evidence and little or no falsifying evidence is closest to the truth (at the moment).
- **Induction:** collecting specific observations in order to infer general principles. Inductions are not logically valid.
- **A research hypothesis can be written in "if-then" format.** Example: If the hypothesis that the full moon increases aggression is true (antecedent), then there should be more fights in bars around town on nights of a full moon (consequent). The consequent here: there were more fights.
- **Fallacy of affirming the consequent:** (1) If P, then Q. (2) Q. (3) Therefore P. Example: If Smith is a mother, then she is female. Smith is female. Therefore Smith is a mother. Results NEVER prove a hypothesis to be correct (that would be the fallacy of affirming the consequent); it is always possible that new data will be collected.
- **Modus tollens argument (logically valid):** If P, then Q. Not Q. Therefore not P.

## Measurement

- **Measurement:** the act of characterizing observations.
- **Measurement step 1:** develop operational definitions: clear procedures for measuring or classifying observations and producing variables that are publicly verifiable and reliable. (a) Decide in advance what you are going to measure (especially important in naturalistic observation). (b) Decide how to assign numbers (or categories) to observations.
- **Confirmation bias:** the tendency to notice and look for evidence that confirms expectations.
- **Measurement step 2:** collect data.
- **Measurement step 3:** summarize the data; data from single individuals are not representative. Measures of central tendency: mode, median, arithmetic mean, geometric mean.
- **Indices of variability:** all measures of variability show how far scores fall from a standard, baseline, or reference point.
- **Variance, or mean square (MS):** the average of the squared distances of each score from the mean.
- **How do you change to a measure that is similar to the average distance of a set of scores from the mean?** Take the square root: the standard deviation.
- **Finding the variance:** (1) find the mean; (2) subtract the mean from each score; (3) square each resulting number; (4) add them together; (5) divide by N. See the sketch below.
- **Standard deviation:** proportional to the average distance of a set of scores from the mean.
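The sketch below is a minimal Python illustration of those steps, using invented scores: it computes the variance by averaging the squared deviations from the mean (dividing by N, as the card describes) and then takes the square root to get the standard deviation.

```python
# Variance and standard deviation, following the card's steps. Scores are invented.
from math import sqrt

scores = [4, 7, 6, 3, 5]

mean = sum(scores) / len(scores)                  # step 1: find the mean
squared_devs = [(x - mean) ** 2 for x in scores]  # steps 2-3: deviations, squared
variance = sum(squared_devs) / len(scores)        # steps 4-5: sum, then divide by N
std_dev = sqrt(variance)                          # square root gives the SD

print(f"mean = {mean}, variance = {variance}, SD = {std_dev:.2f}")
```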
## Sensitivity and Reliability

- **Characteristics of measurement:** sensitivity, reliability, and validity of a test/measure.
- **Sensitivity of a test/measure:** sensitivity increases with the number of possible values that can be used consistently to differentiate, discriminate, and distinguish.
- **Reliability of a test/measure:** stability or consistency, sometimes over time; repeatable and reproducible; precise in the sense that random, unsystematic errors of measurement are minimal or nonexistent.
- **Classic approach to reliability:** obtained score = true score + unsystematic measurement error (look at many scores from many individuals).
- **Unsystematic measurement error influences** the variability of scores and reduces precision. Because unsystematic errors are random, they should balance out.
- **How do you determine the true score?** Use the arithmetic mean; if you use the arithmetic mean, the errors should average out to 0.
- **Systematic variance is** "true" variance.
- **Reliability** = systematic (true) variance / total variance = systematic variance / (systematic variance + error variance). This yields a number between 0 and 1.
- **Test-retest reliability:** the consistency of a measure from one time to the next.
- **Systematic vs. unsystematic (random) measurement error:** unsystematic errors cancel out; systematic errors do not. Unsystematic errors influence variability; systematic errors influence the mean.
- **Sources that influence test-retest reliability:** time (the interval between tests), change, and carry-over.
- **Parallel-forms reliability:** the consistency of the results of tests constructed in the same way and from the same content area.
- **Interitem (internal-consistency) reliability:** the consistency of results across items within a test designed to measure the same construct. Assessed by (1) the average inter-item correlation and (2) split-half reliability.
- **Coefficient alpha:** the average of all possible split-half reliabilities (see the sketch below).
- **Inter-rater (inter-observer) reliability:** the degree to which different raters or observers give consistent estimates of the same phenomenon.
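The sketch below is a minimal Python illustration of internal-consistency reliability. It uses the standard computational formula for coefficient alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), rather than anything stated on the cards, and the 4-item, 5-respondent data are invented.

```python
# Coefficient (Cronbach's) alpha for a small, invented set of item responses.
from statistics import pvariance

responses = [        # rows = respondents, columns = items on the same scale
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])                                   # number of items
items = list(zip(*responses))                           # scores grouped by item
item_var_sum = sum(pvariance(item) for item in items)   # sum of item variances
total_var = pvariance([sum(r) for r in responses])      # variance of total scores

alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
print(f"coefficient alpha = {alpha:.2f}")
```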
## Validity

- **Validity of a test/measure:** the test or measure measures what it is supposed to measure, is designed to measure, or purports to measure.
- **Content validity of a test/measure:** the extent to which it uses items or content representative of the area (concept) you are trying to measure. It starts with the construction of the test, guided by theory and research: (1) the items are truly from the content area; (2) the items are a representative sample of the content area.
- **Criterion validity of a test/measure:** the degree to which the test correlates with some direct and independent measure of what you are trying to measure (e.g., IQ, a boss's ratings of salesmanship, weights, crimes). The direct (usually behavioral) and independent measure is a single criterion/operational definition. Predictive validity: the criterion is measured in the future. Concurrent validity: the criterion is measured at the same time.
- **Construct validity of a test/measure:** the extent to which a test or measure can be shown to measure a particular theoretical construct, a conceptual variable (an unobservable, abstract trait or feature). (1) The test produces "numbers" distinct from those produced by a measure of another construct; (2) it rests on accumulated evidence.
- **Construct validity, step 1:** clearly define the characteristic or trait (construct) to be measured, including its theoretical relationships. Example: sensation seeking, a trait describing the tendency to seek novel, varied, complex, and intense sensations and experiences, and the willingness to take risks for the sake of the experience.
- **Construct validity, step 2:** correlate the test with a variety of measures that should be positively, negatively, or not correlated with the characteristic (construct).
- **Construct validity, step 3:** examine the pattern of results using a diverse body of evidence: convergent validity, discriminant validity, criterion validity.
- **Convergent validity:** strong correlations between the test and conceptually similar measures.
- **Discriminant validity:** low correlations between the test and measures of different theoretical constructs.
- **Criterion validity (correlations):** strong correlations between the test and a direct and independent measure (e.g., behavior).
- **Reliability often means** that a test is correlated with itself; it is reproducible.
- **Campbell and Stanley's two criteria regarding experimentation:** internal validity and external validity.
- **Internal validity:** the degree to which a study measures the effect of the hypothesized cause, the independent variable. Did the different levels of the treatment (IV) cause the change in the outcome (DV)?
- **External validity:** the degree to which the findings can be generalized to other subjects and to other situations (settings, levels of variables, ways of measurement, etc.).
- **Response acquiescence:** yea-saying (its opposite, response deviation, is nay-saying).
- **Types of external validity:** ecological validity and population validity.
- **Population validity:** the degree to which sample scores can be generalized to the target population.
- **False consensus effect:** the tendency to overestimate the extent to which other people share our behaviors, attitudes, and beliefs.

## Additional Terms

- **Construct validity:** the degree to which the independent and dependent variables accurately reflect or measure what they are intended to.
- **External validity:** the extent to which one can generalize from the research setting and participant population to other settings and populations.
- **Internal validity:** whether one can make causal statements about the relationship between variables.
- **Reliability:** the consistency of behavioral measures.
- **Split-half reliability:** dividing the test items into two arbitrary groups and correlating the scores obtained on the two halves of the test.
- **Stratified sample:** divides the population into smaller units (strata) before sampling.
- **Power:** the ability of a statistical test to detect effects.
- **Meta-analysis:** a relatively objective technique for summarizing results across many studies investigating a single topic. Example: the lunar-lunacy hypothesis, that a full moon leads to more aggression.
- **Psychophysical scaling:** the scaling of concepts, such as brightness, that have clearly specified physical inputs.
- **Psychometric scaling:** applies when concepts such as depression are measured but usually do not have clearly specified inputs.
- **Weber's law:** for a particular sensory modality, the size of the difference threshold relative to the standard stimulus is a constant (often written ΔI/I = k).
- **Summated rating scale:** a popular way of assessing psychological traits that do not seem to lie on a known physical scale. It provides a score for a psychometric property of a person, derived from how that person responds to several statements about a topic that are clearly favorable or unfavorable.
- **Standard error of the mean:** the standard deviation of a distribution of sample means (see the sketch below).
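The sketch below is a minimal Python illustration of the standard error of the mean: it estimates the SEM analytically as s / sqrt(n) and also approximates it directly as the standard deviation of many simulated sample means. The population parameters and sample size are invented for illustration.

```python
# Standard error of the mean (SEM): the SD of a distribution of sample means.
# Estimated two ways: analytically as s / sqrt(n), and by simulation.
import random
from statistics import mean, stdev

random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]  # invented IQ-like scores
n = 25                                                        # sample size

sample = random.sample(population, n)
analytic_sem = stdev(sample) / n ** 0.5                       # s / sqrt(n)

sample_means = [mean(random.sample(population, n)) for _ in range(2_000)]
simulated_sem = stdev(sample_means)                           # SD of the sample means

print(f"analytic SEM  ~ {analytic_sem:.2f}")
print(f"simulated SEM ~ {simulated_sem:.2f}")
```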