study guide methods.txt

Card Set Information
2011-02-07 01:13:10
UF methods study guide

  1. Linking theory and methods: important questions
    What form of theory are we talking about? And is data being collected to test a theory or to build one?
  2. Theories of the middle range
    • Represent attempts to understand and explain a limited aspect of social life; operate in a limited domain and vary somewhat in their application.
    • E.g.: structural functionalism, symbolic interactionism, critical theory, etc.
  3. Grand theories according to Merton
    Offer few indications to researchers as to how they might guide or influence the collection of empirical evidence (i.e., they are hard to test and very abstract), and are therefore of limited usefulness in social research.
  4. What was Merton's goal?
    To clarify what is meant by theory when social scientists write about the relationship between theory and research.
  5. Naïve empiricism
    Research that has no obvious connection with theory and is often prone to being dismissed
  6. Deductive theory
    Represents the commonest view of the nature of the relationship between theory and social research
  7. Inductive theory
    Moves in the opposite direction from deduction: the researcher infers the implications of his or her findings for the theory that prompted the whole exercise.
  8. An epistemological issue concerns what question?
    What is regarded as acceptable knowledge in a discipline, and should the social world be studied according to the same principles, procedures, and ethos as the natural sciences?
  9. Positivism
    An epistemological position that advocates the application of the methods of the natural sciences to the study of social reality and beyond
  10. What does positivism entail?
  11. What two features does realism share with positivism?
    • A belief that the natural and the social sciences can and should apply the same kinds of approach to the collection of data and to explanation.
    • A commitment to the view that there is an external reality to which scientists direct their attention.
  12. Empirical realism
    Through the use of appropriate methods, reality can be understood
  13. Critical realism
    A specific form of realism whose manifesto is to recognize the reality of the natural order and the events and discourses of the social world; it holds that "we will only be able to understand - and so change - the social world if we identify the structures at work that generate those events and discourses."
  14. Interpretivism
    A term given to a contrasting epistemology to positivism; subsumes the views of writers who have been critical of the application of the scientific model to the study of the social world and who have been influenced by different intellectual models.
  15. Hermeneutics
    Concerned with the empathic understanding of human action rather than with the forces that are deemed to act on it.
  16. Phenomenology
    A philosophy concerned with the question of how individuals make sense of the world around them and how, in particular, the philosopher should bracket out preconceptions in his or her grasp of that world; a main intellectual tradition responsible for the anti-positivist position.
  17. Interpretivism
    An alternative to the positivist orthodoxy; a view that a strategy is required that respects the differences between people and the objects of the natural sciences and that requires the scientist to grasp the subjective meaning of social action. Hermeneutic-phenomenological tradition -> symbolic interactionism.
  18. Ontological considerations
    • Concerned with the nature of social entities.
    • The question of whether social entities can and should be considered objective entities that have a reality external to social actors (objectivism) or whether they can and should be considered social constructions built up from the perceptions and actions of social actors (constructionism).
  19. Objectivism
    An ontological position that implies that social phenomena confront us as external facts that are beyond our reach or influence.
  20. Constructionism
    -an alternative to objectivism; an ontological position that asserts that social phenomena and their meanings are continually being accomplished by social actors. Social phenomena and their meanings are in constant revision. The researcher always presents a specific version of social reality rather than one that can be definitive.
  21. Quantitative vs qualitative research regarding theory
    • Quantitative: theory precedes research
    • Qualitative: research precedes theory
  22. Three of the most prominent criteria for evaluation of social research
    -Reliability, Replication and validity
  23. Reliability
    Are the results of a study repeatable?
  24. Replication
    Quite rare in social research; however, replication is highly regarded and helps establish reliability.
  25. Validity
    Concerned with the integrity of the conclusions that are generated from a piece of research.
  26. Measurement validity
    Aka construct validity; whether a measure that is devised of a concept really does reflect the concept that it is supposed to be denoting.
  27. Internal validity
    • Concerned with the question of whether a conclusion that incorporates a causal relationship between two or more variables holds water. Can we be sure that X causes Y and not some other variable? Independent variable = the factor that has the causal impact.
    • Dependent variable = the effect variable.
  28. External validity
    Can the results be generalized beyond the specific research context? Can the results be applied beyond the research sample?
  29. Ecological validity
    Can the findings be applied to people's everyday, natural social settings?
  30. Naturalism
    Viewing all objects of study, whether natural or social, as belonging to the same realm, and a consequent commitment to the principles of the scientific method; being true to the nature of the phenomena being studied; a style of research that seeks to minimize the intrusion of artificial methods of data collection.
  31. Five types of design
    Experimental, cross-sectional, longitudinal, case study and comparative
  32. Experimental
    Rare in sociology but the "gold standard"; involves manipulation, with treatment and control groups; eliminates threats to internal validity (e.g., in the insurance/level-of-health example).
  33. Quasi-experimental design
    A control group is compared to two treatment groups, but with no random assignment.
  34. Cross-sectional design
    Entails the collection of data on more than one case at a single point in time, in order to collect a body of quantitative or qualitative data in connection with two or more variables, which are then examined to detect patterns of association; ambiguity about causal direction (what causes what).
  35. Longitudinal design
    Time-consuming and costly, and therefore in relatively little use; a sample is surveyed and then surveyed again on at least one further occasion.
  36. Two types of longitudinal design
    Cohort and panel
  37. Panel longitudinal design
    Repeated observations on a sample drawn from the entire population
  38. Cohort
    Repeated observations on members of one cohort (e.g., British Cohort Child Development Studies)
  39. Advantages of Longitudinal Designs
    • Partially distinguishes between age, period, and cohort effects
    • Examines individual change
  40. Disadvantages of Longitudinal Designs
    • Cost
    • Several years until data collection is complete
    • In long-term studies, early measures may not meet current measurement standards
    • Attrition
  41. Case study design
    The detailed and intensive analysis of a single case; provides rich accounts of how communities, organizations, and groups work.
  42. How are case studies useful/ how are they limited?
    • Very useful for generating hypotheses and an understanding of "how things work"
    • Difficult to assess reliability and validity
  43. When is internal validity threatened?
    In non-experimental research
  44. Systematic review
    A replicable, scientific, and transparent process that aims to minimize bias through exhaustive literature searches of published and unpublished studies and by providing an audit trail of the reviewer's decisions, procedures, and conclusions. Used so that evidence is assessed systematically rather than haphazardly; attempts to bring to the review the same rigor found in research procedures.
  45. Meta analysis
    A systematic review that includes only quantitative studies: summarizing the results of a large number of quantitative studies and conducting analytical tests to show whether or not a particular variable has an effect.
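The pooling step in a meta-analysis can be illustrated with the standard fixed-effect (inverse-variance) average. This is a minimal sketch with hypothetical effect sizes and variances, not data from any actual review:

```python
# Fixed-effect meta-analysis sketch: pool study effect sizes by weighting
# each study by the inverse of its sampling variance (hypothetical data).

def pooled_effect(effects, variances):
    """Inverse-variance weighted average of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: an effect size and its sampling variance each.
effects = [0.30, 0.10, 0.25]
variances = [0.02, 0.05, 0.04]

est, var = pooled_effect(effects, variances)
print(round(est, 3))  # estimate is pulled toward the most precise study
```

More precise (lower-variance) studies get larger weights, which is why the pooled estimate is not a simple average of the three effects.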
  46. Standard lit review
    No rules for material selection; material may be omitted or disregarded.
  47. Components of systematic review
    Explicit, replicable procedures; comprehensive and unbiased in coverage; leaves an "audit trail"
  48. Issues in Meta-Analysis
    • Requires full reporting of variables that the meta-analyst would like to use
    • Meta-analysis is not always feasible
    • Easier to find published research than unpublished
    • There may be biases in what research is included
    • Systematic reviews and meta-analyses are usually an end in themselves
  49. Referencing
    • Harvard approach to referencing is standard in sociology
    • Avoid footnotes for references or for substantive material
  50. Plagiarism
    • Passing off others' ideas or words as our own
    • Includes paraphrasing
    • Like passing counterfeit money
    • It may be accidental, but one has a lot of explaining to do
  51. The process of Quantitative research
    • Theory -> hypothesis -> research design -> devise measures of concepts -> select research site(s)
    • -> select research subjects/respondents -> administer research instruments/collect data -> process data -> analyze data -> findings/conclusions -> write up findings/conclusions
  52. Indicator
    Used to tap concepts that are less directly quantifiable; something that is devised or already exists and that is employed as though it were a measure of a concept (e.g., of job satisfaction).
  53. Advantages of multiple indicator measures
    • It is possible that a single indicator will incorrectly classify many individuals.
    • One indicator may capture only a portion of the underlying concept or be too general (e.g., job satisfaction measured by pay alone, yet there is more involved in job satisfaction than just pay).
    • You can make much finer distinctions
  54. Operational Definition
    • How we actually measure a concept
    • E.g., education, or some combination of income and education
  55. Indicator
    • Another name for the measure
    • Most common: one indicator per concept
    • Better: multiple indicators
    • Bryman uses "measure" for an indicator that is "unambiguous" (e.g., income), but that oversimplifies the measurement of income (e.g., forgotten items of income)
  56. Why Study Something We Already Know?
    • More precise statement of a relationship
    • Better measures
    • Extend what is already known (e.g., compare siblings to eliminate family effects)
    • Examine change over time
  57. Additive scale
    • A Likert scale is typically used as an additive scale
    • Better if each measure is reliable
    • Reliability means little random error
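The additive logic of a Likert scale can be sketched in a few lines. The items, responses, and the idea of a negatively worded (reverse-coded) item here are hypothetical illustrations:

```python
# Additive (Likert) scale sketch: sum a respondent's 1-5 item responses,
# reverse-coding negatively worded items first (hypothetical items).

def scale_score(responses, reverse_items=()):
    """Sum 1-5 Likert responses; reverse-code items at the given indices."""
    total = 0
    for i, r in enumerate(responses):
        # On a 5-point scale, reverse-coding maps 1<->5, 2<->4, 3<->3.
        total += (6 - r) if i in reverse_items else r
    return total

# Four job-satisfaction items; item at index 2 is negatively worded.
answers = [4, 5, 2, 4]  # 1 = strongly disagree ... 5 = strongly agree
score = scale_score(answers, reverse_items={2})
print(score)  # 4 + 5 + (6 - 2) + 4 = 17
```

Summing only makes sense when each item reliably taps the same underlying concept, which is the point of the card above.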
  58. Factor analysis
    • Throws out random error; keeps only the "true score"
    • Draws out different dimensions
  59. 5 Ways to Evaluate Measurement Validity
    1. Face validity
    2. Concurrent validity
    3. Predictive validity
    4. Construct validity
    5. Convergent validity
  60. Although each of the 5 ways to evaluate measurement validity seems reasonable, what are the associated problems?
    • Each assumes we know the correct relationship
    • In the real world, nothing is perfectly correlated
  61. Face validity
    The measure apparently reflects the content of the concept in question
  62. Concurrent validity
    The researcher employs a criterion on which cases are known to differ and that is relevant to the concept in question
  63. Predictive validity
    The researcher uses a future criterion measure rather than a contemporary one.
  64. Construct validity
    The researcher is encouraged to deduce a hypothesis from a theory that is relevant to the concept
  65. Convergent validity
    The validity of a measure can be gauged by comparison to measures of the same concept through other methods
  66. Why does validity assume reliability (why can an unreliable measure not be valid)?
    • Measure = true score + random error
    • A true score is reliable (by definition)
    • Random error is not reliable (by definition)
    • The true-score portion may be valid
    • If random error predominates, changes are not related to anything
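The "measure = true score + random error" idea can be demonstrated with a small simulation. This is a sketch with made-up numbers: an outcome driven by the true score correlates strongly with the true score itself, but weakly with a noisy (unreliable) measure of it:

```python
# Simulation sketch: random error in a measure attenuates its observed
# correlation with an outcome, so an unreliable measure cannot be valid.
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
true_score = [random.gauss(0, 1) for _ in range(2000)]
outcome = [t + random.gauss(0, 0.5) for t in true_score]        # driven by true score
noisy_measure = [t + random.gauss(0, 2.0) for t in true_score]  # unreliable indicator

r_true = pearson(true_score, outcome)
r_noisy = pearson(noisy_measure, outcome)
print(r_true > r_noisy)  # True: random error weakens the observed association
```

The true relationship is the same in both cases; only the reliability of the measure differs, which is why low reliability caps validity.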
  67. Preoccupations of Quantitative Researchers
    • Measurement
    • Causality ("explaining things")
    • Generalization
    • Replication
  68. Causality
    • We infer causation; we do not observe it
    • A similar issue to "critical realism" in Chapter 1
    • Empirical ("naïve") realism avoids causation
  69. Explain how empirical ("naïve") realism avoids causation
    • We only know what we observe
    • Humans always search for causes to make the world explicable
    • Causal thinking is a product of the human mind, not external reality
    • "Causes" are unknowable
    • Hence we should only talk about association
  70. J. S. Mill's four methods to infer causation
    • Agreement: X always present when Y occurs
    • Difference: X not present when Y does not occur
    • Joint: the first two methods together
    • Concomitant variation: the amount of X covaries with the amount of Y
  71. A more modern approach to Mill's work
    • When X is present, higher probability of Y
    • A probabilistic interpretation
    • Allows for multiple causes (e.g., Y occurs when there are enough causes present to reach a threshold)
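The threshold reading of multiple causation can be sketched directly. The cause names and the threshold value here are hypothetical illustrations, not part of Mill's account:

```python
# Sketch of the multiple-cause, threshold interpretation: the outcome Y
# occurs once enough contributing causes are present (hypothetical causes).

def y_occurs(causes_present, threshold=2):
    """Y occurs when the number of present causes reaches the threshold."""
    # True values count as 1 when summed, so this counts present causes.
    return sum(causes_present.values()) >= threshold

case = {"cause_a": True, "cause_b": False, "cause_c": True}
print(y_occurs(case))  # True: two of three causes present meets the threshold
```

Unlike Mill's deterministic methods, this framing allows Y to occur under different combinations of causes, so no single X need be necessary or sufficient.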