Research Methods Final Cards

Card Set Information

2014-12-12 00:50:07
suny geneseo research methods psych251

final exam review stuff

  1. What happens when you restrict the range of one or both of the measured variables
    It weakens the correlation
  2. What does Pearson's r measure?
    • The size of a correlation between two variables
    • (coefficient of correlation)
  3. What is the coefficient of determination?
    • For two correlated factors, the proportion of variance in one factor that can be attributed to the second factor
    • (square Pearson's r)
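A minimal sketch of the two cards above, computing Pearson's r and squaring it to get the coefficient of determination; the data values are invented for illustration:

```python
import numpy as np

# Invented example data: hours studied and exam scores
study_hours = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
exam_score = np.array([55.0, 60.0, 72.0, 78.0, 90.0])

# Pearson's r: the size of the correlation between the two variables
r = np.corrcoef(study_hours, exam_score)[0, 1]

# Coefficient of determination: proportion of variance in one factor
# attributable to the other (square Pearson's r)
r_squared = r ** 2

print(round(r, 3), round(r_squared, 3))
```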
  4. What occurs through a regression analysis?
    Predicting a value for variable Y by knowing the size of a correlation and a value for variable X
  5. What is a regression line?
    It summarizes the points of a scatterplot and provides the means for making predictions
  6. What is a criterion variable?
    • In a regression analysis, the variable being predicted from the predictor variable
    • Aka the Y variable
  7. What is a predictor variable?
    • In a regression analysis, the variable used to predict the criterion variable
    • Aka the X variable
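The regression cards above can be sketched in code: fitting a regression line and using it to predict a value of the criterion (Y) variable from the predictor (X) variable. The SAT/GPA numbers are invented for illustration:

```python
import numpy as np

sat_score = np.array([1000., 1100., 1200., 1300., 1400.])  # predictor (X)
college_gpa = np.array([2.4, 2.8, 3.0, 3.3, 3.6])          # criterion (Y)

# Fit the regression line Y = slope * X + intercept by least squares
slope, intercept = np.polyfit(sat_score, college_gpa, 1)

# Predict a criterion value for a new predictor value (SAT = 1250)
predicted_gpa = slope * 1250 + intercept
print(round(predicted_gpa, 2))
```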
  8. What interpretation problems can occur with correlations?
    • Directionality problem
    • Third variable problem
  9. What is the directionality problem?
    • In correlational research, the fact that for a correlation between variables X and Y, it's possible that X is causing Y but it's also possible for Y to be causing X
    • The correlation alone provides no basis for deciding between the two alternatives
  10. What is a cross-lagged panel correlation?
    • A type of correlational research designed to deal with the directionality problem
    • If variables X and Y are measured at two different times and if X precedes Y, then X might cause Y but Y cannot cause X
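The cross-lagged logic above amounts to comparing two correlations across time. A minimal sketch with invented data (loosely modeled on the classic TV violence/aggression panel studies):

```python
import numpy as np

# Invented scores for the same six people at two points in time
tv_t1 = np.array([3., 5., 2., 8., 6., 4.])    # TV violence viewing, time 1
agg_t1 = np.array([2., 4., 1., 6., 5., 3.])   # aggression, time 1
tv_t2 = np.array([4., 5., 3., 7., 6., 4.])    # TV violence viewing, time 2
agg_t2 = np.array([3., 6., 2., 8., 6., 4.])   # aggression, time 2

# The two cross-lagged correlations: X at time 1 with Y at time 2,
# and Y at time 1 with X at time 2
r_tv1_agg2 = np.corrcoef(tv_t1, agg_t2)[0, 1]
r_agg1_tv2 = np.corrcoef(agg_t1, tv_t2)[0, 1]
print(round(r_tv1_agg2, 2), round(r_agg1_tv2, 2))
```

If the first correlation is clearly larger than the second, the pattern is more consistent with X causing Y than the reverse.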
  11. What is the third variable problem?
    • The problem of drawing causal conclusions in correlational research
    • Third variables are uncontrolled factors that could underlie a correlation between variables X and Y
  12. What is partial correlation?
    • A multivariate statistical procedure for evaluating the effects of third variables
    • If the correlation between X and Y remains high, even after some third factor Z has been partialed out, then Z can be eliminated as a third variable
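A minimal sketch of partialing out a third variable Z, using the standard first-order partial correlation formula; the three pairwise correlations are invented for illustration:

```python
import math

# Invented pairwise Pearson correlations
r_xy = 0.50  # e.g., reading ability (X) with shoe size (Y)
r_xz = 0.60  # X with age (Z), the candidate third variable
r_yz = 0.70  # Y with Z

# Partial correlation of X and Y with Z partialed out:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
r_xy_given_z = (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(round(r_xy_given_z, 3))
```

Here the correlation drops sharply once Z is partialed out, suggesting Z underlies the X-Y relationship; if it had remained high, Z could be eliminated as a third variable.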
  13. What is the split-half reliability?
    A form of reliability in which half of the items (ex: even numbered items) on a test are correlated with the remaining items
  14. What is the test-retest reliability?
    A form of reliability in which a test is administered on two occasions and the correlation between them is calculated
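Split-half reliability, as described above, is just a correlation between two halves of the same test. A minimal sketch with an invented item-response matrix (5 people by 6 items, scored 0/1):

```python
import numpy as np

# Invented responses: rows are people, columns are test items
responses = np.array([
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
])

# Total score on the odd-numbered items vs. the even-numbered items
odd_total = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_total = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6

# Correlate the two halves
split_half_r = np.corrcoef(odd_total, even_total)[0, 1]
print(round(split_half_r, 2))
```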
  15. What is intraclass correlation?
    A form of correlation used when pairs of scores do not come from the same individual, as when correlations are calculated for pairs of twins
  16. What are the three popular multivariate procedures?
    • Partial Correlation
    • Multiple Regression
    • Factor Analysis
  17. What is a multiple regression?
    • A multivariate analysis that includes a criterion variable and two or more predictor variables
    • The predictors have different weights
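A minimal sketch of multiple regression via ordinary least squares (`numpy.linalg.lstsq`): one criterion, two predictors, each with its own weight. The GPA/SAT data are invented for illustration:

```python
import numpy as np

# Invented data: two predictors and one criterion
hs_gpa = np.array([3.0, 3.5, 2.8, 3.9, 3.2, 2.5])
sat = np.array([1100., 1250., 1000., 1400., 1150., 950.])
col_gpa = np.array([2.9, 3.3, 2.6, 3.8, 3.0, 2.4])  # criterion

# Design matrix: a column of 1s for the intercept plus the predictors
X = np.column_stack([np.ones_like(hs_gpa), hs_gpa, sat])
coefs, *_ = np.linalg.lstsq(X, col_gpa, rcond=None)
intercept, w_hs_gpa, w_sat = coefs  # each predictor gets its own weight

# Predict the criterion for a new case (HS GPA 3.4, SAT 1200)
predicted = intercept + w_hs_gpa * 3.4 + w_sat * 1200
print(round(predicted, 2))
```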
  18. What is a factor analysis?
    • A multivariate analysis in which a large number of variables are intercorrelated
    • Variables that correlate highly with each other form factors
  19. What is a correlation matrix?
    A table that summarizes a series of correlations among several variables
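A minimal sketch of building a correlation matrix with `numpy.corrcoef`; the three variables and their scores are invented for illustration:

```python
import numpy as np

# Invented scores on three variables for the same five people
anxiety = np.array([10., 8., 12., 6., 9.])
sleep_hrs = np.array([6., 7., 5., 8., 7.])
gpa = np.array([3.0, 3.2, 2.7, 3.6, 3.1])

# Each row of the input is one variable; the result is a symmetric
# table of every pairwise Pearson r, with 1s on the diagonal
matrix = np.corrcoef([anxiety, sleep_hrs, gpa])
print(matrix.round(2))
```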
  20. What design problems can occur in applied research?
    • Ethical dilemmas
    • A tradeoff between internal and external validity
    • Problems unique to between-subjects designs
    • Problems unique to within-subjects designs
  21. What ethical dilemmas can occur in applied research?
    • A study conducted outside of the laboratory may create problems relating to informed consent and privacy
    • Proper debriefing is not always possible
    • Research done in an industrial/corporate setting may include an element of perceived coercion if employees believe their job status depends on whether they volunteer to participate in a study
  22. What tradeoff between internal and external validity can occur in applied research?
    • The researcher can lose control over the variables operating in the study because the research often takes place in the field
    • The danger of possible confounding can reduce the study's internal validity
    • External validity is usually high in applied research because the setting more closely resembles real-life situations, and the problems addressed by applied research are everyday problems
  23. What problems unique to between-subjects designs can occur in applied research?
    • It is often impossible to use random assignment to form equivalent groups
    • The studies often use ex post facto designs and must therefore compare nonequivalent groups
    • This introduces the possibility of reducing internal validity by subject selection problems or interactions between selection and other threats such as maturation or history
    • When matching is used to achieve a degree of equivalence, regression to the mean can occur
  24. What problems unique to within-subjects designs can occur in applied research?
    • It is not always possible to counterbalance properly in applied studies using within-subjects factors
    • The studies might have uncontrolled order effects
    • Attrition can be a problem for studies that extend over a long period
  25. What is a quasi-experimental design?
    Occurs when causal conclusions about the effects of an independent variable cannot be drawn because subjects cannot be randomly assigned to the groups being given different levels of an independent variable
  26. What is archival research?
    A method in which existing records are examined to test a hypothesis
  27. What is a nonequivalent control group design?
    A quasi-experimental design in which participants cannot be randomly assigned to the experimental and control groups
  28. What is an interrupted time series design?
    A quasi-experimental design in which a program or treatment is evaluated by measuring performance several times prior to the institution of the program and several times after the program is put into effect
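The comparison at the heart of an interrupted time series can be sketched simply: several measurements before the program, several after, then look for a discontinuity at the interruption. The accident counts below are invented:

```python
import numpy as np

# Invented monthly accident counts before and after a safety program
before = np.array([42., 45., 41., 44., 43., 46.])  # pre-program
after = np.array([38., 35., 36., 33., 34., 32.])   # post-program

# A clear drop at the interruption, relative to the pre-program
# pattern, suggests a program effect
print(before.mean(), after.mean())
```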
  29. What are trends?
    • Predictable patterns of events that occur over time
    • Evaluated in time series studies
  30. What is an interrupted time series with switching replications design?
    A time series design in which the program is replicated at a different location and at a different time
  31. What is archival data?
    Data initially collected for a purpose not related to a current research study and used later for a specific purpose in the current research
  32. What is content analysis?
    A procedure used to systematically categorize the content of the behavior (often verbal behavior) being recorded
  33. What is reactivity?
    Occurs when participants' behavior is influenced by the knowledge that they are being observed and their behavior recorded
  34. What is a program evaluation?
    A form of applied research that includes a number of research activities designed to evaluate programs from planning to completion
  35. What does a program evaluation include?
    • Procedures for determining if a need exists for a particular program and who would benefit if the program is implemented
    • Assessments of whether a program is being run according to plan and, if not, what changes can be made to facilitate its operation
    • Methods for evaluating program outcomes
    • Cost analyses to determine if program benefits justify the funds expended
  36. What is a needs analysis?
    A form of program evaluation that occurs before a program begins and determines whether the program is needed
  37. What are some ways to identify the potential need for a program?
    • Census data
    • Surveys of available resources
    • Surveys of potential users
    • Key informants, focus groups, and community forums
  38. What is a formative evaluation?
    A form of program evaluation that monitors the functioning of a program while it is operating to determine if it is functioning as planned
  39. What is a program audit?
    • An examination of whether a program is being implemented as planned
    • A type of formative evaluation
  40. What are summative evaluations?
    A form of program evaluation, completed at the close of a program, that attempts to determine its effectiveness in solving the problem for which it was planned
  41. What is cost-effectiveness analysis?
    A form of program evaluation that assesses outcomes in terms of the costs involved in developing, running, and completing the program
  42. What is individual-subject validity?
    The extent to which the general outcome of a research study characterizes the behavior of the individual participants in the study
  43. When should small N designs be used instead of large N designs?
    • Sometimes potential subjects are rare or difficult to find
    • Small N designs reduce random variability by achieving precise control over the experimental situation affecting the single subject
  44. What is operant conditioning?
    A process in which the frequency of occurrence of a bit of behavior is modified by the consequences of the behavior
  45. What is rate of response?
    • The favored dependent variable of researchers working in the Skinnerian tradition
    • Refers to how frequently a behavior occurs per unit of time
  46. What is a cumulative recorder?
    An apparatus for recording the subject's cumulative rate of response in operant conditioning studies
  47. What is applied behavior analysis?
    Research using various methods to evaluate the effectiveness of conditioning procedures in bringing about changes in the rate of response of some behavior
  48. What is the A-B design?
    A small N design in which a baseline period (A) is followed by a treatment period (B)
  49. What is the withdrawal design?
    • A-B-A design
    • A small N design in which a baseline period (A) is followed by a treatment period (B) followed by a period in which the treatment is reversed/withdrawn (second A)
  50. What is a multiple baseline design?
    • A small N design in which treatment is introduced at staggered intervals when trying to alter:
    • a) the behavior of more than one individual
    • b) more than one behavior in the same individual
    • c) the behavior of an individual in more than one setting
  51. What is a changing criterion design?
    • A small N design in which the criterion for receiving reinforcement begins at a modest level and becomes more stringent as the study progresses
    • Used to shape behavior
  52. What is social validity?
    The extent to which an applied behavior analysis program has the potential to improve society, whether its value is perceived by the study's participants, and whether participants actually use the program
  53. What is the alternating treatments design?
    A small N design that compares, in the same study and for the same participant(s), two or more forms of treatments for changing some behavior
  54. What are some criticisms against small N behavioral designs?
    • External validity - the extent to which results generalize beyond the specific conditions of the study
    • Lack of use of statistical analyses and relying on mere visual inspection of the data
    • They can't test adequately for interactive effects
    • Reliance on rate of response as the dependent variable
  55. What is a case study?
    A descriptive method in which an in-depth analysis is made of a single individual, a single rare event, or an event that clearly exemplifies some phenomenon
  56. What are the two strengths of the case study method?
    • Case studies can provide a level of detailed analysis not found in other research strategies
    • Well-chosen cases can provide prototypical descriptions of certain types of individuals
  57. What are the limitations of a case study?
    • Conclusions drawn on the basis of a single individual may not generalize (problems with external validity)
    • Ample opportunity exists for the theoretical biases of the researcher to color case study descriptions
    • Participants in case studies of individuals are often required to recall events from the past, and the writers of case histories also have to rely on memories of their encounters with the object of the case
  58. What are the two types of observational research?
    • Naturalistic observation
    • Participant observation
  59. What is a naturalistic observation?
    Descriptive research method in which the behavior of people or animals is studied as it occurs in its everyday natural environment
  60. What are the two strategies to make sure that the behavior observed is not affected by the experimenter's presence?
    • The observer is hidden from those being observed
    • The observer makes no attempt to hide; rather, it is hoped that those being observed will become habituated to the observer and behave normally after some time
  61. What is participant observation?
    Descriptive research method in which the behavior of people is studied as it occurs in its everyday natural environment and the researcher becomes a part of the group being observed
  62. Why is participant observation a common technique of qualitative research?
    The descriptions usually involve a narrative analysis of the group being studied rather than quantitative data
  63. What are the challenges facing observational methods?
    • Absence of control
    • Observer bias
    • Participant reactivity
    • Ethics
  64. How can biasing effects be reduced?
    • Use of good operational definitions
    • Interobserver reliability
    • Time sampling
    • Event sampling
  65. What is time sampling?
    A procedure in observational research in which behavior is sampled during predefined times only (ex: every 10 mins)
  66. What is event sampling?
    A procedure in observational research in which only certain types of behaviors occurring under precisely defined conditions are sampled
  67. How can reactivity be reduced?
    Use of unobtrusive measures
  68. What are unobtrusive measures?
    • A measure of behavior that can be recorded without participants knowing their behavior has been observed
    • ex: contents of trash to study eating and drinking habits; accumulation of dust on library books as an indication of usage; degree of wear on floor coverings placed in strategic locations to study foot traffic; analysis of political bumper stickers in an election year
  69. When does the APA ethics code condone the use of naturalistic observation without requiring informed consent or debriefing?
    • Only when certain safeguards are in place
    • ex: informed consent of participants is not considered essential if behavior is studied in public environments, people are not interfered with in any way, and strict confidentiality and anonymity are maintained
  70. What is probability sampling?
    A general strategy that is used whenever the goal is to survey a clearly identifiable group of individuals
  71. What is the self-selection problem?
    In surveys, when the sample is composed of only those who voluntarily choose to respond, the result can be a biased sample
  72. What are the two problems with simple random sampling and their solutions?
    • There may be systematic features of the population you might like to have reflected in your sample
    • Solution: use of stratified sampling
    • The procedure may not be practical if the population is extremely large
    • Solution: cluster sampling
  73. What is a stratified sample?
    A probability sample that is random, with the restriction that important subgroups are proportionately represented within it
  74. What is a cluster sample?
    A probability sample that randomly selects clusters of people having some feature in common (ex: students taking history courses) and tests all people within the selected cluster (ex: all students in three of the nine history courses available)
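The sampling cards above can be contrasted in code. A minimal sketch with Python's `random` module, using an invented population in which 70% of students live on campus and 30% off campus:

```python
import random

random.seed(1)
# Invented population: 700 on-campus and 300 off-campus students
population = [("on", i) for i in range(700)] + [("off", i) for i in range(300)]

# Simple random sample: subgroup proportions can drift by chance
simple = random.sample(population, 50)

# Stratified sample: draw from each stratum in proportion (35 on, 15 off),
# so important subgroups are proportionately represented
on_campus = [p for p in population if p[0] == "on"]
off_campus = [p for p in population if p[0] == "off"]
stratified = random.sample(on_campus, 35) + random.sample(off_campus, 15)

print(sum(1 for p in stratified if p[0] == "on"))  # exactly 35 on-campus
```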
  75. What is a convenience sample?
    • A nonprobability sample in which the researcher requests volunteers from a group of people who meet the general requirements of the study (ex: teenagers)
    • Used in most psychological research, except when specific estimates of population values must be made
  76. What are the three forms of convenience sampling?
    • Purposive sampling
    • Quota sampling
    • Snowball sampling
  77. What is a purposive sample?
    A nonprobability sample in which the researcher targets a particular group of individuals (ex: Milgram using working adults and avoiding college students)
  78. What is a quota sample?
    A nonprobability sample in which the proportions of some subgroups in the sample are the same as those subgroup proportions in the population
  79. What is a snowball sample?
    • A nonprobability sample in which a member of a particular group, already surveyed, helps recruit additional group members through a network of friends
    • Often occurs for surveys of a relatively small group or a group that generally wishes to remain hidden
    • Sometimes called referral sampling
  80. What are the methods of surveying?
    • Interviews
    • Phone surveys
    • Electronic surveys
    • Written surveys
  81. What are the issues in an interview survey?
    • Sampling - in many cases, sizable segments of the population may not be included if they refuse to be interviewed, cannot be located, or live in an area the interviewer would prefer to avoid
    • Cost
    • Logistics
    • Interviewer bias
  82. What are the issues with phone surveys?
    • Telemarketing produced a level of annoyance sufficient to lead to the creation of national do-not-call lists
    • The marketing strategy had the effect of creating high levels of suspicion and distrust in the general public, especially when telemarketers began the call by pretending to conduct a survey when in fact they were selling a product
  83. What is a social desirability bias?
    • A type of response bias in survey research
    • Occurs when people respond to a question by trying to put themselves in a favorable light
  84. What are the two types of survey questions?
    • Open-ended question
    • Closed question
  85. What is an open-ended question?
    A type of question found on surveys that requires a narrative response rather than a yes or no answer
  86. What is a closed question?
    A type of question found on surveys that can be answered with a yes or no, or by marking a point on a scale
  87. What is response acquiescence?
    A response set in which a participant tends to respond positively to survey questions, all else being equal
  88. What are the two guidelines in the case where surveys attempt to assess the respondent's memory or what they know?
    • Do not overburden memory
    • Use DK alternatives sparingly
  89. What is demographic information?
    Data that classifies or identifies individuals (ex: gender, age, income, etc.)
  90. What is a double-barreled question?
    • In a survey, a question or statement that asks or states two things in a single item
    • ex: It is wrong for female patrons in bars to swear and buy drinks for men who are unknown to them
  91. What is a leading question?
    In a survey, a question asked in such a way that the answer desired by the questioner is clear
  92. When does the APA not require informed consent in survey research?
    The APA excuses anonymous questionnaires assuming that disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability, or reputation and confidentiality is protected