research methods1.222

Card Set Information

2012-08-05 22:06:33
rhtm comp

  1. Definition of Reliability?
    • Reliability means that scores from an instrument are stable and consistent.
    • Scores should be nearly the same when researchers administer the instrument multiple times at different times. Also, scores need to be consistent.
    • When an individual answers certain questions one way, the individual should consistently answer closely related questions in the same way.
  2. Definition of Validity?
    • Validity is the development of sound evidence to demonstrate that the test interpretation (of scores about the concept or construct that the test is assumed to measure) matches its proposed use. 
    • Validity is the degree to which all of the evidence points to the intended interpretation of test scores for the proposed purpose.
    • Thus, a focus is on the consequences of using the scores from an instrument
    • Do the measures we are using accurately reflect the true score of the object we are attempting to measure?
  3. What is Internal Validity?
    • Internal validity relates to the validity of inferences drawn about the cause and effect relationship between the independent and dependent variables.
    • Changes in the dependent variable are due to the observed independent variable and not to outside effects.
  4. External validity
    External validity refers to the validity of the cause-and-effect relationship being generalizable to other persons, settings, treatment variables, and measures (rather than the observed variables).
  5. Threats to validity refer to what?
    Threats to validity refer to specific reasons for why we can be wrong when we make an inference in an experiment because of covariance, causation constructs, or whether the causal relationship holds over variations in persons, setting, treatments, and outcomes.
  6. What is construct validity?
    • Construct validity, which means the validity of inferences about the constructs (or variables) in the study.
    • Does the measure of attitude we are using measure attitude?
  7. When threats to validity occur, what happens?
    • Threats to external validity are problems that threaten our ability to draw correct inferences from the sample data to other persons, settings, treatment variables, and measures.
    • If the results of our study are applicable only to the sample in our study rather than generalizable, then the value of the results is much more limited.
  8. If scores are not reliable, can they still be valid?
    • No. If scores are not reliable, they are not valid.
    • Scores need to be stable and consistent first before they can be meaningful.
  9. If scores are valid, can they be unreliable?
    No. Following from the previous card, reliability is a necessary (but not sufficient) condition for validity: scores cannot measure what was intended while being inconsistent (unreliable).
  10. What is the ideal situation?
    • The ideal situation exists when scores are both reliable and valid.
    • In addition, the more reliable the scores from an instrument, the more valid the scores will be.
    • Scores need to be stable and consistent before they can be meaningful.
  11. What are some of the factors that result in unreliable data?
    • Questions on instruments are ambiguous and unclear.
    • Procedures of test administration vary and are not standardized.
    • Participants are fatigued, are nervous, misinterpret questions, or guess on tests.
  12. What is face validity?
    • In face validity, you look at the operationalization and see whether "on its face" it seems like a good translation of the construct.
    • This is probably the weakest way to try to demonstrate construct validity.
    • Here we are using common sense to determine if the instrument/constructs are measuring what is intended.
  13. What is Discriminant Validity?
    • Discriminant validity requires that a measure does not correlate too highly with measures from which it is supposed to differ.
    • Doing a CFA or EFA will help to ensure that the measures represent separate constructs.
  14. What is Convergent Validity?
    If a number of items are highly correlated, then they should all be measuring the same construct.
  15. What do we use to measure reliability?
    Cronbach's alpha
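    Cronbach's alpha can be computed directly from an item-score matrix. A minimal sketch in Python with NumPy (the function name and the sample scores are illustrative, not from the original cards):

    ```python
    import numpy as np

    def cronbach_alpha(items) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                          # number of items
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: two perfectly correlated items yield alpha = 1.0
    scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
    print(round(cronbach_alpha(scores), 2))  # 1.0
    ```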
  16. Explain the difference between validity and reliability for a paper.
    Reliability refers to the consistency of measurement, while validity refers to the extent to which an instrument measures what it is intended to measure (Hatcher, 1994).
  17. What are three threats to external validity?
    1. Sample Selection (sample is atypical of larger population)

    2. Sample Setting (results are specific to the region where study was conducted even if have a representative sample)

    3. Time (results may be particular to a given point in time (e.g., gun control support increased after the Columbine shootings); works similarly to a history threat)
  18. How can we maximize external validity?
    a. Representativeness sampling: identify important subgroups and randomly sample from each subgroup (maintains some element of randomness)

    b. Heterogeneity sampling: construct the sample to be as different as possible (purposeful construction is very time-consuming but effective)

    c. Modal distance sampling: identify the one intact group that most closely resembles the population and work only with that group (no other subgroups of the population).
  19. What are some threats to construct validity?
    1. Hypothesis guessing by the subjects (respondents think they know the purpose of the study and tailor responses to how they think you want them to respond)

    2. Evaluation apprehension (respondents are concerned about issuing responses seen as socially undesirable, resulting in severely over- or underestimated results)

    3. Construct generalizability (constructs are too closely related/correlated to separate)
  20. How do I determine if the instrument for my study is valid?
    • Identify an instrument (or test) that you would like to use
    • Look for evidence of validity by examining prior studies that have reported scores and use of the instrument
    • Look closely at the purpose for which the instrument was used in these studies
    • Look as well at how the researchers have interpreted (discussed if the instrument measured what it is intended to measure) the scores in light of their intended use
    • Evaluate whether the authors provide good evidence that links their interpretation to their use
  21. Evidence for validity and reliability of measures can be seen in __________ .
    Previous studies.
  22. What is predictive validity?
    Predictive validity is determined by observing how well the measure predicts a criterion (e.g., how well the GMAT predicts success in an MBA program).
  23. What does the acronym CCCCIEPDF stand for?
    Construct, Content, Convergent, Concurrent, Internal, External, Predictive, Discriminant, Face
  24. What is criterion validity?
    • the ability of the measure to predict a variable that is
    • designated as a criterion.
  25. What are two types of criterion validity?
    • concurrent validity
    • predictive validity
  26. What is concurrent validity?
    The relationship between measures made with existing tests. The existing test is thus the criterion. For example, a measure of creativity should correlate with existing measures of creativity.
  27. Predictive validity?
    Measures the extent to which a future level of a variable can be predicted from a current measurement.

    e.g., MBA program performance predicted by GMAT scores.
  28. What is content validity
    • Content validity occurs when the experiment provides adequate coverage of the subject being studied.
    • This includes measuring the right things as well as having an adequate sample.
    • Samples should be both large enough and be taken from appropriate target groups.
    • Content validity is related very closely to good experimental design.
  29. In order to determine satisfactory convergent validity
    • Useful assessments are composite reliability and Average Variance Extracted (AVE)
    • Composite reliability (like Cronbach's alpha, a measure of internal consistency) should be equal to or greater than .70, and AVE should be greater than .50.
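    Both checks reduce to simple arithmetic on standardized factor loadings. A sketch, assuming the loadings have already been obtained from a CFA (the function names and the example loadings are illustrative):

    ```python
    import numpy as np

    def ave(loadings) -> float:
        """Average Variance Extracted: mean of the squared standardized loadings."""
        lam = np.asarray(loadings, dtype=float)
        return float(np.mean(lam ** 2))

    def composite_reliability(loadings) -> float:
        """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
        lam = np.asarray(loadings, dtype=float)
        num = lam.sum() ** 2
        return float(num / (num + (1 - lam ** 2).sum()))

    # Three items loading at .8 comfortably pass both thresholds
    lams = [0.8, 0.8, 0.8]
    print(ave(lams) > 0.5, composite_reliability(lams) > 0.7)  # True True
    ```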
  30. In order to determine satisfactory discriminant validity:
    • Can do a factor analysis to see if items load on the factors they are supposed to.
    • AVE (average variance extracted) for the constructs should be greater than their squared correlation (shared variance).
    • Can use SEM to check discriminant validity: set the constructs to have a correlation of 1 and do a chi-square difference test against an unconstrained model to determine if the difference is significant. If the difference is significant, then the correlation is different from 1 (the constructs are distinct).
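    The AVE-versus-shared-variance comparison above (the Fornell-Larcker criterion) is a simple inequality. A sketch with hypothetical values:

    ```python
    def discriminant_ok(ave_a: float, ave_b: float, corr_ab: float) -> bool:
        """Fornell-Larcker check: each construct's AVE must exceed the squared
        correlation (shared variance) between the two constructs."""
        shared = corr_ab ** 2
        return ave_a > shared and ave_b > shared

    print(discriminant_ok(0.60, 0.55, 0.50))  # True  (shared variance = 0.25)
    print(discriminant_ok(0.30, 0.55, 0.60))  # False (shared variance = 0.36)
    ```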
  31. One can use statistics to calculate discriminant and convergent validity.
    AVE, SEM
  32. Goodness of fit (CFI, NFI) is a popular measure for ________
    Model usefulness.

    A high goodness of fit, however, is neither a sufficient nor even a necessary condition for model usefulness.
  33. Goodness of fit measures (advantages)
    Goodness of fit measures have some obvious advantages in model development. First, they are quantitative indices which can be compared across models so that the best of a set of models can be selected. Secondly, there exist established statistical procedures for testing these measures for significance. Third, they can be used as a means of deducing new or modified models from a set of data. Fourth, they appear as output in canned computer programs, which is no small reason for their prominence in reported research results.
  34. I will more often than not be using _______ scales for my research (Likert Items), but will also use __________ scales for my demographic data.
    I will more often than not be using interval/ratio scales for my research (Likert Items), but will also use nominal/categorical scales for my demographic data.
  35. Test-retest reliability
    examines the extent to which scores from one sample are stable over time from one test administration to another
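    Test-retest reliability is typically estimated as the Pearson correlation between the two administrations. A minimal sketch with NumPy (the scores are hypothetical):

    ```python
    import numpy as np

    def test_retest_reliability(time1, time2) -> float:
        """Pearson correlation between two administrations of the same instrument."""
        return float(np.corrcoef(time1, time2)[0, 1])

    # Perfectly stable scores across administrations give r = 1.0
    print(round(test_retest_reliability([10, 12, 15, 18], [11, 13, 16, 19]), 3))  # 1.0
    ```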
  36. Alternate forms reliability
    using two instruments, both measuring the same variables, and relating (or correlating) the scores for the same group of individuals on the two instruments. Both instruments need to be very similar.
  37. Internal consistency reliability
    The subject's answers should be internally consistent. If someone completes items at the beginning of the instrument one way (e.g., positive about the negative effects of tobacco), then they should answer the questions later in the instrument in a similar way (e.g., positive about the health effects of tobacco).
  38. Interrater reliability
    When there are two people observing one or more individuals. The observers need to be trained so that their observations are consistent.
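    Interrater agreement is often quantified with Cohen's kappa, which corrects raw agreement for chance agreement. A sketch for two raters assigning categorical codes (the rater data below is hypothetical):

    ```python
    from collections import Counter

    def cohens_kappa(rater1, rater2) -> float:
        """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
        n = len(rater1)
        p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
        c1, c2 = Counter(rater1), Counter(rater2)
        p_chance = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n ** 2
        return 1.0 if p_chance == 1 else (p_obs - p_chance) / (1 - p_chance)

    print(cohens_kappa([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0 (perfect agreement)
    print(cohens_kappa([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0 (agreement at chance level)
    ```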
  39. What does TAII stand for?
    • Test-retest
    • Alternate forms
    • Interrater reliability
    • Internal consistency reliability
  40. People travel because they are _________ into making travel decisions by internal, psychological forces
    Pushed.
  41. People travel because they are _________ into making travel decisions by external forces of the destination attributes.
    Pulled.
  42. In psychology and sociology, the definition of motivation is directed toward emotional and cognitive motives (Ajzen & Fishbein, 1977) or internal and external motives (Gnoth, 1997). What is an internal motive?
    An internal motive is associated with drives, feelings, and instincts.
  43. In psychology and sociology, the definition of motivation is directed toward emotional and cognitive motives (Ajzen & Fishbein, 1977) or internal and external motives (Gnoth, 1997). What is an external motive?
    An external motive involves mental representations such as knowledge or beliefs.
  44. Why do people travel?
    People travel because they are pushed and pulled to do so by “some forces” or factors (Dann, 1977, 1981).
  45. Push motivations are related to the  ______, while pull motivations are associated with the _______.
    Push motivations are related to the tourists’ desire, while pull motivations are associated with the attributes of the destination choices.
  46. What does STCRWSPRP stand for?

    Hint: Procedure for developing a questionnaire
    • Specify (information sought)
    • Type (determine type of questionnaire)
    • Content (Determine question content)
    • Response (Determine form of response)
    • Wording (Determine wording of each question)
    • Sequence (Determine sequence of questions)
    • Physical (Determine physical characteristics of questionnaire)
    • Revise (if needed)
    • Pretest (pretest, then revise if necessary)
  47. In order to determine convergent and divergent validity, what can we do?
    • Check composite reliability and AVE (convergent), and compare each construct's AVE to its squared correlations with other constructs (discriminant).
    • Run a factor analysis (CFA or EFA) to confirm that items load on their intended constructs.
  48. What does CFI stand for in SEM?
    • comparative fit index
    • Others are the Bentler-Bonett normed fit index (BBNFI)
    • Normed Fit Index (NFI)
  49. What number should CFI be in SEM?
    0.90 or greater
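    CFI compares the fitted model's non-centrality against that of a baseline (null) model. A sketch of the standard formula, using hypothetical chi-square values and degrees of freedom:

    ```python
    def cfi(chi2_model: float, df_model: int, chi2_null: float, df_null: int) -> float:
        """Comparative Fit Index:
        1 - max(chi2_m - df_m, 0) / max(chi2_m - df_m, chi2_0 - df_0, 0)."""
        d_model = max(chi2_model - df_model, 0.0)
        d_null = max(chi2_null - df_null, d_model, 0.0)
        return 1.0 if d_null == 0 else 1.0 - d_model / d_null

    # A model chi-square close to its degrees of freedom gives a CFI near 1
    print(round(cfi(chi2_model=100, df_model=90, chi2_null=1000, df_null=105), 3))  # 0.989
    ```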
  50. What can SEM do that is the most meaningful for construct validity?
    • Its ability to provide evidence of construct validity, or the lack thereof, through a meaningful confirmatory factor analysis (CFA).
    • So CFA provides evidence of construct validity, or a lack of construct validity.
  51. In SEM, we should have ____ or more items per construct.
    Three (a common rule of thumb).
  52. GFI stands for
    Goodness of fit index.
  53. For basic or applied research, a reliability of _____ is sufficient.
    For basic or applied research, a reliability of 0.80 is sufficient (Nunnally 1978).

    Measured with Cronbach's alpha. Some use 0.70 as a cutoff.
  54. eigenvalues-greater-than-1
    Factors with eigenvalues greater than 1 are the ones kept from a factor analysis for the structural equation model (the Kaiser criterion).
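    The eigenvalues-greater-than-1 rule counts eigenvalues of the item correlation matrix. A sketch (the function name and the example matrices are illustrative):

    ```python
    import numpy as np

    def kaiser_factor_count(corr_matrix) -> int:
        """Number of factors retained under the eigenvalue-greater-than-1 rule."""
        eigvals = np.linalg.eigvalsh(np.asarray(corr_matrix, dtype=float))
        return int((eigvals > 1.0).sum())

    # Two highly correlated items share one dominant factor
    print(kaiser_factor_count([[1.0, 0.9], [0.9, 1.0]]))  # 1
    # Three uncorrelated items: every eigenvalue equals exactly 1, so none exceed it
    print(kaiser_factor_count(np.eye(3)))  # 0
    ```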
  55. Cronbach’s alpha is a measure of _________ and, more specifically, _____________ .
    Cronbach’s alpha is a measure of reliability and, more specifically, internal consistency
  56. What are the 7 steps of conducting a research study I will identify?
    • 1. Identify the research problem
    • 2. Review the literature
    • 3. Specify a PURPOSE for the research
    • 4. DESIGN the data collection method and forms
    • 5. Collect data
    • 6. Analyze and interpret data
    • 7. Report and evaluate research
  57. What does the acronym IRSDCAR stand for?
    • Identify
    • Review
    • Specify (PURPOSE)
    • Design
    • Collect
    • Analyze/Interpret
    • Report

    "The IRS will demolish your car!"
  58. What is a research problem?
    Needs, issues, or controversies that arise out of the tourism/hospitality literature and industry are referred to as research problems.
  59. When I identify a research problem:
    • I am identifying an issue or problem in tourism/hospitality that needs to be resolved
    • I am specifying an issue to study
    • I am developing a justification for studying it
    • I am suggesting the importance of the study for select audiences that will read the manuscript
    • In focusing on a problem, I am limiting the subject matter and focusing attention on a specific aspect of study.
    • I will state my research problem in the introductory section of my research paper and provide a rationale for its importance.
  60. When I identify the problem in the introduction of my paper, it is often called a 'statement of the problem.' What is the statement of the problem? What does this section of my introduction consist of?
    The research problem is part of a larger written section called the “statement of the problem.” This section includes: the topic, the problem, a justification for the problem, and the importance of studying it for specific audiences such as industry professionals, or researchers.
  61. When I review the literature:
    • It is important to know who has studied the research problem I plan to examine
    • I do not want to replicate prior research but rather create new research or extend/add to prior research
    • Sometimes it adds value to replicate the research with a different sample or in a different setting
    • But I want to be sure that what I am doing hasn’t already been done
    • I want to build on existing knowledge or add to the accumulation of findings on a specific topic.
  62. What does reviewing the literature mean?
    • Reviewing the literature means locating summaries, books, journals, and publications on a topic, selectively choosing which literature to include in my review, then summarizing the literature in the written report.
    • Being skilled at locating literature on a topic is very important
    • Utilizing library resources, online catalogs, search engines, Google Scholar, and databases on the library website
    • It is important to choose and evaluate the quality of research on my topic
    • Then summarize in the lit review
    • The literature review helps to justify the need for the research problem and suggests potential purposes and research questions for the study.
    • The researcher uses the literature to help justify the current research problem for the report.
    • Somewhere in the literature, you hope to find someone who mentioned the importance of or need for your current study, or a recommendation for your current study.
  63. I specify my purpose of my research in step 3 using a ___________ .
    Purpose statement.
  64. What is a purpose statement?
    • This statement conveys the overall objective or intent of the research
    • The purpose statement introduces the entire study, and signals the procedures I will use to collect data, and indicates the types of results I hope to find
  65. The purpose for the research consists of identifying the major intent or objective of a study and narrowing it into specific ________ or __________
    Research questions or hypotheses.
  66. Some characteristics of a purpose statement:
    • The purpose statement contains the major focus of the study, the participants in the study, and the location or site of the inquiry
    • The purpose statement is then narrowed to research questions or predictions that I plan to answer in my research study
    • In quantitative research, the researcher identifies a research problem based on trends in the field or on the need to explain why something occurs
  67. Difference between primary data and secondary data?
    • In the majority of my studies, I will be using primary data rather than secondary data. For this study, I will be collecting primary data.
    • Primary data is collected specifically for the study, while secondary data comes from a third party and was collected for someone else's study (used with permission from the data collector).
  68. When I design the data collection method and forms:
    • The data collected for this study will be in questionnaire form. Event attendees will be approached randomly as they are leaving the event.
    • Other methods of data collection could be asking attendees to complete an online survey after attending the event, or mailing the survey with a return envelope and a raffle drawing as an added incentive for the participants.
    • The survey design will consist of fixed answers rather than open-ended answers. All items will consist of 7-pt Likert-type statements (Strongly Disagree to Strongly Agree). Respondents will indicate the response that best fits their feeling about each statement. All of the statements will be structured and undisguised, meaning that each question's purpose is clear and respondents are limited to one choice on the 7-pt Likert scale.
    • In this study I would design the survey in steps. I know the type of information I am seeking: push-pull motivational items, many of which I will acquire from the REP scale. I may add some factors/items that have been used in similar studies.
    • In order to determine the appropriate items that reflect push/pull motivations for an event, I will conduct a focus group of prior event attendees to determine the most significant motivations that account for event attraction and positive outcomes. The focus group is designed to uncover the motives that account for attraction to a ______ event and the subsequent attachment and positive outcomes.
    • The focus group will consist of 10-12 individuals from a recent culinary event and will concentrate on participants’ experiences of the event to better understand the motives leading to event participation, as well as the attachment process.
    • A list of possible questions for the focus group will be prepared to serve as a guide for the session. These questions will be developed with the goal of facilitating discussion of motivation and will be based on participants’ perceived motives and the needs to be satisfied through event participation.
    • Participants for the focus groups will be recruited via mass e-mails sent out by event organizers. These e-mails will instruct recipients to contact the researcher if they are interested and available. After the focus groups are completed, the transcriptions will be analyzed following the steps suggested by Creswell (2003).
    • When designing the questionnaire I need to determine what information will be sought, the type of questionnaire that will be used, and the method of administration. There are different ways I can administer the survey; for example, I can hire a company to survey event attendees for me, providing an online survey.
    • I should arrange the questions in the order of the proposed SEM path model relationships: push-pull motivations first, followed by the questions on event attachment, followed by the questions representing positive outcomes.
    • I need to determine the sample selection process. Will it be randomized (i.e., approach every 5th person that walks by)? I also need to identify how large I want the sample to be. My goal for the sample size should be, at minimum, greater than 300, but my preference is 500. I currently have 862 in my whitewater rafting study.
    • The sample will be a convenience sample, a nonprobability sample of people who happen to be in the area where I am surveying.
  69. To analyze my focus group data:
    • First, each transcription will be read thoroughly to obtain a general overview of the information.
    • Next, the transcriptions will be coded, looking for text related to individuals’ motivation for event participation.
    • The themes uncovered will then be presented narratively within the results chapter to further highlight the findings from the data.
  70. Why do I collect data in the 5th step?
    I collect data in order to provide answers to research questions and hypotheses.
  71. What is collecting data?
    • Collecting data means identifying and selecting individuals for a study, obtaining their permission to study them, and gathering information by asking people questions.
    • In the report, the process of collecting data goes in the methods section.
  72. What do I need to do before collecting data?
    IRB approval (Include what I need to submit to IRB)
  73. The 6th step: Analyzing and interpreting data is important because:
    • During or immediately after data collection, I need to make sense of the information supplied by individuals in the study.
    • Analysis consists of ‘taking the data apart’ to determine individual responses and then ‘putting it together to summarize it.’        
    • Analyzing and interpreting the data involves drawing conclusions about it, representing it in tables, figures, and pictures to summarize it, and explaining the conclusions in words to provide answers to your research questions.
    • I need to make sure the model fits the data and the instrument is valid and reliable.
  74. 7. The last step is writing the research report.
    • This is when I develop the written report and distribute it to select audiences.
    • These audiences can include other scholars, journal editors, students, industry publications, industry professionals.
    • When we report research, we need to decide on the audience.
    • The report should be structured in a format acceptable to these audiences.
    • More detailed and formal for scholars
    • Less wordy and easier to understand for laypeople and industry professionals
    • The structure for the research report varies for each audience.   
    • Be sure that my report is suitable for the intended audience
  75. Making sense out of statistical analysis
    • I plan on performing a CFA of the Push-pull, attachment and outcome variables.
    • I expect some of the items would be eliminated based on the factor loadings.
    • I would determine the number of factors for the push and pull measures by observing how many factors have an eigenvalue greater than 1.
    • I would keep measurement items with beta values greater than 0.4.
    • After this I would group the items together and name the factors for the study.
    • I would use these factors as the independent variables leading to place attachment and positive outcomes such as word-of-mouth and satisfaction.