HINF 461 Test 1

Card Set Information

HINF 461 Test 1
2013-02-02 19:34:36


  1. Major roles of evaluation


    - Provides evidence for using a health information technology
    - Studies can provide evidence and inform practice
    - Provides evidence for current practice
    - Evidence in the form of qualitative and quantitative evaluations
  2. There is a great deal of variability in the quality of evaluations
    - Poorly developed evaluation questions
    - Poor evaluation designs
    - Poorly executed evaluations
    - The use of the wrong statistics
  3. Evaluation: Main Considerations

    - Evaluation defines the variables that are to be studied. These include:
    - Conceptual definitions:
      - Provide clear statements of what is meant by a variable
      - Help one to decide how to operationalize and measure a variable
    - Conceptual variables:
      - Are an idea with a dimension that can vary
      - Can be simple or complex
      - One dimension of a concept (assuming concepts are multi-dimensional)
    - Conceptual hypothesis:
      - A statement of the relationship between two or more conceptual variables
    - Concepts are represented as variables
      - Usually take the form of indicators and measures
    - Measures:
      - Are used to collect the data
      - Are selected based on their ability to represent a variable (or dimension of a concept)
      - e.g. standardized scales
      - Allow for data to be collected
  4. Linkages Between Levels

    Validity:
    - Refers to the extent to which a measure reflects a concept
    - Reflecting neither more nor less than what is implied by the definition of the concept
    - Affects the quality of the research (if a measure is not reflective of the concept)

    Reliability:
    - Refers to the extent to which, on repeated measures, an indicator will yield similar readings
    - Affects the quality of the research (if measures do not adequately measure what they are intended to measure)
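Reliability of a measure is often checked empirically as test-retest correlation: administer the same instrument twice and correlate the two sets of scores. A minimal sketch in Python; the scores below are hypothetical, not from any study:

```python
# Test-retest reliability sketch: Pearson correlation between two
# administrations of the same instrument (hypothetical scores).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14, 16]   # first administration
time2 = [13, 14, 12, 17, 15, 16]   # second administration
r = pearson_r(time1, time2)        # values near 1.0 suggest a reliable measure
```

Values of r close to 1.0 indicate that repeated measurements yield similar readings, which is the definition of reliability given above.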
  5. Quantitative Evidence
    - The goal is to arrive at general statements that can be applied to a variety of situations: a generalization
    - Quantitative observations are:
      - Numeric (some form of count)
      - Free from bias
      - Controlled for confounding factors
  6. Qualitative Evidence
    - The emphasis is placed on the extent to which explanations and descriptions ring true for both:
      - The researcher
      - The people who are being described
    - Based on an analysis of documents, interviews, focus groups, etc.
  7. Misuses of Evidence
    - Insufficient (anecdotal) evidence
    - Suppressed evidence
    - Missing evidence
    - Unwarranted conclusions
      - Confusing correlation with causation
      - Culture influences our view
    - Illegitimate authority
      - Authority arising from sources other than data
    - False dilemma
      - Failing to consider other possible reasons for an outcome
  8. Evaluation in Health Informatics
    - Generic evaluation
    - Domain specific (sometimes has its origins in other disciplines)
    - Computer science

  9. Evaluation in Health Informatics

    - General observations of what is occurring in health informatics
    - Development Evaluation Matrix
    - House's Eight Approaches to Research

  10. Development Evaluation Matrix

    - Developed by Stead
    - Describes the relationship between the stages of system development and levels of evaluation
    - Stages of system development:
      - Component development
      - Components into system
      - System into environment
      - Routine use

  11. House’s Eight Approaches to Research
    - Applied to health informatics by Friedman and Wyatt
    - Research methods are divided into:
      - Objectivist (quantitative)
      - Subjectivist (qualitative)

  12. CHEATS
    - Developed by Shaw
    - Aspects for evaluating health informatics applications
    - Aspects include:
      - Human and organizational
    - Qualitative and quantitative research methods are used
  13. Domain Specific Evaluation
    - Examples of domain specific theories:
      - Organizational behaviour
      - Information technology
  14. Examples of Uses of Cognitive Evaluation in Health
    - Developed by Kushniruk and Patel
    - Origins in the psychology and human-computer interaction literatures
    - Focuses on process:
      - Information processing
      - Organization of information
      - Decision making
    - Qualitative methods
      - e.g. propositional analysis, semantic analysis, verbal protocol analysis
    - Quantitative methods
      - e.g. cognitive activities are observed and counted

  15. Examples of Uses of Organizational Evaluation in Health
    - Rogers' innovation diffusion
    - Technology acceptance model
    - Unified theory of acceptance and use of technology (UTAUT)
    - Kaplan's 4 C's
    - Socio-technical evaluation (industrial engineering, sociology and management)
      - Task-technology fit
    - Social network evaluation (sociology)
  16. Technology Acceptance Model
    - Focus is on:
      - User skill level
      - Perception of usefulness of the system
      - System functional abilities
    - e.g. interviews
    - Iterative throughout the software development lifecycle
  17. Kaplan’s 4 C’s
    - Origins in social interactionist theory
    - Users are active participants in the changes arising from the implementation of systems
    - The 4 C's refer to:
  18. Socio-technical Evaluation
    - Origins in the management and engineering literatures
    - Introduced to health informatics by Berg and Aarts
    - Focus is on task-technology fit:
      - Implications for work processes
      - Quality of clinical outcomes
  19. Social Network Evaluation
    - Origins are in sociology
    - Introduced by Anderson
    - Examines the relationships between clinicians
    - Examines the impact of introducing new technologies such as the EPR upon those networks
  20. DeLone and McLean IS Success model

    - Developed from a synthesis of the literature
      - Some conceptual and empirical papers
    - Adapted to health informatics (Lau et al., 2007)
    - System quality, information quality and service quality:
      - Influence use and user satisfaction
      - Lead to benefits in terms of quality, access and productivity
    - System quality
      - Type and level of DSS
      - Accessibility (distance and availability)
      - Type of features
    - Information quality
      - Completeness, accuracy, relevance and comprehension
      - Timeliness, reliability and consistency of information when and where needed
    - Service quality
      - User training, technical support
    - Use
      - Use behaviour and pattern
        - Frequency, duration etc.
      - Self-reported use
        - Frequency, duration etc.
      - Intention to use
        - Proportion of and factors for current non-users to become users
    - User satisfaction
      - Ease of use
    - Net benefits
      - Patient safety
      - Appropriateness and effectiveness
      - Health outcomes
      - Ability of patients and providers to access services
      - Patient and caregiver participation
      - Care coordination
      - Net cost
    - (Lau et al., 2007)
  21. Total Evaluation and Acceptance Methodology
    - Introduced by Grant
    - Examines role, time and structure
    - Integrates evaluation throughout the software development lifecycle
    - Research methods:

  22. Usability Engineering and the Software Development Lifecycle
    - Origins in the computer science, engineering and psychological literatures
    - Usability engineering is conducted throughout the software development lifecycle
    - Research methods include:
      - Cognitive task analysis
      - Focus groups
      - Usability testing
  23. Evaluation in Health Informatics
    - Evaluation in health informatics is both:
      - Generic
      - Discipline specific
    - Can help with developing an understanding of health information system evaluation
  24. Quantitative versus Qualitative Evaluation Methods
    - Evaluation involving healthcare information systems can have a number of motivating objectives, such as determining if a new application can:
      - Prevent disease
      - Help patients to better self-manage their chronic illness
      - Improve patient outcomes
      - Improve health care processes
      - Reduce health care costs
      - Improve the timeliness of receiving laboratory or diagnostic imaging results
      - Improve patient safety
      - Improve user satisfaction
  25. Quantitative and Qualitative Evaluation
    - Your evaluation question will influence your use of specific evaluation approaches
    - Some evaluation questions may lead to the use of qualitative evaluation approaches
    - Others will lead to the use of quantitative evaluation approaches
    - Or both: mixed method studies
    - Qualitative evaluation methods emphasize the study of:
      - Verbal descriptions
      - Human actions and interactions
      - Human behaviour
    - Quantitative evaluation methods emphasize the study of:
      - The frequency of occurrence of human actions and interactions

  26. Qualitative Evaluation: Interviews, Focus Groups and
    - Use data collection methods such as:
      - Interviews
      - Focus groups
    - Generic data collection methods
      - Also part of other qualitative research methods
        - Grounded theory
    - Interviews and focus groups:
      - Are qualitative evaluation methods
      - Provide detailed descriptions
        - In some cases visual (video clips)
      - May be used to understand:
        - Cultural practices
        - Social practices
        - Actions and interactions
        - Individuals' experiences
        - Individuals' worlds
      - Involving health information technology
  27. When Should Interviews and Focus Groups be Used?
    - When a researcher is initially attempting to describe what is happening
    - When there are issues that are not easily partitioned
    - When the dynamics of a process, culture or social setting (rather than its static characteristics) must still be described

  28. Strengths of Interviews and Focus Groups
    - Help to understand:
      - The meaning and context of the issue being studied
        - e.g. why users are or are not satisfied
      - Events as they occur over time
        - e.g. events that lead to a successful information system implementation
      - The impact of a health information system or application
        - Intended or unintended consequences
        - Desired and undesirable consequences
    - How technology changes/affects interactions between health professionals and patients, work processes, and communication between individuals
  29. Interviews and Focus Groups: Getting Started
    - Evaluation questions
      - "What", "How", and "Why" questions instead of hypotheses
      - The fundamental question is often "What is going on here?"
      - The question can be progressively narrowed
        - e.g. "Why are doctors dissatisfied with the health care information system?"
  30. Interviews
    - Used in health informatics
    - Origins of the technique: sociology
    - Used in evaluation to learn about the interviewee's perspective
    - Three main types of interviews:
      - Semi-structured ***
    - *** main ones used in evaluation
  31. Semi-structured Interview Questions


    - The interviewer attempts to:
      - Focus in on one or two areas of interest
      - Delve into greater detail about their areas of interest to obtain additional descriptions
    - The interviewer or interviewee can:
      - Diverge from the main questions in the interview
      - Obtain more detail about an idea expressed by the interviewee while maintaining the focus of the interview on the topic of interest
    - Probes (or prompts) in the study of system problems with a patient record system:
      - Are key to understanding current use and variation in use
      - Are key to understanding the underlying rationale or reasons for use
    - Probes act as a "script" for driving the interviews
    - Help to learn more about the key issues you want to evaluate
    - Examples of interview probes:
      - How often do you use the current system?
      - For what purposes?
      - Have you had any problems using the system? (if yes, ask to describe each)
      - Do you have suggestions for improvement?
      - Do you have any other comments or general thoughts about the system?
      - Can you tell me more about your concerns?
  32. Sampling for Interviews
    - Dependent on the type of evaluation questions
    - Statistical representativeness is not sought
    - Several factors influence sample size:
      - Depth of interview
      - Duration of interview
      - Quality of the interviewee
      - Saturation (more to come)
    - Participants/subjects "self-select"
    - Saturation is often reached with approximately 10 individuals interviewed

  33. Gaining Entry
    - Begins with the evaluator gaining access to participants
    - Recruitment may take the form of:
      - Verbal invitations to participate
      - Postings to websites and listservs

  34. Conducting Interviews
    - Interviewers must be:
      - Sensitive to the language used by participants
      - Flexible during the interview
    - The interviewer's role is to:
      - Explore what participants say about a topic
      - Uncover new ideas
      - Check they have understood participants' comments
        - Often done through probing

  35. Interviews: Methodological Issues

    - Interviewee may wish to please the interviewer with their responses
    - Interviewer directiveness
      - Interviewers may try to impose their own perceptions on the interview, which may lead to leading questions
    - The interviewer must monitor their interviewing technique:
      - How directive they are
      - Whether they are asking leading questions
      - Whether they are providing enough time for participants to explain their thoughts, ideas, actions and underlying rationale

  36. Data Collection
    - Audio taping
    - Video taping
      - Need to bring together what is being looked at with what is being said
      - Example: investigating user satisfaction with a health care application
    - Most will agree to audio taping
    - Equipment failure
      - Always test your equipment before you do an interview
    - Written notes
      - In some cases interviewees will not want to be audio or video recorded
        - There may be reasons for this
      - Allow for data collection in such cases
      - Interfere with the interview process
      - May lead to details being left out
      - May be biased by interviewer perceptions when collecting data
    - Transcription
      - Six to seven hours to transcribe one hour of audio recorded material
      - Value of transcribing your own data:
        - Helps with coding
        - Helps with identifying new and emergent themes
        - Gives you an idea of when saturation is beginning to occur
        - Helps you to evaluate your interview style and ability to cover all questions
    - Data takes the form of words
    - Units of analysis include:
      - Video clips (if video and audio data are recorded)

  37. Data Analysis
    - Coding scheme
      - Principled form of analysis
      - Look for patterns in the data
    - Codes:
      - Arise from the data
      - Are defined by the researcher
      - Are topics or concepts
    - Codes are grouped into categories
    - Researchers may find relationships between the categories
    - Involves segmenting data into units
      - Codes are units of analysis and may take the form of a concept, topic or theme
      - Then categorizing them
    - Facilitates the development of:
      - New insights
      - Comparisons
        - Insights into variation in response to health information system use

  38. Data Analysis: Coding
    - Codes are assigned to parts of the transcript (i.e. segmented units of analysis)
    - Many methods of coding:
      - Content analysis, grounded theory and ethnography
    - Content analysis
      - Most flexible method of analysis
      - Conventional content analysis
        - A data analysis method where data is scanned for concepts, themes and/or categories that have meanings
      - Directed content analysis
        - Coding based on existing theory
        - Allows for theory:
    - (Hsieh & Shannon, 2005)

  39. Development of Coding Schemes
    - Coding schemes ensure the analysis is:
    - Identifies events of interest
      - e.g. user problems
      - e.g. issues associated with using the system
    - When do you stop coding?
      - Coding is done until saturation is reached:
        - "refers to a situation in data collection whereby the participants' descriptions become repetitive and confirm previously collected data"
        - Determines the sample size is adequate (usually 10 participants)
      - (Jackson & Verberg, 2007)

  40. Findings
    - Expressed by:
      - Outlining and defining the codes (usually as concepts or topics)
      - Describing the categories that emerged from the data
      - Reporting on the relationships between the categories
      - Reporting frequencies and percentages of codes belonging to particular concepts or topics
      - Quoting participant interviews as representative examples of a concept
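Reporting frequencies and percentages of codes is a simple counting exercise. A minimal sketch in Python, using hypothetical codes assigned to interview segments:

```python
from collections import Counter

# Hypothetical coded interview segments: each unit of analysis was
# assigned one code during qualitative analysis.
coded_segments = [
    "usability", "training", "usability", "workflow",
    "usability", "training", "workflow", "usability",
]

counts = Counter(coded_segments)
total = sum(counts.values())
for code, n in counts.most_common():
    print(f"{code}: {n} ({100 * n / total:.0f}%)")
# prints: usability: 4 (50%), training: 2 (25%), workflow: 2 (25%)
```

Frequencies like these support the reporting style described above; quotes from participants would then illustrate each code.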
  41. Focus Groups
    - Form of group interview
      - Involves key stakeholders or those working with health information systems
    - Conducted as a group meeting
      - Ideal size: usually 6 people/group
      - Mini-focus group: 3-4 people/group
    - 4 focus groups with each type of stakeholder
      - e.g. physician, nurse etc.
      - To obtain saturation
    - (Krueger & Casey, 2000)
    - Role of the facilitator:
      - Elicits information about:
        - Reactions to health information or products
      - Follows the script that has been decided upon in advance
      - Ensures issues are discussed and questions are asked
      - Presents ideas or demonstrates artifacts and asks for immediate reactions
      - Ensures no one dominates the discussion
      - Stays close to the issues to be covered (focus)
      - Asks questions about how users do things, what they have done in the past etc.
      - Gets opinions, attitudes, preferences and reactions
      - Validates the data:
        - At the end of the focus group, the facilitator presents a summary of the findings from the discussion
        - Allows the group to verify and further comment

  42. Limitations of Focus Groups
    - May not be representative of what people do in the real world
    - May be dominated by a few individuals
    - A lot of what people do is automatic, so they forget to mention it in the group
    - Supervisors and managers may be invited
      - May introduce bias

    Overcoming the Limitations of Focus Groups
    - Build in some task work
      - e.g. can pass around products and scenarios
      - e.g. can provide screen shots of a prototype
    - Ensure users are part of the focus groups
  43. Observations
    - Involves intensive observation of a group, culture, community or organization
    - Aim is to capture:
      - Subjective human behaviour
      - Objective human behaviour
    - Suitable for research dealing with:
      - Specific settings
      - Demographic factors (e.g. indicators of socioeconomic status)
    - (Angrosino, 2007)
  44. The Process of Observing
    - Site selection
      - May be selected in order to respond to a:
    - Gain entry to observe
      - May involve speaking to gatekeepers
    - Observation may begin
  45. Validity and Reliability of Observation
    - Observations are susceptible to subjective interpretation
    - Issues are:
      - Reliability: observations are consistent with a general pattern and not by random chance
      - Validity: a measure of the degree to which an observation actually demonstrates what it appears to demonstrate
    - Addressed through:
      - Multiple observers or teams of observers
        - Cross checking and inter-rater reliability
        - Consensus of the group prevails
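Inter-rater reliability between two observers is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical ratings (the category labels are illustrative):

```python
# Cohen's kappa for two raters coding the same observations
# (hypothetical ratings; categories are arbitrary labels).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = ["problem", "ok", "problem", "ok", "ok", "problem", "ok", "ok"]
b = ["problem", "ok", "ok",      "ok", "ok", "problem", "ok", "problem"]
kappa = cohens_kappa(a, b)  # 1.0 = perfect agreement, 0 = chance-level
```

Low kappa values flag observations where cross checking and group consensus, as described above, are needed before the data are trusted.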

  46. Positivistic Approach
    - Used to:
      - Test theory
      - Determine the effects of one variable upon another:
        - Decision support (variable)
        - Behaviour/process (variable)
        - Outcome (variable)
  47. What is an Experiment?
    - Common elements:
      - Key elements:
        - Independent variable
        - Dependent variable
      - Other elements (important to consider):
        - Confounding variables
  48. Subject
    - Individual who is studied to gather data for a study
    - Link to unit of analysis:
      - Physician office
      - Health system
  49. Independent Variable
    - A variable that has been selected as having influence upon a dependent variable
    - The "cause" in a "cause and effect" relationship
    - (Jackson & Verberg, 2007)
  50. Dependent Variable
    - A variable that is influenced by other variables
    - The "effect" in a "cause and effect" relationship
    - (Jackson & Verberg, 2007)
  51. Control Variable
    - Variables that are taken into account when designing a study, e.g.:
      - Organizational culture
      - Experience in working with an electronic patient record
      - Disciplinary experience
      - Domain experience
    - (Borycki et al., 2008)
  52. Confounding Variables
    - Variables that can unintentionally obscure or enhance relationships, e.g.:
      - Prior use of the electronic patient record under study
      - Budget cuts
  53. Random Variables
    - A variable that varies in ways the evaluator/researcher does not control
      - May impact the dependent variable
  54. What is an Experiment?
    - A study in which the evaluator/researcher has control over:
      - Some of the conditions in which the study takes place
      - Some aspects of the independent variables being studied
  55. True Experimental Design
    - Gold standard in research/evaluation design
    - Best approach for assessing "cause and effect" relationships
    - Relies on:
      - Random assignment
      - Repeated measures
    - First used in agriculture in the early 20th century
    - Features include:
      - Comparison of an experimental group to a control group
        - Experimental group: receives the treatment or intervention
        - Control group: sometimes referred to as usual care in clinical trials
  56. Two Basic Types of “True” Experimental Designs
    - Between subjects design
      - Each group of subjects is exposed to a differing level of treatment
      - Comparisons can be made between the experimental (or treatment) group and the control (or usual care) group
      - Experimental (i.e. treatment) and control subjects should be equivalent before the treatment begins
        - Tests can be done to determine if the groups are equivalent
      - Equivalency can be addressed through randomized assignment
        - Subjects are assigned to treatments by chance, or assignment is "randomized"
        - Usually involves a coin toss or a table of random numbers
      - Result is:
        - Experimental group: exposed to the treatment intervention
        - Control or usual care group: exposed to a placebo, neutral treatment or usual care
      - If done correctly:
        - Random assignment creates two or more groups of subjects that are probabilistically similar to each other (on average)
        - The two groups are equivalent
        - Outcome differences between the groups are likely due to the treatment and not due to differences between the groups
      - When certain assumptions are met:
        - Yields an estimate of the size of a treatment effect that has desirable statistical properties
    - Within subjects design
      - Each subject is exposed to differing levels of the treatment variable
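Randomized assignment as described above can be sketched in a few lines of Python. The subject IDs and group size are hypothetical:

```python
import random

# Randomized assignment sketch: shuffle the subject pool, then split
# it into equal-sized experimental and control groups (hypothetical IDs).
subjects = [f"subject_{i}" for i in range(20)]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
rng.shuffle(subjects)

half = len(subjects) // 2
experimental = subjects[:half]   # receives the treatment/intervention
control = subjects[half:]        # receives usual care or a placebo

assert not set(experimental) & set(control)  # groups are disjoint
```

Shuffling plays the role of the coin toss or table of random numbers: each subject has the same chance of landing in either group, so the groups are probabilistically similar on average.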

  57. Methods of Achieving Subject Equivalence

    - Random assignment
      - Subjects are randomly assigned to either the treatment or the control group
    - Subjects act as their own controls
    - Matching
      - Subjects are matched on factors that are considered important to the study
      - (e.g. matching on sex, socioeconomic status, grades)
    - Blocking
      - Subjects are grouped together on some variable that needs to be controlled, and the subjects are then randomly assigned to treatment or control groups
    - Measuring baseline stability
      - When control and experimental groups are equivalent but not on the dependent variable
      - Baseline data is collected on the dependent variable
      - Comparisons are made as measures of the dependent variable are repeated throughout the course of the study

  58. Within Subject Designs
    - Provides additional control over random and control variables through constancy of subjects
      - i.e. each subject acts as his or her own control
      - e.g. each subject works with a paper record and also an EMR
  60. Controlling Variance in Experimental Designs
    - Every effort should be made to control variance in true experimental designs:
    - Systematic or experimental variance
      - Attempts must be made to enhance the relationship between the experimental conditions
    - Extraneous variance
      - Nuisance or unwanted variance arising from factors other than the independent variable that could lead to differences between the groups
      - Controlled by:
        - Building the extraneous variable into the design
        - Holding variables constant by selecting a homogeneous population
        - Matching subjects on one or more extraneous variables
        - Statistical control of variance (e.g. ANCOVA)
    - Error variance
      - Variability of measures due to random fluctuations in measurement error
      - Controlled by:
        - Controlling the experimental conditions (e.g. setting and instructions)
        - Ensuring reliability of the measurement instruments
        - Training the data collectors and determining inter-rater reliability
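The split between systematic (between-group) variance and error (within-group) variance can be illustrated with a hand-rolled one-way ANOVA F ratio. The outcome scores below are hypothetical:

```python
# One-way ANOVA sketch: partition variance into systematic
# (between-group) and error (within-group) components (hypothetical data).
def f_ratio(groups):
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Systematic variance: how far each group mean sits from the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Error variance: spread of scores around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

control = [10, 12, 11, 13]      # hypothetical outcome scores, control group
treatment = [15, 17, 16, 18]    # hypothetical outcome scores, treatment group
F = f_ratio([control, treatment])  # large F: systematic variance dominates error
```

Everything listed above (reliable instruments, trained data collectors, constant conditions) shrinks the within-group term, making a true treatment effect easier to detect.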

  61. Strengths and Weaknesses of True Experimental Designs
    - Strengths of the design:
      - Best method of controlling variance (taking into account the factors that contribute to differences)
      - Provides the most convincing evidence for demonstrating causal relations among variables (internal validity)
    - Weaknesses of the design:
      - Limits generalizability
        - Subjects (sample usually not representative)
        - Unnatural/laboratory setting (although this can be addressed in the study design)

  62. Pseudo-experimental (Quasi-experimental) Designs
    - Used to test descriptive causal hypotheses about manipulatable causes
    - Similarities to true experiments:
      - Have control groups
      - Have pre-test measures
    - Main differences:
      - Lack of random assignment
      - Assignment is by self-selection
        - Subjects choose a treatment for themselves
        - Or an administrator chooses a treatment for the subjects
        - (e.g. one unit receives the EPR, the other does not)
    - Evaluators/researchers still control:
      - Selection of measures
      - Scheduling of measures
      - How non-random assignment is executed
      - The kinds of comparison groups with which treatment groups are compared
      - Some aspects of how the treatment is scheduled
    - Limitations:
      - Control and treatment groups may differ in systematic ways other than the presence of the treatment
      - Need to worry about ruling out alternative explanations for the observed effects on the dependent variable
      - Need to use logic, design, and measurement to prevent other explanations for any observed effects
      - Need to generate and recognize the presence of other possible alternative explanations
  63. Types of Quasi-experimental Designs
    - Pre-test/post-test designs
    - Exposed/comparison designs
    - Pre-test/post-test procedure:
      - Measure the group on the dependent variable
        - e.g. number of flu shots given
      - Expose the group to the independent variable
        - e.g. the EMR
      - Measure the dependent variable again
        - e.g. number of flu shots given
    - Assumption is that exposure to the independent variable results in a change in the dependent variable
    - Strength of the design:
      - Ability to clarify causal relationships
    - Weakness of the design:
      - External validity
        - Results may not be easily generalizable to other persons, populations or settings
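The pre-test/post-test comparison above amounts to measuring the mean change in the dependent variable across the exposure. A minimal sketch; the counts are hypothetical:

```python
# Pre-test/post-test sketch: mean change in the dependent variable
# (e.g. flu shots given per clinic) before and after EMR exposure.
# All numbers are hypothetical.
pre = [22, 18, 25, 20, 19]    # pre-test: before the EMR (independent variable)
post = [27, 21, 30, 24, 23]   # post-test: after the EMR

changes = [b - a for a, b in zip(pre, post)]
mean_change = sum(changes) / len(changes)
# A positive mean change is consistent with (but does not prove) an
# effect of the exposure; history, maturation etc. must be ruled out.
```

Because there is no control group, any rival explanation for the change (the threats to internal validity listed in the next card) remains on the table.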
  64. Pre-test/Post-test Experimental Design
    - Threats to internal validity:
      - History
        - Current events in addition to the independent variable may influence variation in the dependent variable
      - Maturation
        - Any changes that occur in subjects over the course of an experiment may influence the outcome of the experiment
      - Response bias
        - Asking identical questions at both tests can influence responses
      - Instrument decay
        - Test-retest reliability
      - Statistical regression
        - When a sample is selected on the basis of extreme scores, it will tend to show statistical regression towards less extreme scores
    - Therefore, there is a need to actively analyze the data to rule out the above

  65. Exposed/Comparison Group Design
    - One group is exposed to the treatment and the other is not
      - e.g. one physician practice receives an EMR and the comparison group doesn't
    - Comparisons are made between the groups
    - Strength of the design:
      - Ability to clarify causal relationships
    - Weaknesses of the design:
      - If the groups were not equivalent at the outset of the study, rival variables may be the cause of group differences
      - Subjects select themselves into a study
      - Subjects select themselves out of a study
        - Withdraw from the experiment
      - Lack of control over variance
  66. Natural Experiment
    nNaturally occurring contrast between a treatment and a comparison condition

    nTreatments are often not manipulable

    nOther plausible causal influences must be considered for the differences between the treatment and comparison conditions
  67. Limitations of Experimental Designs
    nMost experiments are highly local but have general aspirations

    nConstruct validity

    –Causal generalization (moving from abstract concepts to data)

    nExternal validity

    –Inferring whether a causal relationship holds across variations in persons, settings, treatments and outcomes (e.g. results from studies of software in differing hospitals)

    nSampling and causal generalization

    –Random selection involves selecting subjects to represent a population
  68. Research Questions
    nHow does information technology’s structuring of work processes influence an information seeker’s:

    –Choice of key information sources?

    –Choice of type of information?

    –Selection of information seeking tactics?

  69. Survey
    •Common form of evaluating the impacts of health information systems

    •Involves gathering information from a sample population using questionnaires (that may include developed measurement instruments)
  70. Surveys: From a Scientific Perspective…
    •In order to ensure the generalizability of survey studies one must ensure that:

    –The sample is representative of the population

    –Or the survey is administered to the entire population

    –There is at least a 60% response rate
  71. Survey: Collecting Data
    •Primary method for collecting data is the survey questionnaire

    –May include developed, reliable and valid measurement instruments

    •New measurement instruments should not be developed if existing ones allow you to measure what you intend to study

    –Existing measurement instruments allow you to compare across populations

    •A number of instruments have been used to study aspects of technology evaluation

    –See Anderson’s text (available through e-books)
  72. Why Use Existing Measurement Instruments in Surveys?
    •No need to reinvent the wheel

    •Existing measurement instruments have been developed by researchers

    •Researchers have established the reliability and validity of these measures

    •Allow for comparisons across studies

  73. In Health Informatics, Surveys have been used to study:
    •Social impacts of computers

    •Satisfaction (general and end user)

    •Decision making

    •Perceptions of productivity

    •Social interaction

    •Job enhancement

    •Work role changes

    •Work environment

  74. User Reactions to Computers and Implementation: General Satisfaction
    •Surveys have been used to assess user satisfaction

    •For evaluators this is often the starting point for assessing the value of varying health information systems

    •Useful before or at the outset of an implementation
  75. Surveys: End-User Satisfaction
    •Scales have been developed to measure varying aspects of end-user satisfaction with the health information system:

    –Ease of use

    •Such scales specifically help to:

    –identify potential areas of dissatisfaction with health information system interfaces

  76. Surveys: Innovation Process, User Adaptation
    •Some surveys allow one to focus on:

    –the implementation process itself

    –how an innovation is adopted

    •This can be especially helpful if you:

    –pilot test an implementation on one unit

    –need feedback as to how to better facilitate adoption of the system on other units

    •User adaptation has been studied in terms of:

    –Employee attitudes towards adaptation

    –Behaviours that indicate adaptation

    –Current research suggests employees who have ambiguous jobs are more resistant to change

    •Such types of work are typical in health care

    •e.g. physicians, nurses, occupational therapists
  77. Surveys: Provider-Patient Interactions
    •A frequent concern is that health information systems will have a detrimental effect on provider-patient interactions

    –Patient rapport

    –The depersonalization of the patient-provider interaction

    •Survey studies examining patient-provider interactions have been used to assess whether:

    –patients perceived a change in patient-physician communication

    –health professionals’ communication with other providers changed

    –health information systems influenced perceived privacy and confidentiality

  78. Surveys: Characteristics of Individual Users
    •Characteristics of individual users can help system implementers predict individual attitudes toward an information system

    •For example, research suggests individual:

    –Job tenure

    –Previous computer experience

    –Can lead to more positive or negative attitudes in different settings

    •Identifying individual user characteristics in advance will help you to address user attitudes and perspectives through education about the system
  79. Survey: Personality
    •Cognitive Style

    –Characteristic modes of functioning as shown by individuals in their perceptual and thinking behaviour may influence health information system use

    –For example, Aydin found “feeling types” use computers less often than “thinking types”

    –Understanding cognitive style will help you to tailor health information system design to user needs

    •Learning Style

    –Preferences for specific types of learning styles may influence learning approaches

    –Use of lectures, readings and CBTs in computer training will vary according to learning style and will help you plan your educational support

  80. What are Surveys?
    •Surveys are a form of quantitative evaluation

    •Although they can have qualitative origins

    –e.g. qualitative interviews can be used to generate initial items
  81. Advantages of Using Surveys
    •Allow you to collect information from a large number of individuals

    •Obtain early insights into individual perceptions:

    –About the quality of information technology

    –About the impact of information technology upon work

    –To collect demographic information

    –To collect other types of quantitative information

    •e.g. “How many orders do you enter a day?”

    •“What is your educational background?”
  82. Limitations of Surveys and their Questions
    •Closed-ended questions do not allow for elaboration

    –e.g. Do you like computers?

    •Yes or No

    •Surveys are not well suited to learning about processes, workflows or techniques (e.g. to answer “How do you do this process?” it is better to interview or observe)

    •Surveys are also less useful for collecting other types of qualitative information (e.g. system usability, user-interface needs)

    •Alternatively, open-ended questions (i.e. ones that encourage elaboration) tend not to get returned because individuals find it too effortful to complete the questionnaires

    –e.g. Could you tell me about your experiences with physician order entry?
  83. Advantages of Closed-ended Questions
    •Alternatives are uniform

    •Responses are uniform

    •Less demand is placed on respondents

    •Respondents make their own judgments

    •Recording is simplified

    •Data entry and analysis are simplified
  84. Disadvantages of Closed-ended Questions
    •Inadequate response categories

    •Superficiality of responses

    •Tedium of going through long lists of responses

    •Inappropriateness of long lists of alternatives
  85. Advantages of Open-ended Questions
    •May provide greater clarity about a topic

    •Allow one to learn about new or unknown influences on your project
  86. Disadvantages of Open-ended Questions
    •Responses may vary considerably

    •Lack of comparability of answers

    •Vagueness of answers

    •Recording – people don’t like to write or type much

    •Coding and summarizing are a problem

    •Require greater respondent involvement
  87. Modes of Delivering Surveys
    •Questionnaires are often given:

    –on paper

    –by telephone

    –by email

    –deployed over the WWW

    •e.g. Survey Monkey and Fluid Surveys

    •US Patriot Act considerations
  88. Some New Modes of Delivering Survey Questionnaires
    –Web based tools for creating questionnaires

    •Survey Monkey and Fluid Surveys

    –Can be used to collect results

    •puts results into a database you can access over the Web

    •supports survey design (including looping and branching)

    •supports data collection (e.g. pop-up surveys at key points)

    •supports data analysis (e.g. graphs and reports)
  89. Surveys: Some Important Considerations When Designing Questions
    •Word selection

    •Focus of questions

    •Biasing or leading questions should not be used

  90. Survey Issue of Response Rate
    •You can send out a questionnaire (or post it on the web) but problems with response rate may arise

    –e.g. if you get a very low response rate (e.g. 10%), then you might get bias from those who do reply

    •Often you may not even get a response rate of 50%

    •Ideal is 60% to be representative

    •Response rates depend on how surveys are administered

    –e.g. if sent out to people you don’t know in a mass mailing, versus giving questionnaires to people in a specific study situation, you will have differing response rates
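A minimal sketch of the response-rate arithmetic above, using hypothetical counts, checked against the 60% rule of thumb:

```python
# Hypothetical survey administration (invented numbers for illustration).
surveys_sent = 250
surveys_returned = 162

# Response rate = returned / sent; 60% is the ideal mentioned above.
response_rate = surveys_returned / surveys_sent
meets_threshold = response_rate >= 0.60

print(f"{response_rate:.1%}")  # 64.8%
print(meets_threshold)         # True
```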
  91. Survey Issue of Length
    •How long should a survey questionnaire be?

    •Answer – not too long and not too short!

    –If too short you may not be getting the information you want

    –If too long, people may give up answering all the questions, or just start filling out bogus answers to get through

    –If on the Web, you may want a major section to be about a page (or maybe longer with scrolling) but not too many pages!
  92. Organization of Survey Questionnaires
    •Questionnaires (or surveys) should be logically ordered and may have many sections

    –e.g. a section to obtain background demographic data using closed-ended questions, followed by a section with Likert scale questions about preferences, then a final section on usability etc.
  93. Social Network Analysis
    •“A methodological approach that allows one to analyze the relationships among entities”

    –e.g. people, departments and organizations

    –(Anderson, 2005)

  94. Why Social Network Analysis?
    •Allows for the study of:

    –patterns of interactions

    •Patterns of interactions among people, departments, organizations and so on

    –Individuals who are embedded in social networks

    –Emerging from social network structures

    –Individuals’ attitudes, norms and behaviours in response to direct and indirect exposure to individuals in a network

    »(Anderson, 2005)

  95. What is a Social Network?
    –A set of ties between actors

    –Actors: persons, organizations, departments, teams etc.

    –Ties: relationships (friendships, contracts, marriages etc.)

    •(Anderson, 2005)

  96. Key Definitions in Network Analysis
    •Centralization

    –Degree to which the network revolves around a single node

    •Cutpoint

    –a node that if removed would break the network

    •Bridge

    –a tie that if removed would break a relationship

    •Density

    –Number of ties expressed as a percentage of the number of ordered/unordered pairs

    •(Anderson, 2005)

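The density and cutpoint definitions above can be illustrated on a small hypothetical network (node names and ties are invented), using only the Python standard library:

```python
# Hypothetical undirected network: 5 actors, 5 ties.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")]
nodes = {n for e in edges for n in e}

# Density: number of ties as a fraction of possible unordered pairs.
n = len(nodes)
density = len(edges) / (n * (n - 1) / 2)

def connected(nodes, edges):
    """Breadth-first check that the given nodes form one component."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        v = frontier.pop()
        if v not in seen:
            seen.add(v)
            frontier.extend(adj[v] - seen)
    return seen == nodes

# A cutpoint is a node whose removal disconnects the network.
cutpoints = sorted(v for v in nodes if not connected(nodes - {v}, edges))

print(density)    # 0.5  (5 ties out of 10 possible pairs among 5 nodes)
print(cutpoints)  # ['B', 'D'] -- removing B isolates A; removing D isolates E
```

Both B and D are cutpoints here because each is the only link to a peripheral actor; by the same removal logic, the tie D–E is a bridge.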
  97. General Social Network
    •We attach values to ties by representing their quantitative attributes such as:

    –Strengths of relationships

    –Information capacity of a tie

    –Rates of information flow or traffic across a tie

    –Distance between actors

    –Probability of passing on information

    –Frequency of interaction

    (Source: Borgatti, Steve webtext: www.analytech.com/borgatti)

  98. Network Perspective
    (Source: Borgatti, Steve webtext: www.analytech.com/borgatti)
    •Relationships vs. Attributes

    –Individual characteristics are limited

    –People influence each other and ideas flow

    –We depend on each other

    •Structure vs. Composition

    –It’s not just the elements of a system but how they fit together

    –Non-reductionist, holistic, systemic



  99. Social Networks
    •Patterns of relationships:

    –Patterns of downward, upward, horizontal and diagonal flows of communication and information

    –Both with and without the use of information technology

    –(Anderson, 2005)
  100. Social network analysis is based on the premise that…
    •Individuals are influenced by direct and indirect exposure to other persons’ attitudes and behaviours

    •By access to resources and information in a network

    •(Anderson, 2005)
  101. Some Common Social
    Network Patterns
    •Five common patterns:





    –All Channel

    »(Anderson, 2005)
  102. Social networks…
    •Consider patterns of relationships among members of the organization

    •Can be used to identify different patterns of relationships within and between occupational groups, departments and organizations

    •To analyze the effects that these patterns have on individual member’s


  103. Social Network Analysis
    •Study of the pattern of relationships among people, departments, organizations etc.

    –Physicians consult with one another about a patient’s illness

    –Physicians interact with nurses, pharmacists and other health professionals in providing care

    •Possible as physicians, clinics, hospitals, medical laboratories, home care agencies and insurance companies may all share a common EPR

    •Four elements of social network analysis:

    –Units that comprise the network

    –Type of relations among the units

    –The properties of the relation

    –The level of the analysis

  104. Levels of the Network
    •Several levels of the network can be analyzed:

    –Ego networks

    •Each individual unit or node is involved in a network that comprises all other units with which it has relations and the relations among these units

    –Dyads

    •A pair of units

    –Triads

    •Three units

    –Can have more units

    –(Anderson, 2005)
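The ego-network level described above (a focal unit, all units it has relations with, and the relations among those units) can be sketched as follows; the actors and ties are invented for illustration:

```python
# Hypothetical ties among care providers (invented names for illustration).
edges = [("dr_smith", "nurse_lee"), ("dr_smith", "pharm_ito"),
         ("nurse_lee", "pharm_ito"), ("pharm_ito", "lab_wong")]

def ego_network(ego, edges):
    """Return the ego network: the focal node, its direct contacts
    (alters), and only the ties among those members."""
    alters = ({b for a, b in edges if a == ego} |
              {a for a, b in edges if b == ego})
    members = alters | {ego}
    ties = [(a, b) for a, b in edges if a in members and b in members]
    return members, ties

members, ties = ego_network("dr_smith", edges)
print(sorted(members))  # ['dr_smith', 'nurse_lee', 'pharm_ito']
print(len(ties))        # 3
```

Note how the tie to lab_wong drops out: it lies outside dr_smith's ego network, which is exactly the boundary the ego-network level draws.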