
Improving Health Care Quality With Outcomes Management

Allen W. Heinemann, PhD
William P. Fisher, Jr.
Richard Gershon, PhD


More than 1 million people in the United States have sustained loss of a limb caused by injury, diabetes, cancer, vascular disease, infection, congenital conditions, or other diseases.1 In 1996, most persons losing limbs were male (69.5%) and white (92.5%).2 The leading causes of limb loss between 1988 and 1996 were dysvascular (82.5%), followed by traumatic (16.5%), and cancer-related (1%), with most amputations (86%) involving lower extremities.3 Almost 200,000 persons in the United States used an artificial limb in 1994, and about 87% of those limbs involved a lower extremity.2

A significant increase in the number of persons requiring orthoses and prostheses is expected in the coming years, just as the number of professional orthotists and prosthetists (O&P) declines significantly.1 The divergence in O&P supply and demand requires a paradigm shift in the service delivery system and enhanced quality improvement efforts.

Health tracking has been shown to return $1.44 to the federal government alone for every federal dollar invested.4 The American public and economy would likely realize a considerably higher return once potential reductions in insurance costs, avoided losses in workplace productivity, and the human value of improved health are considered. The need to measure and evaluate rehabilitation practice in general, and orthotics and prosthetics practice specifically, has received growing recognition in the past several years.5–8 Fuhrer5 outlined recommendations for medical rehabilitation outcomes research generated at a 1994 conference organized by the National Center for Medical Rehabilitation Research (NCMRR). Critical to NCMRR's agenda, and reiterated throughout the report, is the need for valid, reliable, and change-sensitive outcome measures to evaluate the efficacy and effectiveness of rehabilitation practices. The American Board for Certification in Orthotics, Prosthetics & Pedorthics (ABC) echoes this call by encouraging outcomes measurement and clinical pathways within the context of a continuous quality improvement process.6,7 ABC's quality assessment and improvement standard states: "There is an ongoing quality assessment and improvement program designed to objectively and systematically monitor and evaluate the quality and appropriateness of patient care, pursue opportunities to improve orthotic and/or prosthetic care and resolve identified problems."9

Despite the ongoing quality efforts and interests of providers and accreditation agencies, health care has not yet reproduced the quality revolution provoked in the automobile industry by Toyota in the 1970s.10 "Executives within health care organizations, both health plans and providers, generally view quality improvement as an ethical responsibility or a social good, rather than as a business strategy for improved financial performance and competitive market positioning" (p 764).11 There are four reasons health care executives find it difficult to make a business case for quality and to "pursue perfection" in the manner of other industries. First, other industries pursue perfection as an ideal or heuristic goal: they compute mathematical parameters for quality and engineer performance relative to those parameters. Comparable parameters have been proposed in a significant body of health care research but have not yet moved beyond isolated research applications.12,13

Second, other industries produce their desired outcomes, their products, by directly managing their measures. That is, the measures are not indicators separate from the product, something different and apart that represents the outcomes in certain contexts such as accreditation. Rather, the measures themselves are managed. The measuring instrument is the medium of the outcome message. Managing the message and telling the outcome story is then a matter of managing the measure.

Third, other industries, in contrast with health care, share a common frame of reference for evaluating quality. This common frame of reference stems largely from industry-wide shared product definitions. Unfortunately, health care is a $1 trillion per year industry "without a clear measure or definition of its main product."14,15 Without a shared product definition, it is impossible to coordinate industry-wide efforts into focused quality assessment and improvement. Historians of science illustrate how widespread availability of standardized instrumentation is usually a prerequisite for advances in theory and practice.16,17 We have the mistaken perception that technology is a product of science, whereas "historically the arrow of causality is largely from the technology to the science."18,19 The availability of a common language implemented in a stable frame of reference effects a qualitative transformation in our ability to think together in a distributed but collective manner, multiplying and magnifying the impacts of individual ideas by making them more readily accessible and comprehensible.20–22

Fourth, other industries typically control every aspect of the products they produce. In contrast, health care deals 1) with only a limited subset of the factors that ultimately influence an individual's health; and 2) with sequential and usually nonoverlapping samples of the total population of relevant patients. Because patients often live in environments that reinforce negative health behaviors, health care providers often feel hopeless about their effectiveness in promoting basic improvements in population health. Thus, the return-on-investment model most relevant to health care is not the usually assumed quality improvement focus, but one of facilitating cost avoidance while maintaining, and less often improving, quality.23

Quality improvement is fundamentally a matter of determining which resource investments make a difference and which do not.24 Resources invested in producing outcomes, but which cannot be shown to contribute to those outcomes, are effectively wasted resources. However, an overly narrow focus on clinical indicators tends to devalue and disregard vitally important human and social outcomes. The historical difficulty associated with measuring the qualitative and intangible human outcomes of health care has been reduced considerably by advances in fundamental measurement theory and practice.25,26 It is now possible for survey-based instruments to provide objective measures that have the veracity of temperature, time, weight, and length.6,27,28

CENTRALITY OF PATIENT SATISFACTION IN HEALTH OUTCOMES

Accordingly, patient perspectives on the benefits of devices and services, and satisfaction with those services, are widely recognized as important in rehabilitation and in health care generally. Donabedian29 stated "patient satisfaction may be considered to be one of the desired outcomes of care, even an element in health status itself. . .information about patient satisfaction should be as indispensable to assessments of quality as to the design and management of health care systems" (p 1746). Ware et al.30 stated that health status and patient satisfaction are the primary outcomes of interest for rehabilitation care. The greatest challenge, they argue, is the lack of standardization in measures that would allow outcomes to be compared across programs.

Patient satisfaction reflects expectations, prior health care experiences, and quality of health care. Patients might measure high on a satisfaction scale less as a result of quality care than as a result of being impressed with a prosthetist's authoritative opinion, an orthotist's caring touch, or an expensive device. The relevant question to ask is: To what extent will managing this measure contribute to the quality of the processes and outcomes produced? It is possible that a focus on satisfaction could produce high measures bearing no relationship to the quality of the relevant processes and outcomes.

Patients who are active participants in their care, ask informed questions, and contribute to the decision-making process tend to have better outcomes than do those who do not.31–34 Measures of patient-centeredness, or of patient "activation" as it is also called, may be fundamentally important in quality improvement, as has been noted within the O&P industry.34–37

A survey sponsored by the Amputee Coalition of America and reported by the Amputee Resource Foundation of America suggests that many persons needing O&P care feel excluded from the care process: 75% of respondents said they needed more educational materials than were provided, 57% said they received no educational materials at all, and of the 43% who did receive materials, only 15% to 20% of those materials were deemed helpful.3 It seems that satisfaction with quality of care should be operationalized not so much as a function of patients' opinions as of the extent to which patients have been engaged by the provider in the process of care.

Measurement of patient satisfaction typically focuses on patients' opinions in such a way that the objects of concern are not integrally intertwined with the process of care itself. That is, patients are not made the center of concern; consequently, quality is disconnected from anything of importance to the actual outcomes of care. Instead, the clinic or the staff is made the object of concern, in the name of facilitating an administrative focus that conflates the care process with the care outcome. Focusing on the relationship with the patient, by contrast, tracks the established association between patient participation in the care process and the quality of the outcome. Hibbard et al.34 show that the activation continuum is defined by four distinctive aspects of attitude and behavior: 1) belief in the importance of taking an active role; 2) confidence in one's ability to take action; 3) active participation; and 4) persistence in staying the course under stress. These four domains fall in a relatively invariant order across health care providers and consumers. Patients with higher activation measures are more likely to engage in appropriate disease-management tasks and are less likely to access the health care system.

Patients are more likely to be activated when their providers encourage them to be activated.32 Thus, a patient activation measure ought to function as a tool by which health care providers can effectively locate, engage, and move patients along the activation continuum. This kind of functionality is beyond the scope of satisfaction measures; it requires embedding assessment in the practice of care, following the models of integrated assessment and instruction that are becoming commonplace in education.38,39 Rigorously scaled and universally uniform patient-centered outcome measurement has the potential to inform both the valuation of care quality in the most relevant human terms and the evaluation of care quality in the strictest economic terms. For the health care economy to function more in accord with the economies of other industries, common currencies for the exchange of human and monetary value must be scientifically calibrated and deployed.

PRINCIPLES OF QUALITY IMPROVEMENT MEASUREMENT

Four principles of a technically sound, diagnostically relevant, and clinic-based quality improvement system incorporating patient self-assessments can be traced from recent work in measurement theory and alternative assessment methods gaining increasing use in education.38 The four principles assert: 1) the value of clear expectations routinely checked against observations; 2) the need for therapeutic validity; 3) that assessment and care must be patient-centered, but the system must be clinic-based and clinician-managed; and 4) that the evidentiary basis of decision making must be rigorously consistent, unbiased, and comparable across cases.

The first principle is the basic principle of management. We manage what we measure, and what we measure establishes clear expectations as to what should be happening when, where, and with whom. Measurement is most clearly meaningful when numbers consistently represent the same amounts of the construct measured across patients, instruments of a type, clinicians and clinics, time, and space.40,41 When expectations are confirmed or refuted by observations, management relies on measurement to indicate what comes next in the order of things. Instruments valued by the O&P industry must define continua of functional status, quality of life, and patient activation along which patients can measure higher or lower, with the therapeutic goal being to raise the measures to their optimum levels and the quality goal being to optimize the cost/benefit ratio.

Thus, it is essential for the measures to be therapeutically valid, taking us to the second principle. What is assessed must match what is being diagnosed and prescribed. This is, of course, the often overemphasized basic tenet of content validity. We must ascertain that the items included on the patient self-report surveys are relevant and representative of the entire population of items, tasks, and problems that a patient might conceivably encounter. But ensuring representative content must be balanced with construct and consequential validity.41 Highly content-valid items may not work together to produce the consistent quantitative evidence required for interpretable and manageable measures. So in addition to covering the relevant content domain, the items on an instrument must fall in a theoretically meaningful and empirically supported order capable of indicating when a better outcome and higher quality care are achieved.

Clinically valid patient-reported measures require clinicians to be responsible for managing them, our third principle. Clinicians, not patients, are the ones best prepared and situated to make the fullest use of the assessment information in the process of guiding diagnosis and treatment. The interpersonal contact established at the point of care involves the clinician and patient in the context of the need for care. Neither the patient nor any third party has the degree of involvement or the background of training and experience required for responsibly managing these patient-based quality indicators. If quality measurement is to be practical, it must be an integral part of the practice.42,43

To meet the demands of this responsibility, clinicians require interpretable feedback at the point of care, bringing us to the fourth principle. One of the most common and longest-standing criticisms of patient reports concerns the quality of the evidence provided. Not only are patients' perspectives on their own conditions variable and of questionable comparability, the instruments representing those perspectives to clinicians are themselves often poorly designed and inadequately evaluated. As noted above, fundamental measurement theory12,13,25,26 can provide clear and effective guidance in instrument design and calibration, such that data quality and comparability are based on mathematically sound principles of invariance, statistical sufficiency, and parameter separation.
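The fundamental measurement models cited here can be made concrete. In the dichotomous Rasch model,25,26 the probability that person n affirms or succeeds on item i depends only on the difference between the person measure θn and the item calibration δi; a minimal statement of the model is:

```latex
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```

Because the person and item parameters enter only through their difference, comparisons among persons do not depend on which items happen to be administered, and the raw sum score is a sufficient statistic for each parameter; these are the properties of invariance, statistical sufficiency, and parameter separation on which claims of data quality and comparability rest.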

SUMMARY

Continuous quality improvement requires that we first measure in order to manage care processes and outcomes. Health care quality instruments in general, and O&P instruments in particular, are rarely developed with the kind of care and precision taken for granted in other industries. New technologies for precision measurement in health care have recently emerged, creating new opportunities for informing quality improvement efforts. Orthotics and prosthetics could be one of the first areas of health care to employ these technologies, take advantage of the new opportunities, and establish common product definitions and comparable outcome measures, en route to rebalancing the cost–quality equation.

Correspondence to:Allen W. Heinemann, PhD, Rehabilitation Institute of Chicago, 345 East Superior Street, Chicago, IL 60611; e-mail: .


ALLEN W. HEINEMANN, PhD, Professor, Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University; Director, Center for Rehabilitation Outcomes Research, Rehabilitation Institute of Chicago, Chicago, IL.

WILLIAM P. FISHER, JR., Principal Investigator, Metametrics, Inc., Durham, NC; now, Chief Science Officer, Avatar International, Sanford, FL.

RICHARD GERSHON, PhD, Director of Psychometrics, Center for Outcomes, Research, and Education, Evanston Northwestern Healthcare, Evanston, IL.

References:

  1. Amputee Resource Foundation of America. Available at: www.amputeeresource.org
  2. National Limb Loss Information Center. Limb loss in the United States. NLLIC Fact Sheet [Online]. 2002. Available at: www.amputee-coalition.org/fact_sheets/limbloss_us.pdf.
  3. Dillingham TR, Pezzin LE, MacKenzie EJ. Limb amputation and limb deficiency: epidemiology and recent trends in the United States. South Med J 2002;95:875–883.
  4. Public Health Foundation. Return on Investment of Nationwide Health Tracking. Washington, DC: Author; 2001.
  5. Fuhrer MJ. Assessing Medical Rehabilitation Practices: The Promise of Outcomes Research. Baltimore: Paul H. Brookes Publishing Company; 1995.
  6. Hoxie L. Outcomes measurement: A primer for orthotic and prosthetic care. J Prosthet Orthot 1995;7:132–136.
  7. Hoxie L. Outcomes measurement and clinical pathways. J Prosthet Orthot 1996;8:93–95.
  8. Polliack AA, Moser S. Outcomes forum: facing the future of orthotics and prosthetics proactively: theory and practice of outcomes measures as a method for determining quality of services. J Prosthet Orthot 1997;9:127–134.
  9. American Board for Certification in Orthotics, Prosthetics & Pedorthics. Standards of Performance Manual. 2002. Available at: www.abcop.org/Facility_Standards_of_Performance_Manual.asp#14.
  10. Coye MJ. No Toyotas in health care: why medical care has not evolved to meet patients' needs. Health Aff 2001;20:44–56.
  11. Coye MJ, Detmer DE. Improving the quality of health care. Quality at a crossroads. Milbank Q 1998;76:759–768.
  12. Heinemann AW, Linacre JM, Wright BD, et al. Relationships between impairment and disability as measured by the Functional Independence Measure. Arch Phys Med Rehabil 1993;74:566–573.
  13. Fisher WP Jr., Harvey RF, Taylor P, et al. Rehabits: a common language of functional assessment. Arch Phys Med Rehabil 1995;76:113–122.
  14. Kindig DA. Purchasing population health: aligning financial incentives to improve health outcomes. Health Serv Res 1998;33:223–242.
  15. Fryback D. QALYs, HYEs, and the loss of innocence. Med Decis Making 1993;3:271–272.
  16. Rabkin YM. Rediscovering the instrument: research, industry, and education. In: Bud R, Cozzens SE, eds. Invisible Connections: Instruments, Institutions, and Science. Bellingham, WA: SPIE Optical Engineering Press; 1992:57–82.
  17. Wise MN. Precision: agent of unity and product of agreement. Part III. "Today precision must be commonplace." In: Wise MN, ed. The Values of Precision. Princeton, NJ: Princeton University Press; 1995:352–361.
  18. Price D. Of sealing wax and string. In Little Science, Big Science–and Beyond. New York: Columbia University Press; 1986.
  19. Ihde D. Instrumental Realism: The Interface Between Philosophy of Science and Philosophy of Technology. The Indiana Series in the Philosophy of Technology. Bloomington, IN: Indiana University Press; 1991.
  20. Hutchins E. Cognition in the Wild. Cambridge, MA: MIT Press; 1995.
  21. Latour B. Cogito ergo sumus! Or psychology swept inside out by the fresh air of the upper deck: review of Hutchins' Cognition in the Wild (MIT Press). Mind, Culture, and Activity: An International Journal 1995;3:54–63.
  22. Surowiecki J. The Wisdom of Crowds: Why the Many are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York: Doubleday; 2004.
  23. Rosenstein AH. Measuring the benefit of performance improvement and decision support. Am J Med Qual 1999;14:262–269.
  24. Womack JP, Jones DT. Beyond Toyota: how to root out waste and pursue perfection. Harvard Bus Rev 1996;74:140–158.
  25. Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests (reprint, with Foreword and Afterword by B.D. Wright, Chicago: University of Chicago Press, 1980). Copenhagen, Denmark: Danmarks Paedogogiske Institut; 1960.
  26. Bond TG, Fox CM. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Lawrence Erlbaum Associates; 2001.
  27. Fisher WP Jr. Mathematics, measurement, metaphor, metaphysics. Part I: implications for method in postmodern science. Theor Psychol 2003;13:753–790.
  28. Fisher WP Jr. Mathematics, measurement, metaphor and metaphysics. Part II: accounting for Galileo's ‘fateful omission.' Theor Psychol, 2003;13:791–828.
  29. Donabedian A. The quality of care: how can it be assessed? JAMA 1988;260:1743–1748.
  30. Ware PJ, Phillips J, Yody BB, et al. Assessment tools: functional health status and patient satisfaction. Am J Med Qual 1996;11:S50–S53.
  31. Greenfield S, Kaplan S, Ware JJ. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med 1985;102:520–528.
  32. Stewart M. Effective physician-patient communication and health outcomes: a review. Can Med Assoc J 1995;152:1423–1433.
  33. Hibbard JH. Engaging health care consumers to improve the quality of care. Med Care 2003;41:161–170.
  34. Hibbard J, Stockard J, Mahoney E, et al. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res 2004;39:1005–1026.
  35. Little P, Everitt H, Williamson I, et al. Preferences of patients for patient centered approach to consultation in primary care: observational study. Br Med J 2001;322:1–7.
  36. Hibbard J, Stockard J, Mahoney E. How ‘activated' are patients and consumers and why should we care? Paper presented at the 5th International Conference on the Scientific Basis of Health Services, Washington, DC, September 21, 2003.
  37. Pike AC, Nattress LW. The changing role of the amputee in the rehabilitation process. Phys Med Rehabil Clin N Am 1991;2:405–414.
  38. Wilson M, Sloane K. From principles to practice: An embedded assessment system. App Measure Educ 2000;13:181–208.
  39. Mislevy RJ, Steinberg LS, Almond RG. On the structure of educational assessments. Measure: Interdisc Res Persp 2003;1:3–62.
  40. Wright BD, Linacre JM. Observations are always ordinal: measures, however, must be interval. Arch Phys Med Rehabil 1989;70:857–860.
  41. Wright BD. Fundamental measurement for psychology. In Embretson SE, Hershberger SL, eds. The New Rules of Measurement: What Every Educator and Psychologist Should Know. Hillsdale, NJ: Lawrence Erlbaum Associates; 1999:65–104.
  42. Bates DW, Pappius EM, Kuperman GJ, et al. Measuring and improving quality using information systems. Medinfo 1998;9:814–818.
  43. Bates DW, Pappius E, Kuperman GJ, et al. Using information systems to measure and improve quality. Int J Med Inform 1999;53:115–124.