20 years after ‘To Err is Human,’ hospital care quality measures are still of little use

The findings in the now two-decades-old report To Err is Human: Building a Safer Health System sparked a call for new, more stringent quality measures.

The industry had been measuring quality for more than a century, but “it had been a backwater phenomenon until the report happened,” said Dr. Kedar Mate, chief innovation and education officer at the Institute for Healthcare Improvement. “The report upped the ante. It put a new kind of imperative around quality in general and quality measurement especially. The consequence of not understanding quality of care was losing lives.”

Before the Institute of Medicine’s report, quality measurement reporting wasn’t tied to federal reimbursement. Institutions and doctors weren’t penalized for performing poorly.

In the years since the report, much has changed. The CMS now requires hospitals, outpatient settings and nursing homes to track quality in order to receive full Medicare payment. Performance on some measures is publicly displayed on Medicare’s Compare websites. And with the 2010 passage of the Affordable Care Act, the agency now penalizes hospitals for performance on some measures such as readmissions and infections. The Medicare Access and CHIP Reauthorization Act of 2015 set up a similar system for physicians.

But those moves have done little to protect patients. In fact, some research suggests one of the more successful metrics, lowering the rate of readmissions, may have led to unintended deaths.

“When you look at the body of measures that are being used today, whether they are Medicare or the private sector, it came from the industry itself. And they weren’t being put up against core criteria,” said Francois de Brantes, senior vice president of business development at consultancy Remedy. “The industry always shoots for the lowest common denominator and you end up with what we have, a (group of measures) that don’t do a good job of actually determining whether or not we are closer to the system the IOM report recommended.”

Of course, it makes sense for quality information to be available for consumers and that payment for services be based in part on the quality of care provided. The problem is that clinicians and quality leaders largely don’t trust the measures and don’t think they help them do their job any better.

“What I’m hearing from our members and what they are hearing from their practicing physicians is that we need to reform the quality measurement regime we are living under right now,” said Dr. Jerry Penso, CEO of AMGA, a trade association that represents 175,000 physicians nationally. “There are too many measures, they are not harmonized, they are not meaningful to the patients or the providers, they are costly and they are burdensome to collect, and it’s contributing to physician burnout.”

The distaste across the industry for the current set of quality measures comes at a time when value-based payment makes measurement imperative. Measures, after all, are the backbone of this payment model.

“There is no doubt the movement to value will drive a greater focus on measurement and measurement accuracy,” said Dr. Peter Pronovost, chief clinical transformation officer at University Hospitals in Cleveland and a well-regarded patient-safety expert. “HHS collects 2,300 different measures and when you ask clinicians how many are meaningful, improve care or are valuable to make care better, most clinicians say precious few. We have made a lot of progress, but we still have a lot of work to do.”

The Trump administration has made moves to address the issue.

In 2017, the CMS launched the Meaningful Measures initiative, which identifies priority areas for quality measurement while reducing measures that aren’t valuable to the industry.

And most recently, an executive order issued in July directed HHS to establish the Quality Summit, gathering leaders across the industry to align and revise the measures used across its quality programs.

HHS declined a request for an interview with Deputy Secretary Eric Hargan, who chairs the summit along with Pronovost, but in a news release at the time of its announcement Hargan said the efforts thus far to give incentives for quality care have been “met with limited success.”

The agency has since selected the summit’s 18 participants, which include leaders of health systems, health plans and consumer groups.

With this work underway at the federal level, quality researchers claim it’s time to rethink what is measured and the data used.

There are many reasons why dissatisfaction with quality measures is so widespread. Some experts claim it’s because few measures are statistically valid.

“What is so striking to me is that there is no standard for how accurate or inaccurate (a measure) needs to be,” said Pronovost, who has written extensively on the topic.

A recent report from the U.S. Government Accountability Office echoed similar concerns. The report, published in late September, found the CMS has different approaches to decide which quality measures it will develop and use. The agency also doesn’t have a way to assess whether the measures under consideration will achieve strategic objectives, “which increases the risk that the quality measures it selects will not help the agency achieve those objectives as effectively as possible,” the report said.

In response, a CMS spokeswoman said the measure development process is lengthy and “follows best scientific-based evidence, and ongoing evaluation of measures for impact, reliability, feasibility, usability and extensive testing prior to any submission to” the National Quality Forum.

She added the agency is developing additional tools to evaluate measures.

Pronovost said structurally valid measures can lead to quality improvement, pointing to the gains made thus far against many infections, especially central line-associated bloodstream infections. Rates of such infections dropped by more than 60% over eight years, and Pronovost, who helped lead the initial work, said that happened in part because stakeholders across the country agreed to use the measure developed by the Centers for Disease Control and Prevention to track their progress and focus their efforts. The measure was selected because it relied on lab data and counting the days a patient had a central venous catheter, both components the scientific community agreed identified the infections accurately.

“It was a remarkable success, but it wasn’t random. It happened in part because we had good measurement for this type of infection and we still don’t have that for many other” conditions, Pronovost said.

Dr. Shantanu Agrawal, CEO of the National Quality Forum, pushed back on the argument that the measures the CMS uses aren’t statistically valid. The agency usually includes in its programs only measures that are endorsed by the NQF, which has a process to evaluate measures for their relevance to patient care, scientific acceptability, feasibility and usability. The measures are also tested for their validity, which involves running the measure to assess whether the score reflects the quality of care provided, he said.

“The CMS uses endorsement as their check for validity,” Agrawal added.

Yet others question the effectiveness of NQF’s endorsement process.

“We need to reimagine how we do quality measurement,” said Dr. Eve Kerr, professor of internal medicine at the University of Michigan and senior investigator at the VA Center for Clinical Management Research. She co-authored a perspective article in the New England Journal of Medicine that found some NQF-endorsed measures weren’t valid when assessed with a different process. A modified method from the RAND Corp. was used to test the validity of 86 measures that are part of Medicare’s Merit-based Incentive Payment System. The authors found only 48% of the measures endorsed by the NQF were considered valid.

The findings are likely the result of the slightly different criteria the NQF and RAND Corp. use, as well as the fact that those who evaluated the measures for the study were members of the American College of Physicians, said Dr. Catherine MacLean, an author of the report and chief value medical officer of the Hospital for Special Surgery based in New York.

“It’s not your average community of internists on the committee,” MacLean said. “People were selected because they had background expertise in quality measures and quality measurement and on the practical application of the measures in real life.”

The physicians found that some of the measures didn’t account for the special considerations required when addressing the unique circumstances of a patient. For instance, a physician will only score well on a depression measure if the patient is still taking medication after designated time periods. But what the measure doesn’t consider is that some physicians will choose to take patients off the medication if they have adverse side effects and replace it with therapy, according to MacLean.

“That physician is still treating the patient appropriately, it’s evidence-based care, but they would fail that measure,” she said.

To combat this, MacLean said it may be time to rethink how stakeholders are used in the endorsement process. NQF stakeholders include not just doctors but patients, consumer groups and insurers.

While it’s important to get patient feedback on the usefulness of a measure, MacLean said it should be left to those versed in measurement science to assess a measure’s validity.

“I don’t think we use stakeholders to their best potential, and I think sometimes stakeholders are put in positions that they aren’t qualified to assess,” she said.

Agrawal at NQF said that while personal opinions are given weight in the endorsement process, only science is used to consider a measure’s validity. “The science comes first before anyone else does,” he said, adding that different viewpoints are essential to obtaining valid measures.

“Having different stakeholders in the room is a way that we are making the process rigorous,” he said. “The process is going to be less satisfying if you want to have sole say on the outcome … but it’s core to our process that we are going to be multi-stakeholder. What are we doing measurement for if we aren’t involving patients?”

Rather than blaming the endorsement process for invalid measures, some point to the measure development process itself. The CMS relies on contractors, paying them millions to develop measures.

A Modern Healthcare report revealed that the CMS’ current quality measures rely heavily on expertise and research from a select handful of organizations that are paid millions of dollars. For example, academics from Yale New Haven Health’s research department helped develop many measures publicly reported on Hospital Compare, including 30-day readmissions and the overall hospital star ratings.

But Pronovost said it’s not fair to blame the developers. “The outside contractors aren’t the problem. There are no standards to guide them,” he said.

In response to concerns about the measure development process, a CMS spokeswoman said the agency “utilizes an open and transparent process to gain feedback on measures and follows government contracting requirements when seeking organizations to develop measures.”

She said each year the CMS asks stakeholders to submit measures for consideration, adding, “Rigorous measure development is a complex task.”

Pronovost suggests the federal agency identify the top 10 causes of harm in the hospital setting and establish a standard process to assess the validity of measures. He also said more resources need to be allocated to measurement-science research.

“When you read the literature in the movement towards value, we acted as if there was a measure tree in the backyard we can start using and paying people on,” Pronovost said. “There is precious little research into developing measures. We tend to default to what is available, which are claims measures.”

Although many measures come from insurance claims because it’s data the CMS and other payers have at their disposal, there are limits to that information. Claims don’t capture the full medical history and status of the patient, including lab results and vital signs, which are important when measuring quality.

“The way quality measurement has evolved in this country isn’t from the concept of, ‘We really need to measure this because we are worried about how it impacts patient health,’ but more, ‘What can we measure given our data constraints?’ ” Kerr said. “We have been driven in large part by data availability instead of what is clinically meaningful.”

There is also significant lag time when working with claims data. In fact, clinicians can sometimes wait up to a year before they get performance results back from the CMS on quality measures, making them less relevant for clinical care.

“The lack of infrastructure around quality reporting to make it actionable and useful for quality improvement has made people feel like it is all for administrative gain,” said Dr. Karen Joynt Maddox, assistant professor of medicine at Washington University School of Medicine in St. Louis who researches federal quality programs.

The sweet spot is to harness data from claims as well as clinical data from the electronic health record, according to Aneesh Chopra, former U.S. chief technology officer in the Obama administration. Claims data over a long period of time can offer a comprehensive look at the services patients have received, but clinical data from the EHR offers nuanced information about the patient and their health not available in claims.

“The liberation of both datasets can give you a more accurate picture” of the patient, Chopra said.
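
As a rough sketch of what Chopra is describing, combining the two datasets can be as simple as joining longitudinal claims history with clinical detail on a patient identifier. All of the field names, codes and values below are hypothetical, purely for illustration:

```python
# A minimal sketch of joining claims data with clinical data from the EHR.
# Every field and value here is invented for illustration.
import pandas as pd

# Claims show which services a patient received over time.
claims = pd.DataFrame({
    "patient_id": [101, 101, 102],
    "service_date": ["2019-01-15", "2019-06-02", "2019-03-20"],
    "procedure_code": ["99213", "93000", "99214"],  # CPT-style service codes
})

# EHR data adds clinical context claims lack, such as labs and vitals.
ehr_data = pd.DataFrame({
    "patient_id": [101, 102],
    "hba1c_pct": [8.2, 6.4],    # lab result
    "systolic_bp": [142, 118],  # vital sign
})

# Merging the two gives a fuller picture of each patient than either alone.
combined = claims.merge(ehr_data, on="patient_id", how="left")
print(combined)
```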

But there are challenges to getting clinical data from EHRs because the systems were primarily built for billing, not for quality measurement.

“There was great hope with the EHR, but data input is unstructured. You have to define (what you want to measure) very explicitly,” Pronovost said.

Because of the constraints of EHRs, measures based on information from the medical record are for the most part manually abstracted by trained clinicians at hospitals and health systems as part of a time-intensive and costly process.

But a growing set of clinical quality measures can be pulled from the EHR automatically. Known as electronic clinical quality measures, or e-CQMs, they have been required by the CMS for hospital reporting since 2017.

With e-CQMs, the CMS writes up the specifications for the measure and the EHR vendor builds the software so the system can pull the information from the record that is needed for that measure to be reported. This process removes the need for manual data abstraction.
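
In skeletal form, the vendor’s job is to translate the measure specification into logic that runs over structured fields in the record. The sketch below is a loose illustration of that idea, not any actual CMS specification; the record format and the criteria are invented for the example:

```python
# A simplified sketch of what vendor-built e-CQM logic does: apply a
# measure's denominator and numerator criteria to structured EHR data,
# with no manual chart abstraction. The criteria below are invented.
from dataclasses import dataclass

@dataclass
class Encounter:
    patient_id: str
    age: int
    diagnosis_codes: list[str]   # structured, coded fields the
    received_screening: bool     # software can query directly

def compute_measure(encounters: list[Encounter]) -> float:
    """Return the measure score: numerator / denominator."""
    # Denominator: encounters meeting the eligible-population criteria
    # (here, adults with a type 2 diabetes diagnosis, ICD-10 E11.9).
    denominator = [e for e in encounters
                   if e.age >= 18 and "E11.9" in e.diagnosis_codes]
    if not denominator:
        return 0.0
    # Numerator: eligible encounters where the measured action occurred.
    numerator = [e for e in denominator if e.received_screening]
    return len(numerator) / len(denominator)

encounters = [
    Encounter("A", 54, ["E11.9"], True),
    Encounter("B", 67, ["E11.9"], False),
    Encounter("C", 12, ["J06.9"], False),  # excluded: under 18, no qualifying dx
]
print(compute_measure(encounters))  # 0.5
```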

Providers generally support the measures and want to see them expanded. Not only do they remove the need for manual abstractors, but they allow clinicians to get their quality scores much faster.

E-CQMs are updated weekly for doctors at Texas Health Resources, which was an early adopter and worked with the CMS to pilot the initial measures.

“To be able to get measurement much closer to the point of care is much more actionable and much more meaningful,” said Dr. Ferdinand Velasco, chief health information officer at the not-for-profit hospital system.

Right now, the CMS requires a small set of e-CQMs for reporting, but it’s interested in expanding.

“CMS is committed to advancing interoperability and leveraging electronic systems for quality measurement,” an agency spokeswoman said.

But implementing e-CQMs successfully is a sensitive process that requires oversight, Velasco said.

Texas Health Resources has to go through a process to validate the results because the initial scores on a measure often don’t match manually abstracted results.

“It’s not like you get the software from the vendor, program it, and boom, that’s it,” Velasco said. “It’s an iterative process.”

Electronically and manually abstracted results can differ because the electronic system is limited in the places it can look for data relevant to the measure.

A human scouring a record can read notes to find the pertinent information, but an EHR won’t do that.

In the years since it started using these measures, Velasco said, clinicians have undergone a lot of training so that they document information in the appropriate areas.

The EHR has been modified over the years with alerts and prompts to make it easy for the clinician to put information in the right area, said Barbara Ray, director of quality measurement and reporting at Texas Health Resources.

Given how meticulous clinicians need to be about documentation, the method does add a level of burden to their jobs, said Dr. Keith Woeltje, chief medical information officer at St. Louis-based BJC HealthCare, which was also an early adopter of e-CQMs.

E-CQMs can also potentially be less accurate than manually abstracted data because a human can be “persistent” in pursuing the information in the record, Velasco said.

“Because of the ability for a human to be creative and look for the information, you get a higher yield,” he said.

The ideal future state would remove the need for discrete documentation entirely and let artificial intelligence weed through the data to find the right information for measurement, Woeltje said.

So how to improve quality measurement? The key might be in the recent move by the Trump administration to fix interoperability.

In a still-pending rule proposed by the Office of the National Coordinator for Health Information Technology, EHRs would have to support a new standard called Fast Healthcare Interoperability Resources to be certified.

The standard, known as FHIR, will allow healthcare organizations to share information with each other no matter which EHR vendor they use.

The ONC frames the use of FHIR as a benefit for patients to access their own health information. If healthcare organizations are required to use FHIR, patients would be able to download phone apps that pull together their healthcare information from a variety of providers.

But the capabilities of FHIR can expand to clinicians, said Dr. Donald Rucker, the national coordinator for health IT at ONC.

Right now, measures are limited to narrow datasets. Standard application programming interfaces supported by FHIR will allow for greater data availability because information from other sources will be available within the EHR. This will allow providers to get a much more holistic view of their patients’ health information.
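
As a loose illustration, FHIR exposes typed resources such as Patient and Observation through standard RESTful searches, so a query looks the same against any conformant server. The server URL and identifiers below are placeholders, and a real client would also handle authentication (e.g., OAuth2):

```python
# A minimal sketch of a standard FHIR API query. The base URL and
# patient ID are hypothetical; authentication is omitted for brevity.
import requests

FHIR_BASE = "https://example-ehr.org/fhir"  # placeholder server

# FHIR defines standard RESTful searches over typed resources. Here:
# blood-pressure Observations for one patient, by LOINC code 85354-9.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "12345", "code": "85354-9"},
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()

bundle = resp.json()  # FHIR searches return a Bundle resource
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    print(obs.get("effectiveDateTime"), obs.get("code", {}).get("text"))
```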

“You can imagine the world for individual clinicians broadly changes,” Rucker said.

The use of FHIR will also enable doctors to download third-party apps that can offer comprehensive quality information in a user-friendly, real-time format. “We are entering a new era where you have a marketplace of apps that are competing to help doctors make smarter decisions,” Chopra said.

The rule, which was proposed in February, has been criticized for imposing deadlines that don’t leave enough time to implement changes, and for raising patient-privacy concerns since smartphone apps aren’t covered by HIPAA. Rucker said he hopes the ONC rule will be finalized before year-end.

The NQF’s Agrawal said he imagines achieving interoperability will transform how clinicians think about quality measurement and accelerate the progress on quality overall.

“We would change the very dialogue around measurement burden if we could solve the data issues first,” he said. “More seamless data transfer is central to this effort. … Without EHRs that are interoperable, without having the data follow the patient regardless of where they get care, we aren’t going to see dramatic improvement in quality.”

