
What to make of the millions of dollars Hopkins spent on quality data reporting

Updated: Nov 9

Use board-type context questions to turn one surprising statistic into general healthcare insights

An interesting new study, “The Volume and Cost of Quality Metric Reporting,” made ripples in the healthcare trade press and professional networks earlier this month. Published in the Journal of the American Medical Association (JAMA) by Anirudh Saraswathula and colleagues, the study represents a tremendous amount of work generating original data showing that, in 2018, the Johns Hopkins Hospital spent approximately $5.6 million on collecting and reporting data for 162 inpatient quality metrics. That’s the hospital alone, not the Hopkins system (JHHS) as a whole. And it’s just the cost of collecting and reporting the data; the figure does not include the capital cost of any technology, nor any staff/vendor time spent on actual quality improvement initiatives. The $5.6 million covered a total of 108,478 personnel hours, plus over $600,000 in related vendor fees.
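As a quick back-of-envelope check (my own arithmetic, not a figure from the paper, and it assumes the vendor fees sit inside the $5.6M total), those numbers imply a blended labor cost in the mid-$40s per hour:

```python
# Back-of-envelope check on the reported Hopkins figures.
# Assumption (mine, not the paper's): vendor fees are included in the $5.6M total.
total_cost = 5_600_000        # reported 2018 spend on quality metric reporting
vendor_fees = 600_000         # "over $600,000" in related vendor fees (approximate)
personnel_hours = 108_478     # reported personnel hours

personnel_cost = total_cost - vendor_fees
hourly_rate = personnel_cost / personnel_hours
print(f"Implied blended labor cost: ${hourly_rate:.2f}/hour")  # ≈ $46/hour
```

A rate in that range reads as plausible for a mix of clinical and administrative staff time, which is one small sanity check on the study's totals.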

What to make of this? Think like a board

The trade press and professional network chatter leaped to making judgments based on the data, along the lines of: “That is a staggering burden to put on providers just for collecting and reporting on quality data” ... “Sure, safety, outcomes, and data-driven quality improvement are important and worthy goals. But the data part alone should not be this costly” ... and “It does not seem like we, as a nation, are getting a lot of return on effort for this mandatory expenditure.”

All excellent points. However, I think a good starting place for making maximum sense of this interesting new data point is to avoid leaping to any conclusion at all, and instead to ask some good, contextual, information-generating questions, as an effective board member would. For example:

  • “Is what's being reported similar to the situation at our institution?”

  • “OK this is news to me, but is this news to everyone? If yes, why?”

  • “OK it SEEMS like a lot of money, but is it actually?”

  • “Does it HAVE to cost this much, or are there ways to reduce this spend?”

  • “Do we HAVE to do this? What total benefit are we receiving for this spend?”

Let’s use these as our jumping-off point and explore the answers—because figuring out what to make of $5.6M for quality data collection and reporting reveals a number of interesting features of the U.S. healthcare situation in general.

Q #1: “Is this news to everyone? Why?”

A: Yes—and the ‘why’ is the important part: actual healthcare cost (as in, the direct cost to providers of providing healthcare) remains an ongoing, research-worthy mystery.

Thanks to Saraswathula et al., we now have a figure for one organization’s direct cost to collect various inpatient quality metrics, with additional data on how the cost per metric varies based on data source. That’s wonderful—and it’s also sobering that, as the researchers point out in their paper, this type of data for the inpatient environment had never been published; i.e., it did not exist in the information environment until just now.

The truth is, many input costs (to providers) of healthcare delivery are not well quantified or understood. To take a different example: there is a whole (rather arcane) mini research world around trying to estimate the cost of an OR minute. Authors of a 2018 study on the subject say that until their work (which generated averages using standardized accounting data from reports required by the state of California), the available estimates of the direct cost of an OR minute ranged from $7 to over $100. That would make a per-hour range of $420 to $6,000—i.e., too wide to be useful.
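The uselessness of that pre-2018 range becomes obvious with one line of arithmetic, converting the per-minute estimates to per-hour figures:

```python
# Converting the published per-minute OR cost estimates to per-hour figures.
low_per_minute, high_per_minute = 7, 100   # pre-2018 published range, $ per OR-minute

low_per_hour = low_per_minute * 60
high_per_hour = high_per_minute * 60
print(f"Per-hour range: ${low_per_hour:,} to ${high_per_hour:,}")  # $420 to $6,000
```

A 14x spread between the low and high estimate of the same input cost is the kind of gap that makes substantive pricing conversations impossible.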

Bottom line: kudos to these researchers for creating this data point; AND, let’s all keep in mind how much running room we continue to have, as an industry, in understanding the true cost of healthcare delivery. Especially if we are trying to generalize across different kinds of facilities, in different markets, at different points in time. It may be a challenge to do, but it’s important. It’s difficult to have any kind of substantive conversation about pricing without an accurate picture of costs.

Q #2: Is this a lot of money?

A: Not as much as it seems—because big provider organizations have big finances.

Yes, $5.6M is a lot of money on an individual human scale; but if the aim is to understand healthcare economics in general, one has to keep the total organizational operating and financial scale in mind. Saraswathula et al. point this out directly in the main paper (though trade press coverage has mostly left this point out). In 2018, the hospital’s operating budget was $2.4B, which means the $5.6M spent on quality data represented roughly 0.2% of that budget.
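The percentage is a single division, worth writing out because the scale gap between millions and billions is easy to misjudge at a glance:

```python
# Quality reporting spend as a share of the hospital's operating budget.
quality_reporting_spend = 5_600_000          # reported 2018 spend
hospital_operating_budget = 2_400_000_000    # Johns Hopkins Hospital 2018 operating budget

share = quality_reporting_spend / hospital_operating_budget
print(f"Share of operating budget: {share:.2%}")  # prints "Share of operating budget: 0.23%"
```

In other words, a headline figure that sounds enormous in isolation rounds to about a fifth of one percent of the organization's operating scale.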

Putting numbers into the context of organizational scale matters on both the cost side and the revenue side of major organizations—for example, when one sees revenue figures for major hospitals and health systems. In both cases, one needs to look at the total budget and margin. This kind of mental discipline in the face of headline-grabbing dollar figures will become ever more essential as consolidation continues and health systems grow ever larger.

Q #3: Does it have to cost this much?

A: In the long term, maybe not—but no quick or easy fixes are available today.

In healthcare, it’s always critical to shift from being shocked about how costly a thing is—to quantifying (and taking action on) how avoidably costly it is. That, of course, is always the harder part.

In this case, the authors suggest that a possible way to reduce the total cost of quality reporting is to (1) reduce the number of required quality metrics hospitals need to report on, and/or (2) shift the required metrics to ‘electronic data’ measures based on data taken directly from the EMR, instead of resource-intensive claims data. Both suggestions are reasonable—but the key missing context is that both are well-known, multi-stakeholder-consensus-requiring developments that are already underway, albeit at a glacial pace.

On cutting down the sheer number of required-reporting quality metrics: This is a universally lauded goal, but it entails getting all the various payers, regulators, and accreditors to pare down their own requirement lists and/or agree on a more streamlined common set. Progress is slow; a notable recent development is that the Joint Commission cut 168 quality standards for 2023 and is looking to make a second round of cuts later this year.

On shifting to electronic quality metrics (which use data straight from the EMR): Electronic clinical quality measures, known as eCQMs, are a significant ongoing focus in the middle of the industry Venn diagram between “quality” and “IT”. The challenge is the slow road to the point where the input sources and data definitions are standardized enough—and comprehensive enough across the care continuum—and the clinician input workflow feasible enough—to deliver the information needed to assess all the various dimensions of quality that CMS and other evaluators are looking for.

In general, the costliness of quality data collection and reporting is part of a much larger pattern: In healthcare, technological efficiency is often still largely theoretical.

It’s important to remember that cumbersome data processing isn’t only a problem for quality data collection and reporting. Due to the nature of today’s healthcare IT environment, the staff cost intensity of anything that a big provider organization does with data and IT these days is likely to be painful to behold. Most provider organizations have a consistently massive amount of manual work going on, pulling, reconciling, cleaning, and reporting out clinical and administrative data from different data feeds—despite tons of existing (and ongoing) investments in IT.

This is a function of at least two major factors:

Backward engineering: The classification systems and sources most widely used in healthcare were designed for one thing: fee-for-service billing. Any other purpose—including but not limited to quality—is simply not something that system was designed to serve in a thorough or clean way. This is why claims data, despite being ‘already collected anyway,’ as Saraswathula et al. put it, is still so cumbersome and expensive as a source of quality details.

Persistent data-feed fragmentation: As health systems consolidate, most will be operating many different EMRs and other relevant platforms/data sources under one corporate roof. If the system has acquired medical groups, add their different legacy EMRs to the mix. Even just within the inpatient world, many systems have multiple EMRs across system hospitals. And even if all are on the same EMR, there may be meaningful differences between instances and configurations of those EMRs. It’s a mess, and it’s not getting all that much better all that quickly.

That being said, the general IT environment among providers is slowly moving toward greater efficiency. Major change vectors include the provider-facing Trusted Exchange Framework and Common Agreement (TEFCA), the payer-facing CMS Interoperability and Patient Access rule, and the Fast Healthcare Interoperability Resources standard (FHIR, pronounced ‘fire’). In theory, all of this should help make healthcare analytics more streamlined and nimble in the future. The industry and its observers can hope so, anyway.

Q #4: “Do we have to do this/what benefit are we receiving for this spend?”

A: Probably yes—and the hard part about hospital economics is that the benefit is diffuse.

Purchasers (especially CMS), accreditors, and regulators require the kind of quality data collection and reporting quantified in the study we are focusing on today. Opting out is largely not feasible.

As for the benefits—two thoughts.

‘Data collection and reporting’ sounds like a bureaucratic waste of time—but at least some of this data is informing action on quality improvement

The researchers specifically report data collection and reporting time, ‘excluding all time spent designing and implementing quality-improvement interventions.’ This is discouraging, but I would keep in mind that data is an essential element of quality improvement activity. Probably not all of these metrics are used to inform action; but some of them certainly are. That portion of this activity, at least, is therefore a foundational investment in quality improvement.

It’s a fact of healthcare that the cost of collecting quality data is largely borne at the provider level, while the benefit accrues at the all-industry/society level.

There are areas in the provider business in which an organization’s bang-for-buck for IT and staff time investments can be more easily measured. The most obvious example: revenue cycle. There, providers can use cost-to-collect benchmarks to see how much ROI they are achieving, within their own books, from various IT and staff time investments.

But in quality, the ‘return’ picture doesn’t work this way. Sometimes there is a direct return on effort for providers in documenting and reporting on quality-related data (for example, accurately and thoroughly risk-adjusting data is critical for provider economics under Medicare Advantage). But more often, collecting and reporting quality data is a heavy (and mandatory) lift for providers, and the benefit does not flow to them in a commensurate way.

Instead, the benefits of quality data are more general, long-term, and public. Some of the utility of this data is not even known yet at the time of collection. For example, it’s a relatively new development in the industry at large to take long-collected quality and safety metrics and cut them by demographic variables to find patterns in health disparities. But now that more researchers and organizations are doing exactly that, it turns out all those data points and metrics are going to be crucial in uncovering, understanding, and addressing health disparities.

I’m sure many industry stakeholders have many different perspectives about the lack of a clear-cut ROI for mandatory expenditures around quality data collection and reporting. Here are two neutral-ish comments from us:

First, quality is a big part of the mission for all hospitals, especially not-for-profit ones (which is most of them), so benefits that accrue at the societal level are more of a feature than a bug.

And second, the lack of a direct ROI to providers, combined with their necessary role in painting a clear picture of quality, means that policymakers/payers will have to continue to require and/or incentivize these types of efforts in order to generate the quality data needed for various purposes down the road.

In sum: Spending figures on healthcare quality are eye-popping, but they tell a much bigger story about the nature of U.S. healthcare

When you see an eye-popping healthcare cost/spending number, it’s always worthwhile to explore the context in and around it. Not because the number will turn out to be justified—in many cases, it may not be! But because understanding the context informs whether and what kind of action may be needed—and also, because the answers lead to excellent vantage points on important features of the healthcare industry landscape.

Do you have some learners who need a boost on healthcare strategy topics?


