Current Interventional Radiology-Related Benchmarked Clinical Quality Measures Are Less Likely to be “Capped” Than Diagnostic Radiology Clinical Quality Measures
When contributing to the merit-based incentive payment system (MIPS) composite score under the Centers for Medicare & Medicaid Services (CMS), are current IR-related clinical quality measures (CQMs) more likely to be benchmarked and noncapped (worth the maximum 10 points) than DR-related CQMs?
Take-away point
Significantly more IR-related CQMs are worth 10 points compared to DR-related CQMs, which can help mixed IR/DR groups maximize their MIPS composite score and thus their CMS payment adjustment.
Noor M, Bivins E, Manchek B, Contreras F, Shah R, and Ward T. Current Interventional Radiology-Related Benchmarked Clinical Quality Measures Are Less Likely to be “Capped” Than Diagnostic Radiology Clinical Quality Measures. J Vasc Interv Radiol. 2021; 32:677-682. https://doi.org/10.1016/j.jvir.2020.11.016
Click here for abstract
Retrospective, database analysis using 2020 data from the Quality Payment Program (QPP) resource library.
No reported funding.
Multi-database study using 2020 MIPS Historical Quality Benchmarks file, 2020 Cross-Cutting Quality Measures, and 2020 MIPS Quality Measure List; USA.
Cross-cutting CQMs identified as potentially applicable to IR periprocedural or clinic care, and their respective MIPS point scores based on benchmark and capped status.
The Quality Payment Program (QPP) under the Centers for Medicare & Medicaid Services (CMS) aims to decrease healthcare costs by providing a payment system based on high-value care. Under the QPP, the merit-based incentive payment system (MIPS) generates a composite score (up to 100 points) to reflect the value-based care of a practice based on 4 essential performance categories: quality measures (QMs), improvement activities, cost, and promoting interoperability. In general, cost and promoting interoperability do not apply to radiology, and a radiology practice’s points from these categories are reweighted to the QM category, which then accounts for 85% of the final MIPS composite score. In general, a practice can submit up to 6 different QMs for a total of up to 60 points. Three methods of QM submission exist: submission via the MIPS clinical quality measure (CQM) program, submission via electronic health record (EHR) data as eCQMs, or submission via Medicare Part B claims if the practice comprises fewer than 15 clinicians.
Importantly, the MIPS composite score determines a percentage rate applied to future allowed charges via a minimum threshold (required to avoid a negative payment penalty) and a maximum threshold (above which a bonus is issued). These percentage rates and thresholds are becoming increasingly stringent, with CMS making annual changes based on prior years’ data. A CQM is considered “topped out” if it meets a national median performance rate of ≥ 95% and can be “capped” (decreased in point value from 10 to 7) if it has been topped out for 2 or more years. Given that 95% of specialty-specific QMs for DR were topped out in 2019 (the highest of any specialty) and 30% were capped, IR-related QMs could offer additional 10-point options to submit toward a practice’s MIPS composite score. The authors perform a multi-database study analyzing the number and likelihood of IR-related CQMs worth 10 points compared to DR-related CQMs.
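As a back-of-the-envelope illustration (not from the paper), the stakes of capping can be sketched by assuming the quality category score scales linearly with points earned out of the 60-point maximum and is reweighted to 85% of the composite, as described above. The helper name and scenarios below are hypothetical:

```python
# Hypothetical sketch: how capped (7-point) measures shrink the quality
# category's contribution to the MIPS composite score, assuming the
# category score scales linearly (points earned / 60) and the category
# is reweighted to 85% for a typical radiology practice.

MAX_POINTS_PER_MEASURE = 10
MEASURES_SUBMITTED = 6
CATEGORY_WEIGHT = 85  # percent of the composite for most radiology groups

def quality_contribution(measure_points):
    """Composite-score points contributed by the quality category."""
    earned = sum(measure_points)
    possible = MEASURES_SUBMITTED * MAX_POINTS_PER_MEASURE  # 60 points
    return earned / possible * CATEGORY_WEIGHT

all_uncapped = [10] * 6                # six noncapped measures
mostly_capped = [10, 10, 7, 7, 7, 7]   # four measures capped at 7 points

print(quality_contribution(all_uncapped))   # 85.0
print(quality_contribution(mostly_capped))  # 68.0
```

Under these assumptions, a practice whose only eligible measures are capped forfeits composite points it can never recover from the other categories, which is why the supply of noncapped 10-point measures matters.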
The 2020 MIPS Quality Measure List was used to identify measures directly attributed by CMS to DR and IR and those that might reasonably apply to IR periprocedural or clinical care. Additional cross-cutting measures that might apply to IR were identified via the 2020 Cross-Cutting Quality Measures file. Numeric identifiers were then cross-referenced with the 2020 MIPS Historical Quality Benchmarks file to determine benchmark, topped out, and capped status. Only MIPS CQMs were included in the study; submission types via EHR and Medicare Part B claims were excluded.
Of the 713 QMs listed during the 2020 year, 196 were MIPS CQMs, 143 of which had a benchmark. Of the benchmarked CQMs, 9 were directly attributed to DR and 5 were directly attributed to IR (1 DR-related and 4 IR-related CQMs did not yet have a benchmark). Of these, 2/9 DR-related CQMs were not capped and 2/4 IR-related CQMs were not capped. An additional 6 cross-cutting measures and 2 potential IR periprocedural/clinic measures were identified, 7/8 of which were not capped.
Overall, 75% (9/12) of IR-related CQMs were worth 10 points (having both a benchmark and noncapped status) compared to 22% (2/9) of DR-related CQMs. This was a statistically significant difference with p=.03.
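The paper reports p = .03 without naming the test here; a two-sided Fisher exact test on the 2×2 table of noncapped versus capped counts (IR: 9 vs 3; DR: 2 vs 7) reproduces that value. A standard-library-only sketch, where `fisher_exact_two_sided` is a hypothetical helper rather than code from the paper:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    def prob(k):  # probability of k successes in the top-left cell
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# IR-related CQMs: 9 noncapped, 3 capped; DR-related CQMs: 2 noncapped, 7 capped
print(round(fisher_exact_two_sided(9, 3, 2, 7), 3))  # 0.03
```

The same result is available as `scipy.stats.fisher_exact([[9, 3], [2, 7]])` for readers with SciPy installed.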
The authors assess the number of IR-related and DR-related CQMs valued at 10 points in the context of the MIPS composite score. Since the quality measure category holds 85% of the weight in most radiology practice scores, and a specialty is not limited to the measures attributed to it by CMS, it makes sense to maximize this contribution by increasing overall practice scope with mixed IR and DR physicians.
The authors’ results show that only 2 DR-related CQMs can be submitted for 10 points; of those, one has already been topped out for 2 years, implying an imminent cap in 2021. The other, which was uncapped in 2020 owing to extenuating circumstances, may well be recapped in 2021, potentially leaving zero DR-related CQMs eligible for 10 points in 2021.
In comparison, there are up to 9 IR-related CQMs that can be submitted for 10 points, none of which have been topped out for 2 years. Of note, most of the cross-cutting measures that the authors suggest rely on either having an IR clinic or performing specific procedures in order to qualify for and submit them. Indeed, it is likely that many of the initially identified IR-related measures have yet to be benchmarked or capped because IR clinics have yet to be established in many practices. Therefore, early development of an IR clinic would prove advantageous in this setting, particularly for growing practices.
The main limitation of this study is that it focused solely on MIPS CQMs and excluded submissions via EHRs (eCQMs) or Medicare Part B claims. Although EHR integration is necessary for eCQMs and only small practices may submit via claims, these exclusions do limit the generalizability of the findings.
Lastly, the authors highlight the need for medical societies to have increasingly robust measure-creation methodologies. This will likely rely on the creation of large, multi-center databases to identify gaps in care such as the Society of Interventional Radiology’s (SIR) VIRTEX data registry.
Catherine (Rin) Panick, MD
Resident Physician, Integrated Interventional Radiology
Dotter Interventional Institute
Oregon Health & Science University