The Prospective Risk Adjustment Measurement Framework That Actually Predicts Success

Your organization is implementing prospective risk adjustment. You’re building point-of-care tools that alert providers to potential documentation gaps during patient visits. Leadership is excited. IT is building the technology. Everyone assumes this will improve capture rates.

Six months in, you have no idea if it’s working. You’re tracking activity metrics (alerts generated, alerts clicked), but you don’t know if those activities translate to better coding outcomes.

Here’s the measurement framework that actually tells you whether prospective risk adjustment is succeeding.

The Activity Trap

Most prospective risk adjustment programs measure activity. Alerts generated. Provider logins. Gap closure recommendations displayed. These metrics tell you the system is being used. They don’t tell you if it’s creating value.

I’ve seen organizations celebrate 10,000 alerts generated per month without knowing how many of those alerts led to actual documentation improvements. They’re measuring outputs, not outcomes.

Activity metrics make vendors happy. They can show high engagement numbers. But high engagement doesn’t equal high value if the engagement doesn’t change provider behavior.

The Documentation Completion Rate

A better metric: of the gaps your prospective system identifies, what percentage are documented during the visit?

Your system alerts Dr. Johnson that her diabetic patient doesn’t have diabetes complications documented. Does Dr. Johnson add that documentation during the visit? Or does she dismiss the alert and move on?

Documentation completion rate tells you whether providers are actually responding to prospective prompts. If your completion rate is below 20%, providers are ignoring most alerts. If it’s above 60%, providers are finding the alerts valuable and actionable.

Track this by provider. Dr. Martinez might have a 75% completion rate while Dr. Thompson has 15%. That tells you Dr. Martinez finds the system helpful while Dr. Thompson doesn’t. You can investigate why and improve the experience.
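Here’s a minimal sketch of that calculation in Python, assuming a hypothetical alert-level export where each record carries the provider and whether the flagged gap was documented during the visit (provider names and figures are illustrative):

```python
from collections import defaultdict

# Hypothetical alert-level export: (provider, documented_during_visit)
alerts = [
    ("Dr. Martinez", True), ("Dr. Martinez", True), ("Dr. Martinez", False),
    ("Dr. Thompson", False), ("Dr. Thompson", False), ("Dr. Thompson", True),
]

def completion_rate_by_provider(alerts):
    """Share of prospectively identified gaps documented during the visit, per provider."""
    totals = defaultdict(int)
    completed = defaultdict(int)
    for provider, documented in alerts:
        totals[provider] += 1
        if documented:
            completed[provider] += 1
    return {provider: completed[provider] / totals[provider] for provider in totals}

for provider, rate in completion_rate_by_provider(alerts).items():
    print(f"{provider}: {rate:.0%} completion")
```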

The Coding Lag Metric

Prospective risk adjustment should reduce the time between encounter and code submission. If a provider documents a diagnosis during the visit and the coding happens immediately (or within days), that’s prospective working correctly.

If the diagnosis gets documented during the visit but doesn’t get coded until three months later during retrospective review, your prospective system isn’t actually accelerating coding.

Measure: average days between encounter and code submission for conditions identified through prospective alerts. Compare that to average days for conditions identified through other methods.

If prospective-identified conditions are being coded 90% faster than retrospective-identified conditions, your prospective system is working. If the lag is similar, the system isn’t changing your coding timeline.
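A minimal sketch of that comparison, assuming a hypothetical export of coded conditions tagged with how each was identified (all sources and dates are illustrative):

```python
from datetime import date
from statistics import mean

# Hypothetical coded-condition records: (source, encounter_date, code_submission_date)
records = [
    ("prospective", date(2024, 3, 1), date(2024, 3, 3)),
    ("prospective", date(2024, 3, 5), date(2024, 3, 6)),
    ("retrospective", date(2024, 3, 1), date(2024, 6, 10)),
    ("retrospective", date(2024, 3, 5), date(2024, 5, 28)),
]

def average_lag_days(records, source):
    """Mean days from encounter to code submission for one identification source."""
    lags = [(coded - seen).days for src, seen, coded in records if src == source]
    return mean(lags) if lags else None

prospective_lag = average_lag_days(records, "prospective")
other_lag = average_lag_days(records, "retrospective")
print(f"Prospective lag: {prospective_lag:.1f} days")
print(f"Other sources:   {other_lag:.1f} days")
if prospective_lag and other_lag:
    print(f"Lag reduction:   {1 - prospective_lag / other_lag:.0%}")
```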

The Retrospective Reduction Test

The ultimate test of prospective effectiveness: did your retrospective workload decrease?

If prospective is working, you should be finding fewer gaps during retrospective review. The gaps were already closed prospectively.

Track your retrospective chart volume and HCC opportunities per chart over time. If prospective is effective, retrospective volume should decrease or the incremental HCC yield per retrospective chart should drop.

If retrospective workload stays constant despite prospective implementation, one of two things is happening: (1) prospective isn’t actually closing gaps, or (2) prospective is identifying new opportunities that wouldn’t have been found retrospectively. The latter is valuable. The former is a problem.
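One easy way to watch the trend is incremental HCC yield per retrospective chart by month. A minimal sketch, using hypothetical monthly review figures:

```python
# Hypothetical monthly retrospective review summary:
# (month, charts_reviewed, incremental_hccs_found)
monthly = [
    ("2024-01", 1200, 480),   # pre-launch baseline
    ("2024-04", 1150, 420),
    ("2024-07", 1100, 310),
    ("2024-10", 1080, 240),
]

# Declining yield per chart suggests prospective is closing gaps upstream.
print(f"{'Month':<9}{'Charts':>8}{'HCCs':>7}{'Yield/chart':>13}")
for month, charts, hccs in monthly:
    print(f"{month:<9}{charts:>8}{hccs:>7}{hccs / charts:>13.2f}")
```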

The Provider Satisfaction Score

Prospective tools that annoy providers create long-term problems. Providers start ignoring alerts. They complain to leadership. The program becomes politically toxic.

Regularly survey providers about prospective tools. Simple questions: “Do the alerts help you provide better care?” “Are the alerts accurate and relevant?” “How often do you dismiss alerts without reading them?”

If provider satisfaction is low, the program is at risk regardless of what other metrics show. Dissatisfied providers will eventually revolt or disengage.

Track satisfaction over time. It typically starts high (novelty effect) and drops as alert fatigue sets in. If you can maintain satisfaction above 60-70% after six months, you’re doing well.
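A minimal sketch of tracking the score across survey waves, assuming a hypothetical survey where “favorable” means the provider agreed the alerts help them provide better care (all counts are illustrative):

```python
# Hypothetical survey waves: (wave, responses, favorable_responses)
waves = [
    ("Month 1", 85, 68),
    ("Month 3", 80, 58),
    ("Month 6", 78, 51),
]

for wave, responses, favorable in waves:
    score = favorable / responses
    flag = "" if score >= 0.60 else "  <- below 60% threshold"
    print(f"{wave}: {score:.0%} satisfied ({favorable}/{responses}){flag}")
```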

The False Positive Rate

Your prospective system alerts Dr. Lee that her patient might have CKD based on lab values. Dr. Lee checks. The patient doesn’t have CKD. The lab was a one-time abnormality.

That’s a false positive. False positives waste provider time and erode trust in the system.

Track: of the alerts generated, what percentage are clinically inappropriate? Aim for false positive rates below 25%. Above 40%, providers stop trusting alerts.

This requires clinical review. Sample 100 alerts per month and have clinicians evaluate whether each alert was appropriate. It’s manual work, but it’s essential for maintaining system credibility.
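A minimal sketch of the monthly calculation, assuming a hypothetical reviewed sample where each alert has been marked appropriate or inappropriate by a clinician:

```python
# Hypothetical monthly clinical review: each sampled alert is marked
# appropriate (True) or clinically inappropriate (False) by a reviewer.
reviewed_alerts = [True] * 72 + [False] * 28  # 100-alert sample

false_positives = reviewed_alerts.count(False)
fp_rate = false_positives / len(reviewed_alerts)

print(f"False positive rate: {fp_rate:.0%} ({false_positives}/{len(reviewed_alerts)})")
if fp_rate > 0.40:
    print("Above 40%: providers are likely to stop trusting alerts.")
elif fp_rate > 0.25:
    print("Above the 25% target: investigate the worst-performing alert rules.")
else:
    print("Within target.")
```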

The Audit Defensibility Check

Prospective documentation improvements need to be audit-defensible. If your system prompts providers to add diagnoses without adequate clinical basis, you’re creating audit risk.

Periodically audit a sample of prospectively-captured HCCs using the same standards CMS would apply. Do the diagnoses have adequate MEAT criteria? Is there clinical evidence supporting them? Would they survive a RADV audit?

If prospectively-captured HCCs fail audits at higher rates than retrospectively-captured HCCs, your prospective system might be encouraging inappropriate coding.
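A minimal sketch of that comparison, using hypothetical audit counts for each capture path:

```python
# Hypothetical internal audit results: capture_path -> (hccs_audited, hccs_failed)
audit_results = {
    "prospective":   (150, 18),
    "retrospective": (150, 12),
}

failure_rates = {}
for path, (audited, failed) in audit_results.items():
    failure_rates[path] = failed / audited
    print(f"{path:<14} failure rate: {failure_rates[path]:.1%} ({failed}/{audited})")

if failure_rates["prospective"] > failure_rates["retrospective"]:
    print("Prospectively captured HCCs fail more often: review alert logic and MEAT support.")
```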

The Cost Per Incremental HCC

Prospective programs aren’t free. Provider time, technology costs, ongoing support. Calculate: what does each incrementally-captured HCC cost?

Divide total program costs (technology, support, provider time spent responding to alerts) by the number of incremental HCCs captured that wouldn’t have been found otherwise.

If your cost per incremental HCC is $500 and the average HCC value is $3,000, that’s a good ROI. If your cost per incremental HCC is $2,000 and the average value is $2,500, the economics are marginal.

This calculation requires estimating a counterfactual: how many of the prospectively-captured HCCs would have been captured retrospectively anyway? It’s imperfect, but necessary for ROI analysis.
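A minimal sketch of the arithmetic, with every figure (costs, HCC counts, counterfactual capture rate, average HCC value) treated as an illustrative assumption:

```python
# Hypothetical annual program figures.
technology_cost = 250_000.0      # licensing and integration
support_cost = 120_000.0         # program staff and training
provider_time_cost = 180_000.0   # estimated cost of provider time spent on alerts

prospective_hccs = 1_400                 # HCCs captured via prospective alerts
counterfactual_capture_rate = 0.65       # share assumed recoverable retrospectively anyway
incremental_hccs = prospective_hccs * (1 - counterfactual_capture_rate)

total_cost = technology_cost + support_cost + provider_time_cost
cost_per_incremental_hcc = total_cost / incremental_hccs
average_hcc_value = 3_000.0              # assumed average revenue impact per HCC

print(f"Incremental HCCs:         {incremental_hccs:,.0f}")
print(f"Cost per incremental HCC: ${cost_per_incremental_hcc:,.0f}")
print(f"Value-to-cost ratio:      {average_hcc_value / cost_per_incremental_hcc:.1f}x")
```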

The Temporal Comparison

Compare performance before and after prospective implementation. Use a control group if possible.

If you implement prospective for half your provider network, compare capture rates between providers using prospective tools and providers not using them. Control for baseline differences.

The cleanest measurement: same providers before and after prospective implementation, controlling for other changes (coder training, retrospective process improvements, etc.).

Look for 10-15% improvement in capture rates attributable to prospective. Smaller improvements might not justify the investment. Larger improvements are rare unless baseline processes were very weak.
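When you have a control group, a simple difference-in-differences calculation nets out improvements that came from other initiatives. A minimal sketch with hypothetical capture rates:

```python
# Hypothetical HCC capture rates (captured HCCs as a share of expected HCCs)
# for two provider groups, before and after the prospective launch.
capture = {
    "prospective_group": {"before": 0.62, "after": 0.74},
    "control_group":     {"before": 0.61, "after": 0.63},
}

# Difference-in-differences: change in the prospective group minus change in
# the control group isolates the improvement attributable to prospective.
treated_change = capture["prospective_group"]["after"] - capture["prospective_group"]["before"]
control_change = capture["control_group"]["after"] - capture["control_group"]["before"]
attributable = treated_change - control_change

print(f"Prospective group change: {treated_change:+.1%}")
print(f"Control group change:     {control_change:+.1%}")
print(f"Attributable improvement: {attributable:+.1%}")
```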

What Actually Works

Measuring prospective risk adjustment requires multiple metrics working together. Activity metrics show engagement. Documentation completion rates show provider response. Coding lag shows workflow improvement. Retrospective reduction shows gap closure. Provider satisfaction predicts sustainability. False positive rates measure accuracy. Audit defensibility ensures compliance. Cost per HCC measures efficiency.

Organizations that measure only activity metrics don’t know if prospective is working. Organizations that measure outcomes know whether their investment is paying off and where to improve.

Build your measurement framework before you launch prospective. Define what success looks like. Measure it systematically. Adjust based on what the data shows.
