Every risk adjustment vendor claims to use AI now. Some say “machine learning algorithms.” Others tout “natural language processing.” A few neuro-symbolic AI companies are positioning themselves as the next evolution in healthcare technology.
Most of this is marketing noise. But neuro-symbolic AI genuinely represents something different, and understanding what these companies are actually selling matters if you’re evaluating the technology.
The Black Box Problem with Traditional AI
Traditional AI companies in risk adjustment use machine learning models trained on thousands of charts. The model learns patterns: when it sees certain words or phrases, it suggests certain HCC codes.
The problem is that you can’t see inside the model’s reasoning. It’s a black box. The model says “this chart supports HCC 18” but can’t explain why in terms you can verify. It found statistical patterns in training data, but it can’t point to the specific clinical evidence that justifies the coding decision.
This creates an audit nightmare. Three years from now, CMS questions an HCC. You need to produce the evidence that supported the code. Your AI said to code it, but can’t explain its reasoning in terms of MEAT criteria. You’re stuck trying to reverse-engineer the justification.
Most traditional AI companies in healthcare can’t solve this problem because their technology wasn’t built for explainability. It was built for pattern matching.
The Rules-Based Alternative
Some companies solve this by building rules-based systems instead of pure machine learning. These systems encode clinical knowledge: “if you see a GFR below 60 and documentation of kidney disease, suggest CKD Stage 3.”
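To make that concrete, here’s a minimal sketch in Python of what one hand-written rule might look like (the function and field names are hypothetical, and it assumes lab values arrive pre-structured):

```python
# A single hand-coded rule. Real systems maintain hundreds of these.

def suggest_ckd_stage_3(chart: dict) -> bool:
    """Suggest CKD Stage 3 when labs and documentation line up."""
    gfr = chart.get("latest_gfr")            # assumes labs are already structured
    note = chart.get("note_text", "").lower()
    return (
        gfr is not None
        and 30 <= gfr < 60                   # Stage 3 eGFR range
        and "chronic kidney disease" in note
    )
```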
Rules are transparent. You can see exactly why the system made each suggestion. But rules are brittle. They break when they encounter documentation patterns they weren’t programmed to handle.
Rules also require constant maintenance. Every time coding guidelines change, someone needs to write new rules. That’s expensive and slow.
What Neuro-Symbolic AI Companies Actually Promise
Neuro-symbolic AI companies claim to combine both approaches. The “neuro” part is the machine learning that recognizes patterns in unstructured clinical text. The “symbolic” part is the rules-based reasoning that applies clinical knowledge and coding logic.
The machine learning component reads the clinical note and extracts relevant information: lab values, symptoms, medications, diagnoses. The symbolic reasoning component applies clinical rules to that information: does this combination of findings support a specific HCC code according to CMS guidelines?
The key selling point is explainability. These companies promise systems that can show you exactly what clinical evidence they found and how they applied coding rules to reach a conclusion. “I identified GFR of 42 in the lab results, found ‘chronic kidney disease’ in the assessment, and verified MEAT criteria through the documented treatment plan. This supports HCC 138.”
That sounds audit-defensible. You can verify the reasoning. You can produce the specific evidence.
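As a rough illustration of how the two layers fit together, here’s a minimal sketch in Python. Everything in it is hypothetical: the extraction model is stubbed with hardcoded output, and a real symbolic layer would encode far more of the CMS guideline logic than this single rule:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    kind: str                  # "lab", "diagnosis", or "treatment"
    text: str                  # the chart span it came from
    value: float | None = None

@dataclass
class Suggestion:
    hcc: str
    evidence: list[Finding]    # the findings that justify the code
    reasoning: list[str]       # human-readable steps, one per rule check

def extract_findings(note: str) -> list[Finding]:
    """Neural layer (stubbed): a real system would run an NLP model
    over the unstructured note. Hardcoded output for illustration."""
    return [
        Finding("lab", "GFR 42", value=42.0),
        Finding("diagnosis", "chronic kidney disease, stage 3"),
        Finding("treatment", "continue lisinopril, recheck renal panel in 3 months"),
    ]

def apply_ckd_rule(findings: list[Finding]) -> Suggestion | None:
    """Symbolic layer: apply an explicit rule, recording every step."""
    gfr = next((f for f in findings if f.kind == "lab" and "gfr" in f.text.lower()), None)
    dx = next((f for f in findings if f.kind == "diagnosis" and "kidney" in f.text.lower()), None)
    plan = next((f for f in findings if f.kind == "treatment"), None)
    if gfr and gfr.value is not None and 30 <= gfr.value < 60 and dx and plan:
        return Suggestion(
            hcc="HCC 138",
            evidence=[gfr, dx, plan],
            reasoning=[
                f"GFR of {gfr.value:g} falls in the Stage 3 range (30-59)",
                f"Documented diagnosis: '{dx.text}'",
                f"Treatment plan supports MEAT: '{plan.text}'",
            ],
        )
    return None
```

The important part of this structure is the Suggestion object: it carries the evidence and the reasoning steps alongside the code it proposes, instead of a bare confidence score.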
The Gap Between Promise and Reality
Here’s what neuro-symbolic AI companies often don’t tell you: building these systems is incredibly hard. It requires deep expertise in both AI/ML and clinical coding rules. The symbolic reasoning layer needs to encode accurate clinical knowledge, which requires domain experts who understand risk adjustment deeply.
Many companies claiming to offer neuro-symbolic AI are really just using traditional machine learning with some basic rules layered on top. They call it “neuro-symbolic” because it sounds sophisticated, but the symbolic reasoning component is too shallow to matter.
When evaluating neuro-symbolic AI companies, ask them to demonstrate explainability on your actual charts, not demo data. Pick a complex case and ask the system to explain why it suggested specific HCCs. Can it point to the clinical evidence? Can it show you the reasoning steps? Can it explain how the MEAT criteria were satisfied?
If the company can’t demonstrate this clearly, they’re using the label without delivering the substance.
Why This Matters for RADV Audits
During a RADV audit, you need to prove that documentation supports every coded HCC. Pure machine learning can’t help because it can’t explain its reasoning. Basic rules-based systems can help, but only if the rules correctly captured the relevant documentation pattern.
True neuro-symbolic AI can produce an audit trail: “Here’s the clinical evidence I found. Here’s the coding logic I applied. Here’s why this meets MEAT criteria.” That trail stays intact for years. When CMS questions a code, you can instantly produce the justification.
This dramatically reduces audit preparation time. Instead of sending coders back into old charts to reconstruct reasoning, you pull up the evidence trail the AI preserved when the code was originally assigned.
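One way that preservation might work, sketched against the hypothetical Suggestion object from the earlier example (the storage format here is illustrative, not prescriptive):

```python
import datetime
import json

def archive_trail(member_id: str, suggestion, path: str) -> None:
    """Write the evidence trail to durable storage at coding time,
    so it can be retrieved verbatim if CMS questions the HCC later."""
    record = {
        "member_id": member_id,
        "hcc": suggestion.hcc,
        "assigned_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "evidence": [f.text for f in suggestion.evidence],
        "reasoning": suggestion.reasoning,
    }
    with open(path, "a") as fh:   # append-only log, one JSON record per line
        fh.write(json.dumps(record) + "\n")
```

Whatever the storage choice, the property that matters is that the record is written when the code is assigned, not reconstructed at audit time.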
What to Actually Ask Neuro-Symbolic AI Companies
Don’t take vendors at their word that they’re using neuro-symbolic approaches. Ask for proof.
Request a detailed explanation of their architecture. How does the neural network component work? What does the symbolic reasoning layer actually do? How do they work together?
Ask about the knowledge base. How is clinical knowledge encoded? How often is it updated? Who maintains it? The symbolic reasoning is only as good as the clinical rules it’s built on.
Demand demonstrations with your real charts, not sanitized demos. See how the system handles ambiguous documentation, incomplete information, and edge cases. True neuro-symbolic systems should handle these gracefully while still explaining their reasoning.
Ask about their team’s expertise. Do they have people who understand both AI/ML and clinical coding? If it’s just engineers without clinical domain experts, they probably can’t build sophisticated symbolic reasoning.
The Vendor Landscape
The neuro-symbolic AI companies working in risk adjustment tend to be smaller, specialized firms rather than large general AI companies. Building these systems requires such specific domain expertise that generalists struggle to compete.
Look for companies whose leadership includes both technical AI expertise and deep risk adjustment experience. If their team is all engineers or all clinicians, they’re missing half the equation.
Also be wary of large platform companies claiming to have added neuro-symbolic capabilities recently. This technology takes years to build properly. A company that’s been doing pure machine learning for five years and suddenly claims neuro-symbolic capabilities in their latest release is probably rebranding rather than fundamentally changing their approach.
The Bottom Line
Neuro-symbolic AI isn’t magic, and many companies claiming to offer it aren’t delivering the real thing. But when implemented properly, this approach solves the explainability problem that pure ML creates and the brittleness problem that pure rules create.
If you care about audit defensibility and want AI assistance that can explain itself, look for neuro-symbolic AI companies that can actually demonstrate the explainability they’re promising. Ask hard questions, demand real demonstrations, and don’t accept marketing claims at face value.
The right technology can transform your risk adjustment operation. The wrong technology marketed with sophisticated terminology will just create expensive problems.