By Betsy Castillo — August 15, 2025
Healthcare organizations are rushing to embrace AI tools for a range of use cases. AI use in clinical data abstraction for medical registries, for example, is becoming increasingly popular due to its potential to drive cost savings and other benefits.
A number of healthcare technology vendors offer AI-based abstraction tools they promise will accelerate the clinical data abstraction process while cutting costs. Too often, however, these AI platforms – many of which are little more than retooled generative AI algorithms trained on generic data – were developed with no input from medical professionals, making them ill-suited for the complex and high-stakes world of healthcare.
There are two main reasons why clinical expertise is frequently missing from the development of AI tools for clinical abstraction. The first is naivety: the mistaken assumption that you can develop AI tools for healthcare without input from people with clinical backgrounds. The second, a real challenge for start-ups, is the high cost of bringing nurses and doctors on board to consult during product development and testing.
In healthcare, technology alone isn’t a solution; you need clinical understanding for the technology to work effectively. It doesn’t matter how fast an AI model is if it answers questions incorrectly, and without input from clinical experts, wrong answers are inevitable – and dangerous.
When AI Misses the Point
Consider some real-world scenarios: In one case, clinicians asked an AI model for a patient’s most recent ejection fraction (EF) percentage, a critical measure of heart function. The AI returned three clinical documents from the EHR, each with a different EF measure. If the clinicians took the most recent EF value, it would indicate there was no cause for concern. Fortunately, an experienced clinician knew better: An earlier, lower EF was the reason the patient was undergoing a procedure. The values were technically correct, yet only one made sense in the clinical context. That nuance is something AI can’t capture.
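To make the failure mode concrete, here is a minimal sketch of the guardrail a human-in-the-loop workflow implies. It is illustrative only, with hypothetical values and a made-up review threshold, not any vendor’s actual pipeline:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Measurement:
    value: float    # EF as a percentage
    recorded: date  # when the source document was written

# Hypothetical EF values extracted from three EHR documents.
ef_readings = [
    Measurement(55.0, date(2025, 6, 1)),   # most recent, reassuring
    Measurement(30.0, date(2025, 3, 10)),  # the clinically relevant low EF
    Measurement(52.0, date(2025, 5, 15)),
]

def pick_most_recent(readings):
    """The naive rule: the newest document wins."""
    return max(readings, key=lambda m: m.recorded)

def needs_clinician_review(readings, spread=10.0):
    """Guardrail: if extracted values disagree by more than `spread`
    percentage points, route the case to a human instead of auto-answering."""
    values = [m.value for m in readings]
    return max(values) - min(values) > spread

answer = pick_most_recent(ef_readings)
print(f"Naive answer: EF {answer.value}% (recorded {answer.recorded})")
if needs_clinician_review(ef_readings):
    print("Conflicting EF values; flag for clinician review")
```

The naive rule returns the reassuring 55% and buries the 30% reading that explains why the patient is undergoing a procedure; the flag is what hands the decision back to a clinician.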
Clinical context is not optional; it’s foundational. AI finds the trends, but a clinician can tell you what matters in a crisis, or whether the data even makes sense.

Here’s another example: When asked if a patient returned to the operating room (OR), an AI model responded “yes” based on the patient’s post-surgical visit to the endoscopy suite. While that suite is affiliated with the OR, it’s not the same thing. A clinician immediately noticed the error in interpretation; AI did not.
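The OR example boils down to a semantic mapping error: treating “affiliated with” as “the same as.” Here is a toy sketch (hypothetical location codes, not a real EHR schema) of how a loose department-level match produces the wrong “yes” while a strict check does not:

```python
# Hypothetical facility hierarchy: the endoscopy suite sits under the
# same perioperative department as the OR, but it is not the OR.
DEPARTMENT = {
    "endoscopy_suite": "perioperative_services",
    "operating_room": "perioperative_services",
}

def returned_to_or_loose(visit_location: str) -> bool:
    # Wrong: anything in the same department counts as "the OR".
    return DEPARTMENT.get(visit_location) == DEPARTMENT["operating_room"]

def returned_to_or_strict(visit_location: str) -> bool:
    # Right: only an actual OR visit answers the registry question.
    return visit_location == "operating_room"

print(returned_to_or_loose("endoscopy_suite"))   # True  (the AI's error)
print(returned_to_or_strict("endoscopy_suite"))  # False (the clinician's read)
```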
The Role of Clinical Expertise
AI today has a critical limitation: It can process data, but it can’t infer or apply judgment. An isolated lab value has no meaning to AI; it has to be put in context. AI will give you information; you have to use your clinical knowledge to make a decision.
That’s why we need humans in the loop, both to help develop AI-based healthcare models and to provide oversight during clinical data abstraction. Clinical experts can interpret registry questions, know where relevant data may be hidden, and recognize when documentation is incomplete or misleading.
These clinically trained abstractors have been in the OR, standing next to doctors during procedures. They have real-world experience that AI can never possess. The human in the loop is there to oversee the entire workflow and to validate AI results.
Biases and Blind Spots
Data may contain hidden biases that AI can expose and amplify. Let’s say an abstractor checks a box indicating that a specific physician gave a patient aspirin, even though there’s no documentation: the clinician knows from experience that this doctor always gives patients aspirin. AI, in contrast, will incorrectly conclude that the doctor’s patients aren’t being given aspirin, because there’s no documentation confirming it.
This process can result in higher levels of quality and compliance because physicians can take corrective action to improve documentation. But it takes a human to recognize the pattern; AI can’t do it by itself.
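As a rough illustration of that pattern-spotting (the field names and 80% threshold here are assumptions for the sketch, not Carta’s actual method), a program could compare per-physician documentation rates and flag outliers for review, instead of treating missing documentation as missing care:

```python
from collections import defaultdict

# Hypothetical abstracted cases: (physician, aspirin_documented)
cases = [
    ("dr_lee", True), ("dr_lee", True), ("dr_lee", True),
    ("dr_shah", False), ("dr_shah", False), ("dr_shah", True),
]

def documentation_rates(cases):
    """Share of each physician's cases with aspirin documented."""
    tallies = defaultdict(lambda: [0, 0])  # physician -> [documented, total]
    for physician, documented in cases:
        tallies[physician][0] += int(documented)
        tallies[physician][1] += 1
    return {p: doc / total for p, (doc, total) in tallies.items()}

# Flag low rates as possible charting gaps, not as proof the drug
# was never given.
for physician, rate in documentation_rates(cases).items():
    if rate < 0.8:  # illustrative threshold
        print(f"{physician}: only {rate:.0%} documented; review for a charting gap")
```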
While good AI alone doesn’t guarantee good abstraction, AI can help any organization’s abstraction program improve by highlighting inconsistencies. This means a provider with poor or incomplete documentation can still benefit from AI, because the model forces them to evaluate definitions and validate inputs.
Conclusion
Some healthcare AI vendors may view abstraction as simply a process of data retrieval and matching labels to values. It’s far more than that: Abstraction is clinical storytelling. Quality abstraction translates raw clinical data into a coherent view of a patient’s journey. Though AI is an incredibly powerful technology, it lacks a clinician’s insight. That’s why it’s critical that humans remain in the healthcare AI loop.
About the Author: Betsy Castillo is the director of clinical data abstraction at Carta Healthcare.
Originally published on dotmed.com
At Carta Healthcare, we believe high-quality data is the foundation for better healthcare outcomes. Traditional clinical data abstraction methods are labor-intensive, time-consuming, and costly. Our hybrid intelligence approach combines advanced AI with expert clinicians to deliver accurate, actionable data, reduce costs, save time, and improve efficiency for hospitals and health systems nationwide – with 100% customer retention. Recognized with industry honors such as the Merit Award for Best Use of AI in Healthcare, the CB Insights Digital Health 50, and the BIG Innovation Award, Carta Healthcare is redefining the future of healthcare data management. Discover more at carta.healthcare