AI-Powered Diagnostic Imaging for Regional Healthcare Network
Client: MedVista Health Systems · November 15, 2025
What did MedVista Health Systems need from this AI project?
MedVista needed an AI-assisted radiology workflow that could absorb rising imaging volume across 14 hospitals and 40 outpatient facilities without adding radiologist headcount the labor market could not supply. The mandate was specific: triage faster, surface critical findings within minutes of image acquisition, and standardize reporting turnaround across the network — all without disrupting the radiologist-led workflow or putting the network on the wrong side of HIPAA.
The pressure was structural. Radiology volumes were climbing year over year while the supply of subspecialty radiologists was flat or shrinking. Critical findings — intracranial hemorrhages, large pulmonary emboli, acute fractures — were occasionally sitting in queues longer than was clinically acceptable. Generic vendor AI tools either covered a narrow modality, required shipping PHI to third-party clouds, or integrated so awkwardly with the PACS and reporting stack that radiologists ignored them. MedVista needed a system tightly integrated with its existing workflow, validated against subspecialty radiologist judgment, and deployed in a way that kept patient data under the network's control.
How did Clearframe Labs approach the build?
Phase 1: Data foundation and de-identification
We worked with MedVista's IT and clinical informatics teams to build a secure data pipeline that pulled DICOM images from the network's PACS, ran de-identification on every header and pixel-level identifier, and prepared training datasets inside MedVista's environment. Over 2.3 million annotated images across chest X-ray, CT, and MRI modalities were curated under subspecialty radiologist oversight, with a case mix deliberately matched to the network's real demographics — not a public benchmark distribution that would have biased the models.
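To make the de-identification step concrete, here is a minimal sketch of the header and pixel-level scrubbing, assuming pydicom and uncompressed single-frame images; the `deidentify` helper, the abbreviated tag list, and the `burned_in_rows` parameter are illustrative stand-ins, not MedVista's production profile.

```python
# Minimal de-identification sketch (illustrative, not the production pipeline).
# Assumes uncompressed single-frame DICOM; the tag list is a small sample of
# the HIPAA Safe Harbor identifiers, not the full profile used in production.
import hashlib
import pydicom

PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def deidentify(src_path: str, dst_path: str, burned_in_rows: int = 0) -> str:
    ds = pydicom.dcmread(src_path)

    # Stable pseudonym so studies from the same patient stay linkable
    # without exposing the original identifier.
    pseudo_id = hashlib.sha256(str(ds.get("PatientID", "")).encode()).hexdigest()[:16]

    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""
    ds.PatientID = pseudo_id
    ds.remove_private_tags()  # private vendor tags often carry identifiers

    # Pixel-level identifiers: black out a burned-in header band when the
    # modality is known to annotate patient info onto the image itself.
    if burned_in_rows > 0:
        pixels = ds.pixel_array
        pixels[:burned_in_rows, :] = 0
        ds.PixelData = pixels.tobytes()

    ds.save_as(dst_path)
    return pseudo_id
```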
Phase 2: Modality-specific computer vision models
We trained a suite of CNN-based detection models, each scoped to a single clinical task rather than chasing a single multi-task model: chest X-ray triage and critical-finding flagging, lung nodule detection and characterization on CT, intracranial hemorrhage detection on head CT, and fracture detection on extremity X-rays. Each model was validated against a held-out test set scored by a panel of subspecialty radiologists, with sensitivity targeted to clinical thresholds and specificity tuned to keep false-positive rates inside radiologists' tolerance for noise.
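The operating-point tuning works roughly like this sketch, which picks the decision threshold that satisfies a clinical sensitivity floor on a radiologist-scored validation set; `pick_operating_point` and the 0.95 target are hypothetical names and values, not the thresholds used in production.

```python
# Illustrative operating-point selection: meet a clinical sensitivity floor
# on a held-out, radiologist-scored set, then take the candidate threshold
# with the lowest false-positive rate so alert volume stays tolerable.
import numpy as np
from sklearn.metrics import roc_curve

def pick_operating_point(y_true, scores, target_sensitivity=0.95):
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    # Candidate points whose sensitivity (TPR) meets the clinical floor.
    ok = np.where(tpr >= target_sensitivity)[0]
    if ok.size == 0:
        raise ValueError("model cannot reach the target sensitivity")
    best = ok[np.argmin(fpr[ok])]
    # Returns (threshold, sensitivity, specificity) at the chosen point.
    return thresholds[best], tpr[best], 1.0 - fpr[best]

# Usage with a validation set:
#   thr, sens, spec = pick_operating_point(y_val, model_scores, 0.95)
```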
Phase 3: Clinical workflow integration
We integrated the AI directly into MedVista's existing PACS and reporting workflow rather than building a parallel UI. When a radiologist opens a case, the AI's pre-read analysis, priority score, and preliminary measurements appear inline alongside the images. Critical findings escalate immediately — a flagged head CT can move from acquisition to a radiologist's attention within minutes, not hours. Crucially, the AI never auto-finalizes a report; it surfaces evidence and the radiologist decides.
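A simplified illustration of the routing logic: map a pre-read to a worklist priority and an escalation decision. The `PreRead` shape, `CRITICAL_FINDINGS` set, and threshold are assumptions for the sketch; the real integration writes into the PACS worklist rather than returning a dict.

```python
# Sketch of triage routing: critical, high-confidence findings jump the
# queue and page the on-call radiologist; everything else keeps FIFO order.
from dataclasses import dataclass

CRITICAL_FINDINGS = {"intracranial_hemorrhage", "pulmonary_embolism"}

@dataclass
class PreRead:
    study_uid: str
    finding: str          # top model finding for the study
    confidence: float     # model score at the tuned operating point
    measurements: dict    # preliminary measurements shown inline

def route(pre_read: PreRead, escalation_threshold: float = 0.9) -> dict:
    critical = (pre_read.finding in CRITICAL_FINDINGS
                and pre_read.confidence >= escalation_threshold)
    return {
        "study_uid": pre_read.study_uid,
        "worklist_priority": "STAT" if critical else "ROUTINE",
        "notify_on_call": critical,
        # The AI never finalizes: output is evidence for the radiologist.
        "auto_finalize": False,
    }
```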
Phase 4: MLOps, monitoring, and continuous validation
We deployed monitoring for model performance, data drift, and system health across every site, with automated alerts when a model's confusion matrix shifts in production. Retraining pipelines incorporate new validated cases on a quarterly cadence, with shadow evaluation and clinical review before any model promotion. Every inference is logged with provenance — model version, input hash, output, radiologist disposition — to support audit and any future regulatory inquiry.
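The provenance log can be pictured as an append-only record per inference, along these lines; the JSONL format and field names are assumptions for the sketch, and production would write to a durable, access-controlled store.

```python
# Minimal provenance record per inference, assuming an append-only JSONL log.
# Field names are illustrative.
import hashlib
import json
import time

def log_inference(log_path: str, model_version: str, dicom_bytes: bytes,
                  output: dict, disposition: str | None = None) -> dict:
    record = {
        "ts": time.time(),
        "model_version": model_version,
        # Hashing the input ties the log entry to the exact study version
        # that was scored, without storing PHI in the log itself.
        "input_sha256": hashlib.sha256(dicom_bytes).hexdigest(),
        "output": output,
        # Filled in later when the radiologist signs the final report;
        # agreement or disagreement feeds the drift monitoring.
        "radiologist_disposition": disposition,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```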
What were the results?
The system has been in production for 14 months across the network, processing over 800,000 studies, with measurable impact on accuracy, turnaround, and the speed of critical-finding escalation.
- Diagnostic accuracy improvement: 31% versus pre-AI baseline on the targeted findings
- Average reporting time reduction: 45% across covered modalities
- Critical finding detection rate: 98.7% on the validated label set
- Annual cost savings: $2.1M from improved throughput and reduced re-reads
The financial number understates the clinical impact. Critical findings now escalate within minutes of acquisition rather than waiting in a queue, and radiologists report higher job satisfaction because the AI absorbs the triage and measurement work that consumes hours without exercising their judgment.
What technical decisions made this work?
- Modality-specific models over a single multi-task model: four scoped CNNs, each tuned to a single clinical task, outperformed every general-purpose alternative we benchmarked. The complexity cost of running four models is trivial compared to the accuracy gain on the findings that actually matter.
- Inline PACS integration, not a parallel UI: the AI's outputs appear where the radiologist is already working, alongside the images. Vendor tools that required a second tab or a separate workstation got ignored in pilots; integrated tooling got adopted.
- Subspecialty radiologists as validators, not annotators: test sets were scored by subspecialty panels and operating points were tuned to clinical thresholds, not Kaggle metrics. This is the difference between a model that wins on a public benchmark and one that earns clinician trust in production.
- Deployed inside MedVista's environment: the entire pipeline — training, inference, monitoring — runs in the network's infrastructure. PHI never leaves, which solved the data-residency objection and simplified the HIPAA posture.
- Closed-loop validation via radiologist disposition: every AI output is logged against the radiologist's final read, giving us a continuous, real-world performance signal. That data feeds quarterly retraining and is the early-warning system for drift.
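As a rough sketch of that closed-loop signal: treat the radiologist's final read as ground truth and watch rolling sensitivity over recent dispositions, alerting when it dips below a floor. The window size, minimum sample count, and 0.95 floor below are illustrative, not MedVista's actual values.

```python
# Closed-loop drift early warning: rolling sensitivity computed from
# radiologist dispositions, with the final read treated as ground truth.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 500, sensitivity_floor: float = 0.95):
        # Each entry is (ai_flagged, radiologist_positive).
        self.window = deque(maxlen=window)
        self.sensitivity_floor = sensitivity_floor

    def record(self, ai_flagged: bool, radiologist_positive: bool) -> bool:
        """Add one disposition; return True if an alert should fire."""
        self.window.append((ai_flagged, radiologist_positive))
        # AI flags on the cases the radiologist confirmed as positive.
        flags_on_positives = [flag for flag, positive in self.window if positive]
        if len(flags_on_positives) < 20:  # too few positives to judge yet
            return False
        sensitivity = sum(flags_on_positives) / len(flags_on_positives)
        return sensitivity < self.sensitivity_floor
```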
Lessons for teams considering similar projects
- Clinical AI succeeds as a workflow augmentation tool, not a replacement. The radiologist remains the decision-maker; the AI absorbs the mechanical work that fills their day.
- Subspecialty validation is non-negotiable. A model that performs on a public dataset but has not been scored by the specialists who will use it should never reach production.
- Inline workflow integration beats every standalone UI. If the AI is not visible where the clinician is already working, it will be ignored regardless of how accurate it is.
- Continuous monitoring and retraining are a requirement, not an option, for medical AI. Imaging characteristics drift with new scanners, new protocols, and new demographics; a system that does not retrain quietly degrades.
- ROI in clinical AI is real, but the strongest returns are in turnaround and clinician satisfaction, not headcount reduction. The honest pitch is "your radiologists do better work faster," not "you need fewer radiologists."
What's next
MedVista is expanding the same architecture into mammography, cardiac imaging, and pathology, using the de-identification pipeline and clinical validation framework as the template for every new modality the network onboards.