AI-navigated medical imaging
Patients receive DICOM files with no way to view them outside vendor software. An AI-navigated viewer lets you ask 'show me the hippocampus' and it jumps to the right slice.
The story
I was waiting on an MRI. Not the scan, the reading. The technician handed me a zip with a few thousand .dcm files and said the radiologist would walk me through it in a week. A week of staring at a folder I could not open.
The imagery is mine, the diagnosis is mine, but the tooling is built for a radiology workstation (thick vendor clients, windowing controls, series pickers). Anyone without that training just has a zip and a wait.
So I built what I wished I’d had that week: a viewer a patient can actually drive, in plain English.
What it does
You scroll the series like any DICOM viewer. The difference is the chat and voice agent on the side that can drive the viewer for you: “show me the hippocampus”, “jump to where the implant is visible”, “what plane is this slice in?”. It pulls up the right slice, in the right series, and tells you what you’re looking at.
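Under the hood this is a tool-calling loop: the agent turns a request like "show me the hippocampus" into viewer commands. A minimal sketch of that dispatch, where the anatomy table, `LOOKUP_ANATOMY`, and the argument names are hypothetical stand-ins (the real agent resolves anatomy with a model, not a static lookup):

```python
# Hypothetical anatomy index: structure -> (series name, slice index).
ANATOMY_INDEX = {
    "hippocampus": ("T1_axial", 87),
    "implant": ("CT_axial", 142),
}

def handle_tool_call(name, args, viewer_state):
    """Apply one agent tool call to a mutable viewer-state dict."""
    if name == "GO_TO_SLICE":
        viewer_state["slice"] = args["index"]
    elif name == "SWITCH_SERIES":
        viewer_state["series"] = args["series"]
    elif name == "LOOKUP_ANATOMY":
        series, index = ANATOMY_INDEX[args["structure"]]
        # Chain two calls: switch to the right series, then jump to the slice.
        handle_tool_call("SWITCH_SERIES", {"series": series}, viewer_state)
        handle_tool_call("GO_TO_SLICE", {"index": index}, viewer_state)
    return viewer_state

state = {"series": "T1_axial", "slice": 0}
handle_tool_call("LOOKUP_ANATOMY", {"structure": "hippocampus"}, state)
# state is now {"series": "T1_axial", "slice": 87}
```

The point of the indirection is that the chat and voice front-ends share one dispatcher, so a spoken request and a typed one drive the viewer identically.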
The modes
The viewer grew into a few modes, each aimed at a different person in the room:
- Viewer: the primary mode. The agent has tools for GO_TO_SLICE, SWITCH_SERIES, ANNOTATE, and anatomy lookup.
- Learn: guided anatomy tour, with a voice tutor leading a structured walkthrough.
- Report: radiological report bound to the slices. Every finding links to the slice where it’s best visible.
- Doctor: wizard for clinicians (patient info, image selection, analysis).
- Quiz: self-test scoring mode for students.
- Benchmark: multi-model bench across Claude, Gemini, OpenAI, HuggingFace, and Cohere.
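The Viewer-mode tools above can be sketched as function-calling schemas in the JSON-schema style most of the benchmarked providers accept. The exact field names and parameters here are assumptions, not the project's actual definitions:

```python
# Hypothetical tool schemas for the Viewer-mode agent (field names assumed).
VIEWER_TOOLS = [
    {
        "name": "GO_TO_SLICE",
        "description": "Jump the viewer to a slice index in the current series.",
        "parameters": {
            "type": "object",
            "properties": {"index": {"type": "integer", "minimum": 0}},
            "required": ["index"],
        },
    },
    {
        "name": "SWITCH_SERIES",
        "description": "Switch the viewer to a named DICOM series.",
        "parameters": {
            "type": "object",
            "properties": {"series": {"type": "string"}},
            "required": ["series"],
        },
    },
    {
        "name": "ANNOTATE",
        "description": "Draw a labelled marker on the current slice.",
        "parameters": {
            "type": "object",
            "properties": {
                "label": {"type": "string"},
                "x": {"type": "number"},
                "y": {"type": "number"},
            },
            "required": ["label", "x", "y"],
        },
    },
]
```

Keeping the tool surface this small is what makes the multi-model Benchmark mode cheap: every provider gets the same three schemas plus anatomy lookup, so runs are comparable.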