Tokenize patient data before it ever touches a language model. The LLM sees placeholders like [NAME_001], never real PHI. Responses are re-hydrated locally — so clinicians get the output they need, without data leaving your infrastructure.
Patient John Doe, DOB 05/14/1968, MRN 12345.
Reports headache x3 days, seen by Dr. Chen.
Patient [NAME_001], DOB [DATE_001], MRN [ID_001].
Reports headache x3 days, seen by [NAME_002].
John Doe, DOB 05/14/1968, presents with a three-day history of headache. Evaluated by Dr. Chen…
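The round trip above can be sketched in a few lines of Python. This is an illustrative sketch, not Clinitect's actual engine: the regex patterns are toy rules tuned to this one example, and a real deployment layers clinical NER on top.

```python
import re

# Illustrative patterns only -- a real engine layers clinical NER
# (e.g. Comprehend Medical) on top of regex rules like these.
PATTERNS = {
    "NAME": r"(?<=Patient )[A-Z][a-z]+ [A-Z][a-z]+|Dr\. [A-Z][a-z]+",
    "DATE": r"\b\d{2}/\d{2}/\d{4}\b",
    "ID":   r"(?<=MRN )\d+",
}

def tokenize(text):
    """Replace each PHI entity with a deterministic token like [NAME_001]."""
    token_map, counters = {}, {}
    for kind, pattern in PATTERNS.items():
        def repl(m, kind=kind):
            counters[kind] = counters.get(kind, 0) + 1
            token = f"[{kind}_{counters[kind]:03d}]"
            token_map[token] = m.group(0)   # real value stays local, in memory
            return token
        text = re.sub(pattern, repl, text)
    return text, token_map

def rehydrate(text, token_map):
    """Swap tokens in the model output back for the real values."""
    for token, value in token_map.items():
        text = text.replace(token, value)
    return text
```

Only the output of `tokenize` ever crosses the network; `token_map` stays behind and is used once, to re-hydrate the response.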
Built on infrastructure trusted by every major health system
Architecture
Every request flows through the same three-stage pipeline — verifiable, auditable, and built on AWS services covered under the same BAA your hospital already holds.
Amazon Comprehend Medical plus Clinitect's regex layer catches names, dates, MRNs, SSNs, addresses, NPIs, and insurance IDs. Every entity gets a deterministic token. Nothing is guessed.
The LLM receives only tokens. Real values stay in your AWS account, in memory, for the lifetime of a single request. No training, no logging, no third parties.
Tokens in the model output are swapped back with real values before the note ever leaves your server. Clinicians get a complete, accurate note — not a redacted one.
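The detection stage can be sketched against Comprehend Medical's entity offsets. `detect_phi` is the real Comprehend Medical operation; the surrounding function names and the `[TYPE_NNN]` token format here are illustrative, not Clinitect's actual code.

```python
def detect_phi_entities(text):
    # Managed PHI detection; the call runs inside your own AWS account/region.
    import boto3  # deferred import so the offline mapper below needs no AWS
    client = boto3.client("comprehendmedical")
    return client.detect_phi(Text=text)["Entities"]

def tokenize_entities(text, entities):
    # Assign deterministic tokens in reading order, then splice them in
    # right-to-left so earlier offsets stay valid.
    counters, token_map, spans = {}, {}, []
    for e in sorted(entities, key=lambda e: e["BeginOffset"]):
        kind = e["Type"]                               # e.g. NAME, DATE, ID
        counters[kind] = counters.get(kind, 0) + 1
        token = f"[{kind}_{counters[kind]:03d}]"
        token_map[token] = text[e["BeginOffset"]:e["EndOffset"]]
        spans.append((e["BeginOffset"], e["EndOffset"], token))
    for start, end, token in reversed(spans):
        text = text[:start] + token + text[end:]
    return text, token_map
```

The same token always maps to the same entity within a request, so the model can refer to `[NAME_001]` consistently across a long note.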
How it works
Six steps. Every request. No exceptions.
Provider pastes protected health information into the extension or web app.
The engine identifies, scrubs, and tokenizes every PHI entity on-device.
Only tokenized content is sent to the model (Bedrock / Claude / Llama / Nova).
The model processes the tokenized prompt and returns structured output.
Tokens are mapped back to the original PHI securely, on the server.
The clinician receives a complete response with PHI restored in place.
Zero retention
PHI never leaves your infrastructure
Token maps live in memory for the lifetime of a single request, then are destroyed. Nothing is written to disk. Nothing is logged. Nothing is sent beyond the AWS region you deploy into.
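That retention rule can be expressed as a scoping rule: the token map is local to the request handler, cleared on exit, and never serialized. A minimal sketch — the handler shape and stage names are illustrative stand-ins, not Clinitect's real interfaces:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_token_map():
    # The map exists only inside the request: never written to disk,
    # never logged, cleared the moment the request completes.
    token_map = {}
    try:
        yield token_map
    finally:
        token_map.clear()

def handle_request(tokenize, call_model, rehydrate, raw_note):
    # tokenize / call_model / rehydrate are stand-ins for the real stages.
    with ephemeral_token_map() as token_map:
        safe_prompt = tokenize(raw_note, token_map)     # PHI -> tokens
        model_output = call_model(safe_prompt)          # model sees tokens only
        return rehydrate(model_output, token_map)       # tokens -> PHI
```

Because the map is never passed outside the `with` block, there is nothing to retain, rotate, or breach after the response is returned.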
Use cases
Turn an hour-long veteran interview transcript into a VA-ready, structurally correct TBI write-up in under a minute.
Convert dictated or typed encounter transcripts into Chief Complaint / HPI / Assessment / Plan structure, ready for the EHR.
Upload a scanned intake form or a PDF. Clinitect runs OCR, de-identifies, and converts the content into a structured note.
Security & compliance
Every architectural choice is visible, logged, and auditable — so your compliance team has answers the first time they ask.
PHI is never sent to an LLM in raw form — not Claude, not GPT, not anyone. Tokenization happens before the first network hop.
Tokenization and re-hydration happen inside your AWS account. The token map never leaves the machine handling the request.
Works with Amazon Nova, Claude, Llama, and more. Swap models with a single config change — no architectural rework.
Runs on AWS services covered by the standard BAA. No third-party vendors in the request path. Full audit trail optional.
No signup. No setup. Paste a transcript, pick a model, watch the tokens flow. All without sending a byte of PHI to an LLM.
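The model-swap claim above reduces to a single identifier in the request config. A sketch assuming Bedrock's Converse API — the model IDs are examples drawn from the public Bedrock catalog, so verify availability in your region before relying on them:

```python
# Swapping models is one config value; the tokenized prompt and the
# re-hydration stage do not change. IDs below are examples -- check the
# current Bedrock catalog for your region.
MODEL_IDS = {
    "nova":   "amazon.nova-pro-v1:0",
    "claude": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "llama":  "meta.llama3-70b-instruct-v1:0",
}

def call_model(model_key, tokenized_prompt):
    import boto3  # deferred so the config above is usable without AWS installed
    client = boto3.client("bedrock-runtime")
    resp = client.converse(  # Converse is Bedrock's model-agnostic chat API
        modelId=MODEL_IDS[model_key],
        messages=[{"role": "user", "content": [{"text": tokenized_prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Because every model sees the same tokenized prompt, switching from Claude to Nova is a one-line change to `MODEL_IDS` lookup, with no change to the privacy pipeline.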