AI Prescribing Has Entered the Chat
This past week, the tech world has been buzzing with updates from OpenAI and Anthropic, but today’s newsletter focuses on the news out of Utah where Doctronic has officially become the first company in the U.S. to autonomously refill medications using an AI-only solution.
Through Utah’s “regulatory sandbox,” Doctronic is now legally authorized to act as the prescriber for a formulary of 191 routine medications covering hypertension, cholesterol, antidepressants, and birth control. While the scope is limited to refills (and excludes higher-risk categories like injectables or controlled substances), the implications are significant.
The Safeguards: It’s not a total “black box” rollout. The state is requiring a human clinician to review the first 250 refills in each drug class before the AI is granted full autonomy. This tiered validation ensures the algorithm’s logic holds up against real-world clinical nuance before the “human-in-the-loop” is removed.
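To make the tiered-validation idea concrete, here is a minimal sketch of how such a gate could work in software. This is purely illustrative: the class name, counter structure, and routing labels are my assumptions, not Doctronic's implementation; only the 250-review threshold per drug class comes from the reported policy.

```python
# Hypothetical sketch of a tiered-validation gate: refills in each drug
# class go to human review until a threshold of reviewed refills is met,
# after which the AI is treated as autonomous for that class.
# (Names and structure are illustrative assumptions.)

REVIEW_THRESHOLD = 250  # human-reviewed refills required per drug class


class TieredValidationGate:
    """Routes refill requests to human review until a drug class is validated."""

    def __init__(self, threshold: int = REVIEW_THRESHOLD):
        self.threshold = threshold
        self.reviewed: dict[str, int] = {}  # drug class -> refills reviewed so far

    def route(self, drug_class: str) -> str:
        count = self.reviewed.get(drug_class, 0)
        if count < self.threshold:
            self.reviewed[drug_class] = count + 1
            return "human_review"  # a clinician must sign off on this refill
        return "autonomous"        # threshold met: AI handles the refill


gate = TieredValidationGate(threshold=3)  # tiny threshold just for the demo
print([gate.route("statins") for _ in range(5)])
# the first 3 statin refills route to human review, the rest run autonomously
```

Note that validation is tracked per drug class, not globally, so a newly added class starts back at zero reviewed refills.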
Why Refills are a Rational Place to Start
My take? This makes sense as an entry point for AI prescribing. Most of the practice of medicine lives in a gray area where there is no obvious answer, only judgment calls made from a host of factors. Refills, on the other hand, are highly algorithmic, relatively low-risk, and demand little critical thinking. By the time a patient reaches an AI refill workflow, a clinician has already made a diagnosis, selected a medication, and confirmed initial tolerance. There is no need for triage, no need for a net-new diagnosis, and little ambiguity, since patients rarely present with new symptoms when simply requesting a maintenance refill.
Further, requirements like lab monitoring intervals or refill limits are easily programmable. In the current state of medicine, many human-signed refills probably get less scrutiny than a rigorous AI audit will provide. At a cost of just $4 per AI-refill (compared to $39 for a human video visit), the efficiency gains for patients are hard to ignore.
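As a sketch of what "easily programmable" means here, the hard eligibility rules for a refill can be expressed as a few deterministic checks. The function below is a hypothetical example under assumed inputs (last lab date, refills remaining, controlled-substance flag); it is not Doctronic's logic, just an illustration that these guardrails reduce to code.

```python
# Illustrative refill-eligibility check: hard rules only, no clinical
# judgment. All field names and thresholds are assumptions for the sketch.

from datetime import date, timedelta


def refill_eligible(drug_class: str,
                    last_lab: date,
                    refills_remaining: int,
                    today: date,
                    lab_interval_days: int = 365,
                    controlled: bool = False) -> tuple[bool, str]:
    """Return (eligible, reason) for an autonomous maintenance refill."""
    if controlled:
        return False, "controlled substances excluded from AI refills"
    if refills_remaining <= 0:
        return False, "no refills remaining; new prescription required"
    if today - last_lab > timedelta(days=lab_interval_days):
        return False, "lab monitoring overdue; route to a clinician"
    return True, "ok"


ok, reason = refill_eligible("statin", last_lab=date(2025, 3, 1),
                             refills_remaining=2, today=date(2025, 9, 1))
```

Any failed check simply routes the request back to a human, which is exactly the kind of auditable, rule-bound decision an algorithm handles well.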
The Limits of Autonomy
So, should I start looking for a new job because AI is about to replace me? Not yet.
It’s important to recognize what this is and what it isn’t. Refills represent a very small slice of medical decision-making. Most of it lives in the gray: evaluating new symptoms, evolving presentations, and judgment calls that require context. AI algorithms have proven excellent at passing standardized board exams, but ask any practicing doctor whether board questions resemble what they see in their exam room every day and you’ll quickly learn that neatly packaged standardized cases do not reflect reality.
Thus, I would not expect a rapid leap from autonomous refills to fully autonomous medical care. AI can serve as an amazing clinical decision support tool for clinicians, but to think it is ready to practice medicine autonomously across the board is a stretch. New use cases should continue to focus on low-risk, algorithmic situations, perhaps expanding into asynchronous care for low-acuity conditions like simple UTIs or yeast infections.
The MedMal Frontier: Who is Accountable?
The other fascinating part of the Doctronic news isn’t just the code; it’s the coverage. Doctronic secured medical malpractice insurance that holds the AI system to the same standard of accountability as a physician. This raises a localized version of a much larger question: how do you underwrite an algorithm?
As a practicing doctor, I know that no one “bats 1.000.” Adverse events are a statistical certainty in medicine. When they happen, they are emotionally charged, and our litigious society demands a “who” to hold accountable. By operating under a Regulatory Mitigation Agreement, Doctronic is essentially acting as the “prescriber of record,” assuming the liability that a doctor usually carries.
How liability is assigned and how risk is priced are questions that will need to be answered as AI progresses. Given that prescribing authority is regulated by states, Utah’s AI sandbox approach may well become a blueprint for other states.
Next Week: I’ll go much deeper into how medical malpractice insurers are thinking about underwriting generative AI and the “black box” problem, so stay tuned for more on this.