
Medical error and AI: who is liable when something goes wrong?

22 April 2026 · 8 min read · Clinovus AI Team

On 7 April 2026, Medscape asked directly: "Medical errors involving AI: who is responsible?" That same week, the Académie nationale de médecine published a landmark report on the subject.[1] The legal answer is becoming clearer, and it directly concerns every physician using AI software.

The governing legal framework: the EU AI Act

Adopted on 13 June 2024, the EU AI Act (Regulation (EU) 2024/1689) is the world's first dedicated AI legal framework.[2] It introduces two key actors:

- The provider: the entity that develops the AI system and places it on the market, responsible for design, testing, documentation and incident reporting.
- The deployer: the entity that uses the system professionally, here the physician or healthcare institution, responsible for compliant use, human oversight and validation of outputs.

The EU AI Act does not create liability for AI as an autonomous entity. Liability remains human, shared between provider and deployer depending on the nature of the failure.[2]

The liability chain

[Figure: liability chain in case of AI-related medical error. AI developer (design and reliability) → software vendor (distribution and market launch) → institution (deployment and training) → physician (end user, liable, Art. 40 MedPA) → patient (affected person). Under the EU AI Act (Regulation (EU) 2024/1689), responsibility is shared: the provider answers for design, testing, documentation and incident reporting; the deployer (physician or institution) for compliant use, human oversight and validation. The physician always remains responsible for the final medical act.]
Medical AI liability chain: developer, vendor, institution and physician may all be implicated

What the Académie nationale de médecine says

Position of the Académie nationale de médecine (2026)

The healthcare professional is the last link in the chain. Their liability arises only if they made an error in using the AI, in particular by failing to validate outputs or by using an inappropriate tool. The duty of care remains the standard and now includes verifying AI recommendations.[1]

When provider liability applies

The AI provider is liable when the harm arises from an intrinsic system defect: a biased algorithm, inadequate training data, a software bug or a lack of clinical validation.

When physician liability applies

The physician is liable when the harm stems from how the tool was used: failing to validate an output before acting on it, or using a tool unsuited to the clinical task. Beyond the legal rule, a microeconomic study published in February 2026 finds that full physician liability is also the most effective regime for incentivising providers to improve tool quality: a fully exposed physician is willing to pay more for a more reliable AI, which creates positive pressure on the industry.[3]

The 4 practices that reduce risk

Four essential practices keep legal and clinical risk under control when using medical AI:

  1. Systematically validate: check every AI output before use.
  2. Choose a certified tool: CE marking, class IIa or IIb.
  3. Sign a data processing agreement: provider responsibilities clearly defined.
  4. Document usage: keep the AI's contribution to each decision traceable.

What "medical validation" means in practice

Validating an AI output does not mean simply reading it. It means:

- checking the output against your own clinical assessment and the patient record;
- correcting or completing it before any use, never copy-pasting without review;
- taking responsibility for the final content as your own medical act;
- documenting the AI's contribution in the record if it influenced a decision.

A minimal sketch of what such a validation gate can look like in software follows this list.
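By way of illustration only, here is a minimal sketch of a validation gate, assuming a simple draft/validated workflow. The names (AIOutput, validate, finalize, dr.example) are hypothetical, not taken from any specific product; the point is that a draft cannot enter the record until a named physician has reviewed it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIOutput:
    """Hypothetical wrapper: every AI output starts life as a draft."""
    text: str
    validated_by: str | None = None      # physician identifier, set at validation
    validated_at: datetime | None = None

    def validate(self, physician_id: str, corrected_text: str | None = None) -> None:
        """The physician reviews the draft, optionally corrects it, then signs off."""
        if corrected_text is not None:
            self.text = corrected_text
        self.validated_by = physician_id
        self.validated_at = datetime.now(timezone.utc)

    def finalize(self) -> str:
        """Refuse to release the content into the record until it is validated."""
        if self.validated_by is None:
            raise PermissionError("AI output must be validated by a physician first")
        return self.text

# Usage: finalize() fails on an unvalidated draft, succeeds after validation.
draft = AIOutput(text="Suggested discharge summary ...")
draft.validate(physician_id="dr.example", corrected_text="Reviewed and corrected summary ...")
record_entry = draft.finalize()
```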

See also our articles on medical secrecy and AI in Switzerland, and on the nFADP and medical AI.

Frequently asked questions

Can a physician be held liable for an error made by AI?

Yes. Under Swiss, French and European medical law, the physician remains responsible for the final medical act, regardless of the technology used. The EU AI Act distinguishes between the provider (developer) and the deployer (physician/institution), but clinical responsibility always falls on the practitioner. Using AI outputs without validation exposes the physician to negligence liability.

Does the EU AI Act already apply to physicians in Switzerland?

Not directly — Switzerland is not an EU member. But medical AI providers distributing products in both Switzerland and the EU are subject to the EU AI Act. Swiss physicians using these tools indirectly benefit from its protections and are bound by its requirements through contracts with providers. Switzerland is also developing its own AI regulation aligned with the European approach.

What is a class IIa or IIb medical device?

Under the EU Medical Device Regulation, CE classification reflects a device's risk level. Class I: low risk (e.g. administrative software). Class IIa: moderate risk (e.g. documentation assistance). Class IIb: higher risk (e.g. diagnostic support, triage). Class IIa and IIb software must obtain CE marking before being placed on the European market and must meet strict clinical validation requirements.

How can a physician protect themselves legally when using AI?

Four concrete measures: (1) systematically validate every AI output before any clinical use — never copy-paste without review; (2) use only tools with a clear data processing agreement; (3) document AI use in the patient record if it influenced a decision; (4) choose tools designed for medical validation, not consumer products.
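To make measure (3) concrete, here is an illustrative sketch of what documenting AI use in the patient record could look like. The field names and file format are hypothetical assumptions, not drawn from any standard or product; what matters is capturing who validated what, with which tool, and whether it influenced a decision.

```python
import json
from datetime import datetime, timezone

def log_ai_use(record_path: str, tool: str, tool_version: str,
               physician_id: str, influenced_decision: bool, note: str) -> None:
    """Append one traceability entry for AI involvement to a patient-record log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # which AI tool was used
        "tool_version": tool_version,      # exact version, for later audit
        "physician_id": physician_id,      # who validated the output
        "influenced_decision": influenced_decision,
        "note": note,                      # what the AI contributed
    }
    with open(record_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that a draft letter was AI-assisted and physician-validated.
log_ai_use("patient_123_ai_log.jsonl", tool="example-scribe", tool_version="1.4.2",
           physician_id="dr.example", influenced_decision=True,
           note="Discharge letter drafted by AI, corrected and validated before signing.")
```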

Sources and references

  1. Académie nationale de médecine (2026). AI and medical liability. Bull Acad Natl Med 2026;210:3-9. academie-medecine.fr
  2. Regulation (EU) 2024/1689 on Artificial Intelligence (EU AI Act), adopted 13 June 2024. eur-lex.europa.eu
  3. Musy O. & Chopard B. (2026). Medical liability in the AI era. Collège des économistes de la santé, 17 February 2026.
  4. Medscape (2026). Medical errors involving AI: who is responsible? 7 April 2026.
  5. Federal Act on the Medical Professions (Medical Professions Act, MedPA), Art. 40. SR 811.11. fedlex.admin.ch
  6. Panckoucke R. (2026). Physicians of tomorrow: what responsibility facing AI? Revue générale de droit médical.
Disclaimer: this article is for informational purposes only and does not constitute legal advice. Liability regimes vary by country and situation. Consult a specialist lawyer for specific situations.

Clinovus AI — designed for systematic medical validation

Every output is presented as a draft to validate. The physician decides. Documented responsibility. Hosted in Switzerland.

Try free →
A question about this article? Our team replies within 24h.
support@clinovusai.com