On 7 April 2026, Medscape asked directly: "Medical errors involving AI: who is responsible?" That same week, the Académie nationale de médecine published a reference report on the subject.[1] The legal answer is becoming clearer — and it directly concerns every physician using AI software.
Adopted on 13 June 2024, the EU AI Act is the world's first dedicated AI legal framework.[2] It introduces two key actors: the provider, who develops and markets the AI system, and the deployer, who uses it in a professional capacity (the physician or healthcare institution).
The EU AI Act does not create liability for AI as an autonomous entity. Liability remains human — shared between provider and deployer depending on the nature of the failure.[2]
Position of the Académie nationale de médecine (2026)
The healthcare professional is the last link in the chain. They can be held liable only if they made an error in using the AI, in particular by failing to validate its outputs or by using an inappropriate tool. The duty of care remains the standard and now includes verifying AI recommendations.[1]
The AI provider is liable if the harm arises from an intrinsic defect in the system: a biased algorithm, inadequate training data, a software bug, or a lack of clinical validation.
A microeconomic study published in February 2026 argues that full physician liability is the most effective regime for incentivising providers to improve tool quality: a fully exposed physician is willing to pay more for a more reliable AI, creating positive pressure on the industry.[3]
Validating an AI output does not mean simply reading it. It means critically reviewing the content before any clinical use, never copy-pasting it without review.
See also our articles on medical secrecy and AI in Switzerland and nFADP and medical AI.
Can a physician be held liable for an error made by AI?
Yes. Under Swiss, French and European medical law, the physician remains responsible for the final medical act, regardless of the technology used. The EU AI Act distinguishes between the provider (developer) and the deployer (physician/institution), but clinical responsibility always falls on the practitioner. Using AI outputs without validation exposes the physician to negligence liability.
Does the EU AI Act already apply to physicians in Switzerland?
Not directly — Switzerland is not an EU member. But medical AI providers distributing products in both Switzerland and the EU are subject to the EU AI Act. Swiss physicians using these tools indirectly benefit from its protections and are bound by its requirements through contracts with providers. Switzerland is also developing its own AI regulation aligned with the European approach.
What is a class IIa or IIb medical device?
The European classification of medical devices reflects their risk level. Class I: low risk (e.g. administrative software). Class IIa: moderate risk (e.g. documentation assistance). Class IIb: higher risk (e.g. diagnostic support, triage). Class IIa and IIb software must obtain CE marking before being placed on the European market and must meet strict clinical validation requirements.
How can a physician protect themselves legally when using AI?
Four concrete measures: (1) systematically validate every AI output before any clinical use, never copy-pasting without review; (2) use only tools with a clear data processing agreement; (3) document AI use in the patient record if it influenced a decision; (4) choose tools designed for medical validation, not consumer products.
Every output is presented as a draft to validate. The physician decides. Documented responsibility. Hosted in Switzerland.