Artificial intelligence has officially crossed the threshold of medical practice with the launch of “ChatGPT Health”. But what happens when an algorithm starts interpreting reports and blood tests? In this interview, Filippo Anelli, president of the National Federation of Medical Orders, warns against easy enthusiasm, underlining that clinical responsibility and the protection of personal data remain the non-negotiable pillars of a national healthcare system that cannot be replaced by artificial intelligence.

President, now that ChatGPT has started explaining medical reports, aren’t you afraid that patients will stop listening to you, arriving at the office convinced they already know everything? How do you manage someone who trusts an algorithm more than a doctor?
«It is a challenge we have already partly experienced with “Doctor Google”. The internet offers a great deal of information, but medicine remains a complex art: without the right keys to understanding it, it is difficult to unravel. I believe the professional will remain the essential point of reference. We will certainly have patients who are better informed about their own conditions, but the very complexity of the matter will always require direct dialogue to decide together on the most appropriate therapy for each individual case.»
AI is always kind and helpful. Isn’t there a risk that a patient disappointed by a struggling healthcare system will end up more attached to the chat than to a real doctor?
«History teaches us something here too. A few years ago in London there was the Babylon Health experiment, which offered citizens the choice between an algorithm and a real doctor. That experience demonstrated unequivocally that the human relationship holds up far better than the relationship with a computer. That company ultimately failed: the bond created between doctor and patient is, in a way, unbreakable.»
If the AI makes a mistake and causes harm, who pays? Can we afford gray areas on legal responsibility in healthcare?
«Italian law is clear: a computer or an algorithm cannot replace the doctor. Doing so would constitute illegal practice of the profession in Italy and in Europe. Citizens must understand that exercising the profession requires qualifications defined by law specifically for their protection. There can be no doubt about this: it is good to inform yourself, but you should never blindly trust a machine.»
They say that AI will take away the drudgery and the bureaucracy, but won’t it in the end just become another burden to manage, a “duplicate” whose errors you will have to correct at the end of the day?
«The risk is real. Digital innovations often promise to reduce our workload and then have the opposite effect. Using algorithms also requires time and attention; AI does not manage itself. Paradoxically, the impact of the technology will require an increase in staffing in the National Health Service, not a reduction. If the time spent interrogating and managing the AI takes minutes away from the clinical visit, we will need more doctors to meet patients’ needs.»
Putting your clinical data on the servers of an American multinational: as the guarantor of the medical profession, aren’t you worried about this private management of health data?
«At the moment I believe it is imprudent to enter personal data, tax codes, or any information that could make a citizen identifiable into these platforms. There is currently insufficient privacy protection for this type of operation. Our oversight systems are very rigorous and, at present, ChatGPT does not offer the guarantees needed to manage Italians’ sensitive data in complete security.»