This page deals with the legal and ethical issues of AI systems in diagnostics.
Legal issues: who is responsible?
As the reliability of AI systems is difficult to assess (cf. Ethical issues - Reliability of the technology), it is questionable whether physicians can be held fully accountable for a diagnosis that is partially based on the assistance of an AI system.
Other entities that might be responsible in case of a wrong diagnosis are the AI system itself, the engineers of the AI system and the company that created the AI system, but for all three there are good reasons why they, too, should not be held accountable:
- the AI system itself: it is difficult to punish it or to request compensation from it, and victims of a wrong diagnosis might not be content with the AI system alone being held responsible.
- the engineers of the AI system: as many engineers are involved in its creation, it is difficult to say who should be responsible. In the case of IBM’s Watson, the original team designed the system to win a game show; they should not be held responsible in the new application field of health care, as they presumably did not intend Watson to be used in this way.
- the company that created the AI system: if there were no design failures, it is difficult to hold the company responsible for how other people use its product. [1]
There are thus good reasons why neither the physician, nor the AI system itself, nor the engineers of the AI system, nor the company that created the AI system is fully accountable in case of a wrong diagnosis. If such systems come into general use, this question must be settled in order to create a clear legal framework.
Ethical issues
Concerning the ethical issues with AI in diagnostics, one can distinguish between the reliability of the technology itself, the data privacy of such a system and the extent of usage that society is willing to allow.
Data privacy
To make an AI system like IBM’s Watson really useful, it is necessary to compare its treatment proposals to the outcomes that are actually achieved, meaning it has to be able to learn. To make this happen, it needs access to the full medical record of the patient. [2] This is done by storing the information in a health cloud that is accessible to researchers, pharmaceutical companies and physicians. IBM states that data that allows personal identification is removed from medical records before they are sent to the cloud. [3]
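The removal of identifying data can be imagined roughly as in the following minimal Python sketch. It is only an illustration under simple assumptions: the field names and the record layout are hypothetical, not IBM's actual schema, and real de-identification pipelines follow formal standards and are far more elaborate.

```python
# Minimal, illustrative sketch of de-identification before upload to a health cloud.
# Field names and record layout are hypothetical; real systems are far more elaborate.

DIRECT_IDENTIFIERS = {"name", "address", "insurance_id", "phone"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

patient_record = {
    "name": "Jane Doe",
    "insurance_id": "123-456",
    "birth_year": 1956,
    "zip_code": "10115",
    "diagnosis": "type 2 diabetes",
    "treatment": "metformin",
    "outcome": "HbA1c improved",
}

cloud_record = de_identify(patient_record)
print(cloud_record)
# Diagnosis, treatment and outcome remain available for learning, but
# quasi-identifiers such as birth_year and zip_code are still present.
```

Note that such a sketch only removes direct identifiers; the remaining quasi-identifiers are exactly what the following paragraphs are concerned with.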
In general, freely available data can be used to create profiles of users and patients. Based on, for example, what people buy and what they search for online, data companies can create medical profiles and classify people (e.g. as "allergy sufferers"). These medical profiles could then be sold to insurance companies. One possibility for addressing these health information privacy issues would be a right for patients to monitor the data that is collected about them. [4]
In the case of an AI system, the data in the cloud could be fused with data from other sources, and the de-identified medical records could possibly be re-matched to individual persons. One solution could be the same approach of letting patients monitor the health data collected about them; they could then check whether any medical record is still linked to them.
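How such a fusion could undo the de-identification is sketched below as a so-called linkage attack. The example is purely hypothetical: it reuses toy records like the one above and joins them with an external, identified directory on the shared quasi-identifiers birth year and zip code.

```python
# Hypothetical linkage attack: fuse "anonymized" cloud records with an external,
# identified data source via shared quasi-identifiers (birth_year, zip_code).

cloud_records = [
    {"birth_year": 1956, "zip_code": "10115", "diagnosis": "type 2 diabetes"},
    {"birth_year": 1987, "zip_code": "80331", "diagnosis": "asthma"},
]

# Publicly available, identified data (e.g. a voter roll or customer directory).
public_directory = [
    {"name": "Jane Doe", "birth_year": 1956, "zip_code": "10115"},
    {"name": "John Roe", "birth_year": 1987, "zip_code": "80331"},
]

def re_identify(cloud, directory):
    """Match de-identified records back to names via quasi-identifiers."""
    matches = []
    for record in cloud:
        for person in directory:
            if (record["birth_year"] == person["birth_year"]
                    and record["zip_code"] == person["zip_code"]):
                matches.append((person["name"], record["diagnosis"]))
    return matches

print(re_identify(cloud_records, public_directory))
# [('Jane Doe', 'type 2 diabetes'), ('John Roe', 'asthma')]
```

A right to monitor the collected data would let patients run exactly this kind of check from the other side: verifying whether any record can still be traced back to them.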
Reliability of the technology
AI diagnostic systems need as much information as they can get in order to work well. This may be a problem for the patient, as pharmaceutical companies might have a strong interest in uploading certain information to the health cloud and pushing it. Machine learning (cf. AI in diagnostics: basics and examples) is not a transparent process: only the input and the output are known. The inner reasoning (i.e. the weights of the edges between the nodes) is not predefined; the weights change during learning. Therefore the process of decision making is not transparent.
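This opacity can be illustrated with a small, hypothetical example; the sketch below assumes scikit-learn's MLPClassifier and toy data, and has nothing to do with Watson's actual architecture. Even for a tiny trained network, the learned weights are just numbers without a human-readable justification of the decision.

```python
# Illustrative sketch (not Watson's architecture): train a tiny neural network
# on toy data and inspect its learned weights, which carry no readable explanation.
from sklearn.neural_network import MLPClassifier

# Toy input: two symptom scores; toy output: 0 = condition A, 1 = condition B.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [0, 0, 1, 1]

model = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.15, 0.85]]))   # the output is visible ...
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weights:\n{weights}")
# ... but the weight matrices in between explain nothing a physician could verify.
```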
Moreover, the selection of information used as input is not entirely transparent either: who guarantees that the data stored in the health cloud is unbiased with respect to the interests of pharmaceutical companies? One possible scenario is that pharmaceutical companies post papers to the cloud that feature their preferred treatments in order to influence the knowledge base of the AI system.
This is especially important because the health cloud is not maintained by the government but by a company. A company wants to increase its profit and does not have to be as transparent as a government. In case of a biased influence on the knowledge base, diagnostic imbalances in favour of the pharmaceutical companies are possible.
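A deliberately simplified sketch of this scenario: if a naive, frequency-based recommender draws on a knowledge base into which one treatment has been pushed disproportionately, its recommendation shifts accordingly. The treatment names and the counting model are purely hypothetical and much cruder than any real diagnostic system.

```python
# Purely hypothetical sketch: a naive recommender that ranks treatments by how
# often they appear in the knowledge base, showing how flooding the base with
# sponsored papers can tilt the recommendation.
from collections import Counter

def recommend(knowledge_base):
    """Recommend the treatment mentioned most often in the knowledge base."""
    return Counter(knowledge_base).most_common(1)[0][0]

balanced_base = ["generic_drug"] * 10 + ["brand_drug"] * 10
print(recommend(balanced_base))   # with a balanced base, no treatment dominates

# A sponsor uploads many papers featuring its preferred (expensive) treatment.
biased_base = balanced_base + ["brand_drug"] * 50
print(recommend(biased_base))     # 'brand_drug'
```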
As neither the sources nor the decision process are transparent, AI systems like Watson should only be used to assist the physician. The physician still needs to keep up with current research and needs to be able to question the diagnosis of the AI system.
Extent of usage that society is willing to allow
On the one hand, computers can find the relevant information for a patient’s condition much faster than human physicians, as no physician can check the millions of published papers. On the other hand, an AI system does not look at the entire human being, but only at a limited set of parameters. Therefore, AI currently does not diagnose more accurately than a physician: it is only a helpful system that provides the needed data.
This may change in the medium term, though, the more sophisticated the AI gets and is allowed to get. But even in the case of an AI system that diagnoses as well as or better than a physician, physicians would still be needed. They would have to focus on the social component of today's physician-patient relationship.
Currently, patients prefer being treated by a human rather than by a computer. [5] This is probably not going to change, and therefore, even if the AI system were sufficiently evolved in the medium term, most people would prefer the physician making the diagnosis and the AI system only providing support. The AI system is either not capable of replacing the physician or not wanted as a replacement.
Conclusion/outlook
AI systems in diagnostics are a new way to cope with the huge amount of data that is available to physicians (e.g. clinical trials, a specific patient’s condition and preferences, different treatment options). But as AI systems are themselves not transparent, they should only be assistant systems for physicians, not a replacement.
If AI systems are deployed more frequently, physicians may become used to receiving the needed information mainly through the AI system. This may lead to physicians paying less attention to new treatment research, ultimately losing the ability to check the treatment suggested by the AI system against better alternatives. In this case it may become easier for pharmaceutical companies to position more expensive treatment options.
Bibliography
[1] Robert Hart: When artificial intelligence botches your medical diagnosis, who’s to blame?
[2] Ike Swetlitz (2016): Watson goes to Asia: Hospitals use supercomputer for cancer treatment.
[3] International Business Machines Corporation, IBM (2015): How It Works: IBM Watson Health.
[5] Bertalan Mesko: Can An Algorithm Diagnose Better Than A Doctor?