It sounds like a science fiction story: artificial intelligence replacing doctors and radiologists, machines diagnosing sick patients, computer programs crunching numbers to tell you what your ideal diet might be. While artificial intelligence isn't going to replace doctors anytime soon, doctors are beginning to use it in their diagnostic practice. Many of these tools are highly sophisticated and still need to be overseen by a doctor, but there has been some concern about who might be held responsible if something goes wrong. What happens if the computer gets a diagnosis wrong? More important, and more relevant: what happens if a doctor's interpretation of a person's illness is wrong because of a mistake made by the AI?

What implications might artificial intelligence have for the field of law? It isn't likely that medical malpractice lawyers will be suing computers (or HAL from 2001: A Space Odyssey) anytime soon. However, when doctors do choose to use artificial intelligence, they'll need to be properly trained to use the programs in a manner that establishes best practices and upholds the standard of care. If there are glitches in an artificial intelligence program, patients who suffer injuries from misdiagnosis may be entitled to seek damages from the programmers or software developers of the AI if those developers failed to warn consumers and doctors about limitations in the program. Artificial intelligence is very new in the field of medicine, and it is likewise quite new in the field of law. Blizzard Law, PLLC is a medical malpractice law firm in Houston, Texas that is closely watching the ways AI might impact medicine and medical malpractice law. If you believe you suffered misdiagnosis or were injured because a doctor used artificial intelligence or machinery with design flaws or other defects, you and your family may have the right to seek damages for your medical bills, lost wages, pain and suffering, and rehabilitation expenses.

What Artificial Intelligence Technology Currently Exists in Medicine?

In some areas of medicine, artificial intelligence is already here. According to Vox, there's software to detect diabetic retinopathy, a condition that can lead to vision loss in diabetic patients. Vox also reported on an artificial intelligence program that can be used to detect polyps in the colon. In one study, the program was able to detect very small polyps that many doctors miss but that can nevertheless be deadly. It is quite possible that in many cases, artificial intelligence will improve medicine rather than make it riskier for patients. It might even raise the standard of care doctors are expected to provide: if AI proves able to detect colon cancer early, use of this technology may someday become the standard of care itself.

In another field of medicine, doctors are studying whether speaking to an avatar can help patients with anxiety and depression. Finally, another application is the use of AI to help doctors transcribe their notes into their computer systems, saving them time and potentially increasing the accuracy of their records. This, too, could help improve the standard of care for patients.

Yet there are real barriers and risks to adopting AI too early, before it is proven. Artificial intelligence is only as robust as the data on which it bases its diagnoses. If the data is flawed, the results the AI presents to doctors will be flawed. Doctors and developers also need to be cautious about placing artificial intelligence systems on the market too soon, before they have been tested in real-world applications. The idea that doctors will someday be replaced by artificial intelligence doesn't hold water. We will always need doctors to interpret the results. Artificial intelligence is nowhere near human intelligence in scope and critical-thinking ability. At best, artificial intelligence can work well in a single modality (for example, detecting small colon polyps). It cannot replace the complexity of the human mind, which must still develop a course of treatment based on those results. If the artificial intelligence gets your diagnosis right and your doctor still gets the treatment wrong, the AI's results will mean nothing, and the doctor will still be responsible for the resulting errors.

Another issue artificial intelligence raises is privacy. With electronic medical records replacing written records, doctors and computer programmers have access to more data than ever before. What are the limits on how this data can be used, and what will be the limits on the use of data entered into a computer program?

Ultimately, there are more questions than answers regarding the future of AI and medical malpractice. Blizzard Law, PLLC is a medical malpractice law firm in Houston, Texas that is closely watching the field unfold and exploring the implications as they develop. If you believe a medical error led to your injuries and have questions about your rights, Blizzard Law, PLLC may be able to help you.

Who (or What) Would Be Liable for Damages if Medical AI Makes a Mistake?

The big question with medical AI and malpractice is who (or what) would be liable for damages if an artificial intelligence program leads to a medical mistake. Artificial intelligence may be able to help doctors diagnose and treat illness, but what happens if a medical error results from a diagnostic mistake in the program? You can't sue a computer program, but it might be possible to sue the manufacturer of the artificial intelligence software over a defective product. Artificial intelligence programmers and producers, especially those producing artificial intelligence for the medical field, would have a responsibility to test their products and warn users of their limitations.

Another issue can arise if doctors start to trust the computer program to monitor the patient and fail to monitor the patient themselves. In that case, the doctor could be held liable for failing to use the artificial intelligence program as intended. So, ultimately, doctors will still be responsible for patient care.

These are just some of the ways that artificial intelligence can impact medical malpractice law. Blizzard Law, PLLC is a medical malpractice law firm in Houston, Texas that works with clients who have suffered injury or worsened illness because of a doctor's mistake. If you have questions, reach out to Blizzard Law, PLLC today to get matched with a medical malpractice lawyer who can help you.
