Liability Concerns
Business analytics expert Tinglong Dai has developed models to better understand doctors’ decisions to use, or not use, AI in their daily practice, particularly in view of potential malpractice suits. “Liability concerns have always been instrumental in influencing how physicians make decisions. It’s not an exaggeration to say the health care industry has been shaped by the legal industry, and with AI coming on the scene, things get really heated,” he says.
One might expect physicians to use AI technology most often when they are uncertain about the optimal treatment plan for a patient. But in their modeling, Dai and his colleagues discovered the opposite. “Instead of saying, ‘I am going to use AI to tell me something different,’ our model shows that physicians would use it — in fact, overuse it — when they expect it to agree with their assessment. This means that AI is not being used to its fullest potential,” he says.
That’s largely due to liability worries: If physicians consult an AI tool in a case of high uncertainty and then decide to deviate from its recommendation, they may be exposed to legal risk if something goes wrong.
AI demands nothing short of a sea change in the way that medical liability is defined, Dai says: “We need to bring doctors, lawyers, patient advocates and AI developers together to shape a new legal environment that supports improved patient care.”
“The accuracy with using AI for the early detection of breast cancer in screening mammography is over 30% better than the best [human] breast readers on the mammograms. ... Then you’ve got to ask yourself: How can I not be using AI?”
Elliot Fishman