Future Shock
Rapid advances in artificial intelligence are changing medicine by the minute. What are the promises? The potential perils? And what lies just around the corner? We surveyed leading AI experts across Johns Hopkins to find out.
While artificial intelligence technology has been shaping biomedical research and clinical care for years, the public release of generative AI models (think: chatbot technology) has dramatically accelerated interest in and adoption of AI, particularly in medicine.
Last spring, The New England Journal of Medicine, citing an explosion in manuscript submissions, launched a new “AI in Medicine” series — and announced the 2024 debut of a new journal: NEJM AI.
Meanwhile, leaders at the American Medical Association, noting the promise AI holds “for transforming medicine,” have begun developing recommendations about AI-generated medical advice and content that can be used to advise policymakers and protect patients from misinformation.
Clearly the AI genie is out of the bottle — and physicians today who neglect or refuse to engage with this latest technology will do so at their own peril, our experts agree. Several Johns Hopkins physicians referred to an oft-cited dictum: “AI will never replace clinicians, but clinicians who don’t use AI will be replaced by those who do.”
Glossary: Frequently Used Terms*
Generative AI refers to the capability of an AI system to generate new data or content, such as images, text or even entire scenarios, in response to user prompts. It involves learning patterns and structures from existing data, and using that knowledge to create new examples that are relevant to user prompts and reflect these patterns and structures. It can be used for tasks like generating synthetic medical images, synthesizing patient data or creating simulated scenarios for training purposes. It can also be used to develop general-purpose predictive technologies with flexible inputs and outputs.
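To make the idea concrete, here is a minimal sketch of generative AI producing synthetic text from a prompt. It assumes the open-source Hugging Face transformers library and the small, general-purpose GPT-2 model; the prompt and setup are invented for illustration and are not a clinical tool.

    # A minimal sketch: a general-purpose language model generates new text
    # from a prompt. Model, library and prompt are illustrative assumptions.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Synthetic, de-identified clinical note for training simulations:"
    result = generator(prompt, max_new_tokens=60)
    print(result[0]["generated_text"])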
Predictive AI focuses on making predictions or forecasts based on available data. It involves building statistical, machine learning or other mathematical models that analyze historical data, identify patterns and use those patterns to make predictions about future events or outcomes. These models can be used for predicting disease progression, estimating patient outcomes or identifying high-risk individuals who may benefit from early interventions.
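A minimal sketch of that idea, using scikit-learn on made-up patient data: a model is fit to historical examples and then used to flag high-risk individuals. The features (age, HbA1c, systolic blood pressure), the synthetic outcomes and the risk threshold are illustrative assumptions, not a validated clinical model.

    # A minimal predictive-AI sketch on synthetic "historical" patient data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Columns: age, HbA1c, systolic blood pressure (all synthetic).
    X_history = rng.normal(loc=[60, 7.0, 130], scale=[10, 1.0, 15], size=(500, 3))
    y_history = (X_history[:, 1] + 0.02 * X_history[:, 0] > 8.2).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

    # Estimate risk for new patients; the 0.5 threshold is illustrative.
    X_new = np.array([[55, 6.4, 120], [72, 8.9, 145]])
    risk = model.predict_proba(X_new)[:, 1]
    print(risk > 0.5)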
Black box AI technology refers to the use of complex machine learning models or algorithms whose inner workings are not easily interpretable or explainable by users or developers. These models are often referred to as “black boxes” because the reasoning behind their predictions or decisions is not readily understandable.
Trained on large data sets, they use sophisticated algorithms, such as deep learning neural networks, to learn patterns and make predictions. While these models can sometimes achieve high accuracy and performance in tasks like disease diagnosis, treatment recommendation or patient risk assessment, health care professionals may find it difficult to trust and adopt these models if they cannot understand the reasoning behind their recommendations or identify biases or errors that may be embedded in the model’s predictions.
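A minimal sketch of the concern, assuming scikit-learn: a small neural network classifier is trained on synthetic data, and a post-hoc probe (permutation importance) offers only an aggregate, approximate view of which inputs matter — not an explanation of any individual prediction.

    # A minimal "black box" sketch: the network's weights do not explain
    # individual predictions; permutation importance is a rough, post-hoc probe.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=400, n_features=6, random_state=0)
    black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                              random_state=0).fit(X, y)

    probe = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    print(probe.importances_mean)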
*Definitions generated by ChatGPT and reviewed by Tinglong Dai.

‘Ethically Sourced AI’
Johns Hopkins ophthalmologist T.Y. Alvin Liu leads a National Institutes of Health-funded consortium (AI-READI) working to generate AI-ready data sets that are “ethically sourced.” He and his colleagues aim to develop an “AI-ready and equitable atlas for diabetes insights” that will serve as a blueprint for other biomedical researchers to follow.
Typically, when technology progresses very quickly, ethical considerations have to play catch-up, Liu notes. With the AI-READI consortium, “ethical inquiries are integrated at every single stage of the project,” and “data sets will draw from subjects from diverse ethnic and socioeconomic backgrounds.”
That’s crucial for two reasons, he notes: If AI is not grounded in ethical principles, it can cause harm in the form of bias. And ultimately, even if AI works, it will never grow to scale if the public doesn’t trust it.
Looming Issues
“How will we incorporate AI into the physician workflow? Who will pay for it?”
T.Y. Alvin Liu
“The first very big barrier is availability of data to train the algorithms on. There are time and cost barriers to collecting data and properly curating it. We have to strive to avoid biases in data acquisition, which otherwise can result in health care disparities.”
Natalia Trayanova
“As biologists, we are experiencing a data explosion, where we are greatly exceeding our capacity to analyze the results that we obtained through experiments. There’s sort of this exponential, dramatic increase in the capacity of what we can do.”
Dwight Bergles
“We’ve created so many checks and balances over decades to ensure patient safety and quality. But that is not aligned with the way that AI models are built. False information can be catastrophic to patient care. Generative AI overall may be a good idea, but we run the risk of destroying the valuable ecosystem we have relied upon to shape how science evolves over time.”
Tinglong Dai
“There may come a time when machines are going to be really good at doing many tasks more efficiently than humans, and their safety is going to be as good as, or better than, humans’. But what about this in-between phase where you have to figure out if that’s actually true? We need to keep our values as humans, and as physicians, in front of us at all times because those guardrails are going to guide us in how to use these new tools.”
Antony Rosen