Dispelling Disparities
“I hope that it will be a force that will help mitigate some of the disparities,” says Ferryman. “My fear is that if it doesn’t, then it’s just another tool that’s part of the same pattern.”
She gives an example of an algorithm that used accurate data and still made a prediction with discriminatory effects. The algorithm identified Black individuals as needing fewer health care resources even though they were sicker, on average, than white individuals.
This is because in the United States, fewer health care dollars are spent on Black people compared with other groups. Instead of sending additional health care resources to sicker people, the algorithm learned from the pattern in the data and repeated it, favoring white individuals.
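The mechanism can be illustrated with a minimal sketch, using entirely hypothetical numbers rather than the study's data: if historical spending is lower for one group at the same level of sickness, a model trained to predict spending as a proxy for health need will faithfully reproduce that gap.

```python
# Hypothetical illustration of a proxy-label problem: spending is used as a
# stand-in for health need, so a historical spending disparity is "learned".

def observed_cost(sickness, group):
    # Assumption for illustration only: at the same level of sickness,
    # less money is historically spent on patients in group "B".
    spending_factor = 1.0 if group == "A" else 0.7
    return sickness * spending_factor

# Two equally sick patients from different groups.
patient_a = {"sickness": 8.0, "group": "A"}
patient_b = {"sickness": 8.0, "group": "B"}

# A model that accurately predicts cost reproduces the spending factor,
# so it scores the equally sick group-B patient as needing fewer resources.
score_a = observed_cost(patient_a["sickness"], patient_a["group"])
score_b = observed_cost(patient_b["sickness"], patient_b["group"])

print(score_a, score_b)  # the group-B patient is ranked lower
```

The data here are "accurate" in the sense the article describes: the model correctly predicts what was spent. The discriminatory effect comes from treating past spending as if it measured need.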
“AI is really good at recognizing patterns,” Ferryman says. “Where it can get tricky is when the data show us patterns that we may not have context for, especially when there are issues of equity and health disparities.”
When humans judge surgical skill to assess trainee competency, says Swaroop Vedula, “we run the risk of implicit bias.” He and his Johns Hopkins colleagues, including Shameema Sikder and Vishal Patel, are working on a project “examining whether we find implicit biases in assessing surgical skill and, if so, how do we remove that bias when creating [AI] algorithms to evaluate surgical skill.”