Several years ago, teams of scholars published groundbreaking research pointing to racial biases in algorithms that helped direct patient care at major health systems in the U.S. Those algorithms, the studies found, adversely affected the care of Black and Latinx patients across multiple medical categories. For instance, the researchers uncovered racial biases in the prediction algorithms used to identify more medically complex patients, such that Black patients were far less likely than their white counterparts to qualify for additional care.
Coverage of the Covid pandemic at the time somewhat buried these findings, but the recent STAT series “Embedded Bias” has shone a new spotlight on the issue. My broad takeaway: Algorithms cannot be trusted to make safe and fair decisions about patient care.
The timing is critical, given the rise of generative AI. Pharma companies, care-delivery organizations, and health insurers are being deluged with pitches from AI companies that promise to automate everything from the creation of marketing materials to the drafting of patient visit notes following a doctor’s appointment.