Researchers urge tighter regulation for healthcare algorithms

In a recent commentary published in the New England Journal of Medicine AI, researchers from the Massachusetts Institute of Technology in Cambridge, Equality AI and Boston University underscored the need for stronger regulation of both AI and non-AI algorithms in healthcare.

Here are six takeaways from the report:

1. The commentary comes after HHS' Office for Civil Rights issued a new rule under the ACA that prohibits discrimination on the basis of race, color, national origin, age, disability or sex in "patient care decision support tools," including both AI and non-automated tools used in healthcare.

2. The rule expands commitments made under President Joe Biden's Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence in 2023. 

3. In a Dec. 12 article on MIT's website, senior commentary author and associate professor Marzyeh Ghassemi, PhD, said that while the rule is "an important step forward," it should also "dictate equity-driven improvements to the non-AI algorithms and clinical decision-support tools already in use across clinical subspecialties."

4. According to the MIT article, the FDA has approved nearly 1,000 AI-enabled devices, many of which are designed to support healthcare decision-making. 

5. The researchers noted that no regulatory body oversees the clinical risk scores produced by clinical decision support tools, even though 65% of physicians report using these tools monthly to determine patient care.

"Clinical risk scores are less opaque than 'AI' algorithms in that they typically involve only a handful of variables linked in a simple model," Isaac Kohane, MD, PhD, chair of the department of biomedical informatics at Boston-based Harvard Medical School and editor-in-chief of NEJM AI, said in the article. These scores, he added, are "only as good as the datasets used to 'train' them," and should thus be held to the same standards as "their more recent and vastly more complex AI relatives." 

6. The researchers also noted that even decision support tools that do not use AI can perpetuate biases in healthcare, which necessitates further oversight.

"Regulating clinical risk scores poses significant challenges due to the proliferation of clinical decision support tools embedded in electronic medical records and their widespread use in clinical practice," commentary co-author Maia Hightower, MD, CEO of Equality AI, said in the article. "Such regulation remains necessary to ensure transparency and nondiscrimination."
