Race to Define Health AI Standards as Six Groups Step Forward
Government agencies provide minimal guidance on adoption
In recent years, the use of artificial intelligence in health care has grown well beyond reading radiology scans or identifying high-risk sepsis patients. AI is now being used to write medical visit notes, answer patient questions, handle phone calls, and even manage claims. Yet, figuring out which AI tools to adopt, how to implement them effectively, and how to ensure they function safely remains a major challenge for health systems and AI developers. With minimal guidance from government agencies—the FDA oversees only a small portion of these applications—health care organizations often face uncertainty about where to turn.
To fill this gap, six organizations have stepped in to provide guidance, each hoping its framework will influence the broader industry. These groups range from newly formed collectives to established trade organizations that have been active in health care for decades.
If their frameworks achieve widespread adoption, these groups could shape the standards health systems use to assess and oversee AI technologies, standardize the practices developers follow before releasing health AI products, guide the formation of future regulations, and help prevent harms to patients or setbacks to industry progress.