Events Calendar

- HIMSS & Health 2.0 European Conference: 2019-06-11 to 2019-06-13, Helsinki, Finland. The HIMSS & Health 2.0 European Conference will be a unique three-day event [...]
- 7th Epidemiology and Public Health Conference (Epidemiology Meet 2019): 2019-06-17 to 2019-06-18, Dubai, UAE. Theme: Global Health, a major topic of concern in epidemiology research and public health study [...]
- Inaugural Digital Health Pharma Congress: 2019-06-17 to 2019-06-21. Join us for World Pharma Week 2019, where the 15th Annual Biomarkers & Immuno-Oncology World Congress and the 18th Annual World Preclinical Congress, two of Cambridge [...]
- International Forum on Advancements in Healthcare (IFAH, formerly Smart Health Conference) USA 2019: 2019-06-18 to 2019-06-20. IFAH USA will bring together 1,000+ healthcare professionals from across the world on a [...]
- Annual Congress on Yoga and Meditation (Yoga Meditation 2019): 2019-06-20 to 2019-06-21, Dubai. Planned with the support of the Organizing Committee Members [...]
- Collaborative Care & Health IT Innovations Summit: 2019-06-23 to 2019-06-25, Hyatt Regency Inner Harbor, 300 Light Street, Baltimore, Maryland 21202, United States of America. Technology integrating pre-acute and LTPAC services into the healthcare and payment ecosystems [...]
- 2019 AHA Leadership Summit: 2019-06-25 to 2019-06-27, San Diego. The 27th Annual AHA/AHA Center for Health Innovation Leadership Summit promotes a revolution in thinking [...]

National Standard Unveiled for Scalable, Safe Healthcare AI

EMR Industry

Researchers at Duke University School of Medicine have developed two innovative frameworks to assess the performance, safety, and reliability of large language models in healthcare.

Published in npj Digital Medicine and the Journal of the American Medical Informatics Association (JAMIA), two new studies present a novel approach to ensuring that AI systems used in clinical environments adhere to the highest standards of quality, safety, and accountability.

As large language models become more integrated into healthcare—supporting tasks such as clinical note generation, conversation summarization, and patient communication—health systems face increasing challenges in evaluating these technologies in a rigorous yet scalable way. The Duke University-led research, headed by Chuan Hong, Ph.D., assistant professor in Biostatistics and Bioinformatics, aims to address this critical need.

The study published in npj Digital Medicine introduces SCRIBE, a structured evaluation framework for Ambient Digital Scribing tools. These AI-driven systems are designed to generate clinical documentation by capturing real-time conversations between patients and providers. SCRIBE combines expert clinical review, automated performance scoring, and simulated edge-case testing to assess tools across key metrics such as accuracy, fairness, coherence, and resilience.
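The paper's actual implementation is not reproduced in the article, but the structure it describes (expert clinical review, automated performance scoring, and simulated edge-case testing, rolled up per metric) can be sketched in a few lines. The weights, field names, and 0-1 scaling below are illustrative assumptions, not SCRIBE's published method:

```python
from dataclasses import dataclass

# Metrics named in the article; weights below are illustrative assumptions.
METRICS = ("accuracy", "fairness", "coherence", "resilience")


@dataclass
class Evaluation:
    expert_scores: dict     # metric -> mean expert-review rating, scaled 0-1
    automated_scores: dict  # metric -> automated performance score, 0-1
    edge_case_pass: dict    # metric -> fraction of simulated edge cases passed


def scorecard(ev: Evaluation, weights=(0.5, 0.3, 0.2)) -> dict:
    """Blend the three evaluation channels into one 0-1 score per metric."""
    w_expert, w_auto, w_edge = weights
    return {
        m: round(
            w_expert * ev.expert_scores[m]
            + w_auto * ev.automated_scores[m]
            + w_edge * ev.edge_case_pass[m],
            3,
        )
        for m in METRICS
    }
```

A hospital evaluation team could then compare scorecards across candidate scribing tools, or track one tool's scores over time as models are updated.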

“Ambient AI has significant potential to ease documentation burdens for clinicians,” Hong noted. “But careful evaluation is crucial. Without it, there’s a risk of deploying systems that introduce bias, omit vital details, or compromise care quality. SCRIBE is built to safeguard against those risks.”

A second, related study published in JAMIA introduces a complementary framework for evaluating large language models integrated into the Epic electronic medical record system, specifically those used to generate draft responses to patient messages. The study assesses these AI-generated replies by comparing clinician feedback with automated evaluation metrics, focusing on attributes such as clarity, completeness, and safety.

While the models demonstrated strong performance in tone and readability, the study identified notable gaps in response completeness—highlighting the critical need for ongoing evaluation in real-world settings.
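The study's exact analysis is not detailed in the article. As a hedged sketch only, one standard way to test whether an automated metric tracks clinician feedback is rank correlation between the two sets of per-message scores; the data and the `spearman` helper below are hypothetical, not the study's code:

```python
def spearman(xs, ys):
    """Spearman rank correlation between two equal-length score lists
    (assumes no tied values within either list)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var


# Hypothetical per-message scores: clinician completeness ratings (1-5)
# versus an automated completeness metric (0-1).
clinician = [5, 3, 4, 2, 1]
automated = [0.9, 0.5, 0.7, 0.4, 0.2]
agreement = spearman(clinician, automated)  # 1.0: identical ordering
```

A high correlation would suggest the automated metric can stand in for expensive clinician review at scale; a low one, as with the completeness gaps the study found, signals that human evaluation is still needed.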

“This research helps bridge the gap between cutting-edge algorithms and meaningful clinical application,” said Michael Pencina, Ph.D., Chief Data Scientist at Duke Health and co-author of both studies. “It underscores that responsible AI implementation requires rigorous, ongoing evaluation as part of the technology’s entire life cycle—not just as a final step.”

Together, these two frameworks provide a robust foundation for the responsible integration of AI in healthcare. They equip clinical leaders, developers, and regulators with the tools necessary to evaluate AI models prior to deployment and to continuously monitor their performance—ensuring that these technologies enhance care delivery without compromising patient safety or trust.