Events Calendar

e-Health 2025 Conference and Tradeshow
2025-06-01 - 2025-06-03, 10:00 am - 5:00 pm
The 2025 e-Health Conference provides an exciting opportunity to hear from your peers and engage with MEDITECH.

HIMSS Europe
2025-06-10 - 2025-06-12, 8:30 am - 5:00 pm | Paris, France
Transforming Healthcare in Paris: From June 10-12, 2025, the HIMSS European Health Conference & Exhibition will convene in Paris to bring together Europe's foremost health [...]

38th World Congress on Pharmacology
2025-06-23 - 2025-06-24, 11:00 am - 4:00 pm | Paris, France
Conference Series cordially invites participants from around the world to attend the 38th World Congress on Pharmacology, scheduled for June 23-24, 2025 [...]

2025 Clinical Informatics Symposium
2025-06-24 - 2025-06-25, 11:00 am - 4:00 pm | Virtual Event
Explore the agenda for MEDITECH's 2025 Clinical Informatics Symposium. Embrace the future of healthcare at MEDITECH's 2025 Clinical Informatics [...]

International Healthcare Medical Device Exhibition
2025-06-25 - 2025-06-27, 8:30 am - 5:00 pm | Suminoe-Ku, Osaka 559-0034
Japan Health will gather over 400 innovative healthcare companies from Japan and overseas, offering a unique opportunity to experience cutting-edge solutions and connect directly with [...]

Electronic Medical Records Boot Camp
2025-06-30 - 2025-07-01, 10:30 am - 5:30 pm
The Electronic Medical Records Boot Camp is a two-day intensive program of seminars and hands-on analytical sessions providing an overview of electronic health [...]
Latest News

National Standard Unveiled for Scalable, Safe Healthcare AI

EMR Industry

Researchers at Duke University School of Medicine have developed two innovative frameworks to assess the performance, safety, and reliability of large language models in healthcare.

Published in npj Digital Medicine and the Journal of the American Medical Informatics Association (JAMIA), two new studies present a novel approach to ensuring that AI systems used in clinical environments adhere to the highest standards of quality, safety, and accountability.

As large language models become more integrated into healthcare—supporting tasks such as clinical note generation, conversation summarization, and patient communication—health systems face increasing challenges in evaluating these technologies in a rigorous yet scalable way. The Duke University-led research, headed by Chuan Hong, Ph.D., assistant professor in Biostatistics and Bioinformatics, aims to address this critical need.

The study published in npj Digital Medicine introduces SCRIBE, a structured evaluation framework for Ambient Digital Scribing tools. These AI-driven systems are designed to generate clinical documentation by capturing real-time conversations between patients and providers. SCRIBE combines expert clinical review, automated performance scoring, and simulated edge-case testing to assess tools across key metrics such as accuracy, fairness, coherence, and resilience.
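The article does not describe how SCRIBE combines its inputs numerically, but the general pattern of blending expert-review scores with automated scores across a fixed set of metrics can be sketched as follows. The metric names come from the article; the 0-1 scale, the weighting, and the function itself are illustrative assumptions, not SCRIBE's actual design.

```python
from statistics import mean

# Metric names from the article; everything else here is a hypothetical sketch.
METRICS = ("accuracy", "fairness", "coherence", "resilience")

def aggregate_scores(expert: dict, automated: dict, expert_weight: float = 0.6) -> dict:
    """Blend expert clinical-review scores with automated scores.

    Both inputs map each metric to a value on a 0-1 scale. The weighting
    (60% expert, 40% automated by default) is an arbitrary assumption.
    """
    report = {m: expert_weight * expert[m] + (1 - expert_weight) * automated[m]
              for m in METRICS}
    report["overall"] = mean(report[m] for m in METRICS)
    return report

# Example: one ambient scribing tool scored by a clinician panel and an
# automated harness (all values invented for illustration).
expert = {"accuracy": 0.92, "fairness": 0.88, "coherence": 0.95, "resilience": 0.70}
automated = {"accuracy": 0.89, "fairness": 0.90, "coherence": 0.91, "resilience": 0.65}
print(aggregate_scores(expert, automated))
```

A single weighted blend is only one of many ways such a framework might reconcile human and automated judgments; the point is that each metric is scored from both sources and rolled up into a comparable report.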

“Ambient AI has significant potential to ease documentation burdens for clinicians,” Hong noted. “But careful evaluation is crucial. Without it, there’s a risk of deploying systems that introduce bias, omit vital details, or compromise care quality. SCRIBE is built to safeguard against those risks.”

A second, related study published in JAMIA introduces a complementary framework for evaluating large language models integrated into the Epic electronic medical record system, specifically those used to generate draft responses to patient messages. The study assesses these AI-generated replies by comparing clinician feedback with automated evaluation metrics, focusing on attributes such as clarity, completeness, and safety.
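The core of such a comparison is measuring how often clinician judgments and automated metrics agree on each attribute. The attribute names (clarity, completeness, safety) come from the study; the pass/fail representation and the function below are an illustrative sketch, not the paper's actual methodology.

```python
def attribute_agreement(clinician: list, automated: list,
                        attributes=("clarity", "completeness", "safety")) -> dict:
    """Per-attribute fraction of draft replies on which clinician and
    automated judgments agree.

    Each input is a list of dicts, one per AI-drafted reply, mapping each
    attribute to a pass/fail boolean. Representation is a simplifying
    assumption for illustration.
    """
    assert len(clinician) == len(automated) and clinician, "need paired, non-empty ratings"
    agree = {a: 0 for a in attributes}
    for c_rating, m_rating in zip(clinician, automated):
        for a in attributes:
            agree[a] += int(c_rating[a] == m_rating[a])
    return {a: agree[a] / len(clinician) for a in attributes}

# Invented example: two draft replies rated by clinicians and by an automated
# evaluator. A low agreement score flags an attribute (here, completeness)
# where automated metrics diverge from clinical judgment.
clinician = [{"clarity": True, "completeness": True, "safety": True},
             {"clarity": True, "completeness": False, "safety": True}]
automated = [{"clarity": True, "completeness": False, "safety": True},
             {"clarity": True, "completeness": False, "safety": True}]
print(attribute_agreement(clinician, automated))
```

Disagreement concentrated in one attribute, as the study found for completeness, is exactly the signal that ongoing real-world evaluation is meant to surface.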

While the models demonstrated strong performance in tone and readability, the study identified notable gaps in response completeness—highlighting the critical need for ongoing evaluation in real-world settings.

“This research helps bridge the gap between cutting-edge algorithms and meaningful clinical application,” said Michael Pencina, Ph.D., Chief Data Scientist at Duke Health and co-author of both studies. “It underscores that responsible AI implementation requires rigorous, ongoing evaluation as part of the technology’s entire life cycle—not just as a final step.”

Together, these two frameworks provide a robust foundation for the responsible integration of AI in healthcare. They equip clinical leaders, developers, and regulators with the tools necessary to evaluate AI models prior to deployment and to continuously monitor their performance—ensuring that these technologies enhance care delivery without compromising patient safety or trust.