Events Calendar

Forbes Healthcare Summit
2017-11-29 - 2017-11-30
All Day
New York
ForbesLive leverages unique access to the world’s most influential leaders, policy-makers, entrepreneurs, and artists—uniting these global forces to harness their collective knowledge, address today’s critical [...]
29th Annual National Forum on Quality Improvement in Health Care
2017-12-10 - 2017-12-13    
All Day
PROGRAM OVERVIEW The IHI National Forum on December 10–13, 2017, will bring more than 5,000 brilliant minds in health care to Orlando, Florida, to find meaningful connections [...]
Dallas Health IT Summit
2017-12-14 - 2017-12-15
All Day
Dallas
About Health IT Summits U.S. healthcare is at an inflection point right now, as policy mandates and internal healthcare system reform begin to take hold, [...]
Articles

Large language models identify social determinants of health in electronic health records

Social determinants of health (SDoH) significantly influence patient outcomes, yet they are frequently documented incompletely, or not at all, in the structured data of electronic health records (EHRs). Large language models (LLMs) hold promise for efficiently extracting SDoH from EHR text to support both research and clinical care, but this sparsely documented yet vital information raises challenges such as class imbalance and limited training data.

In our investigation, we explored effective approaches for using LLMs to extract six distinct SDoH categories from narrative EHR text. The standout performers were the fine-tuned Flan-T5 XL, which achieved a macro-F1 of 0.71 for any SDoH mention, and Flan-T5 XXL, which achieved a macro-F1 of 0.70 for adverse SDoH mentions. Incorporating LLM-generated synthetic data during training had varying effects across models and architectures but notably improved the performance of the smaller Flan-T5 models (ΔF1 +0.12 to +0.23).
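
As a rough illustration of the fine-tuning approach described above, the sketch below frames SDoH extraction as text-to-text generation with a Flan-T5 checkpoint and the Hugging Face transformers Trainer. The prompt wording, example sentences, label strings, and hyperparameters are illustrative assumptions, not the study's exact configuration.

```python
# A minimal sketch (not the study's exact setup): fine-tune a Flan-T5
# checkpoint to emit SDoH categories as text, using Hugging Face transformers.
# The prompt wording, example sentences, label strings, and hyperparameters
# below are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-xl"  # smaller variants (base/large) train the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical training pairs: a sentence from a clinical note mapped to the
# SDoH categories it mentions ("none" when no SDoH is present).
examples = [
    {"text": "Patient lives alone and reports difficulty affording medications.",
     "target": "social support, financial strain"},
    {"text": "Denies tobacco use; walks 30 minutes daily.",
     "target": "none"},
]
dataset = Dataset.from_list(examples)

def preprocess(batch):
    # Frame extraction as text-to-text: instruction + note text -> label string.
    inputs = tokenizer(
        ["List the social determinants of health mentioned: " + t for t in batch["text"]],
        truncation=True, max_length=512,
    )
    targets = tokenizer(text_target=batch["target"], truncation=True, max_length=64)
    inputs["labels"] = targets["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="sdoh-flan-t5",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The same recipe applies to generating and mixing in synthetic training sentences; only the contents of the training set change, not the fine-tuning loop.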

Our best fine-tuned models outperformed zero- and few-shot prompting of ChatGPT-family models in their respective settings, except for GPT-4 with 10-shot prompting on adverse SDoH. These fine-tuned models were also less likely to change their predictions when race/ethnicity and gender descriptors were added to the text, indicating reduced algorithmic bias (p < 0.05). Notably, our models identified 93.8% of patients with adverse SDoH, compared with only 2.0% captured by ICD-10 codes. These results highlight the potential of LLMs to strengthen real-world evidence on SDoH and to identify patients who could benefit from additional resource support.
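
The robustness check mentioned above can be pictured as a simple perturbation test: inject different race/ethnicity and gender descriptors into otherwise identical sentences and count how often the model's predicted labels change. The sketch below uses a trivial keyword-based stand-in for the model plus made-up descriptor lists and a hypothetical template; it illustrates the idea, not the study's protocol.

```python
# Sketch of a demographic-robustness check: swap race/ethnicity and gender
# descriptors into otherwise identical sentences and count how often the
# predicted SDoH labels change. The descriptor lists, template, and the
# keyword-based predict_sdoh() stand-in are illustrative assumptions.
from itertools import product

DESCRIPTORS = {
    "race_ethnicity": ["White", "Black", "Hispanic", "Asian"],
    "gender": ["man", "woman"],
}

def predict_sdoh(sentence: str) -> frozenset:
    """Stand-in for the fine-tuned model's inference call; a trivial keyword
    rule so the sketch runs end to end. Replace with the real model."""
    lowered = sentence.lower()
    labels = set()
    if "housing" in lowered:
        labels.add("housing")
    if "job loss" in lowered:
        labels.add("employment")
    return frozenset(labels)

def prediction_changes(template: str) -> int:
    """Count descriptor combinations whose predicted labels differ from the
    neutral baseline sentence."""
    baseline = predict_sdoh(template.format(descriptor="patient"))
    changed = 0
    for race, gender in product(DESCRIPTORS["race_ethnicity"], DESCRIPTORS["gender"]):
        variant = template.format(descriptor=f"{race} {gender}")
        if predict_sdoh(variant) != baseline:
            changed += 1
    return changed

if __name__ == "__main__":
    # Hypothetical sentence with a slot for the injected descriptor.
    template = "The {descriptor} reports unstable housing and recent job loss."
    print(prediction_changes(template), "of 8 descriptor swaps changed the prediction")
```

A model with low descriptor sensitivity changes its prediction for few or none of the swapped variants; aggregating these counts over many templates gives the kind of bias comparison reported above.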