Events Calendar

iHealth 2017 Clinical Informatics Conference
2017-05-02 - 2017-05-04
All Day
iHealth 2017 Clinical Informatics Conference, May 02-04, 2017, Philadelphia, PA, Loews Philadelphia Hotel. About the Conference: iHealth is where clinicians, informatics professionals [...]
Chicago Health IT Summit
2017-05-11 - 2017-05-12
All Day
About the Health IT Summits: Renowned leaders in U.S. and North American healthcare gather throughout the year to present important information and share insights at [...]
Articles

Large models identify social determinants in records

Social determinants of health (SDoH) significantly influence patient outcomes, yet their documentation in the structured data of electronic health records (EHRs) is frequently incomplete or absent. Large language models (LLMs) hold promise for efficiently extracting SDoH from EHRs, contributing to both research and clinical care. However, handling this sparsely documented yet vital information raises challenges such as class imbalance and limited labeled data.

In our investigation, we explored effective approaches to leveraging LLMs to extract six distinct SDoH categories from narrative EHR text. The standout performers were the fine-tuned Flan-T5 XL, achieving a macro-F1 of 0.71 for any SDoH mentions, and Flan-T5 XXL, attaining a macro-F1 of 0.70 for adverse SDoH mentions. Incorporating LLM-generated synthetic data during training had varying effects across models and architectures, but it notably improved the performance of the smaller Flan-T5 models (delta F1 +0.12 to +0.23).
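As a rough illustration of the kind of setup described above, the sketch below frames SDoH extraction as a text-to-text task with a Flan-T5 checkpoint via the Hugging Face Transformers API. The prompt wording, the category list, and the example sentence are illustrative assumptions, not the study's released code or label definitions.

```python
# Minimal sketch (not the authors' code): SDoH extraction framed as text-to-text
# classification with a Flan-T5 checkpoint from Hugging Face Transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-xl"  # base weights; the study fine-tunes this size

# Hypothetical label set standing in for the study's six SDoH categories.
CATEGORIES = "employment, housing, transportation, parental status, relationship, social support"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def extract_sdoh(sentence: str) -> str:
    """Ask the model which SDoH categories, if any, a sentence mentions."""
    prompt = (
        f"Which of the following social determinants of health are mentioned "
        f"in the sentence, if any? Options: {CATEGORIES}, or none.\n"
        f"Sentence: {sentence}"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(extract_sdoh("Patient is currently unemployed and staying in a shelter."))
```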

Our best fine-tuned models outperformed the zero- and few-shot performance of ChatGPT-family models in their respective settings, except for GPT-4 with 10-shot prompting on adverse SDoH. These fine-tuned models were also less likely to change their predictions when race/ethnicity and gender descriptors were added to the text, indicating reduced algorithmic bias (p < 0.05). Notably, our models identified 93.8% of patients with adverse SDoH, compared with only 2.0% captured by ICD-10 codes. These results highlight the potential of LLMs to strengthen real-world evidence on SDoH and to identify patients who could benefit from additional resource support.
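A minimal sketch of the kind of demographic-robustness check described above: inject race/ethnicity and gender descriptors into the text and test whether the predicted label changes. The descriptor list, the injected sentence template, and the extract_sdoh() helper from the previous sketch are assumptions for illustration, not the study's evaluation protocol.

```python
# Illustrative robustness probe: does the prediction change when demographic
# descriptors are prepended? Reuses the hypothetical extract_sdoh() helper above.
DESCRIPTORS = ["White", "Black", "Hispanic", "Asian", "male", "female"]

def prediction_is_stable(sentence: str) -> bool:
    """Return True if the predicted label is unchanged under descriptor injection."""
    baseline = extract_sdoh(sentence)
    for d in DESCRIPTORS:
        perturbed = f"The patient is a {d} adult. " + sentence
        if extract_sdoh(perturbed) != baseline:
            return False
    return True

print(prediction_is_stable("Patient reports difficulty affording transportation to appointments."))
```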