Events Calendar

Biosensors and Bioelectronics 2021
2021-10-22 - 2021-10-23    
All Day
The Biosensors and Bioelectronics 2021 conference explores new advances and recently updated technologies. It is a great opportunity to enhance your research work in this [...]
Petrochemistry and Chemical Engineering
2021-10-25 - 2021-10-26    
All Day
Petrochemistry 2021 addresses the main issues and future strategies of the global energy industry. This is going to be the largest and [...]
Cardiac Surgery and Medical Devices
2021-10-30 - 2021-10-31    
All Day
The main focus and theme of the conference is “Reconnoitring Challenges Concerning Prediction & Prevention of Heart Diseases”. CARDIAC SURGERY 2020 strives to bring renowned [...]
Articles

Large language models identify social determinants of health in clinical records

Social determinants of health (SDoH) significantly influence patient outcomes, yet their documentation in the structured data of electronic health records (EHRs) is frequently incomplete or absent. Large language models (LLMs) hold promise for efficiently extracting SDoH from narrative EHR text, supporting both research and clinical care. However, challenges such as class imbalance and data limitations arise when handling this sparsely documented yet vital information.
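To make the gap concrete, here is a minimal sketch of checking structured billing codes for the ICD-10-CM SDoH range (Z55-Z65): even when a note plainly documents an adverse circumstance, the structured record often contains no corresponding code. The example record below is fabricated for illustration only.

```python
# Illustration of why structured data alone misses SDoH: ICD-10-CM Z-codes
# (Z55-Z65) cover social circumstances, but they are rarely assigned even when
# the narrative note documents the issue. Example data is fabricated.
SDOH_Z_CODE_PREFIXES = tuple(f"Z{n}" for n in range(55, 66))  # Z55 .. Z65

def has_structured_sdoh(icd10_codes: list[str]) -> bool:
    """True if any billing code falls in the SDoH Z-code range."""
    return any(code.startswith(SDOH_Z_CODE_PREFIXES) for code in icd10_codes)

record = {
    "icd10_codes": ["I10", "E11.9"],  # hypertension, type 2 diabetes
    "note": "Patient reports unstable housing and recent job loss.",
}

# The note clearly documents adverse SDoH, but the structured codes do not.
print(has_structured_sdoh(record["icd10_codes"]))  # False
```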

In our investigation, we explored effective approaches to leveraging LLMs to extract six distinct SDoH categories from narrative EHR text. The strongest performers were the fine-tuned Flan-T5 XL, achieving a macro-F1 of 0.71 for any SDoH mention, and Flan-T5 XXL, achieving a macro-F1 of 0.70 for adverse SDoH mentions. Incorporating LLM-generated synthetic data during training had varying effects across models and architectures, but notably improved the performance of the smaller Flan-T5 models (ΔF1 of +0.12 to +0.23).
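As a rough illustration, the extraction step can be framed as a text-to-text task with a Flan-T5 checkpoint via Hugging Face transformers. The prompt wording, label names, and generation settings below are assumptions for the sketch, not the authors' exact setup.

```python
# Minimal sketch: prompt a Flan-T5 checkpoint to list SDoH categories
# mentioned in a sentence of narrative EHR text.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-xl"  # Flan-T5 XXL was the other strong performer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Six SDoH categories; the exact label names here are assumptions.
SDOH_LABELS = [
    "employment", "housing", "transportation",
    "parental status", "relationship", "social support",
]

def extract_sdoh(sentence: str) -> str:
    """Ask the model which SDoH categories, if any, a sentence mentions."""
    prompt = (
        "List the social determinants of health mentioned in the sentence, "
        f"choosing from: {', '.join(SDOH_LABELS)}. "
        "Answer 'none' if no SDoH is mentioned.\n"
        f"Sentence: {sentence}"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(extract_sdoh("Patient lives alone and was recently evicted."))
```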

Our best fine-tuned models outperformed the zero- and few-shot performance of ChatGPT-family models in their respective settings, except for GPT-4 with 10-shot prompting for adverse SDoH. The fine-tuned models were also less likely to change their predictions when race/ethnicity and gender descriptors were added to the text, indicating reduced algorithmic bias (p < 0.05). Notably, our models identified 93.8% of patients with adverse SDoH, compared with only 2.0% captured by ICD-10 codes. These results highlight the potential of LLMs to strengthen real-world evidence on SDoH and to identify patients who could benefit from additional resource support.
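The bias analysis can be pictured as a simple counterfactual test: inject demographic descriptors into the input and count how often the prediction flips. Below is a small self-contained sketch; the descriptor list, sentence template, and the stand-in keyword extractor are illustrative assumptions (in practice the extractor would be the fine-tuned model from the previous sketch).

```python
# Counterfactual check: prepend race/ethnicity or gender descriptors to a
# note and count how often the extracted SDoH prediction changes. Fewer flips
# suggest lower sensitivity to demographic wording.
from typing import Callable

DESCRIPTORS = ["Black", "white", "Hispanic", "Asian", "male", "female"]

def count_prediction_flips(sentence: str, extract_fn: Callable[[str], str]) -> int:
    """Count how many descriptor injections change the extractor's output."""
    baseline = extract_fn(sentence)
    flips = 0
    for descriptor in DESCRIPTORS:
        modified = f"The patient is {descriptor}. {sentence}"
        if extract_fn(modified) != baseline:
            flips += 1
    return flips

# Stand-in extractor so the sketch runs on its own; a real run would call the
# fine-tuned Flan-T5 extraction function instead.
def keyword_extractor(text: str) -> str:
    return "housing" if "evicted" in text else "none"

print(count_prediction_flips("Patient was recently evicted.", keyword_extractor))
```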