
Evaluating Large Language Models for Health

來(lái)源:泰然健康網(wǎng) 時(shí)間:2025年11月13日 09:52


Abstract: Large language models (LLMs) have demonstrated remarkable success in NLP tasks. However, few studies have evaluated their performance on social media-based, health-related natural language processing tasks, on which high scores have traditionally been difficult to achieve. We benchmarked one supervised classic machine learning model based on Support Vector Machines (SVMs), three supervised pretrained language models (PLMs) based on RoBERTa, BERTweet, and SocBERT, and two LLM-based classifiers (GPT-3.5 and GPT-4) across six text classification tasks. We developed three approaches for leveraging LLMs for text classification: employing LLMs as zero-shot classifiers, using LLMs as annotators to label training data for supervised classifiers, and utilizing LLMs with few-shot examples to augment manually annotated data. Our comprehensive experiments demonstrate that data augmentation using LLMs (GPT-4) with relatively small human-annotated datasets to train lightweight supervised classification models achieves superior results compared to training with human-annotated data alone. Supervised learners also outperform GPT-4 and GPT-3.5 in zero-shot settings. By leveraging this data augmentation strategy, we can harness the power of LLMs to develop smaller, more effective domain-specific NLP models. Training lightweight supervised classification models on LLM-annotated data without human guidance is an ineffective strategy. However, an LLM used as a zero-shot classifier shows promise in excluding false negatives and potentially reducing the human effort required for data annotation. Future investigations are imperative to explore optimal training data sizes and the optimal amounts of augmented data.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2403.19031 [cs.CL] (or arXiv:2403.19031v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2403.19031

arXiv-issued DOI via DataCite

Submission history

From: Yuting Guo
[v1] Wed, 27 Mar 2024 22:05:10 UTC (1,233 KB)
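The first of the three approaches described in the abstract, using an LLM as a zero-shot classifier, can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the label set, prompt template, and example post are invented for demonstration, and no API call is made (the prompt would be sent to a chat-completion endpoint such as GPT-4, with the raw response passed to `parse_label`).

```python
# Sketch of zero-shot text classification with an LLM (illustrative only).
# LABELS and the prompt wording are assumptions, not the paper's prompts.

LABELS = ["health-related", "not health-related"]

def build_zero_shot_prompt(text: str) -> str:
    """Compose a zero-shot classification prompt for one social media post."""
    return (
        "Classify the following social media post into exactly one of "
        f"these categories: {', '.join(LABELS)}.\n"
        f"Post: {text}\n"
        "Answer with the category name only."
    )

def parse_label(raw_response: str) -> str:
    """Map a free-text LLM response onto the closest known label."""
    cleaned = raw_response.strip().lower()
    # Check longer labels first so "not health-related" is not
    # shadowed by its substring "health-related".
    for label in sorted(LABELS, key=len, reverse=True):
        if label in cleaned:
            return label
    return "not health-related"  # conservative fallback

# Example usage: build a prompt and parse a simulated LLM response.
prompt = build_zero_shot_prompt(
    "Week 3 of my new migraine medication, still dizzy."
)
print(parse_label("Health-related"))  # -> health-related
```

The same prompt-and-parse scaffolding extends naturally to the paper's other two approaches: the parsed labels can serve as LLM-generated annotations for training a supervised classifier, or few-shot examples can be appended to the prompt to generate augmented training data.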

相關(guān)知識(shí)

HealthBench: Evaluating Large Language Models Towards Improved Human Health
Are Large Language Models True Healthcare Jacks
MentalGLM Series: Explainable Large Language Models for Mental Health Analysis on Chinese Social Media
PhysioLLM: Supporting Personalized Health Insights with Wearables and Large Language Models
ColaCare: Enhancing Electronic Health Record Modeling through Large Language Model
Do LLMs Provide Consistent Answers to Health
Towards a Personal Health Large Language Model
Breaking Down Abortion Language In Health Bill
Disrupting diagnostic hegemony: reimagining mental health language with British South Asian communities
We Care: Multimodal Depression Detection and Knowledge Infused Mental Health Therapeutic Response Generation

網(wǎng)址: Evaluating Large Language Models for Health http://m.u1s5d6.cn/newsview1840040.html
