Comparing LLMs for Disaster Tweet Analysis: RoBERTa, Llama 2, and Mistral with LoRA
AI Impact Summary
This analysis compares the performance of three transformer-based language models, RoBERTa, Llama 2, and Mistral 7B, on classifying disaster tweets after LoRA fine-tuning. The experiments use the Hugging Face Transformers and PEFT libraries on a dataset of disaster-related tweets from Twitter. LoRA's parameter-efficient approach aims to reach strong classification performance while keeping computational cost low, underscoring the value of efficient fine-tuning techniques for large language models.
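To illustrate why LoRA keeps fine-tuning cheap, the sketch below shows the core idea in plain NumPy: the frozen weight matrix W is left untouched, and only two low-rank factors B and A are trained, so the adapted layer computes with W + BA. The dimensions and rank here are illustrative assumptions, not the actual configurations used for RoBERTa, Llama 2, or Mistral 7B.

```python
import numpy as np

# Illustrative dimensions (hidden size 768, LoRA rank 8); not taken
# from any of the three models in the comparison.
d_in, d_out, r = 768, 768, 8

W = np.random.randn(d_out, d_in) * 0.02   # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.02       # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-init

def lora_forward(x):
    # Adapted layer: frozen path plus the low-rank update B @ A.
    # With B zero-initialized, this starts out identical to the base model.
    return x @ (W + B @ A).T

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params}, LoRA: {lora_params}, "
      f"ratio: {lora_params / full_params:.3%}")
```

Only about 2% of the layer's parameters are trainable here, which is the mechanism behind LoRA's reduced memory and compute footprint; in practice the PEFT library wires this update into selected attention weights of the pretrained model.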
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium