LoRA-based Fine-Tuning of RoBERTa, Llama-2, and Mistral for Disaster Tweet Classification
AI Impact Summary
The content documents a comparative study of RoBERTa, Llama-2-7B-hf, and Mistral-7B-v0.1 for classifying disaster-related tweets, using LoRA (a parameter-efficient fine-tuning technique) to adapt each model to a sequence classification task. It emphasizes reducing the number of trainable parameters so that fine-tuning fits on limited hardware while preserving downstream performance, highlighting a practical path for rapid domain adaptation. The workflow relies on Hugging Face tools (datasets, transformers) and Weights & Biases for experiment tracking, providing a reproducible blueprint for teams deploying crisis-monitoring classifiers.
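The workflow described above can be sketched with Hugging Face `transformers` and `peft`. This is a minimal illustration, not the study's actual configuration: the base checkpoint, LoRA rank, alpha, dropout, and target modules shown here are assumptions chosen for a RoBERTa-style encoder, not values reported in the source.

```python
# Sketch: wrapping a sequence-classification model with a LoRA adapter via peft.
# All hyperparameters below are illustrative assumptions, not the study's settings.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Binary label space: disaster-related vs. not disaster-related.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",  # assumed checkpoint; the study also covers Llama-2-7B-hf and Mistral-7B-v0.1
    num_labels=2,
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,      # sequence classification head stays trainable
    r=8,                             # assumed low-rank dimension
    lora_alpha=16,                   # assumed scaling factor
    lora_dropout=0.1,                # assumed adapter dropout
    target_modules=["query", "value"],  # attention projections typical for RoBERTa
)

# Freeze the base weights and inject low-rank adapter matrices.
model = get_peft_model(model, lora_config)

# Reports the small trainable fraction relative to total parameters.
model.print_trainable_parameters()
```

For the decoder-only models (Llama-2, Mistral) the same pattern applies, but `target_modules` would instead name their projection layers (e.g. `q_proj`, `v_proj`), and the resulting adapter can then be trained with the usual `Trainer` loop while W&B logs the run.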
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium