LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning

17/09/2025 17 min

Listen "LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning"

Episode Synopsis

This September 2025 paper introduces LoFT, a framework designed to improve Long-Tailed Semi-Supervised Learning (LTSSL) by parameter-efficient fine-tuning of pre-trained foundation models. The core idea is that fine-tuned foundation models yield better confidence calibration and therefore more reliable pseudo-labels, which is crucial for handling the class imbalance inherent in long-tailed datasets. The paper also extends the approach to open-world scenarios with LoFT-OW, which adds mechanisms to detect and filter out-of-distribution (OOD) samples from the unlabeled data. The authors show that these fine-tuned models achieve superior performance on multiple benchmarks, even when using significantly less unlabeled data than previous methods.

Source: https://arxiv.org/pdf/2509.09926
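To make the two ingredients in the synopsis concrete, here is a minimal PyTorch sketch: a parameter-efficient adapter on top of a frozen pre-trained layer, and confidence-filtered pseudo-labeling with an OOD filter for the open-world (LoFT-OW) setting. Everything here is an illustrative assumption rather than the paper's method: the LoRA-style adapter is one common PEFT choice, the energy score (Liu et al., 2020) stands in for whatever OOD detector LoFT-OW actually uses, and the thresholds are placeholders. The paper's exact formulation is in the linked arXiv source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """LoRA-style adapter: a frozen pre-trained linear layer plus a small
    trainable low-rank update, W x + (alpha / r) * B A x. Only A and B are
    trained, so the fine-tuned parameter count stays tiny."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


def select_pseudo_labels(
    logits: torch.Tensor,
    conf_threshold: float = 0.95,
    energy_threshold: float = -5.0,
):
    """Pseudo-label selection with a confidence filter and an OOD filter.

    `logits` are the fine-tuned model's outputs on an unlabeled batch.
    The energy score stands in for LoFT-OW's OOD detector; both thresholds
    are placeholders, not values from the paper.
    """
    probs = F.softmax(logits, dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)

    # Keep only predictions the (better-calibrated) model is sure about.
    confident = confidence >= conf_threshold

    # Lower energy = more in-distribution; drop likely OOD samples.
    energy = -torch.logsumexp(logits, dim=-1)
    in_distribution = energy <= energy_threshold

    mask = confident & in_distribution
    return pseudo_labels[mask], mask


if __name__ == "__main__":
    # Toy usage: adapt one projection of a frozen backbone, then filter a batch.
    head = LoRALinear(nn.Linear(128, 10))
    features = torch.randn(8, 128)  # stand-in for frozen backbone features
    labels, mask = select_pseudo_labels(head(features))
    print(f"kept {mask.sum().item()} of {mask.numel()} unlabeled samples")
```

In a full LTSSL loop, the retained pseudo-labels would feed a standard semi-supervised loss on the unlabeled data while only the adapter parameters receive gradients; samples failing either filter are simply excluded from that step.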
