Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training
Fine-Tuning BERT using Hugging Face Transformers
What exactly happens when we fine-tune BERT? | by Samuel Flender | Towards Data Science
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | DeepAI
[2202.12024] NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better
GitHub - asappresearch/revisit-bert-finetuning: Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987)