<aside> ℹ️
By L. Zhang et al.
Stanford University
Marr Prize at ICCV 2023
[Hugging Face] [arXiv] [GitHub]
</aside>
One way to finetune a neural network is to directly continue training it on the additional training data. However, this approach can lead to overfitting, mode collapse, and catastrophic forgetting. Extensive research has focused on developing fine-tuning strategies that avoid such issues.
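As a point of reference, the sketch below shows what this naive approach looks like in PyTorch: load the pretrained weights and simply keep running gradient descent on the new data. The names here (`PretrainedNet`, `new_dataset`, the hyperparameters) are hypothetical placeholders for illustration, not the setup used in the paper.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical pretrained model; stands in for whatever network is being finetuned.
class PretrainedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, x):
        return self.body(x)

model = PretrainedNet()
# In practice the weights would be restored from a checkpoint, e.g.:
# model.load_state_dict(torch.load("pretrained.pt"))

# Placeholder "additional training data" (random tensors for illustration).
new_dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 4, (256,)))
loader = DataLoader(new_dataset, batch_size=32, shuffle=True)

# Naive finetuning: every parameter keeps training on the new data.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Because nothing anchors the model to its original weights, a small or narrow finetuning set can overwrite what was learned during pretraining, which is exactly the failure mode described above.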