finetuning-scheduler
A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.
To install this package, run one of the following:
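For example (a hedged sketch assuming the standard conda-forge and PyPI distributions of the package):

```bash
conda install -c conda-forge finetuning-scheduler
pip install finetuning-scheduler
```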
The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible fine-tuning schedules. Training with the FinetuningScheduler callback is simple and confers a host of benefits.
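A minimal usage sketch (assuming a recent Lightning release; `MyModule` and `MyDataModule` are hypothetical placeholders for a user's LightningModule and LightningDataModule):

```python
import lightning.pytorch as pl
from finetuning_scheduler import FinetuningScheduler

# Adding the callback is all that is required to enable scheduled fine-tuning.
# With no arguments, FinetuningScheduler implicitly generates a default
# fine-tuning schedule for the attached model.
trainer = pl.Trainer(callbacks=[FinetuningScheduler()])
trainer.fit(MyModule(), datamodule=MyDataModule())  # hypothetical placeholders
```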
Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled fine-tuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing (typically upper layers of) the model to optimally adapt to new tasks during transfer learning.
FinetuningScheduler orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), user-specified epoch transitions or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met.
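As an illustration of an explicitly provided schedule, the sketch below shows a YAML file keyed by zero-indexed phase (the parameter names are hypothetical, and `max_transition_epoch` marks an optional user-specified epoch transition for a phase):

```yaml
0:
  params:  # phase 0: thaw the task head first
  - model.classifier.bias
  - model.classifier.weight
1:
  params:  # phase 1: thaw the pooler
  - model.pooler.dense.bias
  - model.pooler.dense.weight
  max_transition_epoch: 9  # optionally force a transition at epoch 9
2:
  params:  # phase 2: thaw the remaining encoder layers
  - model.encoder.*
```

The schedule file is then passed to the callback, e.g. `FinetuningScheduler(ft_schedule="./my_schedule.yaml")`; see the documentation linked below for the full set of supported phase options.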
Summary
A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.
Last Updated
Aug 15, 2024 at 16:46
License
Apache-2.0
Total Downloads
59.8K
Version Downloads
4.1K
Supported Platforms
GitHub Repository
https://github.com/speediedan/finetuning-scheduler
Documentation
https://finetuning-scheduler.readthedocs.io/en/stable/