
finetuning-scheduler


A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.

Installation

To install this package, run:

Conda
$ conda install conda-forge::finetuning-scheduler


Description

The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible fine-tuning schedules. Training with the FinetuningScheduler callback is simple and confers a host of benefits:

  • dramatically increases fine-tuning flexibility
  • expedites and facilitates exploration of model tuning dynamics
  • enables marginal performance improvements of fine-tuned models

Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled fine-tuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing (typically upper layers of) the model to optimally adapt to new tasks during transfer learning.
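As a minimal sketch, enabling the default, implicitly generated schedule requires only adding the callback to a Trainer. The example below assumes Lightning 2.x import paths (lightning.pytorch; earlier releases use pytorch_lightning) and an existing LightningModule:

    from lightning.pytorch import Trainer
    from finetuning_scheduler import FinetuningScheduler

    # With no arguments, the callback implicitly generates a default
    # fine-tuning schedule and thaws the model phase by phase.
    trainer = Trainer(callbacks=[FinetuningScheduler()])
    # trainer.fit(model)  # `model` is your LightningModule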

FinetuningScheduler orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), user-specified epoch transitions, or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met.
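For the explicit case, a YAML schedule maps zero-indexed phases to the parameters thawed in each phase, optionally capping a phase with max_transition_epoch. The sketch below follows the schedule format described in the project's documentation, but the module names are purely illustrative and must match your model's named parameters:

    import textwrap
    from pathlib import Path

    from lightning.pytorch import Trainer
    from finetuning_scheduler import FinetuningScheduler

    # Hypothetical two-phase schedule: phase 0 thaws the task head,
    # phase 1 thaws the last encoder layer no later than epoch 10.
    SCHEDULE = textwrap.dedent("""\
        0:
          params:
          - model.classifier.*
        1:
          params:
          - model.encoder.layer.11.*
          max_transition_epoch: 10
        """)

    schedule_path = Path("explicit_schedule.yaml")
    schedule_path.write_text(SCHEDULE)

    trainer = Trainer(callbacks=[FinetuningScheduler(ft_schedule=str(schedule_path))])

In the default composed mode, a phase then transitions when its FTSEarlyStopping criteria are met or its max_transition_epoch is reached, whichever comes first.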

Documentation

  • https://finetuning-scheduler.readthedocs.io/en/stable/
  • https://finetuning-scheduler.readthedocs.io/en/latest/

About

Summary

A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.

Last Updated

Aug 15, 2024 at 16:46

License

Apache-2.0

Total Downloads

59.8K

Version Downloads

4.1K

Supported Platforms

noarch