
conda-forge / packages / finetuning-scheduler 2.4.0

A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.

copied from cf-staging / finetuning-scheduler

Installers

  • noarch v2.4.0

conda install

To install this package, run:
conda install conda-forge::finetuning-scheduler

Description

The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible fine-tuning schedules. Training with the FinetuningScheduler callback is simple and confers a host of benefits:

  • dramatically increases fine-tuning flexibility
  • expedites and facilitates exploration of model tuning dynamics
  • enables marginal performance improvements of fine-tuned models

Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled fine-tuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing the model (typically its upper layers) to optimally adapt to new tasks during transfer learning.
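
As a minimal sketch of the default mode (assuming the unified lightning package of PyTorch Lightning 2.x; MyLightningModule and MyDataModule are hypothetical placeholders for your own LightningModule and LightningDataModule), enabling an implicitly generated schedule only requires attaching the callback to the Trainer:

from lightning.pytorch import Trainer
from finetuning_scheduler import FinetuningScheduler

# Hypothetical placeholders for your own LightningModule / LightningDataModule.
model = MyLightningModule()
datamodule = MyDataModule()

# With no explicit schedule supplied, FinetuningScheduler generates a default
# schedule that thaws the model's parameter groups gradually, starting from the
# layers closest to the output.
trainer = Trainer(callbacks=[FinetuningScheduler()])
trainer.fit(model, datamodule=datamodule)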

FinetuningScheduler orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), user-specified epoch transitions or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met.
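
An explicit schedule is a YAML file mapping integer phase indices to the parameter names thawed in each phase. The following is a rough sketch: the params and max_transition_epoch keys follow the project's documented schedule format, but the parameter-name patterns, monitored metric, and patience value are illustrative and must be adapted to your own model.

from pathlib import Path
from lightning.pytorch import Trainer
from finetuning_scheduler import FinetuningScheduler, FTSEarlyStopping

# Write an illustrative explicit schedule: phase 0 trains only a (hypothetical)
# classifier head; later phases thaw progressively deeper parameter groups.
Path("explicit_schedule.yaml").write_text(
    """\
0:
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  params:
  - model.pooler.dense.bias
  - model.pooler.dense.weight
  max_transition_epoch: 5
2:
  params:
  - model.encoder.*
"""
)

trainer = Trainer(
    callbacks=[
        FinetuningScheduler(ft_schedule="explicit_schedule.yaml"),
        # FTSEarlyStopping (the multi-phase extension of EarlyStopping) triggers
        # a phase transition when the monitored metric stops improving.
        FTSEarlyStopping(monitor="val_loss", patience=2),
    ]
)
trainer.fit(model, datamodule=datamodule)  # model/datamodule as in the sketch above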

Documentation

  • https://finetuning-scheduler.readthedocs.io/en/stable/
  • https://finetuning-scheduler.readthedocs.io/en/latest/
