huggingface_accelerate
Training loop of PyTorch without boilerplate code
To install this package, run one of the following:
Accelerate was created for PyTorch users who like to write the training loops of PyTorch models themselves but are reluctant to write and maintain the boilerplate code needed for multi-GPU setups, TPUs, or fp16 mixed precision.
Accelerate abstracts exactly and only that boilerplate, leaving the rest of your code unchanged.
Summary
Training loop of PyTorch without boilerplate code
Last Updated
Mar 7, 2025 at 16:31
License
Apache-2.0
Total Downloads
2.2K
Supported Platforms
Unsupported Platforms
GitHub Repository
https://github.com/huggingface/accelerate
Documentation
https://huggingface.co/docs/accelerate/index