
Accelerate was created for PyTorch users who like to write the training loop of their PyTorch models themselves but are reluctant to write and maintain the boilerplate code needed for multi-GPU, TPU, or fp16 training. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
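A minimal sketch of that pattern is shown below. The toy dataset, model, optimizer, and hyperparameters are illustrative placeholders; only the Accelerator calls (Accelerator(), prepare(), backward()) come from the library itself. Note that while this conda package is named huggingface_accelerate, the Python module is imported as accelerate.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from accelerate import Accelerator

    # Toy data, model, and optimizer stand in for your own training setup.
    dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
    dataloader = DataLoader(dataset, batch_size=8)
    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    # The Accelerator wraps the model, optimizer, and dataloader so the same
    # loop can run on a single GPU, multiple GPUs, a TPU, or with fp16,
    # depending on how the environment is configured.
    accelerator = Accelerator()
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()

For distributed runs, such a script is typically started with the accelerate launch CLI rather than plain python, so that process and device setup are handled for you.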

Uploaded Mon Mar 31 21:55:17 2025
arch x86_64
build py310h06a4308_0
depends huggingface_hub >=0.21.0, numpy >=1.17,<3.0.0, packaging >=20.0, psutil, python >=3.10,<3.11.0a0, pytorch >=2.0.0, pyyaml, safetensors >=0.4.3
license Apache-2.0
license_family Apache
md5 69a85724c664213ddc2f66f835427c82
name huggingface_accelerate
platform linux
sha1 756fe19c8d28c189dab49dded9beaded915af3f6
sha256 e7ab4a45d39049f0dffe2b2e1a03e33006d6d1ca3c52b49b5e3af2bf9b75f926
size 479888
subdir linux-64
timestamp 1741361562699
version 1.4.0
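Assuming the package is available in your configured channels, it installs under the name huggingface_accelerate (for example via conda install huggingface_accelerate). A quick check that the installed build matches this record, run inside that environment:

    import accelerate
    print(accelerate.__version__)  # expected to print 1.4.0 for this build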