
Accelerate was created for PyTorch users who like writing the training loop of their PyTorch models but are reluctant to write and maintain the boilerplate code needed for multi-GPU, TPU, or fp16 training. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
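As a rough illustration of that claim (a minimal sketch, not taken from this package's own documentation), a plain PyTorch loop usually only needs an `Accelerator` object threaded through it; the model, optimizer, and dataloader below are placeholders.

```python
import torch
from accelerate import Accelerator

# Picks up the multi-GPU / TPU / fp16 settings configured via `accelerate config`.
accelerator = Accelerator()

# Placeholder model, optimizer, and data; substitute your own.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)

# prepare() moves everything to the right device(s) and wraps it for
# distributed / mixed-precision execution.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward(); handles gradient scaling for fp16
    optimizer.step()
```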

Uploaded Sun Mar 30 23:20:47 2025
arch x86_64
build py311h06a4308_0
depends
    numpy >=1.17,<2.0a0
    packaging >=20.0
    psutil
    python >=3.11,<3.12.0a0
    pytorch >=1.6.0
    pyyaml
    setuptools
license Apache-2.0
license_family Apache
md5 c6b69bdd75f7b98ee11c34039a68cc0c
name huggingface_accelerate
platform linux
sha1 e42b6c735c15e65af3bbc7e2a4a4d4c47f90fc7f
sha256 ec242c5d1c03e9830d44846ca992b08112182b00c4fb7e57f134623e1fb23e51
size 432790
subdir linux-64
timestamp 1686666665018
version 0.20.3
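If you download the conda archive directly, the sha256 value above can be used to verify the file. A minimal sketch follows; the archive file name is an assumption assembled from the name/version/build fields and may differ (e.g. `.tar.bz2` vs `.conda`).

```python
import hashlib

# Hypothetical local path, based on the name/version/build fields above.
archive_path = "huggingface_accelerate-0.20.3-py311h06a4308_0.conda"
expected_sha256 = "ec242c5d1c03e9830d44846ca992b08112182b00c4fb7e57f134623e1fb23e51"

# Hash the file in chunks to avoid loading it all into memory.
h = hashlib.sha256()
with open(archive_path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        h.update(chunk)

print("checksum OK" if h.hexdigest() == expected_sha256 else "checksum MISMATCH")
```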