
Accelerate was created for PyTorch users who like to write their own training loops but are reluctant to write and maintain the boilerplate code needed for multi-GPU, TPU, or fp16 (mixed-precision) training. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
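A minimal sketch of what that looks like in practice is shown below. The toy model, data, and hyperparameters are placeholders, not part of this package; the core pattern is wrapping your existing objects with `Accelerator.prepare()` and replacing `loss.backward()` with `accelerator.backward(loss)`.

```python
import torch
from accelerate import Accelerator

# Placeholder model, optimizer, and data standing in for your own training setup.
model = torch.nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 128), torch.randint(0, 10, (256,))
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
loss_fn = torch.nn.CrossEntropyLoss()

# The Accelerator detects the available hardware (CPU, single/multi-GPU, TPU)
# and takes care of device placement and mixed precision.
accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)
    # Use accelerator.backward() instead of loss.backward() so gradient
    # scaling and synchronization are handled correctly across devices.
    accelerator.backward(loss)
    optimizer.step()
```

The same script can then be launched unchanged on different hardware setups; only the launcher configuration changes, not the training loop.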

Uploaded: Sun Mar 30 23:20:45 2025
arch: x86_64
build: py312h06a4308_0
depends: numpy >=1.17,<2.0a0, packaging >=20.0, psutil, python >=3.12,<3.13.0a0, pytorch >=1.10.0, pyyaml, setuptools
license: Apache-2.0
license_family: Apache
md5: d67ef770570a37f0dd0344711af4280d
name: huggingface_accelerate
platform: linux
sha1: f221a66288e676e8e5a5844d5cd7e5b5223e35f3
sha256: c6b80eae9674e9f8f6f8f08915fb941848aa5271b9a37e462f745c79604c4674
size: 425231 bytes
subdir: linux-64
timestamp: 1709060734661 (Unix epoch, milliseconds)
version: 0.21.0
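
As a quick sanity check after installing this build (assuming the package provides the upstream `accelerate` Python module, as the project does), you can confirm that the installed version matches the metadata above:

```python
import accelerate

# Expect the version recorded in the package metadata above.
assert accelerate.__version__ == "0.21.0", accelerate.__version__
print("accelerate", accelerate.__version__, "is installed")
```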