
Accelerate was created for PyTorch users who like to write the training loops of their PyTorch models themselves but are reluctant to write and maintain the boilerplate code needed for multi-GPU, TPU, or fp16 training. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
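
As a rough sketch of what that looks like in practice (the toy model, optimizer, and data below are illustrative placeholders, not part of this package's metadata), a typical loop wraps its objects with Accelerator.prepare and replaces loss.backward() with accelerator.backward(loss):

    import torch
    from accelerate import Accelerator

    # Accelerator detects the available hardware (CPU, single/multi GPU, TPU)
    # and handles device placement and mixed precision behind the scenes.
    accelerator = Accelerator()

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    dataset = torch.utils.data.TensorDataset(
        torch.randn(64, 10), torch.randint(0, 2, (64,))
    )
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

    # prepare() wraps each object for the current setup (DDP, sharded dataloader, ...)
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        # backward() goes through the accelerator so gradient scaling for fp16 works
        accelerator.backward(loss)
        optimizer.step()

The rest of the loop stays ordinary PyTorch; only the prepare/backward calls change when moving between hardware setups.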

uploaded: Mon Mar 31 21:55:21 2025
arch: x86_64
build: py312h06a4308_0
depends: huggingface_hub >=0.21.0, numpy >=1.17,<2.0.0, packaging >=20.0, psutil, python >=3.12,<3.13.0a0, pytorch >=1.10.0, pyyaml, safetensors >=0.3.1, setuptools
license: Apache-2.0
license_family: Apache
md5: edbc2f9cb2064296be9c1a8011b683e4
name: huggingface_accelerate
platform: linux
sha1: d594d7acf1643906ed70aba6ed80b6ccbacd8cdf
sha256: 0e9103f60b98cdb89ec8bcf77a1ce528578c6cacb4c63753655faf615063bfab
size: 577234 bytes
subdir: linux-64
timestamp: 1725023433695
version: 0.33.0