
Accelerate was created for PyTorch users who like to write their own PyTorch training loops but are reluctant to write and maintain the boilerplate code needed to run them on multi-GPU setups, TPUs, or with fp16 mixed precision. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
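To make that concrete, the sketch below shows the pattern the description refers to, using the library's Accelerator, prepare(), and accelerator.backward() calls; the toy dataset, model, optimizer, and loss are illustrative placeholders, not part of this package's metadata.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from accelerate import Accelerator

    # Toy data and model so the sketch runs end to end (placeholders).
    dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
    dataloader = DataLoader(dataset, batch_size=8)
    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    accelerator = Accelerator()  # picks up the multi-GPU/TPU/fp16 setup from the environment

    # prepare() wraps the usual PyTorch objects so they run on whatever devices are available.
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)  # used in place of loss.backward(); handles fp16 gradient scaling
        optimizer.step()

The rest of the loop is ordinary PyTorch, which is the point of the library: only the device placement and backward call change.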

Uploaded Sun Mar 30 23:20:48 2025
arch x86_64
build py310h06a4308_0
depends numpy >=1.17,<2.0a0, packaging >=20.0, psutil, python >=3.10,<3.11.0a0, pytorch >=1.4.0,<3, pyyaml
license Apache-2.0
license_family Apache
md5 26d6406c2d331327ce8cd9f3df328ea8
name huggingface_accelerate
platform linux
sha1 9db83c30834f474b35b3590419e5ca7e242b128d
sha256 8de6c7900002e4eb6f02dafe21018fd7592bbdcd3a2d66483a534176d9ba98e3
size 259042 bytes
subdir linux-64
timestamp 1670426367321
version 0.15.0