Accelerate was created for PyTorch users who like to write the training loops of their PyTorch models but are reluctant to write and maintain the boilerplate code needed to run on multiple GPUs, TPUs, or with mixed precision (fp16). Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
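As a minimal sketch of that pattern, the snippet below wraps a plain PyTorch training loop with Accelerate's documented `Accelerator`, `prepare`, and `accelerator.backward` APIs. The toy model, dataset, and hyperparameters are placeholders for illustration only and are not part of this package's metadata.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy model, data, and optimizer -- placeholders for illustration only.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
loss_fn = nn.CrossEntropyLoss()

# The Accelerator handles device placement, distributed wrapping, and mixed precision.
accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)
    # accelerator.backward() replaces loss.backward() so gradient scaling
    # and distributed synchronization are handled for you.
    accelerator.backward(loss)
    optimizer.step()
```

The same script runs unchanged on a single GPU, multiple GPUs, or a TPU; the launch configuration is selected outside the code (e.g. via `accelerate config` and `accelerate launch`).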
| Field | Value |
|---|---|
| name | huggingface_accelerate |
| version | 1.4.0 |
| build | py312h06a4308_0 |
| arch | x86_64 |
| platform | linux |
| subdir | linux-64 |
| depends | huggingface_hub >=0.21.0, numpy >=1.17,<3.0.0, packaging >=20.0, psutil, python >=3.12,<3.13.0a0, pytorch >=2.0.0, pyyaml, safetensors >=0.4.3 |
| license | Apache-2.0 |
| license_family | Apache |
| size | 626209 bytes |
| uploaded | Mon Mar 31 21:55:18 2025 |
| timestamp | 1741361464942 |
| md5 | 519f67eddf0b464b2f717fa6503a8a77 |
| sha1 | 3b775cf2b6dc164e5c8660ed87625d1992659eb9 |
| sha256 | f51761cd70588099b578290e643646cb235c9ddaa6ab8c27387dbe646e710078 |
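After installing this build, the version listed above can be checked from Python; `accelerate.__version__` is the standard version attribute exposed by the package.

```python
import accelerate

# Should print "1.4.0" for the build described in the table above.
print(accelerate.__version__)
```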