Accelerate was created for PyTorch users who like to write the training loops of their PyTorch models but are reluctant to write and maintain the boilerplate code needed for multi-GPU/TPU/fp16 setups. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
| Field | Value |
| --- | --- |
| uploaded | Sun Mar 30 23:20:45 2025 |
| md5 | 9d6fbdd6c65e2878ffad6edd24a4be82 |
| arch | x86_64 |
| build | py311h06a4308_0 |
| depends | numpy >=1.17,<2.0a0; packaging >=20.0; psutil; python >=3.11,<3.12.0a0; pytorch >=1.10.0; pyyaml; setuptools |
| license | Apache-2.0 |
| license_family | Apache |
| name | huggingface_accelerate |
| platform | linux |
| sha1 | d1cd4c15201cdc88bc1dfc471ca7a248547a6cf9 |
| sha256 | 00365b9f614c19115b2bd2866bdbc3af6efaf5f1ff934648ac5dbcc5eafff3ff |
| size | 455641 bytes |
| subdir | linux-64 |
| timestamp | 1689792064367 |
| version | 0.21.0 |