Accelerate was created for PyTorch users who like writing the training loop of their PyTorch models themselves but are reluctant to write and maintain the boilerplate code needed for multi-GPU/TPU/fp16 training. Accelerate abstracts exactly and only that boilerplate and leaves the rest of your code unchanged.
| Field | Value |
| --- | --- |
| Uploaded | Sun Mar 30 23:20:48 2025 |
| md5 checksum | 3b9e1e997085e78ac85d6b48dc5d11cc |
| arch | x86_64 |
| build | py39h06a4308_0 |
| depends | numpy >=1.17,<2.0a0; packaging >=20.0; psutil; python >=3.9,<3.10.0a0; pytorch >=1.4.0,<3; pyyaml |
| license | Apache-2.0 |
| license_family | Apache |
| name | huggingface_accelerate |
| platform | linux |
| sha1 | 640e9cb33a07fd899139a30e2b9c13d7f647254b |
| sha256 | 05d83cb983023da6cd869555340fd69a6fe631096ff528d7b424cf9401ac85e7 |
| size | 256956 bytes |
| subdir | linux-64 |
| timestamp | 1670426329246 |
| version | 0.15.0 |