SentencePiece implements subword units (e.g., byte-pair encoding (BPE) [Sennrich et al.] and the unigram language model [Kudo]) with the extension of direct training from raw sentences. SentencePiece allows us to build a purely end-to-end system that does not depend on language-specific pre/postprocessing.
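A minimal sketch of how the Python bindings shipped in this package are typically used, assuming a raw-text corpus at `corpus.txt` (a hypothetical path) and illustrative values for `model_prefix` and `vocab_size`:

```python
# Minimal sketch: train a subword model directly on raw text, then encode with it.
# "corpus.txt", the model prefix, and the vocabulary size are illustrative choices,
# not defaults of the library.
import sentencepiece as spm

# Train a unigram LM model; pass model_type="bpe" for byte-pair encoding instead.
spm.SentencePieceTrainer.train(
    input="corpus.txt",       # one sentence per line, no pre-tokenization needed
    model_prefix="toy",       # writes toy.model and toy.vocab
    vocab_size=8000,
    model_type="unigram",
)

# Load the trained model and segment raw text end to end.
sp = spm.SentencePieceProcessor(model_file="toy.model")
print(sp.encode("This is a test.", out_type=str))  # subword pieces
print(sp.encode("This is a test.", out_type=int))  # corresponding ids
print(sp.decode(sp.encode("This is a test.")))     # decodes back to raw text
```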
| Field | Value |
| --- | --- |
| Uploaded | Tue Apr 1 00:08:06 2025 |
| arch | x86_64 |
| build | py310hd09550d_0 |
| depends | libgcc-ng >=7.5.0, libstdcxx-ng >=7.5.0, python >=3.10,<3.11.0a0 |
| license | Apache-2.0 |
| license_family | Apache |
| md5 | 6b98678138daae9737cd891b3756c16b |
| name | sentencepiece |
| platform | linux |
| sha1 | eeab6f95ad76f5ce49cd82b4845a46dce728c38f |
| sha256 | c01cb9cfc6ba9aa30f1ceadb66e2ccee103b613ffef15217d8ed1d58c7b87eb0 |
| size | 2845525 bytes |
| subdir | linux-64 |
| timestamp | 1640794599235 |
| version | 0.1.95 |