==============================
spaCy: Industrial-strength NLP
==============================

spaCy is a library for advanced natural language processing in Python and
Cython. spaCy is built on the very latest research, but it isn't researchware.
It was designed from day one to be used in real products. spaCy currently supports
English and German, as well as tokenization for Chinese, Spanish, Italian, French,
Portuguese, Dutch, Swedish and Hungarian. It's commercial open-source software,
released under the MIT license.
Version 1.6 out now! `Read the release notes here <https://github.com/explosion/spaCy/releases/>`_.

.. image:: https://travis-ci.org/explosion/spaCy.svg?branch=master
    :target: https://travis-ci.org/explosion/spaCy
    :alt: Build Status

.. image:: https://img.shields.io/github/release/explosion/spacy.svg
    :target: https://github.com/explosion/spaCy/releases
    :alt: Current Release Version

.. image:: https://img.shields.io/pypi/v/spacy.svg
    :target: https://pypi.python.org/pypi/spacy
    :alt: pypi Version

.. image:: https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-09a3d5.svg
    :target: https://gitter.im/explosion/spaCy
    :alt: spaCy on Gitter

.. image:: https://img.shields.io/twitter/follow/spacyio.svg?style=social&label=Follow
    :target: https://twitter.com/spacyio
    :alt: spaCy on Twitter

+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Usage Workflows <https://spacy.io/docs/usage/>`_                              | How to use spaCy and its features.                      |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `API Reference <https://spacy.io/docs/api/>`_                                  | The detailed reference for spaCy's API.                 |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Tutorials <https://spacy.io/docs/usage/tutorials>`_                           | End-to-end examples, with code you can modify and run.  |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Showcase & Demos <https://spacy.io/docs/usage/showcase>`_                     | Demos, libraries and products from the spaCy community. |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Contribute <https://github.com/explosion/spaCy/blob/master/CONTRIBUTING.md>`_ | How to contribute to the spaCy project and code base.   |
+--------------------------------------------------------------------------------+---------------------------------------------------------+

+--------------------+---------------------------------------------------------------------+
| Bug reports        | `GitHub Issue tracker <https://github.com/explosion/spaCy/issues>`_ |
+--------------------+---------------------------------------------------------------------+
| Usage questions    | `StackOverflow <http://stackoverflow.com/questions/tagged/spacy>`_, |
|                    | `Reddit usergroup <https://www.reddit.com/r/spacynlp>`_,            |
|                    | `Gitter chat <https://gitter.im/explosion/spaCy>`_                  |
+--------------------+---------------------------------------------------------------------+
| General discussion | `Reddit usergroup <https://www.reddit.com/r/spacynlp>`_,            |
|                    | `Gitter chat <https://gitter.im/explosion/spaCy>`_                  |
+--------------------+---------------------------------------------------------------------+
| Commercial support | [email protected]                                              |
+--------------------+---------------------------------------------------------------------+

See `facts, figures and benchmarks <https://spacy.io/docs/api/>`_.

Install spaCy
=============

spaCy is compatible with 64-bit CPython 2.6+/3.3+ and runs on Unix/Linux, OS X
and Windows. Source packages are available via
`pip <https://pypi.python.org/pypi/spacy>`_. Please make sure that
you have a working build environment set up. See the notes on Ubuntu, macOS/OS X
and Windows below for details.

When using pip it is generally recommended to install packages in a virtualenv
to avoid modifying system state:

.. code:: bash

    pip install spacy

Python packaging is awkward at the best of times, and it's particularly tricky
with C extensions built via Cython that require large data files. So please
report issues as you encounter them.

After installation you need to download a language model. Currently only models
for English and German, named ``en`` and ``de``, are available.

.. code:: bash

    python -m spacy.en.download all
    python -m spacy.de.download all

The download command fetches about 1 GB of data which it installs
within the spacy package directory.
To upgrade spaCy to the latest release:

.. code:: bash

    pip install -U spacy

Sometimes new releases require a new language model, in which case you will have
to upgrade the model as well. You can also force re-downloading and installing a
new language model:

.. code:: bash

    python -m spacy.en.download --force

Compile from source
===================

The other way to install spaCy is to clone its GitHub repository and build it
from source. That is the common way if you want to make changes to the code base.
You'll need to make sure that you have a development environment consisting of a
Python distribution including header files, a compiler, pip, virtualenv and git
installed. The compiler is the trickiest part; how to get one depends on your
system. See the notes on Ubuntu, macOS/OS X and Windows below for details.

.. code:: bash

    # make sure you are using recent pip/virtualenv versions
    python -m pip install -U pip virtualenv

    # find git install instructions at https://git-scm.com/downloads
    git clone https://github.com/explosion/spaCy.git

    cd spaCy
    virtualenv .env && source .env/bin/activate
    pip install -r requirements.txt
    pip install -e .

Compared to a regular install via pip, `requirements.txt <requirements.txt>`_
additionally installs developer dependencies such as Cython.

Ubuntu
------

Install system-level dependencies via ``apt-get``:

.. code:: bash

    sudo apt-get install build-essential python-dev git

macOS / OS X
------------

Install a recent version of `XCode <https://developer.apple.com/xcode/>`_,
including the so-called "Command Line Tools". macOS and OS X ship with Python
and git preinstalled.

Windows
-------

Install a version of `Visual Studio Express <https://www.visualstudio.com/vs/visual-studio-express/>`_
or higher that matches the version that was used to compile your Python
interpreter. For official distributions these are VS 2008 (Python 2.7),
VS 2010 (Python 3.4) and VS 2015 (Python 3.5).

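If you're unsure which compiler built your interpreter, Python itself can tell you; a small stdlib sketch (the exact strings printed depend on your build, e.g. ``MSC v.1900`` corresponds to VS 2015 on official Windows builds):

.. code:: python

    import platform
    import sys

    # The interpreter records the compiler that built it; on official Windows
    # builds this reveals the matching Visual Studio version.
    print(sys.version.split()[0])      # interpreter version, e.g. "2.7.13"
    print(platform.python_compiler())  # compiler string, e.g. "MSC v.1900 64 bit (AMD64)"
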
Run tests
=========

spaCy comes with an extensive test suite. First, find out where spaCy is installed:

.. code:: bash

    python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"

Then run ``pytest`` on that directory. The flags ``--vectors``, ``--slow``
and ``--model`` are optional and enable additional tests:

.. code:: bash

    # make sure you are using a recent pytest version
    python -m pip install -U pytest
    python -m pytest <spacy-directory> --vectors --model --slow

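As an aside, the ``python -c`` one-liner above is a general trick for locating any installed package; a stdlib sketch, using ``json`` as a stand-in since it ships with every Python install:

.. code:: python

    import importlib
    import os

    def package_dir(name):
        """Return the directory a package was imported from."""
        module = importlib.import_module(name)
        return os.path.dirname(module.__file__)

    # ``json`` stands in for ``spacy`` here; any importable package works.
    print(package_dir("json"))
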
Download model to custom location
=================================

You can specify where ``spacy.en.download`` and ``spacy.de.download`` download
the language model to using the ``--data-path`` or ``-d`` argument:

.. code:: bash

    python -m spacy.en.download all --data-path /some/dir

If you choose to download to a custom location, you will need to tell spaCy where
to load the model from in order to use it. You can do this either by calling
``spacy.util.set_data_path()`` before calling ``spacy.load()``, or by passing a
``path`` argument to the ``spacy.en.English`` or ``spacy.de.German`` constructors.

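The idea behind ``set_data_path()`` can be sketched with a hypothetical stand-in (the names below are illustrative, not spaCy's internals): loaders consult a module-level data path that can be redirected before any model is loaded.

.. code:: python

    from pathlib import Path

    # Hypothetical stand-in for the pattern behind spacy.util.set_data_path().
    _DATA_PATH = Path("site-packages") / "spacy" / "data"  # placeholder default

    def set_data_path(path):
        global _DATA_PATH
        _DATA_PATH = Path(path)

    def model_path(lang):
        # a loader would look for the model under <data path>/<lang>
        return _DATA_PATH / lang

    set_data_path("/some/dir")
    print(model_path("en").as_posix())  # "/some/dir/en" on POSIX systems

In spaCy itself, ``spacy.util.set_data_path()`` plays this role for the real loaders.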
Changelog
=========

`v1.6.0 <https://github.com/explosion/spaCy/releases/>`_: Improvements to tokenizer and tests

*Major features and improvements*

* Updated to `Thinc v6 <https://github.com/explosion/thinc/>`_ to prepare for `spaCy v2.0 <https://github.com/explosion/spaCy/projects/3>`_.

*Bug fixes*

* `#326 <https://github.com/explosion/spaCy/issues/326>`_: Tokenizer is now more consistent and handles abbreviations correctly.
* `#344 <https://github.com/explosion/spaCy/issues/344>`_: Tokenizer now handles URLs correctly.
* `#483 <https://github.com/explosion/spaCy/issues/483>`_: Period after two or more uppercase letters is split off in tokenizer exceptions.
* `#631 <https://github.com/explosion/spaCy/issues/631>`_: Add richcmp method to ``Token``.
* `#718 <https://github.com/explosion/spaCy/issues/718>`_: Contractions with "She" are now handled correctly.
* `#736 <https://github.com/explosion/spaCy/issues/736>`_: Times are now tokenized with correct string values.
* `#743 <https://github.com/explosion/spaCy/issues/743>`_: ``Token`` is now hashable.
* `#744 <https://github.com/explosion/spaCy/issues/744>`_: "were" and "Were" are now excluded correctly from contractions.

*Tests*

* Added tests that construct a ``Doc`` object manually.
* Added `documentation for tests <https://github.com/explosion/spaCy/tree/master/spacy/tests>`_ to explain conventions and organisation.

*Contributors*

Thanks to `@oroszgy <https://github.com/oroszgy>`_, `@magnusburton <https://github.com/magnusburton>`_, `@guyrosin <https://github.com/guyrosin>`_ and `@danielhers <https://github.com/danielhers>`_ for the pull requests!

`v1.5.0 <https://github.com/explosion/spaCy/releases/tag/v1.5.0>`_: Alpha support for Swedish and Hungarian

*Major features and improvements*

* Alpha tokenization for Swedish and Hungarian.

*Bug fixes*

* Fixed the ``language_data`` package in the ``setup.py``.
* Fixed the ``vec_path`` declaration that was failing if ``add_vectors`` was set.
* Fixed ``Vocab`` to load without ``serializer_freqs``.

*Documentation and examples*

* Added `spaCy Jupyter notebooks <https://github.com/explosion/spacy-notebooks>`_ repo: an ongoing collection of easy-to-run spaCy examples and tutorials.
* `#657 <https://github.com/explosion/spaCy/issues/657>`_: Generalise `dependency parsing annotation specs <https://spacy.io/docs/api/annotation>`_ beyond English.

*Contributors*

Thanks to `@oroszgy <https://github.com/oroszgy>`_, `@magnusburton <https://github.com/magnusburton>`_, `@jmizgajski <https://github.com/jmizgajski>`_, `@aikramer2 <https://github.com/aikramer2>`_, `@fnorf <https://github.com/fnorf>`_ and `@bhargavvader <https://github.com/bhargavvader>`_ for the pull requests!

`v1.4.0 <https://github.com/explosion/spaCy/releases/tag/v1.4.0>`_: Improved language data and alpha Dutch support

*Major features and improvements*

* Alpha tokenization for Dutch.
* Updated and improved language data.

*Bug fixes*

* `#649 <https://github.com/explosion/spaCy/issues/649>`_: Update and reorganise stop lists.
* `#672 <https://github.com/explosion/spaCy/issues/672>`_: Make ``token.ent_iob_`` return unicode.
* `#674 <https://github.com/explosion/spaCy/issues/674>`_: Add missing lemmas for contracted forms of "be" to ``TOKENIZER_EXCEPTIONS``.
* `#683 <https://github.com/explosion/spaCy/issues/683>`_: ``Morphology`` class now supplies tag map value for the special space tag if it's missing.
* `#684 <https://github.com/explosion/spaCy/issues/684>`_: Ensure ``spacy.en.English()`` loads the GloVe vector data if available. Previously this was inconsistent with the behaviour of ``spacy.load('en')``.
* `#685 <https://github.com/explosion/spaCy/issues/685>`_: Expand ``TOKENIZER_EXCEPTIONS`` with unicode apostrophe.
* `#689 <https://github.com/explosion/spaCy/issues/689>`_: Correct typo in ``STOP_WORDS``.
* `#691 <https://github.com/explosion/spaCy/issues/691>`_: Add tokenizer exceptions for "gonna" and "Gonna".

*Backwards incompatibilities*

No changes to the public, documented API, but the previously undocumented language data and model initialisation processes have been refactored and reorganised. If you were relying on the ``bin/init_model.py`` script, see the new `spaCy Developer Resources <https://github.com/explosion/spacy-dev-resources>`_ repo. Code that references internals of the ``spacy.en`` or ``spacy.de`` packages should also be reviewed before updating to this version.

*Documentation and examples*

* Added `"Adding languages" <https://spacy.io/docs/usage/adding-languages>`_ workflow.
* Added `"Part-of-speech tagging" <https://spacy.io/docs/usage/pos-tagging>`_ workflow.
* Added `spaCy Developer Resources <https://github.com/explosion/spacy-dev-resources>`_ repo: scripts, tools and resources for developing spaCy.

*Contributors*

Thanks to `@dafnevk <https://github.com/dafnevk>`_, `@jvdzwaan <https://github.com/jvdzwaan>`_, `@RvanNieuwpoort <https://github.com/RvanNieuwpoort>`_, `@wrvhage <https://github.com/wrvhage>`_, `@jaspb <https://github.com/jaspb>`_, `@savvopoulos <https://github.com/savvopoulos>`_ and `@davedwards <https://github.com/davedwards>`_ for the pull requests!

`v1.3.0 <https://github.com/explosion/spaCy/releases/tag/v1.3.0>`_: Improve API consistency

*API improvements*

* Added ``Span.sentiment`` attribute.
* `#658 <https://github.com/explosion/spaCy/pull/658>`_: Add ``Span.noun_chunks`` iterator (thanks `@pokey <https://github.com/pokey>`_).
* `#642 <https://github.com/explosion/spaCy/pull/642>`_: Let ``--data-path`` be specified when running download.py scripts (thanks `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_).
* `#638 <https://github.com/explosion/spaCy/pull/638>`_: Add German stopwords (thanks `@souravsingh <https://github.com/souravsingh>`_).
* `#614 <https://github.com/explosion/spaCy/pull/614>`_: Fix ``PhraseMatcher`` to work with the new ``Matcher`` (thanks `@sadovnychyi <https://github.com/sadovnychyi>`_).

*Bug fixes*

* `#605 <https://github.com/explosion/spaCy/issues/605>`_: The ``accept`` argument to ``Matcher`` now rejects matches as expected.
* `#617 <https://github.com/explosion/spaCy/issues/617>`_: ``Vocab.load()`` now works with string paths, as well as ``Path`` objects.
* `#639 <https://github.com/explosion/spaCy/issues/639>`_: Stop words in the ``Language`` class are now used as expected.
* `#656 <https://github.com/explosion/spaCy/issues/656>`_, `#624 <https://github.com/explosion/spaCy/issues/624>`_: Tokenizer special-case rules now support arbitrary token attributes.

*Documentation and examples*

* Added `"Customizing the tokenizer" <https://spacy.io/docs/usage/customizing-tokenizer>`_ workflow.
* Added `"Training the tagger, parser and entity recognizer" <https://spacy.io/docs/usage/training>`_ workflow.
* Added `"Entity recognition" <https://spacy.io/docs/usage/entity-recognition>`_ workflow.

*Contributors*

Thanks to `@pokey <https://github.com/pokey>`_, `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_, `@souravsingh <https://github.com/souravsingh>`_, `@sadovnychyi <https://github.com/sadovnychyi>`_, `@manojsakhwar <https://github.com/manojsakhwar>`_, `@TiagoMRodrigues <https://github.com/TiagoMRodrigues>`_, `@savkov <https://github.com/savkov>`_, `@pspiegelhalter <https://github.com/pspiegelhalter>`_, `@chenb67 <https://github.com/chenb67>`_, `@kylepjohnson <https://github.com/kylepjohnson>`_, `@YanhaoYang <https://github.com/YanhaoYang>`_, `@tjrileywisc <https://github.com/tjrileywisc>`_, `@dechov <https://github.com/dechov>`_, `@wjt <https://github.com/wjt>`_, `@jsmootiv <https://github.com/jsmootiv>`_ and `@blarghmatey <https://github.com/blarghmatey>`_ for the pull requests!

`v1.2.0 <https://github.com/explosion/spaCy/releases/tag/v1.2.0>`_: Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese

*Major features and improvements*

* Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese. Chinese tokenization requires `Jieba <https://github.com/fxsjy/jieba>`_.

*Bug fixes*

* `#376 <https://github.com/explosion/spaCy/issues/376>`_: POS tags for "and/or" are now correct.
* `#578 <https://github.com/explosion/spaCy/issues/578>`_: ``--force`` argument on download command now operates correctly.
* `#595 <https://github.com/explosion/spaCy/issues/595>`_: Lemmatization corrected for some base forms.
* `#588 <https://github.com/explosion/spaCy/issues/588>`_: ``Matcher`` now rejects empty patterns.
* `#592 <https://github.com/explosion/spaCy/issues/592>`_: Added exception rule for tokenization of "Ph.D."
* `#599 <https://github.com/explosion/spaCy/issues/599>`_: Empty documents are now considered tagged and parsed.
* `#600 <https://github.com/explosion/spaCy/issues/600>`_: Add missing ``token.tag`` and ``token.tag_`` setters.
* `#596 <https://github.com/explosion/spaCy/issues/596>`_: Added missing unicode import when compiling regexes that led to incorrect tokenization.
* `#587 <https://github.com/explosion/spaCy/issues/587>`_: Resolved bug that caused ``Matcher`` to sometimes segfault.
* `#429 <https://github.com/explosion/spaCy/issues/429>`_: Ensure missing entity types are added to the entity recognizer.

`v1.1.0 <https://github.com/explosion/spaCy/releases/tag/v1.1.0>`_: Bug fixes and adjustments

* Renamed the ``pipeline`` keyword argument of ``spacy.load()`` to ``create_pipeline``.
* Renamed the ``vectors`` keyword argument of ``spacy.load()`` to ``add_vectors``.

*Bug fixes*

* `#544 <https://github.com/explosion/spaCy/issues/544>`_: Add ``vocab.resize_vectors()`` method, to support changing to vectors of different dimensionality.
* `#536 <https://github.com/explosion/spaCy/issues/536>`_: Default probability was incorrect for OOV words.
* `#539 <https://github.com/explosion/spaCy/issues/539>`_: Unspecified encoding when opening some JSON files.
* `#541 <https://github.com/explosion/spaCy/issues/541>`_: GloVe vectors were being loaded incorrectly.
* `#522 <https://github.com/explosion/spaCy/issues/522>`_: Similarities and vector norms were calculated incorrectly.
* `#461 <https://github.com/explosion/spaCy/issues/461>`_: ``ent_iob`` attribute was incorrect after setting entities via ``doc.ents``.
* `#459 <https://github.com/explosion/spaCy/issues/459>`_: Deserialiser failed on empty doc.
* `#514 <https://github.com/explosion/spaCy/issues/514>`_: Serialization failed after adding a new entity label.

`v1.0.0 <https://github.com/explosion/spaCy/releases/tag/v1.0.0>`_: Support for deep learning workflows and entity-aware rule matcher

*Major features and improvements*

* Added `custom processing pipelines <https://spacy.io/docs/usage/customizing-pipeline>`_, to support deep learning workflows.
* The `rule matcher <https://spacy.io/docs/usage/rule-based-matching>`_ now supports entity IDs and attributes.
* Added `training APIs <https://github.com/explosion/spaCy/tree/master/examples/training>`_ and the ``GoldParse`` class.
* Support for ``Path`` objects. You can now load resources over your network, or do similar trickery, by passing any object that supports the ``Path`` protocol.

*Backwards incompatibilities*

* A keyword argument of ``Language.__init__`` (and its subclasses ``English.__init__`` and ``German.__init__``) has been renamed to ``path``.
* The ``token.repvec`` name has been removed.
* The ``.train()`` method of ``Tagger`` and ``Parser`` has been renamed to ``.update()``.
* The ``GoldParse`` class has a new ``__init__()`` method. The old method has been preserved in ``GoldParse.from_annot_tuples()``.
* Details of the ``Parser`` class have changed.
* The ``get_package`` and ``get_package_by_name`` helper functions have been moved into a new module, ``spacy.deprecated``, in case you still need them while you update.

*Bug fixes*

* Fixed ``get_lang_class`` bug when GloVe vectors are used.
* `#411 <https://github.com/explosion/spaCy/issues/411>`_: ``doc.sents`` raised IndexError on empty string.
* `#455 <https://github.com/explosion/spaCy/issues/455>`_: Correct lemmatization logic.
* `#371 <https://github.com/explosion/spaCy/issues/371>`_: Make ``Lexeme`` objects hashable.
* `#469 <https://github.com/explosion/spaCy/issues/469>`_: Make ``noun_chunks`` detect root NPs.

*Contributors*

Thanks to `@daylen <https://github.com/daylen>`_, `@RahulKulhari <https://github.com/RahulKulhari>`_, `@stared <https://github.com/stared>`_, `@adamhadani <https://github.com/adamhadani>`_, `@izeye <https://github.com/izeye>`_ and `@crawfordcomeaux <https://github.com/crawfordcomeaux>`_ for the pull requests!

`v0.101.0 <https://github.com/explosion/spaCy/releases/tag/0.101.0>`_: Fixed German model

* Fixed the German model.
* Added ``Doc.has_vector`` and ``Span.has_vector`` properties.
* Added ``Span.sent`` property.

`v0.100.7 <https://github.com/explosion/spaCy/releases/tag/0.100.7>`_: German!

spaCy finally supports another language, in addition to English. We're lucky
to have Wolfgang Seeker on the team, and the new German model is just the
beginning. Now that there are multiple languages, you should consider loading
spaCy via the ``load()`` function. This function also makes it easier to load
extra word vector data for English:

.. code:: python

    import spacy
    en_nlp = spacy.load('en', vectors='en_glove_cc_300_1m_vectors')
    de_nlp = spacy.load('de')

To support use of the ``load()`` function, there are also two new helper functions:
``spacy.get_lang_class`` and ``spacy.set_lang_class``. Once the German model is
loaded, you can use it just like the English model:

.. code:: python

    doc = nlp(u'''Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus freien Inhalten, zu dem du mit deinem Wissen beitragen kannst. Seit Mai 2001 sind 1.936.257 Artikel in deutscher Sprache entstanden.''')

    for sent in doc.sents:
        print(sent.root.text, sent.root.n_lefts, sent.root.n_rights)

    # (u'ist', 1, 2)
    # (u'sind', 1, 3)

The German model provides tokenization, POS tagging, sentence boundary detection, syntactic dependency parsing, recognition of organisation, location and person entities, and word vector representations trained on a mix of open subtitles and Wikipedia data. It doesn't yet provide lemmatisation or morphological analysis, and it doesn't yet recognise numeric entities such as numbers and dates.
*Bugfixes*

* Fixed ``Token.__str__`` and ``Token.__unicode__`` built-ins: they included a trailing space.

`v0.100.6 <https://github.com/explosion/spaCy/releases/tag/0.100.6>`_: Add support for GloVe vectors

This release offers improved support for replacing the word vectors used by spaCy.
To install Stanford's GloVe vectors, trained on the Common Crawl, just run:

.. code:: bash

    sputnik --name spacy install en_glove_cc_300_1m_vectors

To reduce memory usage and loading time, we've trimmed the vocabulary down to 1m entries.
This release also integrates all the code necessary for German parsing. A German
model will be released shortly. To assist in multi-lingual processing, we've added
a ``load()`` function. To load the English model with the GloVe vectors:

.. code:: python

    spacy.load('en', vectors='en_glove_cc_300_1m_vectors')

`v0.100.5 <https://github.com/explosion/spaCy/releases/tag/0.100.5>`_

* Fix incorrect use of a header file, caused by a problem with thinc.

`v0.100.4 <https://github.com/explosion/spaCy/releases/tag/0.100.4>`_: Fix OSX problem introduced in 0.100.3

* Small correction to ``right_edge`` calculation.

`v0.100.3 <https://github.com/explosion/spaCy/releases/tag/0.100.3>`_

Support multi-threading, via the ``.pipe`` method. spaCy now releases the GIL
around the parser and entity recognizer, so systems that support OpenMP should
be able to do shared memory parallelism at close to full efficiency.

We've also greatly reduced loading time, and fixed a number of bugs.

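The shared-memory parallelism this enables can be sketched with the standard library alone; here ``zlib.compress``, which also releases the GIL while it works, stands in for spaCy's parser:

.. code:: python

    import zlib
    from concurrent.futures import ThreadPoolExecutor

    # Eight ~100 KB payloads; because zlib.compress drops the GIL during the
    # C-level work, threads can run it concurrently on multiple cores.
    payloads = [bytes([i]) * 100000 for i in range(8)]

    with ThreadPoolExecutor(max_workers=4) as pool:
        sizes = list(pool.map(lambda p: len(zlib.compress(p)), payloads))

    print(sizes)
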
`v0.100.2 <https://github.com/explosion/spaCy/releases/tag/0.100.2>`_

* Fix data version lock that affected v0.100.1.

`v0.100.1 <https://github.com/explosion/spaCy/releases/tag/0.100.1>`_: Fix install for OSX

v0.100 included header files built on Linux that caused installation to fail on
OSX. This should now be corrected. We also updated the default data distribution,
to include a small fix to the tokenizer.

`v0.100 <https://github.com/explosion/spaCy/releases/tag/0.100>`_: Revise setup.py, better model downloads, bug fixes

* If invoking ``Parser.__call__`` directly to do NER, you should call the ``Parser.add_label()`` method to register your entity type; this should work by default when using the ``English.__call__`` method of running the pipeline.
* Fixed bug that caused ``Span.doc.merge()`` to sometimes hang.

`v0.99 <https://github.com/explosion/spaCy/releases/tag/0.99>`_: Improve span merging, internal refactoring

* Merging, via the ``doc.merge()`` and ``span.merge()`` methods, no longer invalidates existing ``Span`` objects. This makes it much easier to merge multiple spans, e.g. to merge all named entities, or all base noun phrases. Thanks to @andreasgrv for help on this patch.
* A new attribute, ``.rank``, is added to ``Token`` and ``Lexeme`` objects, giving the frequency rank of the word.

`v0.98 <https://github.com/explosion/spaCy/releases/tag/0.98>`_: Smaller package, bug fixes

* Fixed ``__str__`` methods for Python 2.

`v0.97 <https://github.com/explosion/spaCy/releases/tag/0.97>`_: Load the ``StringStore`` from a json list, instead of a text file

* Added ``--force`` to over-write the data directory in download.py.
* Fixed bugs in ``Matcher`` and ``doc.merge()``.

`v0.96 <https://github.com/explosion/spaCy/releases/tag/0.96>`_: Hotfix to ``.merge`` method

`v0.95 <https://github.com/explosion/spaCy/releases/tag/0.95>`_: Bugfixes

* Fixes to ``Matcher``, ``Span`` and ``token.conjuncts``.

`v0.94 <https://github.com/explosion/spaCy/releases/tag/0.94>`_

`v0.93 <https://github.com/explosion/spaCy/releases/tag/0.93>`_: Bug fixes to word vectors

Last updated: Feb 18, 2017