

Compress-fastText

This Python 3 package lets you compress fastText word embedding models (from the gensim package) by orders of magnitude, without seriously affecting their quality. It can be installed with pip:

pip install compress-fasttext

This blogpost (in Russian) gives more details about the motivation and methods for compressing fastText models.

Model compression

You can use this package to compress your own fastText model (or one downloaded, for example, from RusVectores):

import gensim
import compress_fasttext

# load the original (uncompressed) fastText model
big_model = gensim.models.fasttext.FastTextKeyedVectors.load('path-to-original-model')
# apply feature selection (prune_ft_freq) combined with product quantization (pq=True)
small_model = compress_fasttext.prune_ft_freq(big_model, pq=True)
# save the compressed model to disk
small_model.save('path-to-new-model')
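As a quick sanity check, you can compare the embeddings produced by the original and the compressed model; for common words their cosine similarity should stay close to 1. The snippet below is a minimal sketch, not part of the package: it assumes both models are still in memory and uses numpy (a gensim dependency); the test word is arbitrary.

import numpy as np

def cosine(a, b):
    # cosine similarity between two 1-D vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

word = 'спасибо'  # any word will do; this one means "thank you"
print(cosine(big_model[word], small_model[word]))  # expected to be close to 1.0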

The available compression methods include:

  • matrix decomposition (svd_ft)
  • product quantization (quantize_ft)
  • optimization of feature hashing (prune_ft)
  • feature selection (prune_ft_freq)

The recommended approach is a combination of feature selection and quantization (prune_ft_freq with pq=True).
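For comparison, the other methods are applied in the same way: each takes the original model and returns a smaller model with the same interface. In this sketch all tuning arguments are left at their defaults, since their exact signatures are not documented here.

import compress_fasttext

svd_model = compress_fasttext.svd_ft(big_model)        # matrix decomposition
pq_model = compress_fasttext.quantize_ft(big_model)    # product quantization
pruned_model = compress_fasttext.prune_ft(big_model)   # optimized feature hashing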

Model usage

If you just need a tiny fastText model for Russian, you can download this 28-megabyte model. It's a compressed version of the ruscorpora_none_fasttextskipgram_300_2_2019 model from RusVectores.

If compress-fasttext is already installed, you can download and use this tiny model:

import gensim
import compress_fasttext  # must be importable so the compressed model class can be unpickled

small_model = gensim.models.fasttext.FastTextKeyedVectors.load(
    'https://github.com/avidale/compress-fasttext/releases/download/v0.0.1/ft_freqprune_100K_20K_pq_100.bin'
)
# look up the embedding of a word ("спасибо" means "thank you")
print(small_model['спасибо'])
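Because fastText builds word vectors from character n-grams, the compressed model can still produce embeddings for words outside its pruned vocabulary. The snippet below is an illustrative sketch, assuming the compressed model supports regular item lookup like any gensim fastText model; the misspelled word is arbitrary.

# out-of-vocabulary words are handled through character n-grams
vector = small_model['спасибооо']  # a deliberate misspelling
print(vector.shape)  # dimensionality matches the original model (300 for the model above)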

Notes

This code is heavily based on the navec package by Alexander Kukushkin and the blogpost by Andrey Vasnetsov about shrinking fastText embeddings.
