NEW

Deploy Llama 2 (Chat 7B and 13B) in a few clicks on Inference Endpoints

The AI community building the future.

Build, train and deploy state of the art models powered by the reference open source in machine learning.

Hub

Home of Machine Learning

Create, discover and collaborate on ML better.
Join the community to start your ML journey.

Sign Up
Hugging Face Hub dashboard

Open Source

Transformers

Transformers is our natural language processing library and our hub is now open to all ML models, with support from libraries like Flair, Asteroid, ESPnet, Pyannote, and more to come.

Read documentation
huggingface@transformers:~
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
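
The same checkpoint can also be queried through the library's pipeline API. A minimal sketch (the example sentence is ours, not part of the page):

from transformers import pipeline

# Load a fill-mask pipeline backed by the same BERT checkpoint.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the masked token; returns the top candidates with scores.
predictions = fill_mask("The goal of life is [MASK].")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))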

On demand

Inference API

Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code.

Learn more
Fill-Mask demo (mask token: [MASK]; computation time on Intel Xeon 3rd Gen Scalable CPU: cached)
Top predictions: happiness (0.036), survival (0.031), salvation (0.017), freedom (0.017), unity (0.015)

Named Entity Recognition demo (computation time on Intel Xeon 3rd Gen Scalable CPU: cached)
My name is Clara [PER] and I live in Berkeley [LOC], California [LOC]. I work at this cool company called Hugging Face [ORG].
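
The predictions above come from hosted inference widgets; calling the hosted Inference API directly over HTTP looks roughly like the sketch below (the model id and input are placeholders, and YOUR_TOKEN stands for a personal access token):

import requests

# Hosted Inference API endpoint for a given model id (bert-base-uncased as an example).
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"
headers = {"Authorization": "Bearer YOUR_TOKEN"}  # assumed access token placeholder

# Send the input text; for the fill-mask task the response lists candidate tokens with scores.
response = requests.post(API_URL, headers=headers, json={"inputs": "The goal of life is [MASK]."})
print(response.json())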
Science

Our Research contributions

We’re on a journey to advance and democratize NLP for everyone. Along the way, we contribute to the development of technology for the better.

🌸

T0

Multitask Prompted Training Enables Zero-Shot Task Generalization

Open source state-of-the-art zero-shot language model out of BigScience.

Read more

🐎

DistilBERT

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

A smaller, faster, lighter, cheaper version of BERT obtained via model distillation.

Read more

📚

HMTL

Hierarchical Multi-Task Learning

Learning embeddings from semantic tasks for multi-task learning. We have open-sourced code and a demo.

Read more

🐸

Dynamical Language Models

Meta-learning for language modeling

A meta learner is trained via gradient descent to continuously and dynamically update language model weights.

Read more

🤖

State of the art

Neuralcoref

Our open source library for coreference resolution. You can train it on your own dataset and language.

Read more
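
As an illustration (not taken from the page), resolving coreferences with NeuralCoref on top of spaCy looks roughly like this, assuming both packages are installed:

import spacy
import neuralcoref

# Load an English spaCy pipeline and attach the NeuralCoref component to it.
nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

doc = nlp("My sister has a dog. She loves him.")
print(doc._.has_coref)        # True if any coreference clusters were found
print(doc._.coref_clusters)   # clusters linking mentions such as "She" to "My sister"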

🦄

Auto-complete your thoughts

Write with Transformers

This web app is the official demo of the Transformers repository's text generation capabilities.

Start writing
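
The same text generation capability is available from the library itself; a minimal sketch with a small GPT-2 checkpoint (our choice of model and prompt, not necessarily what the app uses):

from transformers import pipeline

# Text generation pipeline with a small GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

# Generate a continuation for a prompt; max_length bounds the total token count.
outputs = generator("In a shocking finding, scientists discovered", max_length=40, num_return_sequences=1)
print(outputs[0]["generated_text"])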