This repository contains demos I made with the Transformers library by 🤗 HuggingFace. Currently, all of them are implemented in PyTorch.
NOTE: if you are not familiar with HuggingFace and/or Transformers, I highly recommend checking out our free course, which introduces you to several Transformer architectures (such as BERT, GPT-2, T5, BART, etc.), as well as gives an overview of the HuggingFace libraries, including Transformers, Tokenizers, Datasets, Accelerate and the hub.
For an overview of the ecosystem of HuggingFace for computer vision (June 2022), refer to this notebook with corresponding video.
Currently, it contains the following demos:
- Audio Spectrogram Transformer (paper):
- BERT (paper):
- BEiT (paper):
- CANINE (paper):
- CLIPSeg (paper):
- Conditional DETR (paper):
- ConvNeXT (paper):
- DINO (paper):
- DETR (paper):
- DPT (paper):
- Deformable DETR (paper):
- DiT (paper):
- Donut (paper):
- performing inference with Donut for document image classification
- fine-tuning Donut for document image classification
- performing inference with Donut for document visual question answering (DocVQA)
- performing inference with Donut for document parsing
- fine-tuning Donut for document parsing with PyTorch Lightning
- GIT (paper):
- GLPN (paper):
- GPT-J-6B (repository):
- GroupViT (repository):
- ImageGPT (blog post):
- LUKE (paper):
- LayoutLM (paper):
- LayoutLMv2 (paper):
- fine-tuning `LayoutLMv2ForSequenceClassification` on RVL-CDIP
- fine-tuning `LayoutLMv2ForTokenClassification` on FUNSD
- fine-tuning `LayoutLMv2ForTokenClassification` on FUNSD using the 🤗 Trainer
- performing inference with `LayoutLMv2ForTokenClassification` on FUNSD
- true inference with `LayoutLMv2ForTokenClassification` (when no labels are available) + Gradio demo
- fine-tuning `LayoutLMv2ForTokenClassification` on CORD
- fine-tuning `LayoutLMv2ForQuestionAnswering` on DocVQA
- LayoutLMv3 (paper):
- fine-tuning `LayoutLMv3ForTokenClassification` on the FUNSD dataset
- LayoutXLM (paper):
- MarkupLM (paper):
- Mask2Former (paper):
- MaskFormer (paper):
- OneFormer (paper):
- Perceiver IO (paper):
- showcasing masked language modeling and image classification with the Perceiver
- fine-tuning the Perceiver for image classification
- fine-tuning the Perceiver for text classification
- predicting optical flow between a pair of images with `PerceiverForOpticalFlow`
- auto-encoding a video (images, audio, labels) with `PerceiverForMultimodalAutoencoding`
- SAM (paper):
- SegFormer (paper):
- T5 (paper):
- TAPAS (paper):
- fine-tuning `TapasForQuestionAnswering` on the Microsoft Sequential Question Answering (SQA) dataset
- evaluating `TapasForSequenceClassification` on the Table Fact Checking (TabFact) dataset
- Table Transformer (paper):
- TrOCR (paper):
- UPerNet (paper):
- VideoMAE (paper):
- ViLT (paper):
- fine-tuning ViLT for visual question answering (VQA)
- performing inference with ViLT to illustrate visual question answering (VQA)
- masked language modeling (MLM) with a pre-trained ViLT model
- performing inference with ViLT for image-text retrieval
- performing inference with ViLT to illustrate natural language for visual reasoning (NLVR)
- ViTMAE (paper):
- Vision Transformer (paper):
- X-CLIP (paper):
- YOLOS (paper):
... more to come! 🤗
If you have any questions regarding these demos, feel free to open an issue on this repository.
Btw, I was also the main contributor to add the following algorithms to the library:
- TAbular PArSing (TAPAS) by Google AI
- Vision Transformer (ViT) by Google AI
- DINO by Facebook AI
- Data-efficient Image Transformers (DeiT) by Facebook AI
- LUKE by Studio Ousia
- DEtection TRansformers (DETR) by Facebook AI
- CANINE by Google AI
- BEiT by Microsoft Research
- LayoutLMv2 (and LayoutXLM) by Microsoft Research
- TrOCR by Microsoft Research
- SegFormer by NVIDIA
- ImageGPT by OpenAI
- Perceiver by DeepMind
- MAE by Facebook AI
- ViLT by NAVER AI Lab
- ConvNeXT by Facebook AI
- DiT by Microsoft Research
- GLPN by KAIST
- DPT by Intel Labs
- YOLOS by School of EIC, Huazhong University of Science & Technology
- TAPEX by Microsoft Research
- LayoutLMv3 by Microsoft Research
- VideoMAE by Multimedia Computing Group, Nanjing University
- X-CLIP by Microsoft Research
- MarkupLM by Microsoft Research
All of them were an incredible learning experience. I'd recommend contributing an AI algorithm to the library to anyone!
Data preprocessing
Regarding preparing your data for a PyTorch model, there are a few options:
- a native PyTorch dataset + dataloader. This is the standard way to prepare data for a PyTorch model, namely by subclassing `torch.utils.data.Dataset` and then creating a corresponding `DataLoader` (an iterable that allows looping over the items of a dataset in batches). When subclassing the `Dataset` class, one needs to implement 3 methods: `__init__`, `__len__` (which returns the number of examples of the dataset) and `__getitem__` (which returns an example of the dataset, given an integer index). Here's an example of creating a basic text classification dataset (assuming one has a CSV that contains 2 columns, namely "text" and "label"):
```python
import torch
from torch.utils.data import Dataset

class CustomTrainDataset(Dataset):
    def __init__(self, df, tokenizer):
        self.df = df
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # get item
        item = self.df.iloc[idx]
        text = item['text']
        label = item['label']
        # encode text
        encoding = self.tokenizer(text, padding="max_length", max_length=128, truncation=True, return_tensors="pt")
        # remove batch dimension which the tokenizer automatically adds
        encoding = {k: v.squeeze() for k, v in encoding.items()}
        # add label
        encoding["label"] = torch.tensor(label)

        return encoding
```
Instantiating the dataset then happens as follows:
```python
from transformers import BertTokenizer
import pandas as pd

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
df = pd.read_csv("path_to_your_csv")
train_dataset = CustomTrainDataset(df=df, tokenizer=tokenizer)
```
Accessing the first example of the dataset can then be done as follows:
```python
encoding = train_dataset[0]
```
In practice, one creates a corresponding `DataLoader`, which allows getting batches from the dataset:
```python
from torch.utils.data import DataLoader

train_dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True)
```
I often check whether the data is created correctly by fetching the first batch from the data loader, and then printing out the shapes of the tensors, decoding the input_ids back to text, etc.
```python
batch = next(iter(train_dataloader))
for k, v in batch.items():
    print(k, v.shape)

# decode the input_ids of the first example of the batch
print(tokenizer.decode(batch['input_ids'][0].tolist()))
```
- HuggingFace Datasets. Datasets is a library by HuggingFace that allows you to easily load and process data in a very fast and memory-efficient way. It is backed by Apache Arrow, and has cool features such as memory-mapping, which allow you to only load data into RAM when it is required. It also has deep interoperability with the HuggingFace hub, allowing you to easily load well-known datasets as well as share your own with the community.
Loading a custom dataset as a `Dataset` object can be done as follows (you can install Datasets using `pip install datasets`):
```python
from datasets import load_dataset

dataset = load_dataset('csv', data_files={'train': ['my_train_file_1.csv', 'my_train_file_2.csv'],
                                          'test': 'my_test_file.csv'})
```
Here I'm loading local CSV files, but other formats are supported as well (including JSON, Parquet, txt), and you can also load data from a local Pandas dataframe or dictionary, for instance. You can check out the docs for all details.
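As a quick illustration of how such a dataset can be processed further, here is a minimal sketch (assuming the same "text"/"label" CSV layout and the BERT tokenizer used above; the file name is just a placeholder) that tokenizes all examples with the `map` method and sets the format to PyTorch tensors:

```python
from datasets import load_dataset
from transformers import BertTokenizer

# placeholder file name; assumes a CSV with "text" and "label" columns as in the examples above
dataset = load_dataset('csv', data_files={'train': 'my_train_file.csv'})
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def tokenize(examples):
    # tokenize a batch of texts at once
    return tokenizer(examples["text"], padding="max_length", max_length=128, truncation=True)

# map applies the function to all examples (batched for speed) and caches the result on disk
encoded_dataset = dataset.map(tokenize, batched=True)
# return PyTorch tensors for the listed columns, so the dataset can be fed to a DataLoader directly
encoded_dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
```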
Training frameworks
Regarding fine-tuning Transformer models (or more generally, PyTorch models), there are a few options:
- using native PyTorch. This is the most basic way to train a model, and requires the user to manually write the training loop. The advantage is that this is very easy to debug. The disadvantage is that one needs to implement the training logic oneself, such as setting the model in the appropriate mode (`model.train()`/`model.eval()`), handling device placement (`model.to(device)`), etc. A typical training loop in PyTorch looks as follows (inspired by this great PyTorch intro tutorial):
```python
import torch
from transformers import BertForSequenceClassification

# Instantiate pre-trained BERT model with randomly initialized classification head
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# I almost always use a learning rate of 5e-5 when fine-tuning Transformer based models
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# put model on GPU, if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

epochs = 3  # number of epochs to train for

for epoch in range(epochs):
    model.train()
    train_loss = 0.0
    for batch in train_dataloader:
        # put batch on device
        batch = {k: v.to(device) for k, v in batch.items()}

        # forward pass
        outputs = model(**batch)
        loss = outputs.loss
        train_loss += loss.item()

        # backward pass + optimization step
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(f"Loss after epoch {epoch}:", train_loss / len(train_dataloader))

    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for batch in eval_dataloader:
            # put batch on device
            batch = {k: v.to(device) for k, v in batch.items()}

            # forward pass
            outputs = model(**batch)
            loss = outputs.loss
            val_loss += loss.item()

    print(f"Validation loss after epoch {epoch}:", val_loss / len(eval_dataloader))
```
- PyTorch Lightning (PL). PyTorch Lightning is a framework that automates the training loop written above by abstracting it away in a Trainer object. Users don't need to write the training loop themselves anymore; instead, they can simply do `trainer = Trainer()` and then `trainer.fit(model)`. The advantage is that you can start training models very quickly (hence the name lightning), as all training-related code is handled by the `Trainer` object. The disadvantage is that it may be more difficult to debug your model, as training and evaluation are now abstracted away.
- HuggingFace Trainer. The HuggingFace Trainer API can be seen as a framework similar to PyTorch Lightning in the sense that it also abstracts the training away using a Trainer object. However, contrary to PyTorch Lightning, it is not meant to be a general framework. Rather, it is made especially for fine-tuning Transformer-based models available in the HuggingFace Transformers library. The Trainer also has an extension called `Seq2SeqTrainer` for encoder-decoder models, such as BART, T5 and the `EncoderDecoderModel` classes. Note that all PyTorch example scripts of the Transformers library make use of the Trainer (a minimal sketch of using it is shown after this list).
- HuggingFace Accelerate: Accelerate is a new project, made for people who still want to write their own training loop (as shown above), but would like to have it run automatically regardless of the hardware (i.e. multiple GPUs, TPU pods, mixed precision, etc.); a sketch of this is shown after this list as well.
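Below is a minimal sketch of fine-tuning with the HuggingFace Trainer, reusing the `train_dataset` created in the data preprocessing section above. The hyperparameters and the `output_dir` value are placeholder assumptions for illustration, not settings taken from the notebooks:

```python
from transformers import BertForSequenceClassification, TrainingArguments, Trainer

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# hypothetical hyperparameters, just to illustrate the API
training_args = TrainingArguments(
    output_dir="checkpoints",        # where checkpoints get written
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,     # the dataset created in the section above
)

trainer.train()
```

And here is a rough sketch of how the native training loop from above changes when using Accelerate: you let the `Accelerator` prepare the model, optimizer and dataloader(s), and replace `loss.backward()` with `accelerator.backward(loss)`, after which device placement and distributed setups are handled for you:

```python
from accelerate import Accelerator

accelerator = Accelerator()
# prepare() handles device placement, so no manual model.to(device) or batch.to(device) is needed
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for batch in train_dataloader:
    outputs = model(**batch)
    loss = outputs.loss
    accelerator.backward(loss)   # instead of loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```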