[D] Benchmarking 🤗/Transformers on both PyTorch and TensorFlow
Since our recent release of Transformers (previously known as pytorch-pretrained-BERT and pytorch-transformers), we’ve been working on a comparison between our models’ PyTorch and TensorFlow implementations.
We’ve released a detailed report benchmarking each of the architectures hosted on our repository (BERT, GPT-2, DistilBERT, …) in PyTorch with and without TorchScript, and in TensorFlow with and without XLA. We benchmark them for inference; the results are visible in the following spreadsheet.
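On the PyTorch side, "with and without TorchScript" typically means timing eager execution against a traced version of the same model. Here's a minimal sketch of that pattern, using a small stand-in module rather than the actual benchmarked architectures, and a hypothetical `bench` helper (not code from the report):

```python
import time
import torch
import torch.nn as nn

# Stand-in model for illustration only; the report benchmarks BERT, GPT-2,
# DistilBERT, etc., not this toy feed-forward block.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128)).eval()
example = torch.randn(8, 128)

# Trace into TorchScript for the "with TorchScript" measurement.
traced = torch.jit.trace(model, example)

def bench(fn, runs=100):
    """Return the mean per-call latency in seconds, after one warm-up call."""
    with torch.no_grad():
        fn(example)  # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            fn(example)
    return (time.perf_counter() - start) / runs

eager_ms = bench(model) * 1e3
script_ms = bench(traced) * 1e3
print(f"eager: {eager_ms:.3f} ms/call, torchscript: {script_ms:.3f} ms/call")
```

The TensorFlow side pairs analogously: the same forward pass is timed with and without XLA compilation enabled.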
We would love to hear your thoughts on the process.