Instructions for using ModelTC/roberta-base-mrpc with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use ModelTC/roberta-base-mrpc with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="ModelTC/roberta-base-mrpc")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ModelTC/roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("ModelTC/roberta-base-mrpc")
```
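MRPC is a paraphrase-detection task, so the pipeline scores sentence pairs rather than single sentences. A minimal sketch of an inference call; the example sentences are illustrative, and the exact label names depend on the checkpoint's config:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="ModelTC/roberta-base-mrpc")

# Pass a sentence pair using the pipeline's text/text_pair dict input.
result = pipe({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Quarterly earnings for the firm hit an all-time high.",
})
print(result)  # label/score prediction for the pair
```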
- Notebooks
  - Google Colab
  - Kaggle
- Xet hash: b87d731a30903cae01ae66e35650aa16a15796714ba6651d147dfaf86722b03a
- Size of remote file: 997 MB
- SHA256: 067cd089de4dcc88812b86b91426d3a755bbda66d34cb2d62ea14027f770449a
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.
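After downloading the weights, the file can be checked against the SHA256 listed above. A hedged sketch using the real `hf_hub_download` helper from `huggingface_hub`; the filename `pytorch_model.bin` is an assumption, so substitute the repository's actual weight file:

```python
import hashlib
from huggingface_hub import hf_hub_download

# Filename is an assumption -- replace with the actual weight file in the repo.
path = hf_hub_download(repo_id="ModelTC/roberta-base-mrpc", filename="pytorch_model.bin")

# Hash the file incrementally to avoid loading ~1 GB into memory at once.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        sha256.update(chunk)

print(sha256.hexdigest())
# Expected (from the metadata above):
# 067cd089de4dcc88812b86b91426d3a755bbda66d34cb2d62ea14027f770449a
```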