Dataset: WillHeld/hinglish_top
How to use SRDdev/MaskedLM with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="SRDdev/MaskedLM")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("SRDdev/MaskedLM")
model = AutoModelForMaskedLM.from_pretrained("SRDdev/MaskedLM")
```

This is a BERT model trained for masked language modeling on English data.
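Conceptually, the fill-mask task takes the model's output logits at the `[MASK]` position, applies a softmax over the vocabulary, and returns the highest-scoring candidate tokens. A minimal sketch of that decoding step, using NumPy with a toy vocabulary and dummy logits in place of a real model call:

```python
import numpy as np

# Toy vocabulary and dummy logits standing in for a real BERT output
# at the [MASK] position (a real vocabulary has ~30k entries).
vocab = ["kaise", "kya", "kaun", "theek", "acha"]
logits = np.array([2.0, 0.5, -1.0, 3.0, 1.0])

# Softmax over the vocabulary to get a probability per candidate token
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Rank candidates by probability, highest first
top_k = 3
order = np.argsort(probs)[::-1][:top_k]
for idx in order:
    print(f"{vocab[idx]}: {probs[idx]:.3f}")
```

The pipeline performs the same ranking internally and additionally maps token ids back to strings with the tokenizer.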
Training loss per epoch on the Hinglish-TOP dataset:
| Epoch | Loss |
|---|---|
| 1 | 0.0485 |
| 2 | 0.00837 |
| 3 | 0.00812 |
| 4 | 0.0029 |
| 5 | 0.014 |
| 6 | 0.00748 |
| 7 | 0.0041 |
| 8 | 0.00543 |
| 9 | 0.00304 |
| 10 | 0.000574 |
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("SRDdev/SRDBerta")
model = AutoModelForMaskedLM.from_pretrained("SRDdev/SRDBerta")

# Pass the loaded model and tokenizer objects to the pipeline
fill = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask = fill.tokenizer.mask_token
fill(f'Aap {fill_mask} ho?')
```
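Each result returned by a fill-mask pipeline is a dict with `score`, `token`, `token_str`, and `sequence` keys. A small sketch of post-processing such results by confidence threshold; the sample values below are made up for illustration, shaped like the pipeline's output:

```python
# Sample results shaped like transformers fill-mask pipeline output;
# the scores, token ids, and strings here are illustrative, not real model output.
results = [
    {"score": 0.62, "token": 1012, "token_str": "kaise", "sequence": "Aap kaise ho?"},
    {"score": 0.21, "token": 2034, "token_str": "kaun", "sequence": "Aap kaun ho?"},
    {"score": 0.05, "token": 3051, "token_str": "kya", "sequence": "Aap kya ho?"},
]

# Keep only predictions above a confidence threshold
threshold = 0.1
confident = [r for r in results if r["score"] >= threshold]
for r in confident:
    print(f"{r['token_str']}: {r['score']:.2f}")
```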
Author: @SRDdev
Name: Shreyas Dixit
Framework: PyTorch
Year: Jan 2023
Pipeline: fill-mask
GitHub: https://github.com/SRDdev
LinkedIn: https://www.linkedin.com/in/srddev/