Dataset used to train: Universal-NER/Pile-NER-type
How to use LR-AI-Labs/tiny-universal-NER with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="LR-AI-Labs/tiny-universal-NER")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("LR-AI-Labs/tiny-universal-NER")
model = AutoModelForCausalLM.from_pretrained("LR-AI-Labs/tiny-universal-NER")

How to use LR-AI-Labs/tiny-universal-NER with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "LR-AI-Labs/tiny-universal-NER"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "LR-AI-Labs/tiny-universal-NER",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
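The same completions request can also be sent from Python. Below is a minimal sketch using only the standard library; the payload fields mirror the curl call above, while the helper name `build_completion_payload` is ours, not part of vLLM. The actual request is left commented out since it requires a running server:

```python
import json
from urllib import request

def build_completion_payload(prompt, model="LR-AI-Labs/tiny-universal-NER",
                             max_tokens=512, temperature=0.5):
    """Build the JSON body for the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_payload("Once upon a time,")
body = json.dumps(payload).encode("utf-8")

# Sending the request (requires a vLLM server running on localhost:8000):
# req = request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```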
How to use LR-AI-Labs/tiny-universal-NER with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "LR-AI-Labs/tiny-universal-NER" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "LR-AI-Labs/tiny-universal-NER",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "LR-AI-Labs/tiny-universal-NER" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "LR-AI-Labs/tiny-universal-NER",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use LR-AI-Labs/tiny-universal-NER with Docker Model Runner:
docker model run hf.co/LR-AI-Labs/tiny-universal-NER
This model is fine-tuned from TinyLlama.
It is trained on ChatGPT-generated Pile-NER-type data.
Check this paper for more information.
You will need transformers>=4.34. Check the TinyLlama GitHub page for more information.
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="LR-AI-Labs/tiny-universal-NER",
torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{
"role": "system",
"content": "A virtual assistant answers questions from a user based on the provided text.",
},
{
"role": "user",
"content": "Text: VinBigData Joint Stock Company provides platform technology solutions and advanced products based on Big Data and Artificial Intelligence. With a staff of professors, doctors, and global technology experts, VinBigData is currently developing and deploying products such as ViVi virtual assistant, VinBase the comprehensive multi-cognitive artificial intelligence ecosystem, Vizone the ecosystem of smart image analysis solutions, VinDr the medical image digitization platform,..."
},
{
"role": "assistant",
"content": "I've read this text."
},
{
"role": "user",
"content": "What describes products in the text?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|system|>
# A virtual assistant answers questions from a user based on the provided text.</s>
# <|user|>
# Text: VinBigData Joint Stock Company provides platform technology solutions and advanced products based on Big Data and Artificial Intelligence. With a staff of professors, doctors, and global technology experts, VinBigData is currently developing and deploying products such as ViVi virtual assistant, VinBase the comprehensive multi-cognitive artificial intelligence ecosystem, Vizone the ecosystem of smart image analysis solutions, VinDr the medical image digitization platform,...</s>
# <|assistant|>
# I've read this text.</s>
# <|user|>
# What describes products in the text?</s>
# <|assistant|>
# ["ViVi", "VinBase", "Vizone", "VinDr"]
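The conversation pattern above (text turn, acknowledgment, entity-type question) and the JSON-list answer can be wrapped in two small helpers. This is a sketch based on the example output; the function names are illustrative, not part of any library, and the parser assumes the model emits a JSON list of strings as shown above:

```python
import json

def build_ner_messages(text, entity_type):
    """Build the Universal-NER conversation turns for one text / entity type,
    mirroring the message structure in the example above."""
    return [
        {"role": "system",
         "content": "A virtual assistant answers questions from a user based on the provided text."},
        {"role": "user", "content": f"Text: {text}"},
        {"role": "assistant", "content": "I've read this text."},
        {"role": "user", "content": f"What describes {entity_type} in the text?"},
    ]

def parse_entities(answer):
    """Parse the model's answer, which the example shows as a JSON list."""
    try:
        entities = json.loads(answer)
    except json.JSONDecodeError:
        return []  # the model may occasionally emit non-JSON text
    return entities if isinstance(entities, list) else []

messages = build_ner_messages("VinBigData develops ViVi and VinDr.", "products")
print(parse_entities('["ViVi", "VinBase", "Vizone", "VinDr"]'))
# -> ['ViVi', 'VinBase', 'Vizone', 'VinDr']
```

The `messages` list can be passed to `pipe.tokenizer.apply_chat_template` exactly as in the example above.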
This model and its associated data are released under the CC BY-NC 4.0 license and are intended primarily for research purposes.