PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression
Paper: arXiv:2405.14852
How to use justheuristic/test-1bit with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="justheuristic/test-1bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("justheuristic/test-1bit")
model = AutoModelForCausalLM.from_pretrained("justheuristic/test-1bit")
```
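For a quick end-to-end check, here is a minimal generation sketch. It assumes the AQLM inference kernels are installed (e.g., `pip install aqlm[gpu,cpu]`) and that a GPU is available; the prompt and generation settings are illustrative, not part of the original card.

```python
# Minimal generation sketch (assumes AQLM inference kernels and `accelerate` are installed).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "justheuristic/test-1bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # keep the dtype the checkpoint was saved with
    device_map="auto",    # place the quantized layers on the available GPU(s)
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```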
How to use justheuristic/test-1bit with vLLM:

```shell
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "justheuristic/test-1bit"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "justheuristic/test-1bit",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
```
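Since the vLLM server exposes an OpenAI-compatible API, the same request can also be sent from Python. The snippet below is a sketch using the `openai` client; the package, base URL, and placeholder API key are assumptions about a default local deployment rather than part of this model card.

```python
# Query the local vLLM server through its OpenAI-compatible endpoint.
# Assumes `pip install openai`; the api_key is a dummy value for a local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="justheuristic/test-1bit",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```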
How to use justheuristic/test-1bit with SGLang:
```shell
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "justheuristic/test-1bit" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "justheuristic/test-1bit",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
```

Alternatively, run the SGLang server with Docker:

```shell
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "justheuristic/test-1bit" \
--host 0.0.0.0 \
--port 30000
```

Then call the server with the same curl request as shown above.
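From Python, the same OpenAI-compatible SGLang endpoint can be called with a plain HTTP request. The sketch below uses the `requests` package and assumes the server started above is listening on port 30000; it is illustrative rather than part of the original card.

```python
# Query the local SGLang server via its OpenAI-compatible REST endpoint.
# Assumes `pip install requests` and a server running on localhost:30000.
import requests

payload = {
    "model": "justheuristic/test-1bit",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```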
How to use justheuristic/test-1bit with Docker Model Runner:
```shell
docker model run hf.co/justheuristic/test-1bit
```
An official quantization of meta-llama/Llama-2-7b using PV-Tuning on top of AQLM.
For this quantization, we used 1 codebook of 16 bits for groups of 8 weights, i.e., about 2 bits per weight.
| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|---|---|---|---|---|
| Llama-2-7b (this) | 1x16 | 5.68 | 2.4 | Link |
| Llama-2-7b | 2x8 | 5.90 | 2.2 | Link |
| Llama-2-13b | 1x16 | 5.05 | 4.1 | Link |
| Llama-2-70b | 1x16 | 3.78 | 18.8 | Link |
The 1x16g16 (1-bit) models are on the way and will be released as soon as we update the inference library with the corresponding kernels.
To learn more about inference, as well as how to quantize models yourself, please refer to the official GitHub repo. The original code for PV-Tuning can be found in the AQLM repository's pv-tuning branch.