Instructions for using cloudyu/Mixtral_7Bx2_MoE with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use cloudyu/Mixtral_7Bx2_MoE with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="cloudyu/Mixtral_7Bx2_MoE")

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cloudyu/Mixtral_7Bx2_MoE")
model = AutoModelForCausalLM.from_pretrained("cloudyu/Mixtral_7Bx2_MoE")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use cloudyu/Mixtral_7Bx2_MoE with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip
pip install vllm

# Start the vLLM server
vllm serve "cloudyu/Mixtral_7Bx2_MoE"

# Call the server using curl (OpenAI-compatible API)
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cloudyu/Mixtral_7Bx2_MoE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
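The same OpenAI-compatible completion request can be issued from Python instead of curl. A minimal sketch using only the standard library, assuming the vLLM server from the step above is running locally on port 8000 (the `complete` helper name is illustrative, not part of any API):

```python
import json
from urllib import request

# Same payload as the curl example above
payload = {
    "model": "cloudyu/Mixtral_7Bx2_MoE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(base_url="http://localhost:8000"):
    """POST the completion request and return the generated text."""
    req = request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses put generated text under choices[0]["text"]
    return body["choices"][0]["text"]
```

Because the endpoint is OpenAI-compatible, the official `openai` client can be pointed at the same `base_url` instead.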
- SGLang
How to use cloudyu/Mixtral_7Bx2_MoE with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip
pip install sglang

# Start the SGLang server
python3 -m sglang.launch_server \
  --model-path "cloudyu/Mixtral_7Bx2_MoE" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API)
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cloudyu/Mixtral_7Bx2_MoE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "cloudyu/Mixtral_7Bx2_MoE" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API)
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cloudyu/Mixtral_7Bx2_MoE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use cloudyu/Mixtral_7Bx2_MoE with Docker Model Runner:
```shell
docker model run hf.co/cloudyu/Mixtral_7Bx2_MoE
```
Mixtral MOE 2x7B

MoE of the following models:

Metrics: Average 73.43, ARC 71.25, HellaSwag 87.45
GPU code example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,  # 4-bit quantization; requires the bitsandbytes package
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
CPU example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="cpu",
    local_files_only=False,
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 73.43 |
| AI2 Reasoning Challenge (25-Shot) | 71.25 |
| HellaSwag (10-Shot) | 87.45 |
| MMLU (5-Shot) | 64.98 |
| TruthfulQA (0-shot) | 67.23 |
| Winogrande (5-shot) | 81.22 |
| GSM8k (5-shot) | 68.46 |
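As a sanity check, the leaderboard average is the arithmetic mean of the six benchmark scores in the table:

```python
# Per-benchmark scores from the table above
scores = {
    "ARC (25-shot)": 71.25,
    "HellaSwag (10-shot)": 87.45,
    "MMLU (5-shot)": 64.98,
    "TruthfulQA (0-shot)": 67.23,
    "Winogrande (5-shot)": 81.22,
    "GSM8k (5-shot)": 68.46,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 73.43, matching the reported Avg.
```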
Evaluation results (Open LLM Leaderboard):
- AI2 Reasoning Challenge (25-shot), test set, normalized accuracy: 71.25
- HellaSwag (10-shot), validation set, normalized accuracy: 87.45
- MMLU (5-shot), test set, accuracy: 64.98
- TruthfulQA (0-shot), validation set, mc2: 67.23
- Winogrande (5-shot), validation set, accuracy: 81.22
- GSM8k (5-shot), test set, accuracy: 68.46