Instructions to use OpenGVLab/InternVL3_5-38B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVL3_5-38B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL3_5-38B", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("OpenGVLab/InternVL3_5-38B", trust_remote_code=True, dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/InternVL3_5-38B with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/InternVL3_5-38B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'

Use Docker
docker model run hf.co/OpenGVLab/InternVL3_5-38B
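The curl request above can also be built from Python. A minimal sketch using only the standard library, which constructs the same OpenAI-compatible payload for the server started by the serve command (only the request object is built here; actually sending it requires the vLLM server to be running on localhost:8000):

```python
import json
import urllib.request

# Build the same chat-completions payload as the curl example above.
payload = {
    "model": "OpenGVLab/InternVL3_5-38B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}

# The request that would go to the running vLLM server:
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)
# response = urllib.request.urlopen(req)  # uncomment with a live server
```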
- SGLang
How to use OpenGVLab/InternVL3_5-38B with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/InternVL3_5-38B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/InternVL3_5-38B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'

- Docker Model Runner
How to use OpenGVLab/InternVL3_5-38B with Docker Model Runner:
docker model run hf.co/OpenGVLab/InternVL3_5-38B
vLLM compatibility
Hi team, thanks for the great work on InternVL3.5!
I noticed the checkpoints use processor_class: "InternVLProcessor", which expects extra special tokens (e.g., start_image_token). Those entries aren't present in tokenizer_config.json, which prevents serving with vLLM. Is there a plan to make the releases vLLM-compatible (e.g., by including these tokens or publishing guidance/workarounds)?
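For reference, a check like the following can show which of the expected entries a downloaded tokenizer_config.json declares. This is a sketch: "start_image_token" is the key named in the report above, while "end_image_token" is an assumed counterpart, not confirmed against the InternVL3.5 release:

```python
import json

# Special-token keys that InternVLProcessor reportedly looks up.
# "start_image_token" comes from the report above; "end_image_token"
# is an assumed sibling entry.
EXPECTED_KEYS = ["start_image_token", "end_image_token"]

def missing_processor_tokens(config):
    """Return the expected special-token keys absent from the config dict."""
    return [k for k in EXPECTED_KEYS if k not in config]

# A tokenizer_config.json without the extra entries reports both as missing:
cfg = json.loads('{"model_max_length": 32768}')
print(missing_processor_tokens(cfg))  # → ['start_image_token', 'end_image_token']
```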
Thanks!
vLLM tool calls not working
vllm serve '/mnt/models/vllm_models/InternVL3_5-38B-AWQ-8bit/' \
  --max_model_len 32000 \
  --tensor-parallel-size 8 \
  --gpu_memory_utilization 0.95 \
  --trust-remote-code \
  --dtype float16 \
  --enable-auto-tool-choice \
  --tool-call-parser internlm
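For context, a tool-call request against a server launched this way follows the OpenAI chat-completions schema. A minimal sketch of the payload shape (the get_weather function and its schema are illustrative inventions, not taken from the report):

```python
import json

# Illustrative tools payload for an OpenAI-compatible /v1/chat/completions
# endpoint; the get_weather function is a made-up example.
payload = {
    "model": "OpenGVLab/InternVL3_5-38B",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

# The body must serialize cleanly to JSON before being POSTed:
body = json.dumps(payload)
print(len(body) > 0)  # → True
```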
Is this issue resolved? I'm also facing it.
Can you provide the version of vLLM you used? I tested it with 0.8.5.post1 and 0.10.1 and found that it works well.
Version: 0.10.1rc2.dev294+gc9c3a7856.cu124
Also, is the HF version of 3.5 not supported in vLLM for llm.generate? The same thing happens with the InternVL3-38B hf checkpoint on the same vLLM version.
I can share the logs if you want
When I tried to serve the model, it did work, because vLLM has multiple fallback strategies for obtaining the chat template. However, cached_get_processor always raises an error due to the missing special tokens, which breaks the cache and therefore runs AutoProcessor.from_pretrained(...) for every request. The service then goes down with HTTP Error 429 caused by the repeated HEAD requests to Hugging Face.
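One possible mitigation, assuming the failure mode described above (a per-request Hub lookup after the processor cache breaks): download the repository once and serve from the local path with Hub access disabled, so no HEAD requests can reach Hugging Face at serving time. A sketch, not a confirmed fix:

```shell
# Download the checkpoint once into a local directory (one-time Hub access):
huggingface-cli download OpenGVLab/InternVL3_5-38B --local-dir ./InternVL3_5-38B

# Serve from the local path; HF_HUB_OFFLINE=1 forbids further Hub requests,
# so a broken processor cache cannot trigger repeated HEAD calls.
HF_HUB_OFFLINE=1 vllm serve ./InternVL3_5-38B --trust-remote-code
```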
Any fresh experience with making tool calling work with vLLM? Thanks!