Instructions for using JetBrains/CodeLlama-7B-KStack with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use JetBrains/CodeLlama-7B-KStack with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="JetBrains/CodeLlama-7B-KStack")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("JetBrains/CodeLlama-7B-KStack")
model = AutoModelForCausalLM.from_pretrained("JetBrains/CodeLlama-7B-KStack")
```

- Notebooks
- Google Colab
- Kaggle
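The Transformers snippet above can be wrapped in a small helper for code completion. This is a minimal sketch, not part of the model card: `build_prompt` and `generate_kotlin` are names we made up, and loading the 7B checkpoint downloads roughly 13 GB of weights, so a GPU (or `device_map="auto"`) is recommended.

```python
MODEL_ID = "JetBrains/CodeLlama-7B-KStack"

def build_prompt(signature: str) -> str:
    # KStack is a Kotlin dataset, so a Kotlin function signature
    # makes a natural completion prompt; normalize trailing whitespace.
    return signature.rstrip() + "\n"

def generate_kotlin(signature: str, max_new_tokens: int = 64) -> str:
    # Imported lazily: transformers (and the large checkpoint) are
    # only needed when generation is actually requested.
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID)
    out = pipe(build_prompt(signature), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

Calling `generate_kotlin("fun add(a: Int, b: Int): Int {")` would return the prompt followed by the model's completion.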
- Local Apps
- vLLM
How to use JetBrains/CodeLlama-7B-KStack with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "JetBrains/CodeLlama-7B-KStack"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JetBrains/CodeLlama-7B-KStack",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:
```shell
docker model run hf.co/JetBrains/CodeLlama-7B-KStack
```
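Once the server is running, the curl call above can also be made from Python using only the standard library. A minimal sketch (the helper names are ours, not part of vLLM); because SGLang exposes the same OpenAI-compatible API, the identical code works against the SGLang server by changing the port to 30000.

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/completions"

def build_payload(prompt: str, max_tokens: int = 512, temperature: float = 0.5) -> dict:
    # Mirrors the JSON body of the curl example above.
    return {
        "model": "JetBrains/CodeLlama-7B-KStack",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str, url: str = VLLM_URL) -> str:
    # Requires a running `vllm serve` instance on localhost:8000.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```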
- SGLang
How to use JetBrains/CodeLlama-7B-KStack with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "JetBrains/CodeLlama-7B-KStack" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JetBrains/CodeLlama-7B-KStack",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "JetBrains/CodeLlama-7B-KStack" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JetBrains/CodeLlama-7B-KStack",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use JetBrains/CodeLlama-7B-KStack with Docker Model Runner:
```shell
docker model run hf.co/JetBrains/CodeLlama-7B-KStack
```
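Docker Model Runner also exposes an OpenAI-compatible API. The sketch below assumes host TCP access has been enabled (e.g. `docker desktop enable model-runner --tcp 12434`) and that the chat endpoint lives at `http://localhost:12434/engines/v1/chat/completions`; treat the port and path as assumptions and check the Docker documentation for your release. The helper names are ours.

```python
import json
import urllib.request

# Assumed host endpoint once TCP access is enabled (see lead-in above).
DMR_URL = "http://localhost:12434/engines/v1/chat/completions"

def build_chat_payload(user_message: str, max_tokens: int = 512) -> dict:
    # OpenAI-style chat body; the model name matches the
    # `docker model run` reference above.
    return {
        "model": "hf.co/JetBrains/CodeLlama-7B-KStack",
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

def chat(user_message: str, url: str = DMR_URL) -> str:
    # Requires a running Docker Model Runner with TCP access enabled.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_chat_payload(user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note that CodeLlama-7B-KStack is a base completion model, so a chat-style endpoint may wrap the prompt in a chat template; plain-completion prompts as in the vLLM example above are a more natural fit.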
🚩 Report: Ethical issue(s)
Around three months ago, I sent JetBrains a request to remove my data from the KStack dataset and the KStack models. They informed me that they had removed it from the dataset (though I don't see any updates to the KStack and KStack-Clean datasets); however, there has been no update to the model, which was trained on my code.
As such, I'd like to request removal of this model (and quantizations by other users) until it is re-trained on the cleaned dataset.
Dear Martmists!
We apologize for the inconvenience and for causing you worry. We never forgot or missed your request: the updated dataset is already done, and the models are being re-trained literally right now. We will update both the dataset and the models very soon.
Sorry again.
Yours faithfully,
Sergey.