Instructions for using krplt/GPT-Sponge with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use krplt/GPT-Sponge with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="krplt/GPT-Sponge")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("krplt/GPT-Sponge")
model = AutoModelForCausalLM.from_pretrained("krplt/GPT-Sponge")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use krplt/GPT-Sponge with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "krplt/GPT-Sponge"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "krplt/GPT-Sponge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/krplt/GPT-Sponge
```
- SGLang
How to use krplt/GPT-Sponge with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "krplt/GPT-Sponge" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "krplt/GPT-Sponge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "krplt/GPT-Sponge" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "krplt/GPT-Sponge",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use krplt/GPT-Sponge with Docker Model Runner:
```shell
docker model run hf.co/krplt/GPT-Sponge
```
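Both the vLLM and SGLang servers above expose an OpenAI-compatible `/v1/completions` endpoint, so the same request body works against either. As a minimal sketch, the payload from the curl examples can be built and sent from Python with the standard library (the network call is commented out because it assumes a server is already running on the vLLM port, 8000; use 30000 for SGLang):

```python
import json

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    # Mirrors the JSON body used in the curl examples above.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request("krplt/GPT-Sponge", "Once upon a time,")
body = json.dumps(payload)

# To actually send the request (requires a running server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```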
🧽 GPT-sponge
GPT-sponge is a language model based on GPT-Neo 1.3B and fine-tuned on SpongeBob SquarePants transcripts. It can generate dialogue and scenarios in the style of the original cartoon. For more information about how text generation works, have a look at How 🤗 Transformers solve tasks.
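The serving examples above set `temperature: 0.5`. As a toy illustration of what temperature does during text generation (this is a pedagogical sketch, not the model's actual implementation): the next-token logits are divided by the temperature before the softmax, so lower values sharpen the distribution and higher values flatten it.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/temperature, then normalize with a softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Lower temperature concentrates probability on the top token;
# higher temperature spreads it out.
sharp = softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.5)
flat = softmax_with_temperature([2.0, 1.0, 0.1], temperature=2.0)
```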
🤗 Model Details
- Model Name: GPT-sponge 🧽
- Base Model: GPT-Neo 1.3B 🤗
- Training Steps: 10,000 (for the current v2 model)
- Training Time: ~7 hours on 2x NVIDIA A40 (for the current v2 model)
Example Outputs
Prompt: "Patrick:"

Patrick: Oh, great! Who are they? [it's revealed that the two were phoning each other with jellyfish and Patrick was holding up a bunch of phone books].

Prompt: "The episode starts"

The episode starts with a view of the houses of Patrick, Squidward, and SpongeBob. The screen zooms in on to show a large pile of sand on the lawn of all three of them. Sandy, who is sitting on top of her rock, is covered in the sand and has a banner of letters that reads "Sandy Wanna blow some bubbles? Only 25 cents."
⚠️ Disclaimer
This model is intended for entertainment purposes only and should not be used for any commercial or business purposes. The output of the model may contain errors or offensive content, and I am not responsible for any consequences arising from the use of the model.