Instructions to use SRDdev/ScriptForge-small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SRDdev/ScriptForge-small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SRDdev/ScriptForge-small")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptForge-small")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptForge-small")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SRDdev/ScriptForge-small with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SRDdev/ScriptForge-small"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SRDdev/ScriptForge-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/SRDdev/ScriptForge-small
```
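The curl call above can also be issued from Python with only the standard library; a minimal sketch, assuming the default `vllm serve` port (8000) shown above (`build_completion_request` is an illustrative helper name, not part of any API):

```python
import json
import urllib.request

def build_completion_request(prompt,
                             model="SRDdev/ScriptForge-small",
                             url="http://localhost:8000/v1/completions"):
    # Build the same JSON body the curl example sends to the
    # OpenAI-compatible /v1/completions endpoint.
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("Once upon a time,")
# With the server running: json.load(urllib.request.urlopen(req))
```

Any OpenAI-compatible client (e.g. the `openai` package pointed at `http://localhost:8000/v1`) works equally well.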
- SGLang
How to use SRDdev/ScriptForge-small with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SRDdev/ScriptForge-small" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SRDdev/ScriptForge-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "SRDdev/ScriptForge-small" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SRDdev/ScriptForge-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use SRDdev/ScriptForge-small with Docker Model Runner:
```shell
docker model run hf.co/SRDdev/ScriptForge-small
```
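The vLLM and SGLang servers above both return the OpenAI-compatible completions schema, so responses can be parsed the same way. A minimal sketch; the `raw` body here is illustrative sample data, not real model output:

```python
import json

# Illustrative /v1/completions response body. A real server fills in
# its own id, model, finish reason, and usage counts.
raw = """
{
  "id": "cmpl-example",
  "object": "text_completion",
  "model": "SRDdev/ScriptForge-small",
  "choices": [
    {"index": 0, "text": " there was a creator who...", "finish_reason": "length"}
  ]
}
"""

response = json.loads(raw)
# The generated continuation lives in choices[0].text.
generated = response["choices"][0]["text"]
```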
tags:
- text-generation
---

# ScriptForge-small

## Model description

ScriptForge-small is a language model trained on a dataset of 100 YouTube videos spanning a range of content domains.

ScriptForge-small is a causal language transformer resembling the GPT-2 architecture. As a causal language model, it predicts the probability of a sequence of words based on the preceding words in the sequence: it generates a probability distribution over the next word given the previous words, without incorporating future words.
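The next-word distribution described above can be illustrated with a toy example: the model assigns a score (logit) to every candidate next token, and softmax turns those scores into probabilities. The tokens and logits below are made up for illustration:

```python
import math

# Made-up logits for three candidate next tokens.
logits = {"video": 2.0, "script": 1.5, "banana": -1.0}

# Softmax: exponentiate each score and normalize so they sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The highest-logit token gets the highest probability.
best = max(probs, key=probs.get)
```

At generation time, the model repeats this step token by token, sampling (or greedily picking) from the distribution and appending the result to the context.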
The goal of ScriptForge-small is to generate scripts for YouTube videos that are coherent, informative, and engaging. This can be useful for content creators who are looking for inspiration or who want to automate the process of writing video scripts.

To use ScriptForge-small, users can provide a prompt or a starting sentence, and the model will generate a sequence of words that follows the context and style of the training data.

Models

- [Script_GPT](https://huggingface.co/SRDdev/ScriptForge): AI content model
- [ScriptGPT-small](https://huggingface.co/SRDdev/ScriptForge-small): generalized content model

More models are coming soon...

## Intended uses

The intended uses of ScriptForge-small include generating scripts for videos, providing inspiration for content creators, and automating the process of writing video scripts.

## How to use

You can use this model directly with a pipeline for text generation.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptForge-small")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptForge-small")
```

2. __Pipeline__