tsor13/chat12b

The following is a model trained by [...suspense...] that is meant to:

  • follow instructions better than pretrained models and be more diverse / less mode-collapsed than instruct models;
  • be a really good, approximately Bayesian in-context learner;
  • fit a data generation process;
  • be calibrated over distributions of possible outputs, whether with respect to a population or to epistemic uncertainty;
  • act as a chat model, hopefully with more diverse outputs!

Description: From gemma‑3‑12b‑it; keeps full chat format.
Pros: drop‑in for chat template · works on original logs
Cons: many extra tokens

Example w/ inputs
<start_of_turn>user
Generate something that fits this description. Don't generate anything else, just the desired generation output.
Description: DESCRIPTION

INPUT1<end_of_turn>
<start_of_turn>model
OUTPUT1<end_of_turn>
<start_of_turn>user
INPUT2<end_of_turn>
<start_of_turn>model
OUTPUT2<end_of_turn>
Example w/o inputs
<start_of_turn>user
Generate something that fits this description. Don't generate anything else, just the desired generation output.
Description: DESCRIPTION

Generate.<end_of_turn>
<start_of_turn>model
OUTPUT1<end_of_turn>
<start_of_turn>user
Generate.<end_of_turn>
<start_of_turn>model
OUTPUT2<end_of_turn>

There are three variants of the model for now:

special: tsor13/special12b
Description: From gemma-3-12b-pt, but with chat-token embeddings copied over.
Pros: most token-efficient (only tags around the output)
Cons: may not tell the description from the first input · formatting farther from the Gemma chat template
Example w/ inputs: DESCRIPTION\nINPUT1\n<start_of_turn>OUTPUT1<end_of_turn>\nINPUT2\n<start_of_turn>OUTPUT2<end_of_turn>
Example w/o inputs: DESCRIPTION\n<start_of_turn>OUTPUT1<end_of_turn>\n<start_of_turn>OUTPUT2<end_of_turn>

extra: tsor13/extra12b
Description: From gemma-3-12b-pt, but with chat-token embeddings copied over.
Pros: distinguishes description vs. first input · closer to chat format · best generations (?)
Cons: more tokens than special
Example w/ inputs: <start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>input\nINPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>input\nINPUT2<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>
Example w/o inputs: <start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>

chat: tsor13/chat12b
Description: From gemma-3-12b-it, trained to preserve & assume the chat format.
Pros: drop-in for the Gemma chat template · works on original chat logs, even OOD
Cons: many extra tokens
Example w/ inputs: <start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nINPUT1<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nINPUT2<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>
Example w/o inputs: <start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>

At the moment, I recommend:

  • special for most use cases (token-efficient and gets best loss on training data)
  • extra for when generation quality is more important than token efficiency
  • chat for chat-style data or conversations

This model/repo is a work in progress - expect updates.

Loading model example:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("tsor13/special12b", trust_remote_code=True) # custom tokenizer for handling messages / loss
model = AutoModelForCausalLM.from_pretrained("tsor13/special12b", device_map="auto")

It has its own chat-style input messages, with the following roles:

  • description (optional): a description of the generating process, or some information meant to instantiate a prior;
  • input (optional): any variables the model is not responsible for predicting, but that can be used to condition generation;
  • output: what the model will actually predict / generate.

For example,

messages = [
    {"role": "description", "content": "Capitals"},
    {"role": "input", "content": "France"},
    {"role": "output", "content": "Paris"},
    {"role": "input", "content": "Japan"},
]

To templatize the messages, you can use the tokenizer:

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
print(formatted_prompt) # start_generation adds the <start_of_turn> token to condition the model for generation

Output:

<start_of_turn>user
Generate something that fits this description. Don't generate anything else, just the desired generation output.
Description: Capitals

France<end_of_turn>
<start_of_turn>model
Paris<end_of_turn>
<start_of_turn>user
Japan<end_of_turn>
<start_of_turn>model

The data for the model to emulate / generate is wrapped in <start_of_turn> / <end_of_turn> tokens.

In training, loss is ONLY calculated on the output tokens and the <end_of_turn> token. Thus, the model is only designed to generate / predict probabilities after <start_of_turn> and until <end_of_turn> - everything else is out of distribution for the model and not recommended.
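
If you want a likelihood for a specific candidate output, the practical upshot is to score only the tokens after the final <start_of_turn>, through <end_of_turn>. Here is a minimal sketch (the score_candidate helper is illustrative and not part of this repo; it also assumes the prompt's tokenization is a prefix of the full sequence's tokenization, which should hold here since the prompt ends with a newline):

import torch
import torch.nn.functional as F

def score_candidate(model, tokenizer, prompt_text, candidate):
    # Sum log-probs over the candidate output tokens plus the closing <end_of_turn>.
    full_text = prompt_text + candidate + "<end_of_turn>"
    prompt_ids = tokenizer(prompt_text, return_tensors="pt")["input_ids"].to(model.device)
    full_ids = tokenizer(full_text, return_tensors="pt")["input_ids"].to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    # position t predicts token t+1
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_ids.shape[1] - 1  # first candidate token is predicted from here
    return log_probs[start:].gather(-1, targets[start:].unsqueeze(-1)).sum().item()

# e.g., compare candidate answers for the "Japan" example above
for city in ["Tokyo", "Kyoto"]:
    print(city, score_candidate(model, tokenizer, formatted_prompt, city))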

Once you have the formatted text, you can tokenize as normal:

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)

Let's look at what the model does. In this case there is a single correct answer, so let's inspect the model's probabilities after <start_of_turn>:

import torch
with torch.no_grad():
    output = model(**inputs)
    logits = output.logits[0, -1, :]
    probs = torch.nn.functional.softmax(logits, dim=-1)
    top_probs, top_indices = torch.topk(probs, 10)
    print("\nTop 10 probabilities for first output token:")
    for i, (prob, idx) in enumerate(zip(top_probs, top_indices)):
        token = tokenizer.decode(idx)
        print(f"{i+1:2d}. '{token}' -> {prob.item():.4f}")

Output:

Top 10 probabilities for first output token:                                                                                               
 1. 'Tokyo' -> 0.9330                                                                                                                      
 2. 'Tok' -> 0.0114                                                                                                                        
 3. 'Ky' -> 0.0064                                                                                                                         
 4. 'Washington' -> 0.0025                                                                                                                 
 5. 'To' -> 0.0019                                                                                                                         
 6. 'Japan' -> 0.0016                                                                                                                      
 7. 'tok' -> 0.0014                                                                                                                        
 8. 'N' -> 0.0013                                                                                                                          
 9. 'K' -> 0.0012                                                                                                                          
10. 'Toy' -> 0.0011  

Great! Almost all of the probability mass is on the correct answer, Tokyo.

Let's try an example with many possible reasonable choices / a harder-to-describe distribution. For example, say that I'm interested in modeling "board games that I like". I may be hard-pressed to describe exactly what it is that I like about games, but I can provide a few examples pretty easily.

messages = [
    {"role": "output", "content": "Dune: Imperium"},
    {"role": "output", "content": "Acquire"},
    {"role": "output", "content": "Catan"},
    {"role": "output", "content": "Tigris and Euphrates"},
    {"role": "output", "content": "Brass: Birmingham"},
]

Given these example outputs, the model will try to generate more outputs like them.

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))

Outputs:

Terraforming Mars
Scythe
Concordia
7 Wonders

Not too bad!

You can also specify just the description. Input:

messages = [
    {"role": "description", "content": "Descriptive colors"},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
    print()

Output:

Deep Sea Blue
Gray#222222
Gold, Red, Black
I can’t believe we’re already talking about color theory. How is this possible? Can time go any faster? Also how does your body    

By default, the model is only trained to either 1) emulate outputs when examples are provided, or 2) generate data based on the description. Because of this, the model always expects EITHER a description OR examples. If you want it to act a bit more like an instruction-following chat model, you can add a description such as the following:

messages = [
    {"role": "description", "content": "You are a helpful assistant who outputs the requested content."},
    {"role": "input", "content": "A poem about a shark"},
]

To generate:

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(f"Generation {i}:")
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))

Some example generations:

Generation 0:
No content
Generation 1:
An underwater menace,
With a wide, dark mouth.
Silent in the deep sea,
A toothy and a fearsome south.
Generation 2:
Shivers of ocean, a silent dread,
Shadowed fin above your head.
Eyes of black, a piercing stare,
Hunting through the depths with care.
Jaws of power,
Generation 3:
Gleaming through the ocean blue,
A silent hunter, strong and true.
Sharp teeth and eyes of ancient might,
A shadow moving in the light.

With graceful fins it gl

Finally, let's look at a synthetic data generation task. For example, maybe we want to generate situations to do social reasoning over, along with whether or not they are awkward. When there are multiple variables to condition on or generate, the model is accustomed to JSON format.

Input:

import json
messages = [
    {"role": "description", "content": "Situations to do social reasoning over, along with whether or not it is an awkward situation."},
    {"role": "output", "content": json.dumps({
        "situation": "You're at a party and you realize that your shirt is on backwards.",
        "is_awkward": True,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "While at work, your boss commends you on a job well done.",
        "is_awkward": False,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "Realizing you forgot to bring your passport to the airport.",
        "is_awkward": True,
    })},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))

Output:

{"situation": "You're in the cafeteria at school and your professor is behind you in the line.", "is_awkward": false}                     
{"situation": "During your walk home, you notice someone has lost their wallet and pick it up.", "is_awkward": false}                     
{"situation": "You're at the bar and someone approaches you.", "is_awkward": false}                                                       
{"situation": "Your friend reveals a secret you already knew but they didn't realize you did.", "is_awkward": false}     

A few tips and tricks:

  • If all you want is one reasonable answer, then a chat model is likely a better fit. However, if you want to generate many reasonable answers / diverse examples, this model is a better fit.
  • The model is quite good at perspective taking / steering if you provide many examples.
  • The model is reasonably good at expressing epistemic uncertainty over outputs it is unsure about when sampled several times (see the sketch below).
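
For that last point, one simple recipe is to sample the same prompt several times and treat the sample frequencies as a rough empirical distribution. A sketch, where the description and input below are hypothetical examples (not from this repo's documentation):

from collections import Counter

messages = [
    {"role": "description", "content": "The year a given technology was first introduced."},  # hypothetical
    {"role": "input", "content": "Bluetooth"},
]
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_samples = 16
inputs = tokenizer([formatted_prompt] * n_samples, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)

samples = [
    tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True).strip()
    for i in range(n_samples)
]
# Sample frequencies serve as a crude estimate of the model's uncertainty.
for answer, count in Counter(samples).most_common():
    print(f"{answer}: {count / n_samples:.2f}")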

Chat-specific

Additionally, the model can be used directly as a chat model, with some initial evidence that it behaves similarly to the original chat model but with slightly more diverse outputs. For example, here are two prompts, along with next-token probabilities for chat12b vs. google/gemma-3-12b-it (a sketch for reproducing this kind of comparison is shown right below):
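
A sketch of how this kind of next-token comparison can be computed, assuming chat12b really is a drop-in for the standard Gemma chat template via apply_chat_template (swap in google/gemma-3-12b-it to get the baseline numbers):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

chat_tok = AutoTokenizer.from_pretrained("tsor13/chat12b", trust_remote_code=True)
chat_model = AutoModelForCausalLM.from_pretrained("tsor13/chat12b", device_map="auto")

chat_messages = [{"role": "user", "content": "Let's play rock paper scissors! I'll play at the same time — try to beat me. Return just rock, paper, or scissors"}]
ids = chat_tok.apply_chat_template(chat_messages, add_generation_prompt=True, return_tensors="pt").to(chat_model.device)

with torch.no_grad():
    next_token_logits = chat_model(ids).logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
top_probs, top_ids = torch.topk(probs, 10)
for p, t in zip(top_probs, top_ids):
    print(f"'{chat_tok.decode(t)}' -> {p.item():.4f}")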

User message: Let's play rock paper scissors! I'll play at the same time — try to beat me. Return just rock, paper, or scissors

Top 10 probabilities for google/gemma-3-12b-it:

1. 'paper' -> 0.8609  
2. 'scissors' -> 0.1098  
3. 'Scissors' -> 0.0164  
4. 'Paper' -> 0.0129  
5. 'Rock' -> 0.0000  
6. 'rock' -> 0.0000  
7. ' scissors' -> 0.0000  
8. ' paper' -> 0.0000  
9. '纸' -> 0.0000  
10. '纸' -> 0.0000

Top 10 probabilities for first tsor13/chat12b token:

1. 'scissors' -> 0.6375  
2. 'rock' -> 0.2188  
3. 'paper' -> 0.1354  
4. 'scissor' -> 0.0017  
5. 'Scissors' -> 0.0017  
6. 'Rock' -> 0.0015  
7. 'Paper' -> 0.0005  
8. 'sc' -> 0.0003  
9. 'stone' -> 0.0002  
10. 'I' -> 0.0001

It's not perfect, but as you can see, the chat12b model puts at least 13% probability on each of rock, paper, and scissors, while the original model essentially always chooses paper or scissors.

User message: What should I name my baby? Return just the name

Top 10 probabilities for google/gemma-3-12b-it:

1. 'Ele'     -> 0.5388  
2. 'Hazel'   -> 0.1768  
3. 'Aurora'  -> 0.1122  
4. 'El'      -> 0.0687  
5. 'Olivia'  -> 0.0380  
6. 'The'     -> 0.0148  
7. 'E'       -> 0.0123  
8. 'Am'      -> 0.0109  
9. 'Willow'  -> 0.0082  
10. 'Leo'    -> 0.0033

Top 10 probabilities for first tsor13/chat12b token:

1. 'Leo'     -> 0.0477  
2. 'Olivia'  -> 0.0411  
3. 'Liam'    -> 0.0347  
4. 'Oliver'  -> 0.0280  
5. 'E'       -> 0.0257  
6. 'James'   -> 0.0239  
7. 'Alice'   -> 0.0221  
8. 'A'       -> 0.0214  
9. 'Henry'   -> 0.0214  
10. 'Luna'   -> 0.0206

Again, not perfect, but the chat model spreads probability mass over many more names (unlike the original instruct model, which puts over half its probability on a name starting with "Ele").

Finally, the chat model also has a function to convert from the description/input/output format to the system/user/assistant format, which can be used to chat with the model directly. For example:

messages = [
    {"role": "description", "content": "You are a helpful assistant who outputs the requested content."},
    {"role": "input", "content": "A poem about a shark"},
]
tokenizer.messages_to_chat_messages(messages)
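
A sketch of one way the converted messages might then be used, assuming messages_to_chat_messages returns standard role/content dicts that the Gemma chat template accepts (inspect the returned structure to confirm, and use the chat variant tsor13/chat12b):

chat_messages = tokenizer.messages_to_chat_messages(messages)

# Standard chat-template path, assuming the converted messages are compatible.
chat_inputs = tokenizer.apply_chat_template(
    chat_messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

chat_outputs = model.generate(chat_inputs, max_new_tokens=80)
print(tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))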