We got Qwen 3.5 to count Rs in Strawberry correctly!
Building on Sawtone, we've been testing a different way to feed language into an LLM to build the next generation of multilingual AI.
The usual setup gives the model tokenized text and asks it to perform various linguistic tasks. That works surprisingly well, until it doesn't. Accents disappear. Words get mangled. Internal structure gets blurred away. And the cost of that gets higher once you move into multilingual and lower-resource settings.
So we tried adding a second path.
In addition to the normal text input, the model also receives Sawtone: a byte-level word representation that preserves how a word is written, how it sounds, and how it is structured.
Same LLM. Better interface.
In this proof of concept with Qwen 3.5 0.8B, that pushed our eval from 64% to 88%. The gains showed up exactly where tokenized models usually get shaky: diacritics, character order, exact spelling, and other form-sensitive behavior.
Sawtone itself is tokenizer-free, byte-level, and pre-trained across 507 languages.
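The core of a byte-level word representation can be sketched in a few lines: keep the raw UTF-8 bytes of each word, so diacritics and character order survive exactly instead of being folded into subword tokens. This is a minimal illustration only; the function name is hypothetical, and Sawtone's actual representation also encodes pronunciation and word structure, which this sketch omits.

```python
def byte_level_repr(word: str) -> list[int]:
    # UTF-8 bytes preserve exact spelling, including diacritics,
    # where subword tokenizers may normalize or split them apart.
    return list(word.encode("utf-8"))

# "café": the accented é survives as its own two bytes (0xC3 0xA9),
# so the model can still see exactly how the word is written.
print(byte_level_repr("café"))
```

Because every Unicode string maps to a unique byte sequence, this interface never loses spelling information, regardless of language or script.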
We're releasing Darwin-4B-David, the first second-generation model in the Darwin Opus family. By evolving an already-evolved model, it achieves 85.0% on GPQA Diamond, surpassing its 58.6% original ancestor and even gemma-4-31B (84.3%), with just 4.5B parameters.
Second-Generation Evolution
Most merges start from a base model and produce a single offspring. Darwin-4B-David breaks this pattern. The Father (Darwin-4B-Opus) was already evolved from gemma-4-E4B-it with Claude Opus reasoning distillation: a Gen-1 model. The Mother (DavidAU's DECKARD-Expresso-Universe) brings Unsloth deep tuning across 5 in-house datasets with thinking mode by default. Crossbreeding these two produced the first Gen-2 Darwin model.
Darwin V6's Model MRI scanned both parents across all 42 layers, assigning an independent optimal ratio per layer. The Mother's creativity and Korean-language hotspot (Layers 22-25, weight 0.95) was maximally absorbed, while the Father's reasoning core (Layers 30-40, weight 0.48) was preserved. This is "Merge = Evolve" applied recursively: evolution of evolution.
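The per-layer merge described above can be sketched as linear interpolation with an independent ratio for each layer. This is a toy stand-in for the actual MRI-guided DARE-TIES procedure: the layer names, shapes, and ratios below are illustrative only, not Darwin V6's real values.

```python
import numpy as np

def merge_layers(father: dict, mother: dict, ratios: dict) -> dict:
    """Blend two models layer by layer with independent ratios.

    ratio r is the fraction taken from the mother for that layer
    (r = 0 keeps the father's weights unchanged).
    """
    return {
        name: (1 - ratios[name]) * father[name] + ratios[name] * mother[name]
        for name in father
    }

# Toy example with two "layers"; the real models have 42.
father = {"layer22": np.ones(3), "layer30": np.ones(3)}
mother = {"layer22": np.zeros(3), "layer30": np.zeros(3)}
ratios = {"layer22": 0.95, "layer30": 0.48}  # illustrative weights
merged = merge_layers(father, mother, ratios)
```

With ratio 0.95, layer22 is dominated by the mother's weights; with 0.48, layer30 stays closer to an even blend that leans toward the father.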
Benchmarks
Darwin-4B-David scores 85.0% on GPQA Diamond (+26.4 percentage points over the original 58.6%), evaluated generatively with maj@8 (8 generations per question, majority vote), Epoch AI prompt format, thinking mode enabled, on 50 sampled questions. On ARC-Challenge (25-shot, loglikelihood), both score 64.93%, which is expected: loglikelihood scoring doesn't capture thinking-mode reasoning differences.
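The maj@8 metric can be sketched as: sample eight answers per question and keep the most common one. This is a generic majority-vote implementation, not the actual Epoch AI evaluation harness.

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most frequent answer among sampled generations.

    Ties are broken by first appearance, matching Counter's ordering.
    """
    return Counter(answers).most_common(1)[0][0]

# Eight generations for one question; "B" wins with 5 of 8 votes.
samples = ["B", "B", "C", "B", "A", "B", "B", "C"]
print(majority_vote(samples))  # -> B
```

Majority voting rewards consistency across samples, which is why it can surface reasoning gains that single-sample loglikelihood scoring misses.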
Why This Matters
gemma-4-31B (30.7B) scores 84.3%. Darwin-4B-David surpasses it at 1/7th the size: no training, no RL, just 45 minutes of MRI-guided DARE-TIES on one H100. The name "David" honors the Mother's creator, DavidAU, and evokes David vs. Goliath.
Turn any 3D model into vintage comic art, right in your browser
No Photoshop. No plugins. No server. Just your model image or 3D file (GLB, OBJ...), a canvas, and four styles that hit different:
- Halftone: offset dots, newsprint feel, classic retro
- Comic: cel-shading, ink outlines, misregistration grain
- Kraft: raw paper, zine energy, underground press vibes
- Anaglyph: red/cyan shift, retro sci-fi, put your glasses on
Drop a GLB or OBJ. Orbit around it. Watch the filter breathe on the geometry in real time. Dial in dot size, paper color, ink intensity, and contrast, then export as PNG, 360° GIF, sprite sheet, or WebM video.
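The core idea behind a halftone filter is simple: divide the image into cells and give each cell a dot whose radius grows with the cell's average darkness. A minimal sketch of that sampling step (the function name and parameters are illustrative; the actual tool runs on a canvas in the browser, not in Python):

```python
import numpy as np

def halftone_radii(gray: np.ndarray, cell: int = 8, max_r: float = 4.0) -> np.ndarray:
    """Map each cell of a grayscale image (0-255) to a dot radius.

    Darker cells get bigger dots, mimicking newsprint halftoning.
    """
    h, w = gray.shape
    h, w = h - h % cell, w - w % cell  # trim to whole cells
    blocks = gray[:h, :w].reshape(h // cell, cell, w // cell, cell)
    darkness = 1.0 - blocks.mean(axis=(1, 3)) / 255.0
    return darkness * max_r

# A fully black 16x16 image yields a 2x2 grid of maximum-size dots.
radii = halftone_radii(np.zeros((16, 16)))
```

Rendering is then just drawing one filled circle per cell at the computed radius, optionally on an offset grid for the classic newsprint look.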
I built this for the people who love 3D but miss the warmth of print.
For designers who grew up on comics. For artists who think PBR is overrated. For anyone who ever asked "what if my Blender model looked like it came from 1967?" It's free. It's instant. It runs entirely in your browser.