# SemDisDiffAE β€” Technical Report

**SemDisDiffAE** (**Sem**antically **Dis**entangled **Diff**usion **A**uto**E**ncoder)
is a fast diffusion autoencoder with a 128-channel spatial bottleneck built on
FCDM (Fully Convolutional Diffusion Model) blocks. The encoder uses a
VP-parameterized diagonal Gaussian posterior (learned log-SNR output head),
and the decoder reconstructs via single-step VP diffusion.

This checkpoint is trained with DINOv2 semantic alignment and variance
expansion regularization. The name is a nod to DRA (Page et al., 2026) whose
disentangled representation alignment approach we largely follow here.

## Contents

1. [Architecture](#1-architecture)
   - [FCDM Block](#11-fcdm-block) · [Encoder](#12-encoder) · [Decoder](#13-decoder) · [AdaLN](#14-adaln-shared-base--low-rank-deltas) · [PDG](#15-path-drop-guidance-pdg)
2. [Decoder VP Diffusion Parameterization](#2-decoder-vp-diffusion-parameterization)
   - [Forward Process](#21-forward-process) · [Log SNR](#22-log-signal-to-noise-ratio) · [Schedule](#23-cosine-interpolated-schedule) · [X-Prediction](#24-x-prediction-objective) · [Sampling](#25-sampling)
3. [Stochastic Posterior](#3-stochastic-posterior)
   - [VP Log-SNR Parameterization](#31-vp-log-snr-parameterization) · [Variance Expansion Loss](#32-variance-expansion-loss) · [Posterior Mode](#33-posterior-mode-for-inference)
4. [Semantic Alignment](#4-semantic-alignment)
5. [Design Choices](#5-design-choices)
   - [Convolutional Architecture](#51-convolutional-architecture) · [Single-Stride Encoder](#52-single-stride-encoder-with-final-bottleneck) · [Diffusion Decoding](#53-diffusion-decoding) · [Skip Connection and PDG](#54-skip-connection-and-path-drop-guidance)
6. [Training](#6-training)
   - [Loss Functions](#61-loss-functions) · [Optimizer](#62-optimizer-and-hyperparameters) · [Data](#63-data)
7. [Model Configuration](#7-model-configuration)
8. [Inference](#8-inference)
9. [Results](#9-results)

**References:**

- **FCDM** — Kwon et al., *Reviving ConvNeXt for Efficient Convolutional Diffusion Models*, [arXiv:2603.09408](https://arxiv.org/abs/2603.09408), 2026.
- **SiD2** — Hoogeboom et al., *Simpler Diffusion (SiD2): 1.5 FID on ImageNet512 with pixel-space diffusion*, [arXiv:2410.19324](https://arxiv.org/abs/2410.19324), ICLR 2025.
- **DiTo** — Yin et al., *Diffusion Autoencoders are Scalable Image Tokenizers*, [arXiv:2501.18593](https://arxiv.org/abs/2501.18593), 2025.
- **DiCo** — Ai et al., *DiCo: Revitalizing ConvNets for Scalable and Efficient Diffusion Modeling*, [arXiv:2505.11196](https://arxiv.org/abs/2505.11196), 2025.
- **ConvNeXt V2** — Woo et al., *ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders*, [arXiv:2301.00808](https://arxiv.org/abs/2301.00808), CVPR 2023.
- **Z-image** — Cai et al., *Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer*, [arXiv:2511.22699](https://arxiv.org/abs/2511.22699), 2025.
- **SPRINT** — Park et al., *Sprint: Sparse-Dense Residual Fusion for Efficient Diffusion Transformers*, [arXiv:2510.21986](https://arxiv.org/abs/2510.21986), 2025.
- **DINOv2** — Oquab et al., *DINOv2: Learning Robust Visual Features without Supervision*, [arXiv:2304.07193](https://arxiv.org/abs/2304.07193), 2023. Register variant: Darcet et al., *Vision Transformers Need Registers*, [arXiv:2309.16588](https://arxiv.org/abs/2309.16588), ICLR 2024.
- **iREPA** — Singh et al., *What matters for Representation Alignment: Global Information or Spatial Structure?*, [arXiv:2512.10794](https://arxiv.org/abs/2512.10794), 2025.
- **DRA** — Page et al., *Boosting Latent Diffusion Models via Disentangled Representation Alignment*, [arXiv:2601.05823](https://arxiv.org/abs/2601.05823), 2026.
- **VEL** — Li et al., *Taming Sampling Perturbations with Variance Expansion Loss for Latent Diffusion Models*, [arXiv:2603.21085](https://arxiv.org/abs/2603.21085), 2026.
- **iRDiffAE** — [data-archetype/irdiffae-v1](https://huggingface.co/data-archetype/irdiffae-v1) — predecessor model using DiCo blocks.

---

## 1. Architecture

### 1.1 FCDM Block

SemDisDiffAE uses **FCDM blocks** — ConvNeXt-style convolutional blocks
adapted for diffusion models (Kwon et al., 2026). Each block follows a single
unified residual path:

```
x ──► DWConv 7×7 ──► RMSNorm ──► [Scale] ──► Conv 1×1 ──► GELU ──► GRN ──► Conv 1×1 ──► [Gate] ──► + ──► out
│                                                                                                   ▲
└───────────────────────────────────────────────────────────────────────────────────────────────────┘
```

This differs from DiCo blocks (used in the predecessor
[iRDiffAE](https://huggingface.co/data-archetype/irdiffae-v1)) which use two
separate residual paths (conv + MLP) with Compact Channel Attention (CCA).
FCDM consolidates into one path, replacing CCA with Global Response
Normalization (GRN).

Key components (a minimal PyTorch sketch of the full block follows this list):

- **Depthwise convolution** (7×7, groups=channels): spatial mixing without
  cross-channel interaction. The depthwise conv output feeds directly into
  RMSNorm (no intermediate activation).

- **RMSNorm** (non-affine, per-channel): normalizes activations before the
  pointwise MLP, replacing LayerNorm used in standard ConvNeXt.

- **Global Response Normalization (GRN)** (from ConvNeXt V2, Woo et al. 2023):
  applied between the two pointwise convolutions. GRN computes per-channel L2
  norms across the spatial dimensions and normalizes by the cross-channel mean:
  ```
  g = ||x||_2 over (H, W)
  n = g / mean(g over channels)
  GRN(x) = gamma * (x * n) + beta + x
  ```
  This encourages feature diversity across channels and prevents channel
  collapse during training.

- **Scale+Gate modulation** (decoder only): FCDM blocks use a 2-way modulation
  `(scale, gate)` from the timestep embedding, in contrast to DiCo's 4-way
  `(shift_conv, gate_conv, shift_mlp, gate_mlp)`. Scale is applied after
  RMSNorm: `h = h * (1 + scale)`. Gate is applied to the residual:
  `out = x + gate * h`. The gate is used **raw** (no tanh activation), giving
  unbounded gating — this differs from DiCo which applies tanh to constrain
  the gate to [-1, 1].

- **Layer Scale** (encoder only): for unconditioned encoder blocks, a learnable
  per-channel scale (initialized to 1e-3) gates the residual for near-identity
  initialization, following ConvNeXt.
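
The components above compose as follows. This is a minimal PyTorch sketch,
not the exported module code: class names, the eps constants, and the NCHW
plumbing are illustrative assumptions; only the operator order follows the
diagram above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rms_norm(x, eps=1e-6):
    """Non-affine RMSNorm over the channel dimension (NCHW layout)."""
    return x * torch.rsqrt(x.pow(2).mean(dim=1, keepdim=True) + eps)

class GRN(nn.Module):
    """Global Response Normalization (ConvNeXt V2); zero-init makes it an identity at start."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, dim, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, dim, 1, 1))

    def forward(self, x):
        g = x.pow(2).sum(dim=(2, 3), keepdim=True).sqrt()  # per-channel L2 over (H, W)
        n = g / (g.mean(dim=1, keepdim=True) + 1e-6)       # normalize by cross-channel mean
        return self.gamma * (x * n) + self.beta + x

class FCDMBlock(nn.Module):
    """Single unified residual path: DWConv -> RMSNorm -> [scale] -> MLP with GRN -> [gate]."""
    def __init__(self, dim, mlp_ratio=4.0):
        super().__init__()
        hidden = int(dim * mlp_ratio)
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.pw1 = nn.Conv2d(dim, hidden, 1)
        self.grn = GRN(hidden)
        self.pw2 = nn.Conv2d(hidden, dim, 1)

    def forward(self, x, scale=None, gate=None):
        h = rms_norm(self.dwconv(x))          # no activation between DWConv and norm
        if scale is not None:                 # decoder only: AdaLN scale, [B, C, 1, 1]
            h = h * (1 + scale)
        h = self.pw2(self.grn(F.gelu(self.pw1(h))))
        if gate is not None:                  # decoder only: raw gate, no tanh
            h = gate * h
        return x + h                          # encoder variant applies layer-scale to h instead
```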

### 1.2 Encoder

The encoder uses a single spatial stride (via PixelUnshuffle at the input)
followed by FCDM blocks at constant spatial resolution, then a bottleneck
projection that outputs both the posterior mean and per-element log-SNR:

```
Image [B, 3, H, W]
  ──► PixelUnshuffle(p=16) + Conv 1×1 (3·16² → 896)      [Patchify]
  ──► 4 × FCDMBlock (unconditioned, layer-scale gated)
  ──► Conv 1×1 (896 → 256)                                [Bottleneck projection]
  ──► Split → mean [B, 128, h, w] + logsnr [B, 128, h, w]
  ──► α(logsnr) · mean                                    [Posterior mode]
```

The single-stride design keeps every encoder block at full spatial resolution
and full channel width; the information bottleneck is imposed only by the
final linear projection, which selects which channels to retain. See
Section 5.2 for the design rationale.

**Note:** This checkpoint uses `bottleneck_norm_mode=disabled`, so no
post-bottleneck RMSNorm is applied to the mean branch. The posterior mode
output is simply `α · μ` where `α = √σ(λ)`.

### 1.3 Decoder

The decoder predicts x̂₀ from noisy input x_t, conditioned on encoder
latents z and timestep t:

```
Noised image x_t [B, 3, H, W]
  ──► PixelUnshuffle(p=16) + Conv 1×1 (3·16² → 896)      [Patchify]
  ──► Concatenate with Conv 1×1(latents, 128 → 896)      [Latent fusion]
  ──► Conv 1×1 (2·896 → 896)
  ──► 2 × FCDMBlock (AdaLN conditioned)                   [Start blocks]
  ──► 4 × FCDMBlock (AdaLN conditioned)                   [Middle blocks]
  ──► Concat(start_out, middle_out) + Conv 1×1            [Skip fusion]
  ──► 2 × FCDMBlock (AdaLN conditioned)                   [End blocks]
  ──► Conv 1×1 (896 → 3·16²) + PixelShuffle(16)          [Unpatchify]
  ──► x̂₀ prediction [B, 3, H, W]
```

The skip-concat topology with 2+4+2 blocks is inspired by SPRINT's
sparse-dense residual fusion (Park et al., 2025). See Section 5.4 for the
design rationale.

### 1.4 AdaLN: Shared Base + Low-Rank Deltas

Timestep conditioning follows the Z-image style AdaLN
([Cai et al., 2025](https://arxiv.org/abs/2511.22699)): a shared base
projection plus a low-rank delta per layer.

A single base projector is shared across all 8 decoder layers, and each
layer adds a low-rank correction:

```
m_i = Base(SiLU(cond)) + Δ_i(SiLU(cond))
```

where `Base: ℝ^D → ℝ^{2D}` is a linear projection (zero-initialized) and
`Δ_i: ℝ^D → ℝ^r → ℝ^{2D}` is a low-rank factorization with rank r = 128
(zero-initialized up-projection).

The packed modulation `m_i ∈ ℝ^{B × 2D}` is split into `(scale, gate)` which
modulate the FCDM block (no shift term):

```
ĥ = RMSNorm(x) · (1 + scale)
x ← x + gate · f(ĥ)
```
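
A sketch of how the shared base and per-layer deltas could be wired (module
and argument names are illustrative assumptions, not the exported code):

```python
import torch.nn as nn

class SharedAdaLN(nn.Module):
    """Shared zero-init base projection plus per-layer low-rank deltas -> (scale, gate)."""
    def __init__(self, dim, n_layers=8, rank=128):
        super().__init__()
        self.act = nn.SiLU()
        self.base = nn.Linear(dim, 2 * dim)       # shared across all decoder layers
        nn.init.zeros_(self.base.weight)
        nn.init.zeros_(self.base.bias)
        self.down = nn.ModuleList(nn.Linear(dim, rank, bias=False) for _ in range(n_layers))
        self.up = nn.ModuleList(nn.Linear(rank, 2 * dim, bias=False) for _ in range(n_layers))
        for up in self.up:                        # zero-init up-projection: deltas start at 0
            nn.init.zeros_(up.weight)

    def forward(self, cond, i):
        c = self.act(cond)                        # cond: [B, D] timestep embedding
        m = self.base(c) + self.up[i](self.down[i](c))
        scale, gate = m.chunk(2, dim=-1)
        return scale[:, :, None, None], gate[:, :, None, None]  # broadcast over (H, W)
```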

### 1.5 Path-Drop Guidance (PDG)

At inference, optional PDG sharpens reconstructions by exploiting the
skip-concat structure — a classifier-free guidance analogue that does not
require training with conditioning dropout:

1. **Conditional pass:** run all blocks normally → x̂₀^cond
2. **Unconditional pass:** replace the middle block output with a learned
   mask feature m ∈ ℝ^{1×D×1×1} (initialized to zero), effectively dropping
   the deep processing path → x̂₀^uncond
3. **Guided prediction:** x̂₀ = x̂₀^uncond + s · (x̂₀^cond - x̂₀^uncond)

where s is the guidance strength.
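
In code the guided prediction is a two-pass evaluation. A sketch, where the
`drop_middle` switch standing in for the learned-mask replacement is a
hypothetical API:

```python
def pdg_predict(decoder, x_t, t, z, strength=2.0):
    """Path-Drop Guidance: CFG-style extrapolation between full and skip-only paths."""
    x0_cond = decoder(x_t, t, z)                       # full path
    x0_uncond = decoder(x_t, t, z, drop_middle=True)   # middle output -> learned mask feature
    return x0_uncond + strength * (x0_cond - x0_uncond)
```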

For PSNR-optimal reconstruction, PDG is disabled (1 NFE). For perceptual
sharpening, use 10 steps with PDG strength 2.0. Note that PDG is primarily
useful for more compressed bottlenecks (e.g. 32 or 64 channels) and is
rarely necessary for 128-channel models where reconstruction quality is
already high.

---

## 2. Decoder VP Diffusion Parameterization

The decoder uses the variance-preserving (VP) diffusion framework from
SiD2 with an x-prediction objective.

### 2.1 Forward Process

Given a clean image \\(x_0\\), the forward process constructs a noisy sample at
continuous time \\(t \in [0, 1]\\):

$$x_t = \alpha_t \, x_0 + \sigma_t \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, s^2 I)$$

where \\(s = 0.558\\) is the pixel-space noise standard deviation (estimated from
the dataset image distribution) and the VP constraint holds:
\\(\alpha_t^2 + \sigma_t^2 = 1\\).

### 2.2 Log Signal-to-Noise Ratio

The schedule is parameterized through the log signal-to-noise ratio:

$$\lambda_t = \log \frac{\alpha_t^2}{\sigma_t^2}$$

which monotonically decreases as \\(t \to 1\\) (pure noise). From \\(\lambda_t\\)
we recover \\(\alpha_t\\) and \\(\sigma_t\\) via the sigmoid function:

$$\alpha_t = \sqrt{\sigma(\lambda_t)}, \qquad \sigma_t = \sqrt{\sigma(-\lambda_t)}$$

### 2.3 Cosine-Interpolated Schedule

Following SiD2, the logSNR schedule uses cosine interpolation:

$$\lambda(t) = -2 \log \tan(a \cdot t + b)$$

where \\(a\\) and \\(b\\) are computed to satisfy the boundary conditions
\\(\lambda(0) = \lambda_\text{max} = 10\\) and
\\(\lambda(1) = \lambda_\text{min} = -10\\).
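
Solving the boundary conditions gives \\(b = \arctan(e^{-\lambda_\text{max}/2})\\)
and \\(a = \arctan(e^{-\lambda_\text{min}/2}) - b\\). A minimal sketch of the
schedule together with the sigmoid recovery from Section 2.2 (function names
are illustrative):

```python
import math
import torch

LAMBDA_MAX, LAMBDA_MIN = 10.0, -10.0
B = math.atan(math.exp(-LAMBDA_MAX / 2))       # enforces lambda(0) = lambda_max
A = math.atan(math.exp(-LAMBDA_MIN / 2)) - B   # enforces lambda(1) = lambda_min

def logsnr(t):
    """Cosine-interpolated logSNR: lambda(t) = -2 log tan(A t + B)."""
    return -2.0 * torch.log(torch.tan(A * t + B))

def alpha_sigma(lam):
    """VP coefficients: alpha = sqrt(sigmoid(lam)), sigma = sqrt(sigmoid(-lam))."""
    return torch.sigmoid(lam).sqrt(), torch.sigmoid(-lam).sqrt()
```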

### 2.4 X-Prediction Objective

The model predicts the clean image \\(\hat{x}_0 = f_\theta(x_t, t, z)\\)
conditioned on encoder latents \\(z\\).

**Schedule-invariant loss.** Following SiD2, the training loss is defined as
an integral over logSNR \\(\lambda\\), making it invariant to the choice of
noise schedule. Since timesteps are sampled uniformly
\\(t \sim \mathcal{U}(0,1)\\), the change of variable introduces a Jacobian
factor:

$$\mathcal{L} = \mathbb{E}_{t \sim \mathcal{U}(0,1)} \left[ \left(-\frac{d\lambda}{dt}\right) \cdot w(\lambda(t)) \cdot \| x_0 - \hat{x}_0 \|^2 \right]$$

**Sigmoid weighting.** The weighting function uses a sigmoid centered at bias
\\(b = -2.0\\), converting from \\(\varepsilon\\)-prediction to
\\(x\\)-prediction form:

$$\text{weight}(t) = -\frac{1}{2} \frac{d\lambda}{dt} \cdot e^b \cdot \sigma(\lambda(t) - b)$$
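
For this schedule the Jacobian is \\(d\lambda/dt = -4a/\sin(2(at+b))\\).
Continuing the sketch from Section 2.3, the per-sample weight could be
computed as:

```python
def dlogsnr_dt(t):
    """Analytic derivative of lambda(t) = -2 log tan(A t + B); negative for all t."""
    return -4.0 * A / torch.sin(2.0 * (A * t + B))

def xpred_weight(t, bias=-2.0):
    """SiD2 sigmoid weighting in x-prediction form (folds in the -1/2 dlambda/dt factor)."""
    return -0.5 * dlogsnr_dt(t) * math.exp(bias) * torch.sigmoid(logsnr(t) - bias)

# per-sample loss: (xpred_weight(t) * ((x0 - x0_hat) ** 2).mean(dim=(1, 2, 3))).mean()
```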

### 2.5 Sampling

Decoding uses DDIM by default. With 1 NFE (default), the model runs a single
evaluation at t_start ≈ 1 (near pure noise) and directly outputs the x₀
prediction. This is equivalent to a denoising autoencoder that maps
`σ_start · noise → x̂₀` conditioned on encoder latents.
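
A sketch of the 1-NFE path under these definitions, reusing `logsnr` and
`alpha_sigma` from the Section 2.3 sketch (the `decoder` callable and its
signature are assumptions):

```python
@torch.no_grad()
def decode_1nfe(decoder, z, shape, s=0.558, t_start=0.9999):
    """Single evaluation near pure noise; the x0-prediction is the reconstruction."""
    t = torch.full((shape[0],), t_start)
    _, sigma = alpha_sigma(logsnr(t))
    x_t = sigma.view(-1, 1, 1, 1) * s * torch.randn(shape)  # alpha is ~0 at t ~ 1
    return decoder(x_t, t, z)
```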

DPM++2M is also supported as an alternative sampler, using a half-lambda
exponential integrator for faster convergence with more steps.

---

## 3. Stochastic Posterior

### 3.1 VP Log-SNR Parameterization

Instead of a KL-divergence penalty on a Gaussian encoder, SemDisDiffAE
parameterizes the bottleneck posterior using the VP interpolation convention:
a VP-style noise interpolation in the encoder bottleneck replaces the
traditional VAE KL penalty.

The encoder outputs two sets of 128 channels:

- \\(\mu\\) — the clean signal (posterior mean)
- \\(\lambda\\) — per-element log signal-to-noise ratio

The posterior distribution is:

$$z = \alpha(\lambda) \, \mu + \sigma(\lambda) \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I)$$

where \\(\alpha = \sqrt{\sigma(\lambda)}\\) and
\\(\sigma = \sqrt{\sigma(-\lambda)}\\) (sigmoid parameterization). This is
equivalent to a Gaussian with mean \\(\alpha \mu\\) and variance
\\(\sigma^2\\).

Using a VP interpolation rather than simple additive noise decouples token
scale from stochasticity. With additive noise (\\(z = \mu + \sigma\varepsilon\\)),
the encoder faces gradient pressure to scale latents up to counter the noise
— the SNR depends on the magnitude of \\(\mu\\). The VP formulation
(\\(z = \alpha\mu + \sigma\varepsilon\\) with \\(\alpha^2 + \sigma^2 = 1\\))
removes this coupling: the noise level is controlled entirely by the predicted
log-SNR, independent of the latent magnitude.
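
A sketch of the reparameterized sample and the mode (an illustrative helper,
not the library API):

```python
import torch

def vp_posterior(mu, lam, sample=True):
    """z = alpha(lam) * mu + sigma(lam) * eps, with alpha^2 + sigma^2 = 1."""
    alpha = torch.sigmoid(lam).sqrt()
    if not sample:
        return alpha * mu                      # posterior mode (inference default)
    sigma = torch.sigmoid(-lam).sqrt()
    return alpha * mu + sigma * torch.randn_like(mu)
```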

### 3.2 Variance Expansion Loss

To prevent posterior collapse (where the encoder learns to set σ → 0 and
ignore the stochastic component entirely), we adopt a **variance expansion
loss** inspired by VEL (Li et al., 2026,
[arXiv:2603.21085](https://arxiv.org/abs/2603.21085)):

$$\mathcal{L}_\text{var} = -\operatorname{mean}\!\bigl(\log(\sigma^2 + \delta)\bigr)$$

where \\(\sigma^2\\) is the posterior variance derived from the predicted
log-SNR and \\(\delta = 10^{-6}\\) for numerical stability. This loss
encourages non-zero posterior variance by penalizing small \\(\sigma^2\\).

VEL proposes the form \\(1/(\sigma^2 + \delta)\\) for variance expansion. We
found this to be too aggressive — the \\(1/\sigma^2\\) gradient pushes variance
up very rapidly, leading to excessive high-frequency noise in the latent
space. We use the \\(-\log(\sigma^2 + \delta)\\) form instead, which provides
a gentler, logarithmic penalty that stabilizes training.
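
In code the two variants differ by a single line (a sketch; `lam` is the
predicted per-element log-SNR map):

```python
var = torch.sigmoid(-lam)                  # posterior variance sigma^2
# VEL's original form, too aggressive in our setting:
# loss_var = (1.0 / (var + 1e-6)).mean()
loss_var = -torch.log(var + 1e-6).mean()   # gentler logarithmic penalty used here
```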

**For this checkpoint:** the variance expansion loss is active with weight
**1e-5**.

> **Key finding: latent spectral structure matters for downstream diffusion.**
>
> Reconstruction quality is not very sensitive to the posterior noise level —
> good PSNR is achievable even with log-SNR as low as -2. However, the
> posterior noise level has a strong effect on the **spatial frequency
> content** of the latent space. When variance expansion is too aggressive,
> the latent space develops excessive high-frequency content; when it is
> too weak or absent, latents become overly smooth.
>
> We found empirically that downstream diffusion models converge best when
> the latent space has a **radial power spectral density (PSD) decay
> exponent of approximately 1.5** — deviating significantly in either
> direction (too smooth or too high-frequency) consistently yields worse
> downstream training convergence. We monitor this metric during training
> validation to guide the variance expansion weight.
>
> The weight of 1e-5 for this checkpoint was chosen to target this spectral
> sweet spot.

### 3.3 Posterior Mode for Inference

At inference, the encoder returns the **posterior mode**: `z = α(λ) · μ`. For
this checkpoint, the posterior log-SNR is very high (posterior variance is
negligible), so sampling and mode are nearly identical.

The `encode_posterior()` method is available for users who need the full
posterior distribution.

---

## 4. Semantic Alignment

This checkpoint uses **semantic alignment** to encourage semantically
structured latent representations. The approach is inspired by DRA
(Page et al., 2026, [arXiv:2601.05823](https://arxiv.org/abs/2601.05823))
which aligns autoencoder latents with frozen vision encoder features. Our
implementation differs in the projection architecture and noise schedule.

### 4.1 Teacher

A frozen DINOv2-S with registers
(timm: `vit_small_patch16_dinov3.lvd_1689m`, 384-dim patch tokens) provides
the target spatial semantic features.

### 4.2 Projection Head

The student projection head maps noisy encoder latents to the teacher's
token space. It consists of:

```
Noisy latents z_noisy ∈ ℝ^{B×128×h×w}
  ──► Conv 1×1 (128 → 384)                     [Channel projection]
  ──► Flatten to tokens [B, T, 384]
  ──► DiT transformer block                     [Single block, 6 heads × 64 dim]
      (self-attention with axial RoPE 2D + AdaLN conditioned on τ)
  ──► RMSNorm
  ──► student tokens ∈ ℝ^{B×T×384}
```

The DiT block uses standard multi-head self-attention with 2D axial
rotary position embeddings (RoPE) and AdaLN-Zero timestep conditioning.
This gives the projection head global spatial reasoning — important for
matching the teacher's self-attention-based representations — while the
main encoder/decoder remain purely convolutional.

### 4.3 Noisy Alignment

Unlike standard representation alignment which operates on clean latents,
we align **noisy** latent versions. The noise level \\(\tau\\) is sampled from a
\\(\text{Beta}(2,2)\\) distribution (concentrated around \\(\tau = 0.5\\)), and the
latents are noised with flow-matching linear interpolation:

$$z_\text{noisy} = (1 - \tau) \, z + \tau \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I), \quad \tau \sim \text{Beta}(2, 2)$$

The projection head receives both the noisy latents and the noise level
\\(\tau\\) (via its AdaLN conditioning). This trains the head to extract semantic
information even from partially corrupted latents, improving robustness
for downstream diffusion models which operate on noised latent inputs.
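
A sketch of the alignment step (`proj_head` and the precomputed
`teacher_tokens` are assumptions; the negative-cosine objective matches the
loss described in Section 6.1):

```python
import torch
import torch.nn.functional as F

def alignment_loss(z, teacher_tokens, proj_head):
    """Noise latents with flow-matching interpolation, then align to teacher tokens."""
    tau = torch.distributions.Beta(2.0, 2.0).sample((z.shape[0],)).to(z.device)
    t4 = tau.view(-1, 1, 1, 1)
    z_noisy = (1 - t4) * z + t4 * torch.randn_like(z)
    student = proj_head(z_noisy, tau)          # [B, T, 384]; tau feeds the AdaLN conditioning
    return (1 - F.cosine_similarity(student, teacher_tokens, dim=-1)).mean()
```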

### 4.4 Training Details

The alignment loss is the mean negative cosine similarity between student
and teacher tokens, weighted at **0.01** throughout training. The student
projection head operates on all 128 bottleneck channels, unlike the
predecessor iRDiffAE which aligned only the first 64 of 128 channels.

Note that the projection head is a training-only component — it is not
included in the exported model weights.

---

## 5. Design Choices

### 5.1 Convolutional Architecture

SemDisDiffAE uses a fully convolutional architecture rather than a vision
transformer. For an autoencoder whose goal is faithful pixel-level
reconstruction (not global semantic understanding), convolutions offer:

- **Resolution generalization.** Convolutions operate on local patches and
  generalize naturally to arbitrary image dimensions without interpolating
  position embeddings or suffering attention distribution shift.
- **Translation invariance.** Weight sharing across spatial positions is well
  matched to reconstruction, where the same local patterns (edges, textures)
  conditioned on the low-frequency latent recur throughout the image.
- **Locality.** Reconstruction quality depends on preserving fine spatial
  detail. Convolutions are inherently local operators, avoiding the quadratic
  cost of global attention while focusing computation where it matters most.

### 5.2 Single-Stride Encoder with Final Bottleneck

The encoder uses a single spatial stride (PixelUnshuffle at the input)
followed by blocks at constant spatial resolution, then a final 1×1 convolution
to project to the bottleneck. This differs from classical VAE encoders that use
progressive downsampling with channel expansion at each stage.

The single-stride design ensures that all encoder blocks see the full spatial
resolution and full channel width simultaneously. The information bottleneck is
imposed only at the very end, where a single linear projection selects which
channels to retain.

### 5.3 Diffusion Decoding

The main advantage of diffusion decoding over the standard GAN + LPIPS
approach is **simplicity and speed of experimentation**. The training
objective is a straightforward weighted MSE — no discriminator, no LPIPS
perceptual loss, no delicate adversarial balancing. This makes it very fast
to train and easy to iterate on: typically a few hours on a single GPU is
sufficient. This checkpoint was trained for 251k steps. By contrast,
GAN + LPIPS-based VAEs require many days of large-GPU time and are
notoriously difficult to stabilize from scratch.

This simplicity enables rapid experimentation with latent space shaping to
get it as diffusion-friendly as possible, while still achieving excellent
reconstruction quality.

### 5.4 Skip Connection and Path-Drop Guidance

The decoder's start → middle → skip-fuse → end architecture is inspired by
SPRINT's sparse-dense residual fusion (Park et al., 2025). The design serves
three purposes:

1. **Regularization.** The skip path ensures that even if the middle blocks
   are dropped or poorly conditioned, the end blocks still receive meaningful
   features from the start blocks.
2. **High-frequency preservation.** The start blocks (which see the input most
   directly) pass fine detail through the skip to the end blocks.
3. **Path-Drop Guidance.** At inference, replacing the middle block output
   with a learned mask feature creates an "unconditional" prediction that
   preserves the skip path but drops the deep processing. Interpolating
   between conditional and unconditional predictions (as in classifier-free
   guidance) sharpens the output without requiring training-time dropout.

---

## 6. Training

### 6.1 Loss Functions

The total training loss is:

$$\mathcal{L}_\text{total} = \mathcal{L}_\text{recon} + 0.01 \cdot \mathcal{L}_\text{semantic} + 10^{-4} \cdot \mathcal{L}_\text{scale} + 10^{-5} \cdot \mathcal{L}_\text{var}$$

| Loss | Weight | Description |
|------|--------|-------------|
| \\(\mathcal{L}_\text{recon}\\) | 1.0 | SiD2 sigmoid-weighted x-prediction MSE (\\(b = -2.0\\)). Per-pixel \\((\hat{x}_0 - x_0)^2\\) averaged over (C, H, W) per sample, multiplied by \\(w(t) = -\tfrac{1}{2} \tfrac{d\lambda}{dt} e^b \sigma(\lambda - b)\\), then averaged over the batch |
| \\(\mathcal{L}_\text{semantic}\\) | 0.01 | Per-token \\(1 - \cos(\text{student}, \text{teacher})\\) averaged over all tokens and batch (see §4) |
| \\(\mathcal{L}_\text{scale}\\) | 0.0001 | Per-channel variance \\(\text{var}_c\\) estimated over (B, H, W), then \\((\log(\text{var}_c + \varepsilon) - \log(\text{target}))^2\\) averaged over channels. Target variance = 1.0 |
| \\(\mathcal{L}_\text{var}\\) | 1e-5 | Per-element \\(-\log(\sigma^2 + \delta)\\) where \\(\sigma^2\\) is the posterior variance, averaged over all dims (B, C, H, W). See §3.2 |

**Note on loss scales:** The decoder reconstruction loss has a small
effective magnitude due to the SiD2 VP x-prediction weighting (the Jacobian
\\(d\lambda/dt\\) and sigmoid weighting compress the per-sample loss scale). As a
result, all auxiliary loss weights must be kept correspondingly small to
avoid dominating the reconstruction objective.
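
As a concrete reading of the \\(\mathcal{L}_\text{scale}\\) row, a sketch
(target variance 1.0; the \\(\varepsilon\\) constant is an assumption):

```python
import math
import torch

def scale_loss(z, target_var=1.0, eps=1e-6):
    """Penalize per-channel log-variance deviation from the target."""
    var_c = z.var(dim=(0, 2, 3))               # per-channel variance over (B, H, W)
    return ((torch.log(var_c + eps) - math.log(target_var)) ** 2).mean()
```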

### 6.2 Optimizer and Hyperparameters

| Parameter | Value |
|-----------|-------|
| Optimizer | AdamW (β₁=0.9, β₂=0.99) |
| Learning rate | 1e-4 (constant after warmup) |
| Weight decay | 0.0 |
| Warmup steps | 2,000 |
| Gradient clip | 1.0 (max norm) |
| Precision | AMP bfloat16 (FP32 master weights, TF32 matmul) |
| EMA decay | 0.9995 (updated every step) |
| Batch size | 128 |
| Timestep sampling | Uniform with SiD2 logSNR shift -1.0 |
| Compilation | `torch.compile` enabled |
| Training steps | 251k |
| Hardware | Single GPU |

Convergence is fast — training is stopped when the training loss starts
plateauing, which typically occurs within a few hours on a single GPU.

### 6.3 Data

Training uses ~5M images at various resolutions: mostly photographs, with
a significant proportion of illustrations and text-heavy images (documents,
screenshots, book covers, diagrams) to encourage crisp line and edge
reconstruction. Images are loaded via two strategies in a 50/50 mix:

- **Full-image downsampling:** images are bucketed by aspect ratio and
  downsampled to ~256² resolution (preserving aspect ratio).
- **Random 256×256 crops:** patches extracted from images stored at ≥512px
  resolution.

This mixed strategy exposes the model to both global scene composition (via
downsampled full images) and fine local detail (via crops from higher-resolution
sources).

---

## 7. Model Configuration

| Parameter | Value |
|-----------|-------|
| Patch size | 16 |
| Model dimension | 896 |
| Encoder depth | 4 blocks |
| Decoder depth | 8 blocks (2 start + 4 middle + 2 end) |
| Bottleneck dimension | 128 channels |
| Spatial compression | 16× (H/16 × W/16) |
| Total compression | 6.0× (3·16² pixel values → 128 latent channels per position) |
| MLP ratio | 4.0 |
| Depthwise kernel | 7×7 |
| AdaLN per-block delta rank | 128 |
| Block type | FCDM (ConvNeXt + GRN + scale/gate AdaLN) |
| Posterior | Diagonal Gaussian (VP log-SNR), variance expansion weight 1e-5 |
| Bottleneck norm | Disabled |
| λ_min, λ_max | -10, +10 |
| Sigmoid bias b | -2.0 |
| Pixel noise std s | 0.558 |
| Parameters | 88.8M |

---

## 8. Inference

### Recommended Settings

| Use case | Steps (NFE) | PDG | Sampler | Notes |
|----------|-------------|-----|---------|-------|
| **PSNR-optimal** | 1 | off | DDIM | Default. Fastest. |
| **Perceptual** | 10 | on (2.0) | DDIM | Sharper details; ~15× slower than 1 NFE (two passes per step, with the unconditional pass skipping the middle blocks) |

### Usage

```python
from fcdm_diffae import FCDMDiffAE

# Load model
model = FCDMDiffAE.from_pretrained("data-archetype/semdisdiffae", device="cuda")

# Encode (returns posterior mode by default)
latents = model.encode(images)  # [B,3,H,W] → [B,128,H/16,W/16]

# Decode (1 step)
recon = model.decode(latents, height=H, width=W)

# Full posterior access
posterior = model.encode_posterior(images)
print(posterior.mean.shape, posterior.logsnr.shape)
z_sampled = posterior.sample()
```

---

## Citation

```bibtex
@misc{semdisdiffae,
  title   = {SemDisDiffAE: A Semantically Disentangled Diffusion Autoencoder},
  author  = {data-archetype},
  email   = {data-archetype@proton.me},
  year    = {2026},
  month   = apr,
  url     = {https://huggingface.co/data-archetype/semdisdiffae},
}
```

---

## 9. Results

Reconstruction quality is evaluated on a curated set of test images covering photographs, book covers, and documents.

### 9.1 Interactive Viewer

**[Open full-resolution comparison viewer](https://huggingface.co/spaces/data-archetype/semdisdiffae-results)** — side-by-side reconstructions, RGB deltas, and latent PCA with adjustable image size.

### 9.2 Inference Settings

| Setting | Value |
|---------|-------|
| Sampler | ddim |
| Steps | 1 |
| Schedule | linear |
| Seed | 42 |
| PDG | no_path_drop |
| Batch size (timing) | 4 |

> All models run in bfloat16. Timings measured on an NVIDIA RTX Pro 6000 (Blackwell).

### 9.3 Global Metrics

| Metric | semdisdiffae (1 step) | Flux.2 VAE |
|--------|--------|--------|
| Avg PSNR (dB) | 35.78 | 34.16 |
| Avg encode (ms/image) | 2.5 | 46.1 |
| Avg decode (ms/image) | 5.5 | 91.8 |

### 9.4 Per-Image PSNR (dB)

| Image | semdisdiffae (1 step) | Flux.2 VAE |
|-------|--------|--------|
| p640x1536:94623 | 35.44 | 33.50 |
| p640x1536:94624 | 31.33 | 30.03 |
| p640x1536:94625 | 35.05 | 33.98 |
| p640x1536:94626 | 33.21 | 31.53 |
| p640x1536:94627 | 32.54 | 30.53 |
| p640x1536:94628 | 29.80 | 28.88 |
| p960x1024:216264 | 46.37 | 45.39 |
| p960x1024:216265 | 29.70 | 27.80 |
| p960x1024:216266 | 47.15 | 46.20 |
| p960x1024:216267 | 40.99 | 39.23 |
| p960x1024:216268 | 38.47 | 36.13 |
| p960x1024:216269 | 32.74 | 30.24 |
| p960x1024:216270 | 36.23 | 34.18 |
| p960x1024:216271 | 44.41 | 42.18 |
| p704x1472:94699 | 43.80 | 41.79 |
| p704x1472:94700 | 32.83 | 32.08 |
| p704x1472:94701 | 39.00 | 37.90 |
| p704x1472:94702 | 34.52 | 32.50 |
| p704x1472:94703 | 32.81 | 31.35 |
| p704x1472:94704 | 33.38 | 31.84 |
| p704x1472:94705 | 39.70 | 37.44 |
| p704x1472:94706 | 35.12 | 33.66 |
| r256_p1344x704:15577 | 31.02 | 29.98 |
| r256_p1344x704:15578 | 32.38 | 30.79 |
| r256_p1344x704:15579 | 33.27 | 31.83 |
| r256_p1344x704:15580 | 37.84 | 36.03 |
| r256_p1344x704:15581 | 38.57 | 36.94 |
| r256_p1344x704:15582 | 33.41 | 32.10 |
| r256_p1344x704:15583 | 36.67 | 34.54 |
| r256_p1344x704:15584 | 33.23 | 31.76 |
| r256_p896x1152:144131 | 35.30 | 33.60 |
| r256_p896x1152:144132 | 36.99 | 35.32 |
| r256_p896x1152:144133 | 39.69 | 37.33 |
| r256_p896x1152:144134 | 36.01 | 34.47 |
| r256_p896x1152:144135 | 31.20 | 29.87 |
| r256_p896x1152:144136 | 37.51 | 35.68 |
| r256_p896x1152:144137 | 33.83 | 32.86 |
| r256_p896x1152:144138 | 27.39 | 25.63 |
| VAE_accuracy_test_image | 36.64 | 35.25 |