
SemDisDiffAE: Technical Report

SemDisDiffAE (Semantically Disentangled Diffusion AutoEncoder) is a fast diffusion autoencoder with a 128-channel spatial bottleneck built on FCDM (Fully Convolutional Diffusion Model) blocks. The encoder uses a VP-parameterized diagonal Gaussian posterior (learned log-SNR output head), and the decoder reconstructs via single-step VP diffusion.

This checkpoint is trained with DINOv2 semantic alignment and variance expansion regularization. The name is a nod to DRA (Page et al., 2026) whose disentangled representation alignment approach we largely follow here.

Contents

  1. Architecture
  2. Decoder VP Diffusion Parameterization
  3. Stochastic Posterior
  4. Semantic Alignment
  5. Design Choices
  6. Training
  7. Model Configuration
  8. Inference
  9. Results

References:

  • FCDM - Kwon et al., Reviving ConvNeXt for Efficient Convolutional Diffusion Models, arXiv:2603.09408, 2026.
  • SiD2 - Hoogeboom et al., Simpler Diffusion (SiD2): 1.5 FID on ImageNet512 with pixel-space diffusion, arXiv:2410.19324, ICLR 2025.
  • DiTo - Yin et al., Diffusion Autoencoders are Scalable Image Tokenizers, arXiv:2501.18593, 2025.
  • DiCo - Ai et al., DiCo: Revitalizing ConvNets for Scalable and Efficient Diffusion Modeling, arXiv:2505.11196, 2025.
  • ConvNeXt V2 - Woo et al., ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders, arXiv:2301.00808, CVPR 2023.
  • Z-image - Cai et al., Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer, arXiv:2511.22699, 2025.
  • SPRINT - Park et al., Sprint: Sparse-Dense Residual Fusion for Efficient Diffusion Transformers, arXiv:2510.21986, 2025.
  • DINOv2 - Oquab et al., DINOv2: Learning Robust Visual Features without Supervision, arXiv:2304.07193, 2023. Register variant: Darcet et al., Vision Transformers Need Registers, arXiv:2309.16588, ICLR 2024.
  • iREPA - Singh et al., What matters for Representation Alignment: Global Information or Spatial Structure?, arXiv:2512.10794, 2025.
  • DRA - Page et al., Boosting Latent Diffusion Models via Disentangled Representation Alignment, arXiv:2601.05823, 2026.
  • VEL - Li et al., Taming Sampling Perturbations with Variance Expansion Loss for Latent Diffusion Models, arXiv:2603.21085, 2026.
  • iRDiffAE - data-archetype/irdiffae-v1, predecessor model using DiCo blocks.

1. Architecture

1.1 FCDM Block

SemDisDiffAE uses FCDM blocks: ConvNeXt-style convolutional blocks adapted for diffusion models (Kwon et al., 2026). Each block follows a single unified residual path:

x ──► DWConv 7×7 ──► RMSNorm ──► [Scale] ──► Conv 1×1 ──► GELU ──► GRN ──► Conv 1×1 ──► [Gate] ──► + ──► out
│                                                                                                  ▲
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

This differs from DiCo blocks (used in the predecessor iRDiffAE) which use two separate residual paths (conv + MLP) with Compact Channel Attention (CCA). FCDM consolidates into one path, replacing CCA with Global Response Normalization (GRN).

Key components:

  • Depthwise convolution (7×7, groups=channels): spatial mixing without cross-channel interaction. The depthwise conv output feeds directly into RMSNorm (no intermediate activation).

  • RMSNorm (non-affine, per-channel): normalizes activations before the pointwise MLP, replacing LayerNorm used in standard ConvNeXt.

  • Global Response Normalization (GRN) (from ConvNeXt V2, Woo et al. 2023): applied between the two pointwise convolutions. GRN computes per-channel L2 norms across the spatial dimensions and normalizes by the cross-channel mean:

    g = ||x||_2 over (H, W)
    n = g / mean(g over channels)
    GRN(x) = gamma * (x * n) + beta + x
    

    This encourages feature diversity across channels and prevents channel collapse during training.

  • Scale+Gate modulation (decoder only): FCDM blocks use a 2-way modulation (scale, gate) from the timestep embedding, in contrast to DiCo's 4-way (shift_conv, gate_conv, shift_mlp, gate_mlp). Scale is applied after RMSNorm: h = h * (1 + scale). Gate is applied to the residual: out = x + gate * h. The gate is used raw (no tanh activation), giving unbounded gating; this differs from DiCo, which applies tanh to constrain the gate to [-1, 1]. A minimal PyTorch sketch of the block follows this list.

  • Layer Scale (encoder only): for unconditioned encoder blocks, a learnable per-channel scale (initialized to 1e-3) gates the residual for near-identity initialization, following ConvNeXt.
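
As a concrete reference, here is a minimal PyTorch sketch of a block in this style, covering GRN and the decoder's scale/gate modulation (the encoder's layer-scale variant is omitted). Module names, shapes, and initialization details are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rms_norm(x, eps=1e-6):
    """Non-affine RMSNorm over the channel dimension of a [B, C, H, W] tensor."""
    return x * torch.rsqrt(x.pow(2).mean(dim=1, keepdim=True) + eps)

class GRN(nn.Module):
    """Global Response Normalization (ConvNeXt V2), channel-first variant."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, dim, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, dim, 1, 1))

    def forward(self, x):
        g = torch.linalg.vector_norm(x, dim=(2, 3), keepdim=True)  # per-channel L2 over (H, W)
        n = g / (g.mean(dim=1, keepdim=True) + 1e-6)               # normalize by cross-channel mean
        return self.gamma * (x * n) + self.beta + x

class FCDMBlock(nn.Module):
    """Single-path block: DWConv 7x7 -> RMSNorm -> [scale] -> Conv 1x1 -> GELU -> GRN -> Conv 1x1 -> [gate]."""
    def __init__(self, dim, mlp_ratio=4.0):
        super().__init__()
        hidden = int(dim * mlp_ratio)
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.pw1 = nn.Conv2d(dim, hidden, 1)
        self.grn = GRN(hidden)
        self.pw2 = nn.Conv2d(hidden, dim, 1)

    def forward(self, x, scale=None, gate=None):
        h = rms_norm(self.dwconv(x))
        if scale is not None:          # decoder: AdaLN scale applied after the norm
            h = h * (1 + scale)
        h = self.pw2(self.grn(F.gelu(self.pw1(h))))
        if gate is not None:           # decoder: raw (unbounded) gate on the residual branch
            h = gate * h
        return x + h
```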

1.2 Encoder

The encoder uses a single spatial stride (via PixelUnshuffle at the input) followed by FCDM blocks at constant spatial resolution, then a bottleneck projection that outputs both the posterior mean and per-element log-SNR:

Image [B, 3, H, W]
  ──► PixelUnshuffle(p=16) + Conv 1×1 (3·16² → 896)      [Patchify]
  ──► 4 × FCDMBlock (unconditioned, layer-scale gated)
  ──► Conv 1×1 (896 → 256)                                [Bottleneck projection]
  ──► Split → mean [B, 128, h, w] + logsnr [B, 128, h, w]
  ──► α(logsnr) · mean                                    [Posterior mode]

The single-stride design ensures all encoder blocks see the full spatial resolution and full channel width simultaneously. The information bottleneck is imposed only at the very end, where a single linear projection selects which channels to retain. See Section 5.2 for the rationale.

Note: This checkpoint uses bottleneck_norm_mode=disabled, so no post-bottleneck RMSNorm is applied to the mean branch. The posterior mode output is simply α · μ where α = √σ(λ).
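
A minimal sketch of this patchify / bottleneck-split / posterior-mode path; the FCDM block stack is replaced by a placeholder, and all module names are illustrative.

```python
import torch
import torch.nn as nn

class EncoderSketch(nn.Module):
    def __init__(self, dim=896, latent_dim=128, patch=16):
        super().__init__()
        self.patchify = nn.Sequential(
            nn.PixelUnshuffle(patch),                  # [B, 3, H, W] -> [B, 3*p^2, H/p, W/p]
            nn.Conv2d(3 * patch * patch, dim, 1),
        )
        self.blocks = nn.Identity()                    # stand-in for 4x FCDMBlock at constant resolution
        self.to_bottleneck = nn.Conv2d(dim, 2 * latent_dim, 1)

    def forward(self, x):
        h = self.blocks(self.patchify(x))
        mean, logsnr = self.to_bottleneck(h).chunk(2, dim=1)  # [B, 128, h, w] each
        alpha = torch.sigmoid(logsnr).sqrt()                  # alpha = sqrt(sigmoid(lambda))
        return alpha * mean                                   # posterior mode (bottleneck norm disabled)
```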

1.3 Decoder

The decoder predicts x̂₀ from noisy input x_t, conditioned on encoder latents z and timestep t:

Noised image x_t [B, 3, H, W]
  ──► PixelUnshuffle(p=16) + Conv 1×1 (3·16² → 896)      [Patchify]
  ──► Concatenate with Conv 1×1(latents, 128 → 896)       [Latent fusion]
  ──► Conv 1×1 (2·896 → 896)
  ──► 2 × FCDMBlock (AdaLN conditioned)                    [Start blocks]
  ──► 4 × FCDMBlock (AdaLN conditioned)                    [Middle blocks]
  ──► Concat(start_out, middle_out) + Conv 1×1             [Skip fusion]
  ──► 2 × FCDMBlock (AdaLN conditioned)                    [End blocks]
  ──► Conv 1×1 (896 → 3·16²) + PixelShuffle(16)            [Unpatchify]
  ──► x̂₀ prediction [B, 3, H, W]

The skip-concat topology with 2+4+2 blocks is inspired by SPRINT's sparse-dense residual fusion (Park et al., 2025). See Section 5.4 for the design rationale.

1.4 AdaLN: Shared Base + Low-Rank Deltas

Timestep conditioning follows the Z-image style AdaLN (Cai et al., 2025): a shared base projection plus a low-rank delta per layer.

A single base projector is shared across all 8 decoder layers, and each layer adds a low-rank correction:

m_i = Base(SiLU(cond)) + Δ_i(SiLU(cond))

where Base: ℝ^D → ℝ^{2D} is a linear projection (zero-initialized) and Δ_i: ℝ^D → ℝ^r → ℝ^{2D} is a low-rank factorization with rank r = 128 (zero-initialized up-projection).

The packed modulation m_i ∈ ℝ^{B × 2D} is split into (scale, gate) which modulate the FCDM block (no shift term):

ĥ = RMSNorm(x) · (1 + scale)
x ← x + gate · f(ĥ)
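
A sketch of the shared-base plus low-rank-delta modulation, assuming D = 896 and rank r = 128 as above; class and attribute names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAdaLN(nn.Module):
    """One zero-initialized base projector shared by all layers, plus a low-rank delta per layer."""
    def __init__(self, dim=896, rank=128, num_layers=8):
        super().__init__()
        self.base = nn.Linear(dim, 2 * dim)
        self.delta_down = nn.ModuleList(nn.Linear(dim, rank) for _ in range(num_layers))
        self.delta_up = nn.ModuleList(nn.Linear(rank, 2 * dim) for _ in range(num_layers))
        nn.init.zeros_(self.base.weight)
        nn.init.zeros_(self.base.bias)
        for up in self.delta_up:                       # zero-init up-projections
            nn.init.zeros_(up.weight)
            nn.init.zeros_(up.bias)

    def forward(self, cond, layer_idx):
        c = F.silu(cond)                               # cond: [B, dim] timestep embedding
        m = self.base(c) + self.delta_up[layer_idx](self.delta_down[layer_idx](c))
        scale, gate = m.chunk(2, dim=-1)               # packed [B, 2*dim] -> (scale, gate)
        return scale[:, :, None, None], gate[:, :, None, None]  # broadcast over (H, W)
```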

1.5 Path-Drop Guidance (PDG)

At inference, optional PDG sharpens reconstructions by exploiting the skip-concat structure. It is a classifier-free guidance analogue that does not require training with conditioning dropout:

  1. Conditional pass: run all blocks normally → x̂₀^cond
  2. Unconditional pass: replace the middle block output with a learned mask feature m ∈ ℝ^{1×D×1×1} (initialized to zero), effectively dropping the deep processing path → x̂₀^uncond
  3. Guided prediction: x̂₀ = x̂₀^uncond + s · (x̂₀^cond - x̂₀^uncond)

where s is the guidance strength.

For PSNR-optimal reconstruction, PDG is disabled (1 NFE). For perceptual sharpening, use 10 steps with PDG strength 2.0. Note that PDG is primarily useful for more compressed bottlenecks (e.g. 32 or 64 channels) and is rarely necessary for 128-channel models where reconstruction quality is already high.
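
Sketched as PyTorch, with a hypothetical `drop_middle` flag standing in for the mask-feature substitution described in step 2.

```python
import torch

def decode_with_pdg(decoder, x_t, t, z, s=2.0):
    """Path-Drop Guidance sketch. `drop_middle=True` is a hypothetical flag meaning:
    run the same network but replace the middle-block output with the learned mask
    feature m [1, D, 1, 1], so the end blocks only see start-block features via the skip."""
    x0_cond = decoder(x_t, t, z)                      # full start -> middle -> skip-fuse -> end path
    x0_uncond = decoder(x_t, t, z, drop_middle=True)  # skip-only "unconditional" path
    return x0_uncond + s * (x0_cond - x0_uncond)      # CFG-style extrapolation with strength s
```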


2. Decoder VP Diffusion Parameterization

The decoder uses the variance-preserving (VP) diffusion framework from SiD2 with an x-prediction objective.

2.1 Forward Process

Given a clean image \(x_0\), the forward process constructs a noisy sample at continuous time \(t \in [0, 1]\):

$$x_t = \alpha_t \, x_0 + \sigma_t \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, s^2 I)$$

where \(s = 0.558\) is the pixel-space noise standard deviation (estimated from the dataset image distribution) and the VP constraint \(\alpha_t^2 + \sigma_t^2 = 1\) holds.

2.2 Log Signal-to-Noise Ratio

The schedule is parameterized through the log signal-to-noise ratio:

$$\lambda_t = \log \frac{\alpha_t^2}{\sigma_t^2}$$

which monotonically decreases as \(t \to 1\) (pure noise). From \(\lambda_t\) we recover \(\alpha_t\) and \(\sigma_t\) via the sigmoid function:

$$\alpha_t = \sqrt{\sigma(\lambda_t)}, \qquad \sigma_t = \sqrt{\sigma(-\lambda_t)}$$

2.3 Cosine-Interpolated Schedule

Following SiD2, the logSNR schedule uses cosine interpolation:

$$\lambda(t) = -2 \log \tan(a \cdot t + b)$$

where \(a\) and \(b\) are computed to satisfy the boundary conditions \(\lambda(0) = \lambda_\text{max} = 10\) and \(\lambda(1) = \lambda_\text{min} = -10\).
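
For reference, a small sketch of this schedule and the implied \(\alpha_t, \sigma_t\), solving for \(a\) and \(b\) from the boundary conditions above.

```python
import math
import torch

lambda_max, lambda_min = 10.0, -10.0
b = math.atan(math.exp(-lambda_max / 2))        # enforces lambda(0) = lambda_max
a = math.atan(math.exp(-lambda_min / 2)) - b    # enforces lambda(1) = lambda_min

def logsnr(t):
    """Cosine-interpolated logSNR: lambda(t) = -2 log tan(a*t + b)."""
    return -2.0 * torch.log(torch.tan(a * t + b))

t = torch.linspace(0.0, 1.0, 5)
lam = logsnr(t)
alpha = torch.sigmoid(lam).sqrt()               # alpha_t = sqrt(sigmoid(lambda_t))
sigma = torch.sigmoid(-lam).sqrt()              # sigma_t = sqrt(sigmoid(-lambda_t)); alpha^2 + sigma^2 = 1
```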

2.4 X-Prediction Objective

The model predicts the clean image \(\hat{x}_0 = f_\theta(x_t, t, z)\) conditioned on encoder latents \(z\).

Schedule-invariant loss. Following SiD2, the training loss is defined as an integral over logSNR \(\lambda\), making it invariant to the choice of noise schedule. Since timesteps are sampled uniformly, \(t \sim \mathcal{U}(0,1)\), the change of variable introduces a Jacobian factor:

$$\mathcal{L} = \mathbb{E}_{t \sim \mathcal{U}(0,1)} \left[ \left(-\frac{d\lambda}{dt}\right) \cdot w(\lambda(t)) \cdot \| x_0 - \hat{x}_0 \|^2 \right]$$

Sigmoid weighting. The weighting function uses a sigmoid centered at bias \(b = -2.0\), converting from \(\varepsilon\)-prediction to \(x\)-prediction form:

$$\text{weight}(t) = -\frac{1}{2} \frac{d\lambda}{dt} \cdot e^b \cdot \sigma(\lambda(t) - b)$$
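
Putting the pieces together, a simplified training-loss sketch under these definitions; the logSNR shift from Section 6.2 is omitted, and the model call signature is an assumption.

```python
import math
import torch

lambda_max, lambda_min, bias, s_data = 10.0, -10.0, -2.0, 0.558
b = math.atan(math.exp(-lambda_max / 2))
a = math.atan(math.exp(-lambda_min / 2)) - b

def logsnr(t):
    return -2.0 * torch.log(torch.tan(a * t + b))

def dlogsnr_dt(t):
    # derivative of -2 log tan(a*t + b); negative everywhere on (0, 1)
    return -4.0 * a / torch.sin(2.0 * (a * t + b))

def sid2_x_pred_loss(model, x0, z):
    n = x0.shape[0]
    t = torch.rand(n, device=x0.device)                        # t ~ U(0, 1)
    lam = logsnr(t)
    alpha = torch.sigmoid(lam).sqrt().view(n, 1, 1, 1)
    sigma = torch.sigmoid(-lam).sqrt().view(n, 1, 1, 1)
    x_t = alpha * x0 + sigma * s_data * torch.randn_like(x0)   # eps ~ N(0, s^2 I)
    x0_hat = model(x_t, t, z)                                  # x-prediction
    w = -0.5 * dlogsnr_dt(t) * math.exp(bias) * torch.sigmoid(lam - bias)
    per_sample = ((x0_hat - x0) ** 2).mean(dim=(1, 2, 3))      # average over (C, H, W)
    return (w * per_sample).mean()                             # weight, then batch average
```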

2.5 Sampling

Decoding uses DDIM by default. With 1 NFE (default), the model runs a single evaluation at t_start ≈ 1 (near pure noise) and directly outputs the x₀ prediction. This is equivalent to a denoising autoencoder that maps σ_start · noise → x̂₀ conditioned on encoder latents.

DPM++2M is also supported as an alternative sampler, using a half-lambda exponential integrator for faster convergence with more steps.
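
A sketch of the 1-NFE decode path under the schedule above; the decoder call signature is an assumption.

```python
import torch

@torch.no_grad()
def decode_1nfe(decoder, z, height, width, s_data=0.558):
    """Single-step decode sketch: at t ~ 1 the input is (almost) pure scaled noise,
    and the x0 prediction itself is returned as the reconstruction."""
    n = z.shape[0]
    t = torch.ones(n, device=z.device)
    x_t = s_data * torch.randn(n, 3, height, width, device=z.device)  # sigma_start ~ 1 at lambda_min = -10
    return decoder(x_t, t, z)
```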


3. Stochastic Posterior

3.1 VP Log-SNR Parameterization

Instead of a KL-divergence penalty on a Gaussian encoder, SemDisDiffAE parameterizes the bottleneck posterior using the VP interpolation convention: a VP-style noise interpolation in the encoder bottleneck replaces the traditional VAE KL penalty.

The encoder outputs two sets of 128 channels:

  • μ: the clean signal (posterior mean)
  • λ: per-element log signal-to-noise ratio

The posterior distribution is:

$$z = \alpha(\lambda) \, \mu + \sigma(\lambda) \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I)$$

where \(\alpha = \sqrt{\sigma(\lambda)}\) and \(\sigma = \sqrt{\sigma(-\lambda)}\) (sigmoid parameterization). This is equivalent to a Gaussian with mean \(\alpha\mu\) and variance \(\sigma^2\).

Using a VP interpolation rather than simple additive noise decouples token scale from stochasticity. With additive noise (\(z = \mu + \sigma\varepsilon\)), the encoder faces gradient pressure to scale latents up to counter the noise: the SNR depends on the magnitude of \(\mu\). The VP formulation (\(z = \alpha\mu + \sigma\varepsilon\) with \(\alpha^2 + \sigma^2 = 1\)) removes this coupling: the noise level is controlled entirely by the predicted log-SNR, independent of the latent magnitude.
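
In code, the posterior sample and mode look like this (a minimal sketch under the sigmoid parameterization above):

```python
import torch

def sample_posterior(mean, logsnr):
    """z = alpha(lambda) * mu + sigma(lambda) * eps, with alpha^2 + sigma^2 = 1."""
    alpha = torch.sigmoid(logsnr).sqrt()
    sigma = torch.sigmoid(-logsnr).sqrt()
    return alpha * mean + sigma * torch.randn_like(mean)

def posterior_mode(mean, logsnr):
    """Mode used at inference (Section 3.3): just the alpha-scaled mean."""
    return torch.sigmoid(logsnr).sqrt() * mean
```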

3.2 Variance Expansion Loss

To prevent posterior collapse (where the encoder learns to set σ → 0 and ignore the stochastic component entirely), we adopt a variance expansion loss inspired by VEL (Li et al., 2026, arXiv:2603.21085):

$$\mathcal{L}_\text{var} = -\operatorname{mean}\!\bigl(\log(\sigma^2 + \delta)\bigr)$$

where \(\sigma^2\) is the posterior variance derived from the predicted log-SNR and \(\delta = 10^{-6}\) for numerical stability. This loss encourages non-zero posterior variance by penalizing small \(\sigma^2\).

VEL proposes the form \(1/(\sigma^2 + \delta)\) for variance expansion. We found this to be too aggressive: the \(1/\sigma^2\) gradient pushes variance up very rapidly, leading to excessive high-frequency noise in the latent space. We use the \(-\log(\sigma^2 + \delta)\) form instead, which provides a gentler, logarithmic penalty that stabilizes training.
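
A sketch of both penalties, with \(\sigma^2 = \sigma(-\lambda)\) taken from the predicted per-element log-SNR:

```python
import torch

def variance_expansion_loss(logsnr, delta=1e-6):
    """-log(sigma^2 + delta), averaged over all elements; sigma^2 = sigmoid(-lambda)."""
    sigma2 = torch.sigmoid(-logsnr)
    return -torch.log(sigma2 + delta).mean()

def vel_loss(logsnr, delta=1e-6):
    """Original VEL form; its gradient (~1/sigma^4 vs ~1/sigma^2) pushes variance up much harder."""
    return (1.0 / (torch.sigmoid(-logsnr) + delta)).mean()
```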

For this checkpoint: the variance expansion loss is active with weight 1e-5.

Key finding: latent spectral structure matters for downstream diffusion.

Reconstruction quality is not very sensitive to the posterior noise level: good PSNR is achievable even with log-SNR as low as -2. However, the posterior noise level has a strong effect on the spatial frequency content of the latent space. When variance expansion is too aggressive, the latent space develops excessive high-frequency content; when it is too weak or absent, latents become overly smooth.

We found empirically that downstream diffusion models converge best when the latent space has a radial power spectral density (PSD) decay exponent of approximately 1.5; deviating significantly in either direction (too smooth or too high-frequency) consistently yields worse downstream training convergence. We monitor this metric during training validation to guide the variance expansion weight.

The weight of 1e-5 for this checkpoint was chosen to target this spectral sweet spot.
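
The decay exponent can be estimated roughly as the negated slope of log power versus log radial frequency. The sketch below fits the slope over all non-DC frequency bins rather than binning radially, so it approximates the validation metric rather than reproducing it exactly.

```python
import torch

def radial_psd_exponent(z):
    """Estimate the radial PSD decay exponent of latents z [B, C, h, w]:
    the negated slope of log power versus log radial frequency."""
    _, _, h, w = z.shape
    power = torch.fft.fft2(z.float()).abs().pow(2).mean(dim=(0, 1))      # [h, w] mean power spectrum
    fy = torch.fft.fftfreq(h, device=z.device).view(-1, 1)
    fx = torch.fft.fftfreq(w, device=z.device).view(1, -1)
    radius = torch.sqrt(fy ** 2 + fx ** 2).flatten()
    power = power.flatten()
    mask = radius > 0                                                    # drop the DC bin
    log_f = radius[mask].log()
    log_p = power[mask].clamp_min(1e-12).log()
    slope = torch.cov(torch.stack([log_f, log_p]))[0, 1] / log_f.var()   # least-squares slope
    return -slope
```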

3.3 Posterior Mode for Inference

At inference, the encoder returns the posterior mode: z = α(λ) · μ. For this checkpoint, the posterior log-SNR is very high (posterior variance is negligible), so sampling and mode are nearly identical.

The encode_posterior() method is available for users who need the full posterior distribution.


4. Semantic Alignment

This checkpoint uses semantic alignment to encourage semantically structured latent representations. The approach is inspired by DRA (Page et al., 2026, arXiv:2601.05823) which aligns autoencoder latents with frozen vision encoder features. Our implementation differs in the projection architecture and noise schedule.

4.1 Teacher

A frozen DINOv2-S with registers (timm: vit_small_patch16_dinov3.lvd_1689m, 384-dim patch tokens) provides the target spatial semantic features.

4.2 Projection Head

The student projection head maps noisy encoder latents to the teacher's token space. It consists of:

Noisy latents z_noisy ∈ ℝ^{B×128×h×w}
  ──► Conv 1×1 (128 → 384)                      [Channel projection]
  ──► Flatten to tokens [B, T, 384]
  ──► DiT transformer block                     [Single block, 6 heads × 64 dim]
      (self-attention with axial RoPE 2D + AdaLN conditioned on τ)
  ──► RMSNorm
  ──► student tokens ∈ ℝ^{B×T×384}

The DiT block uses standard multi-head self-attention with 2D axial rotary position embeddings (RoPE) and AdaLN-Zero timestep conditioning. This gives the projection head global spatial reasoning (important for matching the teacher's self-attention-based representations) while the main encoder/decoder remain purely convolutional.

4.3 Noisy Alignment

Unlike standard representation alignment which operates on clean latents, we align noisy latent versions. The noise level \(\tau\) is sampled from a \(\text{Beta}(2,2)\) distribution (concentrated around \(\tau = 0.5\)) using flow matching linear interpolation:

$$z_\text{noisy} = (1 - \tau) \, z + \tau \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I), \quad \tau \sim \text{Beta}(2, 2)$$

The projection head receives both the noisy latents and the noise level \(\tau\) (via its AdaLN conditioning). This trains the head to extract semantic information even from partially corrupted latents, improving robustness for downstream diffusion models which operate on noised latent inputs.
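
A minimal sketch of the noisy-alignment step, including the cosine comparison described in Section 4.4; the projection-head call signature is an assumption.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(z, teacher_tokens, proj_head):
    """Noisy alignment sketch: corrupt latents, project to token space, compare to teacher."""
    n = z.shape[0]
    tau = torch.distributions.Beta(2.0, 2.0).sample((n,)).to(z.device)   # tau ~ Beta(2, 2)
    tau_b = tau.view(n, 1, 1, 1)
    z_noisy = (1 - tau_b) * z + tau_b * torch.randn_like(z)              # flow-matching interpolation
    student_tokens = proj_head(z_noisy, tau)                             # head is conditioned on tau
    cos = F.cosine_similarity(student_tokens, teacher_tokens, dim=-1)    # per-token similarity
    return (1 - cos).mean()                                              # weighted by 0.01 in the total loss
```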

4.4 Training Details

The alignment loss is the mean negative cosine similarity between student and teacher tokens, weighted at 0.01 throughout training. The student projection head operates on all 128 bottleneck channels, unlike the predecessor iRDiffAE which aligned only the first 64 of 128 channels.

Note that the projection head is a training-only component; it is not included in the exported model weights.


5. Design Choices

5.1 Convolutional Architecture

SemDisDiffAE uses a fully convolutional architecture rather than a vision transformer. For an autoencoder whose goal is faithful pixel-level reconstruction (not global semantic understanding), convolutions offer:

  • Resolution generalization. Convolutions operate on local patches and generalize naturally to arbitrary image dimensions without interpolating position embeddings or suffering attention distribution shift.
  • Translation invariance. Weight sharing across spatial positions is well matched to reconstruction, where the same local patterns (edges, textures) conditioned on the low-frequency latent recur throughout the image.
  • Locality. Reconstruction quality depends on preserving fine spatial detail. Convolutions are inherently local operators, avoiding the quadratic cost of global attention while focusing computation where it matters most.

5.2 Single-Stride Encoder with Final Bottleneck

The encoder uses a single spatial stride (PixelUnshuffle at the input) followed by blocks at constant spatial resolution, then a final 1×1 convolution to project to the bottleneck. This differs from classical VAE encoders that use progressive downsampling with channel expansion at each stage.

The single-stride design ensures that all encoder blocks see the full spatial resolution and full channel width simultaneously. The information bottleneck is imposed only at the very end, where a single linear projection selects which channels to retain.

5.3 Diffusion Decoding

The main advantage of diffusion decoding over the standard GAN + LPIPS approach is simplicity and speed of experimentation. The training objective is a straightforward weighted MSE: no discriminator, no LPIPS perceptual loss, no delicate adversarial balancing. This makes it very fast to train and easy to iterate on; typically a few hours on a single GPU is sufficient. This checkpoint was trained for 251k steps. By contrast, GAN + LPIPS-based VAEs require many days of large-GPU time and are notoriously difficult to stabilize from scratch.

This simplicity enables rapid experimentation with latent space shaping to get it as diffusion-friendly as possible, while still achieving excellent reconstruction quality.

5.4 Skip Connection and Path-Drop Guidance

The decoder's start → middle → skip-fuse → end architecture is inspired by SPRINT's sparse-dense residual fusion (Park et al., 2025). The design serves three purposes:

  1. Regularization. The skip path ensures that even if the middle blocks are dropped or poorly conditioned, the end blocks still receive meaningful features from the start blocks.
  2. High-frequency preservation. The start blocks (which see the input most directly) pass fine detail through the skip to the end blocks.
  3. Path-Drop Guidance. At inference, replacing the middle block output with a learned mask feature creates an "unconditional" prediction that preserves the skip path but drops the deep processing. Interpolating between conditional and unconditional predictions (as in classifier-free guidance) sharpens the output without requiring training-time dropout.

6. Training

6.1 Loss Functions

The total training loss is:

$$\mathcal{L}_\text{total} = \mathcal{L}_\text{recon} + 0.01 \cdot \mathcal{L}_\text{semantic} + 10^{-4} \cdot \mathcal{L}_\text{scale} + 10^{-5} \cdot \mathcal{L}_\text{var}$$

| Loss | Weight | Description |
|---|---|---|
| \(\mathcal{L}_\text{recon}\) | 1.0 | SiD2 sigmoid-weighted x-prediction MSE (\(b = -2.0\)). Per-pixel \((\hat{x}_0 - x_0)^2\) averaged over (C, H, W) per sample, multiplied by \(w(t) = -\tfrac{1}{2} \tfrac{d\lambda}{dt} e^b \sigma(\lambda - b)\), then averaged over the batch |
| \(\mathcal{L}_\text{semantic}\) | 0.01 | Per-token \(1 - \cos(\text{student}, \text{teacher})\) averaged over all tokens and batch (see §4) |
| \(\mathcal{L}_\text{scale}\) | 0.0001 | Per-channel variance \(\text{var}_c\) estimated over (B, H, W), then \((\log(\text{var}_c + \varepsilon) - \log(\text{target}))^2\) averaged over channels. Target variance = 1.0 |
| \(\mathcal{L}_\text{var}\) | 1e-5 | Per-element \(-\log(\sigma^2 + \delta)\), where \(\sigma^2\) is the posterior variance, averaged over all dims (B, C, H, W). See §3.2 |

Note on loss scales: The decoder reconstruction loss has a small effective magnitude due to the SiD2 VP x-prediction weighting (the Jacobian \(d\lambda/dt\) and sigmoid weighting compress the per-sample loss scale). As a result, all auxiliary loss weights must be kept correspondingly small to avoid dominating the reconstruction objective.

6.2 Optimizer and Hyperparameters

| Parameter | Value |
|---|---|
| Optimizer | AdamW (β₁ = 0.9, β₂ = 0.99) |
| Learning rate | 1e-4 (constant after warmup) |
| Weight decay | 0.0 |
| Warmup steps | 2,000 |
| Gradient clip | 1.0 (max norm) |
| Precision | AMP bfloat16 (FP32 master weights, TF32 matmul) |
| EMA decay | 0.9995 (updated every step) |
| Batch size | 128 |
| Timestep sampling | Uniform with SiD2 logSNR shift -1.0 |
| Compilation | torch.compile enabled |
| Training steps | 251k |
| Hardware | Single GPU |

Convergence is fast: training is stopped when the training loss starts to plateau, which typically occurs within a few hours on a single GPU.

6.3 Data

Training uses ~5M images at various resolutions: mostly photographs, with a significant proportion of illustrations and text-heavy images (documents, screenshots, book covers, diagrams) to encourage crisp line and edge reconstruction. Images are loaded via two strategies in a 50/50 mix:

  • Full-image downsampling: images are bucketed by aspect ratio and downsampled to ~256² resolution (preserving aspect ratio).
  • Random 256×256 crops: deterministic patches extracted from images stored at ≥512px resolution.

This mixed strategy exposes the model to both global scene composition (via downsampled full images) and fine local detail (via crops from higher-resolution sources).


7. Model Configuration

| Parameter | Value |
|---|---|
| Patch size | 16 |
| Model dimension | 896 |
| Encoder depth | 4 blocks |
| Decoder depth | 8 blocks (2 start + 4 middle + 2 end) |
| Bottleneck dimension | 128 channels |
| Spatial compression | 16× (H/16 × W/16) |
| Total compression | 6.0× (3·256 / 128) |
| MLP ratio | 4.0 |
| Depthwise kernel | 7×7 |
| AdaLN per-block delta rank | 128 |
| Block type | FCDM (ConvNeXt + GRN + scale/gate AdaLN) |
| Posterior | Diagonal Gaussian (VP log-SNR), variance expansion weight 1e-5 |
| Bottleneck norm | Disabled |
| λ_min, λ_max | -10, +10 |
| Sigmoid bias b | -2.0 |
| Pixel noise std s | 0.558 |
| Parameters | 88.8M |

8. Inference

Recommended Settings

| Use case | Steps (NFE) | PDG | Sampler | Notes |
|---|---|---|---|---|
| PSNR-optimal | 1 | off | DDIM | Default. Fastest. |
| Perceptual | 10 | on (2.0) | DDIM | Sharper details, ~15× slower (PDG skips middle blocks) |

Usage

```python
from fcdm_diffae import FCDMDiffAE, FCDMDiffAEInferenceConfig

# Load model
model = FCDMDiffAE.from_pretrained("data-archetype/semdisdiffae", device="cuda")

# Encode (returns posterior mode by default)
latents = model.encode(images)  # [B,3,H,W] -> [B,128,H/16,W/16]

# Decode (1 step)
recon = model.decode(latents, height=H, width=W)

# Full posterior access
posterior = model.encode_posterior(images)
print(posterior.mean.shape, posterior.logsnr.shape)
z_sampled = posterior.sample()
```

Citation

```bibtex
@misc{semdisdiffae,
  title   = {SemDisDiffAE: A Semantically Disentangled Diffusion Autoencoder},
  author  = {data-archetype},
  email   = {data-archetype@proton.me},
  year    = {2026},
  month   = apr,
  url     = {https://huggingface.co/data-archetype/semdisdiffae},
}
```

9. Results

Reconstruction quality evaluated on a curated set of test images covering photographs, book covers, and documents.

9.1 Interactive Viewer

Open full-resolution comparison viewer: side-by-side reconstructions, RGB deltas, and latent PCA with adjustable image size.

9.2 Inference Settings

| Setting | Value |
|---|---|
| Sampler | ddim |
| Steps | 1 |
| Schedule | linear |
| Seed | 42 |
| PDG | no_path_dropg |
| Batch size (timing) | 4 |

All models run in bfloat16. Timings measured on an NVIDIA RTX Pro 6000 (Blackwell).

9.3 Global Metrics

| Metric | semdisdiffae (1 step) | Flux.2 VAE |
|---|---|---|
| Avg PSNR (dB) | 35.78 | 34.16 |
| Avg encode (ms/image) | 2.5 | 46.1 |
| Avg decode (ms/image) | 5.5 | 91.8 |

9.4 Per-Image PSNR (dB)

| Image | semdisdiffae (1 step) | Flux.2 VAE |
|---|---|---|
| p640x1536:94623 | 35.44 | 33.50 |
| p640x1536:94624 | 31.33 | 30.03 |
| p640x1536:94625 | 35.05 | 33.98 |
| p640x1536:94626 | 33.21 | 31.53 |
| p640x1536:94627 | 32.54 | 30.53 |
| p640x1536:94628 | 29.80 | 28.88 |
| p960x1024:216264 | 46.37 | 45.39 |
| p960x1024:216265 | 29.70 | 27.80 |
| p960x1024:216266 | 47.15 | 46.20 |
| p960x1024:216267 | 40.99 | 39.23 |
| p960x1024:216268 | 38.47 | 36.13 |
| p960x1024:216269 | 32.74 | 30.24 |
| p960x1024:216270 | 36.23 | 34.18 |
| p960x1024:216271 | 44.41 | 42.18 |
| p704x1472:94699 | 43.80 | 41.79 |
| p704x1472:94700 | 32.83 | 32.08 |
| p704x1472:94701 | 39.00 | 37.90 |
| p704x1472:94702 | 34.52 | 32.50 |
| p704x1472:94703 | 32.81 | 31.35 |
| p704x1472:94704 | 33.38 | 31.84 |
| p704x1472:94705 | 39.70 | 37.44 |
| p704x1472:94706 | 35.12 | 33.66 |
| r256_p1344x704:15577 | 31.02 | 29.98 |
| r256_p1344x704:15578 | 32.38 | 30.79 |
| r256_p1344x704:15579 | 33.27 | 31.83 |
| r256_p1344x704:15580 | 37.84 | 36.03 |
| r256_p1344x704:15581 | 38.57 | 36.94 |
| r256_p1344x704:15582 | 33.41 | 32.10 |
| r256_p1344x704:15583 | 36.67 | 34.54 |
| r256_p1344x704:15584 | 33.23 | 31.76 |
| r256_p896x1152:144131 | 35.30 | 33.60 |
| r256_p896x1152:144132 | 36.99 | 35.32 |
| r256_p896x1152:144133 | 39.69 | 37.33 |
| r256_p896x1152:144134 | 36.01 | 34.47 |
| r256_p896x1152:144135 | 31.20 | 29.87 |
| r256_p896x1152:144136 | 37.51 | 35.68 |
| r256_p896x1152:144137 | 33.83 | 32.86 |
| r256_p896x1152:144138 | 27.39 | 25.63 |
| VAE_accuracy_test_image | 36.64 | 35.25 |