LayerSync: Self-aligning Intermediate Layers

Yasaman Haghighi* Bastien van Delft* Mariam Hassan Alexandre Alahi
Ecole Polytechnique Fédérale de Lausanne (EPFL)
* Equal Contributors   

Abstract

We propose LayerSync, a domain-agnostic approach for improving the generation quality and the training efficiency of diffusion models. Prior studies have highlighted the connection between the quality of generation and the representations learned by diffusion models, showing that external guidance on model intermediate representations accelerates training. We reconceptualize this paradigm by regularizing diffusion models with their own intermediate representations. Building on the observation that representation quality varies across diffusion model layers, we show that the most semantically rich representations can act as intrinsic guidance for weaker ones, reducing the need for external supervision. Our approach, LayerSync, is a self-sufficient, plug-and-play regularization term that adds no overhead to diffusion model training and generalizes beyond the visual domain to other modalities. LayerSync requires no pretrained models and no additional data. We extensively evaluate the method on image generation and demonstrate its applicability to other domains such as audio, video, and motion generation. We show that it consistently improves generation quality and training efficiency. For example, we speed up the training of flow-based transformers by over 8.75× on the ImageNet dataset and improve generation quality by 23.6%.
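The core idea of regularizing weaker layers with the model's own semantically richer ones can be sketched as an alignment loss between intermediate features. The sketch below is a minimal illustration, not the paper's exact formulation: the function name, the cosine-similarity objective, and the stop-gradient treatment of the deep features are all assumptions made for clarity.

```python
import numpy as np

def layersync_loss(shallow_feats, deep_feats, eps=1e-8):
    """Hypothetical sketch of a self-alignment regularizer.

    shallow_feats, deep_feats: arrays of shape (tokens, dim), taken from
    a shallow and a deep transformer layer of the same diffusion model.
    In a real training loop, deep_feats would act as a fixed target
    (stop-gradient) so only the shallow layer is pulled toward the
    semantically richer representation.
    """
    # L2-normalize each token's feature vector.
    s = shallow_feats / (np.linalg.norm(shallow_feats, axis=-1, keepdims=True) + eps)
    d = deep_feats / (np.linalg.norm(deep_feats, axis=-1, keepdims=True) + eps)
    # 1 - mean cosine similarity: 0 when perfectly aligned, 2 when opposed.
    return 1.0 - float(np.mean(np.sum(s * d, axis=-1)))
```

In practice such a term would be added to the diffusion objective with a weighting coefficient (e.g. `total = diffusion_loss + lam * layersync_loss(...)`, where `lam` is a hypothetical hyperparameter); see the paper for the actual objective and layer choices.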

LayerSync improvement overview
LayerSync improves training efficiency and generation quality via internal representation alignment. (a) LayerSync aligns deep and shallow layers. (b) LayerSync achieves over 8.75× training acceleration on ImageNet 256×256. (c) LayerSync consistently improves generation quality across multiple modalities: by 23.6% on FID for images (ImageNet 256×256), 24% on FAD for audio (MTG-Jamendo), and 7.7% on FID for human motion (HumanML3D).

Representation Learning

LayerSync representation quality across layers
Assessing the quality of intermediate features shows that LayerSync improves average validation accuracy across layers (shown with dashed lines in the figures) for both classification and segmentation, and enhances alignment with DINOv2.
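A common way to assess layer-wise feature quality, as in the figure above, is to train a lightweight probe on frozen intermediate features and report validation accuracy. The sketch below uses a one-hot ridge-regression probe as one simple instance of this protocol; the probe type, function name, and regularization value are assumptions, not the paper's exact evaluation setup.

```python
import numpy as np

def linear_probe_accuracy(train_feats, train_labels, val_feats, val_labels, reg=1e-3):
    """Hypothetical linear probe: ridge regression onto one-hot labels.

    Features are frozen activations from one intermediate layer; higher
    validation accuracy suggests a more linearly separable (richer)
    representation at that layer.
    """
    n_classes = int(train_labels.max()) + 1
    Y = np.eye(n_classes)[train_labels]          # one-hot targets
    X = train_feats
    # Closed-form ridge solution: W = (X^T X + reg I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    preds = (val_feats @ W).argmax(axis=1)
    return float((preds == val_labels).mean())
```

Running this probe at every layer and averaging the accuracies yields the kind of per-layer curves (and dashed averages) shown in the figure.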

Image Generation

LayerSync improves generation quality without relying on external representations. We compare images generated by SiT-XL/2 when regularized with the dispersive loss and with LayerSync. All models are trained on ImageNet 256×256 for 400K iterations, share the same noise, sampler, and number of sampling steps, and none of them use classifier-free guidance.

LayerSync Image Generation Comparison

Audio Generation

Side-by-side comparison of audio samples generated by the baseline and by LayerSync. Both models are trained for the same number of iterations, share the same noise, sampler, and number of sampling steps, and neither uses classifier-free guidance.

Samples: 01 02 03 04 05 06 07

Sample 01

Baseline
LayerSync

Sample 02

Baseline
LayerSync

Sample 03

Baseline
LayerSync

Sample 04

Baseline
LayerSync

Sample 05

Baseline
LayerSync

Sample 06

Baseline
LayerSync

Sample 07

Baseline
LayerSync

Evolution over Training

The evolution of audio generation over training epochs.

Baseline
Epoch 280
Epoch 475
Epoch 650
Baseline + LayerSync
Epoch 280
Epoch 475
Epoch 650

Text-Conditioned Human Motion Generation

Qualitative comparison between human motions generated with MDM and with MDM + LayerSync. The conditioning text is randomly selected from the HumanML3D test set; both models are trained for the same number of iterations, and the generated samples share the same noise.

MDM
MDM + LayerSync
MDM
MDM + LayerSync
MDM
MDM + LayerSync
MDM
MDM + LayerSync

Video Generation

Training on CLEVRER

Qualitative comparison of unconditional video generation on the CLEVRER dataset between the baseline and baseline + LayerSync. Both models are trained for the same number of iterations.

Baseline
Baseline + LayerSync

Finetuning on SSv2

Qualitative comparison of finetuning Wan2.1 and CogVideoX-2B on the SSv2 dataset for text-to-video generation.

Wan2.1

Wan2.1 + LayerSync

Wan2.1

Wan2.1 + LayerSync

Wan2.1

Wan2.1 + LayerSync

Wan2.1

Wan2.1 + LayerSync

CogVideoX

CogVideoX + LayerSync

CogVideoX

CogVideoX + LayerSync

CogVideoX

CogVideoX + LayerSync

Citation

If you find LayerSync useful, please cite our work.

@misc{haghighi2025layersyncselfaligningintermediatelayers,
      title={LayerSync: Self-aligning Intermediate Layers},
      author={Yasaman Haghighi and Bastien van Delft and Mariam Hassan and Alexandre Alahi},
      year={2025},
      eprint={2510.12581},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.12581},
}