We propose LayerSync, a domain-agnostic approach for improving both the generation quality and the training efficiency of diffusion models. Prior studies have highlighted the connection between generation quality and the representations learned by diffusion models, showing that external guidance on intermediate model representations accelerates training. We reconceptualize this paradigm by regularizing diffusion models with their own intermediate representations. Building on the observation that representation quality varies across diffusion model layers, we show that the most semantically rich representations can act as intrinsic guidance for weaker ones, reducing the need for external supervision. Our approach, LayerSync, is a self-sufficient, plug-and-play regularization term that adds no overhead to diffusion model training and generalizes beyond the visual domain to other modalities. LayerSync requires no pretrained models or additional data. We extensively evaluate the method on image generation and demonstrate its applicability to other domains such as audio, video, and motion generation. We show that it consistently improves generation quality and training efficiency. For example, we speed up the training of a flow-based transformer by over 8.75× on the ImageNet dataset and improve generation quality by 23.6%.
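At its core, LayerSync adds an alignment term between two of the model's own intermediate representations, with no external encoder involved. The snippet below is a minimal sketch of how such a self-alignment regularizer could be wired into a training step. The layer indices, the cosine-similarity objective, the stop-gradient on the guiding layer, and names such as `layer_alignment_loss`, `lambda_align`, and `return_hidden_states` are illustrative assumptions, not the exact formulation in the paper.

```python
# Hedged sketch of a self-alignment regularizer in the spirit of LayerSync.
# Assumptions (not taken from the paper): the guiding layer is detached
# (stop-gradient), alignment uses negative cosine similarity, and the layer
# indices and weighting are illustrative.
import torch
import torch.nn.functional as F

def layer_alignment_loss(hidden_states, guided_idx=4, guiding_idx=20):
    """Align a weaker layer's tokens with a semantically richer layer's tokens.

    hidden_states: list of tensors of shape [B, N, D], one per transformer block.
    """
    guided = hidden_states[guided_idx]              # representation to improve
    guiding = hidden_states[guiding_idx].detach()   # intrinsic "teacher" (stop-grad assumed)
    # Token-wise negative cosine similarity (assumed choice of alignment metric).
    return -F.cosine_similarity(guided, guiding, dim=-1).mean()

# Illustrative use inside a training step:
# hidden_states = model(x_t, t, return_hidden_states=True)  # hypothetical flag
# loss = diffusion_loss + lambda_align * layer_alignment_loss(hidden_states)
```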
LayerSync improves generation quality without relying on external representations. We compare images generated by SiT-XL/2 when regularized with the dispersive loss and with LayerSync. All models are trained on ImageNet 256×256 for 400K iterations, share the same noise, sampler, and number of sampling steps, and none of them use classifier-free guidance.
Side-by-side comparison of audio samples generated with the baseline vs. LayerSync. Both models are trained for the same number of iterations, share the same noise, sampler, and number of sampling steps, and neither uses classifier-free guidance.
The evolution of audio generation over different epochs.
Qualitative comparison between human motions generated with MDM and MDM + LayerSync. The conditioning text is randomly selected from the HumanML3D test set; both models are trained for the same number of iterations, and the generated samples share the same noise.
Qualitative comparison of unconditional video generation on the CLEVRER dataset between the baseline and the baseline + LayerSync. Both models are trained for the same number of iterations.
Qualitative comparison of fine-tuning Wan2.1 and CogVideoX-2B on the SSv2 dataset for text-to-video generation.
Wan2.1
Wan2.1 + LayerSync
Wan2.1
Wan2.1 + LayerSync
Wan2.1
Wan2.1 + LayerSync
Wan2.1
Wan2.1 + LayerSync
CogVideoX
CogVideoX + LayerSync
CogVideoX
CogVideoX + LayerSync
CogVideoX
CogVideoX + LayerSync
If you find LayerSync useful, please cite our work.
@misc{haghighi2025layersyncselfaligningintermediatelayers,
title={LayerSync: Self-aligning Intermediate Layers},
author={Yasaman Haghighi and Bastien van Delft and Mariam Hassan and Alexandre Alahi},
year={2025},
eprint={2510.12581},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.12581},
}