LayerSync: Self-aligning Intermediate Layers
Authors: Yasaman Haghighi*, Bastien van Delft*, Mariam Hassan, Alexandre Alahi
Affiliation: École Polytechnique Fédérale de Lausanne (EPFL)
Abstract
We propose LayerSync, a domain-agnostic approach for improving the generation quality and training efficiency of diffusion models. Prior studies have highlighted the connection between generation quality and the representations learned by diffusion models, showing that external guidance on intermediate representations accelerates training. We reconceptualize this paradigm by regularizing diffusion models with their own intermediate representations. LayerSync is a self-sufficient, plug-and-play regularization term that adds no overhead to diffusion model training and generalizes beyond the visual domain to other modalities.
Available Checkpoints
This repository contains pretrained model checkpoints:
- checkpoint.pt: Trained model after 800 epochs
- representation.pt: Checkpoint used for representation quality evaluation experiments
Usage
Download the checkpoints and use them with the LayerSync codebase.
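A minimal sketch of how a downloaded checkpoint can be inspected before plugging it into the LayerSync codebase. The checkpoint contents and key names here are assumptions for illustration; the sketch saves a small state dict and reloads it the same way one would load the released checkpoint.pt:

```python
import torch

# Hypothetical stand-in for checkpoint.pt: a tiny state dict saved to disk.
state = {"weight": torch.zeros(2, 2), "bias": torch.zeros(2)}
torch.save(state, "demo_checkpoint.pt")

# Loading onto CPU avoids requiring a GPU just to inspect the file.
ckpt = torch.load("demo_checkpoint.pt", map_location="cpu")

# Listing the keys is a quick sanity check before calling load_state_dict
# on the matching model class from the LayerSync codebase.
print(sorted(ckpt.keys()))
```

With a real checkpoint, the loaded dict would then be passed to the corresponding model's `load_state_dict`.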
To generate images using the provided scripts:
torchrun --nnodes=1 --nproc_per_node=N sample_ddp.py ODE \
--model SiT-XL/2 \
--num-fid-samples 50000
Citation
@misc{haghighi2025layersyncselfaligningintermediatelayers,
title={LayerSync: Self-aligning Intermediate Layers},
author={Yasaman Haghighi and Bastien van Delft and Mariam Hassan and Alexandre Alahi},
year={2025},
eprint={2510.12581},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.12581},
}
Links
- Paper: arXiv:2510.12581
- Project Page: vita-epfl.github.io/LayerSync.io
- Code: github.com/vita-epfl/LayerSync