AutoWeather4D: Autonomous Driving Video Weather Conversion via G-Buffer Dual-Pass Editing
Abstract
AutoWeather4D is a 3D-aware weather editing framework that decouples geometry and illumination through a dual-pass mechanism, enabling efficient and physically accurate weather modification for autonomous driving applications.
Generative video models have significantly advanced the photorealistic synthesis of adverse weather for autonomous driving; however, they consistently demand massive datasets to learn rare weather scenarios. While 3D-aware editing methods alleviate these data constraints by augmenting existing video footage, they are fundamentally bottlenecked by costly per-scene optimization and suffer from inherent geometric and illumination entanglement. In this work, we introduce AutoWeather4D, a feed-forward 3D-aware weather editing framework designed to explicitly decouple geometry and illumination. At the core of our approach is a G-buffer Dual-pass Editing mechanism. The Geometry Pass leverages explicit structural foundations to enable surface-anchored physical interactions, while the Light Pass analytically resolves light transport, accumulating the contributions of local illuminants into the global illumination to enable dynamic 3D local relighting. Extensive experiments demonstrate that AutoWeather4D achieves comparable photorealism and structural consistency to generative baselines while enabling fine-grained parametric physical control, serving as a practical data engine for autonomous driving.
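To make the dual-pass idea concrete, here is a minimal, self-contained sketch of what editing on a G-buffer can look like: a Geometry Pass that anchors snow coverage to upward-facing surfaces, and a Light Pass that sums Lambertian point-light contributions into a global sun term. All names here (`geometry_pass`, `light_pass`, `flake_mask`, `local_lights`) are hypothetical illustrations of the general technique, not the paper's actual implementation.

```python
import numpy as np

def geometry_pass(normals, flake_mask):
    """Geometry Pass: anchor a weather effect (here, snow) to surfaces.
    Snow settles on upward-facing geometry, so coverage is weighted by
    the cosine between each surface normal and the world 'up' axis."""
    up = np.array([0.0, 1.0, 0.0])
    upward = np.clip(normals @ up, 0.0, 1.0)       # (H, W) alignment with 'up'
    return upward * flake_mask                     # per-pixel snow coverage

def light_pass(albedo, normals, positions, sun_dir, sun_rgb, local_lights):
    """Light Pass: accumulate local illuminants into the global term.
    A Lambertian sun term plus point lights with inverse-square falloff
    stands in for the paper's analytic light transport."""
    shading = np.clip(normals @ sun_dir, 0.0, 1.0)[..., None] * sun_rgb
    for light_pos, light_rgb in local_lights:      # e.g., headlights, street lamps
        to_light = light_pos - positions           # (H, W, 3) vectors to the light
        dist2 = np.maximum((to_light ** 2).sum(-1, keepdims=True), 1e-4)
        ndotl = np.clip((normals * to_light / np.sqrt(dist2)).sum(-1, keepdims=True), 0.0, 1.0)
        shading = shading + ndotl * light_rgb / dist2
    return albedo * shading                        # relit frame

# Toy G-buffer: a flat, upward-facing ground plane in a 4x4 frame.
H = W = 4
albedo = np.full((H, W, 3), 0.5)
normals = np.zeros((H, W, 3)); normals[..., 1] = 1.0
positions = np.zeros((H, W, 3))                    # world-space surface points
snow = geometry_pass(normals, flake_mask=np.ones((H, W)))
frame = light_pass(np.clip(albedo + 0.4 * snow[..., None], 0.0, 1.0),
                   normals, positions,
                   sun_dir=np.array([0.0, 1.0, 0.0]),
                   sun_rgb=np.array([1.0, 0.9, 0.8]),
                   local_lights=[(np.array([2.0, 1.0, 2.0]), np.array([0.2, 0.2, 0.5]))])
```

Because the two passes only exchange per-pixel buffers, weather edits (Geometry Pass) and relighting edits (Light Pass) can be varied independently, which is the decoupling the abstract describes.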
Community
Introducing a feed-forward framework for autonomous driving video editing. By explicitly decoupling geometry from lighting and performing direct editing on the G-buffer, our method achieves efficient, physically controllable, and photorealistic transitions across multiple weather conditions and times of day.
With this framework, editing real-world driving videos becomes as intuitive as using a game engine: it allows for flexible control over scene illumination while naturally rendering dynamic weather effects like rain, snow, and fog.
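For instance, fog in this kind of pipeline reduces to a single physical parameter once depth is available in the G-buffer. Below is a minimal sketch of standard depth-based exponential (Beer-Lambert) fog compositing; `apply_fog` and its parameters are hypothetical names chosen for illustration, not code from the released repository.

```python
import numpy as np

def apply_fog(rgb, depth, density=0.05, fog_rgb=(0.7, 0.7, 0.75)):
    """Composite exponential (Beer-Lambert) fog using the G-buffer depth.
    transmittance = exp(-density * depth); 'density' is the single
    physical knob controlling how thick the fog appears."""
    t = np.exp(-density * depth)[..., None]            # (H, W, 1) transmittance
    return t * rgb + (1.0 - t) * np.asarray(fog_rgb)   # blend toward fog color

# Heavier fog is just a larger density value, e.g.:
# foggy_frame = apply_fog(frame, depth_map, density=0.15)
```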
🌐 Project Page: https://lty2226262.github.io/autoweather4d/
📄 arXiv: https://arxiv.org/abs/2603.26546
💻 Code: https://github.com/lty2226262/AutoWeather4D
🤗 Huggingface Paper: https://huggingface.co/papers/2603.26546
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- DynamicVGGT: Learning Dynamic Point Maps for 4D Scene Reconstruction in Autonomous Driving (2026)
- Easy3E: Feed-Forward 3D Asset Editing via Rectified Voxel Flow (2026)
- WeatherCity: Urban Scene Reconstruction with Controllable Multi-Weather Transformation (2026)
- DriveFix: Spatio-Temporally Coherent Driving Scene Restoration (2026)
- ReconDrive: Fast Feed-Forward 4D Gaussian Splatting for Autonomous Driving Scene Reconstruction (2026)
- Motion Forcing: A Decoupled Framework for Robust Video Generation in Motion Dynamics (2026)
- GA-Drive: Geometry-Appearance Decoupled Modeling for Free-viewpoint Driving Scene Generation (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend