UniRecGen: Unifying Multi-View 3D Reconstruction and Generation
Abstract
UniRecGen combines feed-forward reconstruction and diffusion-based generation in a shared canonical space to produce complete and consistent 3D models from sparse inputs through disentangled cooperative learning.
Sparse-view 3D modeling faces a fundamental tension between reconstruction fidelity and generative plausibility. While feed-forward reconstruction excels in efficiency and input alignment, it often lacks the global priors needed for structural completeness. Conversely, diffusion-based generation provides rich geometric details but struggles with multi-view consistency. We present UniRecGen, a unified framework that integrates these two paradigms into a single cooperative system. To overcome inherent conflicts in coordinate spaces, 3D representations, and training objectives, we align both models within a shared canonical space. We employ disentangled cooperative learning, which keeps training stable while enabling seamless collaboration at inference time. Specifically, the reconstruction module is adapted to provide canonical geometric anchors, while the diffusion generator leverages latent-augmented conditioning to refine and complete the geometric structure. Experimental results demonstrate that UniRecGen achieves superior fidelity and robustness, outperforming existing methods in creating complete and consistent 3D models from sparse observations.
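The cooperative inference flow described in the abstract can be sketched as below. This is a minimal, hypothetical illustration only: the function names, shapes, and the toy "denoising" loop are assumptions for exposition, not the authors' actual architecture. The structure it shows is the paper's stated pipeline: a feed-forward reconstructor produces geometric anchors in a shared canonical space, and a generative module refines and completes them under latent-augmented conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_anchors(views: np.ndarray) -> np.ndarray:
    """Toy stand-in for the feed-forward reconstructor: fuse sparse
    views (already in the canonical frame) into geometric anchors."""
    # views: (n_views, n_points, 3) -> anchors: (n_points, 3)
    return views.mean(axis=0)

def encode_latent(anchors: np.ndarray, dim: int = 8) -> np.ndarray:
    """Toy latent encoder for 'latent-augmented conditioning':
    a fixed random projection of the flattened anchors."""
    proj = rng.standard_normal((anchors.size, dim))
    return anchors.ravel() @ proj  # (dim,)

def generate_refinement(anchors: np.ndarray,
                        latent: np.ndarray,
                        steps: int = 4) -> np.ndarray:
    """Toy stand-in for the diffusion generator: start from noise and
    iteratively pull toward the anchors, modulated by the latent code."""
    x = rng.standard_normal(anchors.shape)   # initial noise sample
    shift = 0.01 * np.tanh(latent).mean()    # latent conditioning signal
    for _ in range(steps):
        x = x + 0.5 * (anchors - x) + shift  # denoise toward anchors
    return x

# Cooperative inference: reconstruction anchors -> latent -> refinement.
views = rng.standard_normal((3, 16, 3))      # 3 sparse views, 16 points
anchors = reconstruct_anchors(views)
latent = encode_latent(anchors)
refined = generate_refinement(anchors, latent)
print(refined.shape)  # (16, 3)
```

The point of the sketch is the division of labor: the reconstructor supplies input-aligned structure (the anchors), while the generator contributes completion, so neither module has to solve both fidelity and plausibility alone.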
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Easy3E: Feed-Forward 3D Asset Editing via Rectified Voxel Flow (2026)
- AirSplat: Alignment and Rating for Robust Feed-Forward 3D Gaussian Splatting (2026)
- RnG: A Unified Transformer for Complete 3D Modeling from Partial Observations (2026)
- Pano3DComposer: Feed-Forward Compositional 3D Scene Generation from Single Panoramic Image (2026)
- MV-SAM3D: Adaptive Multi-View Fusion for Layout-Aware 3D Generation (2026)
- OneWorld: Taming Scene Generation with 3D Unified Representation Autoencoder (2026)
- FTSplat: Feed-forward Triangle Splatting Network (2026)