Unified Thinker: A General Reasoning Modular Core for Image Generation
Abstract
Unified Thinker addresses the reasoning-execution gap in image generation by decoupling a reasoning module from image generators and using reinforcement learning to optimize visual correctness.
Despite impressive progress in high-fidelity image synthesis, generative models still struggle with logic-intensive instruction following, exposing a persistent reasoning--execution gap. Meanwhile, closed-source systems (e.g., Nano Banana) have demonstrated strong reasoning-driven image generation, highlighting a substantial gap relative to current open-source models. We argue that closing this gap requires not merely better visual generators, but executable reasoning: decomposing high-level intents into grounded, verifiable plans that directly steer the generative process. To this end, we propose Unified Thinker, a task-agnostic reasoning architecture for general image generation, designed as a unified planning core that can plug into diverse generators and workflows. Unified Thinker decouples a dedicated Thinker from the image Generator, enabling modular upgrades of reasoning without retraining the entire generative model. We further introduce a two-stage training paradigm: we first build a structured planning interface for the Thinker, then apply reinforcement learning to ground its policy in pixel-level feedback, encouraging plans that prioritize visual correctness over textual plausibility. Extensive experiments on text-to-image generation and image editing show that Unified Thinker substantially improves image reasoning and generation quality.
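The decoupled Thinker/Generator design described above can be illustrated with a minimal sketch. All names here (`Plan`, `Thinker`, `generate`, `pixel_reward`) are hypothetical stand-ins, not the paper's actual interface: the Thinker emits a structured, verifiable plan, any plug-in generator executes it, and a pixel-level reward on the output (rather than a score on the plan text) would supply the reinforcement-learning signal.

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    """Structured, verifiable plan emitted by the Thinker (illustrative schema)."""
    steps: list[str]
    constraints: dict[str, str] = field(default_factory=dict)


class Thinker:
    """Task-agnostic planner; in the paper's paradigm its policy is tuned with RL."""

    def propose(self, instruction: str) -> Plan:
        # Toy decomposition: one plan step per comma-separated clause.
        return Plan(steps=[s.strip() for s in instruction.split(",") if s.strip()])


def generate(plan: Plan) -> str:
    # Stand-in for any pluggable image generator that executes the plan;
    # here it just concatenates the steps so the pipeline is runnable.
    return " | ".join(plan.steps)


def pixel_reward(image: str, instruction: str) -> float:
    # Stand-in for pixel-level feedback: fraction of instruction clauses
    # that are actually reflected in the generated output.
    clauses = [s.strip() for s in instruction.split(",") if s.strip()]
    hits = sum(c in image for c in clauses)
    return hits / max(len(clauses), 1)


thinker = Thinker()
instr = "a red cube on a table, soft morning light"
plan = thinker.propose(instr)
img = generate(plan)
print(pixel_reward(img, instr))  # prints 1.0: both clauses were executed
```

Because the Generator only sees the `Plan` interface, the Thinker can be upgraded or retrained independently, which is the modularity the abstract argues for.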
Community
reasoning-based image generation and editing
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (2025)
- ThinkGen: Generalized Thinking for Visual Generation (2025)
- EditThinker: Unlocking Iterative Reasoning for Any Image Editor (2025)
- A Reason-then-Describe Instruction Interpreter for Controllable Video Generation (2025)
- RePlan: Reasoning-guided Region Planning for Complex Instruction-based Image Editing (2025)
- What Happens Next? Next Scene Prediction with a Unified Video Model (2025)
- MIRA: Multimodal Iterative Reasoning Agent for Image Editing (2025)