In the field of text-to-image generation, recent remarkable progress in Stable Diffusion has made it possible to generate a rich variety of novel photorealistic images. However, current models still suffer from misalignment issues (e.g., problematic spatial relation understanding and numeration failure) in complex natural scenes, which impedes high-faithfulness text-to-image generation. Although recent efforts have been made to improve controllability by providing fine-grained guidance (e.g., sketches and scribbles), this issue has not been fundamentally tackled, since users still have to provide such guidance information manually.
In this work, we strive to synthesize high-fidelity images that are semantically aligned with a given textual prompt, without any manually provided guidance. To this end, we propose a coarse-to-fine paradigm that couples layout planning with image generation. Concretely, we first generate a coarse-grained layout conditioned on the given textual prompt via in-context learning with Large Language Models. We then propose a fine-grained object-interaction diffusion method to synthesize high-faithfulness images conditioned on the prompt and the automatically generated layout. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art models in terms of both cross-modal text-layout alignment and high-faithfulness image generation.
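The snippet below is a minimal sketch (not the released code) of this coarse-to-fine pipeline: stage 1 prompts an LLM with in-context examples to plan a layout of phrases and bounding boxes, and stage 2 passes the prompt plus the planned layout to a layout-conditioned diffusion model. The in-context example, the GPT-3.5 call, and the use of the diffusers GLIGEN pipeline as a stand-in for our object-interaction diffusion module are all assumptions for illustration only.

```python
# Hedged sketch of the two-stage pipeline; not the paper's actual implementation.
import json
import torch
from openai import OpenAI                      # assumes OPENAI_API_KEY is set
from diffusers import StableDiffusionGLIGENPipeline

# Hypothetical in-context example showing the layout format we ask the LLM to emit.
IN_CONTEXT_EXAMPLE = """\
Caption: two cats sitting on a sofa
Layout: [{"phrase": "cat", "box": [0.05, 0.35, 0.45, 0.85]},
         {"phrase": "cat", "box": [0.55, 0.35, 0.95, 0.85]},
         {"phrase": "sofa", "box": [0.00, 0.30, 1.00, 1.00]}]
"""

def plan_layout(caption: str) -> list[dict]:
    """Stage 1: coarse-grained layout planning via LLM in-context learning."""
    client = OpenAI()
    prompt = (
        "Given a caption, output a JSON list of objects, each with a phrase and a "
        "normalized bounding box [x0, y0, x1, y1].\n\n"
        f"{IN_CONTEXT_EXAMPLE}\nCaption: {caption}\nLayout:"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    # Assumes the LLM returns valid JSON; production code would validate/repair it.
    return json.loads(resp.choices[0].message.content)

def generate_image(caption: str, layout: list[dict]):
    """Stage 2: layout-conditioned image synthesis (GLIGEN used as a stand-in)."""
    pipe = StableDiffusionGLIGENPipeline.from_pretrained(
        "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(
        prompt=caption,
        gligen_phrases=[obj["phrase"] for obj in layout],
        gligen_boxes=[obj["box"] for obj in layout],
        gligen_scheduled_sampling_beta=1.0,
        num_inference_steps=50,
    ).images[0]

if __name__ == "__main__":
    caption = "a dog to the left of a red car"
    layout = plan_layout(caption)
    generate_image(caption, layout).save("result.png")
```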
Despite the satisfactory performance achieved by recent Stable Diffusion-based models, synthesizing high-faithfulness images in complex scenes remains challenging, with failure modes such as problematic spatial relation understanding and numeration failure, as shown in the figure below. To achieve high-faithfulness image synthesis, existing models must overcome the following challenges:
In Figure 2, we illustrate the overall architecture of the proposed layout-guided diffusion model, consisting of two modules:
Results on the Spatial and Semantic inputs. From left to right: the ground-truth (GT) image, the ground-truth layout with the corresponding generated image (GT*), the layout generated by LayoutDM, and our results for each prompt.
Results on the Numerical, Mixed, and Null inputs.
You may refer to previous works such as GLIGEN, Stable Diffusion, and GPT-3.5, which serve as the foundations of our LayoutLLM-T2I framework and code repository.
@inproceedings{qu2023layoutllm,
  title={LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation},
  author={Qu, Leigang and Wu, Shengqiong and Fei, Hao and Nie, Liqiang and Chua, Tat-Seng},
  booktitle={Proceedings of the {ACM} International Conference on Multimedia},
  year={2023}
}