LayoutLLM-T2I:

Eliciting Layout Guidance from LLM

for Text-to-Image Generation

1. NExT++ Lab, National University of Singapore
2. Harbin Institute of Technology (Shenzhen)
*Equal Contribution         #Correspondence

Abstract

In the text-to-image generation field, recent remarkable progress in Stable Diffusion makes it possible to generate a rich variety of novel photorealistic images. However, current models still face misalignment issues (e.g., problematic spatial relation understanding and numeration failure) in complex natural scenes, which impede high-faithfulness text-to-image generation. Although recent efforts have been made to improve controllability by providing fine-grained guidance (e.g., sketches and scribbles), this issue has not been fundamentally tackled since users have to supply such guidance information manually.

In this work, we strive to synthesize high-fidelity images that are semantically aligned with a given textual prompt without any manual guidance. Toward this end, we propose a coarse-to-fine paradigm that performs layout planning and then image generation. Concretely, we first generate a coarse-grained layout conditioned on the given textual prompt via in-context learning with Large Language Models (LLMs). Afterward, we propose a fine-grained object-interaction diffusion method to synthesize high-faithfulness images conditioned on the prompt and the automatically generated layout. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art models in terms of cross-modal text-layout alignment and high-faithfulness image generation.
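As a concrete illustration of the first, coarse-grained stage, the sketch below elicits an object layout from an LLM via in-context learning. The prompt template, the in-context demonstrations, the JSON output format, and the use of the OpenAI chat-completion API are illustrative assumptions rather than the exact prompt or interface used in our implementation.

import json
from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hand-written (caption, layout) demonstrations used as in-context examples.
# Boxes are normalized [x0, y0, x1, y1]; this output format is hypothetical.
IN_CONTEXT_EXAMPLES = """\
Caption: a cat sitting on a sofa
Layout: [{"object": "cat", "box": [0.35, 0.40, 0.65, 0.80]}, {"object": "sofa", "box": [0.10, 0.50, 0.90, 0.95]}]
Caption: two birds flying above a lake
Layout: [{"object": "bird", "box": [0.20, 0.10, 0.35, 0.25]}, {"object": "bird", "box": [0.55, 0.05, 0.70, 0.20]}, {"object": "lake", "box": [0.00, 0.60, 1.00, 1.00]}]
"""

def induce_layout(caption: str):
    """Ask the LLM to plan normalized object bounding boxes for a caption."""
    prompt = (
        "Plan a 2D layout for the caption. Return only a JSON list of objects, "
        "each with a normalized bounding box [x0, y0, x1, y1].\n\n"
        + IN_CONTEXT_EXAMPLES
        + f"Caption: {caption}\nLayout:"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return json.loads(response.choices[0].message.content)

print(induce_layout("a dog to the left of a red car on a street"))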

Motivation

Despite the satisfactory performance achieved by recent SD-based models, synthesizing high-faithfulness images of complex scenes remains challenging, with typical failures such as problematic spatial relation understanding and numeration failure, as shown in the teaser figure below. To achieve high-faithfulness image synthesis, existing models suffer from the following challenges:

  1. Layout Planning requires abstract spatial imagination and analysis capabilities. The limited annotated layout data and intrinsic inductive biases make it difficult for existing diffusion methods to generate layouts accurately and aesthetically. Although notable efforts have been dedicated to synthesizing complex scenes by manually providing guidance information, these strategies suffer from weak flexibility and low efficiency since they heavily rely on extra labor-intensive guidance.
  2. Relation Modeling, e.g., modeling high-level spatial and semantic relations, plays a pivotal role in understanding, imagining, and depicting complex scenes for T2I models, but it remains under-explored owing to the complex and ever-changing environments of real life.

[Teaser figure]

Framework

In Figure 2, we illustrate the overall architecture of the proposed layout-guided diffusion model, consisting of two modules:

  1. Text-to-layout induction module infers a coarse-grained layout via an LLM conditioned on the given textual prompt.
  2. Layout-guided image generation module synthesizes the final image based on the prompt and the generated layout, as sketched below.
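As a rough sketch of the second stage, the snippet below conditions a layout-grounded diffusion pipeline on the prompt and the boxes planned by the LLM. It uses the off-the-shelf GLIGEN pipeline from the diffusers library as a stand-in for our fine-grained object-interaction diffusion module (the checkpoint name and arguments are assumptions that may vary across diffusers versions), so it only approximates the behavior of LayoutLLM-T2I.

import torch
from diffusers import StableDiffusionGLIGENPipeline  # requires a diffusers release with GLIGEN support

# Public GLIGEN checkpoint used here as a stand-in for the object-interaction diffusion module.
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

prompt = "a dog to the left of a red car on a street"
# Layout planned in the first stage: grounding phrases plus normalized boxes [x0, y0, x1, y1].
phrases = ["a dog", "a red car"]
boxes = [[0.05, 0.45, 0.40, 0.90], [0.55, 0.40, 0.95, 0.90]]

image = pipe(
    prompt=prompt,
    gligen_phrases=phrases,
    gligen_boxes=boxes,
    gligen_scheduled_sampling_beta=0.3,  # fraction of denoising steps that apply the layout grounding
    num_inference_steps=50,
).images[0]
image.save("layout_guided_sample.png")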

[Figure 2: overall architecture of the proposed framework]

Generated Images

Examples: Spatial, Semantic

Results on the Spatial and Semantic inputs. The ground-truth (GT) image, the image generated with the ground-truth layout (GT*), the result based on the layout generated by LayoutDM, and our result for each prompt are shown from left to right.


Examples: Numerical, Mixed, Null

Results on the Numerical, Mixed, and Null inputs.


Related Links

You may refer to previous works such as GLIGEN, Stable Diffusion, and GPT-3.5, which serve as the foundations of our LayoutLLM-T2I framework and code repository.

BibTeX

@inproceedings{qu2023layoutllm,
  title={LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation},
  author={Qu, Leigang and Wu, Shengqiong and Fei, Hao and Nie, Liqiang and Chua, Tat-Seng},
  booktitle={Proceedings of the {ACM} International Conference on Multimedia},
  year={2023}
}