Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1


GLIGEN: The Game-Changer for Object/Character Placement With Stable Diffusion

3 min read · Mar 7, 2023

Thanks to a joint effort by researchers at UW Madison, Columbia University, and Microsoft, users can now accurately position objects in their images, giving them more control and customization options when creating AI images with Stable Diffusion.

This approach is likely to spark new workflows across several creative disciplines at once and open up entirely new possibilities.

What is GLIGEN and how does it work?

GLIGEN stands for “Grounded-Language-to-Image Generation” and extends the functionality of existing text-to-image diffusion models.

In their recently published paper, “GLIGEN: Open-Set Grounded Text-To-Image Generation”, the research team shows how their approach adds precise object placement to Stable Diffusion’s growing arsenal of capabilities. Rather than retraining the model, GLIGEN freezes the original diffusion weights and injects grounding inputs, such as bounding boxes paired with text phrases, through newly added trainable layers.
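To make the idea concrete, here is a minimal sketch of how GLIGEN-style grounding inputs are typically prepared: each object you want placed gets a text phrase and a bounding box, with boxes given as normalized [xmin, ymin, xmax, ymax] coordinates in [0, 1]. The phrases, pixel boxes, and `normalize_box` helper below are illustrative assumptions, not part of the paper's code.

```python
# Sketch: preparing (phrase, bounding box) grounding pairs for GLIGEN.
# Boxes are normalized to [0, 1] as [xmin, ymin, xmax, ymax].

def normalize_box(box_px, width, height):
    """Convert a pixel-space box (x0, y0, x1, y1) to normalized coordinates."""
    x0, y0, x1, y1 = box_px
    return [x0 / width, y0 / height, x1 / width, y1 / height]

# Hypothetical example: two objects to place in a 512x512 image.
phrases = ["a red vintage car", "a golden retriever"]
boxes_px = [(50, 260, 300, 460), (320, 300, 480, 470)]
boxes = [normalize_box(b, 512, 512) for b in boxes_px]

print(boxes)
# -> [[0.09765625, 0.5078125, 0.5859375, 0.8984375],
#     [0.625, 0.5859375, 0.9375, 0.91796875]]
```

If you work with the Hugging Face diffusers library, its GLIGEN pipeline accepts inputs in essentially this shape (phrases plus normalized boxes alongside the usual text prompt); check the current diffusers documentation for the exact pipeline name and parameters.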

What about ControlNet?

ControlNet is great for a number of things, but for precise object placement, one…



Tristan Wolff