AI Archviz: FLUX ControlNet for Architectural Exploration
Architecture has always been a source of inspiration for me, and combining it with AI tools like FLUX ControlNet opens up new creative possibilities. In this project, I set out to explore how AI can help visualize architectural spaces in a variety of styles, quickly and effectively.
Three generated styles combined into one image: Modern, Brutalism, and Bauhaus.
Exploring FLUX ControlNet in ComfyUI
This experiment uses a FLUX ControlNet inside ComfyUI, a node-based tool for building image generation workflows. The process began with an input image I generated in Leonardo.ai, depicting a modern kitchen. Using FLUX ControlNet, I applied edge detection to this input image, turning its architectural structure into a guide for the diffusion process. This ensured that the core geometry of the kitchen stayed consistent across every style I explored.
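My edge maps came out of ComfyUI's Canny preprocessor node, but for anyone who prefers code to node graphs, the same step looks roughly like this in Python with OpenCV. The file names and threshold values are placeholders, not the exact settings from my graph.

```python
import cv2
from PIL import Image

# Load the Leonardo.ai kitchen render (placeholder path).
source = cv2.imread("kitchen_input.png")
gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)

# Canny edge detection; the thresholds are illustrative and worth tuning per image.
edges = cv2.Canny(gray, 100, 200)

# Save the edge map as an RGB image so it can be fed to the ControlNet as a control image.
control_image = Image.fromarray(edges).convert("RGB")
control_image.save("kitchen_edges.png")
```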
Additionally, I’ve been experimenting with depth maps as an alternative input method. While this post focuses on edge detection, I’m excited to dive deeper into depth-based workflows in future projects.
Creative Style Exploration
The creative potential of this workflow is where it truly shines. By integrating architectural design with AI-driven image generation, I was able to reimagine the same kitchen in four distinct styles:
Modern: Keeping it sleek and minimalist with warm wood tones and abundant natural light.
Brutalism: Emphasizing raw concrete textures, sharp edges, and a stark industrial aesthetic.
Bauhaus: Highlighting clean geometry, functional forms, and harmonious material choices.
Art Deco: Bringing in bold patterns, glossy surfaces, and luxurious metallic accents.
Each style was achieved by feeding a descriptive prompt into the diffusion process; the prompt serves as the stylistic foundation for each version and controls the aesthetic outcome, as the sketch below shows.
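For reference, here is a rough sketch of how the same idea looks outside ComfyUI, using the Hugging Face diffusers FLUX ControlNet pipeline. The model IDs, prompts, and sampler settings below are illustrative stand-ins, not an export of my actual workflow.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Canny-conditioned FLUX ControlNet; the model IDs are assumptions, check the Hub for current ones.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps the pipeline fit on a 24 GB GPU

control_image = load_image("kitchen_edges.png")  # edge map from the preprocessing step

# One structural guide, four stylistic prompts.
styles = {
    "modern": "modern kitchen, warm wood tones, minimalist, abundant natural light",
    "brutalism": "brutalist kitchen, raw concrete, sharp edges, stark industrial mood",
    "bauhaus": "bauhaus kitchen, clean geometry, functional forms, harmonious materials",
    "art_deco": "art deco kitchen, bold patterns, glossy surfaces, metallic accents",
}

for name, prompt in styles.items():
    image = pipe(
        prompt=prompt,
        control_image=control_image,
        controlnet_conditioning_scale=0.6,  # structure vs. freedom trade-off
        num_inference_steps=28,
        guidance_scale=3.5,
        height=1024,
        width=1024,
    ).images[0]
    image.save(f"kitchen_{name}.png")
```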
Workflow in Action
The video showcases the image generation process from start to finish. It highlights how the input image, edge detection, and ControlNet guide the diffusion process to ensure consistent structure while allowing for stylistic variations.
Generating a 1024×1024 image on a GPU with 24 GB of VRAM took about 1.5 minutes.
Here are the results: four visualizations of the same kitchen, each rendered in a unique architectural style.
Modern
Bauhaus
Brutalism
Art Deco
Striking the Right Balance in Control
One of the most intriguing—and frustrating—parts of this experiment was balancing control and creative freedom in the AI workflow. If you constrain the AI too tightly, the results can feel unnatural or overly rigid, leading to odd distortions and artifacts. On the other hand, if you loosen the constraints too much, the AI may veer off course, generating entirely new architectural designs that no longer match the original vision.
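In code terms, the knob I'm wrestling with is essentially the ControlNet conditioning scale. Continuing with the hypothetical pipeline and control image from the earlier sketch, a small sweep makes the trade-off visible; the values here are illustrative, not a recommendation.

```python
# Sweep the conditioning scale to see where structure holds and where it starts to drift.
# Reuses the `pipe` and `control_image` objects from the sketch above.
prompt = "brutalist kitchen, raw concrete, sharp edges, stark industrial mood"

for scale in (0.3, 0.5, 0.7, 0.9):
    image = pipe(
        prompt=prompt,
        control_image=control_image,
        controlnet_conditioning_scale=scale,  # low = more creative freedom, high = stricter geometry
        num_inference_steps=28,
        guidance_scale=3.5,
        height=1024,
        width=1024,
    ).images[0]
    image.save(f"kitchen_brutalism_scale_{scale}.png")
```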
Finding this balance—where the structure remains consistent but the styles can still evolve organically—is something I plan to explore further. This process is not only about controlling the AI but also understanding its creative boundaries and how to guide it effectively without stifling its generative potential.
This project is more than just a creative exercise—it’s a step toward understanding how AI can enhance workflows in architectural visualization. By enabling rapid exploration of multiple styles, FLUX ControlNet and other tools like Stable Diffusion make it easier to iterate and experiment with design concepts.
For someone with a background in computer graphics and a passion for AI, this intersection feels like a natural evolution of my work. I’m eager to continue exploring how these technologies can redefine creative workflows—not just in architecture, but in any field where design, technology and storytelling intersect.
I plan to explore more ControlNets and experiment with Stable Diffusion to push the boundaries of what’s possible. Depth-based inputs and generative workflows for exteriors and larger-scale architectural environments are next on my list.