Recently I was tasked with finding a good solution for my father's new desk setup. With the limited space he has, we decided to put the new desk next to his TV. After spending some time on Pinterest, Reddit, and Instagram, I had a rough idea of how I wanted to structure his new living room.
I was imagining a custom setup that would connect the existing TV stand to a desk next to it. Since it was going to be a custom desk, I couldn't simply send some sample images to my father. My architect girlfriend once showed me the collages they use for their projects, so I went on to create a little collage of my own in Goodnotes on my iPad.
Maybe the collage gave an idea to me or my architect girlfriend, but I knew that if I showed it to my father he wouldn't be impressed by it. So I thought of feeding the collage to an AI.
The first results were not great, but they were promising. I also noticed that the AI was taking the collage too literally: we as humans know the collage is there to act as a visual representation of a concept, while the AI was trying to render the collage itself straight into reality.
Stable Diffusion is a complex program. It is much more technical than DALL·E or Midjourney, which I had previous experience with. It also instantly struck me as a lot more powerful than those. I stumbled upon a video by a Turkish YouTuber showing how to convert a drawing of an interior space into a render-like image using Stable Diffusion with some special libraries.
I downloaded the necessary libraries, called plugins in Stable Diffusion, and started messing with them. The results improved somewhat, but I knew they could be much better. After generating 50 or so images, I decided my collage was the source of the issue: I had to create a sketch of my own if I wanted to tap into the full potential of the Stable Diffusion model I was using.
So I dusted off my drawing skills and went to work on my iPad. After 15 minutes of skilful(!) drawing, I had a satisfactory line-drawing copy of my concept board.
I fed the drawing to Stable Diffusion and voilà! A great result even on the first try. I tweaked my drawing a little, and the results only improved.
Stable Diffusion has impressed me a lot. Seeing the drawing and the generated image side by side has wowed everyone I have shown it to so far. This demo proved to me once again how impressive AI can be when used properly.
After one bubble after another, I am pleased to see the tech industry get excited about a product that creates real value this time around. The future is bright!