This is a great question, so we decided to answer it in detail.
What is an AI-generated image?
AI text-to-image creators, like Stable Diffusion or Midjourney, are trained on millions of image-text pairs using an artificial neural network (ANN). When you type a prompt, the model doesn't retrieve or collage existing pictures; it generates a new image that is, in effect, an amalgamation of the patterns it learned from all the images it was trained on.
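For the technically curious, that "amalgamation" happens through a process called diffusion: the model is trained to reverse a gradual noising of its training images, guided by the text. Below is a minimal, illustrative sketch of the forward noising step only, using a 1-D ramp of values in place of an image; the schedule values are the common DDPM-style defaults, not anything specific to Stable Diffusion:

```python
import numpy as np

def forward_diffuse(x0, timesteps=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Progressively add Gaussian noise to x0, DDPM-style.

    Illustrative sketch only: generators like Stable Diffusion are trained
    to *reverse* this process step by step, steered by a text prompt.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, timesteps)
    alpha_bars = np.cumprod(1.0 - betas)  # cumulative fraction of signal kept
    eps = rng.standard_normal(x0.shape)
    # Closed form for the noised signal at step t:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    first = np.sqrt(alpha_bars[0]) * x0 + np.sqrt(1 - alpha_bars[0]) * eps
    last = np.sqrt(alpha_bars[-1]) * x0 + np.sqrt(1 - alpha_bars[-1]) * eps
    return first, last

# A 1-D "image": a clean ramp of pixel values.
x0 = np.linspace(-1.0, 1.0, 2000)
first, last = forward_diffuse(x0)
# After one step the signal is almost intact; after 1000 it is nearly pure noise.
corr_first = float(np.corrcoef(x0, first)[0, 1])
corr_last = float(np.corrcoef(x0, last)[0, 1])
```

The model's whole job is learning to run this in reverse: starting from pure noise and removing it step by step until something matching the prompt emerges.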
Sounds great - why would an Event Producer use Silverley Visuals when they could use AI visualisation instead?
True. Why would they? Instead of telling you, we thought we'd show you. We're using Stable Diffusion because, unlike Midjourney, it's open and free to try: you don't need to sign up or pay, which means you can test what we're showing you as we go.
Let's take one of our most recent projects.
AI Event Visualisation - Round 1
We followed Stable Diffusion's guide to good prompts to give it the best possible input.
This was the Client Brief which we used in the AI prompt:
Old Billingsgate Market from the outside looking towards the arched entrance of the building. A Forumla E car should be outside with a step and repeat board showing the Formula E logo. There should be a red carpet outside next to the car and the red carpet should lead up to the entrance. There should be candles on the steps to the right and left of the red carpet. It should look like a 3D generated image by Silverley Visuals on the website Silverley Visuals. The resolution should be 8k. The lighting should be set at golden hour with the sun just setting onto the building. The colours should be soft, like an evening sunset glow
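Prompt guides for Stable Diffusion commonly suggest breaking a brief like this into labelled components: subject, supporting details, style, lighting, and resolution. As an illustration only (this is a hypothetical helper, not any official Stable Diffusion API), a brief can be assembled from those parts:

```python
def build_prompt(subject, details=(), style=None, lighting=None, resolution=None):
    """Assemble a comma-separated prompt from labelled components.

    Comma-separated keyword phrases are a common convention in Stable
    Diffusion prompt guides; this helper is illustrative only.
    """
    parts = [subject, *details]
    for extra in (style, lighting, resolution):
        if extra:
            parts.append(extra)
    return ", ".join(parts)

prompt = build_prompt(
    subject="Old Billingsgate Market exterior, arched entrance",
    details=("Formula E car with step-and-repeat board",
             "red carpet leading to the entrance",
             "candles on the steps either side"),
    style="3D render",
    lighting="golden hour, soft evening sunset glow",
    resolution="8k",
)
```

Structuring the brief this way makes it easy to swap one component at a time between rounds, which is exactly what we ended up doing by hand below.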
This is what we came up with:

We've hit the brief in every detail, and we've added the parts the brief didn't mention: the cigar lounge outside, the welcome desk inside, and the people to animate the setting.
This is what AI came up with:

We don't know about you, but while it's red, it's not even pretending to be a carpet, and we have no idea where the candles, car, exterior, and lighting have all gone. We're not even convinced the architecture or perspective is correct.
AI Event Visualisation - Round 2
We then realised we'd spelt "Formula" as "Forumla", so we tried again:

AI Event Visualisation - Round 3
We then wondered whether referencing our own images was the problem, since there aren't that many of them, so we tried again:

That definitely says Getty Images...
AI Event Visualisation - Round 4
So we tried again. Perhaps asking the AI to iterate was producing more and more generic images of Old Billingsgate, so we revised our prompt. We swapped the Formula E car for a Formula 1 car, as there would be more images of it, removed the references to ourselves and to the step and repeat board, and left the rest:
Old Billingsgate Market from the outside looking towards the entrance of the building. A Formula 1 car should be outside with a marketing board showing the Formula 1 logo. There should be a red carpet outside next to the formula 1 car and the red carpet should lead up to the entrance. There should be candles on the steps to the right and left of the red carpet. It should not look photo realistic. It can look like an animation. The resolution should be 8k. The lighting should be set at golden hour with the sun just setting onto the building. The colours should be soft, like an evening sunset glow.

AI Event Visualisation - Round 5
We tried the prompt with Realistic Vision v2.0:

AI Event Visualisation - Round 6
We then downloaded the version where you can supply a reference image and get Stable Diffusion to iterate from it, but we couldn't open the software on our Mac.
Why is it not working?
When AI generates an image, it's an amalgamation of all the images it has learned from. Old Billingsgate Market in the centre of London is famous, but not famous enough: there simply aren't enough images of it in the training data for AI to reproduce it accurately. That's why all the images are so blurry, and why the best results occur when AI is drawing from deep catalogues of images, like the exterior of the Eiffel Tower or the face of a celebrity. Unless you're hosting an event in the Eiffel Tower or are a celebrity, though, that's not actually that useful.
Why is that important for me as an Event Producer?
If you want to create generic event visualisation using AI, that's totally fine. We typed in this fairly generic prompt:
Luxurious event setting decorated with disco balls and a giant flower with pink lighting and entertainers. There should be seating arrangements with floral centrepieces
And got the below:

It's pink, it's luxurious, there are floral centrepieces and the sparkly ceiling is an ode to the disco ball.
As a piece of inspiration though? We probably wouldn't include it in our mood board.
What now then?
AI will improve. It will train on more images and become more accurate. Right now, people are producing beautiful images based on other people's artwork. And maybe one day a computer will draw the image in our minds...
But for now?
Contact us
Our whole goal is to create beautiful, accurate, bespoke event visuals that allow you and your client to dream. We draw to scale, we draw your vision, and nothing we do is an amalgamation of anything else.
Why do we do it? Because you are individual, unique, bespoke, not an amalgamation of anything else. So why should your events be?