Four creators share their Dall-E-generated images – and their hopes and fears about AI in art
By Anna Furman
‘We are seeing a reflection of ourselves’ – Rachel Rossin
I have a background in programming but I’m not an engineer – I’m more of a tinkerer. Over the years I’ve made a lot of my own neural networks, trained on datasets drawn from my image-making process, to mimic my drawing style and apply it like a filter over an image. These datasets ranged from maybe 500 drawings to 10,000 images. Training the networks takes days, but I have a pretty good computer to crunch that data on.
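Rossin doesn’t describe her pipeline in detail, but the idea of a trained network applied “like a filter over an image” can be sketched very loosely. In this toy stand-in (all names hypothetical, and a single hand-picked convolution kernel standing in for learned network weights), a “style” is just a small kernel slid across a grayscale image:

```python
import numpy as np

def apply_style_filter(image, kernel):
    """Convolve a grayscale image with a 'style' kernel (valid padding).

    In Rossin's actual practice the filter would be a neural network
    trained on her own drawings; this single 2D convolution is only an
    illustrative stand-in for that idea.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the patch under the kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "style": an edge-emphasising kernel standing in for learned weights.
style_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

canvas = np.random.rand(64, 64)  # stand-in for a source drawing
stylised = apply_style_filter(canvas, style_kernel)
print(stylised.shape)  # (62, 62)
```

A real style network would learn thousands of such kernels from the artist’s dataset rather than using one fixed edge detector.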
In Hologram Combines, you can see part of that neural network exposed. I usually approach shows by creating my own virtual world – something that exists wholly in virtual reality – and then I clip from that world to make source material. I like to keep my own world self-contained, an internal, metabolic system. There’s such a saturation of images and media right now, but making my own set from my own visual language and logic is more fun than going out to Google, which is what this is trained on.
That’s visual-to-visual search, not text-to-visual like Dall-E. It’s like playing tennis with myself. There are advanced, node-based processes on a neural network – in the case of Dall-E 2 or mini, there are something like five sub-neural networks running at the same time, which is pretty incredible. Our AI is of course getting more sophisticated, but it’s also getting a little bit more “quantum”, meaning there are several sub-processes happening at once.
Another image using the prompt ‘biotech harpy in field at sunset’. Photograph: Rachel Rossin
I use text in an annotative way – more poetic and abstract than literal. I make something from a feeling, often body-based. It’s much more like dream logic than this network, which is very literal. I think it’s actually a lot more useful for people who are film directors because it’s fun for sketching or storyboarding. But creatively, I don’t really need it. It hasn’t made its way into one of my projects, formally. And I think it’s because I’ve worked with neural networks for a long time so the novelty has worn off.
This Person Does Not Exist is much better than Dall-E on faces. I couldn’t help but think: “What does it think a Rachel Rossin looks like?” I have the same name as Blade Runner’s Rachael Rosen, so on Dall-E 2, when I search for my name, there’s some of that. It’s a white Jewish lady with brown hair, which looks pretty similar to me. That’s the phenotype, I guess.
The thing that’s most remarkable to me is the context, or verb – the action-based things. If I search for “the bird is running up the street and lost its toupee”, it knows what you want to see. It’s going to be interesting when we can start to fold this into making films. Processing is going to get more powerful – it’s here to stay.
There’s a curatorial aspect that we’re ignoring. There’s this expectation that we’re creating a sort of God, but we have to remember that machine learning, neural networks, artificial intelligence – all of these things are trained on human datasets. There’s a trickle-down effect, because so much of our perception is folded into the technology, maybe arbitrated by engineers at Google and OpenAI. People are surprised when artificial intelligence turns out to be racist or sexist, as if forgetting where that training data comes from. It’s basically a different type of Google search – that’s all that’s going on. It’s putting trust in the internet.
It’s important to remind people what artificial intelligence actually is. We are seeing a reflection of ourselves, and it seems like a magic black box.
Rachel Rossin is a multimedia artist and self-taught programmer.