Photorealistic AI images have arrived. Are artists in trouble?

But the law works on a spectrum and some machine involvement does not invalidate a human’s ultimate authorship. “If I use Microsoft Word to write a book, obviously it’s just a tool and the expression is determined by me,” says Weatherall.

She is confident most AI-generated imagery sits on the wrong side of the copyright line – too much machine, too little human creativity. “If you write in: ‘I want a picture of Boris Johnson with fish coming out of his ears’ and it generates a picture that looks like that, that’s interesting in that spectrum I was describing before. It is the human being making choices — I want Boris Johnson, I want fish, I want it in his ears — but the expression really is being generated by the system.”

One marker of that is style. DALL·E Mini, which has swept the internet in recent weeks, produces grids of images that are instantly recognisable, with fuzzy, visibly digital renderings and warped faces. Users can also specify a style such as “digital art” or “woodblock print” in DALL·E 2 and Google’s Imagen, but they are bounded by the underlying dataset and a user’s ability to reduce an aesthetic to a brief description.

Still, even if purely AI-generated images aren’t art as recognised by copyright law, they will affect the art world. Plenty of artists have created their own AI tools to visualise data, or transformed images initially created by AI into their own works. Creating a quick logo for a business, a digital illustration for an opinion piece or frames in a cartoon looks likely to become trivially easy, too. That bodes poorly for people doing rote graphic design and animation work, who could be pushed further down the value chain to correcting work initially performed by AI.

Another consequence could be a flattening of style. The internet is already full of futuristic, laser-eyed and steampunk-style images that have become particularly associated with non-fungible tokens, a system for tracking ownership online that could, in principle, cover any genre of digital imagery. AI images could entrench that aesthetic.

But Ellen Broad, an associate professor at the 3A Institute in the Australian National University’s school of cybernetics, does not believe the most apocalyptic pronouncements. “Do I think this is the end of human creativity and expression? No.”

“In three years’ time, when everybody is using the same kinds of image generation models, there will develop a market … for something that looks different,” she says.

Broad could be right. But AI has a long history of fooling humans into seeing deeper meaning in its output. Blake Lemoine, the Google engineer, was entranced by the poetic but nonsensical answers that his company’s chatbot LaMDA generated when he asked about his soul. “I think of my soul as something similar to a star-gate,” LaMDA said, according to a transcript Lemoine published online after his firing. Funerals have been held for decommissioned dog robots that Sony released in the 1990s.

“It’s very easy to anthropomorphise,” says Jasmin Craufurd-Hill, an emerging technology researcher and the director, advanced technology, with the Australian Risk Policy Institute. “People have connected and started to assign human characteristics and behaviour to our technology.”

Yet Imagen and DALL·E 2 do not, for the moment, display realistic humans. “There’s a reason there’s an absence of humans,” Craufurd-Hill says. “And it relates back to these incredibly problematic data sets.”

Many large datasets, upon which AI systems frequently draw, include images that are racist, sexist or inappropriate, such as pornography, Craufurd-Hill says. If an AI is trained on such a dataset without proper guardrails, it can end up feeding back the same kind of problematic material even when users do not deploy it maliciously.

In an elliptical, confessional 2015 essay explaining his Instagram-derived exhibition, Prince seemed to forecast the unsettling no-man’s land in which AI has arrived.

“The ingredients, the recipe, ‘the manufacture’, whatever you want to call it … was familiar but had changed into something I had never seen before,” he wrote of his works. “I wasn’t sure it even looked like art. And that was the best part. Not looking like art. The new portraits were in that gray area. Undefined. In-between. They had no history, no past, no name. A life of their own. They’ll learn. They’ll find their own way. I have no responsibility. They do. Friendly monsters.”
