Updated: Jan 23
I am writing this post in response to an invitation by Felicia Hou, an editor at LinkedIn News, to present my perspective on how AI will “change the way we view and create art”. As a corollary, she also asks whether “AI has the capability to replace human artists entirely.” A shorter version of this post will appear on LinkedIn News.
The answer to these questions is complex and rests on an understanding of fundamental issues in how AI is developing, and of the degree to which humans, including the scientists charged with developing new intelligent systems, can understand how those systems function and produce results. In addition, and this is extremely important for any understanding of the future of art, we must consider the phenomenon of hallucinations, in which the machine says things or produces images that are considered false, mendacious, and inexplicable. Machine hallucinations are generally attributed to subtle changes in the database of text or images that the machine relies upon, changes that “can fool these systems into perceiving things that aren’t there.” Furthermore, these changes are usually seen as the result of human activity: an adversarial attack launched by a “creative attacker.”
In his paper “Unexplainability and Incomprehensibility of Artificial Intelligence”, Roman V. Yampolskiy, Professor of Computer Science at the University of Louisville, concludes that current machine learning systems based on Deep Neural Networks (DNNs) “are seen as black boxes, opaque to human understanding.” Moreover, the more powerful the AIs become, the more their outcomes are the product of “deep learning that produces outcomes based on so many different variables under so many different conditions being transformed by so many different layers of neural networks that humans cannot comprehend the model the computer has built for itself.”
Given the degree of opacity and complexity that characterizes DNN-based AI, the troubling phenomenon of hallucinations becomes more comprehensible. Hallucinations bedevil even the most advanced systems. To take just one example, Meta’s new conversational AI agent BlenderBot 3 has a strong tendency to hallucinate despite safeguards designed to prevent unexpected and inappropriate responses. When Ryan Whitwam, a writer at ExtremeTech, tested BlenderBot 3, it told stories about being from Texas and declared, “I am a person and enjoy talking about things that interest me.”
The tendency of NLP agents like BlenderBot 3 to experience hallucinations is disturbing, and scientists have been working (so far without success) to minimize or eliminate the phenomenon, but certain artists have embraced machine hallucinations as a unique form of artistic expression. In fact, the new media artist Refik Anadol has deliberately fostered machine hallucinations as an integral part of his ongoing Unsupervised project, which processes 138,151 pieces of art from the MoMA collection using several special algorithms “to capture the machine’s transformative 'hallucinations' of modern art in a multi-dimensional space.”
Specialized algorithms designed to induce machine hallucinations aside, it is also possible to induce them with a text-to-image diffusion model like DALL-E 2. Several of us at Polycount have been experimenting with one or another of these text-to-image diffusion models. Unlike some of the researchers experimenting with these systems, however, I sought to maximize the opportunity for a hallucinatory response by not revising my prompts to make the bot conform to my exact instructions.
As an example, I entered the following in the prompt area: “A Martian colony with domed roofs in a red landscape with craters and a smoking volcano along with a wrecked spaceship.”
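For readers who want to try this kind of unrevised, one-shot prompting programmatically rather than through a web interface, here is a minimal sketch. It assumes the OpenAI Images API (the `POST /v1/images/generations` endpoint); the `build_image_request` helper is hypothetical, written only to show what such a request would carry.

```python
import json

# The exact prompt from the experiment above, passed through unrevised --
# no iterative editing to steer the model away from hallucinated details.
PROMPT = ("A Martian colony with domed roofs in a red landscape with "
          "craters and a smoking volcano along with a wrecked spaceship.")

def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Hypothetical helper: assemble the JSON payload for a POST to the
    OpenAI Images endpoint (/v1/images/generations)."""
    return {"prompt": prompt, "n": n, "size": size}

# Request two candidate images so the machine's additions can be compared.
payload = build_image_request(PROMPT, n=2)
print(json.dumps(payload, indent=2))
```

Actually sending the payload requires an API key in an `Authorization: Bearer` header; whatever the returned images contain beyond the literal prompt text is the “hallucination” examined in the comparisons that follow.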
Comparing the text that I entered with the image generated, it is easy to see where I left off and the machine began. Features like the red earth and moon, the pointy hills, and the omission of the smoking volcano that I stipulated clearly indicate machine hallucination.
In the second image, the role of machine hallucination is equally apparent. The three brown and green planets were not called for in my prompt, while the smoking volcano is nowhere to be found.
So, what do machine hallucinations tell us about the future of art and the role of the human artist? Reflecting on these questions, I think that the work of human artists will continue to be valued since it is derived from uniquely human experiences and feelings. No machine will ever emulate the work of Jasper Johns, which has been so profoundly influenced by his relationship with Robert Rauschenberg and their painful break-up. But to understand the art being produced by AI-enabled machines, we would need to seriously question the idea that machine hallucinations are the result of human trickery and admit the possibility that AI, through what we derisively call hallucination, is making its own creative decisions about what to include in a piece of generated art.
If the uniqueness of human-created art derives from human experience and emotions, what about machine art? Can machines have emotional lives and experiences from which they could derive artistic inspiration? The answer to that question brings us right back to the observations contained in Professor Yampolskiy’s article. If the AIs that we are creating are in effect “black boxes” so complex that “the domain they govern is so granular, so intricate, so interrelated with everything else all at once and forever, that our brains and our knowledge cannot begin to comprehend it”, how can we really judge or understand what might constitute an emotional or artistic life for AI systems? At this juncture, given the enormous progress in AI during the last five years, we should withhold judgment on AI-produced art and be open to accepting it as a new and exciting form of original artistic creation whose great masters happen to be machines.
Simonite, Tom. (2018). AI Has a Hallucination Problem That’s Proving Tough to Fix. Wired.com. Retrieved 12/12/22.
Whitwam, Ryan. (2022). Meta Says Its New Chatbot Can Experience ‘Hallucinations’. ExtremeTech. Retrieved 12/12/22.
Yampolskiy, Roman V. (2019). Unexplainability and Incomprehensibility of Artificial Intelligence. Journal of Artificial Intelligence and Consciousness, 7(2), 277-291. Retrieved 12/12/22.