Generative AI Questions the Value of Artists – and That’s a Good Thing

Generative AI has grown to the point of having its own mythos. There are those who swear by its success, claiming it will replace whole industries, and those who see it as a profitless trend that is sure to fade.

Perhaps, like most things in life, the capabilities and best uses for generative AI tools are more nuanced.  

The conversations surrounding the use of generative AI are as heated as they are binary. The advent of this technology raises many questions for society. It asks us what makes art “art” and what gives work value. It asks us to consider the future of content and art – and our place in contributing to it.

While it may be nerve-racking for artists and creatives to watch corporations start to devalue their skills in favour of saving money, the public reaction to generative AI has fallen in favour of artists. Not only that, but the conversations started by generative AI benefit creatives by demanding we question what gives art value. And, so far, the answer seems to be human imagination.

As AI continues to gain momentum in the digital content space, organizations around the world have begun to discuss the ethical dilemma facing corporations and artists alike. UNESCO lists multiple ethical concerns with the use of AI, including a lack of transparency, a lack of neutrality, and a lack of privacy.

In addition to ethical red flags like these, there are also pivotal copyright implications to the use of generative AI. This has led to numerous lawsuits and marks a potential turning point for copyright legislation.

Audiences are also taking note of these ethical dilemmas, with AI generating just as much controversy as it does content. In 2023, CNET faced a huge backlash after readers discovered that the company had been using AI to write a large portion of its content. The company has since released a statement and halted all use of AI in content generation.

Interestingly, CNET had included a note in the footer of each article disclosing that it was generated by AI. But this wasn’t good enough for audiences, who saw the use of AI as degrading the value of the publication.

Another example of audience backlash in recent years is the use of AI to create the cover for the new UK paperback edition of bestselling author Sarah J Maas’s House of Earth and Blood. The art was not only generated by AI but licensed from Adobe Stock instead of being commissioned from the publisher’s creative team. Readers were upset, to say the least, flocking to Instagram to voice their complaints.

These two cases highlight the value audiences place on originality and human thought and expression – the value they place on art. 

Aside from the ethical controversy of heavy AI use in content generation, both visual and written, generative AI raises questions about the potential loss of ideas and thought leadership.

What happens when writers and artists are dropped from conversations, and society relies on AI to write editorials and depict the world? Generative AI can only progress as far as the content fed into it takes it. So if humans stop writing, stop drawing, stop contributing – then art and content as we know them become stagnant. Never improving, never growing, never creating anything, well, creative. 

What this line of questioning has in common with the reader backlash against publications and publishers that use AI is this: human thought and creativity are what give art its value. Artistic integrity is paramount; without it, the value to the consumer is immediately lost, and in the long run, the value of art as a whole continues to depreciate.

Artists and creatives have been fighting for recognition and value since, well, probably forever. This is especially true in corporate spheres, where the push for efficiency and cost-saving undercuts the value that creatives bring to the table. But generative AI represents perhaps the biggest threat creatives have seen to date, and in response to that threat, consumers have chosen to stand by the artists.

Since the generative AI boom of summer 2023, many steps have been taken to protect the value that human creativity brings. One such step comes from an unexpected ally, Google, which has chosen to prioritize authentic content in its search algorithm and to penalize content it suspects of being AI-generated.

Regulations were also announced at the end of October 2023, as part of a larger Executive Order laying down new rules for AI. In this order, President Biden lays out a plan for ensuring that AI-generated content is labelled as such, in an effort to protect Americans from fraud. But this also protects writers and artists alike. While generative AI affects artists globally, regulations in large jurisdictions like the US tend to have a trickle-down effect on other regions, particularly when the internet is involved.

The public response to art and literature generated by AI has been overwhelmingly in favour of artists, and with ongoing lawsuits from authors, publishers, and the New York Times, tech companies like OpenAI are going to have to rethink their business models if they hope to succeed.