AI vs. Artist: The Future of Creativity
By Jess Campitiello
In the last year, artificial intelligence (AI) art generators such as Stable Diffusion, DALL-E 2, and Midjourney have been making headlines. What began for the general public as a fun, harmless, and novel way to pass the time quickly turned into a professional debate over artistic integrity.
Generative AI programs are trained on large public datasets scraped from the web, learning which words correspond to which images. Once trained, the program can take a user's text prompt and synthesize what it has learned into a single image output. Björn Ommer, who worked on Stable Diffusion, explains its workings in more detail here.
Accusations of art theft arose over the use of creatives’ works to train these models, as image outputs emulated prominent artists’ styles without their consent. Backlash grew as an AI-generated piece won first place in an art competition. On Reddit, one user was banned for posting their own work because it too closely resembled AI art. Ultimately, the outrage has led to multiple lawsuits against the companies behind these programs.
Simultaneously, the generative AI platform Scenario raised $6 million in seed funding, and Microsoft made a ‘multiyear, multibillion-dollar investment’ in OpenAI, the company behind DALL-E 2 and ChatGPT.
“There’s a Cambrian explosion right now with this technology,” said Scott Belsky, Adobe’s Chief Product Officer and Executive Vice President of Creative Cloud, at the most recent Cornell Tech @ Bloomberg Speaker Series event. Alongside over 800 partners, Adobe has been working to add generative AI to its products while still maintaining user trust.
When asked how artificial intelligence and machine learning fit into the future of art, Belsky made three main points:
Creative confidence is only going to grow.
Our creative confidence typically peaks when we are kids, Belsky explained. Starting off with simple crayon drawings, many children find a creative outlet in art. Crude approximations of houses, animals, and loved ones are held in high esteem as they are tacked up on the fridge. However, as we grow older and meet critics, the ones who insist on colors within the lines and accurate representations on the page, our confidence declines. And with it, so too do many people’s efforts to pursue artistic endeavors. This technology not only makes creativity widely accessible, but also allows individual artistic confidence to grow, since the barrier to entry is quite low.
Professionals can access endless creative possibilities.
Generative AI allows artists to create countless visual concepts at the click of a button, making the process of fielding potential design ideas with clients far more efficient. Giving clients a few quick AI ‘thumbnail sketch’ options lets artists finalize their work faster, since they are not spending time creating preliminary visuals. The generated art is not meant to replace the artist’s own creative output but to serve as a springboard for it. AI can even offer design choices that the artist wouldn’t have considered otherwise, Belsky mentioned. Integrating this technology into artists’ tools has the potential to grant them an “edge to a breakthrough.”
Outcome-oriented versus process-oriented people.
Belsky described two different generative AI user types: those who are outcome-oriented, and those who are process-oriented.
Outcome-oriented individuals are less interested in the details of the final product and more focused on receiving the product itself. For example, if an outcome-oriented person were to ask an AI program to generate images of a ‘cowboy in space,’ they would be happy to use the first image presented that suits their needs. The speed of the output is key.
On the other hand, process-oriented individuals look at what is generated by the AI program and use it as a tool to create something unique for their needs. Inputting the same prompt, they may be drawn to and emulate the overall composition of one of the outputs but would move the cowboy’s arms into a specific position or make any other number of edits to create a final piece that is precisely what they want.
We all fall somewhere on a spectrum from outcome-oriented to process-oriented, explained Belsky.
Do artists have a right to be scared? Belsky says that AI will not replace creatives in their spaces, since it cannot emulate a human eye for aesthetics, but their concerns are justified, and there are many important questions to consider as these powerful technologies move forward. “Has their content been trained on these models? If so, how are they being compensated? Can you generate things in the style of a specific person? People can be ethical and say ‘no,’ but other players can say ‘it’s a free-for-all.’”
Belsky concluded his thoughts on the subject by stating that no content stored in Adobe’s Cloud has ever been used to train a generative AI, and that the company plans to be very specific about this in its future terms and policies so that there is no ambiguity on the matter.
Watch the full Cornell Tech @ Bloomberg interview:
The Cornell Tech @ Bloomberg Speaker Series is hosted by Scarlet Fu of Bloomberg Television and Bloomberg QuickTake in partnership with Bloomberg, Cornell Tech, and Tech:NYC.
Jess Campitiello is the Digital Communications Specialist at Cornell Tech.