Every art director’s nightmare about AI art

Now that our Agora report on Oregon’s local news and information ecosystem is complete, I’ve got a few hours to play with the generative artificial intelligence art technology we’ve been hearing about, which can produce any digital picture you can imagine from the words you type into a prompt. The results are stunning. If you want to see what the growing community is making, OpenArt hosts a collection of artwork created with the top three AI systems: DALL-E 2, Midjourney, and Stable Diffusion.

We’ve been hearing about generative AI for a while, but I’d always assumed its impact lay in the distant future. With web-based tools now producing high-quality output, that future is already here. There is little question that generative AI will change the way we produce and consume media, disrupting the creative economy and the communication industries. While text-to-text tools already help writers craft stories, I’ll concentrate here on text-to-image generative AI.

There are plenty of controversies. I understand and respect the debate over the ethics of imitating artists’ copyrighted styles and likenesses. People who contribute to this ecosystem should be paid and credited for their skills and efforts. Generative AI art poses a real threat to many artists. But for those who embrace this new digital brush, it has the potential to be a powerful tool in their creative process.

What concerns me most is the lack of forethought in developing policy around deepfakes, AI-generated photographs, and photorealistic art, which can be used to deceive and cause harm. Returning to the core technology, I’m curious to see how Stable Diffusion’s Creative ML OpenRAIL-M license and decentralized mitigation measures can hold users ethically, legally, and morally accountable for harming others and spreading misinformation. As usual, we fail to develop policy ahead of innovation and are left playing catch-up.

Imagine when the technology is as simple to use as a Photoshop plug-in. Imagine no longer. Last month, plug-in developer Christian Cantrell, former Director of Experience Development and director of Adobe Design Prototyping, tweeted this video (also embedded below).

As remarkable as the technology is, you still have to learn how to communicate with the models underlying these tools. What you type into a prompt instructs the AI model on what and how to render, and the better you grasp that “language,” the more accurately it can recreate the picture in your mind’s eye. The term for this is prompt engineering, and OpenArt has published a Prompt Book to help you learn effective techniques. Of course, there is already a prompt marketplace, and it appears to be an emerging career path. Which brings me to my second concern.
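To make the prompt-to-picture relationship concrete, here is a minimal sketch of how a text prompt drives a Stable Diffusion render, assuming the open-source Hugging Face diffusers library and a publicly released checkpoint. The model name, prompt wording, and parameter values are illustrative choices of mine, not anything specified in this post.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# The checkpoint, prompt, and settings below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The prompt is the "language" described above: subject, style, lighting, medium.
prompt = (
    "a newsroom at golden hour, cinematic lighting, "
    "detailed digital painting"
)

# guidance_scale controls how strictly the model follows the prompt;
# num_inference_steps trades speed for detail.
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("newsroom.png")
```

Most of the craft lives in the prompt string and the guidance settings, which is exactly the “language” a resource like the Prompt Book tries to teach.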

Beyond the ethical and legal concerns that generative AI raises, I don’t believe the media and communication industries are prepared for this major transition. And how are we, as educators, preparing our students for such a world? These additional questions come to mind:

How do we label and characterize generative AI content in our publications?
When is it acceptable to publish generative AI images, and when is it not?
What are some of the most exciting and most troubling examples of people using generative AI to produce images?
What is the right balance between using generative AI to “assist” versus to “create” in the creative process?
What ethical and legal considerations must we keep in mind when using and publishing generative AI content?
What safeguards, if any, should be in place to prevent abuse of news photography?


What questions arise for you?