FEATURE: Eye on AI: Ethical dilemmas

Kidscreen's AI columnist weighs in on the new responsibilities and potential harms this tech poses for creatives.
September 21, 2023

By: Evan Baily

A Reddit user recently posted concept art of “national superheroes” for several different countries. Greece’s hero is Zeus in sunglasses and combat boots. Japan’s is rocking a look I’d call “rave kimono befriends jellyfish.” And Turkey’s is, well, dervish-ish. A year ago, this project might have taken an artist months to complete. But in July 2023, it probably only took Redditor u/marehori a few hours, using ChatGPT to generate the descriptions of the heroes and Midjourney to render them. 

Generative AI—including large language models (LLMs) and image, music and video generators—has the potential to fuel an explosion of creativity in animation. But is it actually creativity? And should we be using the word “artist” to describe both someone who types a text prompt and someone who writes a script or designs a character from scratch? 

My instinctive answers are “no” and “no.” Animation, like all art, is magic—profoundly human magic. At the molecular level, it’s made of intention. And AI has no intentions. But there’s a good chance that my instincts are wrong here. 

New creative tools and art forms have been met with distrust and resistance ever since some Neolithic hipster said, “Hey, what if we painted outside the caves?” Your aunt and mine probably agree that abstract expressionist paintings look like something a four-year-old could’ve made. Self-appointed guardians of tradition dismissed John Cage’s compositions as noise, and harrumphed when Merce Cunningham sat in a chair and called it dance. Generative AI will take some getting used to, but I think in time we’ll come to see it as a legitimate, if fundamentally different, way of making art. 

Whether you buy that or not, here’s the stark reality: AI is coming. As tech entrepreneur Joshua Browder put it, trying to stop it would be “like dinosaurs suing to stop the Ice Age.” That means we need to start focusing on how, not whether, to use it—and how to do so ethically. 

AI ethicists Tristan Harris and Aza Raskin offer a useful lens in their excellent talk “The AI Dilemma,” noting that, “when we invent a new technology, we uncover a new class of harms…and new responsibilities.” 

With that in mind, what are some of the potential harms we should be looking out for as AI infiltrates animation? And what new responsibilities do we need to take on? 

Job destruction 

Amid industry-wide disruption and studio budget cuts, everyone’s under pressure to reduce costs, and there’s widespread fear about algorithms replacing artists. 

Runway AI CEO Cristóbal Valenzuela says that’s the wrong way to look at it. Instead, he believes that AI-assisted filmmaking will be a total paradigm shift—not the same pipelines with fewer humans, but entirely new ways of making entertainment.

If he’s right, this is a once-in-a-lifetime opportunity to reinvent our industry from the ground up. So what’s our responsibility here? In my view, it’s to not let this radical shift be reduced to an exercise in force reduction and padding profit margins. We need to spur a deeper, more holistic conversation about what to prioritize in this moment of revolutionary change—and keeping the creative people who’ve built this industry at the center of the content creation process should be at the top of everyone’s list.

Unreliable LLMs 

LLMs absorb the biases in their training data, steal without attribution, and according to a recent study led by AI research firm Anthropic, suffer from “sycophancy bias”—a tendency to serve up the answers that they think users want. And that’s just the stuff we know about. We’re learning more every day about the ways these models can go rogue. 

Our new responsibility? If we’re using LLMs, we need to be careful not to expose audiences to problematic stereotypes (the Reddit post I referenced in the first paragraph highlights the pitfalls here), clone other storytellers’ ideas, or pander to audience expectations. And needless to say, careful vetting of LLM output is even more important when it comes to making entertainment for kids. 

Stunting artistic growth

One of the harms I’m most worried about with generative AI is its potential to rob artists of the opportunity to grow. We get good at making things by failing and learning from our mistakes. If we take failure off the table by giving creative people access to a tool that offers to instantly and effortlessly solve their problems, it’ll be debilitating to them and damaging to our community. 

A key responsibility here, especially for seasoned creatives, will be resisting facile solutions, and helping those who’ve never known a creative process without the training wheels of AI understand that letting yourself get a little lost for a while can lead to better outcomes. 

Rootlessness 

When we finish a film, it’s more than just a film; it has a sliver of me, a sliver of you, and a sliver of everyone else on the team in it. It’s a living record of how we came together and drew on our life experiences and creative convictions to make something we all believe in.

But AI doesn’t believe in anything; it has no life experiences or convictions, and it can’t make decisions—it can only triangulate from the ones it finds in its training data. So using contributions from an image generator or LLM might enhance a project aesthetically, but doing so weakens the ties between the creators and the material. This poses a real threat to the humanity and soul of the work. Our responsibility here: Make sure that the defining creative content in a project comes from people, not AI. 

Model collapse—and culture collapse? 

According to new research by scholars in the UK and Canada, as AI models are fed data generated by other models, they degenerate. This can cause them to “misperceive reality even further,” according to Ilia Shumailov, one of the authors of the study, and eventually “collapse,” producing unusable output. 

By using generative AI to make content that will then be ingested by other AIs, are we at risk of creating a content echo chamber with narrower and narrower creative horizons, leading eventually to a kind of “culture collapse”? 

Our responsibility here: Resist getting trapped by tropes. Take risks. Reject anything a model serves up that feels warmed over. Feed the culture, don’t feed off it. 

Fear of change 

I love the collaborative nature of the animation production process, and thinking about it working differently (for example, solo artists making entire films using only text prompts) feels like harm.

But over the course of my career, I’ve worked with some brilliant creators who, without resources, relationships, or formal training, have found ways to put their stories on the screen. I can’t help but wonder what they could have accomplished with access to these powerful new tools—and how many new voices AI will empower that we otherwise wouldn’t get to hear.

So the last new responsibility I’ll offer up is this: If you think you’ve spotted a potential harm, pause and ask yourself whether it’s actually the problem—or if you are. 

As Runway’s Valenzuela says, we’re in the “early innings” of AI. So my thoughts here are just scratching the surface—and they’re sure to be off base in ways both large and small. As generative AI becomes more commonplace, new harms and responsibilities will surface at a breakneck pace. 

If you have ideas about other things we should be looking out for—or feedback about this piece—I’d love to hear from you (ai@conbail.com).

Evan Baily is a TV/film producer and showrunner who also consults for entertainment, media, consumer products and tech companies.

This story originally appeared in Kidscreen’s August/September 2023 issue. 
