The panel “AI & ChatGPT: The Impact on Content Creators” took place on Saturday at 2:30 p.m. in the Hilton Galleria 7, but if you missed the packed-room discussion, you can still catch it on DCTV’s Twitch 2. It is a balanced exploration of the pros and cons of AI in creative spaces, with the strongest arguments for the benefits at one end of the table, the loudest negatives at the other, and those on the fence, well, in the middle. It is a complicated issue and, as in all things, the best way through is with more information.
The six-person panel was made up of professionals in technology fields, from software development to security and operations analysis, who also work in creative spaces, either for fun or for profit. Podcasters, writers, filmmakers, all. Moderator Tyra Burton encouraged thoughtful conversation across the panel, which included Kurt Boutin, Bob McGough, Bobby Blackwolf, Jeff Ello, and Phillip Pournelle. The questions they were there to discuss were simple, and yet one hour was not enough time to really dig into the topic.
Is AI a Dystopian Future?
The initial response from most of the panel was to approach the use of AI with caution and a generous amount of human intelligence. “Don’t make the machine your higher power,” advised Pournelle.
The technology may be popular and topical right now, but it has been around for years and is still in its infancy. What we know about AI today will be drastically different a month from now. As the technology evolves, the panel unanimously agreed, society needs to actively adjust to preserve the jobs and creative freedom of humans in the near future. “On the macro level, we’re going to be fine,” said McGough. “The micro level is gonna suck.”
As someone who has experimented with AI across writing, art, and music prompts, Boutin was quick to point out the positive contributions AI can bring. It provides a platform for people to explore their creative inclinations even in areas where they don’t necessarily have talent, such as writing stories or illustrating kids’ books. He pointed out that while he might not be a writer, he could be an editor once the AI created the narrative from the prompts he gave it.
Given the right kind of prompts, AI promotes access to skills that may have been lost to disability or illness, allowing artists to create art again, something that the panel agreed was a positive use-case. It was also pointed out that artists would know how to create better prompts to get exactly the artwork they wanted from an AI like Midjourney.
Creatives who are primarily podcasters or video makers on TikTok or Instagram can quickly generate placeholder images relevant to their topic and bring in extra traffic just by adding a quality AI image to their posts. Ello highlighted that independent filmmakers may not be able to budget for a storyboard artist to help guide their shooting scripts, but they can likely have good success blocking out scenes with AI-created images.
AI is currently most reliable at creating repeating variations and structured responses, relying on statistical probability to generate its language output, which means that across multiple types of work, the panel found AI to be a good brainstorming tool. From generating lists of name ideas to filling in or proofing programming code, an AI can generate more useful responses, more quickly, than an internet search. While it still requires human prompting, McGough pointed out, using a well-trained coding AI was safer than finding solutions online, because the model would be less likely to reproduce snippets containing malicious code.
The Negative Nuance
The cautious optimism of the panel was frequently undercut as the discussion consistently veered into the problems with AI. Those problems sometimes seemed less about the technology itself and more about the humans in charge of using and improving it.
The ethics of AI use were a chief concern because, as Boutin said, passing off other people’s work as your own is fraud, and a computer can only replicate the input it is given from other sources. An AI can blend elements of other artists’ styles, copy the sound of a voiceover artist’s voice, and recombine those elements into new patterns, but it can’t create something genuinely new, unseen, or artistic.
The panel identified another ethical concern: advancing AI too far could cause a “massive distribution of labor.” Many people in the tech industry are already losing their jobs because a computer can do the same critical tasks more quickly and at nearly no cost. Pournelle warned that 80% of general-practitioner doctors could be replaced by some form of AI diagnosis system in the not-too-distant future.
The people who make the decisions to cut employees are often uninformed, chasing hype and trends, lured by the promise that decreased labor costs mean increased profits. Considering how unreliable many current AI systems are at human-replacement tasks, companies that rush to downsize for profit risk alienating customers and sacrificing customer service for free labor. Pournelle highlighted that the general public still has the leverage to let companies know their AI-only business models are not acceptable, simply by wielding the power of the dollar.
The panelists also discussed the impact on the skilled trades involved: fewer jobs lead to fewer people learning the trade, shrinking already limited hiring pools. Blackwolf likened it to woodworking, once an entire career and now considered a hobby. Programmers, artists, and other skilled people will become even harder to find.
Blackwolf and Pournelle warned about the problem of data poisoning. AI models require a steady flow of new writing, code, artwork, or music to keep improving, and that data must come from a known, controlled source to preserve the model’s usefulness. An artificial intelligence program will not generate original ideas, only combinations of existing patterns, which runs the risk of a computer basing content on other computer-generated content, compounding the errors.
Data corruption is also a risk because AI programs don’t understand the context of the prompts they are given. Blackwolf gave the example that an AI could not tell the difference between a language prompt and a math prompt. Words input to prompt an answer to a numerical equation will return an answer in words, not numbers, based on how likely those words are to appear in that order elsewhere in the data. It uses statistics to predict patterns rather than performing math. Because of this side effect, AI will give memes and internet jokes the same weight as researched and verified information, depending on the data the AI was trained on.
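The panel’s point about patterns versus math can be made concrete with a toy sketch. The mini-corpus below is hypothetical and the model is far simpler than anything like ChatGPT, but it shows the mechanism: a predictor that only counts which word follows which will “answer” an equation by frequency, so a joke answer that appears more often in the training text beats the correct one.

```python
from collections import Counter, defaultdict

# Hypothetical training text in which the meme answer ("five")
# appears more often than the correct answer ("four").
corpus = (
    "two plus two is four . "
    "two plus two is five , joked the meme . "
    "two plus two is five , joked the meme . "
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the statistically most likely next word -- no arithmetic is done."""
    return follows[word].most_common(1)[0][0]

# The "answer" to the equation is whichever word most often followed "is":
print(predict("is"))  # prints "five" -- the meme outweighs the math
```

The model never evaluates 2 + 2; it only ranks word frequencies, which is why, as the panel noted, a widely repeated joke can carry the same weight as a verified fact.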
A Case for Cautious Optimism
The final consideration was more of an afterthought for the panel: AI-created content is still unknown legal territory. Not only is there a lot of technological growing to do, there are also a lot of legal loopholes to close before we know who owns, benefits from, or is credited for work created using AI. It is up to the legal system to protect consumers and artists, and up to consumers to advocate for themselves by applying the same bottom-line pressure they would to any other product.
The tools are there, and the technology is becoming more prevalent every day. “If done right, it could improve the lives of many people, greatly,” said Pournelle. He and the rest of the panel stressed that it depends on how individuals use the tool. “The key is: it’s an excellent servant, but it would be an extremely poor master.”