Opinion

People must be more wary of content validity

by: Advika Anand
Graphics Editor

Since 2024, generative artificial intelligence (AI) video technology has improved immensely in both quality and accessibility. Tools such as OpenAI’s Sora, Luma Dream Machine, and Runway Gen-3 can now produce artificially generated video clips that are more realistic, longer, and more coherent than ever before, making it difficult to distinguish what is real from what is not. These videos appear frequently in advertisements, social media content, professional television production, and more. This raises a vital question: should these videos have grounds to exist in the first place, and what regulations should be imposed to minimize their online presence and the harms that follow? Until such safeguards exist, individuals must be more aware of the content they consume online and warier of AI that is harder to recognize, and social media users in particular should refrain from using and promoting AI-generated videos.

A prime example of deceptive AI is Sienna Rose, a widely known AI-generated musical artist. Although she is not human, a myriad of social media posts in her name promote her music, and many people have fallen for the charade. Rose also has a sizable fan base, with around 3.3 million monthly listeners on Spotify. Clearly, this “user” is deceiving listeners and spreading misinformation about her true identity and the operation’s intent. Social media platforms should not, in any way, allow this to happen. Not only does it undermine a site’s authenticity, but it also targets susceptible individuals. These sites have countless users who dedicate time and money to watching their videos, and those users cannot continue to be misinformed, especially regarding politics or health advice. Moreover, platforms like Instagram, TikTok, and YouTube should do a better job of labeling AI-generated videos as a means of counteracting the inaccurate messages they send.

Furthermore, AI-generated art unfairly competes with human musicians, filmmakers, and designers, who require months or even years to produce their authentic work. Engaging with AI videos online only exacerbates the situation by further devaluing the human creativity these fields demand. As graphic designer Grace Warren pointed out, “Feeling threatened by AI is a real thing, you are competing against it, and it makes you feel extra pressure to do even better.”

Because AI is, in fact, not human, it is not bound by societal rules, allowing those who deploy it to exploit humans’ “gullibility” for views, money, or fame. This lack of integrity affects everyone, and society should no longer gloss over or normalize it. Instead, we must take action to understand the difference between artificial and real content and to be aware of the content we consume. In this era, users are responsible for their own digital literacy if authenticity online is to be preserved, and that starts with recognizing AI-generated content and removing it from our daily lives.

(Sources: BBC, Imgix, Psychology Today)
