The Urgent Need for Responsible Use of Generative AI

Photo by Google DeepMind on Unsplash

What is this about?

"Why do you think responsible Generative AI (GenAI) is important and urgent?" This is a question being posed today by policymakers, researchers, journalists, and concerned citizens alike. Rapid progress in GenAI has captured public imagination, but also raised pressing ethical questions. Models like ChatGPT, Bard, and Stable Diffusion showcase the creative potential of the technology – but in the wrong hands, these same capabilities could foster disinformation and manipulation at unprecedented scale. Unlike previous technologies, GenAI enables the creation of highly personalised, context-specific synthetic media that is difficult to verify as fake. This poses novel societal risks and complex governance challenges.

In this blog post I will dive into four aspects (Scale & Speed, Personalisation, Provenance, Diffusion) that distinguish this new age of GenAI from what came before, and explain why now is the right time to look into the ethical and responsible use of AI. In short, I aim to answer the question "Why now?" Potential solutions will be explored in a subsequent article.

Why is it important?

Responsible GenAI is not just a hypothetical concern relevant to tech experts. It's an issue that affects all of us as citizens navigating an increasingly complex information ecosystem. How can we maintain trust and connection in a world where our eyes and ears can be deceived? If anyone can produce compelling yet completely fabricated realities, how does society arrive at shared truths? Unchecked, the misuse of GenAI threatens foundational values like honesty, empathy, and human dignity. But if we act collectively and quickly to implement ethical AI design, we can instead realise generative technology's immense potential for creativity, connection, and social good. By speaking up and spreading awareness, we can influence the trajectory of AI in a more aligned direction.


Scale and Speed

Generative models enable the creation of realistic fakes at staggering scale, with unprecedented speed and ease. A single person can generate endless customised audio, video, images, and text with just a few simple prompts and clicks. This introduces an entirely new level of efficiency and throughput for manufacturing manipulated content. Teams of human trolls cannot compete with AI systems that churn out tailored fakes around the clock. With enough computing power, bad actors could flood social networks and drown out authentic voices through sheer artificial volume. As generative models become more accessible and convincing, orchestrating mass-scale misinformation campaigns no longer requires much expertise or many resources.
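To make the throughput argument concrete, here is a minimal Python sketch of the kind of loop involved. Everything in it is invented for illustration: the generate function is a hypothetical stand-in for a call to any text-generation API, and the audience segments, topics, and tones are placeholders. The point is that a handful of lines, looped over a few lists, already yields dozens of unique, tailored messages, and scaling the lists scales the output.

```python
import itertools

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a text-generation API call.
    # A real model would return fluent, unique prose for each prompt.
    return f"[synthetic post conditioned on: {prompt}]"

# Invented audience segments, topics, and tones -- the entire
# "campaign infrastructure" is just these three lists and a loop.
audiences = ["retirees", "students", "small-business owners", "parents"]
topics = ["a local scandal", "election rumours", "health scares"]
tones = ["worried", "sarcastic", "folksy"]

messages = [
    generate(f"Write a {tone} social media post about {topic}, aimed at {audience}.")
    for audience, topic, tone in itertools.product(audiences, topics, tones)
]

print(f"{len(messages)} distinct tailored messages from one person's laptop")
```

These three short lists already produce 36 distinct messages; swap in a real model and longer lists, and the same dozen lines could generate thousands of unique posts per hour.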

This is not a new phenomenon, of course. Twitter bots, for example, have been around for quite some time, and by some estimates they account for roughly 25% of all tweets, i.e. ~215 million tweets per day. But as GenAI improves, distinguishing between bot-generated and human content will become increasingly challenging.

Personalisation

GenAI can craft content tailored precisely to exploit an individual's vulnerabilities and experiences. This enables psychological manipulation more powerful than blanket misinformation. Hyper-targeted fakes designed to resonate with personal context subvert human discourse by destroying notions of shared truth and reality. When any person can be fed their own unique set of AI-fabricated "facts", how can society arrive at consensus? Such personalisation risks driving polarisation and tribalism, eroding empathy and connection between groups.
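A hedged sketch of the mechanism may help here. The profile fields, names, and prompt template below are entirely invented for illustration; they simply show how easily scraped profile data can be folded into a prompt so that each target receives a message built around their own life and anxieties.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Invented example fields: the kind of data inferable from public posts.
    name: str
    hometown: str
    worry: str  # an anxiety inferred from what the person shares online

def personalised_prompt(p: Profile) -> str:
    # Each target gets a message anchored in their own context, which is
    # what makes it more persuasive than blanket misinformation.
    return (
        f"Write a short, urgent post addressed to {p.name} from {p.hometown}, "
        f"claiming that {p.worry} is being covered up by local officials."
    )

targets = [
    Profile("Alice", "Springfield", "drinking-water contamination"),
    Profile("Bob", "Riverton", "pension-fund shortfalls"),
]
for t in targets:
    print(personalised_prompt(t))
```

Each person receives their own fabricated "facts", which is precisely the consensus-destroying dynamic described above.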

This, of course, is a hot topic in light of the upcoming 2024 US election. For example, in May 2023, Donald Trump shared a doctored video of CNN anchor Anderson Cooper on his social media platform Truth Social. Reuters put it poignantly:

Welcome to America's 2024 presidential race, where reality is up for grabs.

Provenance

Unlike fakes made with earlier tools such as Photoshop, generative fakes are extremely challenging to verify through forensic analysis. This obfuscated provenance grants bad actors plausible deniability and the freedom to erode notions of objective truth. Even diligent individuals cannot realistically verify the provenance of all the generative content they encounter. This asymmetry enables misinformation even when generative content is not convincingly realistic upon close inspection.
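To see why provenance is so fragile, consider the most naive verification scheme: publishing a cryptographic hash of the original file. The sketch below uses only the Python standard library, with made-up byte strings standing in for media files. It shows that even a trivial re-encoding change produces a completely different hash, so while a matching hash can confirm an original, a mismatch proves nothing about whether content is fake.

```python
import hashlib

# Made-up byte strings standing in for an original video and a
# re-encoded copy of the same content (re-encoding always changes
# the raw bytes, even when the visible content is identical).
original = b"...original media bytes..."
reencoded = b"...original media bytes.!."

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(reencoded).hexdigest())
# The two digests share nothing, even though a viewer would perceive
# the "same" video. A missing or mismatched hash therefore tells us
# nothing, which is why provenance standards such as C2PA attach
# signed metadata to content instead of relying on bare file hashes.
```

This asymmetry is the heart of the problem: proving authenticity is sometimes possible, but proving fakery at scale is not.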

In March 2022, a deepfake video appeared to show Ukrainian President Volodymyr Zelenskyy surrendering to Russian forces. The video spread widely on social media and led some people to believe that Zelenskyy had actually surrendered. Because its origin was obscured, even diligent viewers had no practical way to verify where the footage came from, and that ambiguity allowed it to keep circulating even though its authenticity looked questionable on close viewing.

Diffusion

Once highly realistic fakes are produced by generative models, they can spread rapidly via social networks, messaging apps, and other digital platforms.

While this connects to the "Scale and Speed" section, it's crucial to look at this from a different perspective: Deepfakes are often designed to be emotionally engaging. They may show people doing or saying things that are shocking, scandalous, or otherwise attention-grabbing. This makes them more likely to be shared on social media, where people are constantly looking for new and interesting content. The more people who see a deepfake, the more likely it is that someone will believe it is real. Even though each individual fake may not fool careful scrutiny, at scale the sheer volume of diffusion can overwhelm efforts to track and counter misinformation. Virality gives generative fakes a reach and impact that are challenging to rein in once they are out "in the wild". Platforms already struggle with moderating simpler forms of misinformation – content created by GenAI raises the bar even higher.
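A toy branching-process model, with all numbers invented for illustration, makes the virality point concrete: if each viewer reshares with even a small probability, reach grows geometrically, and the fake can saturate a network before fact-checkers respond.

```python
# Toy diffusion model. Each viewer reshares the fake with probability
# p_reshare, and each reshare reaches reach_per_share new viewers on
# average. All parameters are invented for illustration.
p_reshare = 0.02       # 2% of viewers reshare
reach_per_share = 200  # average audience of one reshare

viewers = 1_000        # the seed audience
total = viewers
for generation in range(1, 9):
    # Growth factor per generation is p_reshare * reach_per_share = 4,
    # the social-media analogue of an epidemic with R > 1.
    viewers = int(viewers * p_reshare * reach_per_share)
    total += viewers
    print(f"generation {generation}: {viewers:>12,} new viewers ({total:>13,} total)")
```

With these made-up numbers the fake reaches tens of millions of people within eight sharing generations. Real networks eventually saturate, but the early geometric phase is exactly the window in which moderation is too slow.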

To give a concrete example: In March 2023, an AI-generated photo of Pope Francis went viral on social media, with one tweet amassing nearly 21 million views – it even received the nickname "Balenciaga Pope". According to the New York Post, the artist who created the image did not enjoy the publicity at all, quite the opposite:

Pablo Xavier, the AI artist who allegedly generated the image, claimed that he "didn't want it [the pictures] to blow up like that" and admitted it's "definitely scary" that "people are running with it and thought it was real without questioning it."


Conclusion

These unprecedented capabilities – scale & speed, personalisation, obfuscated provenance, and diffusion – fundamentally transform the nature of misinformation in a way that demands urgent debate. How do we deal with a technology that makes it possible to gaslight people en masse and destroy consensus reality? What governance is needed to maintain trust and truth in online discourse? Can we rein in harmful applications of GenAI while nurturing beneficial ones? There are no easy answers, but having earnest discussions now is critical to steer this technology toward ethical outcomes.

The stakes could not be higher when it comes to preserving human agency, dignity, and our shared reality in an AI-driven world. As generative models become more powerful and accessible, we need ethical guardrails and smart governance to prevent dystopian outcomes. There is urgency to act quickly and thoughtfully – before hypothetical risks become reality through automation and diffusion of AI manipulation. The window to shape the future of GenAI in a just, beneficial direction is now.


Heiko Hotz

Tags: AI Deepfakes Ethics In Tech GenAI Responsible AI
