LLMs, new Weapons of Mass Disinformation?


THE DUAL-EDGED SWORD OF LARGE LANGUAGE MODELS (LLMs)

Image generated by the Author using Midjourney 5

Welcome to the Dual-Edged Sword of LLMs series!

Let's not mince words, the grand reveal of ChatGPT, a Large Language Model (LLM), was a phenomenon that swept the globe off its feet, unveiling a brave new world of Natural Language Processing (NLP) advancements. It was as if the curtains lifted and the public, along with governments and international institutions, witnessed the bold strides that this technology had taken under their noses. What followed was a veritable firework display of innovation. Take, for instance, ThinkGPT, a nifty Python library that equips LLMs with an artificial chain of thoughts and long-term memory, almost making them ‘think' (no pun intended). Or AutoGPT, another library capable of handling complex requests and spawning AI agents to fulfil them.
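To make the pattern behind tools like ThinkGPT and AutoGPT a little more concrete, here is a minimal sketch of the "agent loop" most of these wrappers implement: call the model, feed its previous outputs back in as memory, and repeat. To be clear, this is not the code of either library; it assumes the official openai Python client (v1+), and the model name, prompts, and memory strategy are placeholders of my own.

```python
# A minimal, illustrative sketch of the "agent loop" pattern behind tools
# like ThinkGPT or AutoGPT (not their actual code). Assumes the official
# `openai` Python client and an OPENAI_API_KEY in the environment; the
# model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
memory: list[str] = []  # naive long-term memory: the model's past outputs

def step(goal: str) -> str:
    """Ask the model to propose the next action towards a goal, given memory."""
    context = "\n".join(memory[-10:])  # keep only the most recent notes
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are an autonomous assistant. "
             "Think step by step and propose one concrete next action."},
            {"role": "user", "content": f"Goal: {goal}\nNotes so far:\n{context}"},
        ],
    )
    action = response.choices[0].message.content
    memory.append(action)  # remember what was proposed for the next iteration
    return action

# Each call chains the model's previous outputs back into its prompt,
# which is what gives these wrappers their apparent "chain of thought".
print(step("Summarise today's AI policy news into three bullet points"))
```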

ThinkGPT and AutoGPT are only two examples of the hundreds of applications developed on top of LLMs' APIs. I've been impressed by the ingenuity with which people have seized these newfound tools, creatively repurposing the Lego blocks handed out, quite liberally might I add, by corporate giants such as OpenAI, Facebook, Cohere, and Google. But here's where I put on my serious cap, folks. As our beloved Uncle Ben wisely admonished (comic book aficionados, you know the drill; if not, I suggest you make haste to the nearest Spider-Man issue), "With great power comes great responsibility." Frankly, I'm not entirely convinced these companies exercised due responsibility when they set their brainchildren loose for the world to tinker with.

Image courtesy of Wikipedia

Don't get me wrong. I've been knee-deep in applied NLP technologies for the past six years, creating novel Intelligence solutions for National Security, and I'm a staunch believer in their transformative potential (even before the rise of the Transformers in 2017 – have a look at my article about OSINT and LLMs). I foresee a future where "modern" will be a quaint, antiquated term because these technologies will have reshaped society as we know it. But, like the flip side of a coin, there's a danger lurking – not the rise of a malevolent Artificial General Intelligence (AGI) hell-bent on a Skynet-esque human wipeout (and yes, I sincerely hope you're familiar with that reference!). Rather, I'm alluding to the unintended misuse and, worse, the deliberate perversion of this technology.

So, ladies and gentlemen, welcome to "The Dual-Edged Sword of Large Language Models (LLMs)," a series designed to cast a spotlight on the shadowy recesses of this groundbreaking technology. I'm not here to be a naysayer to the progress of advanced AI. Rather, I aim to spark a vibrant debate on the nature and applications of these technologies, striving to harness their potential for good, not ill.

What will you find in this article?

This article is divided into two main sections, each corresponding to the era before and after the emergence of Large Language Models (LLMs). Feel free to explore at your convenience!

  • The first section delves into the various facets of propaganda, its historical usage, and how technology has been progressively leveraged to enhance its effectiveness. We will talk about social sciences, manipulation techniques, Web 1.0, Web 2.0, and how these elements were harnessed for large-scale mis/disinformation campaigns, such as during the Brexit referendum and the Trump campaign.
  • The second section takes a more focused look at LLMs, examining their transformative effect on our consumption of information and the potential risks they pose in this particular domain. I also offer my perspective on how we arrived at this point and suggest potential measures to mitigate these adverse impacts.

Propaganda and technology: a love story

Propaganda is a crafty technique in the pursuit of power, deployed not through the brute force of war but the subtle art of persuasion (a respectful nod to Clausewitz). This strategy, as old as governance itself, has been utilised by empires and nation-states since the early days. Some historians trace its first use back to the Persian Empire, around 515 BC!

As societies expanded and political structures became more complex, rulers found a variety of methods necessary to maintain order and cooperation among their people. Among these strategies, which include the classic "bread and circuses" and others, propaganda found its spot. While it was a very raw and blunt version of what we can experience today, and far from being the star of the show, it certainly had a role to play.

First game-changer: the invention of the printing press.

This revolutionised how information was disseminated. Suddenly, narratives once mainly confined to palace courts and carried by messengers found a larger audience in the wider population. With its expanded reach, information began to transform into a more potent instrument of influence. The pen, or in this case, the press, began to flex its muscles, not overshadowing, but certainly standing firm alongside the sword. Thus, the dynamics of influence took on a fresh, new form.

Why do I bring up this topic today, at a time when the printing press is gradually fading into history? To remind ourselves that the key to successful information campaigns is their reach: the people they influence. And the more people they reach, the more powerful they become.

How Britain prepared (for WWI), picture courtesy of Wikipedia

Now, let's hit the fast-forward button to the 20th century.

Second game-changer: social sciences and the levers of manipulation

With advancements in social sciences and the emerging field of crowd psychology, European nations found new ways to sway their populations. The printing press provided the audience, the social sciences the method. The work of French author Gustave Le Bon became a playbook for leaders of autocratic regimes in the 1930s. They used it to tap into their citizens' frustrations and fears, offering them a simplified worldview where clearly defined enemies threatened their nation, their way of life, and traditions. These leaders portrayed themselves as the all-knowing guardians who would lead their people to victory.

Democratic regimes resorted to it too, using it to steer their populations towards accepting (or at least not opposing) the constraints and restrictions that came with living under wartime conditions and participating in the war effort. Some might argue this was a necessary step for the greater good.

Nonetheless, it's crucial to keep in mind that while the reduction of complex realities is sometimes viewed as a necessary evil during strenuous times, it should never be promoted. Resorting to oversimplified truths and emotionally charged language to incite emotional rather than rational reactions can fan the flames of fear, hatred, and division. Such tactics have the potential to trigger catastrophic consequences and were instrumental in facilitating unthinkable human tragedies, as the Holocaust all too tragically demonstrated.

The information age was yesterday: Web 1.0 & 2.0

Fast forward to 1993 and the third game-changer: the birth of the World Wide Web as we know it. Stemming from an idea for "linked information systems," computer scientist Tim Berners-Lee released the source code for the world's first web browser and editor. Suddenly, the vision of Web 1.0 was a reality, carrying with it an ambitious hope for the betterment of humanity. After all, with access to the world's collective knowledge, how could we not elevate ourselves?

The intention was idealistic and genuinely optimistic. Imagine a world where anyone could access content published by the best universities and think tanks, engage in open discussion, and hold their institutions accountable through transparent access to information. It was the dream of a more enlightened society, driven by the power of information.

Instead, we got… LOL Cats. Well, not entirely, of course. There were and still are meaningful contributions and impressive strides in knowledge sharing (and this is still true with the rise of LLMs). But alongside them, a new culture of entertainment, idle scrolling, and attention-grabbing headlines took root.

Visualisation of Internet routing paths, image courtesy of Wikipedia

The rise of Web 2.0, characterised by the creation and multiplication of social media platforms, brought this dynamic into sharp focus. Initially hailed as new mediums for connecting humanity, they also became mirrors reflecting our divisions, amplifying them through algorithms and echo chambers. The discourse once contained within the specific forums and blogs of Web 1.0 spilled into the mainstream, shaping our perception of reality in ways we're only beginning to understand. Lobbyists and campaigners now know exactly where to focus their efforts and whom to target, as the majority of the adult and teenage population is now online. The potential mediums for influence campaigns on the web have shifted from hundreds of community websites and blogs to a few dominant social media platforms, which host and monitor these communities using semantic search engines and data analytics tools, thereby simplifying the logistics of such campaigns and amplifying their effectiveness by several orders of magnitude.

We finally arrive at the present day. The once utopian promise of the Internet has veered off course. A technology intended to inform us has become a battleground for our attention. The information age was yesterday.

This isn't to say that the greater good has been entirely lost, but rather that the shadows are growing harder to ignore. The near-ubiquitous adoption of the internet and social media has given rise to unintended consequences that many of the early pioneers likely never foresaw. While these platforms were hailed as the ‘great democratisers' of information, they also inadvertently created an environment where misinformation thrives. The power to share information quickly and widely can be an incredible force for good, but it also provides a potent vehicle for propagating misinformation and propaganda. As users, we were promised a feast of knowledge, but now, we're scrambling to distinguish fact from fiction, truth from illusion. The recent trend of ‘fake news' and its ability to gain rapid traction online is a glaring testament to this.

What happened in 2016? Echo chambers and targeting algorithms.

The perfect storm: virality, data analytics and crowd manipulation

In 2016, a powerful convergence occurred, bringing together the advances of social sciences and of Web 1.0 and Web 2.0 technologies, and creating an unprecedented storm in the political arena. This was the year of the Brexit referendum and of the US presidential election, with Donald Trump and Hillary Clinton clashing head-to-head. These events were characterised by four key phenomena: targeted messaging to undecided voters, orchestrated campaigns against expert opinion, the implementation of sophisticated targeting algorithms, and the propagation of the echo chamber phenomenon. Suddenly, social media platforms, initially designed as harmless tools for information sharing and fostering connections, evolved into potent instruments of misinformation and propaganda. They disseminated content at a speed that outpaced the capacity of neutral parties and experts to verify the information's authenticity.

Photo by John Cameron on Unsplash

Take the Brexit referendum, for instance. The Leave campaign launched an audacious claim that the UK's departure from the EU would liberate an additional 350 million pounds weekly for the NHS. Despite this assertion being promptly debunked by independent fact-checkers, it found resonance among a sizeable number of voters. The question arises: why? The answer partly lies in the evolving use of social media analytics, enabling campaigners to gauge the ‘sentiments', not merely the opinions, of various communities regarding the European Union. This data revealed that a large portion of the British public was uncertain about the benefits of EU membership and primarily concerned with more immediate issues, such as immigration and the state of the NHS. Armed with this insight, the campaigners designed highly customised messaging strategies, identifying the right groups to target with the help of social media analytics. The inherent virality of these platforms did the rest.
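To give a sense of what "gauging sentiments" looks like in practice, here is a toy sketch using the open-source Hugging Face transformers library. It is purely illustrative, and certainly not the proprietary tooling those campaigns used; the example posts are invented, and the model is simply the library's default English sentiment classifier.

```python
# An illustrative sketch of coarse "sentiment gauging" over social posts,
# using the open-source Hugging Face `transformers` library. This is NOT the
# tooling used in 2016; the posts below are invented examples.
from collections import Counter
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

posts = [
    "The NHS is falling apart and nobody in Brussels seems to care.",
    "EU membership keeps our exports cheap, why would we leave?",
    "Honestly I have no idea what the EU even does for us.",
]

results = classifier(posts)
tally = Counter(r["label"] for r in results)

# Aggregated labels, not individual opinions, are what campaigners act on:
# they reveal which themes a community reacts to most strongly.
print(tally)  # e.g. Counter({'NEGATIVE': 2, 'POSITIVE': 1})
```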

Meanwhile, on the other side of the Atlantic, Donald Trump's presidential campaign was employing similar tactics. Bold assertions, like the pledge of having Mexico fund a border wall, found acceptance among many voters, despite being extensively debunked.

Cambridge Analytica: patient zero, or virus zero

A notable player in both of these events was the consulting firm Cambridge Analytica, whose controversial role in these political happenings has been vividly chronicled in the Netflix documentary ‘The Great Hack'.

The firm collected data from millions of social media profiles to execute highly targeted voter influence strategies. Drawing on the insights of crowd psychologist Gustave Le Bon, the firm exploited fears, lack of knowledge, and frustrations to influence public sentiment. Yet, these strategies didn't operate in isolation. Algorithms deployed by social media platforms contributed to an ‘echo chamber' effect. These algorithms selectively showed users content aligning with their existing views, reinforcing their beliefs, and, in some cases, pushing them towards more extreme positions. Additionally, as mentioned above, undecided voters were identified and subjected to an onslaught of highly tailored messages, designed to sway their stance. In this way, technology was used not just to spread specific narratives, but to create conditions ripe for their acceptance.

Foreign actors, the Russian state in particular, were also involved, releasing hacked campaign material and using Facebook and Twitter to propagate rumours to discredit the candidates least favourable to their own agenda, as revealed by the US Senate Intelligence Committee's report on Russian active measures campaigns and interference in the 2016 US election.

Note that crafting each of these messages was the responsibility of teams of data analytics and social science experts, who spent days creating the content and planning the campaigns; this is precisely what changes with the arrival of foundation models.

Say hello to the era of Machine-generated truth

I enjoy referencing a poignant proverb when discussing the advancements of Deep Learning with senior leadership audiences: "The road to hell is paved with good intentions." The misappropriation of transformative technology for detrimental ends seems to be a recurring human pattern. Various examples lend credence to this view, including the harnessing of nuclear fission, which led to both nuclear energy and the nuclear bomb. The same is true for Artificial Intelligence, and more specifically, Large Language Models (LLMs).

Hacking human language

These technologies possess the potential to dramatically enhance our efficiency, yet they also embody a near-existential risk. I contest the notion that the benefits of AI merely need to outweigh its negative repercussions for it to serve humanity. If the outcome is such that humans can no longer differentiate between truth and manufactured falsehoods, then the myriad other revolutions that LLMs could facilitate become moot, drowned in the chaos created by machine-speed misinformation and the potential collapse of our democratic institutions.

Indeed, the release of the latest LLMs has introduced a new dimension to the manipulation of information, one that could potentially undermine the very fabric of our democratic societies. GPT-4, Claude, Bard, and their siblings, with their ability to generate human-like text at an unprecedented scale and speed, have essentially acquired the capability to ‘hack' language, the principal means of human communication.

Language is the cornerstone of our societies. It is the medium through which we express our thoughts, share our ideas, and shape our collective narratives. It is also the vehicle through which we form our opinions and make our decisions, including our political choices. By manipulating language, LLMs have the potential to influence these processes in subtle and profound ways.

Photo by Jonathan Kemper on Unsplash

The capability of Large Language Models (LLMs) to generate content that aligns with a specific narrative or appeals to particular emotions is the piece that was missing from the disinformation campaigns of 2016. Recall the extensive efforts invested in crafting political messages that would resonate with specific communities, and the teams of experts required for such tasks? The advent of LLMs has rendered this entire process nearly obsolete. The creation of persuasive and targeted content can be automated, making it possible to generate vast amounts of disinformation at a scale and speed that is beyond human capacity. This is not just about spreading false information, but about shaping narratives and influencing perceptions. The potential for misuse is enormous. Imagine a scenario where these models are used to flood social media platforms with posts designed to stoke division, incite violence, or sway public opinion on critical issues. The implications for our democratic societies are profound and deeply concerning.

Deepfakes and the post-truth era

The danger is compounded when LLMs are combined with other foundation models such as Stable Diffusion or Midjourney. These models can generate hyper-realistic images and videos, creating a potent tool for disinformation campaigns. Imagine fake articles backed up by seemingly authentic pictures and videos, all generated by AI. The ability to fabricate convincing multimedia content at scale could dramatically amplify the impact of disinformation campaigns, making them more effective and harder to counter.

Take, for example, the deepfake video of President Volodymyr Zelensky appearing to call on his troops to surrender, spread on social media. While this particular fake was easy to debunk because of its significance, it is proof of the disruptive power that large transformer-based models, when coupled together, can have. Another striking event was the mention of this very logic by US Senator Richard Blumenthal during a Senate hearing on Artificial Intelligence (full video at the end of this article), after he played an audio recording of himself reading his introductory remarks, only to reveal that both the text and the audio had been AI-generated: "What if it [GPT-4] had provided an endorsement of Ukraine surrendering, or Vladimir Putin's leadership"…

Furthermore, the virality of social networks can accelerate the spread of disinformation, allowing it to propagate at machine speed. This rapid dissemination can outpace the efforts of serious journalistic publications, think tanks, and other fact-checking organisations to verify and debunk false information. For instance, consider a scenario where a fake terrorist attack is propagated across social media, backed by dozens of fake smartphone videos and photos capturing pieces of the attack, and supported by hundreds of social media posts and fake articles crafted to mimic BBC, CNN, or Le Monde publications. Traditional media outlets, in their fear of missing out and losing audience to competitors in an increasingly challenging market, might feel compelled to report on the event before its veracity can be confirmed. This could lead to widespread panic and misinformation, further exacerbating the problem.

Why don't we pause the development of LLMs?

Transformational technologies, by their very nature, have the power to effect profound change. In capitalist societies, they are viewed as vital commodities ripe for exploitation, and that translates into a gold rush to vendor-lock and sell their outputs. This manifests in geopolitics as Nation-States strive to control and master these crucial assets of power. This was true with nuclear fission, and it is now true for Artificial Intelligence. The evidence lies in the surge of policies, specialised offices, and teams at National Security institutions over the past three years, solely committed to the creation and control of AI-driven technologies. AI has succeeded Data as the buzzword in all significant Defence organisations and government departments.

For instance, the Pentagon consolidated various departments responsible for data management, artificial intelligence development, and research, culminating in the establishment of the Chief Digital and Artificial Intelligence Office in February 2022. The United Kingdom has instituted a dedicated Defence Artificial Intelligence Strategy (published in June 2022), a move mirrored by France, NATO, China, Russia, and India.

Why am I telling you this? I am convinced that the unregulated release of powerful AI models, including LLMs, to the general public can be attributed to at least two primary factors. The first stems from the prevalent lack of technological literacy among government officials, an observation drawn from numerous presentations I've made to such audiences.

But the second factor – and arguably the most critical – is the widespread conviction among government officials and executives in medium to large companies that imposing AI regulations would hinder progress. They fear that such regulatory constraints would disadvantage their organisations and nation-states, particularly when compared to countries charging forward unrestrained in the AI landscape.

Photo by MIKE STOLL on Unsplash

This is an ongoing narrative in the US Congress and its Defense and National Security committees: regulating AI now would slow down its development, the US would fall behind China, and it would therefore be at a strategic disadvantage. This narrative is skilfully crafted and promoted within Western nations primarily by defense-oriented, AI-first companies, the most prominent being those backed by the Peter Thiel ecosystem. Advocates of the "move fast and break things" approach, companies like Palantir and Anduril, stand out (as an amusing detail, note how both names reference potent magical artifacts from ‘The Lord of the Rings').

However, we must not overlook the European Union's attempts to regulate the unchecked development of AI, particularly from a data and intellectual property protection perspective. Nevertheless, given that most leading large language model (LLM) creators are American, these regulations would inevitably be applied ex post facto, i.e., after the AI models have been deployed worldwide. By that point, it will already be too late.

Mastering critical technologies is undeniably a prerequisite for gaining an edge in Great Power competitions, yet this should not serve as a justification for the absence of debates, simplistic argumentation, or unchecked deployment of these technologies. We need to bear in mind at least two significant examples: the leak of the LLaMa model from Facebook, and the existing Chinese AI regulatory policy. These topics will be the focus of subsequent articles in this series.

But as a teaser, ponder this: Would the LLaMa leak have transpired if Facebook were subject to meticulously crafted cybersecurity regulations regarding AI model deployment? Moreover, consider China's AI regulatory framework. By various metrics, it is far more advanced and stringent than any of its Western counterparts. This challenges the notion that China, free from unnecessary red tape, moves full steam ahead with the development and deployment of advanced AI solutions.

So what, and what's next?

As we stride forward into the era of machine-generated truth, the onus falls on governments, corporations, and society at large to establish safeguards that mitigate the risks while reaping the benefits. Transparency in AI development, responsible usage norms, robust machine-led and human-controlled fact-checking mechanisms, and advanced AI literacy are just some of the proactive measures that need urgent attention. These topics, especially the last one, will be central to another article in this series.
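To illustrate what a "machine-led, human-controlled" fact-checking step could look like in code, here is a minimal sketch that scores a claim against a trusted reference passage with an off-the-shelf natural language inference (NLI) model from the Hugging Face hub. The model choice, the texts, and the decision rule are assumptions made for the example, not a production pipeline, and the final judgement would still rest with a human reviewer.

```python
# A minimal sketch of one possible "machine-led, human-controlled" fact-checking
# step: scoring a claim against a trusted reference passage with an off-the-shelf
# natural language inference (NLI) model. Model choice, texts, and decision rule
# are illustrative assumptions; a human reviewer makes the final call.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "facebook/bart-large-mnli"  # a publicly available NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

reference = ("Independent analyses found the UK's net weekly contribution "
             "to the EU budget to be well below 350 million pounds.")
claim = "Leaving the EU frees up 350 million pounds every week for the NHS."

# NLI models classify a (premise, hypothesis) pair as contradiction / neutral / entailment.
inputs = tokenizer(reference, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

labels = ["contradiction", "neutral", "entailment"]  # label order for this model
verdict = labels[int(probs.argmax())]
print(verdict, probs.tolist())  # a "contradiction" verdict flags the claim for human review
```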

It's high time we deliberate on the responsible use of LLMs, fostering a culture of AI ethics and inclusivity. Technology, in the end, is a mere tool – its impact hinges on the intentions of those who wield it. The question is, will we let it be a tool for fostering a more enlightened society or a weapon for fanning the flames of division? We're only at the beginning of understanding these powerful tools.

Full disclaimer: as a former professional in the fields of Defence and National Security, I have a tendency to view human technological exploitation with a degree of pessimism and suspicion, unfortunately. So, when I see recent news in AI regulation, particularly those narratives pushed by the creators of large language models themselves, it certainly catches my attention. Take, for example, OpenAI CEO Sam Altman's recent appearance before a US Congress committee. While I very much welcome his ideas, to me this seems like a calculated move to secure their advantage, raise barriers for newcomers, and create that ‘moat' concept, something that was brought up in a Google internal note that made the news a few weeks ago. But again, I am aware that this might be my bias talking, and I will try to stay aware of it throughout this series. Ultimately, this might be what we need: if governments are unwilling or unable to impose stringent regulations on AI development, then it may fall to the private sector to take the lead. However, this approach would inherently come with individual company agendas and unique strategies.

Stay tuned for future articles where we will delve deeper into the potential misuse of LLMs in manipulating political discourse, their role in exacerbating socio-economic inequality, and how they might be used to circumvent privacy norms. We'll also explore potential strategies and policies to manage these issues, from technology to regulatory oversight, and the need for public awareness and education about these evolving technologies. Our journey has just begun.


Liked this article ?

Let's get acquainted! As I said, we are only at the beginning of our journey. This series is yours as much as mine. Comment, discuss, share, criticise! The goal is to spark discussion.

And if you want to go further, let's connect! You can find me on LinkedIn or follow me on Medium!

Thanks for your support, and I shall see you in the next one!

