With $109.5 billion of growth expected between now and 2030, the global AI cybersecurity market is booming – and it's not hard to see why. According to a recent survey of security professionals, three-quarters (75%) have observed an increase in cyberattacks. Of those respondents, an overwhelming 85% attributed the rise to AI.
What is AI's role in cybersecurity, then? Is it enabling our online freedoms and safety or undermining them? It's a question Techopedia wanted to explore. But rather than consulting human experts, these inquiries drew on a different kind of expertise: that of AI itself. Techopedia asked five popular AI language models – ChatGPT, Claude, Llama, Perplexity, and Bard (now Gemini) – what they considered the internet's top cybersecurity threats.
Plot twist: it was AI! Yes, all five AIs, in differing ways, implicated themselves as a primary culprit in the internet's ongoing battle with malicious hackers, fraudsters, and thieves.
AI is Fueling the Flames of Cyberattacks in 2024
There are many ways in which AI adds to the world's online security (a whole branch of inquiry, defensive AI, is dedicated to this). However, in Techopedia's cybersecurity conversations with five different AI platforms, four of the five (80%) directly referenced AI's own role – not in enabling internet safety, but in imperiling it.
All five AIs discussed AI's role in creating and spreading malware and ransomware – payloads often delivered via phishing attacks – while several discussed AI's role in enabling more sophisticated cyberattacks through utilizing "advanced encryption methods and exploiting zero-day vulnerabilities." AI can also be implicated in a wealth of other types of cyber intrusions, including DDoS (Distributed Denial of Service) attacks, brute force attacks, and identity theft.
Phishing Attacks are Becoming More Sophisticated with AI
Phishing attacks are becoming increasingly complex. As these scams grow cleverer, so does their impact on organizations and individuals: business email compromise (BEC) attacks alone account for around $2.7 billion in losses annually, according to the FBI's Internet Crime Report.
What's more, AI is playing an evolving role – according to AI, at least.
"Phishing techniques have become increasingly sophisticated, often leveraging AI and machine learning to create highly convincing fake messages and websites," ChatGPT noted, while Perplexity AI also mentioned "AI-assisted phishing attacks." Meanwhile, Llama referenced "AI-powered phishing attacks and AI-powered malware" – three classic cases of AI implicating the top cybersecurity threat on the internet… AI itself!
AI is Enabling Fake News and Spreading Misinformation
In Techopedia's analysis of the AI platforms' responses on the internet's top cybersecurity threats, four of the five flagged deepfakes as a critical concern.
Deepfakes – synthetic, AI-generated media designed to manipulate or replace existing video, image, or audio content with a fabricated version – can be harmless comedic fodder. However, deepfakes can also be integral in fanning the flames of misinformation and 'fake news' campaigns, and – with 2024 bringing critical political elections in the UK and US, plus major geopolitical struggles raging on in Europe and the Middle East – this can have grave and tangible real-world consequences.
Llama reaffirmed this, saying, "Deepfakes and AI-generated scams can have serious consequences, such as influencing political decisions or causing public panic." Claude, another AI, added: "The internet makes it easy for false or misleading content to spread rapidly on social networks and other platforms, which can manipulate public opinion, influence elections, promote extremist views, and more."
Bard was the only AI to reference the evolving complexity of this type of AI-fueled cybercrime, writing, "Deepfakes are becoming increasingly sophisticated, making it harder to discern real from fake content. This, coupled with the spread of misinformation and disinformation, can have a chilling effect on democracy, fuel social division, and erode trust in information sources."
State-Sponsored Attacks are Growing in Number
Given that January 2024 alone brought six major state-sponsored cyberattacks – with the governments of Australia, Canada, and Ukraine all among the victims – it's no surprise that the AIs were alarmed.
"These attacks can target critical infrastructure, steal intellectual property, and influence political processes," ChatGPT wrote. Bard added that "cyberattacks targeting critical infrastructure like hospitals and schools are becoming more frequent and disruptive."
Perplexity AI called state-sponsored and critical national infrastructure (CNI) attacks a "significant concern, with major elections taking place in various countries," while Llama instead pointed the finger at "geopolitical tensions; some countries use cyber warfare as a form of espionage or sabotage."
Data Breaches are Rife – But it's Not All AI's Fault
Another finding from Techopedia's conversations with five AI tools? That AI is playing a role in data breaches as an evolving, ever-increasing cybersecurity threat.
Perplexity AI wrote that 2024 will see the "likelihood of data leaks [increase] and the development of new methods to bypass authentication," while ChatGPT wrote that "with increasing amounts of personal data stored online, data breaches remain a significant threat."
However, statistics suggest that humans aren't entirely guilt-free; roughly nine in ten (88%) data breaches have their roots in human error. Plus, a strong argument exists in favor of AI's role in mitigating human fallibility. According to a 2023 survey by IBM, the use of automation and AI saved organizations around $1.8 million in costs relating to data breaches and helped companies identify and contain these leaks an average of over 100 days faster.
AI is Becoming an Increasingly Treacherous Ethical Minefield
Four of the five AIs in Techopedia's findings mentioned data privacy as a major issue characterizing the AI debate in 2024 and beyond.
Claude was the most vehement critic here, stating that "there is a greater risk of companies and governments collecting user data without consent. Location tracking, browser history monitoring, and backdoors in devices and apps all contribute." (It's absolutely right, too – as we recently covered in a report about data brokering and its scary implications for anonymity.)
But data privacy was far from the only ethical conundrum discussed: surveillance, hate speech, cyberbullying, digital inequities, freedom of expression, internet censorship, and algorithmic bias and discrimination all surfaced as parts of the ever-growing ethical mire of AI.
Bard put it best, stating that "the gap between those with and without access to technology and the internet persists, limiting access to education, healthcare, and economic opportunities, while further widening existing social and economic disparities."
AI's Top Tips For Staying Safe on the Internet
Techopedia's conversations with the five AI language models didn't just yield a critique of their fellow AIs; they also surfaced a few handy tips for staying safe online.
These include educating your staff about existing and emerging hacking methods – particularly AI-enabled ones – and creating strong, unique passwords to safeguard your accounts and data. AI language models also recommend implementing multi-factor authentication, protecting your home network, and keeping your software updated.
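The "strong, unique passwords" advice is easy to act on programmatically. As a minimal sketch (not drawn from the article itself), the Python snippet below uses the standard library's `secrets` module – designed for cryptographic use, unlike `random` – to generate a password containing a mix of character classes:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation.

    `secrets.choice` draws from the operating system's cryptographically
    secure random source, making the result suitable for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Re-draw until the password contains lowercase, uppercase, and a digit,
    # so it satisfies common complexity requirements.
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)):
            return password

print(generate_password())
```

Generating a distinct password per account (and storing them in a password manager) addresses the "unique" half of the advice: a credential leaked in one breach then can't be replayed against your other accounts.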
About the author:
Rob Binns is a writer, editor, and content strategist based in Melbourne, Australia. He has produced a wide range of content across industries and sectors, with deep cybersecurity and VPN expertise plus specialisms in the digital payments, business software, and ecommerce spaces.
Editor's Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.