Generative AI tools like ChatGPT and Google Bard are among the most exciting technologies in the world. They have already begun to transform productivity and supercharge creativity across industries.
But as with any new technology, generative AI has brought about new risks—or, rather, made old risks worse. Aside from the much-discussed potential "AI apocalypse" that has dominated headlines in recent months, generative AI has a more immediate negative impact: creating convincing phishing scams.
With generative AI, cybercriminals create far more sophisticated scams than traditional phishing attempts. According to Visa research, scammers are fooling even the savviest internet users with pig butchering, inheritance, and humanitarian relief scams, as well as triangulation fraud. Let's look at how.
Pig Butchering Scams
Despite what the name suggests, pig butchering scams have nothing to do with pork. They're a sophisticated form of romance scam in which scammers build relationships with their victims, typically on social media, before persuading them to invest in a fraudulent cryptocurrency scheme.
According to Visa, 10% of surveyed adults reported being targeted in a pig butchering scam. These scams operate at such scale partly because the perpetrators are often themselves victims of human trafficking. Organized crime groups lure vulnerable individuals from abroad with the promise of legitimate work or a better life, then force them to work in a "fraud factory," running pig butchering scams against new victims.
Generative AI plays a crucial role in pig butchering scams as the perpetrators often don't speak the same language as their victims. Free tools like Google Bard and ChatGPT allow them to build convincing relationships with their targets despite language barriers.
Similarly, generative AI tools, alongside fraud factories, allow criminal groups to run pig butchering scams on a vast scale. In November 2023, the United States Department of Justice seized $9 million in profits from a single pig butchering network.
Inheritance Scams
Inheritance scams are nothing new, but generative AI has made them far more common and effective. According to Visa, 15% of US adults reported being targeted in inheritance scams.
Victims of inheritance scams typically receive an email or physical letter claiming a long-lost relative has died and left them a significant sum of money. Attackers then request money or personally identifiable information (PII) from the victim, supposedly to release the funds, which is a tell-tale sign of a scam.
Other indicators of an inheritance scam include senders urging victims to keep their inheritance a secret or suggesting they must act immediately to avoid losing money.
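These red flags are consistent enough that even a simple filter can surface suspicious messages for human review. The Python sketch below is purely illustrative, with an assumed keyword list and threshold rather than a production-grade detector; it flags the indicators described above: requests for money or PII, demands for secrecy, and artificial urgency.

```python
import re

# Hypothetical keyword groups based on the red flags described above.
# A real detector would use richer signals (sender reputation, NLP
# classification, URL analysis), not bare keyword matching.
RED_FLAGS = {
    "requests_money_or_pii": [
        r"\bprocessing fee\b", r"\bbank account\b", r"\bsocial security\b",
        r"\bdate of birth\b", r"\bwire transfer\b",
    ],
    "urges_secrecy": [
        r"\bkeep (this|it) (a )?secret\b", r"\btell no one\b", r"\bconfidential\b",
    ],
    "creates_urgency": [
        r"\bact (now|immediately)\b", r"\bwithin 24 hours\b", r"\bexpires? (today|soon)\b",
    ],
}

def scam_indicators(message: str) -> list[str]:
    """Return the red-flag categories found in a message."""
    text = message.lower()
    return [
        category
        for category, patterns in RED_FLAGS.items()
        if any(re.search(p, text) for p in patterns)
    ]

if __name__ == "__main__":
    email = (
        "Your late uncle left you $2.4M. To begin the transfer, send your "
        "bank account details within 24 hours and keep this confidential."
    )
    hits = scam_indicators(email)
    if len(hits) >= 2:  # two or more categories is a strong signal
        print("Likely scam. Indicators:", ", ".join(hits))
```

A message tripping several categories at once, as in the example, is a much stronger signal than any single keyword, which is why scammers' formulaic scripts remain detectable even when generative AI polishes the wording.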
Generative AI tools enable inheritance scammers to create convincing letters, emails, or personas to fool their victims. Scammers can use these tools to personalize communications, scanning publicly available information to generate tailored messages that elicit trust from potential victims.
Humanitarian Relief Scams
Humanitarian relief scams are particularly immoral. Scammers capitalize on genuine humanitarian crises – the famine in Yemen, the drought in Somalia, or the conflict in Gaza, for example – to collect donations they keep for themselves.
Scammers message their victims directly or request donations through social media posts that include links to cryptocurrency wallets. By creating fake accounts that claim to have donated to the phony charity, scammers convince genuine users that the charity is legitimate.
These scams typically include highly emotional language or images to convince victims to "donate." Visa's research suggests that most humanitarian relief phishing emails contain malicious attachments designed to steal personal information, while some include links to fake websites.
Again, generative AI tools allow scammers to overcome language barriers, create false accounts and communications at scale, and engage in convincing conversations with their victims.
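One practical defense follows directly from how these scams work: check where a donation link actually leads before giving. The sketch below is a minimal illustration, assuming a hand-maintained allowlist of official charity domains (the domains shown are examples only); real verification should rely on a charity regulator's registry.

```python
from urllib.parse import urlparse

# Assumed allowlist of official charity domains; illustrative only.
KNOWN_CHARITY_DOMAINS = {"redcross.org", "unicef.org", "wfp.org"}

def points_at_known_charity(donation_url: str) -> bool:
    """Check whether a donation link leads to an allowlisted charity domain."""
    host = urlparse(donation_url).hostname or ""
    # Accept the registered domain and its subdomains, e.g. donate.unicef.org.
    return any(host == d or host.endswith("." + d) for d in KNOWN_CHARITY_DOMAINS)

print(points_at_known_charity("https://donate.unicef.org/relief"))   # True
print(points_at_known_charity("https://unicef-relief.example/give")) # False: lookalike
```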
Triangulation Fraud
Triangulation fraud, also known as a triangulation scam or drop shipping scam, is another common scheme identified by Visa. It involves three main parties: the scammer, the victim (buyer), and a legitimate third-party seller or retailer. Here's how it typically works (a short sketch of the money flow follows the list):
- Scammer - The scammer creates a fake online storefront or marketplace, often using stolen or fabricated credentials. They list products for sale at attractive prices, enticing potential buyers.
- Victim (Buyer) - The unsuspecting buyer visits the scammer's website or online store and orders the desired product, believing they are purchasing from a legitimate seller.
- Third-Party Seller - The scammer then places an order with a legitimate third-party seller or retailer (e.g., Amazon, eBay, or another online marketplace) for the same product, using the victim's shipping address as the delivery destination.
- Shipping - The legitimate seller ships the product directly to the victim's address, unaware they are being used in a fraudulent transaction.
- Profit for Scammer - The scammer pockets the difference between the price paid by the victim and the wholesale price paid to the legitimate seller. They make money without ever handling the product or fulfilling the order themselves.
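To make the mechanics concrete, here is a toy Python model of the flow just described; every name and price is invented for illustration, and the point is simply that the scammer profits on the spread without ever touching the goods.

```python
from dataclasses import dataclass

@dataclass
class Order:
    product: str
    price_paid: float  # what this party pays
    ship_to: str       # delivery address

# 1. The victim orders from the scammer's fake storefront.
victim_order = Order("wireless headphones", price_paid=89.99,
                     ship_to="victim's home address")

# 2. The scammer orders the same product from a legitimate retailer,
#    entering the victim's address as the delivery destination.
retailer_order = Order("wireless headphones", price_paid=59.99,
                       ship_to=victim_order.ship_to)

# 3. The retailer ships directly to the victim; the scammer keeps the
#    spread between the two prices without handling the product.
scammer_profit = victim_order.price_paid - retailer_order.price_paid
print(f"Scammer's margin per order: ${scammer_profit:.2f}")  # $30.00
```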
Triangulation fraud is a huge problem: the Financial Services Information Sharing and Analysis Center (FS-ISAC) estimates that merchants lost between $660 million and more than $1 billion in a single month of 2022.
Drop shipping scammers can use generative AI for several purposes, including:
- Fake Store Creation - Scammers could use generative AI to create convincing fake online stores or seller profiles on e-commerce platforms. These stores might feature fabricated product listings, logos, and branding, appearing legitimate to unsuspecting buyers.
- Product Image Generation - Generative AI models could generate realistic product images for non-existent or counterfeit items. These images could be used to populate the fake store's listings, enticing victims to make purchases.
- Automated Order Placement - Once a victim places an order on the fake store, generative AI-powered bots or scripts could automatically place the corresponding orders with legitimate third-party sellers. These bots could simulate human-like interactions with legitimate sellers to avoid detection.
- Communication with Victims - Generative AI chatbots or scripted responses could be used to communicate with victims, providing order confirmations, shipping updates, and responses to inquiries. These interactions aim to maintain the illusion of legitimacy and reassure victims throughout the process.
- Data Analysis and Optimization - Scammers could use generative AI algorithms to analyze purchasing patterns, product popularity, and pricing strategies to optimize fraudulent operations. This analysis could help them identify profitable products to list on their fake stores and refine their tactics over time.
In summary, generative AI is powering sophisticated online scams, from pig butchering and inheritance scams to humanitarian relief fraud and drop shipping scams, that cost victims millions of dollars. Internet users today must be more vigilant than ever to avoid being defrauded.
To ensure your users know how to spot and respond to a scam, talk to an expert about Terranova Security’s security awareness training solution here.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.