The advent of artificial intelligence (AI) has demonstrated its potential to catalyse beneficial innovation across various industries, including the cryptoasset sector.
However, as with any burgeoning technology, there is a concern that malicious actors may exploit these advancements for illegal activities, capitalising on the hype, novel capabilities, and regulatory gaps.
Recognising the early signs of illicit activity is crucial for fostering sustainable innovation and addressing emerging risks at their inception.
While AI-enhanced crypto crime has not yet become a prevalent threat, Elliptic, a blockchain analytics firm, emphasises the importance of proactively identifying and mitigating emerging crime trends to promote long-term sustainable innovation.
Elliptic has recently published a blog post titled "The State of AI-Enabled Crypto Crime: Emerging Typologies and Trends to Look Out For," which highlights the concerning increase in the use of AI to perpetrate various crypto crimes.
Additionally, their new horizon scanning report, "AI-enabled Crime in the Cryptoasset Ecosystem," reveals that AI-facilitated crimes within the cryptocurrency ecosystem are still in their early stages.
Timely and strategic interventions by stakeholders can prevent these activities from becoming widespread.
Dr. Arda Akartuna, senior crypto threat researcher at Elliptic, said:
“[...] these trends are currently in their relative infancy and avenues for prevention do exist. Stakeholders across industries will need to come together to devise best practices early on, so that these trends do not become mainstream."
The report also notes a significant increase in tokens associated with AI-related keywords such as GPT, OpenAI, and Bard, with approximately 4,500 of these tokens residing on the BNB Smart Chain.
The emergence of AI-driven crypto crimes heralds a new phase of cyber threats, as the Elliptic report exposes how cutting-edge technologies are being misused for deepfake scams, state-sponsored attacks, and other complex illicit activities.
The report identifies five key typologies of how crypto criminals could leverage AI to augment their criminal endeavours, based on current indicators.
It also underscores the increasing use of generative AI in cryptocurrency scams, identifying it as a critical area where crypto professionals and law enforcement can join forces to combat these emerging threats.
Participants in the cryptocurrency industry are likely familiar with investment scams, many of which now employ deepfakes of celebrities and public figures to promote fraudulent schemes.
Notable individuals such as Elon Musk, former Singaporean Prime Minister Lee Hsien Loong, and the former and current Presidents of Taiwan, Tsai Ing-wen and Lai Ching-te, have been impersonated in these scams.
These promotional deepfakes are commonly disseminated on platforms like TikTok and X.
The report noted:
“Crypto giveaway and doubling scams are increasingly using deepfake videos of crypto CEOs and celebrities to encourage victims to send funds to scam crypto addresses.”
Other scams involve the use of AI to fabricate aspects of a crypto 'business' to enhance its credibility.
In 2022, Binance's then-Chief Communications Officer, Patrick Hillmann, was the subject of a deepfake scam in which his likeness was used to deceive potential victims within the crypto community.
AI can be misused in several ways to make crypto scams and fraudulent activities more persuasive:
- Suggesting celebrity or official endorsement: Deepfakes, as seen with the impersonations of Singaporean and Taiwanese leaders, can falsely suggest that a project has legitimate or official support, thereby increasing its appeal to potential victims.
Former PM Lee warned:
“This is extremely worrying. People watching the video may be fooled into thinking that I really said those words. Please remember, if something sounds too good to be true, do proceed with caution.”
- Streamlining scam operations: Large-scale scams, such as the Sha Zhu Pan ('pig butchering') crypto romance scams originating from Southeast Asia, involve prolonged and intricate communications with victims. There is limited evidence suggesting that these illicit operations are exploring AI to enhance their efficiency.
- Deepfake executive impersonations: A small number of high-profile cases have involved scammers posing as high-level executives during video conferences for corporate espionage or authorising significant transactions. At least one such case targeted the CCO of a major cryptocurrency exchange.
- Generating fake marketing materials: AI-generated images and videos can lend an air of legitimacy to scam websites by depicting supposed employees, headquarters, office spaces, and other visuals, creating the illusion of a genuine investment company without revealing the true identities or locations of the scammers.
Fortunately, there are several red flags that can help individuals avoid falling prey to deepfake scams.
The report stated:
“There are, fortunately, a number of red flag indicators that can help prevent you from falling victim to deepfake scams. To verify the video's authenticity, you can check whether lip movements and voices synchronise, make sure shadows appear where you expect them to, and check that facial activity such as blinking looks natural.”
Creating a token on many blockchains is a straightforward process, a fact that scammers have exploited to generate hype and inflate token prices before liquidating their holdings for substantial profits.
This action causes prices to plummet, leaving investors with losses and worthless investments in what is known as a "rug-pull."
Additionally, organised groups engage in sudden buying and selling of tokens to profit from market manipulation, commonly referred to as "pump-and-dump" schemes.
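As a rough illustration of the pump-and-dump pattern described above, the sketch below flags points in a token's price series where the price spikes sharply and then collapses. The thresholds and the price data are invented for demonstration purposes and do not come from the Elliptic report.

```python
# Hypothetical illustration: flagging pump-and-dump-like moves in a
# token's price series. Thresholds (2x spike, 50% crash) are
# arbitrary assumptions chosen for this example.

def flag_pump_and_dump(prices, spike=2.0, crash=0.5):
    """Return indices where the price at least doubles in one step
    and then loses half or more of that peak on the next step."""
    flagged = []
    for i in range(1, len(prices) - 1):
        pumped = prices[i] >= spike * prices[i - 1]
        dumped = prices[i + 1] <= crash * prices[i]
        if pumped and dumped:
            flagged.append(i)
    return flagged

# Example: a token trades flat, spikes, then collapses.
series = [1.0, 1.1, 1.0, 3.2, 0.9, 0.8]
print(flag_pump_and_dump(series))  # the spike at index 3 is flagged
```

In practice, market-manipulation detection is far more involved (volume, order-book, and wallet-clustering signals); this sketch only captures the basic price signature of the scheme.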
Scammers may also hype their tokens by falsely claiming affiliations with major events or companies.
Image: Coin laundering process used by ChatGPT-related scammers
AI has been a recent focal point for scam tokens, with hundreds listed on various blockchains containing variations of "GPT" in their names, such as "GPT4 Token," "CryptoGPT," and "GPT Coin."
While some of these tokens may represent legitimate projects, others have been promoted in amateur trading forums by scammers who falsely assert official connections with ChatGPT or other reputable AI entities.
Source: Elliptic
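The kind of keyword-based screening implied above can be sketched in a few lines: matching token names against AI-related terms such as "GPT", "OpenAI", and "Bard". The token list here is invented for illustration, and a keyword match is only a flag for further review, since, as noted, some such tokens are legitimate.

```python
# Hypothetical sketch of screening token names for AI-related
# keywords. The keyword set and token names are assumptions made
# for this example, not data from the Elliptic report.
import re

AI_KEYWORDS = re.compile(r"gpt|openai|bard", re.IGNORECASE)

def screen_tokens(names):
    """Return the token names containing an AI-related keyword."""
    return [n for n in names if AI_KEYWORDS.search(n)]

tokens = ["GPT4 Token", "CryptoGPT", "SafeMoonX", "GPT Coin", "BardSwap"]
print(screen_tokens(tokens))
# ['GPT4 Token', 'CryptoGPT', 'GPT Coin', 'BardSwap']
```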
Advancements in AI, exemplified by tools like ChatGPT, have sparked a vigorous debate about their potential use in code auditing and bug detection, as well as the risk that malicious hackers could exploit these same capabilities to identify and engineer exploits.
Reports from Microsoft and OpenAI have documented attempts by Russian and North Korean threat actors in this vein, yet white hat hackers argue that the technology is not yet sufficiently advanced for such applications.
ChatGPT and similar mainstream AI tools have improved in recognising and rejecting malicious prompts, prompting cybercriminals to seek out AI services without ethical constraints on dark web forums.
This demand has been met by paid tools such as HackedGPT and WormGPT, which openly advertise their ability to assist with carding, phishing, malware development, vulnerability scanning, hacking, crafting malicious smart contracts, cyberstalking, harassment, identity theft, the dissemination of sensitive private material, and other unethical activities offered for financial gain.
These "unethical GPTs" have garnered mixed reviews from their users, and blockchain analytics platforms possess the capability to trace payments made to their administrators by subscribers.
Certain crypto scammers may execute a single scam operation and then retire after amassing sufficient illicit funds or once their scheme has been widely exposed.
However, many threat actor groups engage in a pattern of cyclical scamming operations.
They create scam investment, airdrop, or giveaway sites, promote them extensively on social media and messaging apps, and then execute a "rug pull" once the scam nature of their sites becomes a subject of controversy among victims.
The cycle then restarts with a new site and fresh marketing tactics.
Cycling through scam sites can be a resource-intensive process, and some illicit groups are seeking to enhance efficiency through the application of AI.
One scam-as-a-service provider has boasted of using AI to automatically design scam website interfaces optimised for search engines (SEO).
Image: An Elliptic investigation shows the cross-chain obfuscation patterns of funds originating from drainer operator wallets
Identity theft and the creation of false documents are among the most entrenched criminal activities on the dark web.
Cybercrime forums frequently feature advertisements from cybercriminals who pride themselves on their proficiency with photo editing software, offering to produce images of counterfeit passports, ID cards, and utility bills within minutes.
Now, some of these document fabrication services are investigating the use of AI to expand their operations.
One such service, which uses the likeness of Keanu Reeves's John Wick character in its advertising, has both claimed and denied using AI to manipulate images.
Elliptic has identified a cryptocurrency address used for payments to this service; in just one month, it received transactions corresponding to nearly 5,000 fake documents.
It is important to reiterate that the benefits of emerging technologies like AI far outweigh their potential for criminal exploitation.
Anne Neuberger, the US Deputy National Security Advisor for Cyber and Emerging Technologies, has voiced mounting concern about the misuse of AI, emphasising that it is not confined to commonplace scams but is increasingly wielded in intricate criminal endeavours.
The Elliptic report echoes this concern, observing:
“Throughout numerous dark web cybercrime forums, Elliptic has identified chatter that explores the use of LLMs to reverse-engineer crypto wallet seed phrases, bypassing authentication for services such as OnlyFans, and providing alternatives to image ‘undressing’ manipulation services such as DeepNude.”
The intersection of AI and cryptocurrency offers both tremendous opportunities and significant challenges.
While AI can bolster security and efficiency in the crypto space, its potential for misuse underscores the necessity for a strategic and informed response.
The report recommends a set of prevention measures collectively known as DECODE, which stands for Detect, Educate, Cooperate, Defend, and Enforce.
The majority of these threats are still in their early stages; with measured, early responses from responsible industry stakeholders, they can be mitigated before becoming widespread, ensuring that technologies like AI can continue to drive sustainable innovation.
Yet with cyber threats evolving at an alarming rate, the window for such pre-emptive action is shrinking.
Will AI-driven crypto crimes eventually cease to exist? No.
But by staying informed and alert, we can mitigate the risks and give ourselves a better fighting chance.