AI-powered deepfake scams are a growing concern in the cryptocurrency world, with security experts warning that these attacks may extend beyond video and audio and pose a significant threat to crypto wallets and digital asset security.
Rising AI Deepfake Scams in Crypto
According to a Sept. 4 report from Gen Digital, malicious actors significantly stepped up their use of AI-powered deepfake scams in the second quarter of 2024. One group, CryptoCore, has already stolen over $5 million in cryptocurrency using the technique.
Although the stolen amount seems relatively small compared to other common crypto scams, experts believe that AI deepfake attacks could escalate, presenting a growing risk to digital asset security.
Threats to Wallet Security from AI Deepfakes
Web3 security firm CertiK has raised concerns about the increasing sophistication of AI deepfake scams. A CertiK spokesperson warned that these attacks could extend beyond video and audio to tactics such as manipulating facial recognition systems: hackers could exploit wallets that rely on facial recognition to gain unauthorized access.
The firm stresses the importance of awareness within the crypto community, noting that understanding the evolving nature of these threats is key to protecting digital assets.
AI Deepfakes: A Continuing Threat
Luis Corrons, a cybersecurity evangelist at Norton, said AI-powered attacks are likely to keep targeting crypto holders because the financial rewards are large and the risks to attackers relatively low. Cryptocurrency transactions are often high-value and difficult to trace, making holders an attractive target for cybercriminals.
Corrons also pointed out that the lack of clear regulation in the crypto space means cybercriminals face fewer legal consequences, further fueling crypto-related scams.
How to Detect AI-Powered Deepfake Attacks
Although AI-driven attacks are a significant concern, security experts suggest that there are steps users can take to protect themselves. Education is a critical first step, according to CertiK.
A CertiK engineer highlighted the importance of understanding the threat landscape, using available tools, and being cautious about unsolicited requests. They recommended enabling multifactor authentication for sensitive accounts as an extra layer of security.
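To make that advice concrete, here is a minimal sketch of time-based one-time-password (TOTP) multifactor authentication using the open-source pyotp library. The account name "alice@example.com" and the issuer "ExampleWallet" are placeholders, and a real wallet or exchange would fold this into its login flow rather than a console prompt.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a shared secret once and hand it to the user's
# authenticator app (usually via a QR code built from this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com",
                            issuer_name="ExampleWallet"))

# Login: after the password check, require the current 6-digit code.
# valid_window=1 tolerates small clock drift between server and phone.
user_code = input("Enter the 6-digit code from your authenticator: ")
if totp.verify(user_code, valid_window=1):
    print("Second factor accepted.")
else:
    print("Invalid code; access denied.")
```

Because the code rotates every 30 seconds and is never sent alongside the password, a stolen password alone is not enough to drain an account.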
Additionally, Luis Corrons advised users to look out for “red flags” when evaluating whether they’re encountering a deepfake (a crude automated check for one of these signs is sketched after the list). These include:
- Unnatural eye movements, facial expressions, or body movements.
- Lack of emotion in a video or audio clip, which may indicate manipulation.
- Facial morphing or visible image stitching where the person’s face doesn’t align with the emotions expressed.
- Inconsistent audio, or awkward body movements such as misaligned features or mismatched body shapes.
Being aware of these signs can help users avoid falling victim to increasingly sophisticated AI deepfake scams.
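As an illustration of how one of these red flags can even be checked programmatically, below is a rough sketch that estimates how often a subject’s eyes appear closed in a video clip, since early deepfakes were notorious for unnaturally low blink rates. It assumes the opencv-python package is installed and uses only OpenCV’s bundled Haar cascades; the filename suspect_clip.mp4 is a placeholder, and this is a crude heuristic, not a production deepfake detector.

```python
import cv2  # pip install opencv-python

# Crude blink-rate heuristic: Haar cascades mostly detect OPEN eyes,
# so frames where a face is found but no eyes are detected are likely
# mid-blink. A clip where eyes read as "open" in nearly every frame
# (ratio near zero) never blinks -- one weak deepfake signal.
def closed_eye_ratio(video_path: str, sample_every: int = 2) -> float:
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % sample_every:
            continue  # sample every Nth frame for speed
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_frames += 1
            # Eyes sit in the upper half of the face bounding box.
            roi = gray[y:y + h // 2, x:x + w]
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) == 0:
                closed_frames += 1
    cap.release()
    return closed_frames / face_frames if face_frames else 0.0

# A real speaker blinks every few seconds; a ratio of ~0.0 over a
# long clip means no blinks were ever caught and merits a closer look.
print(f"closed-eye ratio: {closed_eye_ratio('suspect_clip.mp4'):.3f}")
```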