Federal Government Warns AI Being Used to Create Deepfakes for Financial Schemes
Multiple warnings have been issued to consumers and corporations about criminals using artificial intelligence (AI) to steal financial information, often deploying deepfakes to trick victims into handing over sensitive data.
The Federal Bureau of Investigation (FBI) issued a memo urging the public to be alert for such schemes, noting that AI “increases the believability” of them. According to the memo, generative AI reduces the time and effort criminals need to deceive their targets by allowing them to take information a user has already provided and create something new based on it.
The FBI notes that “These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud,” including foreign actors using AI to better translate their messages into English.
These tricks can be used in text-based, video-based, and graphics-based schemes and are deployed in a wide range of scams, including romance scams, investment scams, spear phishing, and other fraudulent activities. Often, criminals use the information to gain a victim’s trust and then convince them to send money, relying on deepfake technology to make everything seem as legitimate as possible.
In addition, generative AI is being used to create more convincing fake IDs, business documents, and fraudulent websites, and even to clone voices in order to bypass automated voice-verification systems and gain access to financial accounts.
The FBI asks anyone who believes they have been a victim to file a report with the FBI's Internet Crime Complaint Center at www.ic3.gov.
FinCEN Warning
Meanwhile, the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert warning financial institutions about fraud schemes involving generative AI.
Echoing the consumer warning, FinCEN noted an uptick throughout 2023 and 2024 in reports of criminals using deepfake media to create fraudulent identity documents and then using them to defraud financial institutions.
“While GenAI holds tremendous potential as a new technology, bad actors are seeking to exploit it to defraud American businesses and consumers, to include financial institutions and their customers,” said FinCEN Director Andrea Gacki. “Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools.”
The alert also lists red flags for financial institutions to watch for, such as a customer declining to complete multi-factor authentication, an identity photo with internal inconsistencies, and a customer presenting multiple identity documents that are inconsistent with one another.