In the first week of February, Hong Kong police announced that a multinational based in the region had fallen victim to a scam estimated at US$26 million, roughly R$130 million at the current exchange rate.
The fraud was carried out using deepfakes, a technique that uses artificial intelligence (AI) to create fake video and audio.
With this tool, criminals managed to pull off a multimillion-dollar fraud. But how exactly did it happen?
Details of the AI-assisted crime
Criminals managed to steal R$130 million using AI – Image: Reproduction
It all happened through a fake video call, in which the fraudsters used deepfakes to imitate the company’s senior executives in both video and voice.
The incident came to light on January 29, when authorities were alerted to the fraudulent activity. By then, the company had already made 15 bank transfers totaling US$26 million.
As police are still investigating the case, no suspects have been detained so far, and the identity of the victim company remains confidential.
In practice, the scammers targeted an employee in the multinational’s finance department, posing as the corporation’s UK-based chief financial officer.
To create the fakes, they used publicly available videos and audio of their targets.
As briefly mentioned, this deepfake technique, powered by AI, makes it possible to create fake video and audio that realistically reproduce a person’s appearance and voice.
AI thus enabled the criminals to imitate the executives’ voices and images, persuading the victim to follow their instructions and transfer money to designated bank accounts.
Notably, the deepfake videos were pre-recorded rather than generated live: they involved no real dialogue or interaction with the employee, which added to the scam’s sophistication and made it harder to detect.
Police highlighted that the video call involved multiple participants, all of whom were impostors, except the victim.
The case highlights emerging challenges as generative artificial intelligence, including the models used to create deepfakes, becomes more advanced and accessible.
Experts warn of the potential for misinformation and misuse of these technologies, stressing the importance of public awareness in combating such fraud.
To date, there is no technological security solution capable of completely eliminating the problem.
The episode serves as a reminder of the need for vigilance and preventative measures to protect companies and individuals against increasingly sophisticated digital threats.