Deepfaking it: the new cybersecurity frontier

We’re heading for the somewhat bizarre scenario of a deepfake attempting to defraud my AI-driven avatar whilst I’m off somewhere enjoying myself:

[…] However, new types of deepfake have now entered the frame with the aim of committing fraud. Indeed, the use of deepfake video and audio technologies could become a major cyberthreat to businesses within the next couple of years, cyber-risk analytics firm CyberCube warns in a recent report.

“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral, only it’s not the real Elon Musk. Or a politician announces a new policy in a video clip, but once again it’s not real,” says Darren Thomson, head of cybersecurity strategy at CyberCube.

“We’ve already seen these deepfake videos used in political campaigns; it’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”

In fact, such attacks are already starting to occur. In one high-profile example in 2019, fraudsters used voice-generating artificial intelligence software to fake a call from the chief executive of a German firm to his opposite number at a UK subsidiary. Fooled, the UK chief executive duly authorised a payment of $243,000 to the scammers.

“What we’re seeing is these kinds of attacks being used more and more. They’re not overly sophisticated, but the amount of money they’re trying to swindle is quite high,” says Bharat Mistry, technical director, UK and Ireland, at Trend Micro.

“I was with a customer in the UK and he was telling me he’d received a voicemail, and it was the chief information officer asking him to do something. Yet he knew the CIO of the organisation was on holiday and would never have phoned. There was no distinguishing factor, so you can see how clever it is.”


Original article