Investors must beware deepfake market manipulation

An incident that should make any investor cringe unfolded online last month. A deepfake video of a purported explosion near the Pentagon went viral after being reposted by RT and other media outlets, sending shockwaves through the US stock market.

Thankfully, US authorities were quick to post a statement on social media announcing that the video was fake — and RT issued a coy statement acknowledging that “it’s just an AI-generated image”. The market then rebounded.

Yet the incident creates a sobering backdrop for this week’s visit to Washington by British Prime Minister Rishi Sunak and his joint US-UK initiative to tackle the risks of artificial intelligence.

Recently, there has been growing alarm within and outside the tech industry about the dangers of superintelligent, autonomous AI. Last week, more than 350 scientists published a joint statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

These long-term threats of “extinction” make headlines. But experts such as Geoffrey Hinton — the academic and former Google employee regarded as one of the “godfathers of AI” — argue that the most immediate danger we should worry about is not that machines will act independently, but that humans will abuse them.

Most notably, as Hinton recently argued at a conference at Cambridge University, the proliferation of AI tools could dramatically exacerbate existing cyber problems such as crime, hacking, and misinformation.

Washington is already deeply concerned that deepfakes could poison the 2024 election. They have already had an impact on Venezuelan politics this spring. This week, Ukrainian hackers broadcast a deepfake video of Vladimir Putin on some Russian TV channels.

But the financial sector is now becoming another focus. Kaspersky Consulting last month published an ethnographic study of the dark web, citing “huge demand for deepfakes” and prices per minute of deepfake video ranging from $300 to $20,000. So far, deepfakes have mainly been used in cryptocurrency scams, it said. But the Pentagon video shows how they could affect mainstream asset markets. “We may see criminals using it for deliberate (market) manipulation,” one US security official told me.

So is there anything Sunak and US President Joe Biden can do? Not easily. The White House recently held formal discussions with the EU on transatlantic AI policy (with the UK excluded as a non-EU member). But the initiative has yet to yield any concrete agreements. Both sides acknowledge the urgent need for cross-border AI policy, but EU authorities are keener than Washington on top-down regulatory control — and determined to distance themselves from US tech groups.

As a result, some US officials suspect that a bilateral AI initiative with the UK might be easier to coordinate internationally, given the UK’s recent release of a more business-friendly policy paper. The two countries already have close intelligence ties through the so-called Five Eyes security pact, and both host a large share of the western AI ecosystem (and financial markets).

Several ideas have been proposed. One, pushed by Sunak, is the creation of a publicly funded international AI research agency, similar to CERN, the European particle-physics laboratory. The hope is that this would foster the safe development of AI and the creation of AI-enabled tools to combat abuses such as misinformation.

There are also proposals for a global AI watchdog similar to the International Atomic Energy Agency; Sunak is keen to have it based in London. A third idea is to create a global licensing framework for the development and deployment of AI tools. This could include measures to establish “watermarks” to reveal the origin of online content and identify deepfakes.
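Watermarking proposals differ in detail, but the underlying principle — binding a piece of content to a verifiable origin tag so that any tampering is detectable — can be sketched with a simple keyed hash. The function names and key handling below are illustrative assumptions, not any specific standard:

```python
import hmac
import hashlib

# Hypothetical provenance scheme: a publisher tags content bytes with a
# secret signing key; a holder of the key can later check whether the
# content still matches its origin tag. (Real proposals use public-key
# signatures so that anyone can verify, and embed the signal in the
# media itself; this sketch only shows the binding principle.)

SECRET_KEY = b"publisher-signing-key"  # illustrative only

def watermark(content: bytes) -> str:
    """Return a hex origin tag bound to this exact content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """True only if the content is unmodified since it was tagged."""
    return hmac.compare_digest(watermark(content), tag)

video = b"...original video bytes..."
tag = watermark(video)
assert verify(video, tag)                        # untouched content verifies
assert not verify(b"...tampered bytes...", tag)  # any edit breaks the tag
```

A deepfake circulated without a valid tag, or a clip that fails verification, could then be flagged as unverified — the detection burden shifts from spotting fakery to checking provenance.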

These are sensible ideas that can — and should — be deployed. But that is unlikely to happen quickly or easily. Creating a CERN-style AI agency would be costly, and fast-track international support for an IAEA-style watchdog will be hard to come by.

The big question hanging over any licensing system is how to bring the wider ecosystem into the net. The tech groups that dominate cutting-edge AI research in the west — such as Microsoft, Google and OpenAI — have told the White House that they will cooperate with licensing ideas. Their business users will almost certainly join in too.

However, it will be much harder to pull rogue companies — and criminal groups — into permissioned networks. And there is already a wealth of open-source AI material that can be abused. The Pentagon deepfake, for example, appears to have used a rudimentary system.

So the uncomfortable truth is that, in the short term, the only realistic way to counter the risk of market manipulation is for financiers (and journalists) to exercise more due diligence — and for government sleuths to hunt down cybercriminals. It would be a good thing if comments from Sunak and Biden this week help to raise public awareness of that. But no one should be fooled into thinking that awareness alone can eliminate the threat. Caveat emptor.

gillian.tett@ft.com
