
The pros and cons of AI in trading

Written by Michael Channing

This article was originally published on Finance Derivative.

AI is transforming how traders trade, so much so that the online trading market is expected to reach a value of around $12 billion by 2028, largely on the back of AI's increased use. Its ability to analyse millions of data points to identify market trends, generate investment ideas and execute trades is driving new levels of data-driven trading.

But there’s a more dangerous side to this development. A study released towards the end of last year suggested that AI can not only carry out illegal financial trades but can even cover them up. This was demonstrated at the UK’s AI safety summit, when “a bot used made-up insider information to make an ‘illegal’ purchase of stocks without telling the firm”.

Now, new research has revealed this threat is causing widespread concern in the financial services industry: a survey of 250 senior compliance professionals found that three-quarters are worried about bots manipulating markets and being able to cover up their actions. On top of this, a massive 94% acknowledged that financial professionals using AI bots for manipulation is a challenge.

These fears align with the findings of a forthcoming report into global trade surveillance, in which regulated firms cite AI as the most likely cause of compliance issues over the coming year.

So, with such widespread acknowledgement of the threat, how does the industry manage it?

For regulators, it becomes a case of fighting fire with fire, where using AI is necessary to combat the potentially darker side of its use.

Challenges regulating AI market manipulation

The rapid and large-scale deployment of AI in trading is delivering efficiency and advanced trading capabilities. However, this rapid deployment is also creating unique risks, increasing the threat of market abuse and manipulation. The use of algorithmic trading is nothing new, but backed by these new technologies it could take on new forms altogether.

This combination of AI and algorithmic trading poses major challenges to market integrity. It creates room for inadvertent or deliberate market abuse and also adds further complexity and unpredictability to market dynamics. How so?

AI trading systems are prone to engaging in market manipulation. Machine learning (ML) models are built to optimise for their objectives, so if those objectives can be met through manipulative strategies, a model may take that route, inadvertently or explicitly. What’s more, if ML algorithms can see there is a profit to be made, they can learn and refine manipulative strategies over time. Without adequate supervision, objectives such as maximising profit can end up aligning with manipulative behaviours.
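To illustrate the mechanism, here is a minimal toy sketch (not a real trading system): a simple learning agent rewarded only on profit, choosing between two hypothetical actions. Because nothing in the objective penalises how the profit is made, the agent drifts toward the higher-paying manipulative action. The action names and payoffs are invented for illustration.

```python
import random

random.seed(0)

ACTIONS = ["honest_trade", "spoof_then_trade"]
# Hypothetical average per-trade profits: in this toy market,
# the manipulative action pays slightly more.
MEAN_PROFIT = {"honest_trade": 1.0, "spoof_then_trade": 1.4}

def step(action: str) -> float:
    """Return a noisy profit for the chosen action."""
    return MEAN_PROFIT[action] + random.gauss(0, 0.1)

# An epsilon-greedy bandit that only ever observes profit.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(2000):
    if random.random() < 0.1:          # explore occasionally
        action = random.choice(ACTIONS)
    else:                              # otherwise exploit
        action = max(ACTIONS, key=values.get)
    reward = step(action)
    counts[action] += 1
    # Incremental average of observed profit per action.
    values[action] += (reward - values[action]) / counts[action]

best = max(ACTIONS, key=values.get)
print(best)  # the agent settles on the manipulative action
```

The point of the sketch is that the manipulative behaviour is never programmed in; it emerges simply because the reward signal measures profit and nothing else.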

Their complexity and lack of explainability also add extra complications for regulators. ML trading algorithms can be hard to understand and explain, with internal adjustments making their behaviour unpredictable. If regulators and market participants don’t have an explicit understanding of how these algorithms work, it becomes very hard for them to distinguish between legitimate trading activity and potential market manipulation.

Future implications for market surveillance

This use of AI in trading decisions has the potential to totally transform the market. The uncertainty produced around the legitimacy of certain trading transactions plays into the hands of bad actors, facilitates illegal activity and makes it harder for regulators to pin them down. However, there are broader future implications of its use as well.

It could further amplify misinformation and discrimination. For one, AI is known to exacerbate bias: biased algorithms could result in discriminatory trading practices that create an unfair trading environment. Moreover, algorithmic trading platforms might also spread misinformation, misleading genuine market participants and triggering suboptimal trading decisions.

There is also a new and growing risk. The accessibility of LLMs like OpenAI’s GPT-4 gives individuals – with little or no technical background or knowledge – the means to build trading strategies. This increases the chance that non-professional investors, by accident or intention, become perpetrators of market abuse.

The Apollo study, which was explored at the UK AI summit, epitomises these concerns and shows that, in that instance, the bot saw helping the company as more beneficial than maintaining honesty.

While the Apollo Chief Executive recognised that current models are not powerful enough to be deceptive “in any meaningful way”, he added that “it’s not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something”.
