Over the last few years, I’ve seen a lot of studies across financial services that indicate artificial intelligence (AI) is the future. Most of them, however, can’t explain how or why it will transform operations in any practical sense. They also don’t highlight how much human work has to go into training these models. As Tyra Banks and RuPaul demonstrate in popular reality TV shows, model training isn’t for the faint of heart.
It will come as no surprise that AI can’t fix broken processes or resolve poor quality data without a clear goal and defined parameters. Its most successful applications tend to be in areas where its pattern-recognition strengths can be fully deployed. But have no doubt, you’ll have to work hard to achieve those goals.
Marketing literature and press releases rarely tell you about the months (and even years, depending on the number of models) of work vendors and financial institutions have dedicated to training these AI models. As we have seen with the dog vs croissant recognition tool, AI doesn’t always get things right the first time. For AI to truly deliver value, it needs a human feedback loop to tell it when it gets things wrong and to enable it to recalibrate its suggestions.
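To make the feedback loop concrete, here is a minimal, purely illustrative sketch in Python. The toy model, the exception categories, and the idea of routing trade breaks are all assumptions for illustration, not anyone’s real system; the point is simply that human corrections feed back in and override what the model learned.

```python
# A minimal sketch of a human feedback loop for an AI suggestion engine.
# The model, rules and categories below are illustrative assumptions only.

class SuggestionModel:
    """Toy classifier that routes a trade exception to a likely cause."""

    def __init__(self):
        # Seed rules standing in for a trained model's learned patterns.
        self.rules = {"missing SSI": "settlement", "price break": "valuation"}
        self.corrections = {}  # human-reviewed overrides, keyed by input

    def suggest(self, description):
        # Human corrections take precedence over the model's own rules.
        if description in self.corrections:
            return self.corrections[description]
        return self.rules.get(description, "unknown")

    def record_feedback(self, description, correct_label):
        # An operations analyst flags a wrong suggestion; future
        # suggestions for the same input are recalibrated accordingly.
        self.corrections[description] = correct_label


model = SuggestionModel()
first_guess = model.suggest("price break")         # -> "valuation"
model.record_feedback("price break", "reference data")
recalibrated = model.suggest("price break")        # -> "reference data"
```

In a production system the override table would be replaced by periodic retraining on the corrected labels, but the shape of the loop, suggest, review, correct, recalibrate, is the same.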
It also needs a high volume of clean, consumable data to crunch through that is as close as possible to the real data it will be working with. This sounds simple, but given the industry’s data privacy, control and ownership constraints, it essentially means building out a lot of dummy data at the start of the exercise. This isn’t something everyone recognises going into these projects, and it often causes delays at the outset.
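A rough sketch of what “building out dummy data” can look like in practice, using only Python’s standard library. The field names, value ranges and ISIN placeholders are invented for illustration and stand in for whatever schema the real production data uses; the fixed seed makes the generated set reproducible across training runs.

```python
# A minimal sketch of generating dummy trade records that mirror the shape
# of production data without exposing real client information. All field
# names and value ranges are illustrative assumptions, not a real schema.

import random

def make_dummy_trades(n, seed=42):
    rng = random.Random(seed)  # fixed seed so training runs are reproducible
    isins = ["XS0000000001", "XS0000000002", "XS0000000003"]  # placeholders
    trades = []
    for i in range(n):
        trades.append({
            "trade_id": f"T{i:06d}",                        # synthetic ID
            "isin": rng.choice(isins),                      # dummy instrument
            "quantity": rng.randint(1, 1000) * 100,         # round lots
            "price": round(rng.uniform(90.0, 110.0), 4),
            "counterparty": f"CP{rng.randint(1, 50):03d}",  # anonymised
        })
    return trades

sample = make_dummy_trades(1000)
```

The volume matters as much as the shape: a model trained on a few hundred hand-made records will behave very differently from one trained on data at production scale.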
Applying AI to the right processes is another consideration. Is there client demand for this particular feature or functionality? Are your clients comfortable with AI being applied in this area? If not, why not and can you get them past those concerns?
One of my standard questions when speaking to vendors’ references is whether they feel their vendor is spending enough time and investment on next gen technology such as AI. Interestingly, I get a real range of responses when it comes to post-trade and core functionality. Some clients are really keen to see the potential of AI realised wherever it is trialled (as long as it doesn’t cost the earth), whereas others recoil in horror at the notion of AI being applied to control functions. The last thing they want is a black box that they can’t explain to their regulator sitting at the centre of their operations.
Vendors slapping an AI label on things that aren’t AI-powered does nothing to dispel the general misinformation and misunderstanding about the technology’s potential and its limitations. There’s a lot of confusion about what AI is and isn’t in securities services. Often it is confused with simple robotic process automation such as screen-scraping technology; sometimes it is assumed to be further ahead in application than it is in reality. We’re not quite at the level of HAL from 2001: A Space Odyssey yet, folks (thankfully).
Firms absolutely should be experimenting with next gen technologies, but if you’re hoping for something production-ready to be delivered quickly, you’re likely to be disappointed. And for projects that are client facing, firms need to clearly articulate how the AI will be applied, the tangible benefits, and which controls and protections are in place to avoid a black box approach. Clients that want to get hands-on early can also get to work on helping to train the models. That hard work will be the difference between success and failure. And as RuPaul says, that will determine whether the AI is here to stay or it has to sashay away.