Over the past couple of weeks, I’ve attended two key industry events: one in Copenhagen covering the wide world of post-trade and asset servicing in the Nordic region, and the other in London focused solely on global investment operations. Though the specific market challenges discussed at each event varied, one common theme emerged: everyone and their dog is trying to figure out what to do with artificial intelligence (AI). That means both the exciting, buzzy generative AI (which was also trending at Sibos) and the bog-standard, pattern-recognition-focused AI that has been around for some time now.
The level of experimentation is high across both the buy-side and the sell-side, but we’re not talking about mega-bucks here. A lot of the work is focused on varied and very specific use cases, using commonly available tools and techniques. For instance, a large asset manager at InvestOps in London noted that his firm has over 100 people trialling AI in areas ranging from human resources, where it is used to sift through CVs, to supporting the investment research team via trend analysis. These employees are far from AI specialists; they are instead using the likes of ChatGPT or pattern recognition to sift through internal data or external information and aggregate what is relevant.
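To make that concrete, here is a minimal sketch of the kind of CV-sifting task described above, written against the OpenAI Python SDK. The model name, prompt, and helper function are my own illustrative assumptions, not anything the firms quoted have disclosed:

```python
# A minimal sketch of LLM-assisted CV sifting. Model, prompt and criteria
# are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_cv(cv_text: str, role: str) -> str:
    """Ask the model to extract the skills and experience relevant to a role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "You extract the skills and experience from a CV that "
                        "are relevant to a given role. Be concise and factual."},
            {"role": "user", "content": f"Role: {role}\n\nCV:\n{cv_text}"},
        ],
    )
    return response.choices[0].message.content
```

Worth noting: a call like this sends the CV to an externally hosted service, which is precisely the data-leakage concern raised below; an internally deployed model behind the same interface avoids that.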
At both events, speakers cautioned their peers on the need to be mindful of regulation related to data sharing and ownership. Data leakage is a definite risk if your firm is using a tool hosted externally on an open platform; if the tool is instead deployed within your own organisation’s perimeter and connected only to internal data sources, it is likely safer from a regulatory risk perspective. The spectre of the General Data Protection Regulation (GDPR) looms large over most firms, especially given the high fines some have already faced.
The ethical and governance considerations of AI are also front of mind for most firms, particularly those on the buy-side that are extra mindful of the G in ESG compliance requirements. AI models reflect the people who built them and the data on which they were trained, and biases can be inherent in both. This could prove particularly problematic in areas such as HR, where CVs may be discounted from shortlists because of a biased model, for example. Everything needs to be carefully scrutinised, and the human can never be fully out of the loop.
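One way to keep that human in the loop is to monitor shortlisting outcomes statistically. The sketch below applies the “four-fifths” (80%) rule commonly used in employment-selection analysis; the data and group labels are fabricated purely for illustration:

```python
# A minimal sketch of a bias check on shortlisting outcomes using the
# "four-fifths" rule: no group's selection rate should fall below 80% of
# the highest group's rate. All data here is fabricated for illustration.
from collections import Counter

def selection_rates(candidates: list[dict]) -> dict[str, float]:
    """Shortlisting rate per demographic group."""
    totals, shortlisted = Counter(), Counter()
    for c in candidates:
        totals[c["group"]] += 1
        shortlisted[c["group"]] += c["shortlisted"]  # bool counts as 0 or 1
    return {group: shortlisted[group] / totals[group] for group in totals}

def passes_four_fifths(rates: dict[str, float]) -> bool:
    """True if every group's rate is at least 80% of the best group's rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

candidates = [
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": True},
    {"group": "B", "shortlisted": True},
    {"group": "B", "shortlisted": False},
]
rates = selection_rates(candidates)
print(rates, "-> passes four-fifths rule:", passes_four_fifths(rates))
```

A check like this doesn’t explain why a model discounted a CV, but it does flag skewed outcomes early enough for a human to intervene.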
At both Posttrade360 and InvestOps, panellists noted that the word “intelligence” in AI is a bit of a misnomer and that these tools are just that: tools. They are there to support staff members and reduce low-value grunt work rather than attempt to replicate the complexities of human decision-making, a task for which they are poorly suited. As one asset manager noted, even when they perform well, they still make mistakes that must be reviewed carefully on an ongoing basis.
A speaker from SEB’s innovation lab noted at Posttrade360 that generative AI could prove beneficial for the future of AI overall through its ability to produce dummy data on which other models can be trained. This could be data that closely resembles real transactions, or dummy client data that doesn’t breach regulatory requirements, allowing models to iterate and learn. She noted that firms currently struggle to find enough data on which to train their AI models, and that this could be rectified by creating data that is safe enough to be shared among institutions without breaching client confidentiality or regulatory requirements.
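For a rough sense of what such dummy data might look like, here is a minimal rule-based sketch. The field names and value ranges are my own illustrative assumptions; production-grade synthetic data would typically come from a trained generative model rather than random sampling:

```python
# A minimal sketch of rule-based dummy trade data with no link to any real
# client. Field names, identifiers and ranges are illustrative assumptions.
import random
import uuid
from datetime import date, timedelta

SAMPLE_ISINS = ["US0378331005", "GB0002634946", "SE0000108656"]

def dummy_trade() -> dict:
    """One synthetic trade record, safe to share between institutions."""
    return {
        "trade_id": str(uuid.uuid4()),
        "isin": random.choice(SAMPLE_ISINS),
        "side": random.choice(["BUY", "SELL"]),
        "quantity": random.randint(1, 10_000),
        "price": round(random.uniform(10.0, 500.0), 2),
        "trade_date": (date.today() - timedelta(days=random.randint(0, 30))).isoformat(),
        "counterparty": f"CPTY-{random.randint(1, 50):03d}",  # dummy code, not a real name
    }

# A small synthetic set on which a downstream model could be trained
training_set = [dummy_trade() for _ in range(1_000)]
print(training_set[0])
```

Because every field is generated rather than copied, a set like this can be passed between institutions without touching client confidentiality.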
Aside from the future of AI, data was another common theme across both events. It seems the industry is finally getting to grips with its gnarly data challenges, or at the very least is starting to. Shiny new tools are often reliant on clean, verified data. Even if that data is sometimes fake.