AI, ethics and the investor community

The impact of coding bias means that effective governance of AI requires human understanding of the processes and data inputs involved, writes Virginie O’Shea, founder of Firebrand Research, who explains why a black box approach just won’t cut it for the investment community.

We’ve spent the best part of a year talking about gamification and the retail investor, but how do we effectively address the problems underlying app design, including potential design bias? What about the wider role of data analytics and artificial intelligence (AI) in the investment process, and the governance of these changing dynamics? And what role should regulators play in this area?

I’ve listened to some fascinating discussions over recent months on the topic of AI, data analytics and the impact of coding bias on recommendations via trading and investment apps. AI ethics is something we all need to consider carefully as we automate more processes and potentially replace humans with robo-advisors or other client-facing, AI-based technology applications. We should even consider it for applications behind the scenes – for example, what kinds of automatic workflows are we generating, and are we spotting the risks in the process at the right time?

A particularly robust discussion happened last week during the Securities and Exchange Commission’s Investor Advisory Committee session on AI, where external speakers highlighted examples of the coding and data biases embedded within consumer applications that reflect the community of coders working on these apps. For example, one of the speakers raised the data biases in credit scoring, where certain neighbourhoods and addresses may score differently from others, thereby affecting any applications that use these credit scores for approvals or investment purposes. What impact might this data have on black-owned start-ups seeking investment? Could it exclude these firms from vital investment opportunities because of automatic screening, for instance?
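To make the mechanism concrete, here is a minimal, hypothetical sketch of how an automatic screening rule can inherit upstream bias. The names, threshold and data below are invented for illustration only; they do not describe any real scoring system.

```python
# Hypothetical sketch: a screening rule that looks neutral but passes
# through whatever bias is embedded in the credit scores it consumes.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    credit_score: int  # produced upstream; may embed neighbourhood-level bias

MIN_CREDIT_SCORE = 650  # an arbitrary cut-off chosen for this sketch

def screen(applicants: list[Applicant]) -> list[Applicant]:
    """Pass through only applicants above the threshold.

    The rule itself contains no reference to neighbourhood or race, but
    if the upstream scoring model penalises certain postcodes, that bias
    flows straight into the pipeline with no human checkpoint.
    """
    return [a for a in applicants if a.credit_score >= MIN_CREDIT_SCORE]

candidates = [
    Applicant("Start-up A", 700),
    Applicant("Start-up B", 640),  # score depressed by a biased input, not fundamentals
]
print([a.name for a in screen(candidates)])  # Start-up B never reaches a human reviewer
```

The point of the sketch is that the exclusion happens silently: nobody in the firm ever sees the rejected applicant, which is exactly why the inputs, not just the rule, need scrutiny.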

If we have a primarily white male community of coders from similar backgrounds and with similar interests involved in producing the models underlying these apps, what kinds of risks do we overlook due to groupthink? Where are the blind spots, and how can we reduce them in algorithms and analytics tools? Given the importance of environmental, social and governance (ESG) strategies, these dynamics should be front and centre for the capital markets, with the emphasis on the G part.

Both the models and the data being fed into these applications need to be considered carefully from a bias perspective. Effective governance of AI requires human understanding of the processes and data inputs involved – a black box approach just won’t cut it.
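What does a non-black-box check look like in practice? One simple example is comparing outcome rates across groups before a model goes live and routing flagged cases to a human. The sketch below is an assumption-laden illustration: the group labels, data layout and 20% tolerance are all invented for this example, not an industry standard.

```python
# A minimal sketch of one governance check: compare approval rates across
# groups and flag material disparities for human review.

from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: a list of {"group": str, "approved": bool} records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], tolerance: float = 0.2) -> bool:
    """Flag if any group's approval rate falls more than `tolerance` below
    the best-served group (a simple disparity threshold for this sketch)."""
    best = max(rates.values())
    return any(best - r > tolerance for r in rates.values())

rates = approval_rates([
    {"group": "postcode_x", "approved": True},
    {"group": "postcode_x", "approved": True},
    {"group": "postcode_y", "approved": False},
    {"group": "postcode_y", "approved": True},
])
if flag_disparity(rates):
    print("Disparity detected – route decisions to a human reviewer:", rates)
```

A check this crude would never be sufficient on its own, but it illustrates the governance principle: the firm can explain what was measured, why a case was escalated, and who looked at it.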

It’s clear that regulators are aiming to get up to speed in this area, and numerous papers have been issued off the back of these discussions. The UK’s Bank of England and Financial Conduct Authority (FCA) set up a public-private forum on AI in 2020, and the group published its report last month, highlighting the need for greater control of the AI development and management process. Point 16 of its executive summary highlights the need for “diversity of skills and perspective”, and the report as a whole places huge emphasis on explainability of models, data and outputs as part of the governance process.

EU-level regulators and the US government are also prioritising AI across industries – just look at the discussions about the proposed US AI Bill of Rights and the EU’s AI regulatory framework for proof.

Most financial regulators stop short of directly regulating technology – they rarely favour one technology platform over another, for example – but AI is likely to be an area that continues to ping the investor protection radar. Ensuring that markets are fair and accessible to all means that as well as holding humans to account, regulators need to be able to effectively police any human replacements in the mix.

However, there will continue to be challenges in ensuring that the teams working on these projects within capital markets are diverse unless the industry takes further direct action in this particular area. You need only look at the speaker lists for AI-in-capital-markets technology conferences to see the problem – it’s one of the most manel-heavy areas in the industry.

There are many facets to the ethics conversation about AI – staffing, technology design, data – so expect these to continue to provoke industry debate for some time to come.
