Earlier this month, Securities and Exchange Commission (SEC) chair Gary Gensler gave a speech to the National Press Club about artificial intelligence (AI) and the regulator's many concerns about its widespread adoption. Gensler did raise some potentially positive use cases for the next-generation technology in the supervisory arena, such as enabling better fraud and insider-trading detection through advanced pattern recognition. However, he indicated that there are numerous issues underlying AI that concern the SEC and other regulators across the globe.
Some of the points raised by Gensler are reminiscent of concerns regulators have voiced in the past about developments such as algo trading or index funds. Gensler certainly didn't underplay the existential risk AI poses to the capital markets. To this end, he noted that systemic risks could arise from too many firms becoming reliant on similarly constructed AI models. The notion of correlation risk is something we're all familiar with: if all AI models are built by similar teams, in a similar manner, then when volatility hits, their outputs are likely to move in tandem.
This is an argument for diversity among coding teams and data scientists, something technology teams have been grappling with for a while. With diverse backgrounds and diverse perspectives, these teams are better able to design technology (not just AI) that reflects the requirements of a broader group of individuals. To prove my point, I'm sure I don't need to go into the details of how seatbelts are designed to protect men better than women, which leaves women 73% more likely to be injured in a crash. Correlation risk should be on everyone's minds across the technology space as we become increasingly dependent on that technology in our working (and daily) lives.
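To make the correlation point a little more concrete, here is a minimal, purely hypothetical sketch: the signals, weights, and "models" below are invented for illustration and are not drawn from Gensler's speech or any real trading system. It simply shows how two models built the same way, leaning on the same driver, produce near-identical behaviour, while a differently constructed one does not.

```python
# Purely illustrative: simulate three trading signals, two produced by
# "similar" models (same driver, near-identical weights) and one by a
# deliberately different model, then compare how correlated they are.
import numpy as np

rng = np.random.default_rng(42)
market_factor = rng.normal(0, 1, 1_000)   # shared market driver (hypothetical)

# Two models built the same way lean heavily on the same driver...
signal_a = 0.9 * market_factor + rng.normal(0, 0.1, 1_000)
signal_b = 0.9 * market_factor + rng.normal(0, 0.1, 1_000)
# ...while a differently constructed model weights it far less.
signal_c = 0.2 * market_factor + rng.normal(0, 1, 1_000)

corr = np.corrcoef([signal_a, signal_b, signal_c])
print(np.round(corr, 2))
# The A/B correlation sits near 1.0, so in a volatile period those two
# "similar" models would be expected to move (and fail) together, while
# the C model is far less correlated with either.
```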
Bias in inputs and outputs is also something to keep in mind. The data fed into an AI could be problematic for all kinds of reasons: data security, data privacy, data bias. Data ownership is a concept already much discussed by regulators in Europe, and the General Data Protection Regulation (GDPR) is likely to be the tip of the iceberg in terms of future regulation in this space. Ownership of data becomes especially problematic in the area of generative AI, and we can see some of those conversations playing out in the entertainment industry at the moment. Intellectual property lawyers are going to have a field day across all sectors!
Gensler also raised the need for teams to be able to explain and understand the technology they are using. This is a challenging proposition for anyone not particularly technology-minded, especially given the increasing complexity of things like neural networks. However, regulators are going to intensify their scrutiny of AI use as they investigate the potential underlying risks, and a black box isn't going to go down well.
Misinformation is another compelling risk within the generative AI space. We've already seen that ChatGPT is good at making things up in a convincing manner. Cybercriminals are, no doubt, already using this to their advantage, and things are only likely to get worse. Spam emails can be personalised to such an extent that they are indistinguishable from real ones. But think about the data we use for trading or risk management: what if those signals are manipulated? If even images and video can be realistically faked, how do we tell what is happening in the real world and what is misinformation?
Overall, Gensler raised a lot of interesting points, albeit in his usual circuitous manner (you can read the full speech here if you have the time). AI can be both a useful tool and a particularly dangerous one, and that’s not even touching on the multiplicity of ethical considerations!