Why has the corporate actions process remained fragmented and error-prone for so long, despite advances in technology elsewhere in financial services?

Most people would probably agree that the asset servicing process is broken – perhaps it’s always been broken.
With the introduction of new and emerging technologies, we’re seeing a genuine desire to finally address corporate actions pain-points.
When I worked at a global custodian just over 30 years ago, we were dealing with the same dynamics. Not a lot has changed since then; the industry is still facing the same challenges I experienced back then.
Sure, we’ve made progress in terms of messaging standards – like ISO 15022 and ISO 20022 – and we’ve seen adoption across sub-custodians, global custodians, CSDs, and their participants, but these standards have limitations and are subject to interpretation.
Corporate actions are a product of legal documents – they originate in legal offering documents such as prospectuses. The challenge is that there is very little standardisation around how corporate action events are created from these documents. The problem starts at inception – it’s a fundamentally non-standard process.
This means we take highly complex, unstructured information – sometimes hundreds of pages long – and turn it into a single digital record. Those details are then shared with upwards of 200 different participants that serve thousands of banks and institutional investors.
This one record or event cascades into millions of downstream messages, and every recipient along the way must parse it, interpret it, and then reconcile it against all the other versions they receive from other agents or data providers. It is extremely fragmented and leaves room for error.
What’s worse is that all these interactions are sequential and point-to-point. There’s no unified network or platform that connects interested parties. If, for example, a question comes up in France, it goes from the global custodian to the sub-custodian to DTCC to the issuer’s agent.
It goes up and down the chain, step by step. None of the upstream parties are connected to the downstream parties. It’s like a game of telephone: by the time the message reaches the asset manager, it might have passed through six or seven hands.
Fragmentation, and the cost of the redundant interactions needed to reconcile differing interpretations of events as they pass through the asset servicing chain, is the nexus of this inefficiency. The asset servicing process needs to harness the power of networks and community to collaboratively understand the core terms of an event.
You look at what happens in other industries. You don’t call a helpdesk anymore; you go to a community, you Google it, you get the collective wisdom of the crowd and expert moderation. That’s the power of connected communities.
A question need only be asked once and answered once if the results can be shared concurrently with the community. But corporate actions processing has never been built that way. We’ve never connected the questions with the authoritative source of the answer, so every question is asked, and every piece of data reconciled, in isolation.
The real irony is that there’s only one legal record, but that single truth gets fragmented into 50 different versions by the time it reaches the investor. The buy-side is not reconciling information per se, just their agents’ differing interpretations of an event. A single event creates literally millions of interactions between systems and institutions. It’s impossible to imagine the data remaining intact as it passes through so many hands.
Where do you see the biggest sources of operational risk or inefficiency today when it comes to corporate actions?
The operational risk really comes down to the fragmented workflow. I mentioned earlier how this starts as a single legal record. DTCC receives a corporate action announcement directly from the issuer or their agent, based on a legal document like a prospectus. We digitise that corporate action and push it downstream. From there, however, it splinters.
When we send out that announcement, it goes to about 200 direct participants and each of them could have 10,000 underlying accounts. So immediately, that announcement spreads exponentially. Every time it hits another layer – sub-custodian, global custodian, asset manager, beneficial owner – it gets repackaged, reformatted, sometimes even reworded. Everyone along the way must reconcile what they receive against other sources: different custodians, different vendors, and different market data feeds.
We’ve heard from some clients that it costs more than $50 to process a single email enquiry related to a corporate action. They literally receive more than a million similar enquiries per month. That’s tens of millions of dollars a month spent just answering questions – most of which trace back to the fragmentation of a single message.
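To put that scale in plain numbers, here is a rough back-of-the-envelope sketch in Python using only the figures quoted above (200 direct participants, roughly 10,000 underlying accounts each, $50-plus per enquiry, a million enquiries a month); nothing in it is additional data beyond simple multiplication.

```python
# Back-of-the-envelope scale of a single corporate action announcement,
# using only the figures quoted in the interview above.

DIRECT_PARTICIPANTS = 200           # direct recipients of the CSD announcement
ACCOUNTS_PER_PARTICIPANT = 10_000   # underlying accounts each participant may serve
COST_PER_ENQUIRY_USD = 50           # reported cost to handle one email enquiry
ENQUIRIES_PER_MONTH = 1_000_000     # reported volume of similar enquiries

downstream_recipients = DIRECT_PARTICIPANTS * ACCOUNTS_PER_PARTICIPANT
print(f"Potential downstream recipients of one event: {downstream_recipients:,}")
# -> 2,000,000: how one record cascades into millions of messages

monthly_enquiry_cost = COST_PER_ENQUIRY_USD * ENQUIRIES_PER_MONTH
print(f"Estimated monthly spend answering enquiries: ${monthly_enquiry_cost:,}")
# -> $50,000,000 at the quoted figures
```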
It is important to note that a corporate action doesn’t just affect operations – it impacts trading desks, securities lending, portfolio valuation, even index management. The same piece of data gets bounced around internally.
On top of that, you’ve got downstream systems that need to be updated in sync with one another, and that’s incredibly difficult when you’re dealing with data that was fragmented the moment it left the CSD.
This is why CSDs globally are in a unique position: we see the event first, and we’re the source of truth. We’re the only entity in the chain that witnesses the event directly from the issuer. Yet most of the reconciliation and enquiry happens downstream, among parties that are relying on second-hand information. That’s highly inefficient.
What if, instead, we allowed downstream recipients – custodians, asset managers – to ask questions of us directly? What if the authoritative answer came from the CSD and was visible to everyone in the network? That could eliminate thousands, maybe millions, of duplicative interactions. That’s the vision we’re pushing toward: transforming a fragmented workflow into a collaborative, networked model where everyone is connected to the same source of truth.
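As a thought experiment, the shared record this networked model implies could be as simple as the sketch below: a question is asked once against an event, answered once by the authoritative source, and the answer is visible to every subscriber. The class, field names, and identifiers are hypothetical illustrations, not an existing DTCC or ISO schema.

```python
# Minimal sketch of a shared enquiry in a networked model: asked once against
# an event, answered once by the authoritative source, visible to everyone.
# All names and fields here are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class SharedEnquiry:
    event_id: str                       # corporate action event identifier
    question: str
    asked_by: str                       # e.g. a global custodian or asset manager
    answer: Optional[str] = None
    answered_by: Optional[str] = None   # the authoritative source, e.g. the CSD
    subscribers: Set[str] = field(default_factory=set)

    def answer_once(self, source: str, text: str) -> None:
        """Record the authoritative answer; later askers see it instead of re-asking."""
        if self.answer is None:
            self.answer, self.answered_by = text, source

# One question, answered once, shared with everyone watching the event.
enquiry = SharedEnquiry("CA-2025-0001", "Is the split ratio 4-for-1 or 1-for-4?", "CustodianA")
enquiry.subscribers.update({"CustodianA", "CustodianB", "AssetManagerX"})
enquiry.answer_once("CSD", "4 new shares for every 1 share held.")
print(f"{enquiry.answer} (answered by {enquiry.answered_by}, "
      f"visible to {len(enquiry.subscribers)} subscribers)")
```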
Is it realistic to expect legacy systems to fully integrate with these new models, or do you see firms needing to build entirely new stacks?
Let me be clear – legacy systems aren’t going away anytime soon. We’re still going to be relying on the same accounting platforms, settlement engines, and bespoke corporate action processing tools for a while. The question is: how do we bridge from where we are to where we need to go?
The good news is humans are adaptable. We can build new workflows, websites, interfaces – we can change behaviour. That’s the easier part. If you give me a website where I can ask a question and see responses from others who have had the same question, I can be up and running in five minutes. That’s the human workflow, and it’s how we make fast progress.
Machines and technology implementations take longer. The transactional layer – the systems that move cash, exchange shares, and evidence the impact of a stock split or a nominal value change – those systems are deeply embedded. The next step would be to build bridges between those legacy systems and the more dynamic, networked, and cloud-based front ends we’re starting to develop.
This is where I think technologies like Snowflake, APIs and smart contracts will play a huge role. We can’t replace everything overnight, but we can start to connect the dots more efficiently. The move away from point-to-point messaging to shared data environments – that’s a structural shift. While it’ll take time, I think we’re going to see more progress in the next two to three years than we’ve seen in the past twenty.
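As a rough sketch of what “connecting the dots” could look like in practice: a thin adapter sits beside the legacy platform and publishes each announcement once into a shared store that every party reads, instead of re-sending it point-to-point. The interfaces, field names, and identifiers below are assumptions for illustration, not a specific vendor or ISO interface.

```python
# Illustrative bridge between a legacy point-to-point feed and a shared data
# environment: the legacy system keeps producing its messages, while a thin
# adapter publishes the same event once into a shared store that all parties
# read. Interfaces and field names are hypothetical, not a vendor API.
from typing import Dict, Protocol

class SharedEventStore(Protocol):
    def publish(self, event_id: str, payload: Dict[str, str]) -> None: ...

class InMemoryStore:
    """Stand-in for a shared, cloud-based data environment (a table or topic)."""
    def __init__(self) -> None:
        self.events: Dict[str, Dict[str, str]] = {}

    def publish(self, event_id: str, payload: Dict[str, str]) -> None:
        self.events[event_id] = payload   # one shared copy instead of N re-sends

def bridge_legacy_announcement(raw_msg: Dict[str, str], store: SharedEventStore) -> None:
    """Map a legacy-format announcement onto the shared record, once."""
    store.publish(raw_msg["corp_ref"], {
        "event_type": raw_msg["event_type"],   # e.g. a stock split
        "security": raw_msg["isin"],
        "terms": raw_msg["terms"],
    })

store = InMemoryStore()
bridge_legacy_announcement(
    {"corp_ref": "CA-2025-0001", "event_type": "stock split",
     "isin": "US0000000000", "terms": "4 new shares for every 1 held"},
    store,
)
print(store.events["CA-2025-0001"])
```

The design point is that the legacy engine is left untouched; the bridge simply ensures the event is explained once, in one shared place, rather than repackaged at every hop.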
What do you see as the biggest barriers – technological, regulatory, or even cultural – that the industry needs to overcome to get there?
The biggest barrier is behaviour. For years we’ve invested in reconciliation and workflow tools, things that help us process faster. But we’ve never stopped to ask: why are we reconciling so much in the first place? Most of the time, it’s because we’re relying on second-hand information. If we could just ask the authoritative source – the CSD – directly, we could eliminate layers of confusion.
We haven’t embraced networks. In capital markets, especially in asset servicing, we’re still doing things point-to-point.
We need to change the model. We need to build community, shared visibility and trust in authoritative sources of truth. And I believe we can, because I talk to people every day in this space, and I think they’re ready to try something new.
How far away are we from seeing this new model become the norm rather than the exception?
In the next two to three years, I expect we’ll see significant transformation in corporate actions. Technologies like AI, cloud, digital assets – they’re no longer theoretical; they’re being deployed.
We’ve all seen that the current model isn’t sustainable. As more data comes out and as more firms feel the cost pressure, the appetite for change is growing. It’s just a matter of getting the foundations in place. Once the new workflows are built and the network starts forming, I think we’ll see adoption accelerate quickly.
We’re at a turning point and I’m optimistic.