News services Wall Street & Technology and Finextra recently published pieces on the Securities and Exchange Commission's (SEC) effort to prevent trading debacles and ABN Amro's fine after a "fat-fingered" trade, respectively. As a thirty-year veteran of transaction automation, I ask you: can the financial services industry prevent some of these disasters through better quality software?
I believe the answer is yes!
Application designers and developers need to do more than just test for simple conditions like invalid dates. Financial institutions -- and other parties that participate in networks that clear and settle financial transactions -- need to do more than comply with new requirements to avoid software glitches. Developers must create, and institutions must demand, solutions whose logic expects users to make mistakes.
The SEC's voluntary guidelines on software testing and reliability -- the Automation Review Policy statements, ARP I and ARP II -- are being replaced by yet-to-be-finalized regulations. The initial proposal for ensuring the capacity, integrity, resiliency, availability, and security of automated systems relating to the U.S. securities markets, Regulation Systems Compliance and Integrity (a.k.a. Regulation SCI), applies to four categories of market participants:
- Self-regulatory organizations (i.e., registered national securities exchanges, registered clearing agencies, FINRA, and MSRB)
- Alternative trading systems that exceed specified volume thresholds
- Disseminators of market data under certain National Market Systems plans (i.e., "plan processors")
- Certain clearing agencies exempt from SEC registration
The proposed 373-page regulation was published only recently, so it will take some time for the industry to fully absorb its implications. Still, parts of the regulation are no-brainers -- directives that most participants in the industry are already required to comply with -- such as annual disaster-recovery testing and prompt notification of the regulator and other participants when serious system problems occur.
While these disaster-recovery testing and notification provisions make sense, what is difficult to imagine is how any regulatory body could come up with a set of human-error-prevention requirements specific enough to raise the bar on software quality.
However, I believe that financial regulators should be able to recognize the value of a focus on technology governance, which is designed to anticipate and resolve errors (human or otherwise) before they result in disaster.
The initiative to govern the critical flow of data -- an approach that guarantees the quality and integrity of data in motion and at rest, validates internal and external integration points, and ensures the availability of a modern infrastructure's most critical components -- has claimed, with each passing year, a growing share of the billions spent annually on financial services technology.
Governance is a priority for participants looking to improve quality and avoid disasters. Transactions are increasingly conducted by one counterparty transmitting instructions to another counterparty -- either in batch or real-time mode -- on the basis of an agreed-upon set of business rules. Those rules include not only authentication, but also context-specific rules about transaction values, timeframes, intermediaries, and more. If those business rules are validated (i.e., governed) as part of the transmission process, potential errors or rule violations can be identified and corrected immediately, before transactions are posted.
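A governed transmission step of this kind can be sketched in a few lines. The rules below -- a value limit, a same-day cutoff, an approved-intermediary list -- are purely illustrative assumptions, not any real network's rulebook; the point is that every rule is checked before the transaction is posted:

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical business rules agreed between counterparties (illustrative only).
MAX_AMOUNT = 10_000_000            # agreed value limit
CUTOFF = time(16, 0)               # same-day processing cutoff
APPROVED_INTERMEDIARIES = {"BANKUS33", "BANKGB2L"}

@dataclass
class Instruction:
    amount: float
    currency: str
    intermediary_bic: str
    received_at: datetime

def validate(instr: Instruction) -> list[str]:
    """Return a list of rule violations; an empty list means the instruction may post."""
    errors = []
    if instr.amount <= 0:
        errors.append("amount must be positive")
    if instr.amount > MAX_AMOUNT:
        errors.append("amount exceeds agreed value limit")
    if instr.received_at.time() > CUTOFF:
        errors.append("received after same-day cutoff")
    if instr.intermediary_bic not in APPROVED_INTERMEDIARIES:
        errors.append("intermediary not on approved list")
    return errors
```

Because validation happens as part of the transmission step, a violation is surfaced to the sender immediately, while correction is still cheap.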
Additionally, the movement toward governance forces us to put a greater premium on the talented business analysts -- the unsung, usually underpaid heroes of software development -- who document business rules for software engineers. In my experience, business analysts usually make the difference between a foolproof software solution and a solution prone to human error. I can't imagine, for example, any of the business analysts I've worked with failing to specify what would happen if a client entered a negative value in an amount field, nor can I imagine any of them failing to ensure that the case was tested as part of the QA process. These are the people who know users make mistakes, and it is largely because of business analysts that governance has achieved its recent prominence.
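To make that negative-amount example concrete: the function and checks below are a hypothetical sketch of how an analyst's one-line specification ("reject negative amounts") becomes both production logic and a QA test, with names of my own invention:

```python
def parse_amount(raw: str) -> float:
    """Parse a user-entered amount, rejecting negative values per the analyst's spec."""
    value = float(raw)
    if value < 0:
        raise ValueError("amount cannot be negative")
    return value

# QA checks derived directly from the specification:
assert parse_amount("100.00") == 100.0
try:
    parse_amount("-5.00")
except ValueError:
    pass  # the error path the analyst insisted on
else:
    raise AssertionError("negative amount should have been rejected")
```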
Yes, improving software quality and ensuring the smooth functioning of our networks are important goals. The disastrous "glitches" that have plagued the industry in recent years -- despite all of the huge investments in technology -- are damaging in ways that go beyond measurable financial costs. They've contributed to the public's loss of trust in the industry and caused real hardship to those affected, even if temporarily.
But more regulation is not the answer.
The answer is governance -- maintaining an iron grip on the flow of data within and beyond the organization. Financial industry participants need to consider augmenting their existing software environments to ensure that their mission-critical data flows are validated as early as possible in the process -- whether those flows take the form of real-time APIs serving mobile and cloud applications, or large file transfers between partners -- and they need to do it in both a technical and a business context.
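On the file-transfer side, "as early as possible" means checking every record up front, before any of them is posted. A minimal sketch, assuming a simple CSV batch format that is illustrative only:

```python
import csv
import io

def validate_batch(text: str) -> list[tuple[int, str]]:
    """Validate every record before posting; return (file line, problem) pairs.
    An empty result means the whole batch may be posted."""
    problems = []
    # Data rows start at file line 2, after the header.
    for n, row in enumerate(csv.DictReader(io.StringIO(text)), start=2):
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            problems.append((n, "missing or non-numeric amount"))
            continue
        if amount <= 0:
            problems.append((n, "non-positive amount"))
        if len(row.get("currency", "")) != 3:
            problems.append((n, "currency must be a 3-letter ISO code"))
    return problems

batch = "amount,currency\n100.00,USD\n-5.00,EUR\n42.00,JP\n"
print(validate_batch(batch))
# [(3, 'non-positive amount'), (4, 'currency must be a 3-letter ISO code')]
```

Rejecting the batch with a precise line-by-line report gives the sending partner a chance to fix the file before anything reaches the books.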
In a world where the reliable and accurate flow of data is the lifeblood of our financial system, organizations that fail to govern their own data effectively -- that cannot accept that people make mistakes, that cannot understand why they must find a foolproof solution to prevent disasters -- risk much more than punitive fines. They risk much more than the embarrassment of a tarnished reputation or broken trust.
They risk nothing less than losing the business itself.