The post-crisis years witnessed waves of new regulation: the inadequacies of a laissez-faire approach to the supervision of capital markets became clear when a shockwave of liquidity problems hit interconnected, ‘too-big-to-fail’ organisations. From 2010 onwards, market participants engaged in a continuous change agenda driven by first- and second-order regulatory triggers, and only the most agile and technologically capable firms were able to accommodate these changes without racking up worrying levels of technical debt.
Service-oriented architectures with well-managed middleware and common data formats provide the best means of coping with the regulators’ hunger for data and, with good IT processes in place, can respond adequately to business needs without jeopardising the integrity of a company’s systems architecture. On the surface, these two goals may appear at odds, especially where the existing architecture exhibits anti-patterns such as point-to-point integrations or duplicated data sources.
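To make the architectural point concrete, the sketch below is a minimal, hypothetical illustration (the field names, source systems and adapter functions are assumptions, not any particular firm’s design): a thin middleware layer maps each source system’s payload onto one canonical trade record, so a new consumer such as a regulatory report reads a single golden format rather than building yet another point-to-point feed.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical canonical trade record: the single "common data format"
# on which downstream consumers (regulatory reporting, risk, finance) rely.
@dataclass(frozen=True)
class CanonicalTrade:
    trade_id: str
    isin: str
    quantity: int
    price: float
    currency: str

# One adapter per source system, each mapping that system's
# idiosyncratic payload onto the canonical format.
def from_equities_oms(raw: Dict[str, Any]) -> CanonicalTrade:
    return CanonicalTrade(
        trade_id=raw["id"],
        isin=raw["instrument"]["isin"],
        quantity=int(raw["qty"]),
        price=float(raw["px"]),
        currency=raw["ccy"],
    )

def from_fixed_income_oms(raw: Dict[str, Any]) -> CanonicalTrade:
    return CanonicalTrade(
        trade_id=raw["tradeRef"],
        isin=raw["isin"],
        quantity=int(raw["nominal"]),
        price=float(raw["cleanPrice"]),
        currency=raw["currency"],
    )

ADAPTERS: Dict[str, Callable[[Dict[str, Any]], CanonicalTrade]] = {
    "equities_oms": from_equities_oms,
    "fixed_income_oms": from_fixed_income_oms,
}

def normalise(source: str, raw: Dict[str, Any]) -> CanonicalTrade:
    """Middleware entry point: route a raw message through its adapter.

    A new consumer needs no new point-to-point feeds; it simply
    reads canonical trades.
    """
    return ADAPTERS[source](raw)

if __name__ == "__main__":
    trade = normalise(
        "equities_oms",
        {"id": "T-1", "instrument": {"isin": "GB00B03MLX29"},
         "qty": 100, "px": 25.4, "ccy": "GBP"},
    )
    print(trade)
```

The value of the pattern is that integration effort grows linearly with the number of systems (one adapter each) rather than quadratically with the number of point-to-point links, which is what keeps the regulators’ data demands from eroding the architecture.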
In circumstances such as these, which are all too common, responsiveness is often bought with a large dose of technical debt: new, non-golden data sources appear, alongside tactical developments in spreadsheets or desktop database software. For organisations with poor systems architecture this becomes a vicious circle. The debt manifests as brittleness and complexity, making it very difficult to effect core changes without extensive impact analysis and testing; changes are therefore pushed toward the architectural periphery, where their impact can be contained, and technical debt inevitably increases further.
The consequences of such a situation are pernicious: business processes become inefficient, exception-riddled, overly manual and difficult to control, often leading to organisational fatigue, operational risk and increased costs.