
PERDARR’s peril

Where is the wisdom we have lost in knowledge?

Where is the knowledge we have lost in information?

TS Eliot – The Rock (1934)

January 2016 may still seem a distant prospect, but in the context of fundamental corporate reorganisation it is next week. This is the final deadline for G-SIBs to implement the Basel Committee's Principles for Effective Risk Data Aggregation and Risk Reporting (BCBS 239, or PERDARR). The 14 principles are a blend of the vague and the blindingly obvious; however, obvious is rarely synonymous with fully implemented. G-SIBs were given three years to comply from the date on which they were designated as systemically important, and since many banks have carried that designation since November 2011, progress should be well advanced. It isn't. In the December 2013 BIS survey of 30 G-SIBs, 20% accepted they were “materially non-compliant” with at least half the principles, and 50% expected to be non-compliant with at least one principle by the deadline. This is a disappointing result: the principles represent the most basic preconditions for survival in a data-driven world, and they are a profit-driven imperative rather than a mere compliance obligation. Participants rated themselves higher for compliance with the risk-reporting principles than for those concerned with data aggregation, neatly illustrating their failure to understand the principles as a whole. As the demands for data efficiency move ever more centre-stage, the PERDARR principles are likely to be refined and expanded in both scope and detail; Tier 1 banks should regard the regulation very much as a work in progress, while Tier 2 and below should see it as a blueprint for their future.

Eleven of the principles relate directly to banks' data aggregation and risk reporting; the remaining three address supervision, enforcement and cooperation across jurisdictions. The main body of eleven contains a degree of overlap and may be more usefully grouped into five broad categories:

Governance/Infrastructure – PERDARR capabilities and processes should be approved at board level and should inform the whole data architecture and IT infrastructure. The data architecture should be capable of performing in times of crisis or stress as well as in normal business conditions.

Accuracy/Reliability – risk data integrity and accuracy controls should be as robust and consistent as those applied to accounting data, and risk data should be reconciled to other sources. Data aggregation should be largely automated. Reports should be reconciled and validated (a minimal sketch of what such automated reconciliation might look like follows this list).

Comprehensiveness – a bank should be able to capture and aggregate all material risk data. Data should be aggregated across multiple groupings, enabling a holistic view of existing and emerging risk. The depth and scope of reports should be consistent with the size and complexity of the bank's business.

Clarity/Utility – reports should be clear and concise while remaining comprehensive. Their purpose is to facilitate decision-making, whether tailored to specific recipients or produced in times of stress.

Timeliness/Frequency – data should be generated and aggregated at a frequency consonant with the potential volatility of the risks and their importance in the bank's overall risk profile. The frequency of distribution is to be set by the board and senior management.
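By way of illustration, the automation and reconciliation called for under Accuracy/Reliability can, at its simplest, take the form of an automated comparison of aggregated risk figures against accounting balances. The sketch below (in Python, purely illustrative) assumes a 0.5% tolerance and invented field names; neither is prescribed by the principles.

```python
# Hypothetical illustration of an automated reconciliation between aggregated
# risk figures and accounting balances. The tolerance and the business-line
# names are illustrative assumptions, not anything required by BCBS 239.

TOLERANCE = 0.005  # flag any relative divergence greater than 0.5%


def reconcile(risk_totals: dict[str, float],
              ledger_totals: dict[str, float]) -> list[str]:
    """Return the business lines whose risk and accounting figures diverge."""
    breaks = []
    for line, risk_value in risk_totals.items():
        ledger_value = ledger_totals.get(line, 0.0)
        base = max(abs(ledger_value), 1.0)  # avoid division by zero
        if abs(risk_value - ledger_value) / base > TOLERANCE:
            breaks.append(line)
    return breaks


# Example: a risk figure of 1,020 against a ledger figure of 1,000 is a 2% break.
print(reconcile({"retail": 1020.0, "trading": 500.0},
                {"retail": 1000.0, "trading": 500.0}))   # -> ['retail']
```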

The report notes an “exponential increase” in the amount and granularity of data required for external reporting; regulatory pressure to improve the internal aggregation and reporting of risk data; and business pressure to improve data-handling efficiency and to make better use of data. This has resulted in “an increased focus on individual responsibility for reported data” as well as greater attention to internal audit, assurance and governance. PERDARR's purpose is disarmingly simple: to produce accurate, complete and useful risk reports at an appropriate frequency. Until recently, full compliance would have required fundamental process and organisational re-engineering, effectively tearing up every aspect of a bank's organisation. That non-trivial task was made especially depressing by the short half-life of the result: any physical system designed to facilitate data aggregation will lack the flexible scalability and rapid adaptability demanded by competitive pressures and constantly evolving regulatory requirements.

However, the business and regulatory imperative has coincided with a paradigm shift in how data is conceptualised. It is now possible to create a virtual data architecture, drawing information from disparate databases to produce fully integrated, aggregated reports and insights while leaving the underlying structures untouched[1]. In keeping with the PERDARR obligation to automate wherever possible, data can be gathered into a single virtual hub, allowing rapid assimilation, cross-validation, reorganisation and reporting. NoSQL, Lambda-architecture-style structures act as an overlay on existing legacy systems: cheaper, quicker, more flexible and more effective than the destabilising alternative of rapid, enforced real-world change. The potent combination of a virtual database and a pattern-recognition engine will make the much-hyped Big Data heaven (relatively) cheap to both assemble and access. Most unusually, those who have left fundamental regulatory compliance until the last minute may find themselves better placed than their more diligent and foresightful peers.
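To make the idea of a single virtual hub concrete, the sketch below shows the skeleton of such an aggregation layer: each source system exposes its positions through a common interface, and one query rolls them up on demand without modifying the underlying databases. The Exposure fields, the RiskSource interface and the source names in the closing comment are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of a "virtual hub" that pulls risk data from disparate source
# systems and aggregates it on demand, leaving the underlying databases
# untouched. All names and fields here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Exposure:
    entity: str          # legal entity or business line
    counterparty: str
    risk_type: str       # e.g. "credit", "market", "liquidity"
    amount: float        # exposure expressed in a common reporting currency


class RiskSource(Protocol):
    """Anything that can yield exposures: a warehouse query, a NoSQL store, a flat file."""
    def exposures(self) -> Iterable[Exposure]: ...


def aggregate(sources: Iterable[RiskSource], by: str) -> dict[str, float]:
    """Pull from every source and roll exposures up by the chosen dimension."""
    totals: dict[str, float] = {}
    for source in sources:
        for exp in source.exposures():
            key = getattr(exp, by)
            totals[key] = totals.get(key, 0.0) + exp.amount
    return totals


# Usage: the same call produces group-wide views by counterparty, risk type or
# entity, without modifying any of the legacy systems behind the connectors.
# report = aggregate([loan_book, trading_system, treasury_feed], by="counterparty")
```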

 

[1] Much of the pioneering work on Big Data systems has been done by web giants such as Google (distributed file systems and the MapReduce framework) and Amazon (distributed key-value stores); open-source solutions such as Hadoop, HBase and Cassandra have quickly followed.
