Despite astronomical sums being spent by banks on surveillance – almost US$740m by 15 surveyed Tier I and Tier II banks alone in the first two years after MAR came into effect in the UK¹ – electronic surveillance is still in its infancy, and gaps in efficacy and performance mean that there is appetite for further spending, development and automation.

Many banks currently operate with sub-optimal communications surveillance technology: the large number of false positives flagged by surveillance systems points to inefficiency, while the lack of automation, and the consequent need for human intervention in analysing large communications datasets, places an overly heavy burden on surveillance teams.

The potential for large fines and penalties arising from regulatory breaches is driving industry professionals to look to new technologies – machine learning and other artificial intelligence – to help them address these pressing issues.

Taking Stock – Time for a New Approach

The need for effective surveillance in financial services is driven not only by the enhanced global regulation of the past decade – Dodd-Frank, MiFID II and MAR amongst others – but also, more fundamentally, by the desire to avoid the market abuse that these regulations address, as well as the resulting financial and reputational damage. Whilst it cannot be guaranteed that the big banking scandals of past years could have been avoided with more effective surveillance procedures in place, it is clear that many of the high-profile unauthorised trading and market abuse cases would have been considerably harder to perpetrate had there been effective tools in place.

Although the desire for a holistic surveillance solution, bringing together trade, voice and e-comms monitoring in one unified platform, has not disappeared entirely, it is now acknowledged that a more pragmatic position may be sufficient, with process and organisational structure bringing together separate trade and communications surveillance systems. Effective communications monitoring is a key part of this approach.

Most banks and trading floors now utilise a variety of communications tools across text, voice and visual media applications. The volume and diversity of data to analyse and monitor is larger than ever before and significant benefit can be gained by reducing the size and complexity of the dataset early in the process – when it is possible to do so without undermining the surveillance integrity.

It is not uncommon for a single conversation to take place across a number of platforms and it is essential that any surveillance solution is able to recognise channel switching in order to reconstruct conversations comprehensively.

The use of lexicons and rules-based data searches to identify high-risk communications and behaviours is not an inherently problematic approach. In practice, however, it is often poorly executed, with organisations relying on overly simplistic rulesets and poorly maintained lexicons. The result is a multitude of alerts that require significant human intervention to review, many of which turn out, on inspection, to be false positives. The higher the volume of data to review, the higher the operational costs and the lower the accuracy of the analysis.

AI Technology to the Rescue

While a great deal of effort could be saved through the reduction of false positive alerts, care must be taken in how this is done. A reduction of false positives is highly desirable, but only if it can be achieved without sacrificing efficacy, and while loosening the rules would almost certainly lead to fewer alerts, events that need investigation may slip through the net.

Figure 1: Enhancing Communications Surveillance with a Unified Communications Platform

Source: GreySpark analysis

Intelligently Cleansing & Structuring High-volume Datasets

The high volume and diversity of recorded data is the first significant problem, and traditional automated solutions struggle to normalise the data and eliminate irrelevant content prior to human review. To deal with this, a best-in-class communications surveillance platform must include:

1. Spam Filtering: Spam refers to content that is irrelevant for surveillance purposes, and includes internal newsletters, blog posts, marketing and other mass messaging. A good filter will rank content according to the likelihood that it is spam and automatically remove it according to a specified ranking threshold.

An intelligent spam filter will go further, however – it will learn from the human review of items identified as spam and increase the accuracy of subsequent rankings, becoming ever more efficient. Over time, it will optimise its ability to identify spam content specific to an organisation.
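By way of illustration, the feedback loop described above might be sketched as follows. This is a minimal, hypothetical example built on scikit-learn's incremental Naive Bayes classifier; the feature hashing, threshold and function names are illustrative assumptions, not a description of any particular vendor's product.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Non-negative hashed text features, suitable for a multinomial model.
vectoriser = HashingVectorizer(n_features=2**18, alternate_sign=False)
classifier = MultinomialNB()

def initial_fit(messages, labels):
    """Seed the filter with messages labelled spam (1) or business content (0)."""
    classifier.partial_fit(vectoriser.transform(messages), labels, classes=[0, 1])

def spam_score(message):
    """Likelihood that a message is spam, used to rank the removal queue."""
    return classifier.predict_proba(vectoriser.transform([message]))[0, 1]

def record_review(message, reviewer_said_spam):
    """Feed each human review decision back into the model, so rankings for
    this organisation's specific traffic become more accurate over time."""
    classifier.partial_fit(vectoriser.transform([message]), [int(reviewer_said_spam)])

SPAM_THRESHOLD = 0.95  # illustrative: items scoring above this are filtered out
```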

2. Auto-generated Content: There is also the question of system-generated content such as email headers or signatures, disclaimers or confidentiality statements. On the face of it, it would be desirable and straightforward to automatically remove this type of content from the dataset. However, as with all static rule-based surveillance, this is potentially a risky strategy, as it is easy to conceal messages in the guise of an email signature, for example, and so a more dynamic approach is needed.

An intelligent and transparent solution is required – a tool that analyses and learns from human review, but that also allows the user full control over what is removed and what remains subject to human monitoring.
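A transparent implementation might keep its removal rules as an explicit, auditable allowlist and log everything it strips, so reviewers retain full control. A rough sketch, with deliberately simplified placeholder patterns:

```python
import re

# Removal rules approved by the compliance team; every pattern is visible,
# versioned and subject to human sign-off rather than hidden in the engine.
APPROVED_BOILERPLATE = [
    re.compile(r"(?is)this email and any attachments are confidential.*"),
    re.compile(r"(?ms)^-- $.*"),  # conventional email signature delimiter
]

def strip_boilerplate(body):
    """Remove approved boilerplate and return both the cleaned text and the
    removed fragments, so that what was stripped can itself be audited."""
    removed = []
    for pattern in APPROVED_BOILERPLATE:
        for match in pattern.finditer(body):
            removed.append(match.group(0))
        body = pattern.sub("", body)
    return body, removed
```

Returning the stripped fragments, rather than silently discarding them, allows them to be sampled or risk-scored separately, which addresses the concern that messages can be concealed in the guise of a signature.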

3. Deduplication & Email Threading: Communication datasets can contain a large amount of duplicate content. In particular, email communications often include duplicates of the entire historical conversation in each new email message. Using a process known as ‘email threading’, emails can be grouped together in a conversation so that the end of the conversation can be identified and an alert sent only on the final email, which contains the entire conversation. This powerful deduplication method can reduce alert volumes by up to 60 percent.²

A good threading and deduplication tool will not only recognise linear conversations, but can also identify when those conversations branch, forward, add or remove participants, attachments and content.
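As a rough sketch of how threading collapses duplicates, assuming each captured message exposes standard RFC 5322 headers (References, Subject, Date):

```python
from collections import defaultdict
from email.utils import parsedate_to_datetime

def thread_key(msg):
    """Group by the root message ID in the References chain, falling back to
    a normalised subject line when threading headers are missing."""
    references = msg.get("References", "").split()
    if references:
        return references[0]
    subject = msg.get("Subject", "").lower().strip()
    while subject.startswith(("re:", "fw:", "fwd:")):
        subject = subject.split(":", 1)[1].strip()
    return subject

def inclusive_emails(messages):
    """Keep only the latest email in each thread – the 'inclusive' message that
    already quotes the earlier ones – so one alert covers the conversation."""
    threads = defaultdict(list)
    for msg in messages:
        threads[thread_key(msg)].append(msg)
    return [max(t, key=lambda m: parsedate_to_datetime(m["Date"]))
            for t in threads.values()]
```

A production tool would go further than this linear grouping, detecting branches where participants, attachments or content change part-way through a thread.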

4. Voice Transcription: High quality voice and audio transcription is a crucial part of an intelligent communications surveillance platform. Achieving this accurately when a variety of jargon, acronyms, slang terms, languages and codes are in use is not trivial.

An intelligent transcription tool must be able to recognise words spoken in many languages, dialects and accents, as well as mixed-language conversations and unexpected idiomatic speech patterns, in order to normalise voice data into a simple, searchable text format.
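As one concrete illustration, the open-source Whisper model offers multilingual transcription with automatic language detection out of the box. Treat the snippet below as a sketch rather than an endorsement; trading-floor jargon and code words would still call for a custom vocabulary or fine-tuning, which is out of scope here.

```python
import whisper  # open-source speech-to-text model, used purely for illustration

model = whisper.load_model("medium")  # larger models trade speed for accuracy

def transcribe_call(audio_path):
    """Normalise a recorded call into searchable text. The model detects the
    spoken language automatically, which helps with mixed-language calls."""
    result = model.transcribe(audio_path)
    return result["text"]
```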

5. Video & Image Recognition: The platform must be capable of identifying and extracting text or other patterns that appear within videos, images and screenshots. Text is frequently embedded within imagery and a best-in-class solution must be able to capture the information.

A comprehensive communications surveillance platform will apply this image and video analysis alongside the more usual text and voice analysis, and machine learning is an effective way to achieve it.
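A minimal sketch of text extraction from imagery, using the Tesseract OCR engine for still images and naive frame sampling for video; both tool choices, and the sampling rate, are illustrative assumptions:

```python
import cv2                  # OpenCV, for reading video frames
import pytesseract          # wrapper around the Tesseract OCR engine
from PIL import Image

def text_from_image(path):
    """Extract embedded text from a screenshot or chart image so it can flow
    into the same lexicon and concept analysis as ordinary messages."""
    return pytesseract.image_to_string(Image.open(path))

def text_from_video(path, every_n_frames=50):
    """Crudely sample frames from a video and OCR each sampled frame."""
    capture = cv2.VideoCapture(path)
    texts, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            texts.append(pytesseract.image_to_string(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return "\n".join(texts)
```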

Anomaly Detection & the Minimisation of False Positives

Having reduced the volume and normalised the data into searchable text, the remainder of the communication dataset must be analysed and reviewed to ensure that any behaviour that puts the firm at risk is detected.

Experience shows that the vast majority of alerts generated by surveillance systems are, upon inspection, entirely innocent. Ultimately, an effective surveillance system will be able to identify risky interactions, and only those interactions, so that the human reviewer can quickly home in on the truly problematic communications.

6. Intelligent Risk Ranking: Ranking is part of the approach typically taken to evaluate the riskiness of remaining content. Indeed, an effective surveillance solution should rate each data item (or group of items) according to a risk scale, to enable review of relevant items and flagging of false positives.

A system that can continuously learn from the review decision on each alert – whether it is a true positive or a false positive – will quickly reduce the volume of alerts and, consequently, the time and cost of human interaction in the process.
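The learning loop is analogous to the spam filter sketched earlier, but applied to alert disposition. A hypothetical online ranker, again assuming scikit-learn; names and feature choices are illustrative:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectoriser = HashingVectorizer(n_features=2**18)
ranker = SGDClassifier(loss="log_loss")  # logistic regression, trained online

def record_disposition(alert_text, true_positive):
    """Every reviewer decision becomes a training example, so recurring
    false-positive patterns sink down the queue over time. Seed this with
    historical dispositions before using the ranker."""
    ranker.partial_fit(vectoriser.transform([alert_text]),
                       [int(true_positive)], classes=[0, 1])

def review_queue(alerts):
    """Order open alerts so reviewers see the riskiest items first."""
    scores = ranker.predict_proba(vectoriser.transform(alerts))[:, 1]
    return sorted(zip(alerts, scores), key=lambda pair: -pair[1])
```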

7. Conversation Identification: Most conversations take place over more than one medium, in financial services as much as elsewhere. A typical conversation may begin with an instant message, for example, continue via a phone call and finish up over email.

For intelligent risk identification and analysis to take place, the surveillance platform must be able to identify individuals and link interactions that take place across the various platforms and media in order to bring together the complete picture.
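A simplified sketch of cross-channel stitching: a hypothetical identity map resolves channel-specific handles to a single person, and interactions that share a resolved participant within a time window are linked into one conversation. The handles and the 30-minute window are placeholder assumptions.

```python
from datetime import timedelta

# Hypothetical identity resolution: many handles, one human.
IDENTITY_MAP = {
    "j.smith@bank.example": "J_SMITH",
    "jsmith_chat":          "J_SMITH",   # instant-messaging handle
    "+44-20-7946-0000":     "J_SMITH",   # recorded desk phone
}

def stitch(events, max_gap=timedelta(minutes=30)):
    """Link channel-switching interactions into conversations: consecutive
    events that share a resolved participant within `max_gap` are merged."""
    events = sorted(events, key=lambda e: e["timestamp"])
    conversations = []
    for event in events:
        people = {IDENTITY_MAP.get(h, h) for h in event["participants"]}
        last = conversations[-1] if conversations else None
        if (last and event["timestamp"] - last["end"] <= max_gap
                and people & last["people"]):
            last["events"].append(event)
            last["people"] |= people
            last["end"] = event["timestamp"]
        else:
            conversations.append({"events": [event], "people": people,
                                  "end": event["timestamp"]})
    return conversations
```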

8. Conceptual Search: Analysis of communications data by concept using unstructured AI categorisation capabilities, in addition to simpler searches for pre-defined words and phrases, can allow conversations to be sorted into groups of documents based on the concepts and topics discussed in the text. This categorisation will assist the user in identifying topics that are unusual or out of place.

The categorisation of communications by concept can facilitate a more accurate ranking and assessment of risk, as well as potentially enabling the detection of commonly used ciphers or code words.
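One simple way to approximate conceptual grouping is unsupervised clustering over TF-IDF vectors: documents that sit far from every cluster centre are candidates for ‘unusual topic’ review, which is also where coded language tends to surface. A sketch, assuming scikit-learn and an illustrative topic count:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def concept_groups(documents, n_topics=20):
    """Cluster communications by topic and measure how well each document
    fits its assigned cluster; poor fits are conceptually unusual."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    model = KMeans(n_clusters=n_topics, n_init="auto").fit(vectors)
    distance_to_centre = model.transform(vectors).min(axis=1)
    return model.labels_, distance_to_centre
```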

9. Pattern Identification: In addition to the conceptual search, AI techniques can be used to identify other patterns in the dataset, including behavioural trends such as who interacts with whom, how, when, how often or whether interactions between certain groups of people are regularly ‘taken offline’.

The identification of unusual or unexpected behaviours will facilitate detection of anomalies and contribute to the risk ranking.
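A behavioural signal can be as simple as interaction counts that deviate from the population baseline. The statistics below are deliberately crude placeholders for richer behavioural models:

```python
from collections import Counter
from statistics import mean, stdev

def unusual_pairs(interactions, threshold=3.0):
    """Flag pairs of people whose interaction frequency is a statistical
    outlier; the same shape of test applies to channel switches, timing
    patterns or sudden changes in who talks to whom."""
    counts = Counter((i["from"], i["to"]) for i in interactions)
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return [pair for pair, n in counts.items()
            if sigma > 0 and (n - mu) / sigma > threshold]
```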

10. Advanced Phrase Searches: The use of lexicon search-based approaches to surveillance can be effective – AI analysis tools can assist the user in keeping lexicons and associated rules up to date, by using identified patterns to determine relevant key words, code words, phrases and behaviours.

An effective communications surveillance lexicon will be continually updated as behaviours adapt and evolve.
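One way a tool can keep a lexicon current is to surface terms that are statistically over-represented in confirmed true-positive communications relative to ordinary traffic, leaving a human to approve each addition. A rough sketch, with a simplistic frequency-lift score standing in for a real relevance model:

```python
import re
from collections import Counter

def suggest_terms(risky_texts, baseline_texts, top_n=25):
    """Rank candidate lexicon additions by how much more frequent they are
    in confirmed-risky traffic than in normal business communications."""
    def counts(texts):
        return Counter(w for t in texts
                       for w in re.findall(r"[a-z']{3,}", t.lower()))
    risky, normal = counts(risky_texts), counts(baseline_texts)
    lift = {word: n / (normal[word] + 1) for word, n in risky.items()}
    return sorted(lift, key=lift.get, reverse=True)[:top_n]
```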

11. Other Metadata Analysis: It is not just the text and content of communications that can be used to highlight risky communications. Extracted metadata such as send time and number of conversation participants, enriched metadata like directionality and language identification, and people metadata including department and geography can also be analysed to identify unusual or suspicious behaviour.

An effective system will analyse all available communications data – not just the content of the text.
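Metadata checks are cheap because they need no content analysis at all. The hours, participant limits and department rules below are illustrative placeholders that a firm would calibrate against its own baselines:

```python
def metadata_flags(message):
    """Raise simple metadata-only flags; each would contribute to the overall
    risk ranking rather than generate an alert on its own."""
    flags = []
    if not 7 <= message["sent_at"].hour < 20:
        flags.append("sent outside normal business hours")
    if message["participant_count"] > 25:
        flags.append("unusually wide distribution")
    if message["direction"] == "outbound" and message["department"] == "research":
        flags.append("external contact from a restricted department")
    return flags
```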

Effective Surveillance in the New Normal

A ‘well trained’ platform that incorporates some or all of the advanced techniques described in this article will go a long way towards addressing the communications surveillance issues faced by the industry today, particularly around the safe reduction of false positive alerts. Whilst it is unlikely that the human review element will ever be eliminated entirely, the burden will be significantly reduced, as only the riskiest items need be reviewed.

A good example of this type of platform is Relativity Trace, a robust and proactive compliance monitoring and communications surveillance system built on a solid foundation of artificial intelligence techniques, which can be integrated with new or existing infrastructure to form part of an effective and compliant surveillance solution.

 

1 PwC, March 2019. Market Abuse Surveillance Survey 2019. [pdf] Available at: <https://www.pwc.co.uk/forensic-services/assets/documents/market-abuse-surveillance-survey-2019.pdf> [Accessed May 2020]

2 Relativity, 2020. Relativity Trace: 3 Steps to Building an AI-Based Surveillance Strategy. [Accessed June 2020]
