As monolithic applications are increasingly superseded by container-based software, often hosted in the cloud, financial firms need to be mindful of the challenges of monitoring these complex and dynamic environments. Dynamic provisioning using container technology introduces a level of complexity not found in monolithic applications, and monitoring it – for instance, to optimize performance or to troubleshoot issues across the enterprise – requires a different approach from that used for monolithic applications. In a container environment, multiple containers can run on the same host, which makes it difficult to isolate issues caused by a specific container, and an application’s containers can be spread across multiple hosts, making it difficult to monitor them all from a central location. The opacity and impermanence of containers further complicate the tasks of diagnosing issues, monitoring performance and locating problems before they become critical.
By GreySpark’s Rachel Lindstrom, Senior Manager

One of the key advantages of cloud computing technology is its scalability in terms of compute power, memory and storage. Today, cloud services based on virtualization and containerization are well established in financial firms. Both techniques enable the abstraction of one level of the technology stack from the one beneath it – a buffer, if you will – allowing software running above the abstraction layer to be moved around, typically from server to server as memory, storage or compute power become available.
Container instances can be ‘spun up’ or shut down to meet changing demand for applications and services in real time. Because they are abstracted from the hardware, this can happen on any server with the available capacity to host that container instance, an arrangement known as dynamic provisioning. Dynamic provisioning can offer many benefits to capital markets firms, including improved agility, cost savings and the ability to respond more quickly to changing market conditions. However, it also presents challenges for systems administrators in terms of resource management and troubleshooting.
This article examines the issues around the monitoring of dynamic environments and explores how monitoring can help financial institutions locate issues that need to be resolved before they become critical, as well as make better use of their available technology resources.
Virtualization and Containerization Abstraction
An application technology stack is a concept used to explain the relationship between the hardware, networking and software components needed to get applications running. The stack is generally layered from the most fundamental requirements up to the application layer itself. Figure 1a shows the key components of a typical technology stack. The bare metal layer includes the hardware needed: servers, wiring and so on. The network layer enables the server to communicate with other servers. The operating system enables the software to ‘talk’ to the hardware. The middleware layer acts as a bridge between the operating system and applications, and the application layer contains the code that provides the application’s features and functionality, both front end and back end.
In the configuration shown in Figure 1a, the application can only run on the host’s servers and utilize one operating system. This is how the earliest ‘monolithic’ applications were developed. Virtualization adds a degree of dynamic provisioning to the configuration, as shown in Figure 1b. Both stacks have an additional software layer inserted: the virtualization layer, above which software is free to be moved from one server to another. Virtualization effectively squeezes more workload onto each server. In Figure 1a, only one machine runs on the server, whereas in Figure 1b the server runs two virtual machines, meaning that less hardware is needed to run the same two machines.
Containerization is similar to virtualization in that it also relies on an abstraction layer, as shown in Figure 2. Application containerization allows applications to be hosted on any hardware because all the libraries and dependencies that an application needs are incorporated into its containers. The application software runs above an abstraction layer created by the containers, and containers can be moved from server to server as memory, storage or compute power become available, whether on premises or in the cloud. This is the basis of dynamic provisioning.
A key difference between a container and a virtual machine is that when the virtual machine is moved it needs to take with it a full copy of the operating system, whereas containers only need to take a stripped-down version of the operating system with them. In general, this means that a lot more memory, storage and compute power are needed to run virtual machines than containers.
A container does not typically include the code for a whole application. Instead, each container holds a logical chunk of the application code, as well as all the software components, libraries and configuration settings required to run the encapsulated code. In other words, it contains everything needed to ensure that it runs consistently and reliably regardless of the underlying infrastructure.
Both virtualization and containerization allow hardware to be used more efficiently, not only in terms of the utilization of an individual server’s CPU and storage but also in a macro sense, as workloads are moved from server to server depending on where unutilized CPU, memory and storage can be found. The movement of resources can be controlled automatically by resource management software, and the dynamic nature of this IT architectural approach has huge advantages for financial firms in terms of flexibility and efficiency. Containers can be spun up and taken down within seconds if needed, giving a level of scalability that is hard to match.
Dynamic Provisioning Challenges for Financial Firms
When financial firms deploy their own platforms and applications using container technology, time to market can be very short. Container instances can be spun up rapidly without waiting for someone to provision a server. The downside is that the dynamic environment into which software is deployed is much more complex. Firms must monitor the whole technology stack for all their applications and services if they want a full understanding of what is happening within the enterprise but, as shown in Figure 3, this is not always easy.
Any new IT provisioning model that supports dynamic provisioning must be able to maintain the highest levels of security and compliance, and this requires careful planning and design. Once deployed, the platform also requires comprehensive and effective ongoing monitoring and management that can meet the specific challenges a containerized deployment presents.
The Complexity of Containers
In order to manage available resources, the systems administrator must have visibility of the resources being used as well as their physical location, typically achieved through monitoring. Some challenges (see Figure 4) are specific to a containerized environment, regardless of whether the containers run in the cloud or on premises.
To address the key challenges summarized in Figure 4, as well as a plethora of others, tools are needed to provide insights into container usage, performance and resource utilization. In essence, these tools ensure that containerized environments operate efficiently, making optimal use of available resources, and effectively, so that issues can be identified and remediated before they become critical.
Containers are typically managed using an open-source container orchestration system such as Kubernetes, which allows systems administrators to automate software deployment, scaling and management. Its orchestration capabilities can also help resolve resourcing issues: resource limits can be set within Kubernetes to ensure that enough storage and compute power are available to all the containers that make up the application (point 1 in Figure 5).
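As an illustrative sketch, the following snippet uses the official Kubernetes Python client to declare resource requests and limits on a single container; the container name, image and figures are hypothetical and would depend on the application’s actual needs.

    from kubernetes import client

    # Hypothetical pricing-engine container, used purely for illustration.
    container = client.V1Container(
        name="pricing-engine",
        image="registry.example.com/pricing-engine:1.4",
        resources=client.V1ResourceRequirements(
            # Requests: the scheduler reserves at least this much for the container.
            requests={"cpu": "500m", "memory": "512Mi"},
            # Limits: the container is throttled (CPU) or terminated (memory)
            # if it tries to consume more than this.
            limits={"cpu": "1", "memory": "1Gi"},
        ),
    )

    pod_spec = client.V1PodSpec(containers=[container])

In practice the same requests and limits are usually declared in the deployment manifests that Kubernetes applies, but the principle is the same: every container states what it needs and what it may not exceed.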
It is also important to gauge how much of those resources is actually being used. Knowing the actual resource usage arms systems administrators with the data to manage resource limits and, hence, ensure that the firm’s resources are optimally utilized. To determine where resources are used, metrics and logs must be collected from the containers.
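As a minimal sketch of such collection, assuming the cluster runs the standard metrics-server add-on and that local kubectl credentials are available, the Kubernetes metrics API can be queried for the current CPU and memory usage of every container:

    from kubernetes import client, config

    # Assumes local kubectl credentials and the metrics-server add-on.
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # metrics.k8s.io exposes point-in-time CPU and memory usage per pod.
    pod_metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")

    for pod in pod_metrics["items"]:
        for container in pod["containers"]:
            usage = container["usage"]
            print(pod["metadata"]["name"], container["name"], usage["cpu"], usage["memory"])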
For some applications, containers can be run in multiple environments. For instance, features that provide the application’s basic functionality may run in containers in the cloud, while the authentication and payments functionality may be incorporated into containers that run on on-premises servers. This means that the monitoring architecture must be able to aggregate metrics from on-premises servers and potentially from multiple cloud hosting locations, or even from multiple cloud providers (point 2 in Figure 5).
Containers are built to isolate components of an application from the rest of the application. While this isolation is very useful, it means that systems administrators would need to ‘open up a port’, or carry out some other configuration, to retrieve monitoring metrics from a container. The dynamic nature of container instances, and in particular the shortness of their lives, makes opening ports impractical, as reconfiguration would be needed every time an instance is shut down and another spun up. Containers, therefore, are designed to emit metrics that can be picked up by a monitoring system.
Containers only emit the right metrics in the right format for the monitoring system if this is written into the container’s code. Typically, when an application starts up, it emits metrics in a predefined format. The format depends on the tools, application programming interfaces (APIs) and software development kits (SDKs) used by the engineering team, which are used to generate, collect and export telemetry data (metrics, logs and traces). To optimize the monitoring of containers, wherever they are running, financial firms must standardize the tools they use (point 3 in Figure 5) to ensure that metrics are generated in a uniform format; this is the first step towards a ‘single pane of glass’ view of the operation and performance of the enterprise.
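As a simple, hedged illustration of what emitting metrics in a uniform format can look like, the sketch below uses the open-source prometheus_client Python library to expose two metrics over HTTP in the Prometheus exposition format; the metric names are invented for the example, and the choice of Prometheus stands in for whichever standard the firm adopts.

    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    # Metric names are hypothetical and chosen only for illustration.
    ORDERS_PROCESSED = Counter("orders_processed_total", "Orders handled by this container")
    QUEUE_DEPTH = Gauge("order_queue_depth", "Orders currently awaiting processing")

    if __name__ == "__main__":
        # Expose metrics over HTTP on port 8000; the monitoring system scrapes
        # them from each container instance it discovers, so no per-instance
        # reconfiguration is needed as containers come and go.
        start_http_server(8000)
        while True:
            ORDERS_PROCESSED.inc()
            QUEUE_DEPTH.set(random.randint(0, 50))
            time.sleep(5)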
The final best practice for financial firms is to ensure that the Kubernetes platform itself is monitored (point 4 in Figure 5). The Kubernetes platform is central to the operation of containers, as it orchestrates, creates and destroys them, and it is vital to ensuring that a certain level of service is maintained. If a container fails, Kubernetes will automatically spin up another one. Those failures can be captured and converted into events and metrics that tell the systems administrator a container has failed, so that the issue can be investigated and diagnosed.
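As a rough sketch of how such failures might be surfaced, again assuming local cluster credentials, the Kubernetes Python client can stream cluster events and pick out warnings (for example, crash-looping or out-of-memory restarts) for forwarding to the monitoring system; the filtering and output here are illustrative only.

    from kubernetes import client, config, watch

    # Assumes local kubectl credentials; in-cluster configuration works too.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Stream cluster events and report warnings, which include container
    # failures and restarts, so they can be raised as alerts.
    w = watch.Watch()
    for event in w.stream(v1.list_event_for_all_namespaces):
        obj = event["object"]
        if obj.type == "Warning":
            print(obj.involved_object.kind, obj.involved_object.name,
                  obj.reason, obj.message)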
Beat the Challenges of Monitoring a Dynamic Container Environment
To monitor a dynamically provisioned container environment, whether in the cloud or on premises, systems administrators must be able to monitor inside the application’s containers. Developers must standardize the tools they use so that each container emits its metrics, including its resource utilization, in a uniform format. Firms must also ensure that the resource management software itself is adequately monitored.
Only with comprehensive monitoring across the application technology stack, from the big picture (resource utilization) to the smallest detail (health of the containers), will systems administrators have the necessary visibility of what is happening in the enterprise in order to troubleshoot issues and maintain its smooth operation.
ITRS Geneos supports the most complex and interconnected IT estates, providing real-time monitoring for transactions, processes, applications and infrastructure across on-premises, cloud and highly dynamic containerized environments.