Context
GLU.Ware is all about speed. Not just the ability to ‘Integrate at Speed’ but equally so, to ‘Process at Speed’. It’s our mission to ensure that GLU.Engines in a Client’s ecosystem are able to scale horizontally and vertically so as to guarantee that those GLU.Engines never cause transactional bottlenecks.
Performance Testing GLU.Engines is thus an integral part of the GLU.Ware Product Quality Assurance discipline. The objective of our performance testing process is to identify opportunities to optimise the GLU.Ware code, its configuration and how it is deployed, and in so doing to continuously improve the performance of GLU.Engines.
Our Performance Testing process provides GLU and our Clients with insight into the speed, stability, and scalability of GLU.Engines under different conditions.
Test Scenarios
We have defined three performance test scenarios to cover the spectrum of solutions which GLU.Engines can provide integrations for. To focus on maximum throughput we have defined a simple ‘Straight Line Scenario’; to explore the impact of latency on a GLU.Engine we have included the ‘Latency Scenario’; and to understand the impact of complexity we have included the ‘Complex Integration Scenario’.
The Straight Line Scenario is a simple asynchronous JSON payload pass-through: a delivered JSON payload is simply offloaded downstream to a Rabbit Message Queue.
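As an illustration only (this is not GLU.Ware code, merely a sketch of the shape of the scenario), the snippet below uses the RabbitMQ Java client to publish an inbound JSON payload, untransformed, onto a queue. The broker host and queue name are hypothetical.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class StraightLinePassThrough {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // hypothetical broker address

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Hypothetical queue name; the queue used in the benchmark is not published here.
            String queue = "straight.line.queue";
            channel.queueDeclare(queue, true, false, false, null);

            // The delivered JSON payload is offloaded downstream without any transformation.
            String jsonPayload = "{\"transactionId\":\"12345\",\"amount\":100}";
            channel.basicPublish("", queue, null, jsonPayload.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```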
The ‘Latency Scenario’ is similar to the Straight Line Scenario except that the payload is a USSD menu payload, which is passed through a GLU.Engine that produces transactions onto a Rabbit Message Queue. Those transactions are in turn consumed from the Rabbit Message Queue by another GLU.Engine and then passed to a stub which has been configured with variable latency in its response (to emulate latency in downstream Endpoint systems).
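To illustrate what is meant by a stub with a variable response latency (a sketch only, not the actual stub used in the GLU test lab), a minimal Java HTTP stub that sleeps for a configurable number of milliseconds before responding could look as follows:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;

public class LatencyStub {

    public static void main(String[] args) throws Exception {
        // Hypothetical: the latency is passed as the first argument in milliseconds
        // (defaulting to 100ms, the figure used in the benchmark).
        long latencyMs = args.length > 0 ? Long.parseLong(args[0]) : 100;

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/stub", exchange -> {
            try {
                Thread.sleep(latencyMs); // emulate a slow downstream Endpoint
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "{\"status\":\"OK\"}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.setExecutor(Executors.newFixedThreadPool(32)); // handle concurrent load
        server.start();
    }
}
```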
The Complex Integration Scenario involves multiple layers of orchestration logic, multiple downstream Endpoints including multiple protocol transformations and multiple synchronous and asynchronous calls to Databases and Message Queues.
Executive Summary of Performance Test Results
Test Criteria | Straight Line Integration Scenario | Complex Integration Scenario |
---|---|---|
TPS | 4,400 | 754 |
CPUs | 8 | 4 |
Setup | Containers: 1 Docker Swarm Manager (4 vCPU, 16 GiB) and x2 Worker Nodes (2 vCPU, 4 GiB) | VM (4 vCPU, 8 GiB Memory) |
Additionally, we have defined a Performance Test scenario for the GLU.USSD solution which is pre-integrated with the GLU.Engine.
Test Criteria | USSD Solution | USSD with Latency Injection |
---|---|---|
TPS | 915 | 1 Silo: 350 (latency of 100ms); 3 Silos: 702 (latency of 100ms) |
CPUs | 16 | 4 |
Setup | Containers: 1 Docker Swarm Manager (8 vCPU, 16 GiB) and x2 Worker Nodes (4 vCPU, 16 GiB) | GLU.Engine Producer: VM (2 vCPU, 8 GiB Memory); RabbitMQ: Containers with 1 Docker Swarm Manager (8 vCPU, 16 GiB) and x2 Worker Nodes (4 vCPU, 16 GiB); GLU.Engine Consumer & USSD: VM (4 vCPU, 16 GiB Memory) |
GLU.Engines are CPU bound, so ‘vertically scaling’ CPU leads to a better than linear performance improvement. GLU.Engines can also be horizontally scaled behind a load balancer or a Docker Swarm Manager (proxy) if containerised.
GLU.Engines are able to absorb Endpoint latency of up to 100ms and still achieve considerable TPS, with increased TPS being possible if horizontal scaling is architected into the deployment architecture.
Performance Optimization Recommendations
For optimal performance of a system of GLU.Engines, as reflected in the TPS benchmark figures for the systems defined in this document, the following is recommended:
- Performance of a system is dependent on the performance of each component within it. A GLU.Engine is only one such component, so it is important to monitor and track all components connected to the GLU.Engine to ensure they are performing in line with expectations. It is essential to proactively monitor the ecosystem, and the GLU.Engines specifically, with alerts set for all metrics of interest including but not limited to CPU, Memory, Heap Size, Garbage Collection, Disk Space and Latency (a minimal sketch of sampling such JVM metrics follows this list).
- The deployment architecture of GLU.Engines within the ecosystem has a direct bearing on their performance. Ideally a performance forecast should be maintained so that any required additional capacity is planned and implemented in a timely manner. GLU support is available to guide on the required sizing of the deployment architecture. Forecasts should include transaction types, flows, TPS and payload sizes, as these all have a bearing on performance.
- It is recommended to consult GLU support on specifications for GLU.Engines and the servers / VMs / Containers / networks, to help understand any constraints which may exist within the system architecture.
- It is recommended that during ‘normal’ operations log levels for the GLU.Engines be set to INFO or above (i.e. not DEBUG or TRACE), as log levels affect GLU.Engine performance. In the event of a suspected problem, and during its analysis, log levels can be set to DEBUG to help trace the problem.
- Where a performance degradation of a GLU.Engine is suspected, the GLU Support team is able to help; however, it is essential that detailed logs and monitoring metrics are provided along with a full description of the problem scenario, so that the support team can understand the problem and, if need be, recreate it in the GLU labs. GLU may ask for access to monitoring tools in the Client’s environment in order to collaborate and pragmatically address the problem as quickly as possible.
- It is essential to ensure the GLU.Engines and associated hardware are kept up to date. GLU is always improving the GLU.Ware product and will release performance improvements from time to time.
- It is recommended to tune and review the performance of other ecosystem components, including load-balancers, Docker container managers, databases, message queues and internal applications, as well as internal and external networks and third-party applications.
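As a minimal sketch of the kind of JVM-level monitoring referred to above (not part of GLU.Ware, and assuming direct access to the GLU.Engine JVM; in practice these metrics are typically scraped via JMX or an agent by an external monitoring tool), the following Java snippet samples heap usage, garbage collection counts/time and system load:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

public class EngineMetricsProbe {

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long gcCount = 0;
            long gcTimeMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                gcCount += gc.getCollectionCount();
                gcTimeMs += gc.getCollectionTime();
            }
            // These values would be pushed to the Client's alerting tool, with thresholds
            // set for heap usage, GC time, load average, etc.
            System.out.printf("heapUsedMB=%d heapMaxMB=%d loadAvg=%.2f gcCount=%d gcTimeMs=%d%n",
                    heap.getUsed() / (1024 * 1024),
                    heap.getMax() / (1024 * 1024),
                    os.getSystemLoadAverage(),
                    gcCount, gcTimeMs);
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}
```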
Performance Test Details
Straight Line Scenario Setup
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which were used to run the GLU.Engines and RabbitMQ processes; 3 Nodes were set up across 3 EC2 instances.
Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
---|---|---|---|
Swarm Manager | t3a.xlarge | 4 vCPU | 16 GiB |
Swarm Node 1 | t3.medium | 2 vCPU | 4 GiB |
Swarm Node 2 | t3.medium | 2 vCPU | 4 GiB |
System Versions
System | Version |
---|---|
GLU.Ware | 1.9.13 |
RabbitMQ | 3.8.7 |
Swarmpit | 1.9 |
JMeter Test Setup Properties

Deployment Architecture

Straight Line Scenario Performance Test Results
Test Criteria | Result |
---|---|
Users | 400 |
Duration | 1 hour |
TPS | 4,400 |
% Errors | 1.22 % |
Total Transactions | 15,846,714 |
JMeter Results Summary

Rabbit MQ Result Summary

Commentary
An initial test involving a single node with 4 vCPUs and 16 GiB of Memory achieved a result of 1,885 TPS. The 4,400 TPS result was achieved as described above with a Swarmpit Manager and two nodes, collectively utilising 8 vCPUs and 16 GiB of Memory. This demonstrates that the GLU.Engine is CPU bound, such that by reconfiguring and allocating additional CPU one is able to scale the performance of a GLU.Engine setup better than linearly.
Complex Scenario Setup
The complex scenario covers two benchmarks: the 1st excludes USSD and the 2nd includes USSD.
1st Complex Test (excluding USSD)
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communication. In this test a Docker container was not used; rather, a GLU.Engine was deployed directly to a single AWS c5.xlarge (4 vCPU, 8 GiB Memory) EC2 instance. This did not include load-balancing, as the objective was to understand the load a single GLU.Engine could achieve.
The diagram below outlines the complex architecture. Note how JMeter injects transactions and each transaction is orchestrated across a DB connection to MSSQL as well as REST, SOAP and Rabbit connections, returning a response back to JMeter, where the time of the finished transaction is taken.
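The sketch below is purely illustrative of this orchestration shape; it is not GLU.Engine code, and the table, queue and Endpoint names are hypothetical. It shows a single transaction touching a JDBC (MSSQL) connection, a REST Endpoint, a SOAP Endpoint (XML over HTTP at the wire level) and RabbitMQ before returning a response to the injector:

```java
import com.rabbitmq.client.Channel;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ComplexOrchestrationSketch {

    private final Connection sqlConnection;   // JDBC connection to the MSSQL instance
    private final HttpClient httpClient = HttpClient.newHttpClient();
    private final Channel rabbitChannel;      // channel to the RabbitMQ broker

    public ComplexOrchestrationSketch(Connection sqlConnection, Channel rabbitChannel) {
        this.sqlConnection = sqlConnection;
        this.rabbitChannel = rabbitChannel;
    }

    /** One inbound transaction fans out across DB, REST, SOAP and Rabbit before responding. */
    public String handleTransaction(String transactionId) throws Exception {
        // 1. Synchronous database lookup (hypothetical table/column names).
        try (PreparedStatement ps = sqlConnection.prepareStatement(
                "SELECT status FROM transactions WHERE id = ?")) {
            ps.setString(1, transactionId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
            }
        }

        // 2. Synchronous REST call to a downstream Endpoint (hypothetical URL).
        HttpRequest restCall = HttpRequest.newBuilder(URI.create("http://endpoint-a/api/validate"))
                .POST(HttpRequest.BodyPublishers.ofString("{\"id\":\"" + transactionId + "\"}"))
                .header("Content-Type", "application/json")
                .build();
        httpClient.send(restCall, HttpResponse.BodyHandlers.ofString());

        // 3. SOAP call, i.e. an XML envelope over HTTP POST (hypothetical envelope/URL).
        HttpRequest soapCall = HttpRequest.newBuilder(URI.create("http://endpoint-b/soap"))
                .POST(HttpRequest.BodyPublishers.ofString("<soap:Envelope>...</soap:Envelope>"))
                .header("Content-Type", "text/xml")
                .build();
        httpClient.send(soapCall, HttpResponse.BodyHandlers.ofString());

        // 4. Asynchronous hand-off to RabbitMQ (hypothetical queue name).
        rabbitChannel.basicPublish("", "complex.audit.queue", null,
                transactionId.getBytes(StandardCharsets.UTF_8));

        // 5. The response travels back to the injector (JMeter), which records the elapsed time.
        return "{\"id\":\"" + transactionId + "\",\"result\":\"OK\"}";
    }
}
```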

1st Test Complex Scenario Performance Test Results
Test Criteria | Result |
---|---|
TPS | 754 |
The graph below illustrates how performance scaled in proportion to the VM size of each EC2 instance as it was increased.

Commentary
The key factor influencing performance, when there was minimal latency on the responding Endpoints, was found to be the number of vCPUs available.
2nd Complex Test (including USSD)
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which were used to host the GLU.Engines and execute the GLU.USSD tests; 4 Nodes were set up, comprising 1 Manager and 3 Worker Nodes.

Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
---|---|---|---|
Swarm Manager | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 1 | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 2 | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 3 | t3.xlarge | 4 vCPU | 16 GiB |
System Versions
System | Version |
---|---|
GLU.Ware | 1.9.14 |
Swarmpit | 1.9 |
2nd Test USSD with Integration Scenario Performance Test Results
Test Criteria | Result |
---|---|
TPS | 914.9 |


Latency Scenario
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which supported the container running RabbitMQ.
The latency scenario was designed in such a way as to maximise performance where the Endpoints were slow to respond, with a high degree of latency. The performance testing was set up with horizontal scaling across 3 silos, with contention on the test stubs being managed through a load balancer. Injection was carried out through a dedicated server for JMeter, which injected USSD menu transactions into a GLU.Engine set up to distribute transactions to 3 separate Rabbit queues in a round-robin fashion.
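As an illustration of the round-robin distribution described above (a sketch only; the queue names and broker address are assumptions, not the actual test configuration), a producer distributing USSD transactions across three silo queues could look as follows:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinSiloPublisher {

    // Hypothetical queue names, one per silo.
    private static final String[] SILO_QUEUES = {"ussd.silo.1", "ussd.silo.2", "ussd.silo.3"};

    private final Channel channel;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinSiloPublisher(Channel channel) throws Exception {
        this.channel = channel;
        for (String queue : SILO_QUEUES) {
            channel.queueDeclare(queue, true, false, false, null);
        }
    }

    /** Each inbound USSD menu transaction is sent to the next silo's queue in turn. */
    public void publish(String ussdPayload) throws Exception {
        String queue = SILO_QUEUES[(int) (counter.getAndIncrement() % SILO_QUEUES.length)];
        channel.basicPublish("", queue, null, ussdPayload.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // hypothetical broker address
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            RoundRobinSiloPublisher publisher = new RoundRobinSiloPublisher(channel);
            publisher.publish("{\"msisdn\":\"26657000000\",\"menu\":\"1\"}");
        }
    }
}
```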

Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
---|---|---|---|
Decision Maker | t2.large | 2 vCPU | 8 GiB |
USSD / Integration Engines | t3.xlarge | 4 vCPU | 16 GiB |
Test Stub | t2.medium | 2 vCPU | 4 GiB |
Swarm Manager | a1.2xlarge | 8 vCPU | 16 GiB |
Swarm Node 1 | t3a.xlarge | 4 vCPU | 16 GiB |
Swarm Node 2 | t3a.xlarge | 4 vCPU | 16 GiB |
System Versions
System | Version |
---|---|
GLU.Ware | 1.9.22 |
Swarmpit | 1.9 |
Latency Scenario with USSD / Integration Performance Test Results
Test Criteria | Number of Silos | TPS Results |
---|---|---|
Latency 100ms | 1 Silo | 350 TPS |
Latency 100ms | 3 Silos | 700 TPS |
GLU.Engines are able to absorb increased latency if sufficient memory is allocated and throttle settings are adjusted to allow for the buffering of transactions. See Managing Load with Throttles.
Commentary
Even at extremely high latency, in excess of 3 seconds, GLU.Engines will still deliver ±90 TPS.
Reducing latency to 100ms increases throughput to ±350 TPS.
GLU.Engines scale in a near linear fashion. As additional performance is required additional servers can be added.
An increase in latency may necessitate additional memory allocation for the GLU.Engine to accommodate the buffering of transactions.