Adding or Editing Connectors

Step 1 – Connector Basic Information

Give the Connector a unique Connector Name; this is the name you will see in the connector drop-down list in the Integration Builder. Choose the Connection Direction; for this example we will configure an OUTBOUND Connector. Select the required Protocol / Connector type from the Protocol list. Provide a Description of the end point that will be connected to.

Choose the Applications (GLU.Engines) that the connector you are creating will be made available to. In the interests of configuration efficiency, a single Connector configuration can in this way be used across any number of GLU.Engines. If you are working on a GLU.Engine that does not have this Connector associated with it, the Connector name will not appear in the Connector drop-down list in the Integration Builder.

Same Env. Host: This setting enables you to indicate that the SOAP WSDL is located in the same Environment as the Host (defined in Step 1). Ticking it will ‘grey out’ the WSDL Location fields. Step 2.3 – SSL: This is where the SSL Configuration properties are set for this Connector on the specified Environment. See SSL Configuration for further detail.

Step 2 – Connector Environment Settings

Here for each Environment (selected using the Environments Drop-down list) you need to define the Host Settings, Properties and if applicable the SSL configurations for the Connector. This is where the concept of SDLC Context Awareness comes to the fore. By specifying the Host Details for each Connector and for each Environment (as defined in your SDLC in the Environments Tool), you are ensuring that when you build a GLU.Engine for a specific Environment, it will embed within it all the Host details for the Endpoints applicable only to that stage of your SDLC.

Host Settings

To invoke ‘Step 2’, click on the Step 2 – Connector Environments Settings button. Select the Environments that the GLU.Engine will be running in and, for each, apply the Environment specific Host Settings within the Host Settings tab. Enter the URL or IP address for the Connector in the Host field. If a Port is required, enter it in the Port field.

These details will be embedded in the GLU.Engine at build time. Since these Host Settings are environment specific, when the GLU.Engine is set to a specific Environment e.g. ‘PRE-PROD’ in the example below, the GLU.Engine will direct outbound traffic routed through your connectors to the destinations defined by these Host Settings for the PRE-PROD stage of your SDLC.

For Inbound Connectors, specify the URL / internal Host IP and Port of the server that the GLU.Engine will run on for the specified Environment; for Inbound Connectors both host and port are required. Note that if you set the Host to IP address ‘’ your GLU.Engine will be able to run on any machine. To tighten control of your GLU.Engines, however, it is recommended that you specify the INTERNAL IP address of the machine the GLU.Engine will be running on in the Host field; the GLU.Engine will then not start on any machine that does not use that specified internal IP address. For testing purposes, one can also use ‘localhost’ as the Host, which enables the GLU.Engine to run on your local machine.

For Outbound Connectors, the URL / Host IP and Port will be where the GLU.Engine sends Request messages to.

Read TimeOut: The read timeout is the timeout on waiting to read data. Specifically, if the server fails to send a byte within <timeout> seconds of the last byte, a read timeout error will be raised. Read timeouts may occur when the received data payload is very large.

Connection TimeOut: The connection timeout is the timeout in making the initial connection i.e. completing the TCP connection handshake.
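These two timeouts can be illustrated with curl (an analogy only, not GLU configuration syntax): `--connect-timeout` bounds the TCP handshake, while `--max-time` bounds the whole exchange and is curl’s closest analogue to a read timeout. The endpoint URL below is a placeholder.

```shell
# Illustration only - the URL is a placeholder, not a GLU setting.
# Fail if the TCP handshake takes longer than 5 seconds (connection timeout),
# or if the complete request/response takes longer than 30 seconds.
curl --connect-timeout 5 --max-time 30 https://example.com/api/health
```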

Once the connector has been configured, click Submit to complete the Connector configuration. The Connector will thereafter be available on the Connector lists for the GLU.Engines it has been attached to.


Properties

This is where Protocol specific Properties are defined. In the example below, since this Connector is configured as a SOAP Connector, the SOAP Properties of relevance are presented. Other Protocols will render their applicable properties in this panel. See the Protocol specific GLU.Guide articles for details on the Properties applicable to the specific protocol you are working with.

SSL Configuration

The SSL tab provides the fields you should populate to configure your Connector’s SSL settings. These are also environment specific.

An SSL keystore holds the identity key for the server and the SSL truststore serves as the repository for trusted certificates. The SSL truststore is used for trusting or authenticating client certificates (for two-way SSL).

For SSL configuration, the following file types can be handled by GLU in SSL connections:

.p12 / .crt / .pem and .ppk

If these files need to be switched to .jks then the following commands can be used:

$ keytool -v -importkeystore -srckeystore example.p12 -srcstoretype PKCS12 -destkeystore example.jks -deststoretype JKS

$ keytool -importkeystore -srckeystore example.p12 -srcstoretype pkcs12 -srcalias tomcat -destkeystore exampleCert.jks -deststoretype jks -deststorepass Yvf6K33z6qyuTvnA -destalias tomcat

$ keytool -importkeystore -srckeystore example.p12 -srcstoretype pkcs12 -destkeystore exampleCert.jks

$ openssl s_client -showcerts -connect <host>:<port>

The following is an example of the 3rd Party feedback which can be expected on a GLU SSL request. Where there is no cert in the request but the server requires (forces) one, an SSL error occurs because the server denies the request. For this request to Prod:

[CN=null][13/May/2020:17:15:18 +0200] - - - - [13/May/2020:17:15:18 +0200] "POST / HTTP/1.1" 200 25405 "-" "Apache-CXF/3.3.5"

For request to Test: 

Here ‘exampletest’ is the CN from the client cert in the logs:

[11/May/2020:13:06:02 +0200] exampletest - - [11/May/2020:13:06:02 +0200] "POST /api HTTP/1.1" 200 674 "-" "Apache-CXF/3.3.5"

Java includes the keytool utility in its releases. Keytool can be used to manage keys and certificates and store them in a keystore. The keytool command allows one to create self-signed certificates and to show information about a keystore.

A keytool command has the following structure:

keytool -import -noprompt -trustcacerts -alias <AliasName> -file <certificate> -keystore <KeystoreFile> -storepass <Password>

Here is an example:

keytool -import -alias scaPPPPROD2 -keystore ESB-Ext-PPP.jks -file sba.PPPPROD2.cer

This link provides the full set of keytool operations:

In addition to keytool, GLU provides an SSL Tool which can validate your generated .jks vs the intended endpoint. The GLU SSL Tool .jar file is available here:

To use it you need to specify:

  • IP
  • Port
  • .jks location
  • Password

For example: java -jar ssl-0.0.1-SNAPSHOT.jar 8027 certNew.jks changeit

Additional SSL Resources:
This link (for Digicert SSL Certs) outlines the steps … typically run by the DevOps Admin responsible for the GLU.Engine VM / Server:


  1. Receiving System Server generates a certificate request (.csr) using the -certreq command
  2. Initiating System gets this CSR signed (by the relevant CA) and provides a signed certificate (.crt)
  3. Receiving System imports that CRT into their keystore (Java Key Store) (this replaces the self-signed original certificate with a proper signed one)
  4. Receiving System then exports a public key using the -export command (usually .cer)
  5. GLU Config then points to the public key location in the Path (see screenshot above)


  1. If the signing authority is not a standard signing authority (Thawte, Verisign etc.) then you might need to import a root and possibly an intermediate signing authority certificate into your Truststore.
  2. If it is a ‘self-signed’ certificate, then the Initiating System’s generated certs need to be loaded into the Receiving System’s Truststore, otherwise the authority that signed the Receiving System cert will not be trusted.
  3. It is also possible that your Verisign/Thawte/CA certs are outdated; you might need to get fresh ones from the CA (Certificate Authority) and load them into your Truststore.

The following articles provide guidance on troubleshooting different SSL Connection issues:

Introduction to Connectors

GLU.Engines employ the concept of Connectors. Connectors can be one of three types:

  1. Inbound Connectors are consumed by Initiating Systems that send messages to the GLU.Engine – these are your GLU.Engine APIs.
  2. Outbound Connectors are used by the GLU.Engine to initiate messages / calls to downstream Receiving Systems.
  3. Inbound/Outbound Connectors are interfaces that can serve as a bi-directional interface to the GLU.Engine. This type is applicable in particular to certain technologies such as ISO8583.

GLU.Ware supports an ever expanding list of protocols and their associated payloads including TCP/IP, HTTP, SMTP, FTP, SOAP, REST, ISO8583, ISO20022 – SWIFT MX, ISO15022 – SWIFT MT, MML, GraphQL, SAP, AMQP – Rabbit MQ, AMQP – Apache MQ, SMPP/SMPPS, LDAP and various Database Connectors including MySQL, Oracle, MS SQL, DB2, PostgreSQL and Cassandra.

The GLU.Engine is able to transform any supported Protocol to any other supported Protocol. It does this by un-marshalling all payload attributes received into a ‘GLU Object Model’ maintained within the GLU.Engine, and then marshalling those attributes to outbound payloads using protocol specific content types as required.

The diagram below illustrates a Digital Channel Platform, as an Initiating System, consuming an Inbound Connector (API) on the GLU.Engine. The downstream ‘Receiving Systems’ (Business Systems, Scoring Systems, Regulatory Systems and CBS) are all accessed via Outbound Connectors (or, if ISO8583 is employed, an Inbound/Outbound Connector).

Source Systems that originate transactions consume the Inbound (‘API’) Connector exposed on the GLU.Engine. The Outbound Connectors on the GLU.Engine will ‘consume’ the interface (or API) that downstream or Target systems expose.

There can be multiple Target systems in any flow but there is only ever one Source System for a particular transaction. During configuration of any Connector Interface on the GLU.Console, the user will define if the Connector being configured is an ‘Inbound’, ‘Outbound’ or ‘Both’. This categorisation is important in that it presents specific parameters and variables that vary between the ‘API’ and ‘Connector’ Interface contexts.


Environments can be configured to align to the Client’s existing SDLC. There is no limit to the number of Environments that can be added. As an example, an SDLC might have the following phases:

Development -> System Integration testing -> Quality Assurance -> Production

To support this SDLC the following Environments will be created:

  • DEV
  • SIT
  • QA
  • PROD

For each environment the Outbound Connectors will (typically) have different End Points – Hosts (IP/Port or URLs). Thus for each Outbound Connector you will need to define the Environment specific host details. In this way, all host configurations are defined in the GLU.Console so that there is no need for any SDLC related configuration changes to be made outside of GLU as GLU.Engines are promoted through the SDLC.

Note that as a ‘way of working’ there is merit in always having a ‘PRE-PROD’ Environment which has all the PROD host details … this will enable you to build a PRE-PROD GLU.Engine that is identical to what will be the PROD GLU.Engine. The PRE-PROD GLU.Engine can then be used for PROD smoke testing etc. Then, when ‘cut over’ to PROD is signed off, the PROD build can be built and deployed with confidence.


Add Environment

Use the ‘Add Environment’ Tool to add Environments, where the following details need to be provided.

Field Name – Description
Environment Name – Assign a unique name for the Environment.
Production Checkbox – Identifies this as a ‘Production’ Environment. When a build is created using this Environment, the GLU.Engine will be tagged as a ‘Release’. All preceding Environment related builds in the SDLC will be tagged as ‘snapshot’ builds.
Description – Explains the purpose of this Environment.



Manage SDLC Tool

Use this tool to define a particular SDLC to be used for the specified Application which you select from the first pop-up box as below.


Once selected – you can add your pre-defined ‘Environments’ to this SDLC using the drop-down list. If an SDLC is going to conclude with a ‘Production’ release, select one of the pre-defined Environments that you flagged as a ‘Production’ Environment when using the ‘Add Environment’ Tool.


Please Note: There is no limit to the number of Environments you can add where the Production Checkbox has not been selected. You cannot add more than one ‘Prod’ Environment to an SDLC.


The ‘Skip Test’ toggle allows the Release Manager to quickly adjust the SDLC sequence. This may be useful, for example, where there is a low-risk but urgent Production bug: instead of the fix having to follow the full SDLC sequence, the Release Manager can ‘Skip Testing’ on any Environment (except those flagged as Production) to fast-track the Production fix. The RED CROSS can be used to de-activate an Environment. For historic audit reasons, it is not possible to delete Environments once they have been created and used.


User Accounts and Administration

Group Admin Accounts

GLU provides an organisational hierarchy where a Group can be created within which multiple separate Clients can be associated. A Group Admin Role allows a user to have Client Administrator privileges across all the Clients associated with a Group.



This is particularly useful where there is a ‘federated’ type client company structure in which the ‘Head Office’ requires the ability to oversee all underlying Client activities. The Group Admin Role also gives the user the ability to ‘Add GLU.Engines’ to any Client within the Group. Use the ‘Add GLU.Engine’ tool to select which Client within the Group to initiate a new GLU.Engine for. This Role also allows the Group Admin user to administer all Users and Roles within the Group. Users can be restricted to only certain GLU.Engines within the Group.



The Group Admin Role can create Client Admin Accounts for underlying Clients. Additionally, the Group Admin is able to create customised Roles with permissions tailored to any specific needs.

The Client Admin Account

GLU provides each Client with an Administrator Account per Application; this Administrator will have the permissions required to create users for each Application. The Client Admin Role is able to define its own Roles and to attach permissions as required to those Roles. Each User Account is associated with a Role such that the associated Role permissions are granted to the User.


Adding Users

The User Listing shows all Users, their roles, which GLU.Engines they have access to, when they last logged in etc. The Action options allow a Client Admin to resend an Activation email (in the event that a user’s password has been forgotten, for example); Users can also be deleted and their profiles and permissions edited.


While the Roles within the GLU.Console are configurable, GLU recommends assigning at least 3 separate resources to the activities on the GLU.Console. This ensures an appropriate level of segregation between the different responsibilities associated with the Release Manager, Analyst and Tester functions within any typical SDLC process. Under the User Management tool, a Client Admin will see the User listing. This details each existing user’s credentials, the Applications each has access to and their associated permissions. The Actions menu enables these users to be deleted or edited.


Adding a GLU.Engine

Group and Client Admins can use the GLU.Engine tool and the ‘+Add GLU.Engine’ tool to add a GLU.Engine. This will bring up the configuration panel as below.


Adding Roles

If editing a role, the ‘Edit Role’ pop-up will detail the Role name and the associated permissions. Holding the ctrl/cmd key will enable other Permissions to be selected or existing Permissions to be un-selected. Submitting will save the updated Role – Permissions association.

All Users: Profile (self) Management

This applies to all users: clicking on the Username (top left of the Console screen) will show the ‘Edit Profile’ pop-up. Details can be edited as needed, password resets can be performed and email or name credentials updated.

Project Initiation and Execution Guide

Integration Project Initiation

When initiating any integration project the first step is to build a solid understanding of the scope of the integration project. The following steps provide guidance as to where to best focus your efforts.


List the Integration Use Cases: (also referred to as Transactions) that are in scope, e.g. Balance Enquiry, Withdrawal, Airtime Purchase etc. This can take the form of a simple descriptive list.


List the End Points: (these are the business systems involved in all in scope transactions) and identify which are Initiating Systems (e.g. Web Channel or ATM System), Receiving Systems (e.g. Core Bank System or Airtime Platform) or both (some End Points can be involved in certain transactions as Initiating Systems, and in others as Receiving Systems).

Record for each End Point its type (e.g. Oracle DB / Rabbit Message Queue) or, if an API is available, its Protocol (e.g. REST / SOAP / TCP / ISO8583 etc., or DB Connector).


Create an Integration Context Diagram: This graphical representation provides an efficient means of building a quick understanding of End Points involved, how they inter-connect, where they are located and so on. Start simple and elaborate as you gather information about the Integration project you’re working on.


Create Sequence Diagrams: To align the understanding of the Use Cases, we recommend documenting them as UML Sequence Diagrams (you can use tools such as MS Visio or similar). Typically, to start, you can focus on just the ‘positive’ or ‘happy day’ scenarios. Later, these should be elaborated to include the ‘failure’ scenarios.

Inputs: You might be able to source existing sequence diagrams if such exist.

Inputs: Whiteboard sessions are a good way to talk through the flows of each transaction.


Gather End Point API Specification Documents: Based on the list of End Points, collect all available API Specifications. Note that API Specs are notoriously inaccurate, so some digging may be needed; always ask if the version you find is the latest (check the change control in the doc to see when it was last updated – docs that have not changed in 12 to 18 months or more may well be outdated). Note that if you have any SOAP API End Points in your scope, get hold of the associated WSDL files.


Establish Test End Point access: This enables you to fire test messages directly at the Test End Points and, in so doing, gain a first-hand understanding of each API. This step can take time, as security sensitivities may necessitate VPNs to be put in place or special access controls to be used. This access is critical to the Integration Lifecycle (see below), so it needs to be established as soon as possible (if at all possible). Utilise tools such as Postman to perform this step, moving from testing directly in Postman to creating a cURL command from Postman and applying this from the server which the GLU.Engine will run on (by doing this you will test the network and port configuration too).
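A Postman request exported as a cURL command might look like the following sketch. The endpoint URL, header and payload are placeholders (not a real End Point); running the command from the GLU.Engine server also exercises the network and port configuration:

```shell
# Placeholder endpoint and payload - substitute your own Test End Point details.
curl -X POST "https://test-endpoint.example.com/api/balance" \
  -H "Content-Type: application/json" \
  -d '{"accountId": "12345"}' \
  --connect-timeout 5 --max-time 30 -v
```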


Gather Host Details: The Host URL or IP Address and Port for each End Point for Testing and any subsequent stages in your Integration Lifecycle (see below) are needed.


Gather any existing Test Cases or Test Packs: These can be useful as they may highlight some of the failure scenarios that have already been identified. If automated test pack collections exist, such as one might use with tools like Postman or SOAP UI, these will help with your own test pack creation. GLU’s preferred test tool is Postman. Sometimes incumbent IT teams, suppliers or partners will have a Quality Assurance or Test team or person. Identify who these people are and ask them for any existing Test Cases they may have.


Gather End Point Sample Request/Response messages: Since API specs are often inaccurate, the fastest way to understand how an API behaves is to get working sample messages for each step of each in-scope Use Case. This can sometimes be challenging but is worth the effort. Some approaches to consider:

  • If the systems you are integrating to already exist, someone will typically have tested them; if so, they may be able to provide such samples. Leverage any inputs you might have gathered from your efforts to collect existing Test Packs.
  • If you have established Test End Point access (see above), that often gives you the best mechanism for generating these sample messages.
  • Detailed system logs will often also trap the inbound / outbound message payloads. These are often considered ‘sensitive’, so obfuscation of any sensitive data they contain may be required. This approach can also get complicated, as logs from each End Point would be needed, creating a potentially heavy workload to process them all.


Provision VMs for GLU.Engines: for each stage of the Integration Lifecycle. See the GLU.Engine Server recommended specifications.


Define the GLU Logfile Analytics Requirements: Not always critical for initial PoCs, but almost always essential as a pre-requisite for integrations prior to ‘go live’. See the standard GLU Logfile Analytics as a start, but define any Client specific dashboards as needed.


Define GLU System Metrics Dashboard Requirements: If the client wants to utilise the GLU.Console Metrics Dashboard, a VPN or other network connection will need to be established. Some clients prefer to use their existing Metric Analytics and Dashboard tools. See the standard GLU Metric Dashboards as a start, but define any Client specific dashboards as needed.

This Template provides a succinct basis against which an Integration Project can be executed … GLU – Project Definition Document – Template – v1.0

Useful Tools

There are a number of tools that one can use for various purposes to assist with streamlining integration projects. Below are a few such tools that the GLU team often uses. It is important to understand the context of any integration; to do so, drawing tools such as those below can be used to build up Context Diagrams that show the initiating system/s and the various receiving systems involved in your integration project. Try to capture as much detail as possible in these visual artefacts.


In order to test your GLU.Engines you need the ability to send messages and see responses. We use Postman widely for this purpose. Additionally you’ll need to interrogate the GLU.Engine logs; this can be done directly or using a log toolset like the Elastic Stack.

Other tools that may be of use:

The Integration Lifecycle

The Integration Lifecycle for your project may vary from that outlined below depending on specific constraints you may encounter in your project. Use this as a guide.


Probe Test Phase:

Once you have access to the Test End Points, you can commence probe testing each End Point. This provides you with an understanding of how the actual End Points behave. Tests you fire (e.g. from Postman) may be based on existing test cases you’ve managed to source, or on test cases you have created in your test tool (Postman) based on your understanding of the Sample Request/Response messages you’ve sourced (first prize!), or, failing that, on just the API Specs.


Lab Test Phase:

  1. Build Stubs: Based on your Probe Test results, using the sample Request/Response messages you are able to configure (or build) Stubs for each End Point. Initially just focus on the ‘happy day’ Response scenarios to get those working, the ‘failure’ scenarios can be added to your stubs and Lab Tests thereafter.
  2. Build Test Packs: For each Use Case, build the test cases in your Test tool (e.g. Postman)

Incrementally Configure and Build your GLU.Engine:

  1. Configuration: Break the configuration of each Use Case into logical segments and test each segment as you go.
  2. Lab Tests: Build your Lab Test GLU.Engines for testing from the Test GLU VM or Server as you go. (Note: Depending on your security policies, it is also possible to build initial Lab Test GLU.Engines for ‘localhost’ so you can download and test directly from your laptop or PC, provided you have access from there to your Stubs. This may have security implications, so it should be pre-authorised by the relevant parties.)
  3. Start combining working segments to build up the full Use Case.
  4. GLU Logfile Analytics: At this phase, one should also lab test the GLU Logfile Analytics (if in scope).
  5. GLU Metrics Dashboard: At this phase, one should also lab test the GLU Metrics Dashboard (if in scope).

Integration Test Phase:

Build a GLU.Engine for the Test Environment (typically within the Test environment). You’ll need the host details for all the Test End Points as well as for the VM you’ll be running your GLU.Engine from. Execute your Test Pack for all Use Cases, starting with the ‘happy day’ tests and then expanding into the ‘failure’ scenarios.


UAT Phase:

Optional, depending on the Client’s process.


Pre-Production Phase:

Optional, depending on the Client’s process.



The ultimate objective: your GLU.Engines will no longer be tagged as ‘SNAPSHOT’; they will be tagged as version specific ‘RELEASE’ builds.

GLU.Engine – Introduction

The GLU.Engine™ connects Initiating Systems to Receiving Systems. It exposes APIs as ‘Inbound Connectors’ that Initiating Systems connect to, and it handles the downstream connections to the required Receiving Systems via ‘Outbound Connectors’. These connections may simply be for the distribution of records or data from one system to one or more downstream systems, or they may perform potentially complex orchestration and routing of transactions between multiple systems. Based on the configuration applied in the GLU.Console, each GLU.Engine performs the required Protocol Translation between Initiating Systems and Receiving Systems. Parameter Validation can be applied to all inbound messages so as to filter out spurious messages, thus protecting the downstream ecosystem from unnecessary load.


Orchestration rules can be attached to any received payload (Inbound API Requests or Receiving System API Responses). There is no limit to the number of Handlers that can be attached to any flow; this enables the GLU.Engine to perform rich transaction flow Orchestrations by running these in-flight rules to determine the routes that individual messages follow, based on their respective parameter payloads.


Enrichment of message parameter sets is achieved by adding static or derived parameters within the GLU.Engine or from data received from any step in the flow.


Each GLU.Engine is SDLC Aware, meaning that it has embedded within it the network connection configurations for the SDLC stage (e.g. Test, Pre-Prod, Production) for which it has been built, enabling it to seamlessly exchange messages with the correct end-point systems. Monitoring hooks enable monitoring data to be presented on the GLU.Console, and a detailed Audit History of every GLU.Engine build process is retained on record in the GLU.Console. The GLU.Engine supports the transfer of messages securely using SSL, and various End-Point Validation methods (e.g. Basic, OAuth, OAuth2) are supported. Additionally, anti-tamper hash value validations are supported.




GLU.Console – Introduction

The GLU.Console is a secure, multi-tenant, cloud hosted configuration environment. It is the ‘brain’ of GLU.Ware; you can think of it as the Integration Analyst’s configuration canvas. When you log in to the GLU.Console you will be presented with a landing page as below. Depending on your user Role association, you may not have the full set of Management Tool options in the left tools list; however, all other tools will be consistent regardless of Role.

The most recently worked on Integrations in your Organisation will show in the central table, providing quick links to your most recent configurations. Other tools in the central pane likewise provide easy routes to start working in your desired tool. To return to the ‘landing page’ and quick link navigation, click on the GLU.Global icon at the top left of your GLU.Console window. The screenshot below provides an example of the Integration Builder, where GLU Analysts typically spend most of their configuration time.

The Integration Builder is used via the GLU.Console to configure Integration flows. Each integration has one or more transactions, each transaction has one or more transaction flows, each transaction flow has a number of tasks, and each task has parameters.

GLU Security

GLU follows a Security by Design based architecture, with a focus on security from two viewpoints:

  1. Ensure GLU.Ware software (GLU.Console and GLU.Engines) adheres to the highest standards (such as OWASP SAMM).
  2. Ensure GLU.Ware (GLU.Console and GLU.Engines) provides the features needed for our clients to adhere to the highest security standards, based on deep experience of secure integrations and the standards set out by OWASP.

Security Architecture

GLU.Ware’s ‘top level’ architecture has been specifically designed with Client Security in mind. It comprises two constructs.

  1. The GLU.Console is essentially a GUI based configuration console that GLU or Client Analysts use to design / configure the middleware. The GLU.Console includes a Build Manager tool that is used to compile the actual middleware component, which is the second construct.
  2. The GLU.Engine is the component that actually handles the processing of transactions and data. GLU.Engines are deployed within the Customer’s security domain and within the Customer’s security controls, protocols and standards.

The GLU.Console has no visibility of Customer production transactional data; no transactional data handled by GLU.Engines is stored by GLU.

Security Principles

GLU.Ware has not been ISO27001 or SOC2 certified however GLU does subscribe to these standards seeking to align all aspects of our operations and software accordingly. A Customer’s use of GLU.Ware should not affect their implementation of security controls towards conforming to or implementing these standards. GLU follows the Trust Service Principles that underpin the Service Organisation Control SOC2 Report standard. A SOC2 report focuses on a business’s non-financial reporting controls as they relate to security, availability, processing integrity, confidentiality, and privacy of a system.

Although GLU is not SOC2 audited (as yet), GLU is working towards the Trust Service Principles which SOC2 is based upon and which are modelled around four broad areas: Policies, Communications, Procedures, and Monitoring. Each of the principles have defined criteria (controls) which must be met to demonstrate adherence to the principles and produce an unqualified opinion (no significant exceptions found during your audit). The great thing about the trust principles is that the criteria businesses must meet are predefined, making it easier for business owners to know what compliance needs are required and for users of the report to read and assess the adequacy.

GLU.Ware Secure SDLC

GLU’s SDLC is based on the OWASP SAMM Project guidelines. It covers the security areas of Governance, Construction, Verification and Operations. Work is currently underway to formalise GLU’s detailed SSDLC Process and, as part of GLU’s Continuous Delivery practice, security auto-scanning tools will be utilised where possible.

Security Compliance and Standards

GLU recommends that customers implement ISO17799 / BS7799 security standards. A Customer’s use of GLU.Ware should not affect their implementation of security controls towards conforming to or implementing ISO17799 / BS7799 standards. GLU.Ware has not been PCI-DSS or PA-DSS certified; however, use of GLU.Engines within the Customer’s domain should not affect Customers with requirements related to the Payment Card Industry – Data Security Standard (PCI-DSS) or the Payment Application – Data Security Standard (PA-DSS).

GLU.Ware’s ‘top level’ architecture has been specifically designed with Client security in mind. It comprises two constructs: the GLU.Console, which is used to design and configure the middleware component, and the GLU.Engine, which is that middleware component and actually handles the processing of transactions and data. GLU.Engines are deployed within the Customer’s security domain and within the Customer’s security controls, protocols and standards. The GLU.Console has no visibility of Customer production transactional data; no transactional data handled by GLU.Engines is stored or made visible to GLU. A Client’s PCI-DSS status is thus not affected by GLU.Ware, provided the Client deploys and manages their GLU.Engines with the same security standards as the rest of their PCI-DSS certified domain.


All Customer interactions with the GLU.Console and all data handled by the GLU.Engine are treated as Confidential, as detailed in the terms of the SaaS agreement. Confidentiality applies equally to all log files and backup records taken by GLU.

Secure Communication

All connections initiated by GLU.Engines built with the Integration Builder utilise the Java Secure Socket Extension (JSSE); please refer to the JSSE reference documentation for details. This implicitly includes SSLv3 and TLSv1.2.

GLU.Console – Deployment security

The GLU.Console benefits from the native security features of the AWS Cloud, which take an end-to-end approach to providing secure, hardened infrastructure across physical, operational and software layers. AWS provides quick configuration of VPN integration into a new client’s network to ensure heartbeat and metrics can be received in GLU.Ware. The AWS infrastructure puts strong safeguards in place to help protect customer privacy, and all data is stored in highly secure AWS data centres. AWS meets the compliance requirements of ISO 27001, a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance. All access to GLU’s infrastructure in AWS is through AWS security protocols and the Key Management Service.

The GLU.Console is installed on a hardened operating system within GLU’s secure AWS environment. The GLU.Console application architecture spans two physical tiers. In the Perimeter zone, behind the perimeter firewall, resides the GLU reverse proxy. This provides a number of benefits, including the ability to load balance the GLU.Console; hiding the topology and characteristics of the GLU back-end servers by removing the need for direct internet access to them; handling incoming HTTPS connections, decrypting the requests and passing unencrypted requests on to the GLU.Console server; and centralised logging of HTTP traffic. The Business zone hosts the core of the GLU.Console, handling all user management and associated configuration business activity.

GLU has been certified as AWS Well Architected – see the Case Study.

GLU.Ware – Transport security

SSL is used to access the GLU.Console. Transport security responsibility for the GLU.Engine resides with the Customer. GLU recommends SSL-encrypting or VPN-tunnelling all communications between 3rd-party systems and the GLU.Engine. In the consumer channel context, however, some channels (USSD in particular) do not support encryption; GLU recommends that Customers apply transport layer security to such data communications at the earliest opportunity upstream of the GLU.Engine.

GLU.Ware – Secure Hash Algorithms

GLU uses a Secure Hash Algorithm (SHA) when encrypting passwords, so the server only needs to keep the specific user’s hash value, not the actual password. Any breach of the database will find only the hashed values and not the actual passwords. SHAs can also be used to detect tampering with data by attackers, preventing “Man in the Middle” attacks. GLU.Ware supports SHA-256, SHA-384 and SHA-512. GLU recommends that clients avoid SHA-1 and MD5, as these have been compromised; however, GLU.Ware is able to support them in exceptional circumstances. Hashing can be configured with selected or all parameters in a request or a response to ensure its authenticity. The SHA mechanism uses data from the message it validates as input to the Hash Algorithm. For example, a request with parameters “accountID”, “amount” and “reference” can be configured to use a single parameter or all parameters as input to the Hash Algorithm. The input parameters are specified in the Hash Expression. To use “accountID” and “amount” from the example above, the expression should be set to: ${accountID}+${amount}. A Salt Key can also be inserted for additional safeguarding. The salt value provides an external input to the Hash that isn’t present in the message being validated.
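As an illustration of the mechanism (not GLU.Ware’s actual implementation – the function name and salt handling below are assumptions), the Hash Expression ${accountID}+${amount} combined with a Salt Key corresponds to something like:

```python
import hashlib

def compute_hash(params: dict, expression_fields: list, salt: str = "") -> str:
    """Illustrative sketch: concatenate the parameter values named by the
    Hash Expression (e.g. ${accountID}+${amount}), append the salt, and
    return the SHA-256 hex digest."""
    message = "".join(str(params[f]) for f in expression_fields) + salt
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

request = {"accountID": "12345", "amount": "250.00", "reference": "INV-001"}
digest = compute_hash(request, ["accountID", "amount"], salt="s3cr3t")
```

Because the salt is an external input that never appears in the message itself, a tampered message cannot be re-hashed correctly by an attacker who does not hold the salt.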

GLU.Console – Authentication & Authorisation

GLU uses Spring Security to provide a flexible framework for authentication and authorisation requirements. Users of the GLU.Console are authenticated against an LDAP server hosted within the GLU.Console using a unique username and password. Users are assigned permissions and are associated with roles. These permissions are configurable to enable alignment of functional roles to the Customer’s role / resource context. Password complexity rules as well as password reuse limits are enforced.

GLU.Console – Spam and Abuse Protection

Access to the GLU.Console is further controlled via reCAPTCHA ‘I am Human’ validation – a free service from Google that helps protect websites from spam and abuse. It uses advanced risk analysis techniques to tell humans and bots apart. As a GLU Admin on the GLU.Console, a Variables option enables this feature to be activated or deactivated, and enables the reCAPTCHA key to be updated when needed. The site key is stored in the Variables table in the GLU.Console database. When GLU generates the Login page, the value is retrieved and passed to the HTML Login page. reCAPTCHA V2 is used.

GLU.Console – Data at rest encryption

All configuration and Customer-specific data stored in the GLU.Console database is encrypted using native database encryption. There is no data at rest within the GLU.Engine: transaction data from API Requests and Responses passes through the GLU.Engine to Connectors and is not stored in any database.

GLU.Engine – Network security

The GLU.Engine components are deployed by the Customer into their own network. These components can be deployed into any security zone provided all applicable routing and firewall rules (sources, destinations, ports etc.) are correctly configured by the Customer’s DevOps engineers. GLU recommends the use of a reverse proxy on the front-end, to act as an intermediary for all incoming connections.

GLU.Ware – Code security

GLU follows a secure coding discipline. Periodic static and dynamic code reviews are performed against a Test GLU.Engine. The test GLU.Engines are also periodically subjected to independent penetration tests. Some vulnerable components (e.g. framework libraries) can be identified and exploited with automated tools. These types of issues are not always easy to exploit; however, some sites publish plugins and scripts to automate attacks of this kind. GLU pro-actively scans for such exploits using the “Using Components with Known Vulnerabilities” category of the OWASP Top 10 of 2013, as well as reviewing the types of licences used in all the libraries within the GLU.Engine. The GLU.Engine only makes use of libraries with Open Source licences that only require the copyright and licence to be placed in the binary distribution. Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an IP network.

GLU.Console Session Timeouts

One of the most authoritative web application security standards organisations is OWASP (Open Web Application Security Project). Here’s what OWASP says about session timeouts: “Insufficient session expiration by the web application increases the exposure of other session-based attacks, as for the attacker to be able to reuse a valid session ID and hijack the associated session, it must still be active. The shorter the session interval is, the lesser the time an attacker has to use the valid session ID. The session expiration timeout values must be set accordingly with the purpose and nature of the web application, and balance security and usability, so that the user can comfortably complete the operations within the web application without his session frequently expiring…Common idle timeouts ranges are 2-5 minutes for high-value applications and 15-30 minutes for low risk applications.”

For this reason, any pages that handle potentially sensitive information have a timeout setting of 2 minutes, whereas other pages have a timeout setting of 30 minutes. 30 seconds before your session times out, you’ll be presented with a pop-up dialogue box saying ‘Your session is about to expire. Do you want to stay connected and extend your session?’ and you’ll be given the opportunity to retain your session. If you don’t, your session will expire and you’ll be logged out of the GLU.Console.

GLU.Ware Security product features

The GLU.Ware product provides many mechanisms to secure the transactions and associated Customer data which flow through GLU.Engines. These mechanisms and features are described below.

CORS Configuration

Support for Cross-Origin Resource Sharing (CORS) header options.
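CORS header options are standardised HTTP response headers. As a hedged sketch of what such a configuration typically carries (the origin, methods and values below are placeholders, not GLU defaults):

```python
# Typical CORS response headers a server might return for a pre-flight
# (OPTIONS) request. All values here are placeholders for illustration.
cors_headers = {
    "Access-Control-Allow-Origin": "https://app.example.com",
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    "Access-Control-Max-Age": "3600",  # cache the pre-flight result (seconds)
}
```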

API access controls

Access control is enforced in trusted server-side code (the GLU.Engine), where it is not possible to modify the access control check or metadata. Deny-by-default is the basic premise for GLU.Engines.

We implement access control mechanisms once and re-use them throughout the application.

Rate limiting of API and controller access is available through Throttle Type 1 & 2 controls to minimise the harm from automated attack tooling.

[Throttle Type 1]

[Throttle Type 2]
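The throttle types above are configured within GLU.Ware itself. Purely as a conceptual illustration of rate limiting (not GLU’s implementation), a minimal token-bucket limiter can be sketched as:

```python
import time

class TokenBucket:
    """Conceptual token-bucket limiter: refills at `rate` tokens per second
    and allows bursts up to `capacity`. Illustrative only."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # 5 req/s, burst of 2
# Rapid back-to-back calls beyond the burst size are rejected until refill.
```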

WCF Security

GLU provides support for Windows Communication Foundation – WCF Security. Windows Communication Foundation (WCF) has two major modes for providing security (Transport and Message) and a third mode (TransportWithMessageCredential) that combines the two.

GLU.Ware currently offers Message-level security only; should a Client require either of the other two modes, please raise a Support Desk ticket. Message security uses the WS-Security specification to secure messages. The WS-Security specification describes enhancements to SOAP messaging to ensure confidentiality, integrity, and authentication at the SOAP message level (instead of the transport level).

In brief, message security differs from transport security by encapsulating the security credentials and claims with every message, along with any message protection (signing or encryption). Applying the security directly to the message by modifying its content allows the secured message to be self-contained with respect to the security aspects. This enables some scenarios that are not possible when transport security is used.

Detailed Logging & Monitoring

GLU.Engines provide a rich set of logging and monitoring functionality.

[Logging Features]

[Logging Levels]

[Monitoring APIs]

[Monitoring Dashboard]

Masking Sensitive Data & Parameters in the Logs

It is possible to set up masks for all data and parameters in the logs.

[Masking log data]
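Conceptually, masking rewrites sensitive values before a log line is written. The sketch below is illustrative only – the field names and mask format are assumptions, not GLU.Ware configuration:

```python
import re

SENSITIVE = ["password", "pin", "cardNumber"]  # placeholder field names

def mask_log_line(line: str) -> str:
    """Replace the value of each sensitive key=value pair with asterisks."""
    for key in SENSITIVE:
        line = re.sub(rf'({key}=)[^,\s]+', r'\1****', line)
    return line

masked = mask_log_line("txn=991, cardNumber=4111111111111111, amount=100")
# The card number value is replaced by '****'; other fields are untouched.
```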

Secure Hash Algorithms

As described under ‘GLU.Ware – Secure Hash Algorithms’ above, GLU.Ware supports SHA-256, SHA-384 and SHA-512 (SHA-1 and MD5 only in exceptional circumstances), with the input parameters specified in a Hash Expression such as ${accountID}+${amount} and an optional Salt Key for additional safeguarding.

Hash Parameter configuration


There is a FUNCTION that can be used to encode a string to either Base32 or Base64.
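The underlying transformations can be seen with Python’s standard library (this is not the GLU FUNCTION itself, only the equivalent encodings):

```python
import base64

text = "hello"
b64 = base64.b64encode(text.encode()).decode()  # 'aGVsbG8='
b32 = base64.b32encode(text.encode()).decode()  # 'NBSWY3DP'
```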


SSL Security

GLU.Ware supports SSL encryption – see SSL Functionality for details.

RESTful API Design-Driven Approach

There are many good online resources that can help you understand best practices in API design and build up a foundational understanding of REST, including several particularly good articles on RESTful API design.

Design easy-to-consume APIs

A good API design makes the API easy to consume by the app developer. Below is a set of design best practices that have enabled many API designers with SOAP design experience to build the right set of easy-to-consume RESTful APIs.

Using a data-centric model

APIs should focus on the underlying entities/resources they expose, rather than a set of functions that manipulate those entities. In other words, the URLs should have nouns, not verbs.

For example, a collection of cars could have a URL such as /cars, and individual cars would each have a unique URL such as /cars/1234. With this approach, you can retrieve the details of the car using the GET method, delete the car using the DELETE method, and modify properties of the car using the PATCH or PUT methods.

By contrast, in a function-oriented API, there is much more variability, and much more detail a developer has to learn. And there is no clear structure or pattern you can use to help them with the next API.
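The contrast can be sketched as follows (the paths are illustrative):

```python
# Resource-oriented: one noun-based URL; behaviour comes from the HTTP method.
resource_style = [
    ("GET",    "/cars/1234"),   # read the car
    ("PATCH",  "/cars/1234"),   # modify some of its properties
    ("DELETE", "/cars/1234"),   # remove the car
]

# Function-oriented: a separate verb-URL for every operation,
# each of which the developer must learn individually.
function_style = [
    ("POST", "/getCar"),
    ("POST", "/updateCarColor"),
    ("POST", "/deleteCar"),
]
```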

Building simple JSON

Due to its simplicity, JavaScript Object Notation (JSON) has become the de facto standard for web APIs. When JSON is used well, it is simple and intuitive. If your JSON doesn’t look as straightforward as the example below, you may be doing something wrong.


{
  "kind": "Car",
  "name": "BMW",
  "Color": "Silver"
}


Your JSON API will be simpler and easier to understand if you stick to the principle that the names in your JSON are always property names, and the JSON objects always correspond to entities in your API’s data model.
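In practice this means serialising entities directly, so the JSON mirrors the data model (the Car entity here is illustrative):

```python
import json

# The dict keys are plain property names; the object maps 1:1 to the entity.
car = {"kind": "Car", "name": "BMW", "color": "Silver"}
payload = json.dumps(car)
restored = json.loads(payload)
# Round-trips cleanly because the structure is just an entity of named properties.
```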

Expressing relationships as links

If your web APIs do not include links today, a first step is simply to add some links without making other changes, like this:

{
  "id": "12378",
  "kind": "Car",
  "type": "BMW",
  "Colour": "Silver",
  "ownerID": "9876599",
  "ownerLink": ""
}

Using links makes it easier for app developers to consume resources, with less to learn and no need to hunt for documentation. Moreover, links can be plugged into templates to produce the right URL.

Designing URLs

A good way to make APIs human-friendly is to create entity URLs that include the entity type when fetching a specific resource. Thus, instead of https://cartracker.com/RTRX4545666, it is more desirable to have a URL such as https://cartracker.com/cars/RTRX4545666.

Also, it is not recommended to encode a hierarchy of entities into a URL. Hierarchies are not as stable as they might seem; encoding them in your URLs could prevent you from reorganising your hierarchies in the future.

For query URLs, it is recommended to use the format /persons/{personId}/cars rather than /cars?personId={personId}.

Many app developers prefer the first format because it is more readable and more intuitive, and it is easier for API developers to implement.

GLU.Engine – Performance Related Support

Data needed for Performance related Support 

If you have a performance-related issue with a GLU.Engine and are using either the GLU-advised JVM settings or your own adjusted JVM settings, please provide the garbage collection log, thread dump file and heap dump in the support ticket. The relevant Unix jstack and jmap commands for capturing these files are provided below. The full developer JDK will need to be installed to use these commands.


For CentOS, this is the command to install the developer JDK:

sudo yum install java-1.8.0-openjdk-devel


You will need the PID of the JVM running the GLU.Engine. Use this command to get it:

ps -afe | grep java

This will return a result similar to the following: 

[ec2-user@ip-172-31-4-29 ~]$ ps -afe | grep java
root      2725     1  3 06:21 ?        00:05:38 java -XX:+PrintGCDetails -Xloggc:gc.log -Xms1g -Xmx3g -XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+UseStringDeduplication -XX:G1HeapRegionSize=32 -XX:ConcGCThreads=4 -XX:G1ReservePercent=15 -XX:InitiatingHeapOccupancyPercent=30 -XX:MetaspaceSize=100M -jar ./engine/ims-1.1-SNAPSHOT.jar --spring.config.additional-location=./engine/config/appSetting.yml
ec2-user  4715  3867  0 08:47 pts/0    00:00:00 grep --color=auto java


From this you can see that 2725 is the PID.

gc – garbage collection

If you are running the performance settings, you will find the gc.log file in the GLU working directory.

-rwx------ 1 ec2-user ec2-user    149 Sep  9 09:57
-rwx------ 1 ec2-user ec2-user    144 Sep  9 09:57
-rwx------ 1 ec2-user ec2-user    177 Sep  9 09:57
-rwx------ 1 ec2-user ec2-user    256 Sep  9 09:57 docker-compose.yml
-rwx------ 1 ec2-user ec2-user    304 Sep  9 09:57
-rwx------ 1 ec2-user ec2-user    283 Sep  9 09:57
drwxrwxr-x 3 ec2-user ec2-user     48 Sep  9 11:39 engine
-rwx------ 1 ec2-user ec2-user    278 Sep  9 13:36
-rwx------ 1 ec2-user ec2-user    456 Sep  9 13:47
-rw-r--r-- 1 root     root          5 Sep 10 06:21 pid.file
drwxrwxr-x 2 ec2-user ec2-user     25 Sep 10 06:21 log
-rw-r--r-- 1 root     root     293710 Sep 10 07:51 gc.log


gc.log is the file to attach to the Support ticket.

Thread Dump

Use the following command to capture a Thread Dump for the PID of your GLU.Engine:

jstack -l 2725 > /home/ec2-user/imsussdthreadDump.txt


where 2725 is the PID for the GLU.Engine and /home/ec2-user/imsussdthreadDump.txt is the path and the filename of the file to attach to the Support ticket.


Heap Dump

Use the following command to capture a heap dump:

sudo jmap -dump:live,format=b,file=/home/ec2-user/imsUSSDdump.hprof 2725


where 2725 is the PID for the GE and /home/ec2-user/imsUSSDdump.hprof is the path and the filename of the file to attach to the Support ticket.
