Integration Project Initiation
When initiating any integration project, the first step is to build a solid understanding of context and scope. The following steps provide guidance on where best to focus your efforts.
List the Integration Use Cases: identify the Use Cases (also referred to as Transactions) that are in scope, e.g. Balance Enquiry, Withdrawal, Airtime Purchase etc. This can take the form of a simple descriptive list.
List the End Points: these are the business systems involved in all in-scope transactions. Identify which are Initiating Systems (e.g. Web Channel, ATM System, Back Office UI etc.) and which are Receiving Systems (e.g. Core Bank System, ERP, Database, Message Queue, Airtime Platform etc.). Some End Points can be involved in certain transactions as Initiating Systems, and in others as Receiving Systems.
For each End Point, gather the information relevant to your integration: the system type (e.g. Oracle DB, RabbitMQ), the protocol if an API is available (e.g. REST, SOAP, TCP, ISO8583, or a DB Connector), the system Name and Version, the API Version, etc.
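This inventory can be kept in any format; as one illustrative sketch (the system names, roles and field choices below are hypothetical, not a GLU-prescribed schema), a simple structured record per End Point might look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EndPoint:
    """One business system involved in the in-scope transactions."""
    name: str                        # e.g. "Core Bank System"
    role: str                        # "initiating", "receiving", or "both"
    system_type: str                 # e.g. "Oracle DB", "RabbitMQ"
    protocol: Optional[str]          # e.g. "REST", "SOAP", "ISO8583", "DB Connector"
    version: Optional[str] = None
    api_version: Optional[str] = None

# Hypothetical entries for a Balance Enquiry use case
endpoints = [
    EndPoint("Web Channel", "initiating", "Web App", "REST"),
    EndPoint("Core Bank System", "receiving", "CBS", "SOAP", version="4.2"),
]

receiving = [e.name for e in endpoints if e.role == "receiving"]
print(receiving)  # ['Core Bank System']
```

Keeping the inventory structured like this makes it easy to cross-check later that every End Point has host, credential and specification details gathered.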
Create an Integration Context Diagram: This graphical representation provides an efficient means of quickly building an understanding of the End Points involved, how they inter-connect, where they are located and so on. Start simple and elaborate as you gather information about the Integration project you’re working on. You can also start to identify the specific GLU.Engines needed. Since GLU.Engines can be connected to each other, it often makes sense to use more fine-grained GLU.Engines so they can be updated and evolved independently, minimising downtime in the event of any changes being needed. GLU.Engines can also be monolithic, servicing connections to multiple End Points. See the example adjacent.
Create Sequence Diagrams: To align the understanding of the Use Cases, we recommend documenting them as UML Sequence Diagrams (you can use tools such as MS Visio or Draw.io). Typically, to start, you can focus on just the ‘positive’ or ‘happy day’ scenarios – see the simple example below. Later, these should be elaborated to include the ‘failure’ scenarios. You may be able to source existing sequence diagrams if any exist. Whiteboard sessions are a good way to talk through the flows of each transaction.
Gather End Point Connection and API Specification Documents: Based on the list of End Points, collect all available API Specifications. Note that API Specs are notoriously inaccurate, so some digging may be needed; always ask whether the version you find is the latest (check the change control in the doc to see when it was last updated – docs that have not changed in 12 to 18 months or more may well be outdated). If you have any SOAP API End Points in your scope, get hold of the associated WSDL files.
Gather End Point Connection and API Authentication and Authorisation details: Based on the list of End Points, document the authentication and authorisation details, SSL certificates, encryption methods used etc. Typically, different credentials will be required for Test vs. Production, and Production credentials may only be shared when required.
Gather Host Details: The Host URL or IP Address and Port details for each End Point are needed for Testing and for any subsequent stages in your Integration Lifecycle.
Establish Test End Point access: This enables you to send test messages directly at the Test End Points and, in so doing, to gain a first-hand understanding of each API. This step can take time, as security sensitivities may necessitate VPNs to be put in place or special access controls to be used. This access is critical to the Integration Lifecycle, so it needs to be established as soon as possible. Use tools such as Postman to perform this step, moving from testing directly in Postman to creating a cURL command from Postman and running it from the server the GLU.Engine will run on (by doing this you test the network and port configuration too). Please refer to Test End Points for details on approach and tools.
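Postman can export any request as an equivalent cURL command (via its code-export pane), which can then be replayed from the GLU.Engine server. The same translation can be sketched in a few lines of Python; the URL, headers and payload below are hypothetical placeholders:

```python
import json
from typing import Optional

def to_curl(method: str, url: str, headers: dict, body: Optional[dict] = None) -> str:
    """Build a cURL command equivalent to a test request, so the same
    message can be fired from the server the GLU.Engine will run on."""
    parts = [f"curl -X {method}"]
    for key, value in headers.items():
        parts.append(f"-H '{key}: {value}'")
    if body is not None:
        parts.append(f"-d '{json.dumps(body)}'")
    parts.append(f"'{url}'")
    return " ".join(parts)

# Hypothetical Balance Enquiry probe against a Test End Point
cmd = to_curl(
    "POST",
    "https://test-endpoint.example.com/api/balance",
    {"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    {"accountId": "12345"},
)
print(cmd)
```

Running the exported command from the target server verifies not just the API but also the network routing and port configuration between that server and the End Point.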
Gather any existing Test Cases or Test Packs: These can be useful as they may highlight failure scenarios that have already been identified. If automated test pack collections, such as those used with tools like Postman or SOAP UI, are available, they will help with your own test pack creation. GLU’s preferred test tool is Postman. Sometimes incumbent IT teams, suppliers or partners will have a Quality Assurance or Test team or person; identify who these people are and ask them for any existing Test Cases they may have. Use any available existing test cases to start building up the test pack for your integration use cases, covering both positive and negative scenarios.
Gather End Point Sample Request/Response messages: Since API specs are often inaccurate, the fastest way to understand how an API behaves is to get working sample messages for each step of each in-scope Use Case. This can sometimes be challenging but is worth the effort. Some approaches to consider: if the systems you are integrating to already exist, someone will typically have tested them and may be able to provide such samples, so leverage any inputs from your efforts to gather existing Test Packs. If you have established Test End Point access (see above), that will often give you the best mechanism for generating these sample messages. Detailed system logs will often also capture the inbound/outbound message payloads; these are often considered ‘sensitive’, so the sensitive data they contain may need to be obfuscated. This approach can also get complicated, as logs from each End Point would be needed, creating a potentially heavy workload to process them all. Regardless of how you gather your sample messages, having them will help accelerate your integration configuration.
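Where sample payloads must be obfuscated before they can be shared, a small script can mask the sensitive fields consistently. This is a minimal sketch; the field names in the `SENSITIVE` set are illustrative and should be aligned with your own data policy:

```python
# Illustrative set of field names considered sensitive
SENSITIVE = {"pan", "pin", "password", "msisdn", "accountNumber"}

def obfuscate(payload):
    """Recursively mask sensitive fields in a sample request/response
    message so it can be shared outside the secure environment."""
    if isinstance(payload, dict):
        return {k: ("****" if k in SENSITIVE else obfuscate(v))
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [obfuscate(v) for v in payload]
    return payload

sample = {"accountNumber": "1234567890",
          "balance": {"amount": 150.0, "currency": "KES"}}
masked = obfuscate(sample)
print(masked)
```

The non-sensitive structure of the message is preserved, which is what matters when using samples to drive integration configuration.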
Provision VMs for GLU.Engines: for each stage of the Integration Lifecycle. See the GLU.Engine Server recommended specifications.
Define the GLU Logfile Requirements: Not always critical for initial PoCs, but almost always essential as a pre-requisite for integrations prior to ‘go live’. See the GLU Logfile page as a start, but define any Client-specific dashboards as needed.
Define GLU System Metrics Dashboard Requirements: Clients should use their existing Metric Analytics and Dashboard tools to monitor their GLU.Engines. The full spectrum of JMX Metrics is available to be consumed by any compatible Monitoring tool. See the JMX Metrics page for more.
This Template provides a succinct basis against which an Integration Project can be executed … GLU – Project Definition Document – Template – v1.0
There are a number of tools that one can use for various purposes to assist with streamlining integration projects. Below are a few such tools that the GLU team often use. It is important to understand the context of any integration; to do so, drawing tools such as those below can be used to build up Context Diagrams that show the initiating system(s) and the various receiving systems involved in your integration project. Try to capture as much detail as possible in these visual artefacts.
- https://app.diagrams.net/ – for building up Context Diagrams – a free and very powerful alternative to Microsoft Visio
- https://mermaid-js.github.io/mermaid-live-editor/ – a free and powerful tool to assemble Sequence Diagrams
In order to Test your GLU.Engines you need the ability to send messages and see responses. We use Postman widely for this purpose. Additionally, you’ll need to interrogate the GLU.Engine logs; this can be done directly or using a toolset like the Elastic Stack.
- https://www.postman.com/ – For Testing your GLU.Engines
- https://www.elastic.co/elastic-stack – for Log Analytics – Elasticsearch, Kibana, Beats, and Logstash (also known as the ELK Stack) allow one to search, analyze, and visualize GLU.Engine logs in real time.
- https://hawt.io/ – a free tool to visualise your GLU.Engine JMX Metrics
Other tools that may be of use:
- https://www.docker.com/ – If you are using Docker containers to run your GLU.Engines
The Integration Lifecycle
The Integration Lifecycle for your project may vary from that outlined below depending on specific constraints you may encounter in your project. Use this as a guide.
Probe Test Phase:
Once you have access to the Test End Points, you can commence probe testing each End Point. This provides you with an understanding of how the actual End Points behave. Tests you fire (e.g. from Postman) may be based on existing test cases you’ve managed to source, on test cases you have created in your test tool (Postman) from your understanding of the sample Request/Response messages you’ve sourced (first prize!), or, failing that, on the API Specs alone.
Lab Test Phase:
- Build Stubs: Based on your Probe Test results and the sample Request/Response messages, you are able to configure (or build) Stubs / Mock Services for each End Point. These could be stand-alone GLU.Engines that are provisioned to behave as Stubs / Mock services, or they can be embedded within the integration GLU.Engine configuration. Initially, just focus on the ‘happy day’ Response scenarios to get those working; the ‘failure’ scenarios can be added to your Stubs and Lab Tests thereafter.
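To illustrate the idea of a ‘happy day’ Stub: the sketch below stands in a canned Response for a hypothetical Balance Enquiry End Point. In practice a Stub may itself be a GLU.Engine; this is only a minimal Python illustration of the behaviour, with made-up paths and payloads:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class BalanceStub(BaseHTTPRequestHandler):
    """Returns a fixed 'happy day' Response for any GET request."""
    def do_GET(self):
        body = json.dumps({"accountId": "12345", "balance": 150.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request log lines
        pass

server = HTTPServer(("127.0.0.1", 0), BalanceStub)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/balance") as resp:
    status = resp.status
    data = json.loads(resp.read())
server.shutdown()
print(status, data)
```

Failure scenarios can later be added by keying different canned Responses off the request content, mirroring how the real End Point fails.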
- Build Test Packs: For each Use Case, build the test cases in your Test tool (e.g. Postman).
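A test pack, whatever tool holds it, boils down to named cases with a request and an expected outcome. As a hedged sketch (the Use Case names, statuses and the stand-in `fake_send` transport are all hypothetical), the shape is:

```python
# Each case names the Use Case, the scenario, the request, and the
# expected outcome - mirroring what a Postman collection would hold.
test_pack = [
    {"use_case": "Balance Enquiry", "scenario": "happy day",
     "request": {"accountId": "12345"}, "expect_status": "SUCCESS"},
    {"use_case": "Balance Enquiry", "scenario": "unknown account",
     "request": {"accountId": "00000"}, "expect_status": "NOT_FOUND"},
]

def run_case(case, send):
    """Execute one case via a caller-supplied 'send' function and
    report pass/fail against the expected status."""
    return send(case["request"]) == case["expect_status"]

# Stand-in for the real call to the GLU.Engine / End Point
def fake_send(request):
    return "SUCCESS" if request["accountId"] == "12345" else "NOT_FOUND"

results = [run_case(c, fake_send) for c in test_pack]
print(results)  # [True, True]
```

Keeping positive and negative scenarios side by side in the same pack makes it easy to grow coverage as failure behaviours are discovered.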
Incrementally Configure and Build your GLU.Engine:
- Configuration: Break the configuration of each Use Case into logical segments and test each segment as you go.
- Lab Tests: Build your Lab Test GLU.Engines for testing from the Test GLU VM or Server as you go. Depending on your security policies, it is also possible to build initial Lab Test GLU.Engines for ‘localhost’ so you can download and test directly from your laptop or PC, provided you have access from there to your Stubs. This may have security implications, so it should be pre-authorised by the relevant parties.
- Start combining working segments to build up the full Use Case.
- Logfile Analytics: At this phase, one should also lab test the Logfile Analytics (if in scope).
- Metrics Dashboard: At this phase, one should also lab test the Metrics Dashboard (if in scope).
Integration Test Phase:
Build a GLU.Engine for the System Integration Test (SIT) Environment. You’ll need the host details for all the Test End Points as well as for the VM you’ll be running your GLU.Engine from. Execute your Test Pack for all Use Cases, starting with the ‘happy day’ tests and then expanding into the ‘failure’ scenarios.
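The ‘happy day first, failures after’ ordering can be applied mechanically to a test pack before execution. A small sketch, with illustrative case names:

```python
# Order the Test Pack so 'happy day' cases run before 'failure' cases,
# as recommended for the SIT phase. Case data is illustrative only.
test_pack = [
    {"name": "Withdrawal - insufficient funds", "scenario": "failure"},
    {"name": "Balance Enquiry - ok", "scenario": "happy day"},
    {"name": "Withdrawal - ok", "scenario": "happy day"},
]

# sorted() is stable, so cases keep their relative order within each group
ordered = sorted(test_pack, key=lambda c: c["scenario"] != "happy day")
execution_order = [c["name"] for c in ordered]
print(execution_order)
```

Running the positive paths first confirms the basic wiring (hosts, ports, credentials) before time is spent diagnosing deliberate failure behaviour.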
Optional; depends on the Client’s process.
The ultimate objective: your GLU.Engines will no longer be tagged as ‘SNAPSHOT’; they will be tagged as version-specific ‘RELEASE’.