
Project Initiation and Execution Guide

Integration Project Initiation

When initiating any integration project, the first step is to build a solid understanding of its scope. The following steps provide guidance on where best to focus your efforts.

 

List the Integration Use Cases: (also referred to as Transactions) that are in scope e.g. Balance Enquiry, Withdrawal, Airtime Purchase etc. This can take the form of a simple descriptive list.

 

List the End Points: (these are the business systems involved in all in scope transactions) and identify which are Initiating Systems (e.g. Web Channel or ATM System), Receiving Systems (e.g. Core Bank System or Airtime Platform) or both (some End Points can be involved in certain transactions as Initiating Systems, and in others as Receiving Systems).

Record for each End Point its type (e.g. Oracle DB / Rabbit Message Queue) or, if an API is available, its protocol (e.g. REST / SOAP / TCP / ISO8583 etc., DB Connector or ISO).
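
By way of illustration only (using the example End Points above; the role and protocol assignments shown are purely hypothetical and will differ per project), such a list might look like:

    End Point           Role                  Type / Protocol
    Web Channel         Initiating System     REST
    ATM System          Initiating System     ISO8583 over TCP
    Core Bank System    Receiving System      SOAP
    Airtime Platform    Receiving System      REST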

 

Create an Integration Context Diagram: This graphical representation provides an efficient means of building a quick understanding of the End Points involved, how they inter-connect, where they are located and so on. Start simple and elaborate as you gather information about the Integration project you’re working on.

 

Create Sequence Diagrams: To align the understanding of the Use Cases, we recommend documenting them as UML Sequence Diagrams (you can use tools such as MS Visio or Draw.io). Typically, to start, you can focus on just the ‘positive’ or ‘happy day’ scenarios. Later, these should be elaborated to include the ‘failure’ scenarios.

Inputs: You may be able to source existing sequence diagrams, if any exist.

Inputs: Whiteboard sessions are a good way to talk through the flows of each transaction.
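
As a simple illustration (the exact participants and message names will depend on your project), a happy-day Balance Enquiry flow might be sketched as:

    Web Channel -> GLU.Engine: Balance Enquiry request
    GLU.Engine -> Core Bank System: balance lookup request
    Core Bank System -> GLU.Engine: balance lookup response
    GLU.Engine -> Web Channel: Balance Enquiry response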

 

Gather End Point API Specification Documents: Based on the list of End Points, collect all available API Specifications. Note that API Specs are notoriously inaccurate, so some digging may be needed. Always ask if the version you find is the latest (check the change control in the doc to see when it was last updated; docs that have not changed in 12 to 18 months or more may well be outdated). Note that if you have any SOAP API End Points in your scope, get hold of the associated WSDL files.

 

Establish Test End Point access: This enables you to fire test messages directly at the Test End Points and, in so doing, gain a first-hand understanding of each API. This step can take time, as security sensitivities may necessitate VPNs being put in place or special access controls being used. This access is critical to the Integration Lifecycle (see below), so it needs to be established as soon as possible (if at all possible). Utilise tools such as Postman to perform this step, moving from testing directly in Postman to creating a cURL command from Postman and running it from the server the GLU.Engine will run from (by doing this you will test the network and port configuration too).
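
The same check can also be scripted. The minimal sketch below (Python, using the requests library) assumes a hypothetical REST Test End Point and payload; substitute the actual host, port, path and message you have been given, and run it from the server the GLU.Engine will be deployed on so that the network and port configuration are exercised as well.

    # Minimal sketch: fire a single test message at a Test End Point.
    # The URL and payload below are illustrative placeholders only.
    import requests

    TEST_ENDPOINT = "https://test-endpoint.example.com/api/balance-enquiry"  # hypothetical
    payload = {"accountNumber": "12345678"}                                  # hypothetical

    try:
        response = requests.post(TEST_ENDPOINT, json=payload, timeout=10)
        print("HTTP status:", response.status_code)
        print("Response body:", response.text)
    except requests.exceptions.RequestException as exc:
        # A timeout or connection error here usually points to network,
        # firewall or port configuration rather than the API itself.
        print("Could not reach the End Point:", exc)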

 

Gather Host Details: You will need the Host URL or IP Address and Port for each End Point, both for Testing and for any subsequent stages in your Integration Lifecycle (see below).

 

Gather any existing Test Cases or Test Packs: These can be useful as they may highlight some of the failure scenarios that have already been identified. If automated test pack collections exist, such as one might use with tools like Postman or SOAP UI, these will help with your own test pack creation. GLU’s preferred test tool is Postman. Sometimes incumbent IT teams, suppliers or partners will have a Quality Assurance or Test Team or person. Identify who these people are and ask them for any existing Test Cases they may have.

 

Gather End Point Sample Request/Response messages: Since API Specs are often inaccurate, the fastest way to understand how an API behaves is to get working sample messages for each step of each in-scope Use Case. This can sometimes be challenging but is worth the effort. Some approaches to consider:

  1. If the systems you are integrating to already exist, someone will typically have tested them and may be able to provide such Samples; leverage any inputs you might have gathered from your efforts to gather existing Test Packs.
  2. If you have established Test End Point access (see above), that will often give you the best mechanism for generating these Sample Messages.
  3. Detailed System Logs will often also trap the inbound / outbound message payloads. These are often considered ‘sensitive’ so may require obfuscation of sensitive data they may contain. This approach can also get complicated, as logs from each End Point would be needed, creating a potentially heavy workload to process them all.
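
Purely for illustration (actual field names and formats will be defined by each End Point’s API), a captured sample pair for a Balance Enquiry might look like:

    Request:  {"type": "BALANCE", "accountNumber": "12345678"}
    Response: {"responseCode": "00", "balance": "1000.00"}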

 

Provision VMs for GLU.Engines: Provision a VM for each stage of the Integration Lifecycle. See the GLU.Engine Server recommended specifications.

 

Define the GLU Logfile Analytics Requirements: Not always critical for initial PoCs, but almost always essential as a pre-requisite for integrations prior to ‘go live’. See the standard GLU Logfile Analytics as a start, but define any Client-specific dashboards as needed.

 

Define GLU System Metrics Dashboard Requirements: If the client wants to utilise the GLU.Console Metrics Dashboard, a VPN or other network connection will need to be established. Some clients prefer to use their existing Metric Analytics and Dashboard tools. See the standard GLU Metric Dashboards as a start, but define any Client-specific dashboards as needed.

This Template provides a succinct basis against which an Integration Project can be executed … GLU – Project Definition Document – Template – v1.0

Useful Tools

There are a number of tools that one can use for various purposes to assist with streamlining integration projects. Below are a few such tools that the GLU team often use. It is important to understand the context of any integration; to do so, drawing tools such as those below can be used to build up Context Diagrams that show the initiating system/s and the various receiving systems involved in your integration project. Try to capture as much detail as possible in these visual artefacts.

 

In order to test your GLU.Engines you need the ability to send messages and see responses. We use Postman widely for this purpose. Additionally, you’ll need to interrogate the GLU.Engine logs; this can be done directly or using a toolset like the Elastic Stack.


The Integration Lifecycle

The Integration Lifecycle for your project may vary from that outlined below depending on specific constraints you may encounter in your project. Use this as a guide.

 

Probe Test Phase:

Once you have access to the Test End Points, you can commence probe testing each End Point. This provides you with an understanding of how the actual End Points behave. Tests you fire (e.g. from Postman) may be based on existing test cases you’ve managed to source, or on test cases you have created in your test tool (Postman) based on your understanding of the Sample Request/Response messages you’ve sourced (first prize!), or failing that based on just the API Specs.
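
If you prefer to script some of this probing alongside Postman, a minimal sketch along the following lines (Python; the URL, payloads and field names are hypothetical) can fire a small set of happy-day probes and capture each request/response pair for later use when building Stubs and Test Packs:

    # Minimal probe-test sketch: fire a few happy-day probes at a Test End Point
    # and record each request/response pair. All URLs, payloads and field names
    # below are illustrative placeholders.
    import json
    import requests

    TEST_ENDPOINT = "https://test-endpoint.example.com/api/transactions"  # hypothetical

    probe_cases = {  # one happy-day probe per in-scope Use Case (hypothetical)
        "balance_enquiry": {"type": "BALANCE", "accountNumber": "12345678"},
        "withdrawal": {"type": "WITHDRAWAL", "accountNumber": "12345678", "amount": "100"},
    }

    results = {}
    for name, payload in probe_cases.items():
        response = requests.post(TEST_ENDPOINT, json=payload, timeout=10)
        results[name] = {
            "request": payload,
            "status": response.status_code,
            "response": response.text,
        }

    # The captured pairs become Sample Request/Response messages for Stub building.
    with open("probe_results.json", "w") as f:
        json.dump(results, f, indent=2)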

 

Lab Test Phase:

  1. Build Stubs: Based on your Probe Test results, and using the sample Request/Response messages, you are able to configure (or build) Stubs for each End Point (a minimal stub sketch is shown after this list). Initially, just focus on the ‘happy day’ Response scenarios to get those working; the ‘failure’ scenarios can be added to your stubs and Lab Tests thereafter.
  2. Build Test Packs: For each Use Case, build the test cases in your Test tool (e.g. Postman).
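
As an indication only, a throwaway Stub for a simple REST Receiving System could be sketched along these lines (Python standard library; the port, path handling and canned response are hypothetical and should be based on the sample messages you captured):

    # Minimal Stub sketch: return a canned happy-day response to any POST.
    # The canned body and listening port below are illustrative placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    HAPPY_DAY_RESPONSE = {"responseCode": "00", "balance": "1000.00"}  # hypothetical

    class StubHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            request_body = self.rfile.read(length)          # the inbound request
            print("Stub received:", request_body.decode())  # log it for reference

            body = json.dumps(HAPPY_DAY_RESPONSE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Listen on a local port; point the Lab Test GLU.Engine at this host/port.
        HTTPServer(("0.0.0.0", 9090), StubHandler).serve_forever()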

Incrementally Configure and Build your GLU.Engine:

  1. Configuration: Break the configuration of each Use Case into logical segments and test each segment as you go.
  2. Lab Tests: Build your Lab Test GLU.Engines for testing from the Test GLU VM or Server as you go (Note: Depending on your security policies, it is also possible to build initial Lab Test GLU.Engines for ‘localhost’ so you can download and test directly from your laptop or PC, provided you have access from there to your Stubs. This may have security implications, so it should be pre-authorised by the relevant parties).
  3. Start combining working segments to build up the full Use Case.
  4. GLU Logfile Analytics: At this phase, one should also lab test the GLU Logfile Analytics (if in scope).
  5. GLU Metrics Dashboard: At this phase, one should also lab test the GLU Metrics Dashboard (if in scope).

Integration Test Phase:

Build a GLU.Engine for the Test Environment (typically deployed within the Test environment). You’ll need the host details for all the Test End Points as well as for the VM you’ll be running your GLU.Engine from. Execute your Test Pack for all Use Cases, starting with the ‘happy day’ tests and then expanding into the ‘failure’ scenarios.

 

UAT Phase:

Optional; depends on the Client’s process.

 

Pre-Production Phase:

Optional; depends on the Client’s process.

 

Production:

The ultimate objective: your GLU.Engines will no longer be tagged as ‘SNAPSHOT’; they will be tagged with a version-specific ‘RELEASE’.
