Swagger Load

In Release v4.0.0, the Swagger Load tool was introduced, enabling OpenAPI / Swagger files to be loaded into the GLU.Console to generate Transaction configurations directly.

Linking the swagger to your connector

For any REST Connector, a green button is available for linking a Swagger file.

The first time you use the Swagger Load tool the pop-up below will prompt you to either upload a .json or .yaml file or to point to a URL for the Swagger file you wish to use.

Once the Swagger document has been loaded against the Connector, when you view the Connector in the Connector screen the BOTH button will appear pink and an extra button will appear in the ACTION column.

Using the swagger to generate configuration

You can access the Swagger Load tool within the Orchestration Manager; it appears only when the Connector you have selected is a REST Connector.

If you have previously loaded and used a Swagger file, when you click on ‘Generate Endpoint from Swagger’ it will bring up the most recently loaded Swagger (example below). If you want to use a different Swagger, click on ‘Load New Version’.

The Connector Swagger Manager popup (above) will show all API Transactions available within the Swagger file. You can use the radio buttons to select the API Transaction for which you want to generate your config.

Below, you can see the ‘API Transaction’ selected is ‘Find Pet by ID’ and you are then given the option to define the Request and Response content types depending on what the API Transaction chosen supports.

Click ‘Generate’ to create the configuration for this leg of your Orchestration. Then clear the validation warnings by setting the Parameter Names to use for each Parameter on the Request and Response.

Enablers

GLU.Ware leverages various software, libraries and tools. The key underlying enabler of GLU.Ware, Apache Camel, along with various other libraries, is open source and used under the permissive Apache 2.0 open-source license. The GLU ISO8583 Connector makes use of the JPOS component under the open-source GNU Affero General Public License. The Jenkins tool and the slf4j-log4j12 library are used under the permissive MIT open-source License. OpenJDK is used under the GNU General Public License v2. Hibernate is used under the open-source GNU Lesser General Public License version 2.1. Other disclosed programmes are proprietary in nature, such as the various AWS tools that GLU Software relies on; those do not form part of the software code, but the software relies on those disclosed programmes to function.

GLU Functions

GLU Functions and Formulas are versatile: they can be used in Derived Parameters as well as in Request and Response Handlers.

It is important to note that a singular derived parameter or handler can only be associated with one FUNCTION, prohibiting the mixing of two FUNCTIONS. For instance, if a Derived Parameter needs to calculate the time difference between the current time (utilising the NOW FUNCTION) and another parameter, the DIFFTIMESTAMP FUNCTION can be employed. However, it necessitates first defining a Derived Parameter, let us say ‘timeNow’, using the NOW FUNCTION. Subsequently, the DIFFTIMESTAMP FUNCTION can be utilised with FUNCTION notation as demonstrated below: 

DIFFTIMESTAMP(${timeExpiry}-${timeNow})

Some functions do not return a value, such as those that remove data from the cache. Such functions can be executed on their own by ticking the Run Function box.


The below screenshot shows the tick box selected and the parameter field not being shown.

All functions are accessible through the Predefined Functions feature. Upon selecting the “Predefined Functions” tick box, a drop-down menu displays a list of predefined functions. Opting for a Predefined Function automatically replaces the Function or Formula box with the template of associated parameters, as shown in the screenshot below:

If the box is unticked, the Predefined Function field disappears, while the function itself persists, as shown in the screenshot below:

Note: FORMULAs involve the use of mathematical calculations and are always prefixed with the ‘=’ symbol. FUNCTIONs are not preceded by any symbol.

Initialise

This is the most basic of FUNCTIONS in that it enables one to create a Derived Parameter with a specific initial value. In the example below the starting value will be ‘0’ for the ‘redeemedAmountFormatted’ Derived Parameter. This enables one to add, for example, a Handler rule that will overwrite this parameter in the event that another received parameter, e.g. ‘redeemedAmount’, is NOT NULL.


IFNULL Function

The IFNULL Function is used to check whether a parameter is NULL, and if so, return another parameter that is specified by the user. It behaves like a fallback (default) value mechanism, similar to a null-coalescing operator in other languages.

Function Structure

IFNULL(${nullParam},${string}) 

When a Derived Parameter is created utilising the IFNULL Function, it checks if the first parameter (`${nullParam}`) is NULL. If it is, the function returns another specified parameter (`${string}`). In cases where no parameter is explicitly specified, a static value is returned. If the first parameter is not NULL, the function simply returns the value of the first parameter. 
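This behaviour can be sketched in Python for illustration; `ifnull` here is a stand-in for the GLU function, not GLU syntax:

```python
def ifnull(value, fallback):
    # Return the fallback when the first value is missing (None);
    # otherwise return the first value unchanged.
    return fallback if value is None else value

print(ifnull(None, "Hello_World"))        # -> Hello_World
print(ifnull("Big_Bang", "Hello_World"))  # -> Big_Bang
```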

Examples

 Here are some examples to illustrate its usage: 

Example 1:

The IFNULL function checks whether Param1 is null. In this case, Param1 is not sent at all, so the function returns the fallback parameter, Param2.

Therefore, the result of the expression IFNULL(${Param1},${Param2}) in this specific case is “Hello_World.”

IFNULL(${Param1},${Param2})

(Param1 isn't sent at all)
Param2 = "Hello_World"

IFNULL returns "Hello_World"

Scenario:

  • Param1 is not sent or is null.
  • Param2 is set to “Hello_World”.

Outcome:

In this scenario, the IFNULL function will return “Hello_World” because Param1 is either not sent or is null, and the fallback value is specified as Param2, which is “Hello_World”.


Example 2:

The IFNULL function checks if the first parameter (Param1) is null. If it is null, the function returns the second parameter (Param2). If it’s not null, it returns the value of the first parameter.

IFNULL(${Param1},${Param2})

Param1 = "Big_Bang"
Param2 = "Hello_World"

IFNULL returns "Big_Bang"

Scenario:

  • Param1 is provided with the value “Big_Bang,” which is not null.
  • Param2 is “Hello_World.”

Outcome:

Since Param1 is not null, the IFNULL function returns the value of Param1, which is “Big_Bang.”

Therefore, the result of the expression IFNULL(${Param1},${Param2}) with the given values is “Big_Bang.”


Example 3:

IFNULL(${Param1},"Bye_World")

(Param1 isn't sent at all)
(Param2 is not used here; the fallback is the static value "Bye_World")

IFNULL returns "Bye_World"

Scenario:

  • Param1 is not sent at all, meaning it’s null.
  • The fallback is the static value “Bye_World”.

Outcome:

Since Param1 is null, the IFNULL function returns the static fallback value, “Bye_World.”

Therefore, the result of the expression IFNULL(${Param1},"Bye_World") with the given values is “Bye_World.”

In each example, the behaviour of the IFNULL function is highlighted, illustrating how it handles NULL parameters and returns the appropriate value based on the specified conditions. 


IFEMPTY Function

The IFEMPTY function is similar to the IFNULL function and is used to check whether a parameter is EMPTY, meaning it lacks an assigned value. If the parameter is indeed EMPTY, the function returns another parameter specified by the user.

Function Structure

IFEMPTY(${emptyParam},${stringTwo}) 

When a Derived Parameter is created using the IFEMPTY function, if the first parameter is EMPTY, it will return the specified parameter (or, if none is specified, a static value). Conversely, if the first parameter is not EMPTY, it will return the value of the first parameter. 
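As an illustrative Python sketch (a stand-in for the GLU function, not GLU syntax), IFEMPTY falls back only when the string is present but has no characters:

```python
def ifempty(value, fallback):
    # Fall back only when the string has no characters;
    # a non-empty string is returned unchanged.
    return fallback if value == "" else value

print(ifempty("", "Hello_World"))          # -> Hello_World
print(ifempty("Big_Bang", "Hello_World"))  # -> Big_Bang
```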

Example

Example 1:

The IFEMPTY function is used to check if the first parameter (Param1) is an empty string. If it is empty, the function returns the second parameter (Param2). If it’s not empty, it returns the value of the first parameter.

IFEMPTY(${Param1},${Param2})

Param1 = ""
Param2 = "Hello_World"

IFEMPTY returns "Hello_World"

Scenario:

  • Param1 is an empty string, as indicated by Param1 = "".
  • Param2 is “Hello_World.”

Outcome:

Since Param1 is empty, the IFEMPTY function returns the value of Param2, which is “Hello_World.”

Therefore, the result of the expression IFEMPTY(${Param1},${Param2}) with the given values is “Hello_World.”


Example 2:


IFEMPTY(${Param1},${Param2})

Param1 = "Big_Bang"
Param2 = "Hello_World"

IFEMPTY returns "Big_Bang"

Scenario:

  • Param1 is not an empty string, as it is “Big_Bang.”
  • Param2 is “Hello_World.”

Outcome:

Since Param1 is not empty, the IFEMPTY function returns the value of Param1, which is “Big_Bang.”

Therefore, the result of the expression IFEMPTY(${Param1},${Param2}) with the given values is “Big_Bang.”


Example 3:

The IFEMPTY function checks if the first parameter (Param1) is an empty string. Here the fallback is given as the static value “Bye_World” rather than a parameter.

IFEMPTY(${Param1},"Bye_World")

Param1 = ""
(Param2 is not used here; the fallback is the static value "Bye_World")

IFEMPTY returns "Bye_World"

Scenario:

  • Param1 is an empty string, as it is “”.
  • The fallback is the static value “Bye_World”.

Outcome:

Since Param1 is empty, the IFEMPTY function returns the static fallback value, “Bye_World.”

Therefore, the result of the expression IFEMPTY(${Param1},"Bye_World") with the given values is “Bye_World.”

IFEMPTY provides a flexible way to handle situations where parameters might lack values, ensuring your program behaves as intended even under varying conditions. 



IFNULL OR EMPTY Function

The IFNULLOREMPTY Function is a combination of the IFNULL and IFEMPTY Functions and is used to check whether a parameter is either NULL OR EMPTY and if so, return another parameter that is specified by the user. This function seamlessly navigates between these two states, providing flexibility in handling different conditions.

 

Function Structure

IFNULLOREMPTY(${emptyParam},${stringTwo})

When the first parameter is identified as NULL OR EMPTY, the function returns a specified parameter (or, if none is specified, a static value). Conversely, when the first parameter is neither NULL nor EMPTY, the function returns the value of the first parameter. 
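Combining the two previous checks gives the following illustrative Python sketch (a stand-in for the GLU function, not GLU syntax):

```python
def ifnulloreempty(value, fallback):
    # Fall back when the value is missing (None) or an empty string;
    # any other value is returned unchanged.
    return fallback if value is None or value == "" else value

print(ifnulloreempty("", "Hello_World"))    # -> Hello_World
print(ifnulloreempty(None, "Hello_World"))  # -> Hello_World
print(ifnulloreempty("Big_Bang", "Hello_World"))  # -> Big_Bang
```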

Example

Example 1:

IFNULLOREMPTY(${Param1},${Param2})

Param1 = ""
Param2 = "Hello_World"

IFNULLOREMPTY returns "Hello_World"

Scenario:

  • Param1 is an empty string ("").
  • Param2 is “Hello_World.”

Outcome:

The IFNULLOREMPTY function checks whether Param1 is either null or empty. In this case, since Param1 is an empty string, the function returns the second parameter, which is “Hello_World.”


Example 2:

IFNULLOREMPTY(${Param1},${Param2})

Param1 = "Big_Bang"
Param2 = "Hello_World"

IFNULLOREMPTY returns "Big_Bang"

Scenario:

  • Param1 is “Big_Bang,” which is neither null nor empty.
  • Param2 is “Hello_World.”

Outcome:

The IFNULLOREMPTY function checks whether Param1 is either null or empty. Since Param1 is neither, it returns the value of Param1, which is “Big_Bang.”


Example 3:

IFNULLOREMPTY(${Param1},"Bye_World")

(Param1 isn't sent at all)
(the fallback is the static value "Bye_World")

IFNULLOREMPTY returns "Bye_World"

Scenario:

  • Param1 is not sent or is an empty string.
  • The fallback is the static value “Bye_World”.

Outcome:

  • The IFNULLOREMPTY function returns “Bye_World” because Param1 is either null or empty, and the default value is used in such cases.

This function is useful for providing a fallback value when a parameter may not be present or is an empty string.

The IFNULLOREMPTY function proves to be a versatile solution, offering a comprehensive approach to handle scenarios involving both NULL and EMPTY conditions. Its flexibility allows you to tailor the output based on the state of the initial parameter. 

GLU SERVER NAME

This function is used to retrieve the name of the server where the GLU application is running. This is a placeholder that will be replaced with the actual server’s name when the expression is evaluated.

Function Structure

${GLU_SERVER_NAME}

Example

If, for instance, the GLU application is running on a server with the name “DESKTOP-JH9PA6A,” then when you use `${GLU_SERVER_NAME}`, the response will be: 

DESKTOP-JH9PA6A 

This allows you to dynamically capture and use the server’s name within your application or responses. 

GLU Transaction ID Function

The `GLU_TRX_ID` function is designed to retrieve the unique transaction ID associated with a specific transaction. This identifier serves as a distinct label for each transaction, ensuring that every new transaction is assigned a unique and identifiable value. 

Function Structure

${GLU_TRX_ID}

Example

If, for instance, the ID for a test transaction is “b67f0087-a3c4-4e28-b8f1-d01b21086b1d,” then when you use `${GLU_TRX_ID}`, the response will be: 

b67f0087-a3c4-4e28-b8f1-d01b21086b1d 

This allows you to reference and use the unique transaction ID within your application or responses. 

GLU REDELIVERY COUNTER

`${GLU_REDELIVERY_COUNTER}` is a system variable that provides the count of retry attempts made by the system during a particular operation. It is often used in conjunction with a retry mechanism to manage and control how many times an operation should be retried. 

Function Structure

${GLU_REDELIVERY_COUNTER}

Example: 

Consider a scenario where a message delivery operation is subject to potential transient failures, such as network issues. A retry mechanism is implemented to handle such failures, and `${GLU_REDELIVERY_COUNTER}` is utilised to keep track of the retry attempts. 

Explanation: 

  • Retry Condition: The example checks if `${GLU_REDELIVERY_COUNTER}` is less than 3. This implies that the system will attempt to deliver the message again only if the previous attempts have not succeeded. 
  • Retry Logic: If the counter is below the specified threshold (in this case, 3), the system initiates another attempt to deliver the message. The actual retry mechanism might introduce delays between attempts to allow for transient issues to resolve. 
  • Maximum Retry Attempts: The use of `${GLU_REDELIVERY_COUNTER}` allows developers to set a maximum limit on the number of retry attempts. In this example, if the counter exceeds 2 (since it starts from 0), the system will log an error and stop further retry attempts. 

Result: 

Let us examine how the system behaves during different retry attempts: 

  • First Attempt: `${GLU_REDELIVERY_COUNTER}` is 0. The system retries the message delivery. 
  • Second Attempt: `${GLU_REDELIVERY_COUNTER} ` is 1. The system retries again. 
  • Third Attempt: `${GLU_REDELIVERY_COUNTER}` is 2. The system makes one more attempt. 
  • Fourth Attempt: `${GLU_REDELIVERY_COUNTER}` is now 3. The maximum retry limit is reached. The system logs an error and ceases further retry attempts. 
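The retry logic described above can be sketched in Python for illustration. `deliver_with_retries` and `send` are hypothetical names, and the local counter plays the role of `${GLU_REDELIVERY_COUNTER}`, starting at 0:

```python
def deliver_with_retries(send, max_attempts=3):
    # Sketch of the retry logic described above. `send` is a hypothetical
    # delivery callable returning True on success.
    redelivery_counter = 0  # plays the role of ${GLU_REDELIVERY_COUNTER}
    while redelivery_counter < max_attempts:
        if send():
            return True
        redelivery_counter += 1
    # Limit reached: log an error and stop retrying.
    print(f"delivery failed after {redelivery_counter} attempts")
    return False
```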


SPLIT Function

The SPLIT Function allows users to break down a string based on a specific character or delimiter. Upon execution, this function generates an array where each element corresponds to a segment of the split string, with indices starting at 0. 

Function Structure

SPLIT(${stringOne}, delimiter) 

This function operates by parsing the input string (${stringOne}) and splitting it at every occurrence of the specified delimiter. After that, it constructs an array containing the segmented strings. 

Example

SPLIT(${stringOne},_)

stringOne = "Jim_and_Pam"

Returns:
[
{
"value": "Jim",
"key": "0"
},
{
"value": "and",
"key": "1"
},
{
"value": "Pam",
"key": "2"
}
]

In this example, the SPLIT function divides the string “Jim_and_Pam” at each underscore character (‘_’). Consequently, it generates an array comprising segments, each represented by a key-value pair, where “value” signifies the segmented string, and “key” denotes its index within the array. 
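The same key/value output shape can be reproduced with a short Python sketch (illustrative only; `split_to_indexed_array` is a stand-in name, not GLU syntax):

```python
def split_to_indexed_array(source, delimiter):
    # One {"value", "key"} entry per segment, with string keys from "0".
    return [{"value": part, "key": str(i)}
            for i, part in enumerate(source.split(delimiter))]

result = split_to_indexed_array("Jim_and_Pam", "_")
# result[0] == {"value": "Jim", "key": "0"}
```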

CREATE VALUE AS STRING FROM ARRAYS

The `CREATE_VALUE_AS_STRING_FROM_ARRAYS` function is designed to extract a string from a multi-level array based on specified parameters.

Function Structure:

CREATE_VALUE_AS_STRING_FROM_ARRAYS(<sourceArrayName1>[].<sourceArrayName2>[], <attributeName>, [<delimiterForArray> <delimiterBetweenValues>])
  • `sourceArrayName1`: Top-level collection path name. 
  • `sourceArrayName2`: Next level down from the top of the collection path. You can add extra levels as needed. 
  • `attributeName`: The attribute you want to extract into a string. 
  • `delimiters`: Optional delimiters; if not included, the string will not have any delimiters. If used, wrap them with `<>` to avoid confusion with the function. 

Example

When setting up this Derived Parameter, you should specify ‘numbers’ as the ‘derivedParameterName’ and input the following formula in the ‘Formula’ box:

CREATE_VALUE_AS_STRING_FROM_ARRAYS(boards[].selections[], selection, [; <,>]) 

This configuration will create the ‘numbers’ parameter by extracting the ‘selection’ values from the arrays within ‘boards’, and it will concatenate them into a single string using the specified delimiters [; <,>].

Explanation: 

  • `sourceArrayName1`: `boards` 
  • `sourceArrayName2`: `selections` 
  • `attributeName`: `selection` 
  • `delimiters`: `;` (delimiterForArray) and `,` (delimiterBetweenValues) 

Given the following array structure: 

{ 
"boards": [ 
{"selections": ["4", "14", "18"]}, 
{"selections": ["2", "19", "20"]}, 
{"selections": ["1", "12", "18"]} 
] 
} 

The function transforms it into the following string: 

"numbers": "4,14,18;2,19,20;1,12,18" 

When configuring the Derived Parameter: 

  • `derivedParameterName`: `numbers` 
  • `Formula` box: `CREATE_VALUE_AS_STRING_FROM_ARRAYS(boards[].selections[], selection, [; <,>])` 
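The transformation can be sketched in Python for illustration (a stand-in, not GLU syntax; the `attributeName` argument is omitted here because the sample arrays hold plain values):

```python
def create_value_as_string_from_arrays(payload, outer, inner,
                                       array_delim=";", value_delim=","):
    # Join the inner values of each array with value_delim, then join
    # the per-array groups with array_delim.
    return array_delim.join(
        value_delim.join(item[inner]) for item in payload[outer]
    )

payload = {"boards": [
    {"selections": ["4", "14", "18"]},
    {"selections": ["2", "19", "20"]},
    {"selections": ["1", "12", "18"]},
]}
numbers = create_value_as_string_from_arrays(payload, "boards", "selections")
# numbers == "4,14,18;2,19,20;1,12,18"
```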

ADD ATTRIBUTE TO ARRAY WITH FIX VALUE

The `ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE` function is used to add a fixed value to an array.

Function Structure:

ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Array}, <AttributeName>, <FixedValue>) 

  • `${Array}`: The array to which the attribute is added. 
  •  `<AttributeName>`: The name of the attribute to be added. 
  •  `<FixedValue>`: The fixed value to be assigned to the specified attribute. 

Example:

ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Token}, SerialNumber, ${receiptNo}) 

This function adds a `SerialNumber` attribute to each element in the `${Token}` array and assigns the value of `${receiptNo}` as the fixed value. 

Given the input array: 

<stdToken units="66.666664" amt="1346" tax="202" tariff="..." desc="Normal Sale" unitsType="kWh" rctNum="639221497438">64879811944360134888</stdToken> 

<bsstToken bsstDate="2020-12-09 08:16:00 +0200" units="50.0" amt="0" tax="0" tariff="..." desc="FBE Token" unitsType="kWh">49098796041557732611</bsstToken> 

Applying the function: 

ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Token}, SerialNumber, ${receiptNo}) 

Results in: 

<stdToken units="66.666664" amt="1346" tax="202" tariff="..." desc="Normal Sale" unitsType="kWh" rctNum="639221497438" SerialNumber="1234567890">64879811944360134888</stdToken> 

<bsstToken bsstDate="2020-12-09 08:16:00 +0200" units="50.0" amt="0" tax="0" tariff="..." desc="FBE Token" unitsType="kWh" SerialNumber="1234567890">49098796041557732611</bsstToken> 

Here, the `SerialNumber` attribute is added to each `<stdToken>` and `<bsstToken>` element in the `${Token}` array with the fixed value `${receiptNo}` (assuming `${receiptNo}` is dynamically provided). Adjust the parameters as per your specific use case. 
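For illustration, the same attribute-setting behaviour can be sketched in Python using the standard library XML module (the function name is a stand-in, not GLU syntax):

```python
import xml.etree.ElementTree as ET

def add_attribute_with_fixed_value(elements, name, value):
    # Set the same attribute on every element, as the GLU function does
    # for each entry in the array.
    for el in elements:
        el.set(name, value)
    return elements

tokens = [ET.fromstring(
    '<stdToken rctNum="639221497438">64879811944360134888</stdToken>')]
add_attribute_with_fixed_value(tokens, "SerialNumber", "1234567890")
# tokens[0] now carries SerialNumber="1234567890"
```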


LENGTH Function

The `LENGTH(${string})` function calculates and returns the length (number of characters) of the specified string.  

 Function Structure

LENGTH(${string}) 

  •  `${string}`: The string for which you want to calculate the length. 

Example

LENGTH(${string}) 

${string} = "Hello_World" 

In this case, the function `LENGTH(${string})` would return the value `11`, as there are 11 characters in the string “Hello_World”.

Given the example:

"attribute": "Hello_world" 

Applying the function: 

LENGTH(${attribute}) 

Results: 

"lengthOfAttribute": 11 

Here, the `LENGTH` function calculates the length of the string “Hello_world” in the `${attribute}` parameter and returns the result as a new derived parameter named “lengthOfAttribute”. The value 11 represents the number of characters in the string. 

GET ISO MESSAGE WITH LENGTHS

In general terms, this function generates an ISO message and includes length information for the elements within the ${Field12722AllData} variable or field. ISO 8583 messages typically consist of fixed-length or variable-length fields, and the inclusion of length information is crucial for parsing and interpreting the message correctly. 

In more detail: 

Function Structure

GET_ISO_MESSAGE_WITH_LENGTHS(${string}) 
  •  `${string}`: The string for which you want to calculate the length and the length of the length.

Example

GET_ISO_MESSAGE_WITH_LENGTHS(${Field12722AllData}) 

Given the example: 

"Field12722AllData": "IFSFData...restOfPayload" 

 

Applying the function: 

GET_ISO_MESSAGE_WITH_LENGTHS(${Field12722AllData}) 

Results in: 

[ISO_LENGTH] Value: [3584<IFSFData...restOfPayload</IFSFData] 

Here, `ISO_LENGTH` is a derived parameter that contains the length and the length of the length of the string `${Field12722AllData}`. The specific details of how these lengths are calculated are likely part of the internal logic related to ISO 8583 message formatting. Please refer to your system’s documentation for precise details. 

NOW Function (Current Date with Pattern)

The NOW function is used to capture the current date and time, and it can also be customised to display the time in a specific format.

Function Structure: 

NOW([format]) 

  • `format` (optional): Specifies the desired format for the date and time. If not provided, the default format will be used. 

Example

1. Using Default Format: 

NOW() 

This will store the time with the default format, for example: 

"now": "Fri Aug 14 13:10:22 SAST 2020" 

2. Using a Specific Format: 

NOW(YYYY-MM-DD HH:MM:SS) 

This will store the time with the specified format, for example: 

"now": "2020-08-14 13:10:22" 

 3. Without Parentheses 

NOW 

This will store the time with a specific format, for example: 

"now": "14/08/2022" 

Example in a Template: 


  "responseMessage": "variableParameterResponse", 
  "staticTimestamp": "${staticTimestamp}", 
  "now": "${now}" 

In this example, when the template is processed, the `${now}` variable will be replaced with the current date and time based on the specified or default format. 


DATE FORMAT Function

The `DATEFORMAT` function is used to change the date format from one specified format to another.

Function Structure: 

dateformat(${date}, <newFormat>)
  •  `date`: The original date value. 
  •  `<newFormat>`: The desired format for the date. 

Example

dateformat(${date}, yyyy-MM-dd HH:mm:ss:ms)

This will convert the date “29/09/2021” (in dd/MM/yyyy format) to the new format “2021-09-29 00:00:00:00” (in yyyy-MM-dd HH:mm:ss:ms format). 

 Example Usage: 


{
  "date": "29/09/2021", 
  "dateInNewFormat": "${dateformat(${date}, yyyy-MM-dd HH:mm:ss:ms)}" 
}

In this example, the `dateInNewFormat` variable will be replaced with the converted date when the template is processed. 

Make sure to replace `${date}` with the actual variable or value containing the original date you want to format and adjust the desired format according to your requirements. 
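The conversion in this section can be illustrated with Python's `datetime` (a sketch, not GLU syntax; GLU infers the source format, whereas this stand-in makes it explicit):

```python
from datetime import datetime

def dateformat(date_str, in_fmt, out_fmt):
    # Parse with the source pattern, then render with the target pattern.
    return datetime.strptime(date_str, in_fmt).strftime(out_fmt)

print(dateformat("29/09/2021", "%d/%m/%Y", "%Y-%m-%d %H:%M:%S"))
# -> 2021-09-29 00:00:00
```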

RANDOM Function

The `RANDOM` function generates a random number within a specified range.

Function Structure: 

random[min, max] 

  •  `min`: The minimum value of the range. 
  •  `max`: The maximum value of the range. 

Example

random[10, 20]

This function will return a random number between 10 and 20 (inclusive). Each time this function is called, a different random number within this range will be generated. 

Example Usage: 


{
  "randomNumber": "${random[10, 20]}" 
}
 

In this example, the `"randomNumber"` variable will be replaced with a different random number between 10 and 20 each time the template is processed. 

Adjust the `min` and `max` values according to your specific range requirements. 

 

PADRIGHTSTRING Function

The `padrightstring` function is used to pad a string with a specified character (in this case, ‘0’) on the right until it reaches a certain length. 

Function Structure:

padrightstring(${string}, length, character) 

Parameters:

  • `${string}`: The original string. 
  • `length`: Total number of characters after padding. 
  • `character`: The character used for padding. 

Example

padrightstring(${amountOne}, 10, 0) 

In this example: 

  • `${amountOne}`: The input string. 
  • `10`: The desired total length of the resulting string after padding.
  • `0`: The padding character.

PADLEFTSTRING Function

The `PADLEFTSTRING` function is used to pad a string with a specified character to the left until it reaches a certain length.

Function Structure: 

padleftstring(${string}, length, character) 

  • `${string}`: The original string. 
  • `length`: Total number of characters after padding. 
  • `character`: The character used for padding. 

Example

padleftstring(${amountOne}, 10, 0) 

  • `${amountOne}`: The input string.
  • `10`: The desired total length of the resulting string after padding.
  • `0`: The padding character, in this case ‘0’ (zero).
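Both padding functions map directly onto Python's built-in string padding, shown here as an illustrative sketch (stand-in names, not GLU syntax):

```python
def padrightstring(s, length, char):
    # Append the pad character until the string reaches the target length.
    return s.ljust(length, str(char))

def padleftstring(s, length, char):
    # Prepend the pad character until the string reaches the target length.
    return s.rjust(length, str(char))

print(padrightstring("1346", 10, 0))  # -> 1346000000
print(padleftstring("1346", 10, 0))   # -> 0000001346
```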

STRIPSTART Function

The `STRIPSTART` function is used to remove leading characters from a string that match the specified character.

Function Structure: 

stripstart(${parameterName}, stripChar) 

  • `${parameterName}`: The original string. 
  •  `stripChar`: The character to be removed from the beginning of the string. 

Example 

STRIPSTART(${accountNumber}, 0)

  •  `${accountNumber}` is, for example, “00000867512837656” 
  • `stripChar` is ‘0’ 

Result: 

The function will remove all leading ‘0’ characters from the account number. So, “00000867512837656” will be saved as “867512837656”, and “00087693487672938” will be saved as “87693487672938”. 
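For illustration, the same leading-character removal can be sketched with Python's `str.lstrip` (a stand-in, not GLU syntax):

```python
def stripstart(s, strip_char):
    # Remove every leading occurrence of strip_char from the string.
    return s.lstrip(str(strip_char))

print(stripstart("00000867512837656", 0))  # -> 867512837656
print(stripstart("00087693487672938", 0))  # -> 87693487672938
```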

DIFF TIME STAMP

The `DIFFTIMESTAMP` function is used to calculate the difference between two timestamps in milliseconds.

Function Structure: 

difftimestamp(${dateTwo},${dateOne}) 

  •  `${dateTwo}`: The later date. 
  • `${dateOne}`: The earlier date. 


Example 

difftimestamp(${dateTwo},${dateOne}) 

In this example: 

  •  `DateTwo`: “17/03/2021” 
  • `DateOne`: “17/03/2020” 

Result: 

The function calculates the difference between these two dates in milliseconds. In your example, it results in: 

difftimestamp = 31536000000 

This represents one year calculated in milliseconds (365 days × 24 hours × 60 minutes × 60 seconds × 1000 milliseconds). 

If you need to calculate the time in minutes between the current time and an expiry time, you can follow these steps: 

1. Create a Derived Parameter called `timeNow` using the `${NOW}` function. 

2. Then create a Derived Parameter called `calcedExpiryTimeMilliSeconds` using the `difftimestamp` function to calculate the time difference in milliseconds. 

3. Now you can use the formula to convert `calcedExpiryTimeMilliSeconds` to `minutes`

This way, you can effectively calculate the time difference between two timestamps and convert it to the desired unit, such as minutes. 
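The calculation and the millisecond-to-minute conversion described above can be sketched in Python (illustrative only; `difftimestamp` is a stand-in, not GLU syntax):

```python
from datetime import datetime

def difftimestamp(date_two, date_one, fmt="%d/%m/%Y"):
    # Millisecond difference between two dd/MM/yyyy dates.
    delta = datetime.strptime(date_two, fmt) - datetime.strptime(date_one, fmt)
    return int(delta.total_seconds() * 1000)

ms = difftimestamp("17/03/2021", "17/03/2020")  # -> 31536000000
minutes = ms / 1000 / 60                        # convert milliseconds to minutes
```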

RIGHTSTRING Function

The `RIGHTSTRING` function is used to extract the rightmost characters from a string or parameter.

Function Structure: 

 ${string}.rightString[n] 

Parameters:

  •  `${string}`: The source string or parameter. 
  • `n`: The number of characters to extract from the right. 

Example:

${tax_id}.rightString[8] 

In this example, `${tax_id}` is a parameter or string, and you want to extract the rightmost 8 characters from it. 

Result: 

If `${tax_id}` contains, for example, “1234567890”, then `${tax_id}.rightString[8]` will result in “34567890”.

This function is useful when you need to retrieve a specific number of characters from the right side of a string or parameter. 
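The behaviour corresponds to negative-index slicing in Python, sketched here for illustration (a stand-in, not GLU syntax):

```python
def right_string(s, n):
    # The rightmost n characters (the whole string if n exceeds its length).
    return s[-n:] if n > 0 else ""

print(right_string("1234567890", 8))  # -> 34567890
```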

SUBSTRING Function

The `SUBSTRING` function is used to extract a portion of a string based on the specified starting and ending indices.  

Function Structures: 

1. With only the starting index: 

SUBSTRING(${string}, startNumber) 

2. With both starting and ending indices: 

SUBSTRING(${string}, startNumber, endNumber) 

  •  `${string}`: The source string or parameter. 
  • `startNumber`: The starting index (0-based) of the substring. 
  • `endNumber` (optional): The ending index (0-based) of the substring. 

Example

1. With only the starting index: 

SUBSTRING(${stringOne}, 5) 

In this example, `${stringOne}` is a parameter or string, and you want to extract the substring starting from the 5th index. 

 Result: 

If `${stringOne}` contains “Hello_world”, then `SUBSTRING(${stringOne}, 5)` will result in “_world” (it extracts characters from index 5 to the end). 

2. With both starting and ending indices: 

SUBSTRING(${stringOne}, 0, 5) 

In this example, `${stringOne}` is a parameter or string, and you want to extract the substring starting from the 0th index up to the 5th index. 

Result: 

If `${stringOne}` contains “Hello_world”, then `SUBSTRING(${stringOne}, 0, 5)` will result in “Hello” (it extracts characters from index 0 to 5, excluding the character at index 5). 

This function is useful for manipulating and extracting specific portions of strings. 
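Both forms map onto Python slicing, sketched here for illustration (a stand-in, not GLU syntax):

```python
def substring(s, start, end=None):
    # Extract from start to the end of the string, or from start up to
    # (but excluding) the character at index end.
    return s[start:] if end is None else s[start:end]

print(substring("Hello_world", 5))     # -> _world
print(substring("Hello_world", 0, 5))  # -> Hello
```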

SUBSTRING BETWEEN Function

The `SUBSTRING_BETWEEN` function is used to extract a substring from the original string located between two specified texts or substrings.

Function Structure: 

SUBSTRING_BETWEEN(${string}, text1, text2) 

  • `${string}`: The source string or parameter. 
  •  `text1`: The starting text or substring. 
  • `text2`: The ending text or substring. 


Example

SUBSTRING_BETWEEN(${stringOne}, DE, IZE) 

In this example, `${stringOne}` is a parameter or string, and you want to extract the substring that occurs between the texts “DE” and “IZE” in the original string. 

This function is useful for scenarios where you need to extract a specific portion of a string that is bounded by two known texts or substrings. 

TIMESTAMP Function

The `TIMESTAMP` function is used to obtain the current timestamp calculated in milliseconds.

Function Structure: 

timestamp 

Example

timestamp 

The `timestamp` function is used independently without any parameters. When called, it returns the current timestamp, representing the number of milliseconds that have elapsed since the Unix epoch (January 1, 1970, 00:00:00 UTC). 

Result: 

If you call `timestamp` at a specific moment, it will return the corresponding timestamp value. 

In the provided example: 

  • Current date & time: Tuesday 9 March 2021 08:50:38.955 
  •  `timestamp` result: 1615279838955 (milliseconds since the Unix epoch) 

This value can be useful for capturing and working with the current time in various scenarios within a system or application. 

UTC Time Function

The `CURRENT_DATE_TIME_UTC()` function returns the current date and time in Coordinated Universal Time (UTC). The format of the returned value is in the ISO 8601 format, which includes the year, month, day, hour, minute, second, and milliseconds, followed by the ‘Z’ indicating UTC. 

Function Structure: 

CURRENT_DATE_TIME_UTC()

Example

If you call `CURRENT_DATE_TIME_UTC()` at a specific moment, it will return a result like: 

2022-06-22T13:52:50.083Z 

This timestamp provides a standardised representation of the current date and time in UTC and is commonly used in various systems and applications. The ‘Z’ at the end indicates that the time is in UTC. 
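
For illustration, the same ISO 8601 UTC format (millisecond precision, trailing ‘Z’) can be produced in Python:

```python
from datetime import datetime, timezone

# Illustrative equivalent of CURRENT_DATE_TIME_UTC(): current UTC time in
# ISO 8601 form with millisecond precision and a trailing 'Z'.
now_utc = datetime.now(timezone.utc)
iso = now_utc.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now_utc.microsecond // 1000:03d}Z"
print(iso)  # e.g. 2022-06-22T13:52:50.083Z
```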

Concatenate Function

The expression `${string1}:${string2}:${string3}: …` is a template used to concatenate (join) multiple strings together using colons (`:`) as separators. The values of `${string1}`, `${string2}`, `${string3}`, etc., are replaced with actual values when the expression is evaluated. 

Function Structure:

${string1}:${string2}:${string3}: ...

Example

If you have the following values: 

  • `${date}` is “09/03/2021” 
  • `${string}` is “Hello_world” 
  • `${day}` is “Tuesday” 

When you substitute these values into the formula `${date}:${string}:${day}`, the result will be:

09/03/2021:Hello_world:Tuesday 

 

So, the response is a single string where the values of `${date}`, `${string}`, and `${day}` are joined together using colons as separators. 
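
The same joining behaviour can be sketched in Python for illustration:

```python
# Illustrative equivalent of the ${date}:${string}:${day} concatenation template.
date, string, day = "09/03/2021", "Hello_world", "Tuesday"
joined = ":".join([date, string, day])
print(joined)  # 09/03/2021:Hello_world:Tuesday
```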

ADD DAYS TO DATE Function

The ADD_DAYS_TO_DATE function is utilised to add a specified number of days to a given date. The syntax is ADD_DAYS_TO_DATE(${date}, <number of days to add>). The number of days can be provided in the request as a variable, for instance, ADD_DAYS_TO_DATE(${date}, ${numberOfDays}).

 Function Structure: 

ADD_DAYS_TO_DATE(${date},<number of days to add>)

Example

As mentioned above, the ADD_DAYS_TO_DATE function is used to calculate a new date by adding a specified number of days to an existing date. Here are two examples:

Example 1:

ADD_DAYS_TO_DATE(${dateOne},5)

  • Initial Date (Input): “09/03/2021”
  • Number of Days to Add (Input): 5
  • Result: The calculated date after adding 5 days to the initial date is “14/03/2021”.

Example 2:

ADD_DAYS_TO_DATE(${dateOne},${date})

  • Initial Date (Input): “09/03/2021”
  • Number of Days to Add (Input): Depends on the value of ${date}, which is not specified here.
  • Result: The calculated date after adding the specified number of days to the initial date.

In both examples, the function returns a new date. The first example adds a fixed number of days (5) to a specific date (${dateOne}). The second example suggests adding a variable number of days (specified by ${date}) to the same initial date (${dateOne}), but the specific outcome depends on the value of ${date}.
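
A Python sketch of the first example, assuming the DD/MM/YYYY format shown in the documentation:

```python
from datetime import datetime, timedelta

# Illustrative equivalent of ADD_DAYS_TO_DATE(${date}, n), assuming DD/MM/YYYY input.
def add_days_to_date(date_str: str, days: int) -> str:
    parsed = datetime.strptime(date_str, "%d/%m/%Y")
    return (parsed + timedelta(days=days)).strftime("%d/%m/%Y")

print(add_days_to_date("09/03/2021", 5))  # 14/03/2021
```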

REMOVE DAYS TO DATE Function

The purpose of the REMOVE_DAYS_TO_DATE function is to manipulate dates by subtracting a specific number of days from a given date.

 Function Structure: 

REMOVE_DAYS_TO_DATE(${dateOne}, <number_of_days_to_remove>)

Parameters:

  • ${dateOne}: The initial date from which days will be subtracted.
  • <number_of_days_to_remove>: The number of days to subtract from ${dateOne}.

Example

REMOVE_DAYS_TO_DATE(${date}, 5)

  • The variable `date` is initially set to “09/03/2021”. 
  • The function instructs to remove 5 days from the given date. 

 Explanation 

  1. Initial Date: The starting point is the date “09/03/2021” (assuming the format is DD/MM/YYYY). 
  2. Subtraction: The function subtracts 5 days from the initial date. 
  3. Result: The response will be “04/03/2021”. 

`REMOVE_DAYS_TO_DATE` is a convenient function for scenarios where you need to calculate a new date by subtracting a certain number of days from an existing date. It is particularly useful in data manipulations and can be employed in various contexts, such as managing time-based operations or adjusting timestamps based on specific requirements. 

DIFFERENCE BETWEEN DATES

The `DIFFERENCE_BETWEEN_DATES` function in GLU calculates the difference in days between two specified dates. It provides a convenient way to determine the duration or gap between two dates, ignoring the time components. 

Function Structure: 

DIFFERENCE_BETWEEN_DATES(${dateTwo}, ${dateOne}) 

  •  `${dateTwo}` and `${dateOne}` are placeholders or variables representing the two dates for which the difference needs to be calculated. 

Example

Suppose you have two dates: 

  • `${dateOne}`: “2022-01-15” 
  • `${dateTwo}`: “2022-02-10” 

Using the `DIFFERENCE_BETWEEN_DATES` function: 

DIFFERENCE_BETWEEN_DATES(${dateTwo}, ${dateOne})  

Result: 

The result of this function will be the number of days between the two specified dates: 

Result: 26 days 

Note: 

  •  The dates can be in any valid date format. 
  • The result is in terms of days. 

The `DIFFERENCE_BETWEEN_DATES` function is useful for scenarios where you need to calculate the difference in days between two dates, such as in scheduling, billing, or other time-related operations. 
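
The example above can be checked with a short Python sketch, shown purely as an illustration of the behaviour:

```python
from datetime import date

# Illustrative equivalent of DIFFERENCE_BETWEEN_DATES(${dateTwo}, ${dateOne}).
date_one = date.fromisoformat("2022-01-15")
date_two = date.fromisoformat("2022-02-10")
print((date_two - date_one).days)  # 26
```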

SET DATA TO CACHE

The `SET_DATA_TO_CACHE` function in GLU is used to store a variable value in a cache, associating it with a specified cache parameter name. This allows you to manage and retrieve values from the cache in your application. 

Function Structure: 

SET_DATA_TO_CACHE(${NewCacheValuepid},cachepid)

  • `${NewCacheValuepid}`: A placeholder or variable representing the value that you want to store in the cache. 
  • `cachepid`: The name assigned to the parameter within the cache. 


Example

Suppose you want to store the value of a variable `${NewCacheValuepid}` in the cache, for example from within a handler, and associate it with the cache parameter `cachepid`. Here is how you would use the `SET_DATA_TO_CACHE` function:

SET_DATA_TO_CACHE(${NewCacheValuepid}, cachepid) 

Result: 

The specified value `${NewCacheValuepid}` will be stored in the cache under the parameter name `cachepid`. 

Note: 

This function is useful for caching values that need to be accessed or shared across various parts of your application. 

The `SET_DATA_TO_CACHE` function facilitates the storage of variable values in a cache, enabling efficient data management and retrieval in GLU applications. 

GET DATA FROM CACHE

The `GET_DATA_FROM_CACHE` function in GLU is used to retrieve values from a cache. This function has different forms based on the use case. 

1. Array Form: 

GET_DATA_FROM_CACHE(array[], column1, column2, ${variable}) 

  •  `array[]`: An array containing a list of data to be looked up. 
  • `column1`, `column2`, …: Columns in the array where the value specified by `${variable}` will be searched. 
  •  `${variable}`: The variable to be found in the specified columns of the array. 

2. Single Parameter Form: 

GET_DATA_FROM_CACHE(singleCacheName) 

  •  `singleCacheName`: The name of a single parameter in the cache. 

3. Dynamic Parameter Form: 

GET_DATA_FROM_CACHE_USING_DYNAMIC_PARAM(${variable}) 

  •  `${variable}`: A variable representing the parameter to be retrieved from the cache. 

Examples

1. Array Form: 

GET_DATA_FROM_CACHE(chicken[], message, track, ${findme}) 

  •  `chicken[]`: An array with columns (message, track, id). 
  • `message`, `track`: Columns in the array. 
  • `${findme}`: Variable to be searched in the specified columns. 

message | track | id
--------|-------|------
liver   | cside | song4
heart   | aside | song7
feet    | bside | song1

  • If `${findme}` is found in the `track` column, the corresponding value in the `message` column will be returned. 
  • If no match is found, it returns NULL. 


2. Single Parameter Form: 

GET_DATA_FROM_CACHE(param) 

  •  Retrieves the value of a single parameter named `param` from the cache. 

3. Dynamic Parameter Form: 

GET_DATA_FROM_CACHE_USING_DYNAMIC_PARAM(${variable}) 

  • Retrieves the value of a parameter using a dynamic variable `${variable}`. 

Note: 

  •  The cache should be populated in a separate transaction using the ‘store in cache’ function. 
  • This command can only be used in a Derived Parameter. 

The `GET_DATA_FROM_CACHE` function is versatile, allowing you to retrieve values from arrays or single parameters in the cache, facilitating data retrieval in GLU applications. 
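
The array-form lookup can be sketched in Python for illustration; the cache is modelled here as a plain dictionary of row lists (an assumption, not GLU’s internal representation):

```python
# Illustrative sketch of the array-form GET_DATA_FROM_CACHE: search for a value
# in one column and return the matching row's value from another column.
cache = {
    "chicken": [
        {"message": "liver", "track": "cside", "id": "song4"},
        {"message": "heart", "track": "aside", "id": "song7"},
        {"message": "feet",  "track": "bside", "id": "song1"},
    ]
}

def get_data_from_cache(array_name, return_col, search_col, value):
    for row in cache[array_name]:
        if row[search_col] == value:
            return row[return_col]
    return None  # no match found

print(get_data_from_cache("chicken", "message", "track", "aside"))  # heart
```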

GET DATA FROM CACHE CONTAINS

The `GET_MAPPED_FROM_CACHE_CONTAINS` function in GLU is used to perform a comparison between a cached table and a parameter. This function checks if any of the look-up values in the cached array are contained in the specified parameter. 

GET_MAPPED_FROM_CACHE_CONTAINS

Function Syntax: 

 GET_MAPPED_FROM_CACHE_CONTAINS(tableOfValue[], returnValueColumn, lookUpValueColumn, ${parameter}) 

  •  `tableOfValue[]`: An array containing a list of data for comparison. 
  •  `returnValueColumn`: Column in the array whose corresponding value will be returned. 
  • `lookUpValueColumn`: Column in the array containing values to be checked against the parameter. 
  • `${parameter}`: The parameter to be compared. 

Example

Parameter: valueToLookInto:”What is my fruit?”


Suppose you have the following data in the cache: 

returnValue | lookUpValue
------------|------------
apple       | hat
pair        | abc
orange      | xyz12

And you want to check if the parameter `${valueToLookInto}` (“What is my fruit?”) contains any of the look-up values in the `lookUpValueColumn`: 

GET_MAPPED_FROM_CACHE_CONTAINS(tableOfValue[], returnValue, lookUpValue, ${valueToLookInto}) 

In this case, the function would return: 

apple 

This is because “hat” (from the `lookUpValueColumn` corresponding to “apple”) is contained in the `${valueToLookInto}` parameter. 

The `GET_MAPPED_FROM_CACHE_CONTAINS` function provides a mechanism to compare a parameter against a cached table and return the corresponding value based on the matching condition. It performs a contains check on the specified parameter against the values in the look-up column of the cached table. 
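
The contains-style lookup can be sketched in Python for illustration, using the table from the example above (the cached table is modelled as a list of dictionaries, which is an assumption):

```python
# Illustrative sketch of GET_MAPPED_FROM_CACHE_CONTAINS: return the
# returnValue whose lookUpValue is contained within the parameter string.
table = [
    {"returnValue": "apple",  "lookUpValue": "hat"},
    {"returnValue": "pair",   "lookUpValue": "abc"},
    {"returnValue": "orange", "lookUpValue": "xyz12"},
]

def get_mapped_from_cache_contains(rows, return_col, lookup_col, parameter):
    for row in rows:
        if row[lookup_col] in parameter:
            return row[return_col]
    return None

# "hat" is contained in "What is my fruit?" (inside the word "What").
print(get_mapped_from_cache_contains(table, "returnValue", "lookUpValue", "What is my fruit?"))  # apple
```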

GET MAPPED ARRAY FROM CACHE

The `GET_MAPPED_ARRAY_FROM_CACHE` function in GLU is used to retrieve a mapped array from cache based on a specified condition. It is useful when you have an array saved in cache, and you want to get a specific parameter from the array based on a condition. 

Function Structure: 

GET_MAPPED_ARRAY_FROM_CACHE(arrayToCache[], saveAttributeArrayInCache2,saveAttributeArrayInCache1,${conditionCache2},-)

  • `arrayToCache[]`: The name of the array in cache where the parameters `saveAttributeArrayInCache2` and `saveAttributeArrayInCache1` are saved. 
  • `saveAttributeArrayInCache2`: The first parameter saved in the array. 
  • `saveAttributeArrayInCache1`: The second parameter saved in the array. 
  • `${conditionCache2}`: The condition to determine which parameter to retrieve (1 or 2). 

REMOVE DATA FROM CACHE

This command serves the primary purpose of removing cached data associated with a particular parameter. For instance, you can apply REMOVE_DATA_FROM_CACHE(param) to precisely delete cache entries linked to the specified parameter. 

Function Structure:  

REMOVE_DATA_FROM_CACHE(param)
  • `param`: The name of the parameter whose associated cache data needs to be removed. 


Example

Suppose you have cached data associated with a parameter called `${myParameter}`, and you want to remove this data from the cache. You would use the following command: 

REMOVE_DATA_FROM_CACHE(${myParameter}) 

This command will delete the cache entries linked to the specified parameter `${myParameter}`. 

The `REMOVE_DATA_FROM_CACHE` function is employed to selectively remove cache data related to a specific parameter. It provides a means to clean up and manage cached information in a GLU environment.

CREATE ARRAY

The CREATE_ARRAY function is used to generate an array, and it takes three parameters:

Functional structure:

CREATE_ARRAY(${arraySizeParameter},[Key],[Value])

  • ${arraySizeParameter}: Represents the size of the array, which should be an integer.
  • [Key]: Represents the key or attribute for each element in the array.
  • [Value]: Represents the value associated with each key in the array.

Example

  • Derived Parameter: ‘scoreArray’
  • Formula: CREATE_ARRAY(${countScore},[quickpick],[true])
  • Value of ‘countScore’: 5

Resulting Array:

"boards": [
{"quickpick": true},
{"quickpick": true},
{"quickpick": true},
{"quickpick": true},
{"quickpick": true}
]

In this example, the scoreArray is created as an array of objects. Each object has a key ([quickpick]) and a value (true). The size of the array is determined by the value of ${countScore}, which is set to 5 in this case.

Note: The array elements are identical in structure, and the quickpick attribute is set to true for each element.
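
The construction can be sketched in Python for illustration:

```python
# Illustrative sketch of CREATE_ARRAY(${arraySizeParameter},[Key],[Value]):
# build an array of identical single-key objects.
def create_array(size: int, key: str, value) -> list:
    return [{key: value} for _ in range(size)]

print(create_array(5, "quickpick", True))
# [{'quickpick': True}, {'quickpick': True}, {'quickpick': True},
#  {'quickpick': True}, {'quickpick': True}]
```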

CHANGE_PARAMS_VALUE_IN_ARRAY

The `CHANGE_PARAMS_VALUE_IN_ARRAY` function allows GLU functions or formulas to be applied to parameter values in an array.

This function must have 4 arguments:

  1. arrayName (note there’s no [])
  2. Parameter Name (the parameter whose value will be read and overwritten)
  3. Function/Formula to execute, which must be between []
  4. Overwrite Flag (true to overwrite the current array, false to create a new array from the current one)

Examples:

CHANGE_PARAMS_VALUE_IN_ARRAY(arrayName,paramName,[SUBSTRING(${paramName},40,400)],true)
CHANGE_PARAMS_VALUE_IN_ARRAY(links,href,[https://glu.payments.com${href}],true)
CHANGE_PARAMS_VALUE_IN_ARRAY(Product,litres,[=${litres}/100],true)
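
The second example above (prefixing each `href` with a base URL) can be sketched in Python for illustration; the sample `links` data is assumed, not taken from GLU:

```python
# Illustrative sketch of CHANGE_PARAMS_VALUE_IN_ARRAY: apply a function to one
# parameter of every array element, overwriting in place or copying first.
def change_params_value_in_array(array, param_name, func, overwrite=True):
    target = array if overwrite else [dict(row) for row in array]
    for row in target:
        row[param_name] = func(row[param_name])
    return target

links = [{"href": "/pets/1"}, {"href": "/pets/2"}]
change_params_value_in_array(links, "href", lambda v: "https://glu.payments.com" + v)
print(links[0]["href"])  # https://glu.payments.com/pets/1
```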

CREATE ARRAYS FROM STRING WITH ATTRIBUTES

The `CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES` function in GLU is designed to dynamically create arrays based on a source string parameter (`stringValue`). This function is particularly useful when you have a structured string and you want to parse it into a nested array, allowing for customisation of attributes and delimiters. 

Functional structure:

CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${stringValue},[arrayName arraychildName....], [attribute], [delimiter1 delimiter2...],[extraAttribute1 extraAttribute2...],[extraAttributeValue1 extraAttributeValue2...], arrayIndex)

Parameters:

  • ${stringValue}: The source string parameter already unmarshalled into GLU.Engine.
  • [arrayName arraychildName….]: Denotes the array tree structure with potential multiple levels.
  • [attribute]: Represents the name of the parameter to be saved into the lowest level array from the source string.
  • [delimiter1 delimiter2…]: Specifies delimiters in the source string indicating breaks in the tree structure.
  • [extraAttribute1 extraAttribute2…]: Names of extra attributes to be added to the array.
  • [extraAttributeValue1 extraAttributeValue2…]: Corresponding values of the extra attributes.
  • arrayIndex: Determines the starting position in the array for the extra attributes.

Examples

Example 1:

CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${numbers},[boards selections], [], [; <,>],[quickpick],[false], 0)

Explanation:

  • ${numbers}: Source string parameter.
  • [boards selections]: Array tree structure with two levels.
  • []: No additional attributes at the top level.
  • [; <,>]: Delimiters indicating breaks in the tree structure.
  • [quickpick]: Attribute name for the lowest level array.
  • [false]: Attribute value for the lowest level array.
  • 0: Starting position in the array for the extra attributes.

Transformation:

"numbers": "1,2,3,4,5,6;11,12,13,14,15,16"

  • Transformed into:

"boards": [ {"quickpick": "false", "selections": ["1", "2", "3", "4", "5", "6"]}, {"quickpick": "false", "selections": ["11", "12", "13", "14", "15", "16"]} ]

Example 2:

CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${numbers},[boards selections], [], [;<, >],[],[00], 1)

Explanation:

  • ${numbers}: Source string parameter.
  • [boards selections]: Array tree structure with two levels.
  • []: No additional attributes at the top level.
  • [;<, >]: Delimiters indicating breaks in the tree structure.
  • []: No additional attributes at the lowest level array.
  • [00]: Attribute value for the lowest level array.
  • 1: Starting position in the array for the extra attributes.

Transformation:

"numbers": "1,2,3,4,5,6;11,12,13,14,15,16"

  • Transformed into:

"boards": [ {"selections": ["00", "1", "2", "3", "4", "5", "6"]}, {"selections": ["00", "11", "12", "13", "14", "15", "16"]} ]

In summary, the function enables the creation of arrays from a structured string, incorporating extra attributes as needed. The syntax is flexible, allowing customisation of array structure and additional attributes based on specific requirements.
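
A simplified Python sketch of Example 1’s transformation, shown for illustration only (it hard-codes the two-level boards/selections shape rather than the general tree syntax):

```python
# Illustrative sketch: split "1,2,3,4,5,6;11,12,13,14,15,16" into a nested
# boards/selections structure, adding one extra attribute per board.
def create_arrays_from_string(value, extra_attr=None, extra_value=None):
    boards = []
    for group in value.split(";"):       # ';' separates boards
        board = {}
        if extra_attr:
            board[extra_attr] = extra_value
        board["selections"] = group.split(",")  # ',' separates selections
        boards.append(board)
    return boards

print(create_arrays_from_string("1,2,3,4,5,6;11,12,13,14,15,16", "quickpick", "false"))
```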

CREATE ARRAY FROM ARRAY AND ARRAY CHILDREN

The `CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN` function streamlines the organisation of array data by consolidating both the parent array and its children’s values into a single, cohesive root array. This function simplifies the structure, bringing all child values directly into the parent array.  

Function Structure: 

CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN(balances) 

Example 

In the context of the function CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN(balances), where “balances” represents the parent array, the function operates by consolidating all values from its children arrays into the root array. 

For instance, consider the scenario with nested arrays like balances[].balanceResources[]. After applying the function, the parameters originally residing within the “balanceResources[]” array will be reorganised to exist directly within the “balances[]” array. 
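
The consolidation can be sketched in Python for illustration; the sample balances data is assumed, not from the GLU documentation:

```python
# Illustrative sketch of CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN: lift each
# child's fields into its parent element and drop the nested child array.
def flatten_children(parents, child_key):
    flattened = []
    for parent in parents:
        merged = {k: v for k, v in parent.items() if k != child_key}
        for child in parent.get(child_key, []):
            merged.update(child)
        flattened.append(merged)
    return flattened

balances = [{"account": "A1", "balanceResources": [{"amount": 100, "currency": "USD"}]}]
print(flatten_children(balances, "balanceResources"))
# [{'account': 'A1', 'amount': 100, 'currency': 'USD'}]
```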

Round Function

The ROUND function is used to round a decimal number to a specified number of decimal places. For example, if you have a number 123.4567 and you want to round it to two decimal places, you would use the function as follows: ROUND(123.4567, 2), which would result in 123.46. This function is useful for ensuring consistency and precision in financial calculations and other scenarios where specific decimal accuracy is required.

Functional structure:

ROUND(${amountToRound}, x)

  • ${amountToRound}: The decimal number you want to round. 
  • x: The number of decimal places to which you want to round the number.

Examples:

ROUND(123.4567, 2) -> 123.46
ROUND(987.654, 1) -> 987.7
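
The examples can be reproduced in Python for illustration; `Decimal` with half-up rounding is used here because it matches the financial-rounding behaviour the section describes (an assumption about GLU’s rounding mode):

```python
from decimal import Decimal, ROUND_HALF_UP

# Illustrative sketch of ROUND(${amountToRound}, x) with half-up rounding.
def round_amount(value: str, places: int) -> Decimal:
    quantum = Decimal(1).scaleb(-places)  # e.g. 0.01 for two decimal places
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

print(round_amount("123.4567", 2))  # 123.46
print(round_amount("987.654", 1))   # 987.7
```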

REPLACE Function

The `REPLACE` function in GLU is used to replace specific values within strings. It’s a straightforward text replacement function where occurrences of a particular value in the given string are replaced with another specified value. 

Functional structure:

REPLACE(${string},${valueToReplace},${valueToReplaceWith})

  • `${string}`: The string you want to modify. 
  • `${valueToReplace}`: The value you want to replace in the string. 
  • `${valueToReplaceWith}`: The value you want to replace `${valueToReplace}` with in the string. 

Example

REPLACE(${string}, ${valueToReplace}, ${valueToReplaceWith}) 

Example Scenario: 

Given the following inputs: 

  •  `${string}`: “Hello_World” 
  • `${valueToReplace}`: “World” 
  • `${valueToReplaceWith}`: “user” 

The `REPLACE` function transforms the string to: 

"string": "Hello_user" 

The `REPLACE` function is a simple yet powerful tool for modifying strings by replacing specific values. It’s useful when you need to dynamically update or customize string content within the GLU.Engine environment. 
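
The example scenario maps directly onto Python’s built-in string replacement, shown here for illustration:

```python
# Illustrative equivalent of REPLACE(${string}, ${valueToReplace}, ${valueToReplaceWith}).
result = "Hello_World".replace("World", "user")
print(result)  # Hello_user
```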

ENCODESTRING Function

The `ENCODESTRING32` and `ENCODESTRING64` functions in GLU are used to encode a string into either Base32 or Base64 formats, respectively. These encoding schemes are commonly employed for various purposes, including secure data transmission and storage. 

Functional structures:

ENCODESTRING32(${string})

Or

ENCODESTRING64(${string})

ENCODESTRING32:

Base32 is a binary-to-text encoding scheme that uses a set of 32 characters, typically the 26 uppercase letters A-Z and the digits 2-7. It is designed to represent binary data in a human-readable format. 

ENCODESTRING32(${stringOne}) 

  • Purpose: Encodes the input string to Base32 format.
  • Base32 Explanation: Base32 is a numeral system that uses a set of 32 digits, each represented by 5 bits. It often uses a standard 32-character set, including upper-case letters A–Z and digits 2–7.
  • Example: ENCODESTRING32("Hello") returns "JBSWY3DP".

ENCODESTRING64: 


Base64 is another binary-to-text encoding scheme that uses a set of 64 characters (commonly A-Z, a-z, 0-9, '+', and '/'). It's widely used to encode binary data for safe transmission over text-based channels, such as email attachments or data in URLs. 

ENCODESTRING64(${stringOne})

  • Purpose: Encodes the input string to Base64 format.
  • Base64 Explanation: Base64 is designed to carry binary data across channels that reliably support text content. It uses a set of 64 ASCII characters to represent binary information.
  • Example: ENCODESTRING64("Hello") returns "SGVsbG8=".

These encoding functions are useful when you need to transform strings into a format suitable for secure and reliable data transmission or storage. Choose between Base32 and Base64 encoding based on your specific requirements.
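
Both encodings can be reproduced with Python’s standard library, shown for illustration of the expected output:

```python
import base64

# Illustrative equivalents of ENCODESTRING32 / ENCODESTRING64 for a UTF-8 string.
text = "Hello"
print(base64.b32encode(text.encode()).decode())  # JBSWY3DP
print(base64.b64encode(text.encode()).decode())  # SGVsbG8=
```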

DECODESTRING Function

The `DECODESTRING32` and `DECODESTRING64` functions in GLU are used to decode a string from either Base32 or Base64 formats back to ASCII. These decoding functions are essential when you have encoded data and need to recover the original content. 

Functional Structures:

DECODESTRING32(${string})

Or

DECODESTRING64(${string})

DECODESTRING32:

Base32 decoding involves converting a string encoded in Base32 format back to its original ASCII representation. Base32 is often used to represent binary data in a human-readable format. 

DECODESTRING32(${encodedMessageBase32})
  • Purpose: Decodes the input string from Base32 format back to ASCII.
  • Base32 Explanation: Base32 is a numeral system that uses a set of 32 digits, each represented by 5 bits. It often uses a standard 32-character set, including upper-case letters A–Z and digits 2–7.

DECODESTRING64:

Base64 decoding is the process of converting a string encoded in Base64 format back to its original ASCII representation. Base64 is widely used for encoding binary data for secure transmission or storage. 

DECODESTRING64(${encodedMessageBase64})
  • Purpose: Decodes the input string from Base64 format back to ASCII.
  • Base64 Explanation: Base64 is designed to carry binary data across channels that reliably support text content. It uses a set of 64 ASCII characters to represent binary information.

These decoding functions are valuable when you need to reverse the encoding process and obtain the original content from Base32 or Base64-encoded strings. Choose the appropriate decoding function based on the encoding method used. 
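
The decoding direction can likewise be illustrated with Python’s standard library:

```python
import base64

# Illustrative equivalents of DECODESTRING32 / DECODESTRING64: recover the
# original ASCII content from the encoded form.
print(base64.b32decode("JBSWY3DP").decode())  # Hello
print(base64.b64decode("SGVsbG8=").decode())  # Hello
```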


ADD / REMOVE PERIOD Function

The `ADD_PERIOD` and `REMOVE_PERIOD` functions in GLU are used to manipulate date time values by adding or removing a specified period of time. These functions are helpful when you need to perform operations like adding or subtracting minutes, hours, days, weeks, months, or years from a given date time. 

Functional Structures:

ADD_PERIOD(${param},${daystoAdd},periodType)

or

REMOVE_PERIOD(${param},${daystoRemove},periodType)

1. ADD_PERIOD (date): 

`ADD_PERIOD(${param},${daystoAdd},periodType)` 

  •   Purpose: Adds a specified period to a datetime parameter. 
  •   Parameters: 
    •  `${param}`: The datetime parameter to which the period will be added. 
    • `${daystoAdd}`: The number of units (e.g., seconds, minutes, days) to add. 
    • `periodType`: The type of period to add (second, minute, hour, day, week, month, year). 
  •    Example: 
ADD_PERIOD(${staticDateAndTime},30, second) 

2. REMOVE_PERIOD (date): 

`REMOVE_PERIOD(${param},${daystoRemove},periodType)` 

  •   Purpose: Removes a specified period from a datetime parameter. 
  •   Parameters: 
    • `${param}`: The datetime parameter from which the period will be removed. 
    • `${daystoRemove}`: The number of units (e.g., seconds, minutes, days) to remove. 
    • `periodType`: The type of period to remove (second, minute, hour, day, week, month, year). 
  •     Example: 
   REMOVE_PERIOD(${staticDateAndTime},30,second) 

Period Types: 

  •  `second` 
  •  `minute` 
  • `hour` 
  •  `day` 
  • `week` 
  •  `month` 
  • `year` 

Example Scenarios: 

  1. Add 60 seconds to the current date and time. 
  2. Add 5 minutes to the current date and time. 
  3. Add 2 hours to the current date and time. 
  4. Add 3 days to the current date and time. 
  5. Add 6 years to the current date and time. 
  6. Add 2 weeks to the current date and time. 
  7. Remove 60 seconds from the current date and time. 
  8. Remove 5 minutes from the current date and time. 
  9. Remove 2 hours from the current date and time. 
  10. Remove 3 days from the current date and time. 
  11. Remove 6 years from the current date and time. 
  12. Remove 2 weeks from the current date and time. 

Note: These functions are useful for dynamic date and time calculations in various scenarios, such as setting expiration times for transactions or managing time-sensitive operations. 
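
For illustration, the fixed-length period types can be sketched in Python with `timedelta`; month and year arithmetic needs calendar-aware logic and is not shown here (the sample start time is assumed):

```python
from datetime import datetime, timedelta

# Illustrative sketch of ADD_PERIOD / REMOVE_PERIOD for fixed-length periods.
UNITS = {"second": "seconds", "minute": "minutes", "hour": "hours",
         "day": "days", "week": "weeks"}

def add_period(value: datetime, amount: int, period_type: str) -> datetime:
    return value + timedelta(**{UNITS[period_type]: amount})

def remove_period(value: datetime, amount: int, period_type: str) -> datetime:
    return value - timedelta(**{UNITS[period_type]: amount})

start = datetime(2021, 3, 9, 8, 50, 38)
print(add_period(start, 30, "second"))   # 2021-03-09 08:51:08
print(remove_period(start, 2, "week"))   # 2021-02-23 08:50:38
```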

ENCRYPT USING RSA PUBLIC KEY

The `ENCRYPT_USING_RSA_PUBLIC_KEY` function in GLU is used to encrypt a value using the RSA public key encryption algorithm. This function is typically used in scenarios where data needs to be securely transmitted or stored, and RSA public key encryption is employed for confidentiality. 

Functional Structure:

ENCRYPT_USING_RSA_PUBLIC_KEY(${decryptedValue},${modulus},${exponent},UTF-8)

Example

Parameters: 

  • `${decryptedValue}`: The value to be encrypted. 
  • `${modulus}`: The modulus part of the RSA public key (usually obtained from the public certificate). 
  • `${exponent}`: The exponent part of the RSA public key (usually obtained from the public certificate). 
  •  `UTF-8`: The character set encoding used for encryption. 

Note: The modulus and exponent are critical components of an RSA public key and are typically part of the public certificate. The public key is used for encryption, and the corresponding private key (not involved in this function) is used for decryption. 

This function ensures that sensitive information can be securely transmitted or stored, and only entities possessing the corresponding private key (which is kept secret) can decrypt and access the original data. 
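
The underlying mathematics can be illustrated with the classic textbook RSA numbers (n = 3233, e = 17, d = 2753); this is a sketch of the modular-exponentiation principle only, not of GLU’s implementation, which would also apply padding via a proper cryptographic library:

```python
# Textbook RSA sketch: encrypt with the public (modulus, exponent) pair,
# decrypt with the private exponent. Real systems must use padded RSA.
modulus, exponent = 3233, 17      # public key (from the certificate)
private_exponent = 2753           # held only by the decrypting party

message = 65                      # value to protect, as an integer
ciphertext = pow(message, exponent, modulus)
print(ciphertext)                                   # 2790
print(pow(ciphertext, private_exponent, modulus))   # 65 (original recovered)
```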

CONVERT DATE TO TIMESTAMP

The `CONVERT_DATE_TO_TIMESTAMP` function in GLU is used to convert a date to a timestamp. Timestamps are often represented in milliseconds since the Unix Epoch (January 1, 1970). This conversion is useful in various scenarios, such as comparing or manipulating date values. 

Functional Structure:

convert_to_timestamp(${date})

Parameter: 

  • `${date}`: The date to be converted to a timestamp. 

Example

convert_to_timestamp(${dateOne})

  • Purpose: Converts the provided date to its corresponding timestamp, typically represented in milliseconds.
  • Example: Given the date “17/03/2020”, the response would be the timestamp 1584396000000.

This function is particularly useful when there is a need to work with time in a numeric format, such as when performing date-based calculations or comparisons. The resulting timestamp represents the number of milliseconds that have elapsed since the Unix epoch (January 1, 1970, 00:00:00 UTC), making it a standard format for representing time across various systems and programming languages.
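
A Python sketch for illustration, assuming DD/MM/YYYY input; note that the resulting value depends on the timezone in which the date is interpreted (midnight UTC is used here, so the figure differs from the documentation’s example, which reflects a non-UTC offset):

```python
from datetime import datetime, timezone

# Illustrative sketch of converting a DD/MM/YYYY date to epoch milliseconds.
def convert_date_to_timestamp(date_str: str) -> int:
    parsed = datetime.strptime(date_str, "%d/%m/%Y").replace(tzinfo=timezone.utc)
    return int(parsed.timestamp() * 1000)

print(convert_date_to_timestamp("17/03/2020"))  # 1584403200000 (midnight UTC)
```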

MERGE VALUES IN ARRAY

The `MERGE_VALUES_IN_ARRAY` function in GLU is used to merge values from two columns within an array into a new column. This operation is particularly useful when you want to create a new column that combines information from existing columns in an array. 

Functional Structure:

MERGE_VALUES_IN_ARRAY(existingArray, [column1 column2], newColumnName, delimiter)

Parameters: 

  1. existingArray: Refers to the name of the pre-existing array.
  2. [column1 column2]: Denotes the two ‘columns’ within the array that require merging.
  3. newColumnName: Specifies the name of the newly created column housing the merged values.
  4. delimiter: Represents the character(s) utilised as a separator between the values of the specified columns.

Example

MERGE_VALUES_IN_ARRAY(arrayToMergeValues, [attribute1 attribute2], newAttribute,-)

  • arrayToMergeValues: This is the name of the array that already exists. 
  • [attribute1 attribute2]: These are the two ‘columns’ or attributes in the array that you want to merge. 
  • newAttribute: This is the name of the new attribute that will be created to store the merged values. 
  • -: This is the delimiter that is used to separate the values from attribute1 and attribute2 when they are merged. 

The function proves useful when there is a need to perform a lookup in the array by matching on two values, providing a convenient method to establish a combined lookup key. This combined key can serve various purposes, such as enhancing data retrieval and facilitating comparisons.
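
The merge can be sketched in Python for illustration; the sample `product` rows and attribute names are assumed:

```python
# Illustrative sketch of MERGE_VALUES_IN_ARRAY: combine two columns of each
# row into a new column, joined with a delimiter.
def merge_values_in_array(array, col1, col2, new_col, delimiter):
    for row in array:
        row[new_col] = f"{row[col1]}{delimiter}{row[col2]}"
    return array

product = [{"type": "fuel", "charge": "standard"}]
merge_values_in_array(product, "type", "charge", "typecharge", "-")
print(product[0]["typecharge"])  # fuel-standard
```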

HMAC SHA 1 Encoder Function

The `HMAC_SHA_1_BASE64_ENCODER` function in GLU is used to generate an HMAC-SHA-1 (Hash-based Message Authentication Code with Secure Hash Algorithm 1) signature for a given base string using a secret key. The result is then encoded in Base64 format. 

Functional Structure:

HMAC_SHA_1_BASE64_ENCODER(${baseString},${SignValueKey})

Parameters: 

  • `${baseString}`: The string message for which the HMAC-SHA-1 signature is generated. 
  • `${SignValueKey}`: The secret key used for generating the HMAC-SHA-1 signature. 

Example

  HMAC_SHA_1_BASE64_ENCODER(${payload}, ${secretKey}) 

  • HMAC is a mechanism for adding a shared secret key to a message, ensuring data integrity and authenticity. 
  • SHA-1 is a cryptographic hash function. 
  • The result is then encoded in Base64 format. 

Outcome:

The function takes the provided `${baseString}` and `${secretKey}`, applies the HMAC-SHA-1 algorithm to create a cryptographic signature, and then encodes the result using Base64. The final output is a Base64-encoded string that serves as a secure representation of the HMAC-SHA-1 signature for the given message and key pair. 

Practical Application:

  • Security: Ensures the integrity and authenticity of messages or data by creating a secure signature. 
  • Message Verification: Useful in scenarios where it’s essential to verify that a message has not been tampered with during transmission. 
  • API Authentication: Commonly employed in API security to validate the authenticity of requests. 

This function plays a crucial role in maintaining the security of data exchanges by generating a reliable and secure signature that can be used to verify the origin and integrity of transmitted information. 
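
The signing steps can be reproduced with Python’s standard library, shown for illustration; the sample payload and key are assumed values:

```python
import base64
import hashlib
import hmac

# Illustrative equivalent of HMAC_SHA_1_BASE64_ENCODER(${baseString}, ${SignValueKey}):
# HMAC-SHA-1 over the message with the secret key, then Base64-encode the digest.
def hmac_sha1_base64(base_string: str, key: str) -> str:
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = hmac_sha1_base64("payload-to-sign", "secretKey")
print(signature)
```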


HMAC SHA 256 Encoder Function

The `HMAC_SHA_256_BASE64_ENCODER` function in GLU serves as a critical component for ensuring the integrity and authenticity of data through the generation of a secure signature. Specifically, it utilises the HMAC-SHA-256 (Hash-based Message Authentication Code with Secure Hash Algorithm 256-bit) algorithm, coupled with a secret key, to produce a tamper-resistant signature. The resulting signature is then encoded into a Base64 format, enhancing its usability and interoperability. 

Function Overview: 

HMAC_SHA_256_BASE64_ENCODER(${jsonPayload},${privateKey})

Parameters: 

  •  `${jsonPayload}`: The original JSON payload or data that requires secure verification. 
  • `${privateKey}`: A confidential key employed in the HMAC-SHA-256 algorithm, enhancing the security of the generated signature. 

Example

HMAC_SHA_256_BASE64_ENCODER({"user": "JohnDoe", "role": "admin"}, "SecretKey456") 

The function performs the following steps: 

1. Utilises the HMAC-SHA-256 algorithm to create a cryptographic signature. 

2. Encodes the resulting signature into Base64 format. 

The `HMAC_SHA_256_BASE64_ENCODER` function is a fundamental tool in securing data transactions, offering a reliable means of generating and verifying cryptographic signatures to fortify the integrity of digital communication. 

AES 256 Encryption (CBC)

The `ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER` function in GLU serves as a robust encryption mechanism leveraging the widely adopted Advanced Encryption Standard (AES). This symmetric encryption algorithm, known for its security and reliability, operates with a 256-bit key and employs Cipher Block Chaining (CBC) mode. The purpose is to generate an AES-encrypted representation of sensitive data, typically in JSON format. 

Functional Structure:

ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER(${jsonPayload},${privateKey},${initVector})

Parameters:

  1. ${jsonPayload}: This is a placeholder for the JSON payload that you want to encrypt. It should be a variable or value containing the data you wish to secure.
  2. ${privateKey}: This represents the private key used for encryption. The private key is a secret cryptographic key that should be kept confidential. It plays a crucial role in the AES-256 bit encryption algorithm.
  3. ${initVector}: This is the initialisation vector (IV) used in the encryption process. The IV adds randomness to the encryption, making it more secure. It should be unique for each encryption operation and is typically generated randomly.

Example

ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER({"user": "JohnDoe", "role": "admin"},"SecretKey456","InitializationVec123")

In this example, the function encrypts a JSON payload containing user information with AES using a 256-bit key. The secret key “SecretKey456” is used together with the initialisation vector “InitializationVec123”, which adds randomness to the ciphertext. The resulting encrypted data is then represented in Base64 encoding. 

The `ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER` function provides a secure and standardised approach to encrypting sensitive data, making it an essential tool in scenarios where data confidentiality is of utmost importance. 

AES 256 Decryption (CBC)

The `DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER` function in GLU serves as a crucial component for securely retrieving and processing encrypted data. It utilises the AES-256-bit encryption algorithm in Cipher Block Chaining (CBC) mode, providing a reliable and widely adopted method for ensuring the confidentiality of sensitive information. 

Functional Structure:

DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER(${EncryptedPayload},${secretKey},${initVector})

Parameters:

  • `${EncryptedPayload}`: The base64-encoded data that has been encrypted using AES-256-bit in CBC mode. 
  • `${secretKey}`: The secret key used during the encryption process to maintain the confidentiality of the data. 
  • `${initVector}`: The initialisation vector used during encryption; the same value is required for successful decryption. 

Example

DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER(${EncryptedPayload},"SecretKey456","InitializationVec123")

In this example, the function decrypts a Base64-encoded payload that was initially encrypted using the AES-256-bit encryption algorithm in CBC mode. “SecretKey456” serves as the secret key for decryption, and “InitializationVec123” must match the initialisation vector used during encryption. The result is the original plaintext data. 

Practical Applications:

1. Secure Data Retrieval: Enables the secure retrieval of sensitive information stored in an encrypted format. 

2. Data Processing: Essential for applications that deal with encrypted data, ensuring confidentiality during processing. 

3. Security Integration: Commonly used in systems where encrypted data must be decrypted securely for various operational needs. 

In summary, the `DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER` function plays a crucial role in decrypting data encrypted with AES-256-bit in CBC mode, providing a secure and reliable method for accessing confidential information. 

BASE64 TO HEX

The BASE64_TO_HEX function is designed to convert a Base64-encoded value to its corresponding Hexadecimal representation. This conversion is useful in scenarios where Hexadecimal format is required, such as cryptographic operations or data transformations. 

Functional Structure:

BASE64_TO_HEX(${base64Value})

Parameters:

  • ${base64Value}: The Base64-encoded value that needs to be converted to Hex. 

Example

Note: Ensure that the input value provided to the function is a valid Base64-encoded string, as the function expects Base64-encoded input for accurate conversion. 

In summary, the BASE64_TO_HEX function serves as a valuable tool for transforming Base64-encoded data into its corresponding Hexadecimal representation, providing versatility in data processing and cryptographic applications. 
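The conversion itself is two standard steps: Base64-decode to raw bytes, then render those bytes as hexadecimal. A Python sketch (the helper name is ours):

```python
import base64

def base64_to_hex(base64_value: str) -> str:
    """Decode a Base64 string and return the underlying bytes as lowercase hex."""
    return base64.b64decode(base64_value).hex()

# "SGVsbG8=" is the Base64 encoding of the ASCII bytes "Hello".
hex_value = base64_to_hex("SGVsbG8=")  # "48656c6c6f"
```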

ENCODE HEX TO BASE64

The `ENCODE_HEX_TO_BASE64` function is employed to transform a hexadecimal value, often representing a SHA-1 Thumbprint, into a Base64URL encoded format. This conversion is integral when constructing JSON Web Signatures (JWS), particularly when including the x5t header parameter. 

Functional Structure:

ENCODE_HEX_TO_BASE64(${x5tSHA})

Parameters: 

  • `${x5tSHA}`: Represents the hexadecimal value, typically the SHA-1 Thumbprint. 

Example

  1. Hexadecimal Input: The function takes a hexadecimal value, `${x5tSHA}`, as input. This value is usually a SHA-1 Thumbprint generated using the `GENERATE_FINGERPRINT` function. 
  2. Base64URL Encoding: Utilising the Base64URL encoding process, the function converts the hexadecimal value into a Base64URL encoded representation. 
  3. Result Storage (Optional): The resulting Base64URL encoded value can be stored in a variable or parameter for subsequent use, often labeled as `x5t` in this example. 

The outcome of the `ENCODE_HEX_TO_BASE64` function is the Base64URL encoded representation of the input hexadecimal value. This result, commonly labeled as `x5t`, is essential when constructing JWS headers, particularly when including the x5t parameter to convey the SHA-1 Thumbprint. The encoded value is typically conveyed as a string suitable for JWS header construction. 
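The transformation decodes the hex thumbprint to raw bytes and re-encodes them as Base64URL. RFC 7515 specifies unpadded Base64URL for the x5t parameter, so the sketch below strips padding; whether GLU strips padding in the same way is not stated here and is an assumption, as is the helper name:

```python
import base64

def encode_hex_to_base64url(hex_value: str) -> str:
    """Convert a hex digest (e.g. a SHA-1 thumbprint) to unpadded Base64URL."""
    raw = bytes.fromhex(hex_value)
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

# A 20-byte SHA-1 thumbprint becomes a 27-character Base64URL string.
x5t = encode_hex_to_base64url("d6b4f8a9c3e1f2a4b5c6d7e8f9a0b1c2d3e4f5a6")
```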

MD5 HEX HASH Function

The MD5_HEX function is designed to generate an MD5 hash in hexadecimal format for a given parameter. MD5 (Message Digest Algorithm 5) is a widely used cryptographic hash function producing a 128-bit (16-byte) hash value, typically expressed as a 32-character hexadecimal number. 

Functional Structure:

MD5_HEX(${baseEncode})

Parameters:

  • ${baseEncode}: The parameter for which the MD5 hash in hexadecimal format needs to be generated. 

Example

The outcome of this function is the MD5 hash of the input data presented in hexadecimal format. This hash can be used for various purposes, including verifying data integrity and comparing files or values.

Note: While MD5 is widely used, it’s important to note that MD5 is considered insecure for cryptographic purposes due to vulnerabilities that allow for collision attacks. For security-sensitive applications, consider using stronger hash functions like SHA-256 or SHA-3.
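Equivalent behaviour using Python's standard library (the helper name is ours):

```python
import hashlib

def md5_hex(value: str) -> str:
    """Return the MD5 digest of `value` as a 32-character hexadecimal string."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

digest = md5_hex("hello")  # "5d41402abc4b2a76b9719d911017c592"
```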

URL ENCODER

The URL_ENCODER function is employed for URL encoding, transforming special characters into a format suitable for inclusion in a URL. This function is particularly useful when dealing with parameters or values that need to be passed in URLs. 

Functional Structure:

URL_ENCODER(${publicKey},UTF-8)

Parameters:

  • ${publicKey}: The parameter or value that requires URL encoding. 
  • UTF-8: The character encoding scheme to be used, typically specified as UTF-8. 

Example

The function encodes the provided parameter for URL usage, ensuring special characters are appropriately represented. 

Key Considerations:

1. URL Encoding: URL encoding is necessary to represent reserved characters in a URL to prevent misinterpretation. 

2. Character Encoding: UTF-8 is a widely used character encoding scheme that provides support for a broad range of characters. 

URL encoding is essential for handling special characters in URLs, ensuring proper functionality and data integrity when transmitting data via web applications. 

In summary, the URL_ENCODER function is a valuable tool for preparing parameters or values for inclusion in URLs by encoding special characters, contributing to the overall robustness and reliability of web applications. 
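The same encoding is available in most standard libraries. A Python sketch follows; note that GLU's exact escaping rules (for example, how it treats spaces) are not documented here, so this is an approximation:

```python
from urllib.parse import quote

def url_encode(value: str, encoding: str = "utf-8") -> str:
    """Percent-encode `value` so it can be embedded safely in a URL."""
    return quote(value, safe="", encoding=encoding)

encoded = url_encode("key with spaces&symbols=")  # "key%20with%20spaces%26symbols%3D"
```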

CONSOLIDATE Function

The purpose of the CONSOLIDATE function is to aggregate or consolidate data based on the specified criteria, grouping records by accountID and applying an aggregation (such as summation) to the amount values.

Function Structure:

CONSOLIDATE(${result},accountID,amount)

Parameters:

  • ${result}: The variable or parameter where the result of the consolidation will be stored.
  • accountID: The field or column in your data used as the grouping criterion for consolidation.
  • amount: The field or column containing the numeric values to be aggregated for each unique accountID.

Example

The outcome of this function would be the consolidated result, where data is grouped by unique accountID, and the amount values are aggregated. The specific consolidation operation (e.g., sum, average) would depend on the implementation details of the CONSOLIDATE function.

The exact behaviour and implementation of the CONSOLIDATE function may depend on the context or the system in which it is used. It’s advisable to refer to the documentation or code implementation for precise details.
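Since the documentation leaves the aggregation operation open, the sketch below assumes summation per unique accountID, which is the most common reading; the helper name and sample data are illustrative:

```python
from collections import defaultdict

def consolidate(records, group_key, value_key):
    """Group a list of dicts by `group_key` and sum `value_key` within each group."""
    totals = defaultdict(float)
    for record in records:
        totals[record[group_key]] += record[value_key]
    return dict(totals)

rows = [
    {"accountID": "A1", "amount": 10.0},
    {"accountID": "A2", "amount": 5.0},
    {"accountID": "A1", "amount": 2.5},
]
result = consolidate(rows, "accountID", "amount")  # {"A1": 12.5, "A2": 5.0}
```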

TOLOWERCASE Function

The `TOLOWERCASE` function is used to convert the contents of a parameter or variable to lowercase.

Function Structure:

TOLOWERCASE(${parameterName})
  • `TOLOWERCASE`: Indicates the function that converts text to lowercase.  
  • ${parameterName}: This variable represents the input string or parameter whose characters you want to convert to lowercase. 

Example

If ${parameterName} is, for example, “HelloWorld”, the outcome would be “helloworld” after applying the TOLOWERCASE function. 

TOUPPERCASE Function

The `TOUPPERCASE` function is used to convert the contents of a parameter or variable to uppercase.  

Function Structure: 

TOUPPERCASE(${parameterName})
  • `TOUPPERCASE`: Signifies the function responsible for converting text to uppercase.  
  •  `${parameterName}`: Denotes the parameter or variable containing the text to be converted.  

Example

If ${parameterName} is, for example, “helloWorld”, the outcome would be “HELLOWORLD” after applying the TOUPPERCASE function. 

MOD Function

The `MOD` operation in the provided formula is employed to categorise MSISDN numbers based on the evenness or oddness of their last two digits. The formula is structured as follows:  

Function Structure: 

= ${msisdnlast2Digit} % 2
  • `${msisdnlast2Digit}`: Represents the last two digits of the MSISDN.  
  • `%`: Denotes the modulo operation, calculating the remainder of the division.  
  • `2`: Serves as the divisor for the modulo operation.  

Example

For instance, if `${msisdnlast2Digit}` is `25`, the operation evaluates to `= 25 % 2`, resulting in `1`. This indicates an odd number.  

Outcome:  

  •  If the result is `0`, it implies an even MSISDN, categorized as `routekey 0`.  
  • If the result is `1`, it signifies an odd MSISDN, categorized as `routekey 1`.  

This approach efficiently segments MSISDN numbers into two distinct categories based on the evenness or oddness of their last two digits. The resulting `routekey` serves as a classification criterion.  
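The routing rule above is a plain modulo operation. In Python terms (the MSISDN values are illustrative):

```python
def route_key(msisdn: str) -> int:
    """Classify an MSISDN by the parity of its last two digits: 0 = even, 1 = odd."""
    last_two = int(msisdn[-2:])
    return last_two % 2

route_key("27831234525")  # 25 % 2 -> 1, so routekey 1 (odd)
route_key("27831234524")  # 24 % 2 -> 0, so routekey 0 (even)
```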

SIGN MX MESSAGE Function

The `SIGN_MX_MESSAGE` function is designed to apply the IETF/W3C XML Signature standard, often known as XML-DSig, specifically for ISO 20022 messages. XAdES (XML Advanced Electronic Signatures) outlines profiles of XML-DSig, and XAdES-BES (Basic Electronic Signature) within this context offers fundamental authentication and integrity protection, crucial for advanced electronic signatures in payment systems.  

Function Structure: 

SIGN_MX_MESSAGE(${messageISO20022},${certificate},${privateKey})

Parameters:  

  • `messageISO20022`: The XML message that needs to be signed using XML-DSig.  
  • `certificate`: The Public Certificate utilized for signing the message.  
  • `privateKey`: The Private Key utilised for signing the message.  

The result of this operation is the application of XML-DSig to the provided XML message, creating a digitally signed version. This signature provides assurances of both the authenticity and integrity of the XML document.  

VERIFY MX MESSAGE Function

The purpose of VERIFY_MX_MESSAGE is to verify the authenticity or integrity of a signed message using cryptographic methods, either with a certificate or a public key. 

Function Structures: 

VERIFY_MX_MESSAGE(${SignedMessage},${certificate},false) 

or 

VERIFY_MX_MESSAGE(${SignedMessage},${publicKey},true) 

  • ${SignedMessage}: This is likely the signed message or data that needs to be verified. 
  • ${certificate} or ${publicKey}: This parameter represents either a certificate or a public key, depending on the specific use case. 
  • false or true: This boolean parameter determines the mode of verification. When set to false, it likely indicates the use of a certificate for verification, and when set to true, it likely indicates the use of a public key. 

The outcome of the function would typically be a boolean value indicating whether the verification process succeeded (true) or failed (false). 

JSON Web Signature (JWS) or JSON Web Encryption (JWE)

JWS, or JSON Web Signature, requires some derived parameter inputs to create a signed JOSE payload. A JWS consists of three parts: Header, Payload, and Signature. Each part is encoded in Base64URL, and the three parts are then joined on a single line with a dot as the delimiter.

Header requires an x5t header parameter:

  • Base64url-encoded SHA-1 thumbprint (a.k.a. digest) of the DER encoding of the X.509 certificate.

GENERATE FINGERPRINT Function

The GENERATE_FINGERPRINT function is used to generate the SHA-1 thumbprint of a certificate. 

Function Structure: 

GENERATE_FINGERPRINT(${certWithTags},SHA-1) 

Parameters: 

  • ${certWithTags}: This is likely a variable or parameter containing the certificate information. The function calculates the SHA-1 thumbprint based on this certificate. 
  • SHA-1: Specifies the hashing algorithm to be used, in this case, SHA-1. 

Example: 

GENERATE_FINGERPRINT(${certWithTags},SHA-1) [saved as x5tSHA in this example] 

Outcome: 

  • The function generates the SHA-1 thumbprint of the provided certificate. 
  • The result is saved with the name “x5tSHA.” 
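The thumbprint is simply a hash over the certificate's DER bytes. A Python sketch follows; a real implementation would first strip the PEM armour and Base64-decode the certificate to obtain the DER bytes, and the stand-in bytes here are purely illustrative:

```python
import hashlib

def generate_fingerprint(der_bytes: bytes, algorithm: str = "sha1") -> str:
    """Return the hex thumbprint of a certificate's DER encoding."""
    return hashlib.new(algorithm, der_bytes).hexdigest()

# Stand-in for the DER encoding of an X.509 certificate.
x5t_sha = generate_fingerprint(b"example-der-bytes")
```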

GENERATE JWS Function

The GENERATE_JWS function is used to create a signed JSON Web Signature (JWS) payload by combining the previously created values. 

Function Structure: 

GENERATE_JWS(${headerJWS},${responseBodyStart},${rpkPrivateKey},${algorithmJWS})

Parameters: 

  • ${headerJWS}: Represents the header part of the JWS, likely containing information about the signing algorithm. 
  • ${responseBodyStart}: Represents the payload part of the JWS, typically the content that you want to sign. 
  • ${rpkPrivateKey}: Represents the private key used for signing the JWS. 
  • ${algorithmJWS}: Specifies the algorithm to be used for JWS signing. 

Example 

This combines the previously created values to create the signed JWS payload: 

GENERATE_JWS(${headerJWS},${responseBodyStart},${rpkPrivateKey},${algorithmJWS}) 

Outcome: 

  • The function takes the header, payload, private key, and algorithm as input parameters. 
  • Combines these parameters to generate a signed JWS payload. 

Note: 

  • Make sure that the private key (${rpkPrivateKey}) is securely stored and handled. 
  • The signing algorithm (${algorithmJWS}) should be selected based on your security requirements. 
  • Ensure that the header (${headerJWS}) and payload (${responseBodyStart}) are properly formatted according to the JWS specifications. 

This process is commonly used in securing and verifying the integrity of data in web communications, especially in scenarios like authentication tokens or data exchange between parties where data integrity and authenticity are crucial. 
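Asymmetric signing (RS256 and similar) needs a cryptographic library, but the compact-serialisation mechanics described earlier (Base64URL-encode the header and payload, sign, join with dots) can be shown with the HMAC-based HS256 algorithm using only the Python standard library. Names and values here are illustrative, not GLU internals:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Unpadded Base64URL encoding, as required by the JWS spec (RFC 7515)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def generate_jws_hs256(header: dict, payload: dict, key: str) -> str:
    """Build a compact JWS: base64url(header).base64url(payload).base64url(signature)."""
    signing_input = (b64url(json.dumps(header).encode("utf-8"))
                     + "." + b64url(json.dumps(payload).encode("utf-8")))
    signature = hmac.new(key.encode("utf-8"), signing_input.encode("ascii"),
                         hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = generate_jws_hs256({"alg": "HS256", "typ": "JWT"},
                           {"user": "JohnDoe"}, "SecretKey456")
```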

SIGN USING PRIVATE KEY AND BASE64

The `SIGN_USING_PRIVATEKEY_AND_BASE64` function is designed to sign data using a private key and encode the resulting signature in Base64 format. This process is commonly employed for data integrity verification and authentication in secure systems.  

Function Structure: 

SIGN_USING_PRIVATEKEY_AND_BASE64(${certPassword},
${certFilenamePath},${dataToSignParam},
${keyStoreProviderParam},${keyStoreTypeParam},
${signatureProviderParam},${signatureAlgorithmParam}) 

Parameters: 

  • ${certPassword}: The password or passphrase to access the private key. 
  • ${certFilenamePath}: The path to the certificate file containing the private key. 
  • ${dataToSignParam}: The data that you want to sign. 
  • ${keyStoreProviderParam}: The provider for the keystore (e.g., cryptographic library). 
  • ${keyStoreTypeParam}: The type of keystore used for storing keys. 
  • ${signatureProviderParam}: The provider for signature operations. 
  • ${signatureAlgorithmParam}: The cryptographic algorithm used for signing (e.g., RSA, ECDSA). 

Example 

The outcome of the `SIGN_USING_PRIVATEKEY_AND_BASE64` function is the Base64-encoded signature generated by signing the specified data using the provided private key. This signature is commonly used in secure communication systems to verify the authenticity and integrity of transmitted data.  

CURRENT NANO TIME

The `CURRENT_NANO_TIME` function returns the current time in nanoseconds, providing a level of precision that is crucial for accurate timing and performance measurements. This function is commonly utilized in high-performance computing applications where precise timing is essential.  

Function Structure: 

CURRENT_NANO_TIME 

Example

The result of the `CURRENT_NANO_TIME` function is a numerical value representing the current time in nanoseconds. This value is a large number, reflecting the high precision achieved by measuring time at the nanosecond level, for example 17122459102375. 
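In Python the equivalent is `time.time_ns()`. Note the documentation above does not state whether GLU measures from the Unix epoch or from an arbitrary origin (as Java's `System.nanoTime()` does), so the scale of the returned value should be treated as platform-specific:

```python
import time

nano_now = time.time_ns()  # current time in nanoseconds since the Unix epoch
```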

GET BODY() Function

The `GET_BODY()` function is designed to retrieve the entire response body from a transaction. It is particularly useful when you need to capture and use the response body in subsequent transactions or for further processing within your application.  

Function Structure:

GET_BODY() 

Example 

The result of the `GET_BODY()` function is the complete response body obtained from the current transaction. This includes all content, such as text, JSON, XML, or any other format returned in the response.  

GET BODY AS JSON

The `GET_BODY_AS_JSON()` function is designed to retrieve the content of a request body or a specific portion of the body and parse it as JSON data. This function is particularly useful when dealing with API responses or other data formats delivered in JSON.  

Function Structure: 

GET_BODY_AS_JSON(${variable}) 

Parameters:  

  • `${variable}`: The variable or path indicating the location of the JSON content within the response body.  

Example 

The result of the `GET_BODY_AS_JSON(${variable})` function is the parsed JSON data obtained from the specified location within the response body.  

Use Case:  

  • JSON Response Parsing: After making an API call or receiving a response in JSON format, you may want to extract specific information or navigate through the JSON structure. This function allows you to retrieve and parse JSON content.  
  • Dynamic JSON Processing: The `${variable}` parameter provides flexibility in specifying the location of the JSON content, enabling dynamic processing based on the structure of the response.  

The `GET_BODY_AS_JSON()` function simplifies the process of extracting and parsing JSON data from response bodies. By utilising this function, you can seamlessly integrate JSON processing into your application logic, enabling efficient handling of API responses and other JSON-formatted data.  

CHECK IF PAYLOAD IS JSON

The `CHECK_IF_PAYLOAD_IS_JSON()` function is utilised to determine whether a given payload is in JSON format. It returns a boolean value, `true` if the payload is valid JSON, and `false` if it is not. This function serves as a quick check to ensure that incoming data adheres to the expected JSON format.  

Function Structure: 

CHECK_IF_PAYLOAD_IS_JSON(${parameter}) 

Parameters:  

  • `${parameter}`: The parameter or variable containing the payload to be checked for JSON format.

The result of the `CHECK_IF_PAYLOAD_IS_JSON(${parameter})` function is a boolean value (`true` or `false`) indicating whether the payload is in valid JSON format.  

Use Case:  

  • Data Validation: Before attempting to parse or manipulate JSON data, it’s crucial to validate its format. This function can be used as an initial step to ensure that the payload adheres to the expected JSON structure.  
  • Error Handling: In scenarios where JSON parsing is expected, checking the payload’s validity can help in implementing appropriate error handling. If the payload is not valid JSON, error-handling mechanisms can be triggered.  

The `CHECK_IF_PAYLOAD_IS_JSON()` function is a valuable tool for quickly validating whether a given payload is in JSON format. By incorporating this function into your data processing workflows, you can enhance the robustness of your applications by ensuring that they handle JSON data correctly and gracefully handle unexpected formats.  
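A Python sketch of the same check; note that whether GLU accepts bare JSON values such as `123`, or only objects and arrays, is not specified here:

```python
import json

def check_if_payload_is_json(payload: str) -> bool:
    """Return True if `payload` parses as JSON, False otherwise."""
    try:
        json.loads(payload)
        return True
    except (TypeError, ValueError):
        return False

check_if_payload_is_json('{"user": "JohnDoe"}')  # True
check_if_payload_is_json('not json')             # False
```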

GET VALUE FROM JSON PAYLOAD()

The `GET_VALUE_FROM_JSON_PAYLOAD()` function is designed to retrieve the value of a parameter from a specified path within a JSON payload. It provides a convenient way to extract specific data points from complex JSON structures.  

Function Structure: 

GET_VALUE_FROM_JSON_PAYLOAD(${jsonPayload},array[1].param) 

Parameters:  

  • `${jsonPayload}`: The JSON payload from which the value needs to be extracted.  
  • `array[1].param`: The specific path indicating the location of the desired parameter in the JSON structure.  

Example


The result of the `GET_VALUE_FROM_JSON_PAYLOAD(${jsonPayload}, array[1].param)` function is the value of the specified parameter located at the given path within the JSON payload.  

Use Case:  

  • Data Extraction: In scenarios where a JSON payload contains nested structures, this function can be used to extract specific values based on their paths. It simplifies the process of navigating through complex JSON hierarchies.  
  • Parameter Retrieval: When dealing with API responses or other data sources in JSON format, extracting specific parameters becomes crucial. This function aids in retrieving targeted information.  

Note:  

  • The `${jsonPayload}` parameter should contain the JSON payload from which the value needs to be extracted.  
  • The `array[1].param` represents the path to the desired parameter. This path must align with the structure of the JSON payload.  

The `GET_VALUE_FROM_JSON_PAYLOAD()` function enhances the capability to work with JSON data by providing a means to extract specific values based on their paths within the payload. This is particularly useful in scenarios where precise data extraction is required from nested and complex JSON structures.  
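A minimal interpreter for paths of the `array[1].param` form can be sketched as follows. Assumptions: segments are dot-separated and array indexes are zero-based; GLU's actual path grammar may differ:

```python
import json
import re

def get_value_from_json_payload(payload: str, path: str):
    """Walk a dotted path with optional [index] segments, e.g. 'array[1].param'."""
    current = json.loads(payload)
    for segment in path.split("."):
        match = re.fullmatch(r"(\w+)\[(\d+)\]", segment)
        if match:  # keyed array access, e.g. array[1]
            current = current[match.group(1)][int(match.group(2))]
        else:      # plain object key
            current = current[segment]
    return current

doc = '{"array": [{"param": "first"}, {"param": "second"}]}'
value = get_value_from_json_payload(doc, "array[1].param")  # "second"
```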

GET COMBINED BODY

The `GET_COMBINED_BODY()` function is a versatile tool crafted to simplify data manipulation by amalgamating information that has been previously split. While splitting data into smaller components is a routine operation, the ability to effectively reassemble this fragmented data into a cohesive whole is equally crucial. The GET_COMBINED_BODY function addresses this need.  

Function Structure: 

GET_COMBINED_BODY()

Example 

The result of the `GET_COMBINED_BODY()` function is the combined or concatenated form of the previously split data segments.  

Use Case:  

  • Data Reconstruction: Useful in scenarios where data has been split into multiple parts for processing or transmission and needs to be reconstructed into its original form.  
  • Streamlining Processing Pipelines: In data processing pipelines, especially when dealing with distributed systems, information might be divided for parallel processing. This function helps in combining the results for further processing.  

Note:  

  • The `GET_COMBINED_BODY()` function relies on the context of the transaction where data has been split using the `SPLIT` function or a similar mechanism.  
  • It’s essential to use this function in an appropriate context where data splitting has occurred earlier in the transaction flow.  

The `GET_COMBINED_BODY()` function serves as an effective means of consolidating data segments that have been split previously. It plays a pivotal role in scenarios where data needs to be reconstructed or combined after undergoing processes that involve fragmentation.  

GET COMBINED BODY TO STRING

The GET_COMBINED_BODY_TO_STRING() function is a specialized tool designed for combining data fragments into a single string. While the GET_COMBINED_BODY function is versatile and can handle various data types, GET_COMBINED_BODY_TO_STRING() specifically focuses on string concatenation. It serves as a dedicated tool for simplifying the process of combining fragmented text or character data. 

Function Structure: 

GET_COMBINED_BODY_TO_STRING()

Example 

Outcome: 

  • The function operates on data fragments and combines them into a single string, simplifying the process of concatenating text or character data. 

Note: 

  • Use GET_COMBINED_BODY_TO_STRING() when you specifically need to concatenate string data. 
  • It streamlines the process compared to the more general-purpose GET_COMBINED_BODY, which can handle a variety of data types. 
  • Ensure that the data fragments provided as input are compatible with string concatenation to achieve the desired outcome. 


AES GCM Decryption

The DECRYPT_SEC_KEY_CIPHER function is designed to decrypt a payload using symmetric key encryption with AES/GCM/NoPadding algorithm. 

Function Structure: 

DECRYPT_SEC_KEY_CIPHER(${payload},${decrKey},${initVector},${secretKeySpecAlgorithm},
${cipherTransformation},${authTag}) 

  • Payload: This is the encrypted payload that you receive. 
  • DecrKey: Your decryption key. This is the key you’ve been provided to decrypt the payload. 
  • InitVector: Initialization Vector. It’s a parameter that typically comes from the header of the incoming request. Initialization vectors are used in encryption algorithms to ensure that the same plaintext does not result in the same ciphertext. 
  • SecretKeySpecAlgorithm: This parameter specifies the algorithm for generating secret keys. For this function, it must be set to “AES.” 
  • CipherTransformation: Specifies the name of the transformation, which includes the algorithm, mode, and padding scheme. For this function, it must be set to “AES/GCM/NoPadding.” 
  • AuthTag: Authentication Tag. It’s a parameter that typically comes from the header of the incoming request. The authentication tag is used to ensure the integrity and authenticity of the decrypted data. 

Example 

The result of the DECRYPT_SEC_KEY_CIPHER function would be the decrypted content of the payload using the provided decryption key, initialisation vector, and other cryptographic parameters. The function uses symmetric key encryption (AES) with GCM mode and no padding to ensure secure and authenticated decryption. The outcome is the original content that was encrypted, now in its plaintext form. 

VALIDATE CARDS WITH LUHN ALGO Function

The VALIDATE_CARDS_WITH_LUHN_ALGO function employs the Luhn algorithm to validate a given identity number (presumably representing a credit card number). The Luhn algorithm is a simple checksum formula used to validate various identification numbers, including credit card numbers. 

Function Structure: 

VALIDATE_CARDS_WITH_LUHN_ALGO(${identityNumber}) 

  • ${identityNumber}: This is the parameter representing the identity number (credit card number) to be validated. 

Example 

The function applies the Luhn algorithm rules and outputs either ‘false’ (does not pass the Luhn test) or ‘true’ (does pass the Luhn test): 

VALIDATE_CARDS_WITH_LUHN_ALGO(${identityNumber}) 

The function then performs the Luhn algorithm on the provided identity number and returns either ‘false’ if the number fails the Luhn test or ‘true’ if it passes the test. This provides a quick check to determine the validity of a credit card number based on the Luhn algorithm rules. 
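The checksum itself is short enough to show in full. A Python sketch (the sample numbers are standard test values, not real cards):

```python
def validate_with_luhn(identity_number: str) -> bool:
    """Return True if `identity_number` passes the Luhn checksum."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 whenever the doubled digit exceeds 9.
    for position, char in enumerate(reversed(identity_number)):
        digit = int(char)
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

validate_with_luhn("4539148803436467")  # True  (passes the Luhn test)
validate_with_luhn("4539148803436468")  # False (fails the Luhn test)
```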

GENERATE GUID Function

The GENERATE_GUID function is a utility designed to create a Globally Unique Identifier (GUID), a unique identifier for objects or entities within a computer system. A GUID is a 128-bit value usually represented as a string of hexadecimal digits separated by hyphens. 

Function Structure: 

GENERATE_GUID 

Example 

When this function is used, it dynamically generates a unique identifier at runtime. 

Globally Unique Identifier (GUID): This is a unique identifier that consists of 128 bits, ensuring a high probability of uniqueness. The format is typically a string of hexadecimal digits separated by hyphens (e.g., “550e8400-e29b-41d4-a716-446655440000”). 

Use Case

GUIDs are commonly employed in software development and database management scenarios where ensuring a unique identifier is crucial. They are particularly useful when there’s a need to uniquely identify objects or records across different systems or networks. 

Outcome

The result of calling GENERATE_GUID is a newly generated GUID, ensuring that the identifier is highly likely to be unique within the system or network. This uniqueness is achieved through an algorithm that minimises the probability of collision (two GUIDs being the same). 
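In Python, a version-4 (random) GUID comes from the standard `uuid` module; this illustrates the format described above rather than GLU's internal generator:

```python
import uuid

def generate_guid() -> str:
    """Return a random version-4 GUID as a hyphenated lowercase hex string."""
    return str(uuid.uuid4())

guid = generate_guid()  # e.g. "550e8400-e29b-41d4-a716-446655440000" (different each call)
```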

GET STRING FROM DUPLICATE KEYS ARRAY In JSON PAYLOAD

The GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD function is designed to extract a string value from a JSON object, specifically addressing scenarios where the JSON payload contains duplicated keys. This function is crucial in situations where parsing duplicated keys as a string might lead to exceptions due to conflicts. 

Function Structure: 

GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD(${parentpayload},objectPath)

Parameters: 

  • ${parentpayload}: This parameter represents the entire JSON payload where the string value needs to be extracted. 
  • objectPath: This is the path within the JSON structure to the object containing the duplicated keys. 

Example 

GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD(${parentPayload}, response.transaction.receiptsFields.line) 

  • parentPayload: This is the parameter where the entire JSON payload is stored. 
  • response.transaction.receiptsFields.line: This is the path to the object where the key “line” is duplicated. 

Scenario: 

Consider the following JSON payload: 

{
  "response": {
    "transaction": {
      "Number": "9877890000000000",
      "additionalTxnFields": "xxx",
      "receiptsFields": {
        "line": "Test: $5.00",
        "line": "PIN#: XXXXX",
        "line": "Expires: 1111",
        "line": "Serial #: 005343622929",
        "line": "Order ID #: LD ",
        "line": "Activated. Non-refundable.",
        "line": "RedemptionPIN: 1234"
      }
    }
  }
}

  • In this payload, the “receiptsFields” object contains duplicated keys named “line.” 

The function allows for manipulation of the JSON object, ensuring that the string values associated with duplicated keys are extracted without causing exceptions due to conflicts. The extracted value can then be used as needed within the system. 
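Standard JSON parsers silently keep only the last of the duplicated keys, which is why a plain parse loses data here. Python's `object_pairs_hook` exposes every key/value pair before that collapse happens; the sketch below collects all values bound to a repeated key. GLU's exact return format for the extracted string is not specified, so returning a list is an assumption:

```python
import json

def values_for_duplicate_key(payload: str, key: str):
    """Collect every value bound to `key`, even when the key repeats within one object."""
    collected = []

    def keep_pairs(pairs):
        # Called for each JSON object with ALL of its pairs, duplicates included.
        for k, v in pairs:
            if k == key:
                collected.append(v)
        return dict(pairs)

    json.loads(payload, object_pairs_hook=keep_pairs)
    return collected

doc = '{"receiptsFields": {"line": "Test: $5.00", "line": "PIN#: XXXXX"}}'
lines = values_for_duplicate_key(doc, "line")  # ["Test: $5.00", "PIN#: XXXXX"]
```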

Customer Specific Functions

PAN ENCRYPT Function

The TENACITY_PAN_ENCRYPT function is a specialised encryption algorithm designed for use with specific PAN (Primary Account Number) types. 

Function Structure: 

TENACITY_PAN_ENCRYPT(${pan}) 

Example 

When used with an actual PAN value, it encrypts the PAN according to the specific algorithm in use. 

PAN (Primary Account Number)

This is a numeric identifier that is essential in financial transactions. In this context, it refers to a credit card number. 

Use Case

The TENACITY_PAN_ENCRYPT function is employed for the encryption of PAN data. The exact details and considerations for using this function are typically provided by GLU Support. Users are advised to consult with GLU Support to understand the appropriate scenarios and guidelines for using this encryption algorithm. 

Outcome

The result of calling TENACITY_PAN_ENCRYPT(${pan}) is the encrypted version of the provided PAN. For instance, if the original PAN is “1944219200122247”, the function might return “1944882297307746” as the encrypted PAN. 


No-Code Myths – Debunked!

Debunking the No-code Myths

Understanding how to leverage software to stay competitive in the market, and notably the benefits of no-code platforms, is important. There are, however, a number of misperceptions and concerns (myths) about no-code platforms, the most common of which are outlined below. It is important to recognise these concerns and to understand why they are misplaced, so that the opportunities presented by no-code solutions can be embraced.

MYTH #1 – NO-CODE IS ONLY FOR BASIC USE CASES

No-code platforms can lower the cost of building apps and enable experimentation and the exploration of new ideas by building apps and their underlying ‘plumbing’ to test viability and business value quickly. No-code platforms can be used to deliver business mission-critical solutions in isolation, or in some cases, Software developers may still be involved (see Myth #4) to handle more sophisticated requirements. Importantly though, in recent years no-code platforms (such as GLU.Ware) are being used to bring complex Enterprise level Use Cases to life without any software developers being involved.

Myth #2 – NO-CODE IS JUST ANOTHER HYPE

The concept of using visual tools for software development, known as visual CASE (computer aided software engineering) tools, has been around since the 1970s, but early attempts were complex and required specialised knowledge. As a result, business users turned to homegrown tools like spreadsheets or databases, which were easier to build but had performance and security issues. It wasn’t until the mid-2000s, with advancements in cloud computing and software platforms, that the idea of no-code development began to address the historical challenges of software engineering in a way that is enterprise-ready. While the concept of no-code has been around for decades, its simplicity, ease of use, and ability to address enterprise needs has become widely recognised in recent years.

MYTH #3 – THERE’S NO REAL DIFFERENCE BETWEEN LOW-CODE AND NO-CODE

Low-code and no-code are not the same thing. They both use visual abstractions to simplify software development, but they are designed for different users and offer different benefits. Low-code platforms aim to reduce the amount of code that needs to be written by more junior developers, but still require knowledge of proper application design and architecture, as well as some lightweight coding knowledge. No-code platforms such as GLU.Ware, on the other hand, are intended for non-developers and aim to fully remove the need for coding.

MYTH #4 – NO-CODE PROJECTS CAN’T BE COMBINED WITH TRADITIONAL SOFTWARE DEVELOPMENT

No-code built solutions – both Business Apps and the underlying integration architecture (such as where GLU.Ware is used) can be used for a wide range of software solutions, including mission-critical ones. It is also possible to incorporate traditional software development elements into no-code projects by forming teams that include both no-code creators and software developers. These teams can collaborate efficiently and deliver enterprise-grade applications using no-code.

MYTH #5 – NO-CODE IS GOING TO PUT SOFTWARE DEVELOPERS OUT OF WORK

The idea that no-code development will replace software developers is false. There will always be a need for software developers to work with no-code teams, as software development languages and frameworks continue to evolve and push the boundaries of innovation. No-code tools are typically built on standardised components that were first developed and tested by software developers before being offered as pre-built components for no-code development. Therefore, software developers will continue to play an important role in the development of new digital apps and services.

MYTH #6 – NO-CODE WILL GET OUT OF CONTROL

The notion that no-code platforms are inherently insecure and unreliable is not true. While it is understandable for IT to worry about non-compliant and unreliable apps, modern no-code platforms offer governance and reporting capabilities to ensure proper use. In GLU.Ware, maker-checker controls, workflows and audit trails are just some of the capabilities available to ensure users follow appropriate software ‘development’ (i.e. configuration) practices. By implementing controls and governance, no-code platforms encourage the use of a standard platform that can be consistently governed.

MYTH #7 – NO-CODE PROJECTS FOLLOW THE SAME APPROACH AS TRADITIONAL SOFTWARE DEVELOPMENT

The development practices for no-code platforms should be tailored to take advantage of their unique strengths, rather than simply treating them like traditional development methods. No-code platforms intentionally abstract many details, which means that a different set of skills and backgrounds will be needed for a no-code team. GLU’s no-code methodology is principled on the ability to empower non-developers with the means of creating APIs and Integration components at speed (see the GLU ‘V-model of testing’), which in turn underpins an ability to Innovate at Speed.



Content is based on GLU’s Team experience and interpretation of the summary in Chapter 2 of The No-Code-Playbook – Published 2022 – ISBN 979-8-218-06204-0

Performance Benchmark

Context

GLU.Ware is all about speed: not just the ability to ‘Integrate at Speed’ but equally the ability to ‘Process at Speed’. It is our mission to ensure that GLU.Engines in a Client’s ecosystem are able to scale horizontally and vertically so as to guarantee that those GLU.Engines never cause transactional bottlenecks.


Performance Testing GLU.Engines is thus an integral part of the GLU.Ware Product Quality Assurance discipline. The objective of our performance testing process is to identify opportunities to optimise the GLU.Ware code, its configuration and how it is deployed and in-so-doing to continuously improve the performance of GLU.Engines.


Our Performance Testing process provides GLU and our Clients with insight into the speed, stability, and scalability of GLU.Engines under different conditions.

Test Scenarios

We have defined three performance test scenarios to cover the spectrum of solutions which GLU.Engines can provide integrations for. To focus on maximum throughput we have defined a simple ‘Straight Line Scenario’; to explore the impact of latency on a GLU.Engine we have included the ‘Latency Scenario’; and to understand the impact of complexity we have included the ‘Complex Integration Scenario’.


The Straight Line Scenario is a simple Asynchronous JSON payload pass through, a delivered JSON Payload simply being offloaded downstream to a Rabbit Message Queue.


The ‘Latency Scenario’ is similar to the Straight Line Scenario except that the payload is a USSD menu payload, which is passed through a GLU.Engine that produces transactions on a Rabbit Message Queue. Those transactions are in turn consumed from the queue by another GLU.Engine and passed to a stub configured with variable latency in its response (to emulate latency in downstream Endpoint systems).


The Complex Integration Scenario involves multiple layers of orchestration logic, multiple downstream Endpoints including multiple protocol transformations and multiple synchronous and asynchronous calls to Databases and Message Queues.


Executive Summary of Performance Test Results

      | Straight Line Integration Scenario | Complex Integration Scenario
TPS   | 4,400 | 754
CPUs  | 8 | 4
Setup | Containers: 1 Docker Swarm Manager (4 vCPU, 16 GiB) and 2 Worker Nodes (2 vCPU, 4 GiB) | VM (4 vCPU, 8 GiB Memory)


Additionally, we have defined a Performance Test scenario for the GLU.USSD solution which is pre-integrated with the GLU.Engine.

      | USSD Solution | USSD with Latency Injection
TPS   | 915 | 1 Silo: 350 (latency of 100ms); 3 Silos: 702 (latency of 100ms)
CPUs  | 16 | 4
Setup | Containers: 1 Docker Swarm Manager (8 vCPU, 16 GiB) and 2 Worker Nodes (4 vCPU, 16 GiB) | VM (2 vCPU, 8 GiB Memory) for the GLU.Engine Producer; Containers: 1 Docker Swarm Manager (8 vCPU, 16 GiB) and 2 Worker Nodes (4 vCPU, 16 GiB) for RabbitMQ; VM (4 vCPU, 16 GiB Memory) for the GLU.Engine Consumer & USSD


GLU.Engines are CPU bound, so ‘vertically scaling’ CPU leads to a better than linear performance improvement. GLU.Engines can also be horizontally scaled behind a load balancer or a Docker Swarm Manager (proxy) if containerised.


GLU.Engines have the ability to absorb latency in End Points up to 100ms and still achieve considerable TPS, with increased TPS being possible if horizontal scaling is architected into the deployment architecture.

Performance Optimization Recommendations

For optimal performance of a system of GLU.Engines, as reflected in the TPS benchmark figures for the systems defined in this document, the following recommendations are advised:

  • Performance of a system is dependent on the performance of each component within the system. A GLU.Engine is only one such component, so it is important to monitor and track all components connected to the GLU.Engine to ensure they are performing in line with expectations. It is essential to pro-actively monitor the ecosystem, and the GLU.Engines specifically, with alerts set for all metrics of interest, including but not limited to CPU, Memory, Heap Size, Garbage Collection, Disk Space, Latency etc.

  • The deployment architecture of GLU.Engines within the ecosystem has a direct bearing on their performance. Ideally, a performance forecast should be maintained so that required additional capacity is planned and implemented in a timely manner. GLU Support is available to assist with guidance on the required sizing of the deployment architecture. Forecasts should include transaction types, flows, TPS and payload sizes, as these all have a bearing on performance.

  • It is recommended to consult GLU support on specifications for GLU.Engines and the servers / VM’s / Containers / networks to help understand any constraints which may exist with the system architectures.

  • It is recommended that during ‘normal’ operations, log levels for GLU.Engines are set to INFO or above (i.e. not DEBUG or TRACE), as log levels affect GLU.Engine performance. In the event of a suspected problem, log levels can be set to DEBUG during the analysis to help trace the problem.

  • Where there is a suspected performance degradation of a GLU.Engine, the GLU Support team is able to help; however, it is essential that detailed logs and monitoring metrics are provided, along with a full description of the problem scenario, to help the support team understand the problem and, if need be, recreate it in the GLU labs. GLU may ask for access to monitoring tools in the client’s environment so as to collaborate in pragmatically addressing the problem as quickly as possible.

  • It is essential to ensure that GLU.Engines and associated hardware are kept up to date. GLU is always improving the GLU.Ware product and will release performance improvements from time to time.

  • It is recommended to tune and review the performance of other ecosystem components which include load-balancers, docker container managers, databases, message queues, internal applications, as well as internal and external networks and third party applications.

Performance Test Details

Straight Line Scenario Setup

Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which ran the GLU.Engine and RabbitMQ processes; 3 nodes were set up over 3 EC2 instances.

Virtual Machine Sizes

EC2           | Virtual AWS System | CPU    | Memory
Swarm Manager | t3a.xlarge         | 4 vCPU | 16 GiB
Swarm Node 1  | t3.medium          | 2 vCPU | 4 GiB
Swarm Node 2  | t3.medium          | 2 vCPU | 4 GiB



System Versions

System   | Version
GLU.Ware | 1.9.13
RabbitMQ | 3.8.7
Swarmpit | 1.9

JMeter Test Setup Properties


Deployment Architecture


Straight Through Scenario Performance Test Results

Test Criteria      | Result
Users              | 400
Duration           | 1 hour
TPS                | 4,400
% Errors           | 1.22 %
Total Transactions | 15,846,714
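The headline TPS figure can be sanity-checked from the totals above (a quick arithmetic check, assuming TPS = total transactions / duration; this is not GLU tooling):

```python
# Derive average TPS from the reported totals of the 1-hour test run.
total_transactions = 15_846_714
duration_seconds = 60 * 60  # 1 hour

tps = total_transactions / duration_seconds
print(round(tps))  # 4402, in line with the reported 4,400 TPS
```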



JMeter Results Summary



Rabbit MQ Result Summary

Commentary

An initial test involving a single node with 4 vCPUs and 16 GiB of Memory achieved a result of 1,885 TPS. The 4,400 TPS result was achieved as described above with a Swarm Manager and two worker nodes, collectively utilising 8 vCPUs and 16 GiB of Memory. This demonstrates that the GLU.Engine is CPU bound: by reconfiguring and allocating additional CPU, one is able to scale the performance of a GLU.Engine setup better than linearly.

Complex Scenario Setup

The complex scenario represents 2 benchmarks: the 1st excludes USSD and the 2nd includes USSD.

1st complex test excluding USSD

Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communication. In this test a Docker container was not used; rather, a GLU.Engine was deployed directly to a single AWS c5.xlarge (4 vCPU, 8 GiB Memory) EC2 instance. This did not include load-balancing, as the objective was to understand the load a single GLU.Engine could achieve.


The diagram below outlines the complex architecture. Note how JMeter injects transactions, and each transaction is orchestrated across a DB connection to MS SQL, REST, SOAP and Rabbit connections, returning a response back to JMeter where the completion time of the transaction was taken.

1st test complex Scenario performance test results

Test Criteria | Result
TPS           | 754

The graph below illustrates how performance scaled in proportion to the VM size of each EC2 instance being increased.

Commentary

The key factor influencing performance, when there was minimal latency on the response end points, was found to be the number of vCPUs available.

2nd test complex test including USSD

Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which hosted the GLU.Engines and executed the GLU.USSD tests; 4 nodes were set up, involving 1 Manager and 3 Worker nodes.


Virtual Machine Sizes

EC2           | Virtual AWS System | CPU    | Memory
Swarm Manager | t3.xlarge          | 4 vCPU | 16 GiB
Swarm Node 1  | t3.xlarge          | 4 vCPU | 16 GiB
Swarm Node 2  | t3.xlarge          | 4 vCPU | 16 GiB
Swarm Node 3  | t3.xlarge          | 4 vCPU | 16 GiB

System Versions

System   | Version
GLU.Ware | 1.9.14
Swarmpit | 1.9

2nd test USSD with Integration Scenario Performance Test Results

Test Criteria | Result
TPS           | 914.9

Latency Scenario

Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications.  Swarmpit was used to manage the Docker environments which supported the container running RabbitMQ.


The latency scenario was designed in such a way as to maximise performance where the end points were slow to respond, with a high degree of latency. The performance testing was set up with horizontal scaling across 3 silos, with contention on the test stubs being managed through a load balancer. Injection was carried out through a dedicated server for JMeter, which injected USSD menu transactions into a GLU.Engine set up to distribute transactions to 3 separate Rabbit queues in a round-robin fashion.



Virtual Machine Sizes

EC2                        | Virtual AWS System | CPU    | Memory
Decision Maker             | t2.large           | 2 vCPU | 8 GiB
USSD / Integration Engines | t3.xlarge          | 4 vCPU | 16 GiB
Test Stub                  | t2.medium          | 2 vCPU | 4 GiB
Swarm Manager              | a1.2xlarge         | 8 vCPU | 16 GiB
Swarm Node 1               | t3a.xlarge         | 4 vCPU | 16 GiB
Swarm Node 2               | t3a.xlarge         | 4 vCPU | 16 GiB

System Versions

System   | Version
GLU.Ware | 1.9.22
Swarmpit | 1.9

Latency Scenario with USSD / Integration Performance Test Results

Test Criteria  | Number of Silos | TPS Results
Latency 100ms  | 1 silo          | 350 TPS
Latency 100ms  | 3 silos         | 700 TPS

GLU.Engines are able to absorb increased latency if sufficient memory is allocated and throttle settings are adjusted to allow for the buffering of transactions. See Managing Load with Throttles.

Commentary

Even at extremely high latency, in excess of 3 seconds, GLU.Engines will still deliver ±90 TPS.

Reducing latency to 100ms increases throughput to ±350 TPS.

GLU.Engines scale in a near linear fashion. As additional performance is required additional servers can be added.

An increase in latency may necessitate additional memory allocation for the GLU.Engine to accommodate the buffering of transactions.
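The relationship between latency, throughput and buffering described above can be sketched with Little’s Law (an illustrative calculation, not part of GLU.Ware): the average number of in-flight transactions equals throughput multiplied by latency, which is why higher endpoint latency demands more buffering memory.

```python
# Little's Law: L = lambda * W
# L = average number of concurrent (buffered) transactions
# lambda = throughput in TPS, W = endpoint latency in seconds
def in_flight(tps, latency_seconds):
    return tps * latency_seconds

print(round(in_flight(350, 0.100)))  # 35 concurrent txns at 350 TPS, 100ms latency
print(round(in_flight(90, 3.0)))     # 270 concurrent txns at 90 TPS, 3s latency
```

Note how the 3-second-latency case, despite much lower throughput, holds far more transactions in flight, which is what drives the extra memory requirement.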

Environment Variables

It is possible to configure a set of variables whose values change depending on the Environment the GLU.Engine is run on. This allows the user to avoid setting fixed values inside the configuration itself, which would otherwise need to be changed during the lifecycle of the engine.

An example of this is if you want to change the Slack channel which messages are sent to, depending on whether you are deploying on a development environment or a production environment. It is possible to have a single variable name such as “slackKeyValue” with different channel keys for development and production.

How to set this up:

1. Setup your Global Variables

Press the “Global Variables” button in the Environments tool, to access the Variable configuration screen.

The Global Variables screen shows the variables that exist. It is possible to “Add Variables”, and to modify and delete existing variables from this screen. Each Variable must have a unique name per Client, with a description of what the variable is used for.

If you add or modify a variable you will be presented with the ‘Edit Variable’ dialogue.

In this dialogue, you can define/modify the name of the variable and description. For each environment, you can set the value to be used. It is not necessary to enter values for Environments that are not used. If the value is left empty, then null will be present in the GLU.Engine when used in that environment. Once Variables have been set per environment, those variables will be used for each GLU.Engine Environment-specific build.
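The behaviour described above can be sketched as follows (an illustration only; the variable name, environment keys and values are hypothetical, not GLU internals):

```python
# Per-environment values for each global variable.
# "slackKeyValue" and the environment names are example data.
GLOBAL_VARIABLES = {
    "slackKeyValue": {"dev": "DEV-CHANNEL-KEY", "prod": "PROD-CHANNEL-KEY"},
}

def resolve(name, environment):
    # An environment with no configured value resolves to None,
    # mirroring the null described for empty values above.
    return GLOBAL_VARIABLES.get(name, {}).get(environment)

print(resolve("slackKeyValue", "dev"))   # DEV-CHANNEL-KEY
print(resolve("slackKeyValue", "qa"))    # None (no value set for this environment)
```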

2. Use the variable in your integration

The variable that you have created is now available to be used in your integration. It will be added to the parameter dropdown box with the prefix “env_” .


Example of global variables that will be present in the pulldown box with the “env_” prefix:

Example of using a global variable in the context name:

/${header.env_slackKeyValue} … where slackKeyValue is the variable that was defined, env_ is the prefix identifying it as a global variable, and /${header. represents part of the parameter being passed to the URI.

Masking Environment Variables in the logs


See how the values are masked in the logs


Note: avoid using variables in the Header, Body, or query sections of endpoint calls, as they will not be masked when presented in the logs.

Using an Environment Variable in a Handler

Where an Environment Variable needs to have a condition applied and Action taken, when in the Integration Builder, and configuring the Handler, select the Environment Variable from the Parameter Name drop-down.

See the example below.

TCP/IP Connectors

Context

TCP/IP is an abbreviation for Transmission Control Protocol / Internet Protocol. It is a set of protocols that define how two or more computers can communicate with each other. The protocol is effectively a set of rules that describe how the data is passed between the computers. It is an open standard so can be implemented on any computer with the appropriate physical attributes. 

Properties Tab

If Properties need to be set for the TCP connector, for example for a TCP/IP connector to an HSM, the key = textline must be set to the value = true as shown in the example below.

As another example, by default, TCP/IP Connectors are asynchronous. If you require the Connector to be synchronous, the key = synchronous must be set to the value = true.

Decoder / Encoder Configuration

As a final, slightly more complex example, consider messages sent over TCP/IP that include a variable-length byte header known as the Variable Length Indicator (VLI). In such cases, proper configuration of the decoder and encoder is important. Here’s how to handle such requirements:

Variable Length Indicator (VLI):

  • Typically consisting of 2 bytes, the VLI precedes every message sent to or from the TCP/IP endpoint.
  • Bytes 1-2 indicate the number of bytes in the message, excluding the first 2 bytes.
  • These 2 bytes represent a 16-bit unsigned integer in network byte order.
  • Note: If the message is compressed before transmission, it must first be compressed to determine its length. As an example, suppose the text of an XML message (excluding the 2-byte header) adds up to 299 characters. Converting 299 from decimal to hexadecimal (in a scientific calculator, for example) gives 12B, i.e. the bytes 01 2B, which is exactly what one would expect to see as the first two bytes of the message if it had been dumped to a file and opened with a hex editor.
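The VLI framing described above can be sketched in a few lines (an illustration of the 2-byte, network-byte-order length prefix, not GLU’s implementation):

```python
import struct

# Prepend a 16-bit unsigned length in network (big-endian) byte order,
# counting only the message body (the 2 header bytes are excluded).
def frame(message: bytes) -> bytes:
    return struct.pack(">H", len(message)) + message

body = b"x" * 299        # e.g. a 299-character XML payload
framed = frame(body)
print(framed[:2].hex())  # "012b": 0x01 0x2B == 299, matching the example above
```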

Configuration Properties:

  • Configure the decoder type as ‘StringDecoder’ and the encoder as ‘LengthFieldPrepender’.
  • Set the length to ‘2’ to handle the 2-byte VLI.
  • Additionally, set the property ‘useByteBuf’ to ‘true’. This instructs GLU to convert the message body into ByteBuf before transmission, allowing for efficient byte-level manipulation.


    In the screenshot below, the above-described Decoder / Encoder and Field Length settings are shown. Additionally, you’ll see the TCP/IP property key = useByteBuf is set to value = true … with this setting GLU will turn the message body into ByteBuf before sending it out. Just like an ordinary primitive byte array, ByteBuf uses zero-based indexing: the index of the first byte is always 0 and the index of the last byte is always capacity - 1.

    Integration Builder – An Introduction

    The Integration Builder is used to configure the APIs your GLU.Engine will expose, the Outbound Connectors, and the Orchestration logic using Handlers that direct the transaction flows.  
      
    When the GLU.Engine build is processed in the Build Manager tool, the Integration configurations are compiled using the Apache Camel Routes framework. This enables broad native support within GLU.Ware for the full spectrum of the Camel Route capabilities (e.g., Message Queues, Database connections, LDAP etc.).  
      
    Multiple integrations are supported. The GLU.Engine can be configured to connect to a multitude of systems, translating protocols between them as required. GLU.Ware can validate a Request and collate the required data from different systems in order to complete the Request. Different outcomes of a transaction can be catered for with configurable ‘flows’ using ‘Handlers.’  
      
    All Integration flows start with an Inbound Connector (API) Request and end with a Response. Downstream of the Request/Response definition for a Transaction, the Integration Flow is defined in the Orchestration section. For each step in the Orchestration, calls can be passed to downstream systems via Outbound Connectors. The Outbound Connector options are created by configuration of the relevant Connectors using the ‘Connector Tool’. All Parameters (data) received for any Endpoint are transformed or ‘un-marshalled’ into a GLU ‘Object-Model’ that is persisted for the duration of an individual transaction only. It is from these un-marshalled parameters that subsequent Outbound Connector payloads (or the API Response) can be populated. These parameters can be converted to different protocol formats and / or simply reused at later steps in any flow. 

    Transaction Codes and Flow Codes

    Each Transaction is identified by a Transaction Code which must be a unique ‘string’. It can be anything the Analyst desires, typically it will describe the API service being configured e.g., ‘BalanceEnquiry’
     
    Within the Orchestration, each ‘leg’ is assigned a unique ‘Flow Code’ which, similarly to the Transaction Code, uniquely identifies each outbound call within the Orchestration Manager. These Flow Codes can be used to route transaction flows to specific steps in the flow, i.e. although Flows are defined chronologically, they do not necessarily execute in the sequence in which they are defined. 
     
    A GUID is generated by the GLU.Engine for each Transaction processed by the GLU.Engine. 
     
    A GUID (Globally Unique Identifier) is a large, random number used to uniquely identify each transaction that the GLU.Engine processes. GUIDs are 128 bits long, allowing for a vast number of unique values. The GUID is labelled as the GLU_TXN_ID in the GLU.Engine logs; GLU_TXN_IDs are useful when tracing transactions in the logs. The use of GUIDs ensures that each transaction, with its associated Transaction Code, has a unique identifier. 
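A 128-bit GUID of the kind described can be illustrated as below (uuid4 is used here purely as an example of a random 128-bit identifier; the GLU.Engine’s exact generation scheme is not specified in this document):

```python
import uuid

# A random 128-bit identifier, e.g. 3f2b8c1e-5d6a-4c7b-9e0f-1a2b3c4d5e6f
txn_id = uuid.uuid4()

print(txn_id)
print(txn_id.int.bit_length() <= 128)  # True: a GUID is a 128-bit value
```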

    The ‘Repair Integration’ Tool

    The ‘Repair Integration’ tool at the top right of the Integration ‘tree’ is used to re-index the full tree structure of your integration. 

    In the event you have a configuration that has been imported or segments of integrations that have been cut or pasted into your tree, the indexing will need to be refreshed. This is simply done by clicking the ‘Repair Integration’ icon. 

    In such situations where your config ‘index’ is misaligned, the configuration validations that run in the background and that flag config error warnings may flag warnings related to configuration issues that have been resolved. Using the ‘Repair Integration’ tool will clear such Warnings.  

    Search Integration Tool

    The ‘Search’ tool at the top right of the Integration ‘tree’ is used to search for and retrieve matches on the following information: 


     

    • Transaction Code and Flow Code: Transaction Code: Unique string identifying each transaction, often reflecting the configured API service (e.g., ‘BalanceEnquiry’). Flow Code: Unique identifier assigned to each leg within the orchestration, aiding in routing transaction flows to specific steps.  
    • Parameter Label, Parameter Name, and Attribute Name: Parameter Label: Descriptive label for a parameter. Parameter Name: Unique identifier for the parameter. Attribute Name: Specific attribute associated with the parameter.  
    •  Derived Parameter Label, Derived Parameter Name, and Derived Parameter Body: Derived Parameter Label: Descriptive label for a derived parameter. Derived Parameter Name: Unique identifier for the derived parameter. Derived Parameter Body: The content or value associated with the derived parameter.  
    • Static Parameter Label, Static Parameter Name, and Static Parameter Body: Static Parameter Label: Descriptive label for a static parameter. Static Parameter Name: Unique identifier for the static parameter. Static Parameter Body: The fixed content or value associated with the static parameter.  
    • Handler Label and Condition Value (if applicable): Handler Label: Descriptive label for a handler. Condition Value: Value used in conditions, for example, when the parameter equals a specified value.  
    • Value Mapping: Mapping of values between different parameters or components.  
    • Text in Request/Response Templates: Any textual content present in the request or response templates.  

    Note: Case sensitivity is not considered during searches; results are displayed irrespective of the case used in the search. For instance, searching for ‘response’ will yield results displayed in capital letters as shown in the example screen.  


    The search results appear as follows:

    HTTP / RESTful Connectors

    Configuring HTTP / RESTful Connectors

    All GLU HTTP connectors use the HTTP4 component, the Apache Camel HTTP component built on Apache HttpClient 4.x, which provides the outbound HTTP connectivity for the connector. (Note: ‘HTTP4’ refers to the Camel component name, not to a version of the HTTP protocol itself.)

    HTTP Cross-Origin Resource Sharing

    HTTP Cross-Origin Resource Sharing (CORS) is a mechanism in HTTP headers that enables a server to specify which origins, including domains, schemes, or ports, are allowed to access its resources. This is particularly important for web applications that need to interact with resources hosted on different domains.

    In the context of GLU, where HTTP connectors are used, CORS support can be enabled to facilitate cross-origin requests. Here’s how you can configure CORS support in GLU:

    1. Host Settings: Host settings represent the generic parts of the URL path, such as http://localhost:9088/services.
    2. Context Names: Context names, configured in the Integration Builder, form the basis of outbound connector requests. These names typically include the host, port, and context name, resulting in a URL structure like {host}:{port}/{contextName}.
    3. Enabling CORS: To enable CORS support in GLU, add ?enableCORS along with allowed headers to the context name in the Integration Builder. This informs the server that cross-origin requests should be permitted from specified origins. There are two common configurations for CORS support:
    • Allow All Headers: Use ?enableCORS=true&filterInit.allowedHeaders=* to allow all headers from any origin.
    • Allow Specific Headers: Alternatively, specify specific headers to be allowed using ?enableCORS=true&filterInit.allowedHeaders= followed by a comma-separated list of header names.
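The two configurations above can be sketched as follows (host, port, context and header names are placeholder values, not taken from the source):

```python
# Build inbound context names with CORS enabled, per the two options above.
host, port, context = "http://localhost", 9088, "services"

# Option 1: allow all headers from any origin.
allow_all = f"{host}:{port}/{context}?enableCORS=true&filterInit.allowedHeaders=*"

# Option 2: allow only a comma-separated list of specific headers.
allow_some = (f"{host}:{port}/{context}"
              "?enableCORS=true&filterInit.allowedHeaders=Content-Type,Authorization")

print(allow_all)
print(allow_some)
```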

    By configuring CORS support in GLU, you ensure that your HTTP connectors can effectively handle cross-origin requests, thereby enabling seamless interaction between different domains and resources.


    Properties

    The Properties Tab is where Protocol-specific Properties are defined. There are no HTTP or REST specific properties, so these options are not applicable for such Connector types. For other types, such as a SOAP Connector, the SOAP Properties (WSDL Location, SOAP Context Service name, etc.) will be presented.

    Swagger – OpenAPI 3.0

    Overview

    OpenAPI 3.0 is the latest name for what was previously the Swagger standard; the market, however, is still widely familiar with and using the term ‘Swagger’ to refer to OpenAPI 3.0, and as such GLU refers to the GLU.Ware OpenAPI 3.0 capability as ‘Swagger’. ‘Swagger Specification’ and ‘API document’ are used interchangeably. Details on the OpenAPI 3.0 Standard are available here.

    GLU.Ware has the ability to generate a Swagger standard based API document to describe the APIs in the GLU.Engine. The API document is compiled in the build process. GLU uses the API configuration (as defined in the GLU.Console) to define the method, schema and structure of the API, and it pulls in the associated API descriptions configured by the analyst in the Integration Builder. The API document is included in the GLU.Engine build such that when the JVM is run a web service will also run to host the Swagger web page to render the API document.

    The GLU.Engine Swagger File can be accessed through {Engine_URL}:{Server Port}/swagger. For example: http://mybankapi:9195/swagger

    This will display the Open API 3.0 (Swagger) definition for the GLU.Engine APIs.

    Download JSON for Swagger Specification

    Included in the Swagger specification is a URL which can be used to download the full JSON for the swagger specification.


    This JSON can be used to render the swagger definition in the GLU.Control API Manager and/or other swagger management tools.

    Console API Control panel

    The output of the API document generated by GLU will resemble the following sample.


    The GLU.Console API Control Panel is used to configure the API document that will be generated.


    The following sections describe each of the labelled fields shown in the API Control Panel screenshot above.

    A – Swagger API Title

    This is the title for the Swagger specification.

    B – Swagger API description

    This is the description of the API; it should describe all the APIs displayed on the Swagger specification and accessible in the GLU.Engine.

    The format of this description is HTML.

    See below for an example of the API description:


    C – Swagger Contact email

    This is the email address of the person responsible for supporting the exposed API. Clicking it on the Swagger page opens the associated email application with a new message addressed to the provided email address.

    D – Swagger API Terms of service URL

    The URL to the terms of service for the API.

    e.g. www.myorg.legalblurb.com

    E – Swagger API Licence URL

    The URL to the license information for the exposed API.

    e.g. www.myorg.termsandSLA.com

    F – Swagger Groups

    On the Swagger specification it is possible to group API transactions into sections. To do this, use the ‘Manage Swagger Groups’ tool.


    In the screenshot below you can see how APIs are grouped based on their Swagger Group associations.
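    In OpenAPI 3.0 terms, this kind of grouping corresponds to tags: operations sharing a tag render under the same section. The group and operation names in the sketch below are invented for illustration:

```python
# Illustrative only: a fragment showing how tagged operations group together.
spec_fragment = {
    "tags": [
        {"name": "Pets", "description": "Pet lookup transactions"},
        {"name": "Accounts", "description": "Account transactions"},
    ],
    "paths": {
        "/pets/{id}": {"get": {"tags": ["Pets"], "summary": "Find Pet by ID"}},
        "/accounts": {"get": {"tags": ["Accounts"], "summary": "List accounts"}},
    },
}

# Operations carrying the same tag appear under the same section heading.
pets_ops = [path for path, methods in spec_fragment["paths"].items()
            if "Pets" in methods["get"]["tags"]]
print(pets_ops)  # ['/pets/{id}']
```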


    G – Manage Swagger Servers

    It is possible to configure server URLs pointing at the GLU.Engine that contains the API transactions defined in the Swagger specification. This makes it possible to use the web-service-hosted Swagger specification to test each API transaction.
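    In OpenAPI 3.0, these URL settings correspond to the `servers` array, which tells the rendered Swagger page where to send test requests. A sketch, reusing the invented engine address from earlier in this page:

```python
# Illustrative sketch of an OpenAPI "servers" entry and how a Swagger page
# would combine it with an API path when testing a transaction. The URL is
# an invented example of a GLU.Engine address.
servers = [
    {"url": "http://mybankapi:9195", "description": "GLU.Engine (test)"},
]

def resolve(path: str, server_index: int = 0) -> str:
    """Join a configured server base URL with an API path."""
    return servers[server_index]["url"] + path

print(resolve("/pets/1"))  # http://mybankapi:9195/pets/1
```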


    Request Parameters

    If samples have not been included in the API transaction configuration, then GLU will generate a schema for the API Request and Response parameters based on the Request and Response “body” parameters defined in the API configuration.

    The schema is based on Parameter Types.

    The table below shows how the parameters are mapped to the Swagger specification.

    | GLU.Console Parameter Type | GLU.Console Parameter definition | Swagger definition | Generated in Swagger |
    |---|---|---|---|
    | Text | minLength | string | Yes |
    | Text | maxLength | string | Yes |
    | Date | includes extra date format definition | string | Yes |
    | Text | regex | string | Yes |
    | Text | DEFAULT | string | No – default value isn’t included |
    | Hash | – | string | Yes |
    | Integer | – | integer | Yes |
    | Image | – | image | Yes |
    | Float | – | float | Yes |
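    The mapping in the table can be sketched as a small function. The function and argument names below are invented for illustration; GLU performs this translation internally at build time, and the type strings simply mirror the table (note that `regex` maps to the Swagger `pattern` keyword, and a `DEFAULT` value is not emitted):

```python
# A hedged sketch of the parameter-to-schema mapping described in the table.
def to_swagger_schema(param_type, min_length=None, max_length=None,
                      regex=None, date_format=None):
    """Translate a GLU.Console parameter definition into a schema fragment."""
    # Type strings mirror the "Swagger definition" column of the table.
    schema = {"type": {"Text": "string", "Date": "string", "Hash": "string",
                       "Integer": "integer", "Image": "image",
                       "Float": "float"}[param_type]}
    if min_length is not None:
        schema["minLength"] = min_length
    if max_length is not None:
        schema["maxLength"] = max_length
    if regex is not None:
        schema["pattern"] = regex
    if date_format is not None:  # Date carries an extra format definition
        schema["format"] = date_format
    return schema

print(to_swagger_schema("Text", min_length=1, max_length=10))
# {'type': 'string', 'minLength': 1, 'maxLength': 10}
```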



    See the example below from the Swagger specification showing this mapping.





    Show in Swagger Doc: Each transaction has a tick box indicating whether a section should be generated for it in the Swagger specification. Tick the box to include the transaction in the Swagger specification; un-tick it to leave the transaction out.


    Sample messages for the Request payload, as well as Success and Failure samples for the Response payload, can be defined. Where provided, these will be included in the API document. When the sample sections are filled in, they override the schema definition generated from the request parameters: the value defined in the sample field is pushed to the Swagger specification instead.
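    The effect of this override can be sketched as two specification fragments: one generated from the parameter definitions, one where a sample has been supplied. The payload fields and structure below are invented examples, not real GLU output:

```python
# Illustrative only: without a sample, the spec carries a generated schema.
without_sample = {
    "content": {"application/json": {
        "schema": {"type": "object",
                   "properties": {"petId": {"type": "integer"}}}}}}

# With a sample defined, the sample value takes the place of that schema.
with_sample = {
    "content": {"application/json": {
        "example": {"petId": 42, "name": "Rex"}}}}

print("example" in with_sample["content"]["application/json"])  # True
```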
