Enterprise Applications Integration: Traditional SOA vs. Modern Microservices

Swagger Load enables OpenAPI / Swagger files to be loaded into the GLU.Console to generate Transaction configurations directly.
For any REST connector, the green button is available for you to link a swagger.
The first time you use the Swagger Load tool the pop-up below will prompt you to either upload a .json or .yaml file or to point to a URL for the Swagger file you wish to use.
Once this is done and the Swagger document has been loaded against the connector, when you view the connector in the Connector screen you will see that the BOTH button is pink and an extra button appears in the ACTION column.
You can access the Swagger Load tool within the Orchestration Manager. The tool appears only when the Connector you have selected is REST.
If you have previously loaded and used a Swagger file, when you click on ‘Generate Endpoint from Swagger’ it will bring up the most recently loaded Swagger (example below). If you want to use a different Swagger, click on ‘Load New Version’.
The Connector Swagger Manager popup (above) will show all API Transactions available within the Swagger file. You can use the radio buttons to select the API Transaction you want to generate your config for.
Below, you can see the ‘API Transaction’ selected is ‘Find Pet by ID’ and you are then given the option to define the Request and Response content types depending on what the API Transaction chosen supports.
You then click ‘Generate’ to create the configuration for this leg of your Orchestration. You then simply need to clear the validation warnings by setting the Parameter Names to use for each Parameter on the Request and Response.
GLU.Ware leverages various software, libraries and tools. The key underlying enabler of GLU.Ware is Apache Camel, which, along with various other libraries, is opensource and used under the permissive Apache 2.0 opensource license. The GLU ISO8583 Connector makes use of the jPOS component under the opensource GNU Affero General Public License. The Jenkins tool and the slf4j-log4j12 library are used under the permissive MIT opensource License. OpenJDK is used under the GNU General Public License v2. Hibernate is used under the opensource GNU Lesser General Public License version 2.1. Other disclosed programmes are proprietary in nature, such as the various AWS tools that GLU Software relies on; those do not form part of the software code, but the software relies on those disclosed programmes to function.
GLU Functions and Formulas offer versatility, serving Derived Parameters as well as Request and Response Handlers.
It is important to note that a singular derived parameter or handler can only be associated with one FUNCTION, prohibiting the mixing of two FUNCTIONS. For instance, if a Derived Parameter needs to calculate the time difference between the current time (utilising the NOW FUNCTION) and another parameter, the DIFFTIMESTAMP FUNCTION can be employed. However, it necessitates first defining a Derived Parameter, let us say ‘timeNow’, using the NOW FUNCTION. Subsequently, the DIFFTIMESTAMP FUNCTION can be utilised with FUNCTION notation as demonstrated below:
DIFFTIMESTAMP(${timeExpiry}-${timeNow}) |
Some functions do not return values, such as those removing data from the cache; such functions can be executed on their own by ticking the Run Function box.
The below screenshot shows the tick box selected and the parameter field not being shown.
All functions are accessible through the Predefined Functions feature. Upon selecting the “Predefined Functions” tick box, a drop-down menu displays a list of predefined functions. Opting for a Predefined Function automatically replaces the Function or Formula box with the template of associated parameters, as shown in the screenshot below:
If the box is unticked, the Predefined Function field vanishes, while the function itself persists, as shown in the screenshot below:
Note: FORMULAs involve the use of mathematical calculations and are always prefixed with the ‘=’ symbol. FUNCTIONs are not preceded by any symbol.
This Derived Parameter is the most basic of FUNCTIONS in that it enables one to create a Derived Parameter with a specific INITIAL value. In the example below the starting value will be ‘0’ for the ‘redeemedAmountFormatted’ Derived Parameter. This enables one to add, for example, a Handler rule that will overwrite this parameter in the event that another received parameter, e.g. ‘redeemedAmount’, is NOT NULL.
The IFNULL Function is used to check whether a parameter is NULL and, if so, return another parameter that is specified by the user. This function is similar to a try/catch statement in JavaScript.
Function Structure:
IFNULL(${nullParam},${string}) |
When a Derived Parameter is created utilising the IFNULL Function, it checks if the first parameter (`${nullParam}`) is NULL. If it is, the function returns another specified parameter (`${string}`). In cases where no parameter is explicitly specified, a static value is returned. If the first parameter is not NULL, the function simply returns the value of the first parameter.
Examples
Here are some examples to illustrate its usage:
Example 1:
IFNULL(${Param1},${Param2}) (Param1 isn't sent at all) Param2 = "Hello_World" IFNULL returns "Hello_World" |
Scenario:
Param1 is not sent or is null. Param2 is set to “Hello_World”.
Outcome:
In this scenario, the IFNULL function will return “Hello_World” because Param1 is either not sent or is null, and the fallback value is specified as Param2, which is “Hello_World”.
Example 2:
The IFNULL function checks if the first parameter (Param1) is null. If it is null, the function returns the second parameter (Param2). If it is not null, it returns the value of the first parameter.
IFNULL(${Param1},${Param2}) Param1 = "Big_Bang" Param2 = "Hello_World" IFNULL returns "Big_Bang" |
Scenario:
Param1 is provided with the value “Big_Bang”, which is not null. Param2 is “Hello_World”.
Outcome:
Since Param1 is not null, the IFNULL function returns the value of Param1, which is “Big_Bang”. Therefore, the result of the expression IFNULL(${Param1},${Param2}) with the given values is “Big_Bang”.
Example 3:
IFNULL(${Param1},"Bye_World") (Param1 isn't sent at all) Param2 = "Hello_World" IFNULL returns "Bye_World" |
Scenario:
Param1 is not sent at all, meaning it is null. Param2 is “Hello_World”.
Outcome:
Since Param1 is null, the IFNULL function returns the static fallback value “Bye_World” (Param2 is not used in this expression). Therefore, the result of the expression IFNULL(${Param1},"Bye_World") with the given values is “Bye_World”.
In each example, the behaviour of the IFNULL function is highlighted, illustrating how it handles NULL parameters and returns the appropriate value based on the specified conditions.
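The behaviour shown in these examples can be sketched in plain Python. This is an illustrative model only, not GLU's implementation: NULL is represented by Python's `None`, and the function name is a stand-in.

```python
# Illustrative sketch of IFNULL semantics; None models a NULL/unsent parameter.
def ifnull(value, fallback):
    """Return fallback when value is NULL (None); otherwise return value."""
    return fallback if value is None else value

print(ifnull(None, "Hello_World"))        # fallback is used
print(ifnull("Big_Bang", "Hello_World"))  # first parameter wins
```

Note that an empty string is not NULL under this check; the IFEMPTY function below covers that case.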
The IFEMPTY function is similar to the IFNULL function and is used to check whether a parameter is EMPTY, meaning it lacks an assigned value. If the parameter is indeed EMPTY, the function returns another parameter specified by the user.
Function Structure:
IFEMPTY(${emptyParam},${stringTwo}) |
When a Derived Parameter is created using the IFEMPTY function, if the first parameter is EMPTY, it will return the specified parameter (or, if none is specified, a static value). Conversely, if the first parameter is not EMPTY, it will return the value of the first parameter.
Example
Example 1:
The IFEMPTY function checks if the first parameter (Param1) is an empty string. If it is empty, the function returns the second parameter (Param2). If it is not empty, it returns the value of the first parameter.
IFEMPTY(${Param1},${Param2}) Param1 = "" Param2 = "Hello_World" IFEMPTY returns "Hello_World" |
Scenario:
Param1 is an empty string, as indicated by Param1 = "". Param2 is “Hello_World”.
Outcome:
Since Param1 is empty, the IFEMPTY function returns the value of Param2, which is “Hello_World”. Therefore, the result of the expression IFEMPTY(${Param1},${Param2}) with the given values is “Hello_World”.
Example 2:
The IFEMPTY function checks if the first parameter (Param1) is an empty string. If it is empty, the function returns the second parameter (Param2). If it is not empty, it returns the value of the first parameter.
IFEMPTY(${Param1},${Param2}) Param1 = "Big_Bang" Param2 = "Hello_World" IFEMPTY returns "Big_Bang" |
Scenario:
Param1 is not an empty string, as it is “Big_Bang”. Param2 is “Hello_World”.
Outcome:
Since Param1 is not empty, the IFEMPTY function returns the value of Param1, which is “Big_Bang”. Therefore, the result of the expression IFEMPTY(${Param1},${Param2}) with the given values is “Big_Bang”.
Example 3:
The IFEMPTY function checks if the first parameter (Param1) is an empty string. If it is empty, the function returns the fallback value. If it is not empty, it returns the value of the first parameter.
IFEMPTY(${Param1},"Bye_World") Param1 = "" Param2 = "Hello_World" IFEMPTY returns "Bye_World" |
Scenario:
Param1 is an empty string (""). Param2 is “Hello_World”.
Outcome:
Since Param1 is empty, the IFEMPTY function returns the static fallback value “Bye_World” (Param2 is not used in this expression). Therefore, the result of the expression IFEMPTY(${Param1},"Bye_World") with the given values is “Bye_World”.
IFEMPTY provides a flexible way to handle situations where parameters might lack values, ensuring your program behaves as intended even under varying conditions.
The IFNULLOREMPTY Function is a combination of the IFNULL and IFEMPTY Functions and is used to check whether a parameter is either NULL OR EMPTY and if so, return another parameter that is specified by the user. This function seamlessly navigates between these two states, providing flexibility in handling different conditions.
Function Structure:
IFNULLOREMPTY(${nullOrEmptyParam},${stringTwo}) |
When the first parameter is identified as NULL OR EMPTY, the function returns a specified parameter (or, if none is specified, a static value). Conversely, when the first parameter is neither NULL nor EMPTY, the function returns the value of the first parameter.
Example
Example 1:
IFNULLOREMPTY(${Param1},${Param2}) Param1 = "" Param2 = "Hello_World" IFNULLOREMPTY returns "Hello_World" |
Scenario:
Param1 is an empty string (""). Param2 is “Hello_World”.
Outcome:
The IFNULLOREMPTY function checks if Param1 is either null or empty. In this case, since Param1 is an empty string, the function returns the second parameter, which is “Hello_World”.
Example 2:
IFNULLOREMPTY(${Param1},${Param2}) Param1 = "Big_Bang" Param2 = "Hello_World" IFNULLOREMPTY returns "Big_Bang" |
Scenario:
Param1 is “Big_Bang”, which is neither null nor empty. Param2 is “Hello_World”.
Outcome:
The IFNULLOREMPTY function checks if Param1 is either null or empty. Since Param1 has a value, the function returns the value of Param1, which is “Big_Bang”.
Example 3:
IFNULLOREMPTY(${Param1},"Bye_World") (Param1 isn't sent at all) Param2 = "Bye_World" IFNULLOREMPTY returns "Bye_World" |
Scenario:
Param1 is not sent or is an empty string. Param2 is “Bye_World”.
Outcome:
The IFNULLOREMPTY function returns “Bye_World” because Param1 is either null or empty, and the static fallback value is used in such cases. This function is useful for providing a fallback value when a parameter may not be present or is an empty string.
The IFNULLOREMPTY function proves to be a versatile solution, offering a comprehensive approach to handle scenarios involving both NULL and EMPTY conditions. Its flexibility allows you to tailor the output based on the state of the initial parameter.
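The contrast between the EMPTY check and the combined NULL-or-EMPTY check can be sketched in Python. As before, this is an illustrative model, with `None` standing in for NULL and the function names chosen for readability:

```python
def ifempty(value, fallback):
    # Only an empty string triggers the fallback.
    return fallback if value == "" else value

def ifnullorempty(value, fallback):
    # Either NULL (None) or an empty string triggers the fallback.
    return fallback if value is None or value == "" else value

print(ifempty("", "Hello_World"))                # Hello_World
print(ifnullorempty(None, "Hello_World"))        # Hello_World
print(ifnullorempty("Big_Bang", "Hello_World"))  # Big_Bang
```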
This function is used to retrieve the name of the server where the GLU application is running. This is a placeholder that will be replaced with the actual server’s name when the expression is evaluated.
Function Structure:
${GLU_SERVER_NAME} |
Example
If, for instance, the GLU application is running on a server with the name “DESKTOP-JH9PA6A,” then when you use `${GLU_SERVER_NAME}`, the response will be:
DESKTOP-JH9PA6A |
This allows you to dynamically capture and use the server’s name within your application or responses.
The `GLU_TRX_ID` function is designed to retrieve the unique transaction ID associated with a specific transaction. This identifier serves as a distinct label for each transaction, ensuring that every new transaction is assigned a unique and identifiable value.
Function Structure:
${GLU_TRX_ID} |
Example
If, for instance, the ID for a test transaction is “b67f0087-a3c4-4e28-b8f1-d01b21086b1d”, then when you use `${GLU_TRX_ID}`, the response will be:
b67f0087-a3c4-4e28-b8f1-d01b21086b1d |
This allows you to reference and use the unique transaction ID within your application or responses.
`${GLU_REDELIVERY_COUNTER}` is a system variable that provides the count of retry attempts made by the system during a particular operation. It is often used in conjunction with a retry mechanism to manage and control how many times an operation should be retried.
Function Structure:
${GLU_REDELIVERY_COUNTER} |
Example:
Consider a scenario where a message delivery operation is subject to potential transient failures, such as network issues. A retry mechanism is implemented to handle such failures, and `${GLU_REDELIVERY_COUNTER}` is utilised to keep track of the retry attempts.
Explanation:
Result:
Let us examine how the system behaves during different retry attempts:
The SPLIT Function allows users to break down a string based on a specific character or delimiter. Upon execution, this function generates an array where each element corresponds to a segment of the split string, with indices starting at 0.
Function Structure:
SPLIT(${stringOne}, delimiter) |
This function operates by parsing the input string (${stringOne}) and splitting it at every occurrence of the specified delimiter. After that, it constructs an array containing the segmented strings.
Example
SPLIT(${stringOne},_) |
In this example, the SPLIT function divides the string “Jim_and_Pam” at each underscore character (‘_’). Consequently, it generates an array comprising segments, each represented by a key-value pair, where “value” signifies the segmented string, and “key” denotes its index within the array.
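The segmentation behaviour can be sketched in Python; the key/value shape below is an illustrative rendering of the indexed array the docs describe:

```python
# "Jim_and_Pam" split on "_" yields indexed segments, indices starting at 0.
segments = "Jim_and_Pam".split("_")
array = [{"key": i, "value": part} for i, part in enumerate(segments)]
print(array)
```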
The `CREATE_VALUE_AS_STRING_FROM_ARRAYS` function is designed to extract a string from a multi-level array based on specified parameters.
Function Structure :
CREATE_VALUE_AS_STRING_FROM_ARRAYS(<sourceArrayName1>[].<sourceArrayName2>[], <attributeName>, [<delimiterForArray> <delimiterBetweenValues>]) |
Example
When setting up this Derived Parameter, you should specify ‘numbers’ as the ‘derivedParameterName’ and input the following formula in the ‘Formula’ box:
CREATE_VALUE_AS_STRING_FROM_ARRAYS(boards[].selections[], selection, [; <,>]) |
This configuration will create the ‘numbers’ parameter by extracting the ‘selection’ values from the arrays within ‘boards’, and it will concatenate them into a single string using the specified delimiters [; <,>].
Explanation:
Given the following array structure:
{ |
The function transforms it into the following string:
"numbers": "4,14,18;2,19,20;1,12,18" |
When configuring the Derived Parameter:
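The transformation can be sketched in Python. The nested structure below is a guess at the shape implied by the output string; the array and attribute names follow the example above:

```python
# Hypothetical input matching the boards[].selections[] shape in the example.
boards = [
    {"selections": [{"selection": 4}, {"selection": 14}, {"selection": 18}]},
    {"selections": [{"selection": 2}, {"selection": 19}, {"selection": 20}]},
    {"selections": [{"selection": 1}, {"selection": 12}, {"selection": 18}]},
]
# ";" separates boards, "," separates values within a board.
numbers = ";".join(
    ",".join(str(s["selection"]) for s in board["selections"]) for board in boards
)
print(numbers)  # 4,14,18;2,19,20;1,12,18
```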
The `ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE` function is used to add a fixed value to an array.
Function Structure:
ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Array}, <AttributeName>, <FixedValue>) |
Example:
ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Token}, SerialNumber, ${receiptNo}) |
This function adds a `SerialNumber` attribute to each element in the `${Token}` array and assigns the value of `${receiptNo}` as the fixed value.
Given the input array:
<stdToken units="66.666664" amt="1346" tax="202" tariff="..." desc="Normal Sale" unitsType="kWh" rctNum="639221497438">64879811944360134888</stdToken> |
Applying the function:
ADD_ATTRIBUTE_TO_ARRAY_WITH_FIX_VALUE(${Token}, SerialNumber, ${receiptNo}) |
Results in:
<stdToken units="66.666664" amt="1346" tax="202" tariff="..." desc="Normal Sale" unitsType="kWh" rctNum="639221497438" SerialNumber="1234567890">64879811944360134888</stdToken> |
Here, the `SerialNumber` attribute is added to each `<stdToken>` and `<bsstToken>` element in the `${Token}` array with the fixed value `${receiptNo}` (assuming `${receiptNo}` is dynamically provided). Adjust the parameters as per your specific use case.
The `LENGTH(${string})` function calculates and returns the length (number of characters) of the specified string.
Function Structure:
LENGTH(${string}) |
Example
LENGTH(${attribute}) |
${string} = "Hello_World" |
In this case, the function `LENGTH(${string})` would return the value `11`, as there are 11 characters in the string “Hello_World”.
Given the example:
"attribute": "Hello_world" |
Applying the function:
LENGTH(${attribute}) |
Results:
"lengthOfAttribute": 11 |
Here, the `LENGTH` function calculates the length of the string “Hello_world” in the `${attribute}` parameter and returns the result as a new derived parameter named “lengthOfAttribute”. The value 11 represents the number of characters in the string.
In general terms, this function generates an ISO message and includes length information for the elements within the ${Field12722AllData} variable or field. ISO 8583 messages typically consist of fixed-length or variable-length fields, and the inclusion of length information is crucial for parsing and interpreting the message correctly.
Function Structure:
GET_ISO_MESSAGE_WITH_LENGTHS(${string}) |
Example
GET_ISO_MESSAGE_WITH_LENGTHS(${Field12722AllData}) |
Given the example:
"Field12722AllData": "IFSFData...restOfPayload" |
Applying the function:
GET_ISO_MESSAGE_WITH_LENGTHS(${Field12722AllData}) |
Results in:
[ISO_LENGTH] Value: [3584<IFSFData...restOfPayload</IFSFData] |
Here, `ISO_LENGTH` is a derived parameter that contains the length and the length of the length of the string `${Field12722AllData}`. The specific details of how these lengths are calculated are likely part of the internal logic related to ISO 8583 message formatting. Please refer to your system’s documentation for precise details.
The NOW function is used to capture the current date and time, and it can also be customised to display the time in a specific format.
Function Structure:
NOW([format]) |
Example
1. Using Default Format:
NOW() |
This will store the time with the default format, for example:
"now": "Fri Aug 14 13:10:22 SAST 2020" |
2. Using a Specific Format:
NOW(YYYY-MM-DD HH:MM:SS) |
This will store the time with the specified format, for example:
"now": "2020-08-14 13:10:22" |
3. Without Parentheses
NOW |
This will store the time with a specific format, for example:
"now": "14/08/2022" |
Example in a Template:
{ |
In this example, when the template is processed, the `${now}` variable will be replaced with the current date and time based on the specified or default format.
The `DATEFORMAT` function is used to change the date format from one specified format to another.
Function Structure:
dateformat(${date}, <newFormat>) |
Example
dateformat(${date}, yyyy-MM-dd HH:mm:ss:ms) |
This will convert the date “29/09/2021” (in dd/MM/yyyy format) to the new format “2021-09-29 00:00:00:00” (in yyyy-MM-dd HH:mm:ss:ms format).
Example Usage:
{ “date”: “29/09/2021”, “dateInNewFormat”: “${dateformat(${date}, yyyy-MM-dd HH:mm:ss:ms)}” } |
In this example, the `dateInNewFormat` variable will be replaced with the converted date when the template is processed.
Make sure to replace `${date}` with the actual variable or value containing the original date you want to format and adjust the desired format according to your requirements.
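The same conversion can be sketched with Python's `datetime` (format tokens translated to `strptime`/`strftime` codes; milliseconds omitted for brevity):

```python
from datetime import datetime

# Parse "29/09/2021" (dd/MM/yyyy) and re-emit it in the target pattern.
original = "29/09/2021"
converted = datetime.strptime(original, "%d/%m/%Y").strftime("%Y-%m-%d %H:%M:%S")
print(converted)  # 2021-09-29 00:00:00
```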
The `RANDOM` function generates a random number within a specified range.
Function Structure:
random[min, max] |
Example
random[10, 20] |
This function will return a random number between 10 and 20 (inclusive). Each time this function is called, a different random number within this range will be generated.
Example Usage:
{ |
In this example, the `”randomNumber”` variable will be replaced with a different random number between 10 and 20 each time the template is processed.
Adjust the `min` and `max` values according to your specific range requirements.
The `padrightstring` function is used to pad a string with a specified character (in this case, ‘0’) to the right until it reaches a certain length.
Function Structure:
padrightstring(${string}, length, character) |
Parameters:
Example
padrightstring(${amountOne}, 10, 0) |
In this example, `${amountOne}` is padded on the right with ‘0’ characters until it is 10 characters long.
The `PADLEFTSTRING` function is used to pad a string with a specified character to the left until it reaches a certain length.
Function Structure:
padleftstring(${string}, length, character) |
Example
padleftstring(${amountOne}, 10, 0) |
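Both padding behaviours map directly onto Python string methods, sketched here with an illustrative amount value:

```python
amount = "1346"  # hypothetical ${amountOne} value
print(amount.ljust(10, "0"))  # padrightstring → 1346000000
print(amount.rjust(10, "0"))  # padleftstring  → 0000001346
```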
The `STRIPSTART` function is used to remove leading characters from a string that match the specified character.
Function Structure:
stripstart(${parameterName}, stripChar) |
Example
STRIPSTART(${accountNumber}, 0) |
Result:
The function will remove all leading ‘0’ characters from the account number. So, “00000867512837656” will be saved as “867512837656”, and “00087693487672938” will be saved as “87693487672938”.
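The stripping behaviour can be sketched with Python's `lstrip`, using the account numbers from the result above:

```python
# Remove all leading '0' characters from each account number.
accounts = ["00000867512837656", "00087693487672938"]
stripped = [acct.lstrip("0") for acct in accounts]
print(stripped)  # ['867512837656', '87693487672938']
```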
The `DIFFTIMESTAMP` function is used to calculate the difference between two timestamps in milliseconds.
Function Structure:
difftimestamp(${dateTwo},${dateOne}) |
Example
difftimestamp(${dateTwo},${dateOne}) |
Result:
The function calculates the difference between the two dates in milliseconds. If the two dates are exactly one year apart, it results in:
difftimestamp = 31536000000 |
This represents one year calculated in milliseconds (365 days * 24 hours * 60 minutes * 60 seconds * 1000 milliseconds).
If you need to calculate the time in minutes between the current time and an expiry time, you can follow these steps:
1. Create a Derived Parameter called `timeNow` using the `${NOW}` function.
2. Then create a Derived Parameter called `calcedExpiryTimeMilliSeconds` using the `difftimestamp` function to calculate the time difference in milliseconds.
3. Now you can use the formula to convert `calcedExpiryTimeMilliSeconds` to `minutes`.
This way, you can effectively calculate the time difference between two timestamps and convert it to the desired unit, such as minutes.
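The three steps above can be sketched in Python; the two timestamps are hypothetical stand-ins for the NOW and expiry parameters:

```python
from datetime import datetime

time_now = datetime(2023, 1, 1, 12, 0, 0)      # stand-in for the NOW value
time_expiry = datetime(2023, 1, 1, 12, 45, 0)  # stand-in for the expiry time
# Step 2: difference in milliseconds, as difftimestamp returns.
calced_expiry_ms = int((time_expiry - time_now).total_seconds() * 1000)
# Step 3: convert milliseconds to minutes.
minutes = calced_expiry_ms / 60000
print(calced_expiry_ms, minutes)  # 2700000 45.0
```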
The `RIGHTSTRING` function is used to extract the rightmost characters from a string or parameter.
Function Structure:
${string}.rightString[n] |
Parameters:
Example:
${tax_id}.rightString[8] |
In this example, `${tax_id}` is a parameter or string, and you want to extract the rightmost 8 characters from it.
Result:
If `${tax_id}` contains, for example, “1234567890”, then `${tax_id}.rightString[8]` will result in “34567890” (the rightmost 8 characters).
This function is useful when you need to retrieve a specific number of characters from the right side of a string or parameter.
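In Python terms, this is a negative slice from the end of the string:

```python
tax_id = "1234567890"  # hypothetical ${tax_id} value
print(tax_id[-8:])  # 34567890, the rightmost 8 characters
```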
The `SUBSTRING` function is used to extract a portion of a string based on the specified starting and ending indices.
Function Structures:
1. With only the starting index:
SUBSTRING(${string}, startNumber) |
2. With both starting and ending indices:
SUBSTRING(${string}, startNumber, endNumber) |
Example
1. With only the starting index:
SUBSTRING(${stringOne}, 5) |
In this example, `${stringOne}` is a parameter or string, and you want to extract the substring starting from the 5th index.
Result:
If `${stringOne}` contains “Hello_world”, then `SUBSTRING(${stringOne}, 5)` will result in “_world” (it extracts characters from index 5 to the end, with indices starting at 0).
2. With both starting and ending indices:
SUBSTRING(${stringOne}, 0, 5) |
In this example, `${stringOne}` is a parameter or string, and you want to extract the substring starting from the 0th index up to the 5th index.
Result:
If `${stringOne}` contains “Hello_world”, then `SUBSTRING(${stringOne}, 0, 5)` will result in “Hello” (it extracts characters from index 0 to 5, excluding the character at index 5).
This function is useful for manipulating and extracting specific portions of strings.
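Assuming 0-based indices with an exclusive end index, as the examples indicate, both forms correspond to Python slicing:

```python
s = "Hello_world"
print(s[5:])   # _world  (from index 5 to the end)
print(s[0:5])  # Hello   (index 0 up to, but excluding, index 5)
```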
The `SUBSTRING_BETWEEN` function is used to extract a substring from the original string located between two specified texts or substrings.
Function Structure:
SUBSTRING_BETWEEN(${string}, text1, text2) |
Example
SUBSTRING_BETWEEN(${stringOne}, DE, IZE) |
In this example, `${stringOne}` is a parameter or string, and you want to extract the substring that occurs between the texts “DE” and “IZE” in the original string.
This function is useful for scenarios where you need to extract a specific portion of a string that is bounded by two known texts or substrings.
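A minimal Python sketch of the bounded extraction, using a hypothetical input string (the source gives no concrete example value):

```python
string_one = "CODEMONETIZESIZE"  # hypothetical ${stringOne} value
# Extract the substring between the first "DE" and the next "IZE".
start = string_one.index("DE") + len("DE")
end = string_one.index("IZE", start)
print(string_one[start:end])  # MONET
```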
The `TIMESTAMP` function is used to obtain the current timestamp calculated in milliseconds.
Function Structure:
timestamp |
Example
timestamp |
The `timestamp` function is used independently without any parameters. When called, it returns the current timestamp, representing the number of milliseconds that have elapsed since the Unix epoch (January 1, 1970, 00:00:00 UTC).
Result:
If you call `timestamp` at a specific moment, it will return the corresponding timestamp value.
In the provided example:
This value can be useful for capturing and working with the current time in various scenarios within a system or application.
The `CURRENT_DATE_TIME_UTC()` function returns the current date and time in Coordinated Universal Time (UTC). The format of the returned value is in the ISO 8601 format, which includes the year, month, day, hour, minute, second, and milliseconds, followed by the ‘Z’ indicating UTC.
Function Structure:
CURRENT_DATE_TIME_UTC() |
Example
If you call `CURRENT_DATE_TIME_UTC()` at a specific moment, it will return a result like:
2022-06-22T13:52:50.083Z |
This timestamp provides a standardised representation of the current date and time in UTC and is commonly used in various systems and applications. The ‘Z’ at the end indicates that the time is in UTC.
The expression `${String1}:${string2}:${string3}: ……` is a template or formula used to concatenate (join) multiple strings together using colons (`:`) as separators. The values of `${String1}`, `${string2}`, `${string3}`, etc., will be replaced with actual values when this expression is evaluated.
Function Structure:
${String1}:${string2}:${string3}: ...... |
Example
If you have the following values:
When you substitute these values into the formula `${date}:${string}:${day}`, the result will be:
09/03/2021:Hello_world:Tuesday |
So, the response is a single string where the values of `${date}`, `${string}`, and `${day}` are joined together using colons as separators.
The ADD_DAYS_TO_DATE function is utilised to add a specified number of days to a given date. The syntax is ADD_DAYS_TO_DATE(${date}, <number of days to add>). The number of days can be provided in the request as a variable, for instance, ADD_DAYS_TO_DATE(${date}, ${numberOfDays}).
Function Structure:
ADD_DAYS_TO_DATE(${date},<numbers of days to add>) |
Example
As mentioned above, the ADD_DAYS_TO_DATE function is used to calculate a new date by adding a specified number of days to an existing date. Here are two examples:
Example 1:
ADD_DAYS_TO_DATE(${dateOne},5) |
Example 2:
ADD_DAYS_TO_DATE(${dateOne}, ${date}) |
In both examples, the function returns a new date. The first example adds a fixed number of days (5) to a specific date (${dateOne}). The second example adds a variable number of days (specified by ${date}) to the same initial date (${dateOne}); the specific outcome depends on the value of ${date}.
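The date arithmetic can be sketched with Python's `timedelta`, using an illustrative date value:

```python
from datetime import datetime, timedelta

date_one = datetime.strptime("29/09/2021", "%d/%m/%Y")  # hypothetical ${dateOne}
new_date = date_one + timedelta(days=5)
print(new_date.strftime("%d/%m/%Y"))  # 04/10/2021
```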
The purpose of the REMOVE_DAYS_TO_DATE function is to manipulate dates by subtracting a specific number of days from a given date.
Function Structure:
REMOVE_DAYS_TO_DATE(${dateOne}, <number_of_days_to_remove>) |
Parameters:
${dateOne}: the initial date from which you want to subtract days.
<number_of_days_to_remove>: the number of days you want to subtract from ${dateOne}.
Example
REMOVE_DAYS_TO_DATE(${date}, 5) |
Explanation
`REMOVE_DAYS_TO_DATE` is a convenient function for scenarios where you need to calculate a new date by subtracting a certain number of days from an existing date. It is particularly useful in data manipulations and can be employed in various contexts, such as managing time-based operations or adjusting timestamps based on specific requirements.
The `DIFFERENCE_BETWEEN_DATES` function in GLU calculates the difference in days between two specified dates. It provides a convenient way to determine the duration or gap between two dates, ignoring the time components.
Function Structure:
DIFFERENCE_BETWEEN_DATES(${dateTwo}, ${dateOne}) |
Example
Suppose you have two dates:
Using the `DIFFERENCE_BETWEEN_DATES` function:
DIFFERENCE_BETWEEN_DATES(${dateTwo}, ${dateOne}) |
Result:
The result of this function will be the number of days between the two specified dates:
Result: 26 days
Note:
The `DIFFERENCE_BETWEEN_DATES` function is useful for scenarios where you need to calculate the difference in days between two dates, such as in scheduling, billing, or other time-related operations.
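The day-count calculation can be sketched in Python; the two dates below are hypothetical values chosen to reproduce the 26-day result above:

```python
from datetime import date

date_one = date(2021, 9, 3)   # hypothetical earlier date
date_two = date(2021, 9, 29)  # hypothetical later date
# Difference in whole days, ignoring time components.
print((date_two - date_one).days)  # 26
```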
The `SET_DATA_TO_CACHE` function in GLU is used to store a variable value in a cache, associating it with a specified cache parameter name. This allows you to manage and retrieve values from the cache in your application.
Function Structure:
SET_DATA_TO_CACHE(${NewCacheValuepid},cachepid) |
Example
The example below shows how SET_DATA_TO_CACHE is used in a handler to assign a value to the cache parameter cachepid.
Suppose you want to store the value of a variable `${NewCacheValuepid}` in the cache and associate it with the cache parameter `cachepid`. Here is how you would use the `SET_DATA_TO_CACHE` function:
SET_DATA_TO_CACHE(${NewCacheValuepid}, cachepid) |
Result:
The specified value `${NewCacheValuepid}` will be stored in the cache under the parameter name `cachepid`.
Note:
This function is useful for caching values that need to be accessed or shared across various parts of your application.
The `SET_DATA_TO_CACHE` function facilitates the storage of variable values in a cache, enabling efficient data management and retrieval in GLU applications.
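As a hedged sketch, the cache can be modelled as a keyed store. A plain dict is used purely for illustration; GLU's actual cache implementation is not described here:

```python
# Illustrative model of the GLU cache as a keyed store.
cache = {}

def set_data_to_cache(value, name):
    """Model of SET_DATA_TO_CACHE: store value under the cache parameter name."""
    cache[name] = value

def get_data_from_cache(name):
    """Model of the single-parameter GET_DATA_FROM_CACHE form."""
    return cache.get(name)

def remove_data_from_cache(name):
    """Model of REMOVE_DATA_FROM_CACHE: delete entries for the parameter."""
    cache.pop(name, None)

set_data_to_cache("PID-123", "cachepid")       # hypothetical ${NewCacheValuepid}
print(get_data_from_cache("cachepid"))         # PID-123
remove_data_from_cache("cachepid")
print(get_data_from_cache("cachepid"))         # None
```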
The `GET_DATA_FROM_CACHE` function in GLU is used to retrieve values from a cache. This function has different forms based on the use case.
1. Array Form:
GET_DATA_FROM_CACHE(array[], column1, column2, ${variable}) |
2. Single Parameter Form:
GET_DATA_FROM_CACHE(singleCacheName) |
3. Dynamic Parameter Form:
GET_DATA_FROM_CACHE_USING_DYNAMIC_PARAM(${variable}) |
Examples
1. Array Form:
GET_DATA_FROM_CACHE (chicken[], message, track, ${findme}) |
message | track | id |
---|---|---|
liver | cside | song4 |
heart | aside | song7 |
feet | bside | song1 |
2. Single Parameter Form:
GET_DATA_FROM_CACHE(param) |
3. Dynamic Parameter Form:
GET_DATA_FROM_CACHE_USING_DYNAMIC_PARAM(${variable}) |
Note:
The `GET_DATA_FROM_CACHE` function is versatile, allowing you to retrieve values from arrays or single parameters in the cache, facilitating data retrieval in GLU applications.
The `GET_MAPPED_FROM_CACHE_CONTAINS` function in GLU is used to perform a comparison between a cached table and a parameter. This function checks if any of the look-up values in the cached array are contained in the specified parameter.
Function Syntax:
GET_MAPPED_FROM_CACHE_CONTAINS(tableOfValue[], returnValueColumn, lookUpValueColumn, ${parameter}) |
Example
Parameter: valueToLookInto:”What is my fruit?”
Suppose you have the following data in the cache:
returnValue | lookUpValue |
---|---|
apple | hat |
pair | abc |
orange | xyz12 |
And you want to check if the parameter `${valueToLookInto}` (“What is my fruit?”) contains any of the look-up values in the `lookUpValueColumn`.
GET_MAPPED_FROM_CACHE_CONTAINS(tableOfValue[], returnValue, lookUpValue, ${valueToLookInto}) |
In this case, the function would return:
apple |
This is because “hat” (from the `lookUpValueColumn` corresponding to “apple”) is contained in the `${valueToLookInto}` parameter.
The `GET_MAPPED_FROM_CACHE_CONTAINS` function provides a mechanism to compare a parameter against a cached table and return the corresponding value based on the matching condition. It performs a contains check on the specified parameter against the values in the look-up column of the cached table.
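The contains-match behaviour described above can be sketched in Python. This is a hypothetical model of the function's semantics (the helper name and row representation are illustrative, not GLU internals): scan the cached table and return the `returnValue` whose `lookUpValue` appears as a substring of the parameter.

```python
# Hypothetical sketch of GET_MAPPED_FROM_CACHE_CONTAINS semantics:
# return the returnValue whose lookUpValue is contained in the parameter.
def get_mapped_from_cache_contains(table, return_col, lookup_col, value):
    for row in table:
        if row[lookup_col] in value:  # the "contains" check
            return row[return_col]
    return None

cached = [
    {"returnValue": "apple",  "lookUpValue": "hat"},
    {"returnValue": "pair",   "lookUpValue": "abc"},
    {"returnValue": "orange", "lookUpValue": "xyz12"},
]

# "What is my fruit?" contains "hat" (inside "What"), so this returns "apple".
print(get_mapped_from_cache_contains(cached, "returnValue", "lookUpValue",
                                     "What is my fruit?"))  # apple
```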
The `GET_MAPPED_ARRAY_FROM_CACHE` function in GLU is used to retrieve a mapped array from cache based on a specified condition. It is useful when you have an array saved in cache, and you want to get a specific parameter from the array based on a condition.
Function Structure:
GET_MAPPED_ARRAY_FROM_CACHE(arrayToCache[], saveAttributeArrayInCache2,saveAttributeArrayInCache1,${conditionCache2},-); |
This command serves the primary purpose of removing cached data associated with a particular parameter. For instance, you can apply REMOVE_DATA_FROM_CACHE(param) to precisely delete cache entries linked to the specified parameter.
Function Structure:
REMOVE_DATA_FROM_CACHE(param) |
Example
Suppose you have cached data associated with a parameter called `${myParameter}`, and you want to remove this data from the cache. You would use the following command:
REMOVE_DATA_FROM_CACHE(${myParameter}) |
This command will delete the cache entries linked to the specified parameter `${myParameter}`.
The `REMOVE_DATA_FROM_CACHE` function is employed to selectively remove cache data related to a specific parameter. It provides a means to clean up and manage cached information in a GLU environment.
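Taken together, the three cache functions described above behave like a simple key-value store. The following Python sketch models the assumed semantics with an in-memory dictionary; the function names mirror the GLU functions but are purely illustrative.

```python
# Minimal sketch (assumed semantics) of SET_DATA_TO_CACHE,
# GET_DATA_FROM_CACHE (single-parameter form) and REMOVE_DATA_FROM_CACHE,
# modelled as an in-memory dictionary.
cache = {}

def set_data_to_cache(value, name):
    cache[name] = value

def get_data_from_cache(name):
    return cache.get(name)  # None if the parameter is not cached

def remove_data_from_cache(name):
    cache.pop(name, None)   # no error if the parameter is absent

set_data_to_cache("12345", "cachepid")
print(get_data_from_cache("cachepid"))   # 12345
remove_data_from_cache("cachepid")
print(get_data_from_cache("cachepid"))   # None
```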
The `CREATE_ARRAY` function is used to generate an array, and it takes three parameters:
Functional structure:
CREATE_ARRAY(${arraySizeParameter},[Key],[Value]) |
- `${arraySizeParameter}`: Represents the size of the array, which should be an integer.
- `[Key]`: Represents the key or attribute for each element in the array.
- `[Value]`: Represents the value associated with each key in the array.

Example
CREATE_ARRAY(${countScore},[quickpick],[true])
"boards": [ {"quickpick": true}, {"quickpick": true}, {"quickpick": true}, {"quickpick": true}, {"quickpick": true} ] |
In this example, the array is created as an array of objects. Each object has a key (`quickpick`) and a value (`true`). The size of the array is determined by the value of `${countScore}`, which is set to 5 in this case.
Note: The array elements are identical in structure, and the `quickpick` attribute is set to `true` for each element.
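The behaviour can be sketched in Python: a hypothetical helper (not GLU code) that builds a list of identical key/value objects of the requested size.

```python
# Illustrative sketch of CREATE_ARRAY: build an array of `size` identical
# objects, each carrying the same key/value pair.
def create_array(size, key, value):
    return [{key: value} for _ in range(size)]

# CREATE_ARRAY(${countScore},[quickpick],[true]) with countScore = 5:
boards = create_array(5, "quickpick", True)
print(boards)  # [{'quickpick': True}, {'quickpick': True}, ... 5 elements]
```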
The `CHANGE_PARAMS_VALUE_IN_ARRAY` function allows GLU functions or formulas to be applied to parameter values in an array.
This function takes four arguments: the array name, the parameter name to change, the formula or function to apply (in square brackets), and a boolean flag (shown as `true` in the examples below).
Examples:
CHANGE_PARAMS_VALUE_IN_ARRAY(arrayName,paramName,[SUBSTRING(${paramName},40,400)],true)
CHANGE_PARAMS_VALUE_IN_ARRAY(links,href,[https://glu.payments.com${href}],true)
CHANGE_PARAMS_VALUE_IN_ARRAY(Product,litres,[=${litres}/100],true)
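The second example above (prefixing every `href` in the `links` array with a base URL) can be sketched in Python. The helper name and element representation are hypothetical; only the transformation semantics are taken from the examples.

```python
# Illustrative sketch of CHANGE_PARAMS_VALUE_IN_ARRAY: apply a formula to
# the named parameter of every element in an array.
def change_params_value_in_array(array, param_name, func):
    for element in array:
        if param_name in element:
            element[param_name] = func(element[param_name])
    return array

links = [{"href": "/pets/1"}, {"href": "/pets/2"}]
# Mirrors CHANGE_PARAMS_VALUE_IN_ARRAY(links,href,[https://glu.payments.com${href}],true)
change_params_value_in_array(links, "href",
                             lambda h: "https://glu.payments.com" + h)
print(links[0]["href"])  # https://glu.payments.com/pets/1
```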
The `CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES` function in GLU is designed to dynamically create arrays based on a source string parameter (`stringValue`). This function is particularly useful when you have a structured string and you want to parse it into a nested array, allowing for customisation of attributes and delimiters.
Functional structure:
CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${stringValue},[arrayName arraychildName…], [attribute], [delimiter1 delimiter2…],[extraAttribute1 extraAttribute2…],[extraAttributeValue1 extraAttributeValue2…], arrayIndex) |
Parameters:
- `${stringValue}`: The source string parameter already unmarshalled into GLU.Engine.
- `[arrayName arraychildName…]`: Denotes the array tree structure with potential multiple levels.
- `[attribute]`: The name of the parameter to be saved into the lowest-level array from the source string.
- `[delimiter1 delimiter2…]`: Specifies delimiters in the source string indicating breaks in the tree structure.
- `[extraAttribute1 extraAttribute2…]`: Names of extra attributes to be added to the array.
- `[extraAttributeValue1 extraAttributeValue2…]`: Corresponding values of the extra attributes.
- `arrayIndex`: Determines the starting position in the array for the extra attributes.

Examples
Example 1:
CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${numbers},[boards selections], [], [; <,>],[quickpick],[false], 0) |
- `${numbers}`: Source string parameter.
- `[boards selections]`: Array tree structure with two levels.
- `[]`: No additional attributes at the top level.
- `[; <,>]`: Delimiters indicating breaks in the tree structure.
- `[quickpick]`: Attribute name for the lowest-level array.
- `[false]`: Attribute value for the lowest-level array.
- `0`: Starting position in the array for the extra attributes.

Given the input:
"numbers": "1,2,3,4,5,6;11,12,13,14,15,16" |
the function produces:
"boards": [ {"quickpick": "false", "selections": ["1", "2", "3", "4", "5", "6"]}, {"quickpick": "false", "selections": ["11", "12", "13", "14", "15", "16"]} ] |
Example 2:
CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES(${numbers},[boards selections], [], [;<, >],[],[00], 1) |
- `${numbers}`: Source string parameter.
- `[boards selections]`: Array tree structure with two levels.
- `[]`: No additional attributes at the top level.
- `[;<, >]`: Delimiters indicating breaks in the tree structure.
- `[]`: No additional attribute name at the lowest-level array.
- `[00]`: Attribute value for the lowest-level array.
- `1`: Starting position in the array for the extra attributes.

Given the input:
"numbers": "1,2,3,4,5,6;11,12,13,14,15,16" |
the function produces:
"boards": [ {"selections": ["00", "1", "2", "3", "4", "5", "6"]}, {"selections": ["00", "11", "12", "13", "14", "15", "16"]} ] |
In summary, the function enables the creation of arrays from a structured string, incorporating extra attributes as needed. The syntax is flexible, allowing customisation of array structure and additional attributes based on specific requirements.
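Example 1 above can be sketched in Python under simplified assumptions: a two-level tree, one outer delimiter (`;`) and one inner delimiter (`,`), and a single extra attribute applied to each board. The helper name and signature are hypothetical, not GLU's.

```python
# Simplified sketch of CREATE_ARRAYS_FROM_STRING_WITH_ATTRIBUTES for a
# two-level tree: outer delimiter splits boards, inner delimiter splits
# selections, and an optional extra attribute is attached to each board.
def create_arrays_from_string(source, outer_delim, inner_delim,
                              child_name, extra_attr=None, extra_value=None):
    boards = []
    for group in source.split(outer_delim):
        board = {}
        if extra_attr is not None:
            board[extra_attr] = extra_value
        board[child_name] = group.split(inner_delim)
        boards.append(board)
    return boards

numbers = "1,2,3,4,5,6;11,12,13,14,15,16"
boards = create_arrays_from_string(numbers, ";", ",", "selections",
                                   "quickpick", "false")
print(boards[0])
# {'quickpick': 'false', 'selections': ['1', '2', '3', '4', '5', '6']}
```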
The `CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN` function streamlines the organisation of array data by consolidating both the parent array and its children’s values into a single, cohesive root array. This function simplifies the structure, bringing all child values directly into the parent array.
Function Structure:
CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN(balances) |
Example
In the context of the function CREATE_ARRAY_FROM_ARRAY_AND_ARRAY_CHILDREN(balances), where “balances” represents the parent array, the function operates by consolidating all values from its children arrays into the root array.
For instance, consider the scenario with nested arrays like balances[].balanceResources[]. After applying the function, the parameters originally residing within the “balanceResources[]” array will be reorganised to exist directly within the “balances[]” array.
The `ROUND` function is used to round a decimal number to a specified number of decimal places. For example, if you have the number 123.4567 and you want to round it to two decimal places, you would use `ROUND(123.4567, 2)`, which results in 123.46. This function is useful for ensuring consistency and precision in financial calculations and other scenarios where specific decimal accuracy is required.
Functional structure:
ROUND(${amountToRound}, x) |
- `${amountToRound}`: The decimal number you want to round.
- `x`: The number of decimal places to which you want to round the number.

Examples:
ROUND(123.4567, 2) –> 123.46
ROUND(987.654, 1) –> 987.7
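The same results can be reproduced in Python. Note that Python's built-in `round` uses banker's rounding, so a half-up `Decimal` sketch is the safer analogue for financial rounding (the half-up behaviour of GLU's `ROUND` is an assumption here; the two documented examples are consistent with it).

```python
from decimal import Decimal, ROUND_HALF_UP

# Half-up rounding to `places` decimal places, as typically expected in
# financial calculations. (Assumed equivalent to GLU's ROUND.)
def glu_round(amount, places):
    quantum = Decimal(10) ** -places   # e.g. places=2 -> Decimal('0.01')
    return Decimal(str(amount)).quantize(quantum, rounding=ROUND_HALF_UP)

print(glu_round(123.4567, 2))  # 123.46
print(glu_round(987.654, 1))   # 987.7
```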
The `REPLACE` function in GLU is used to replace specific values within strings. It’s a straightforward text replacement function where occurrences of a particular value in the given string are replaced with another specified value.
Functional structure:
REPLACE(${string},${valueToReplace},${valueToReplaceWith}) |
Example Scenario:
Given the following inputs:
The `REPLACE` function transforms the string to:
"string": "Hello_user" |
The `REPLACE` function is a simple yet powerful tool for modifying strings by replacing specific values. It’s useful when you need to dynamically update or customize string content within the GLU.Engine environment.
The `ENCODESTRING32` and `ENCODESTRING64` functions in GLU are used to encode a string into either Base32 or Base64 formats, respectively. These encoding schemes are commonly employed for various purposes, including secure data transmission and storage.
Functional structures:
ENCODESTRING32(${string}) |
Or
ENCODESTRING64(${string}) |
ENCODESTRING32:
Base32 is a binary-to-text encoding scheme that uses a set of 32 characters, typically the 26 uppercase letters A-Z and the digits 2-7. It is designed to represent binary data in a human-readable format.
ENCODESTRING32(${stringOne}) |
ENCODESTRING32("Hello") returns "JBSWY3DP" (standard RFC 4648 Base32).
ENCODESTRING64:
Base64 is another binary-to-text encoding scheme that uses a set of 64 characters (commonly A-Z, a-z, 0-9, '+', and '/'). It's widely used to encode binary data for safe transmission over text-based channels, such as email attachments or data in URLs.
ENCODESTRING64(${stringOne}) |
ENCODESTRING64("Hello") returns "SGVsbG8=".
These encoding functions are useful when you need to transform strings into a format suitable for secure and reliable data transmission or storage. Choose between Base32 and Base64 encoding based on your specific requirements.
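Both encodings are standard (RFC 4648) and can be verified with Python's `base64` module; GLU's output is assumed to follow the same standard.

```python
import base64

# Standard RFC 4648 Base32 and Base64 of a UTF-8 string, as sketches of
# ENCODESTRING32 and ENCODESTRING64.
def encodestring32(s):
    return base64.b32encode(s.encode("utf-8")).decode("ascii")

def encodestring64(s):
    return base64.b64encode(s.encode("utf-8")).decode("ascii")

print(encodestring32("Hello"))  # JBSWY3DP
print(encodestring64("Hello"))  # SGVsbG8=
```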
The `DECODESTRING32` and `DECODESTRING64` functions in GLU are used to decode a string from either Base32 or Base64 formats back to ASCII. These decoding functions are essential when you have encoded data and need to recover the original content.
Functional Structures:
DECODESTRING32(${string}) |
Or
DECODESTRING64(${string}) |
DECODESTRING32:
Base32 decoding involves converting a string encoded in Base32 format back to its original ASCII representation. Base32 is often used to represent binary data in a human-readable format.
DECODESTRING32(${encodedMessageBase32}) |
DECODESTRING64:
Base64 decoding is the process of converting a string encoded in Base64 format back to its original ASCII representation. Base64 is widely used for encoding binary data for secure transmission or storage.
DECODESTRING64(${encodedMessageBase64}) |
These decoding functions are valuable when you need to reverse the encoding process and obtain the original content from Base32 or Base64-encoded strings. Choose the appropriate decoding function based on the encoding method used.
The `ADD_PERIOD` and `REMOVE_PERIOD` functions in GLU are used to manipulate date time values by adding or removing a specified period of time. These functions are helpful when you need to perform operations like adding or subtracting minutes, hours, days, weeks, months, or years from a given date time.
Functional Structures:
ADD_PERIOD(${param},${daystoAdd},periodType) |
or
REMOVE_PERIOD(${param},${daystoRemove},periodType) |
1. ADD_PERIOD (date):
`ADD_PERIOD(${param},${daystoAdd},periodType)` |
ADD_PERIOD(${staticDateAndTime},30, second) |
2. REMOVE_PERIOD (date):
`REMOVE_PERIOD(${param},${daystoRemove},periodType)` |
REMOVE_PERIOD(${staticDateAndTime},30,second) |
Period Types: `second`, `minute`, `hour`, `day`, `week`, `month`, `year`.
Example Scenarios:
Note: These functions are useful for dynamic date and time calculations in various scenarios, such as setting expiration times for transactions or managing time-sensitive operations.
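The add/remove semantics for the fixed-length period types can be sketched with Python's `timedelta` (month and year arithmetic needs calendar-aware logic and is omitted here; the helper names are illustrative).

```python
from datetime import datetime, timedelta

# Sketch of ADD_PERIOD / REMOVE_PERIOD for fixed-length period types
# (second, minute, hour, day, week). Months/years need calendar arithmetic.
def add_period(value, amount, period_type):
    return value + timedelta(**{period_type + "s": amount})

def remove_period(value, amount, period_type):
    return value - timedelta(**{period_type + "s": amount})

t = datetime(2024, 1, 1, 12, 0, 0)
print(add_period(t, 30, "second"))     # 2024-01-01 12:00:30
print(remove_period(t, 30, "second"))  # 2024-01-01 11:59:30
```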
The `ENCRYPT_USING_RSA_PUBLIC_KEY` function in GLU is used to encrypt a value using the RSA public key encryption algorithm. This function is typically used in scenarios where data needs to be securely transmitted or stored, and RSA public key encryption is employed for confidentiality.
Functional Structure:
ENCRYPT_USING_RSA_PUBLIC_KEY(${decryptedValue},${modulus},${exponent},UTF-8) |
Example
Parameters:
Note: The modulus and exponent are critical components of an RSA public key and are typically part of the public certificate. The public key is used for encryption, and the corresponding private key (not involved in this function) is used for decryption.
This function ensures that sensitive information can be securely transmitted or stored, and only entities possessing the corresponding private key (which is kept secret) can decrypt and access the original data.
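How the modulus and exponent are used can be illustrated with textbook RSA. This is NOT a secure implementation (real RSA encryption, as performed by `ENCRYPT_USING_RSA_PUBLIC_KEY`, applies padding such as OAEP via a crypto library); it is only a sketch of the underlying math, c = m^e mod n, using a well-known toy key.

```python
# Textbook RSA sketch (no padding -- insecure, for illustration only).
# Shows how the public modulus n and exponent e encrypt, and how the
# private exponent d decrypts: c = m^e mod n, m = c^d mod n.
def rsa_encrypt(message_int, modulus, exponent):
    return pow(message_int, exponent, modulus)

def rsa_decrypt(cipher_int, modulus, private_exponent):
    return pow(cipher_int, private_exponent, modulus)

# Classic toy key: n = 3233 (61 * 53), e = 17, d = 2753.
c = rsa_encrypt(65, 3233, 17)
print(rsa_decrypt(c, 3233, 2753))  # 65
```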
The `CONVERT_DATE_TO_TIMESTAMP` function in GLU is used to convert a date to a timestamp. Timestamps are often represented in milliseconds since the Unix Epoch (January 1, 1970). This conversion is useful in various scenarios, such as comparing or manipulating date values.
Functional Structure:
convert_to_timestamp(${date}) |
Parameter:
Example
convert_to_timestamp(${dateOne}) |
This returns a value such as `1584396000000`.
This function is particularly useful when there is a need to work with time in a numeric format, such as when performing date-based calculations or comparisons. The resulting timestamp represents the number of milliseconds that have elapsed since the Unix epoch (January 1, 1970, 00:00:00 UTC), making it a standard format for representing time across various systems and programming languages.
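The conversion itself is standard and can be sketched with Python's `datetime` (the helper name is illustrative; GLU's handling of time zones is assumed, not documented here).

```python
from datetime import datetime, timezone

# Milliseconds since the Unix epoch, as a sketch of CONVERT_DATE_TO_TIMESTAMP.
def convert_to_timestamp(dt):
    return int(dt.timestamp() * 1000)

d = datetime(2020, 3, 17, 0, 0, 0, tzinfo=timezone.utc)
print(convert_to_timestamp(d))  # 1584403200000
```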
The `MERGE_VALUES_IN_ARRAY` function in GLU is used to merge values from two columns within an array into a new column. This operation is particularly useful when you want to create a new column that combines information from existing columns in an array.
Functional Structure:
MERGE_VALUES_IN_ARRAY(product,[type charge],typecharge,-) |
Parameters:
Example
MERGE_VALUES_IN_ARRAY(arrayToMergeValues, [attribute1 |
The function proves useful when there is a need to perform a lookup in the array by matching on two values, providing a convenient method to establish a combined lookup key. This combined key can serve various purposes, such as enhancing data retrieval and facilitating comparisons.
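The merge can be sketched in Python. Based on the structure `MERGE_VALUES_IN_ARRAY(product,[type charge],typecharge,-)`, the last argument is assumed to be the delimiter placed between the merged values; the helper and the sample rows are illustrative.

```python
# Sketch of MERGE_VALUES_IN_ARRAY: combine the named columns of each row
# into a new column, joined by the given delimiter (assumed semantics).
def merge_values_in_array(array, attributes, new_name, delimiter):
    for row in array:
        row[new_name] = delimiter.join(str(row[a]) for a in attributes)
    return array

product = [{"type": "fee", "charge": "10"},
           {"type": "tax", "charge": "2"}]
merge_values_in_array(product, ["type", "charge"], "typecharge", "-")
print(product[0]["typecharge"])  # fee-10
```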
The `HMAC_SHA_1_BASE64_ENCODER` function in GLU is used to generate an HMAC-SHA-1 (Hash-based Message Authentication Code with Secure Hash Algorithm 1) signature for a given base string using a secret key. The result is then encoded in Base64 format.
Functional Structure:
HMAC_SHA_1_BASE64_ENCODER(${baseString},${SignValueKey}) |
Parameters:
Example
HMAC_SHA_1_BASE64_ENCODER(${payload}, ${secretKey}) |
Outcome:
The function takes the provided `${baseString}` and `${secretKey}`, applies the HMAC-SHA-1 algorithm to create a cryptographic signature, and then encodes the result using Base64. The final output is a Base64-encoded string that serves as a secure representation of the HMAC-SHA-1 signature for the given message and key pair.
Practical Application:
This function plays a crucial role in maintaining the security of data exchanges by generating a reliable
and secure signature that can be used to verify the origin and integrity of transmitted information.
The `HMAC_SHA_256_BASE64_ENCODER` function in GLU serves as a critical component for ensuring the integrity and authenticity of data through the generation of a secure signature. Specifically, it utilizes the HMAC-SHA-256 (Hash-based Message Authentication Code with Secure Hash Algorithm 256-bit) algorithm, coupled with a secret key, to produce a tamper-resistant signature. The resulting signature is then encoded into a Base64 format, enhancing its usability and interoperability.
Function Overview:
HMAC_SHA_256_BASE64_ENCODER(${jsonPayload},${privateKey}) |
Parameters:
Example
HMAC_SHA_256_BASE64_ENCODER({"user": "JohnDoe", "role": "admin"}, "SecretKey456") |
The function performs the following steps:
1. Utilises the HMAC-SHA-256 algorithm to create a cryptographic signature.
2. Encodes the resulting signature into Base64 format.
The `HMAC_SHA_256_BASE64_ENCODER` function is a fundamental tool in securing data transactions, offering a reliable means of generating and verifying cryptographic signatures to fortify the integrity of digital communication.
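Both HMAC functions (SHA-1 and SHA-256 variants) follow the standard HMAC construction and can be reproduced with Python's `hmac`, `hashlib`, and `base64` modules. The helper below is a sketch of the assumed behaviour, not GLU code.

```python
import base64
import hashlib
import hmac

# Sketch of HMAC_SHA_1_BASE64_ENCODER / HMAC_SHA_256_BASE64_ENCODER:
# compute an HMAC over the base string with the secret key, then Base64-encode.
def hmac_base64(base_string, key, digest):
    mac = hmac.new(key.encode("utf-8"), base_string.encode("utf-8"), digest)
    return base64.b64encode(mac.digest()).decode("ascii")

sig_sha1 = hmac_base64("payload", "secret", hashlib.sha1)
sig_sha256 = hmac_base64("payload", "secret", hashlib.sha256)
print(sig_sha1)    # Base64 of a 20-byte SHA-1 MAC
print(sig_sha256)  # Base64 of a 32-byte SHA-256 MAC
```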
The `ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER` function in GLU serves as a robust encryption mechanism leveraging the widely adopted Advanced Encryption Standard (AES). This symmetric encryption algorithm, known for its security and reliability, operates with a 256-bit key and employs Cipher Block Chaining (CBC) mode. The purpose is to generate an AES-encrypted representation of sensitive data, typically in JSON format.
Functional Structure:
ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER(${jsonPayload},${privateKey},${initVector}) |
Parameters:
- `${jsonPayload}`: A placeholder for the JSON payload that you want to encrypt. It should be a variable or value containing the data you wish to secure.
- `${privateKey}`: The secret key used for encryption. It must be kept confidential and plays a crucial role in the AES-256-bit encryption algorithm.
- `${initVector}`: The initialisation vector (IV) used in the encryption process. The IV adds randomness to the encryption, making it more secure. It should be unique for each encryption operation and is typically generated randomly.

Example
ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER(${jsonPayload},${privateKey},${initVector}) |
In this example, the function encrypts a JSON payload with AES using a 256-bit secret key (`${privateKey}`) and an initialisation vector (`${initVector}`) for added security. The resulting encrypted data is represented in Base64 encoding.
The `ENCRYPTION_AES_256_BIT_MODE_CBC_BASE64_ENCODER` function provides a secure and standardised approach to encrypting sensitive data, making it an essential tool in scenarios where data confidentiality is of utmost importance.
The `DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER` function in GLU serves as a crucial component for securely retrieving and processing encrypted data. It utilises the AES-256-bit encryption algorithm in Cipher Block Chaining (CBC) mode, providing a reliable and widely adopted method for ensuring the confidentiality of sensitive information.
Functional Structure:
DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER(${EncryptedPayload},${secretKey},${initVector}) |
Parameters:
Example
In this example, the function decrypts a base64-encoded payload that was initially encrypted using the AES-256-bit encryption algorithm in CBC mode. The “SecretKey456” serves as the secret key for decryption, and the optional “InitializationVec123” is provided for accurate decryption. The result is the original data represented as a base64-decoded string.
Practical Applications:
1. Secure Data Retrieval: Enables the secure retrieval of sensitive information stored in an encrypted format.
2. Data Processing: Essential for applications that deal with encrypted data, ensuring confidentiality during processing.
3. Security Integration: Commonly used in systems where encrypted data must be decrypted securely for various operational needs.
In summary, the `DECRYPTION_AES_256_BIT_MODE_CBC_BASE64_DECODER` function plays a crucial role in decrypting data encrypted with AES-256-bit in CBC mode, providing a secure and reliable method for accessing confidential information.
The BASE64_TO_HEX function is designed to convert a Base64-encoded value to its corresponding Hexadecimal representation. This conversion is useful in scenarios where Hexadecimal format is required, such as cryptographic operations or data transformations.
Functional Structure:
BASE64_TO_HEX(${encryptToBase64EncoderUsingHmacSHA256}) |
Parameters:
Example
Note: Ensure that the input value provided to the function is a valid Base64-encoded string, as the function expects Base64-encoded input for accurate conversion.
In summary, the BASE64_TO_HEX function serves as a valuable tool for transforming Base64-encoded data into its corresponding Hexadecimal representation, providing versatility in data processing and cryptographic applications.
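The conversion is a standard two-step decode/re-encode, shown here with Python's `base64` module (the helper name is illustrative).

```python
import base64

# Sketch of BASE64_TO_HEX: decode the Base64 value, then render the raw
# bytes as lowercase hexadecimal.
def base64_to_hex(b64_value):
    return base64.b64decode(b64_value).hex()

print(base64_to_hex("SGVsbG8="))  # 48656c6c6f  (the bytes of "Hello")
```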
The `ENCODE_HEX_TO_BASE64` function is employed to transform a hexadecimal value, often representing a SHA-1 Thumbprint, into a Base64URL encoded format. This conversion is integral when constructing JSON Web Signatures (JWS), particularly when including the x5t header parameter.
Functional Structure:
ENCODE_HEX_TO_BASE64(${x5tSHA}) |
Parameters:
Example
The outcome of the `ENCODE_HEX_TO_BASE64` function is the Base64URL encoded representation of the input hexadecimal value. This result, commonly labeled as `x5t`, is essential when constructing JWS headers, particularly when including the x5t parameter to convey the SHA-1 Thumbprint. The encoded value is typically conveyed as a string suitable for JWS header construction.
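The hex-to-Base64URL step can be sketched in Python. Per the JWS convention for `x5t`, the encoding is URL-safe Base64 with padding stripped (the helper name is illustrative; the padding-stripping behaviour is the JWS convention, assumed to match GLU's output).

```python
import base64

# Sketch of ENCODE_HEX_TO_BASE64: hex -> raw bytes -> Base64URL without
# padding, as used for the JWS x5t header parameter.
def encode_hex_to_base64(hex_value):
    raw = bytes.fromhex(hex_value)
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

print(encode_hex_to_base64("48656c6c6f"))  # SGVsbG8
```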
The MD5_HEX function is designed to generate an MD5 hash in hexadecimal format for a given parameter. MD5 (Message Digest Algorithm 5) is a widely used cryptographic hash function producing a 128-bit (16-byte) hash value, typically expressed as a 32-character hexadecimal number.
Functional Structure:
MD5_HEX(${base_encode}) |
Parameters:
Example
The outcome of this function is the MD5 hash of the input data presented in hexadecimal format. This hash can be used for various purposes, including verifying data integrity and comparing files or values.
Note: While MD5 is widely used, it’s important to note that MD5 is considered insecure for cryptographic purposes due to vulnerabilities that allow for collision attacks. For security-sensitive applications, consider using stronger hash functions like SHA-256 or SHA-3.
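The function's output can be reproduced with Python's `hashlib` (the helper name mirrors the GLU function but is illustrative).

```python
import hashlib

# Sketch of MD5_HEX: the MD5 digest of the input, as a 32-character
# hexadecimal string.
def md5_hex(value):
    return hashlib.md5(value.encode("utf-8")).hexdigest()

print(md5_hex(""))  # d41d8cd98f00b204e9800998ecf8427e (MD5 of the empty string)
```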
The URL_ENCODER function is employed for URL encoding, transforming special characters into a format suitable for inclusion in a URL. This function is particularly useful when dealing with parameters or values that need to be passed in URLs.
Functional Structure:
URL_ENCODER(${publicKey},UTF-8) |
Parameters:
Example
The function encodes the provided parameter for URL usage, ensuring special characters are appropriately represented.
Key Considerations:
1. URL Encoding: URL encoding is necessary to represent reserved characters in a URL to prevent misinterpretation.
2. Character Encoding: UTF-8 is a widely used character encoding scheme that provides support for a broad range of characters.
URL encoding is essential for handling special characters in URLs, ensuring proper functionality and data integrity when transmitting data via web applications.
In summary, the URL_ENCODER function is a valuable tool for preparing parameters or values for inclusion in URLs by encoding special characters, contributing to the overall robustness and reliability of web applications.
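Percent-encoding with an explicit character set can be sketched with Python's `urllib.parse.quote`. Whether GLU also encodes `/` and other reserved characters is an assumption here; `safe=""` forces all reserved characters to be encoded.

```python
from urllib.parse import quote

# Sketch of URL_ENCODER(value, UTF-8): percent-encode the value so it is
# safe to embed in a URL.
def url_encoder(value, encoding="utf-8"):
    return quote(value, safe="", encoding=encoding)

print(url_encoder("key with spaces&symbols"))  # key%20with%20spaces%26symbols
```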
The purpose of the `CONSOLIDATE` function is to aggregate or consolidate data based on the specified criteria, grouping by `accountID` and applying some form of consolidation to the `amount` values.
Function Structure:
CONSOLIDATE(${result},accountID,amount) |
Parameters:
- `${result}`: The variable or parameter where the result of the consolidation will be stored.
- `accountID`: The field or column in your data used as the grouping criterion for consolidation.
- `amount`: The field or column containing numeric values to consolidate, possibly by summing them for each unique `accountID`.

Example
The outcome of this function would be the consolidated result, where data is grouped by unique `accountID` and the `amount` values are aggregated. The specific consolidation operation (e.g., sum, average) depends on the implementation details of the `CONSOLIDATE` function.
The exact behaviour of the `CONSOLIDATE` function may depend on the context or the system in which it is used. It's advisable to refer to the documentation or code implementation for precise details.
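A group-and-sum interpretation (the documentation leaves the exact aggregation open; sum is assumed here) can be sketched in Python:

```python
from collections import defaultdict

# Sketch of CONSOLIDATE(${result},accountID,amount): group rows by the
# grouping key and sum the numeric column (sum assumed as the operation).
def consolidate(rows, group_key, value_key):
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_key]] += row[value_key]
    return [{group_key: k, value_key: v} for k, v in totals.items()]

rows = [{"accountID": "A1", "amount": 10.0},
        {"accountID": "A2", "amount": 5.0},
        {"accountID": "A1", "amount": 2.5}]
print(consolidate(rows, "accountID", "amount"))
# two entries: A1 consolidated to 12.5, A2 to 5.0
```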
The `TOLOWERCASE` function is used to convert the contents of a parameter or variable to lowercase.
Function Structure:
TOLOWERCASE(${parameterName}) |
Example
If ${parameterName} is, for example, “HelloWorld”, the outcome would be “helloworld” after applying the TOLOWERCASE function.
The `TOUPPERCASE` function is used to convert the contents of a parameter or variable to uppercase.
Function Structure:
TOUPPERCASE(${parameterName}) |
Example
If ${parameterName} is, for example, “helloWorld”, the outcome would be “HELLOWORLD” after applying the TOUPPERCASE function.
The `MOD` operation in the provided formula is employed to categorise MSISDN numbers based on the evenness or oddness of their last two digits. The formula is structured as follows:
Function Structure:
= ${msisdnlast2Digit} % 2 |
Example
For instance, if `${msisdnlast2Digit}` is `25`, the operation evaluates to `= 25 % 2`, resulting in `1`. This indicates an odd number.
Outcome:
This approach efficiently segments MSISDN numbers into two distinct categories based on the evenness or oddness of their last two digits. The resulting `routekey` serves as a classification criterion.
The `SIGN_MX_MESSAGE` function is designed to apply the IETF/W3C XML Signature standard, often known as XML-DSig, specifically for ISO 20022 messages. XAdES (XML Advanced Electronic Signatures) outlines profiles of XML-DSig, and XAdES-BES (Basic Electronic Signature) within this context offers fundamental authentication and integrity protection, crucial for advanced electronic signatures in payment systems.
Function Structure:
SIGN_MX_MESSAGE(${messageISO20022},${certificate},${privateKey}) |
Parameters:
The result of this operation is the application of XML-DSig to the provided XML message, creating a digitally signed version. This signature provides assurances of both the authenticity and integrity of the XML document.
The purpose of VERIFY_MX_MESSAGE is to verify the authenticity or integrity of a signed message using cryptographic methods, either with a certificate or a public key.
Function Structures:
VERIFY_MX_MESSAGE(${SignedMessage},${certificate},false) |
or
VERIFY_MX_MESSAGE(${SignedMessage},${publicKey},true) |
The outcome of the function would typically be a boolean value indicating whether the verification process succeeded (true) or failed (false).
JWS, or JSON Web Signature, requires some derived parameter inputs to create a signed JOSE payload. A JWS consists of three parts: Header, Payload, and Signature. Each part is encoded in BASE64URL, and the three parts are then joined on one line, delimited by dots.
Header requires an x5t header parameter:
The GENERATE_FINGERPRINT function is used to generate the SHA-1 thumbprint of a certificate.
Function Structure:
GENERATE_FINGERPRINT(${certWithTags},SHA-1) |
Parameters:
Example:
GENERATE_FINGERPRINT(${certWithTags},SHA-1) [saved as x5tSHA in this example] |
Outcome:
The SHA-1 thumbprint (fingerprint) of the certificate, saved as `x5tSHA` in this example for use as the `x5t` header parameter.
The GENERATE_JWS function is used to create a signed JSON Web Signature (JWS) payload by combining the previously created values.
Function Structure:
GENERATE_JWS(${headerJWS},${responseBodyStart},${rpkPrivateKey},${algorithmJWS}) |
Parameters:
Example
This combines the previously created values to create the signed JWS payload:
GENERATE_JWS(${headerJWS},${responseBodyStart},${rpkPrivateKey},${algorithmJWS}) |
Outcome:
Note:
This process is commonly used in securing and verifying the integrity of data in web communications, especially in scenarios like authentication tokens or data exchange between parties where data integrity and authenticity are crucial.
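The three-part header.payload.signature structure described earlier can be sketched in Python. Note the simplification: GLU's `GENERATE_JWS` signs with a private key (an asymmetric algorithm), whereas this sketch uses HMAC-SHA-256 (the HS256 JWS algorithm) so it stays self-contained; only the Base64URL-and-dots structure is the point here.

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWS-style Base64URL: URL-safe alphabet, padding stripped.
    return base64.urlsafe_b64encode(data).decode("ascii").rstrip("=")

# Simplified JWS (HS256 variant): header.payload.signature, each part
# Base64URL-encoded and joined by dots.
def generate_jws_hs256(header, payload, secret):
    signing_input = (b64url(json.dumps(header).encode("utf-8")) + "." +
                     b64url(json.dumps(payload).encode("utf-8")))
    sig = hmac.new(secret.encode("utf-8"), signing_input.encode("ascii"),
                   hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = generate_jws_hs256({"alg": "HS256", "typ": "JWT"},
                           {"user": "demo"}, "secret")
print(token.count("."))  # 2 -- three dot-separated parts
```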
The `SIGN_USING_PRIVATEKEY_AND_BASE64` function is designed to sign data using a private key and encode the resulting signature in Base64 format. This process is commonly employed for data integrity verification and authentication in secure systems.
Function Structure:
SIGN_USING_PRIVATEKEY_AND_BASE64(${certPassword}, ${certFilenamePath},${dataToSignParam}, ${keyStoreProviderParam},${keyStoreTypeParam}, ${signatureProviderParam}, ${signatureAlgorithmParam}) |
Parameters:
Example
The outcome of the `SIGN_USING_PRIVATEKEY_AND_BASE64` function is the Base64-encoded signature generated by signing the specified data using the provided private key. This signature is commonly used in secure communication systems to verify the authenticity and integrity of transmitted data.
The `CURRENT_NANO_TIME` function returns the current time in nanoseconds, providing a level of precision that is crucial for accurate timing and performance measurements. This function is commonly utilized in high-performance computing applications where precise timing is essential.
Function Structure:
CURRENT_NANO_TIME |
Example
The result of the `CURRENT_NANO_TIME` function is a numerical value representing the current time in nanoseconds. This value is often a large number, reflecting the high precision achieved by measuring time at the nanosecond level. Example: The current nano time now is 17122459102375.
The `GET_BODY()` function is designed to retrieve the entire response body from a transaction. It is particularly useful when you need to capture and use the response body in subsequent transactions or for further processing within your application.
Function Structure:
GET_BODY() |
Example
The result of the `GET_BODY()` function is the complete response body obtained from the current transaction. This includes all content, such as text, JSON, XML, or any other format returned in the response.
The `GET_BODY_AS_JSON()` function is designed to retrieve the content of a request body or a specific portion of the body and parse it as JSON data. This function is particularly useful when dealing with API responses or other data formats delivered in JSON.
Function Structure:
GET_BODY_AS_JSON(${variable}) |
Parameters:
Example
The result of the `GET_BODY_AS_JSON(${variable})` function is the parsed JSON data obtained from the specified location within the response body.
Use Case:
The `GET_BODY_AS_JSON()` function simplifies the process of extracting and parsing JSON data from response bodies. By utilising this function, you can seamlessly integrate JSON processing into your application logic, enabling efficient handling of API responses and other JSON-formatted data.
The `CHECK_IF_PAYLOAD_IS_JSON()` function is utilised to determine whether a given payload is in JSON format. It returns a boolean value, `true` if the payload is valid JSON, and `false` if it is not. This function serves as a quick check to ensure that incoming data adheres to the expected JSON format.
Function Structure:
CHECK_IF_PAYLOAD_IS_JSON(${parameter}) |
Parameters:
The result of the `CHECK_IF_PAYLOAD_IS_JSON(${parameter})` function is a boolean value (`true` or `false`) indicating whether the payload is in valid JSON format.
Use Case:
The `CHECK_IF_PAYLOAD_IS_JSON()` function is a valuable tool for quickly validating whether a given payload is in JSON format. By incorporating this function into your data processing workflows, you can enhance the robustness of your applications by ensuring that they handle JSON data correctly and gracefully handle unexpected formats.
The `GET_VALUE_FROM_JSON_PAYLOAD()` function is designed to retrieve the value of a parameter from a specified path within a JSON payload. It provides a convenient way to extract specific data points from complex JSON structures.
Function Structure:
GET_VALUE_FROM_JSON_PAYLOAD(${jsonPayload2},array[1].param) |
Parameters:
Example
The result of the `GET_VALUE_FROM_JSON_PAYLOAD(${jsonPayload}, array[1].param)` function is the value of the specified parameter located at the given path within the JSON payload.
Use Case:
Note:
The `GET_VALUE_FROM_JSON_PAYLOAD()` function enhances the capability to work with JSON data by providing a means to extract specific values based on their paths within the payload. This is particularly useful in scenarios where precise data extraction is required from nested and complex JSON structures.
The `GET_COMBINED_BODY()` function is a versatile tool crafted to simplify data manipulation by amalgamating information that has been previously split. While splitting data into smaller components is a routine operation, the ability to effectively reassemble this fragmented data into a cohesive whole is equally crucial. The GET_COMBINED_BODY function addresses this need.
Function Structure:
GET_COMBINED_BODY()
Example
The result of the `GET_COMBINED_BODY()` function is the combined or concatenated form of the previously split data segments.
Use Case:
Note:
The `GET_COMBINED_BODY()` function serves as an effective means of consolidating data segments that have been split previously. It plays a pivotal role in scenarios where data needs to be reconstructed or combined after undergoing processes that involve fragmentation.
The GET_COMBINED_BODY_TO_STRING() function is a specialized tool designed for combining data fragments into a single string. While the GET_COMBINED_BODY function is versatile and can handle various data types, GET_COMBINED_BODY_TO_STRING() specifically focuses on string concatenation. It serves as a dedicated tool for simplifying the process of combining fragmented text or character data.
Function Structure:
GET_COMBINED_BODY_TO_STRING()
Example
Outcome:
The result of the `GET_COMBINED_BODY_TO_STRING()` function is a single string formed by concatenating the previously split text segments.
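The behaviour of the two combine functions above can be sketched in Python (a hypothetical illustration; in GLU.Ware the split and combine steps are configuration, not code):

```python
# Hypothetical illustration: a payload previously split into fragments for
# per-item processing, then reassembled into a combined whole.
def get_combined_body(segments):
    """Reassemble previously split segments into one combined body (any type)."""
    return segments

def get_combined_body_to_string(segments, separator=""):
    """Combine previously split text segments into a single string."""
    return separator.join(str(s) for s in segments)

fragments = ["alpha", "beta", "gamma"]        # previously split data
print(get_combined_body_to_string(fragments, separator=","))  # alpha,beta,gamma
```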
The DECRYPT_SEC_KEY_CIPHER function is designed to decrypt a payload using symmetric key encryption with AES/GCM/NoPadding algorithm.
Function Structure:
DECRYPT_SEC_KEY_CIPHER(${payload},${decrKey},${initVector},${secretKeySpecAlgorithm},
Example
The result of the DECRYPT_SEC_KEY_CIPHER function would be the decrypted content of the payload using the provided decryption key, initialisation vector, and other cryptographic parameters. The function uses symmetric key encryption (AES) with GCM mode and no padding to ensure secure and authenticated decryption. The outcome is the original content that was encrypted, now in its plaintext form.
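The AES/GCM/NoPadding decryption this function performs can be sketched using the third-party `cryptography` package (an assumption for illustration; GLU.Ware's internal implementation is not documented here):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party: pip install cryptography

key = AESGCM.generate_key(bit_length=256)    # stands in for ${decrKey}
init_vector = os.urandom(12)                 # stands in for ${initVector}; 12 bytes is standard for GCM
aesgcm = AESGCM(key)                         # AES/GCM/NoPadding equivalent

ciphertext = aesgcm.encrypt(init_vector, b'{"amount": 100}', None)
plaintext = aesgcm.decrypt(init_vector, ciphertext, None)   # authenticated decryption
print(plaintext)   # b'{"amount": 100}'
```

Because GCM is an authenticated mode, decryption fails outright if the ciphertext or key material has been tampered with, which is what makes the outcome "secure and authenticated" as described above.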
The VALIDATE_CARDS_WITH_LUHN_ALGO function employs the Luhn algorithm to validate a given identity number (presumably representing a credit card number). The Luhn algorithm is a simple checksum formula used to validate various identification numbers, including credit card numbers.
Function Structure:
VALIDATE_CARDS_WITH_LUHN_ALGO(${identityNumber})
Example
The following function applies the Luhn algorithm rules. The output is either 'false' (doesn't pass the Luhn test) or 'true' (does pass the Luhn test):
VALIDATE_CARDS_WITH_LUHN_ALGO(${identityNumber})
The function then performs the Luhn algorithm on the provided identity number and returns either ‘false’ if the number fails the Luhn test or ‘true’ if it passes the test. This provides a quick check to determine the validity of a credit card number based on the Luhn algorithm rules.
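The Luhn checksum itself is public and simple; here is a Python sketch of the check (a hypothetical equivalent, not GLU.Ware's actual implementation):

```python
def validate_cards_with_luhn_algo(identity_number: str) -> bool:
    """Apply the Luhn checksum: double every second digit from the right,
    subtract 9 from any doubled value above 9, and check the total
    is divisible by 10."""
    digits = [int(d) for d in identity_number]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(validate_cards_with_luhn_algo("4111111111111111"))  # True
print(validate_cards_with_luhn_algo("4111111111111112"))  # False
```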
The GENERATE_GUID function is a utility designed to create a Globally Unique Identifier (GUID), a unique identifier for objects or entities within a computer system. A GUID is a 128-bit value usually represented as a string of hexadecimal digits separated by hyphens.
Function Structure:
GENERATE_GUID
Example
When this function is used, it dynamically generates a unique identifier at runtime.
Globally Unique Identifier (GUID): This is a unique identifier that consists of 128 bits, ensuring a high probability of uniqueness. The format is typically a string of hexadecimal digits separated by hyphens (e.g., “550e8400-e29b-41d4-a716-446655440000”).
Use Case:
GUIDs are commonly employed in software development and database management scenarios where ensuring a unique identifier is crucial. They are particularly useful when there’s a need to uniquely identify objects or records across different systems or networks.
Outcome:
The result of calling GENERATE_GUID is a newly generated GUID, ensuring that the identifier is highly likely to be unique within the system or network. This uniqueness is achieved through an algorithm that minimises the probability of collision (two GUIDs being the same).
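The equivalent in Python is a version-4 UUID, which illustrates the 128-bit, hyphen-separated format described above:

```python
import uuid

# A GUID/UUID is 128 bits rendered as 32 hex digits in 8-4-4-4-12 groups,
# e.g. "550e8400-e29b-41d4-a716-446655440000".
guid = str(uuid.uuid4())
print(guid)
```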
The GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD function is designed to extract a string value from a JSON object, specifically addressing scenarios where the JSON payload contains duplicated keys. This function is crucial in situations where parsing duplicated keys as a string might lead to exceptions due to conflicts.
Function Structure:
GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD(${parentPayload},objectPath)
Parameters:
Example
Example Usage:
GET_STRING_FROM_DUPLICATE_KEYS_ARRAY_In_JSON_PAYLOAD(${parentPayload}, response.transaction.receiptsFields.line)
Scenario:
Consider the following JSON payload:
{ … }
The function allows for manipulation of the JSON object, ensuring that the string values associated with duplicated keys are extracted without causing exceptions due to conflicts. The extracted value can then be used as needed within the system.
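Why duplicated keys are a problem, and one way to fold them safely, can be sketched in Python (a hypothetical payload mirroring the receiptsFields scenario; standard JSON parsers otherwise silently keep only the last duplicate):

```python
import json

# Hypothetical payload with a duplicated "line" key.
payload = '{"receiptsFields": {"line": "Item: Coffee", "line": "Total: 3.50"}}'

def collect_duplicate_keys(pairs):
    """Fold duplicated keys into a list instead of silently keeping the last one."""
    result = {}
    for key, value in pairs:
        if key in result:
            existing = result[key]
            result[key] = existing + [value] if isinstance(existing, list) else [existing, value]
        else:
            result[key] = value
    return result

data = json.loads(payload, object_pairs_hook=collect_duplicate_keys)
print(data["receiptsFields"]["line"])  # ['Item: Coffee', 'Total: 3.50']
```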
TRIM
Function Structure:
TRIM(${param})
The TRIM function is used to remove extra spaces from a given string, specifically the spaces before and after the actual content. However, it preserves any spaces within the string, so if there are spaces between words or values, they will remain unchanged.
Parameters:
${param}: The string containing spaces before or after the main content, e.g. " Hello World ".
Outcome:
TRIM returns "Hello World". In this example, the function removes the spaces around "Hello World" but keeps the space between "Hello" and "World". This is useful for cleaning up user input or data where extra spaces might be present.
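The behaviour matches Python's built-in `str.strip()`, shown here for illustration:

```python
def trim(param: str) -> str:
    """Remove leading and trailing spaces, keeping internal spaces intact."""
    return param.strip()

print(trim(" Hello World "))  # "Hello World"
```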
This function decrypts a Zone Pin Key (ZPK) encrypted under a Zone Master Key (ZMK). It verifies the decrypted ZPK against the provided Key Check Value (KCV) to ensure data integrity and authenticity.
Function Structure:
GET_DECRYPTED_ZPK_UNDER_ZMK(${comp1},${ccv1},${comp2},${ccv2},${comp3},${ccv3},${ZMK_KCV},${EncryptedZPK},8,16,${ZPK_KCV})
Parameters:
${comp1}, ${comp2}, ${comp3}: Key components used to form the ZMK.
${ccv1}, ${ccv2}, ${ccv3}: Key Check Values (KCVs) for each component, used to validate the components before combining.
${ZMK_KCV}: Combined KCV for the ZMK, ensuring that the ZMK is correct.
${EncryptedZPK}: The ZPK encrypted under the ZMK, which will be decrypted.
8: Block size for decryption.
16: Expected length of the decrypted ZPK.
${ZPK_KCV}: Expected KCV of the decrypted ZPK, used to validate the final output.
Example:
Decrypted ZPK: 65291eb84f50e1a8c4589136d9000fbb741a0f541ab2e7e2aaa8fe8f0d762904
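Key components are conventionally combined by XOR-ing them together to form the working key (here the ZMK). The sketch below illustrates that step only; the component values are fabricated for the example, and KCV verification and the actual ZPK decryption are performed by the HSM / GLU.Engine:

```python
def combine_key_components(*components_hex: str) -> str:
    """XOR hex-encoded key components together to form the combined key."""
    combined = bytes.fromhex(components_hex[0])
    for comp in components_hex[1:]:
        combined = bytes(a ^ b for a, b in zip(combined, bytes.fromhex(comp)))
    return combined.hex().upper()

zmk = combine_key_components(
    "0123456789ABCDEF0123456789ABCDEF",   # ${comp1} (fabricated)
    "FEDCBA9876543210FEDCBA9876543210",   # ${comp2} (fabricated)
    "0F1E2D3C4B5A69780F1E2D3C4B5A6978",   # ${comp3} (fabricated)
)
print(zmk)
```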
This function encrypts a PIN using either AES or TDES according to ISO 9564 format.
Function Structure:
ENCRYPT_PIN_USING_ISO_9564_FORMAT(${clear_pin},${account_number},${key},<algorithm>,<mode>,<padding>,<format>)
Example:
ENCRYPT_PIN_USING_ISO_9564_FORMAT(${clear_pin},${account_number},${decryptedZPK},AES,CBC,NoPadding,4)
<format> – ISO 9564 PIN block format (e.g., 4 for Format 4).
${clear_pin} – The plaintext PIN to encrypt.
${account_number} – The account number for PIN block formatting.
${decryptedZPK} – The decrypted Zone Pin Key (ZPK) used for encryption.
<algorithm> – Encryption algorithm (AES or TDES).
<mode> – Encryption mode (e.g., CBC).
<padding> – Padding scheme (e.g., NoPadding).
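The example above uses Format 4 (AES-based). As a simpler illustration of how an ISO 9564 PIN block ties the PIN to the account number, here is a sketch of the classic Format 0 construction of the clear PIN block, before any encryption is applied (Format 4's construction is more involved and is not shown):

```python
def iso0_pin_block(pin: str, pan: str) -> str:
    """Build an ISO 9564 Format 0 clear PIN block: a PIN field XOR-ed with
    a field derived from the account number (this is the block that then
    gets encrypted under the ZPK)."""
    pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")   # control nibble, length, PIN, F-padding
    pan_field = "0000" + pan[:-1][-12:]                # 12 rightmost PAN digits, excluding check digit
    xored = int(pin_field, 16) ^ int(pan_field, 16)
    return f"{xored:016X}"

block = iso0_pin_block("1234", "4321987654321098")
print(block)  # 04122D789ABCDEF6
```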
This function decrypts a PIN encrypted in ISO 9564 format.
Function Structure:
DECRYPT_PIN_USING_ISO_9564_FORMAT(${encryptedPin},${account_number},${key},<algorithm>,<mode>,<padding>,<format>)
Example:
DECRYPT_PIN_USING_ISO_9564_FORMAT(${encryptedPin}, ${account_number}, ${myKey}, AES, CBC, NoPadding, 4)
The TENACITY_PAN_ENCRYPT function is a specialised encryption algorithm designed for use with specific PAN (Primary Account Number) types.
Function Structure:
TENACITY_PAN_ENCRYPT(${pan})
Example
When used with an actual PAN value, it encrypts the PAN according to the specific algorithm in use.
PAN (Primary Account Number):
This is a numeric identifier that is essential in financial transactions. In this context, it refers to a credit card number.
Use Case:
The TENACITY_PAN_ENCRYPT function is employed for the encryption of PAN data. The exact details and considerations for using this function are typically provided by GLU Support. Users are advised to consult with GLU Support to understand the appropriate scenarios and guidelines for using this encryption algorithm.
Outcome:
The result of calling TENACITY_PAN_ENCRYPT(${pan}) is the encrypted version of the provided PAN. For instance, if the original PAN is “1944219200122247”, the function might return “1944882297307746” as the encrypted PAN.
It is important to understand the role software plays in staying competitive in the market, and notably the benefits of no-code platforms. There are, however, a number of misperceptions and concerns (myths) about no-code platforms, some of the most common of which are outlined below. It is important to recognise these concerns and to understand why they are misplaced, so that the opportunities presented by no-code solutions can be embraced.
MYTH #1 – NO-CODE IS ONLY FOR BASIC USE CASES
No-code platforms can lower the cost of building apps and enable experimentation and the exploration of new ideas by building apps and their underlying ‘plumbing’ to test viability and business value quickly. No-code platforms can be used to deliver business mission-critical solutions in isolation, or in some cases, Software developers may still be involved (see Myth #4) to handle more sophisticated requirements. Importantly though, in recent years no-code platforms (such as GLU.Ware) are being used to bring complex Enterprise level Use Cases to life without any software developers being involved.
MYTH #2 – NO-CODE IS JUST ANOTHER HYPE
The concept of using visual tools for software development, known as visual CASE (computer aided software engineering) tools, has been around since the 1970s, but early attempts were complex and required specialised knowledge. As a result, business users turned to homegrown tools like spreadsheets or databases, which were easier to build but had performance and security issues. It wasn’t until the mid-2000s, with advancements in cloud computing and software platforms, that the idea of no-code development began to address the historical challenges of software engineering in a way that is enterprise-ready. While the concept of no-code has been around for decades, its simplicity, ease of use, and ability to address enterprise needs has become widely recognised in recent years.
MYTH #3 – THERE'S NO REAL DIFFERENCE BETWEEN LOW-CODE AND NO-CODE
Low-code and no-code are not the same thing. They both use visual abstractions to simplify software development, but they are designed for different users and offer different benefits. Low-code platforms aim to reduce the amount of code that needs to be written by more junior developers, but still require knowledge of proper application design and architecture, as well as some lightweight coding knowledge. No-code platforms such as GLU.Ware, on the other hand, are intended for non-developers and aim to fully remove the need for coding.
MYTH #4 – NO-CODE PROJECTS CAN’T BE COMBINED WITH TRADITIONAL SOFTWARE DEVELOPMENT
No-code built solutions – both Business Apps and the underlying integration architecture (such as where GLU.Ware is used) can be used for a wide range of software solutions, including mission-critical ones. It is also possible to incorporate traditional software development elements into no-code projects by forming teams that include both no-code creators and software developers. These teams can collaborate efficiently and deliver enterprise-grade applications using no-code.
MYTH #5 – NO-CODE IS GOING TO PUT SOFTWARE DEVELOPERS OUT OF WORK
The idea that no-code development will replace software developers is false. There will always be a need for software developers to work with no-code teams, as software development languages and frameworks continue to evolve and push the boundaries of innovation. No-code tools are typically built on standardised components that were first developed and tested by software developers before being offered as pre-built components for no-code development. Therefore, software developers will continue to play an important role in the development of new digital apps and services.
MYTH #6 – NO-CODE WILL GET OUT OF CONTROL
The notion that no-code platforms are inherently insecure and unreliable is not true. While it is understandable for IT to worry about non-compliant and unreliable apps, modern no-code platforms offer governance and reporting capabilities to ensure proper use. In GLU.Ware, maker-checker controls, workflows and audit trails are just some of the capabilities available to ensure users follow appropriate software ‘development’ (i.e. configuration) practices. By implementing controls and governance, no-code platforms encourage the use of a standard platform that can be consistently governed.
MYTH #7 – NO-CODE PROJECTS FOLLOW THE SAME APPROACH AS TRADITIONAL SOFTWARE DEVELOPMENT
The development practices for no-code platforms should be tailored to take advantage of their unique strengths, rather than simply treating them like traditional development methods. No-code platforms intentionally abstract many details, which means that a different set of skills and backgrounds will be needed for a no-code team. GLU’s no-code methodology is principled on the ability to empower non-developers with the means of creating APIs and Integration components at speed (see the GLU ‘V-model of testing’), which in turn underpins an ability to Innovate at Speed.
Content is based on GLU’s Team experience and interpretation of the summary in Chapter 2 of The No-Code-Playbook – Published 2022 – ISBN 979-8-218-06204-0
GLU.Ware is all about speed. Not just the ability to ‘Integrate at Speed’ but equally so, to ‘Process at Speed’. It’s our mission to ensure that GLU.Engines in a Client’s ecosystem are able to scale horizontally and vertically so as to guarantee that those GLU.Engines never cause transactional bottlenecks.
Performance Testing GLU.Engines is thus an integral part of the GLU.Ware Product Quality Assurance discipline. The objective of our performance testing process is to identify opportunities to optimise the GLU.Ware code, its configuration and how it is deployed and in-so-doing to continuously improve the performance of GLU.Engines.
Our Performance Testing process provides GLU and our Clients with insight into the speed, stability, and scalability of GLU.Engines under different conditions.
We have defined three performance test scenarios to cover the spectrum of solutions which GLU.Engines can provide integrations for. To focus on maximum throughput we have defined a simple ‘Straight Line Scenario’; to explore the impact of latency on a GLU.Engine we have included the ‘Latency Scenario’; and to understand the impact of complexity we have included the ‘Complex Integration Scenario’.
The Straight Line Scenario is a simple Asynchronous JSON payload pass through, a delivered JSON Payload simply being offloaded downstream to a Rabbit Message Queue.
The ‘Latency Scenario’ is similar to the Straight Line Scenario except that the payload is a USSD menu payload, and it is passed through a GLU.Engine which produces transactions in a Rabbit Message Queue. Those transactions are in turn consumed by another GLU.Engine from the Rabbit Message Queue and are then passed to a stub which has been configured with variable latency in its response (to emulate latency in downstream Endpoint systems).
The Complex Integration Scenario involves multiple layers of orchestration logic, multiple downstream Endpoints including multiple protocol transformations and multiple synchronous and asynchronous calls to Databases and Message Queues.
Straight Line Integration Scenario | Complex Integration Scenario | |
TPS | 4,400 | 754 |
CPUs | 8 | 4 |
Setup | Containers: 1 Docker Swarm Manager (4vCPU, 16 GiB) and x2 Worker Nodes (2 vCPU, 4 GiB) | VM (4 vCPU, 8 GiB Memory) |
Additionally, we have defined a Performance Test scenario for the GLU.USSD solution which is pre-integrated with the GLU.Engine.
USSD Solution | USSD with Latency Injection | |
TPS | 915 | 1 Silo – 350 (Latency of 100ms) 3 Silos – 702 (Latency of 100ms) |
CPUs | 16 | 4 |
Setup | Containers: 1 Docker Swarm Manager (8vCPU, 16 GiB) and x2 Worker Nodes (4 vCPU, 16 GiB) | VM (2 vCPU, 8 GiB Memory) – GLU.Engine Producer Containers: 1 Docker Swarm Manager (8vCPU, 16 GiB) and x2 Worker Nodes (4 vCPU, 16 GiB) – RabbitMQ VM (4 vCPU, 16 GiB Memory) – GLU.Engine Consumer & USSD |
GLU.Engines are CPU bound, so ‘vertically scaling’ CPU leads to a better than linear performance improvement. GLU.Engines can also be horizontally scaled behind a load balancer or a Docker Swarm Manager (proxy) if containerised.
GLU.Engines have the ability to absorb latency in End Points up to 100ms and still achieve considerable TPS, with increased TPS being possible if horizontal scaling is architected into the deployment architecture.
For optimal performance of a system of GLU.Engines, as reflected in the TPS benchmark figures for the systems defined in this document, the following recommendations are advised:
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which were used to run the GLU.Engines and Rabbit MQ processes, 3 Nodes were set up over 3 EC2 instances.
Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
Swarm Manager | t3a.xlarge | 4 vCPU | 16 GiB |
Swarm Node 1 | t3.medium | 2 vCPU | 4 GiB |
Swarm Node 2 | t3.medium | 2 vCPU | 4 GiB |
System Versions
System | Version |
GLU.Ware | 1.9.13 |
RabbitMQ | 3.8.7 |
Swarmpit | 1.9 |
JMeter Test Setup Properties
Deployment Architecture
Test Criteria | Result |
Users | 400 |
Duration | 1 hour |
TPS | 4,400 |
% Errors | 1.22 % |
Total Transactions | 15,846,714 |
JMeter Results Summary
Rabbit MQ Result Summary
Commentary
An initial test involving a single node with 4 vCPUs and 16 GiB of Memory achieved a result of 1,885 TPS. The 4,400 TPS result was achieved as described above with a Swarmpit Manager and two nodes, collectively utilising 8 vCPUs and 16 GiB of Memory. This shows that the GLU.Engine is CPU bound, such that by reconfiguring and allocating additional CPU one is able to scale the performance of a GLU.Engine setup better than linearly.
The complex scenario represents 2 benchmarks: the 1st excludes USSD and the 2nd includes USSD.
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communication. In this test a docker container was not used; rather, a GLU.Engine was deployed directly to a single AWS c5.xlarge (4 vCPU, 8 GiB Memory) EC2 instance. This did not include load-balancing, as the objective was to understand the load a single GLU.Engine could achieve.
The diagram below outlines the complex architecture. Note how JMeter injects transactions and each transaction is orchestrated across a DB connection to MSSQL, REST, SOAP and Rabbit connections, returning a response back to JMeter, where the time of the finished transaction was taken.
Test Criteria | Result |
TPS | 754 |
The graph below illustrates how performance scaled in proportion to the VM size of each EC2 instance being increased.
Commentary
The key factor influencing performance when there is minimal latency on the response endpoints was found to be the number of vCPUs available.
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which were used to host the GLU.Engines and execute the GLU.USSD tests, 4 Nodes were set up involving 1 Manager and 3 Worker nodes.
Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
Swarm Manager | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 1 | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 2 | t3.xlarge | 4 vCPU | 16 GiB |
Swarm Node 3 | t3.xlarge | 4 vCPU | 16 GiB |
System Versions
System | Version |
GLU.Ware | 1.9.14 |
Swarmpit | 1.9 |
Test Criteria | Result |
TPS | 914.9 |
Performance Testing was executed in GLU’s AWS Test Lab within a single VPC. This ensures little to no degradation in performance due to network communications. Swarmpit was used to manage the Docker environments which supported the container running RabbitMQ.
The latency scenario was designed in such a way as to maximise performance where the end points were slow to respond, with a high degree of latency. The performance testing was set up with horizontal scaling across 3 silos, with contention on the test stubs being managed through a load balancer. Injection was carried out through a dedicated server for JMeter, which injected USSD menu transactions into a GLU.Engine set up to distribute transactions to 3 separate Rabbit queues in a round-robin fashion.
Virtual Machine Sizes
EC2 | Virtual AWS System | CPU | Memory |
Decision Maker | t2.large | 2 vCPU | 8 GiB |
USSD / Integration Engines | t3.xlarge | 4 vCPU | 16 GiB |
Test Stub | t2.medium | 2 vCPU | 4 GiB |
Swarm Manager | a1.2xlarge | 8 vCPU | 16 GiB |
Swarm Node 1 | t3a.xlarge | 4 vCPU | 16 GiB |
Swarm Node 2 | t3a.xlarge | 4 vCPU | 16 GiB |
System Versions
System | Version |
GLU.Ware | 1.9.22 |
Swarmpit | 1.9 |
Test Criteria | Number of Silos | TPS Results |
Latency 100ms | Silos 1 | 350 TPS |
Latency 100ms | Silos 3 | 700 TPS |
GLU.Engines are able to absorb increased latency if sufficient memory is allocated and throttle settings are adjusted to allow for the buffering of transactions. See Managing Load with Throttles.
Commentary
Even at extremely high latency, in excess of 3 seconds, GLU.Engines will still deliver ±90 TPS.
Reducing latency to 100ms increases throughput to ±350 TPS.
GLU.Engines scale in a near linear fashion. As additional performance is required additional servers can be added.
An increase in latency may necessitate additional memory allocation for the GLU.Engine to accommodate the buffering of transactions.
It is possible to configure a set of variables such that the values of the Variable will change depending on the Environment which the GLU.Engine is run on. This means the user need not set fixed values inside the configuration itself, values that would otherwise need to be changed during the lifecycle of the engine.
An example of this is if you want to change a Slack channel which messages are sent to depending on whether you are deploying on a development environment or a production environment. It is possible to have a single variable name such as “slackKeyValue” with the channel keys for development and production being different.
Press the “Global Variables” button in the Environments tool, to access the Variable configuration screen.
The Global Variables screen shows the variables that exist. It is possible to “Add Variables”, and to modify and delete existing variables from this screen. Each Variable must have a unique name per Client, with a description of what the variable is used for.
If you add or modify a variable you will be presented with the ‘Edit Variable’ dialogue.
In this dialogue, you can define/modify the name of the variable and description. For each environment, you can set the value to be used. It is not necessary to enter values for Environments that are not used. If the value is left empty, then null will be present in the GLU.Engine when used in that environment. Once Variables have been set per environment, those variables will be used for each GLU.Engine Environment-specific build.
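The per-environment resolution described above can be sketched in Python (a hypothetical illustration with made-up variable and environment names; note how an Environment with no value set resolves to null, here `None`):

```python
# Hypothetical Global Variables table: one value per Environment.
GLOBAL_VARIABLES = {
    "slackKeyValue": {
        "Development": "dev-channel-key",
        "Production": "prod-channel-key",
        # "UAT" deliberately left unset -> resolves to null
    },
}

def resolve(variable: str, environment: str):
    """Return the value of a Global Variable for a given Environment, or None."""
    return GLOBAL_VARIABLES.get(variable, {}).get(environment)

print(resolve("slackKeyValue", "Production"))  # prod-channel-key
print(resolve("slackKeyValue", "UAT"))         # None
```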
The variable that you have created is now available to be used in your integration. It will be added to the parameter dropdown box with the prefix “env_” .
Example of global variables that will be present in the pulldown box with the “env_” prefix:
Example of using a global variable in the context name:
/${header.env_slackKeyValue} … where slackKeyValue is the variable that was defined, env_ is the prefix identifying it as a global variable, and /${header. represents the part of the parameter being passed to the URI.
See how the values are masked in the logs
Note: avoid using variables in the Header, Body, or query sections of endpoint calls, as they will not be encrypted when presented in the logs.
Where an Environment Variable needs to have a condition applied and Action taken, when in the Integration Builder, and configuring the Handler, select the Environment Variable from the Parameter Name drop-down.
See the example below.
TCP/IP is an abbreviation for Transmission Control Protocol / Internet Protocol. It is a set of protocols that define how two or more computers can communicate with each other. The protocol is effectively a set of rules that describe how the data is passed between the computers. It is an open standard so can be implemented on any computer with the appropriate physical attributes.
If Properties need to be set for the TCP connector, they are configured as key/value pairs. For example, for a TCP/IP connector to an HSM, the key = textline must be set to the value = true, as shown in the example below.
As another example, by default, TCP/IP Connectors are asynchronous. If you require the Connector to be synchronous, the key = synchronous must be set to the value = true.
As a final, slightly more complex example, where messages sent over TCP/IP include a variable-length byte header known as the Variable Length Indicator (VLI), proper configuration of the decoder and encoder is important. Here’s how to handle such requirements:
Variable Length Indicator (VLI):
Configuration Properties:
In the screenshot below the above-described Decoder / Encoder and Field Length settings are shown. Additionally, you’ll see the TCP/IP property key = usebyteBuf is set to value = true … with this setting GLU will turn the message body into ByteBuf before sending it out. Just like an ordinary primitive byte array, ByteBuf uses zero-based indexing. It means the index of the first byte is always 0 and the index of the last byte is always capacity - 1.
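The VLI framing idea can be sketched in Python (a hypothetical illustration assuming a 2-byte big-endian length header; the actual header size and byte order depend on the Endpoint system):

```python
import struct

def encode_with_vli(message: bytes) -> bytes:
    """Prefix the message with a 2-byte big-endian length indicator (VLI)."""
    return struct.pack(">H", len(message)) + message

def decode_with_vli(stream: bytes):
    """Read the 2-byte length indicator, then extract exactly that many bytes.
    Returns (message, remaining bytes) so back-to-back frames can be consumed."""
    (length,) = struct.unpack(">H", stream[:2])
    return stream[2:2 + length], stream[2 + length:]

framed = encode_with_vli(b"0800 network echo")
message, rest = decode_with_vli(framed)
print(message)  # b'0800 network echo'
```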