Testing End Points

Approach to testing end points


To confirm that an API endpoint is available and suitable for the integration scenario you are working with, follow these steps:

  1. Familiarize yourself with the API’s specifications and requirements.
  2. Define test cases and test data that cover a range of scenarios for the API, such as error handling, edge cases, and normal use cases.
  3. Use a testing tool or programmatically send requests to the API and check the responses against the expected results (see the example below this list).
  4. Monitor the API’s performance, such as response times and error rates, to ensure that it meets the required standards.
  5. Repeat the testing process in different environments and at different loads to ensure the API’s stability and scalability.
  6. Address any issues or bugs found in the testing process with the API provider and repeat the testing process until the API is confirmed fit for purpose.
  7. Document the testing process and results for future reference and maintenance.
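
As a minimal illustration of step 3, the shell commands below send a request and compare the returned HTTP status code against an expected result; the endpoint URL and the expected 200 status are hypothetical placeholders:

# Send a GET request and capture only the HTTP status code
$ STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/endpoint)

# Compare the status code against the expected result
$ if [ "$STATUS" -eq 200 ]; then echo "PASS"; else echo "FAIL (got $STATUS)"; fi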

Commands to use to test End Points

The following sections describe, for each type of endpoint, the tools and commands which can be used to check connectivity.

REST

Example tools: Linux command prompt, Postman and HTTPie.

Linux -> curl -X GET https://api.example.com/endpoint

Postman: a graphical tool for sending and visualizing HTTP requests.

HTTPie: a command line tool for sending HTTP requests, with an emphasis on usability.

The specific command used will depend on the tool being used, the type of request being sent (e.g. GET, POST, PUT, DELETE), and the parameters being passed to the endpoint.
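
For example, the same POST request can be sent with either curl or HTTPie; the endpoint and JSON body below are placeholders, not a real API:

# curl: POST a JSON body and include the response headers in the output
$ curl -X POST -H "Content-Type: application/json" -d '{"id": 1}' -i https://api.example.com/endpoint

# HTTPie: the equivalent request; := marks the value as raw JSON
$ http POST https://api.example.com/endpoint id:=1
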
DATABASE

The specific command to test a database connection depends on the programming language and database management system being used. Some common ways to test a database connection include:

1. Using the tnsping utility for Oracle databases
2. Using the mysqladmin ping command for MySQL databases
3. Using the sqlcmd -Q "select 1" command for Microsoft SQL Server databases
4. Using the psql -c "select 1" command for PostgreSQL databases
5. Using nc -zv glufts.cot7XXXXX.us-east-1.rds.amazonaws.com 3306 to confirm that the database port is reachable at the network level

In general, the command should establish a connection to the database, send a simple query (such as SELECT 1), and confirm that the query returns a result, indicating that the connection is successful.
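
The commands below sketch this pattern for MySQL and PostgreSQL; the hostnames, credentials and database names are placeholders:

# Check that the MySQL server is reachable and responding
$ mysqladmin -h db.example.com -u testuser -p ping

# Run a trivial query against PostgreSQL to confirm end-to-end connectivity
$ psql -h db.example.com -U testuser -d testdb -c "select 1"

# Confirm that the database port is open at the network level
$ nc -zv db.example.com 3306
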
SOAP

To test a connection to a SOAP interface, you can use tools such as SoapUI, which allows you to create and execute SOAP requests, as well as view and analyze the response from the SOAP service. You can create a SOAP request with the desired parameters, send it to the endpoint, and verify the response to ensure that it is correct.

Examples of how to test a SOAP interface with SoapUI include:

1. Creating a new SOAP project: This involves providing the WSDL (Web Services Description Language) URL of the SOAP interface and importing it into SoapUI.
2. Creating a SOAP request: This involves selecting the desired operation from the imported WSDL and creating a request with the required parameters.
3. Sending the SOAP request: This involves executing the SOAP request and viewing the response from the SOAP service.
4. Verifying the response: This involves comparing the response from the SOAP service with the expected response and checking for any errors or unexpected results.

Examples of how a SOAP interface could fail include:

1. Incorrect endpoint URL: If the URL provided in the SOAP request is incorrect, the connection will fail, and a response indicating an error will be returned.
2. Incorrect parameters: If the parameters provided in the SOAP request are incorrect, the connection will fail, and a response indicating an error will be returned.
3. Incorrect security configuration: If the security configuration required by the SOAP service is incorrect, the connection will fail, and a response indicating an error will be returned.
4. Unavailable service: If the SOAP service is unavailable, the connection will fail, and a response indicating an error will be returned.
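
If a SoapUI installation is not available, a basic connectivity check can also be performed from the command line with curl; the endpoint URL and SOAPAction below are placeholders:

# POST a SOAP envelope (saved in request.xml) to the endpoint
$ curl -X POST \
    -H "Content-Type: text/xml;charset=UTF-8" \
    -H "SOAPAction: \"urn:example:getStatus\"" \
    -d @request.xml \
    https://api.example.com/soap-endpoint

where request.xml contains the SOAP envelope for the operation under test.
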
ACTIVE_M_QUEUE

To test a connection to an ActiveMQ interface, you can use a variety of tools and techniques depending on the specific use case. Some common methods include:

1. Telnet – You can use telnet to connect to the ActiveMQ interface and send messages to a queue or topic. This will allow you to confirm that the connection is established and that messages are being sent and received.
2. JMS Client – You can use a Java Message Service (JMS) client to connect to the ActiveMQ interface and send and receive messages. This will allow you to test the end-to-end functionality of the interface, including message delivery and receipt.
3. Web Console – ActiveMQ includes a web console that provides a graphical interface for managing and monitoring the interface. You can use the web console to send and receive messages, view message statistics, and monitor the performance of the interface. (A quick connectivity sketch for methods 1 and 3 is shown below.)
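
The checks below assume a broker at a placeholder hostname with the default ActiveMQ ports (61616 for OpenWire, 8161 for the web console) and default credentials:

# Confirm the broker port is reachable
$ nc -zv activemq.example.com 61616

# Confirm the web console responds
$ curl -u admin:admin http://activemq.example.com:8161/admin/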

Examples of how an ActiveMQ interface could fail include:

1. Network Issues – If there is a problem with the network connection, messages may not be delivered or received correctly.
2. Configuration Issues – If the ActiveMQ configuration is incorrect, the interface may not be able to connect or may not be functioning as expected.
3. Capacity Issues – If the ActiveMQ interface is overwhelmed with too many messages or too much traffic, it may become slow or unresponsive.
4. Resource Issues – If the underlying resources (such as memory or disk space) are exhausted, the interface may become slow or unresponsive.
5. Software Issues – If there is a bug or issue with the ActiveMQ software, the interface may become slow or unresponsive.
ISO8583

To test a connection to an ISO 8583 interface, you can use a variety of tools and techniques, depending on the specific implementation and requirements of the interface. Here are a few examples of how you could test a connection to an ISO 8583 interface:

1. Use a message simulator: A message simulator is a tool that allows you to send test messages to an ISO 8583 interface and receive the corresponding responses. This can help you confirm that the interface is working correctly and that the messages are being processed as expected.
2. Send sample messages: You can send sample messages to the interface using a variety of techniques, such as a command line tool or a custom application. This can help you validate that the interface is working correctly and that the messages are being processed as expected.
3. Verify the response codes: You can verify the response codes that are returned by the interface after each message is sent. This can help you confirm that the interface is working correctly and that the messages are being processed as expected.

Testing a connection to an ISO8583 interface requires careful attention to the standardized message format, message structure, bitmap fields, message processing rules, and network communication. The list below provides the areas which will need special consideration.

1. Message format: ISO8583 is a standardized message format for financial transactions, which means that the message format must be followed precisely in order to communicate successfully with the interface. This requires careful testing to ensure that messages are being formatted correctly.
2. Message structure: ISO8583 messages have a specific structure, with specific fields for different types of information such as account numbers, amounts, and transaction codes. This structure must be carefully tested to ensure that messages are being sent and received correctly.
3. Bitmap fields: ISO8583 uses bitmap fields to indicate which data elements are included in a message. This requires testing to ensure that the correct bitmap fields are being set for each message.
4. Message processing: ISO8583 messages may require specific processing rules to be followed, such as verifying the validity of the card number or checking the amount of the transaction against available funds. These processing rules must be tested to ensure that they are being followed correctly.
5. Network communication: ISO8583 uses a binary message format, which requires careful testing to ensure that messages are being transmitted and received correctly over the network.
RABBIT_QUEUE

To test a connection to a RabbitMQ interface, you would need to perform several steps:

1. Connect to the RabbitMQ server: You would need to establish a connection to the RabbitMQ server using a suitable client library, such as the RabbitMQ Java client, the Pika client for Python, or the official RabbitMQ client for your language of choice.
2. Verify the existence of the queue: You can use the client library to verify that the queue you are trying to connect to exists and is accessible.
3. Publish a message to the queue: You can use the client library to publish a test message to the queue and verify that it is successfully delivered and can be retrieved by a consumer.
4. Consume a message from the queue: You can use the client library to create a consumer that retrieves messages from the queue and verify that the message you previously published can be retrieved. (A command line approximation of steps 2 to 4 is shown below.)
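
If the RabbitMQ management plugin is enabled, these steps can also be approximated via the management HTTP API; the hostname is a placeholder and the default guest credentials are assumed:

# The aliveness test declares a test queue, publishes a message to it and consumes it
$ curl -u guest:guest http://rabbitmq.example.com:15672/api/aliveness-test/%2F

# List the queues to verify that the target queue exists
$ curl -u guest:guest http://rabbitmq.example.com:15672/api/queues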

A RabbitMQ interface could fail due to a number of reasons, including:

1. Network issues: The RabbitMQ server may be down or there may be a network issue that is preventing the client from connecting to the server.
2. Authorization issues: The client may not have the necessary permissions to access the queue.
3. Configuration issues: The RabbitMQ server may be configured incorrectly, preventing the client from connecting.
4. Resource depletion: The RabbitMQ server may be running out of resources, such as memory or disk space, leading to performance issues or failure.
5. Application bugs: The client or the server may contain bugs that prevent the connection from working correctly.

AWS ECR configuration

Configure GLU.Engine settings

Please refer to AWS ECR documentation for details (AWS ECR)

Use the example below to configure GLU.Engine Settings so that when the build completes GLU.Console will push the Docker container to the AWS Repository. 

First, a temporary token needs to be acquired through the AWS CLI command

 $ aws ecr get-login-password 

To use this command, the server running the CLI commands needs access to the AWS environment where the ECR repository is located. To do this, use the aws configure command; please refer to the AWS documentation on how to set up and use these commands.

The following example shows how to view whether the settings exist. Press return after each line and it will show you the value which has been set.

$ aws configure
AWS Access Key ID [****************XXXD]: 
AWS Secret Access Key [****************8996]: 
Default region name [us-east-1]: 
Default output format [json]: 


Once the connection has been established, the get-login-password command can be used to get a password, which can be used in the GLU.Console DOCKER PASSWORD field.

$ aws ecr get-login-password
eyJwYXlsb2FkIjoiQm0zNStqa2JrMUZodVF2bDBLUFR0UkNuNnY5aFE1bU5Sekd5N1ZIN0grekg5QlJ3SUNod0RGVlQ1TnI5OVI5OW1EQTZoUG1vTVBvamFqVVBNZDJ
yc1hkK1o1aDYwczBpZTVrUktCSW54NGxFNnhzeHNYWHErWGJ2enZkNjVYUzVQRlhxY2NFV1R0QThuUTR6Tk1VQjhYVzR3dlZ6RFYrVXRBNURVNCtHVWltTDdaTmhYQ0
JzNW9DNWNqOWZxLzVEVG1sT1lSUlFlbXp6SlRSY1FZR3VQMk5kSEtiWHUxZGxyWFdPRWI1VkZrWVFISUFtN2lYQlVQRjRZei9XdHZ0QTEzS3VGa25Kd1RwOHBQL0FHe
m9NVk14YXRFZVdVVCtKSi9pNWhldHZlTGwrQXgzSDkwSVQ1UmhpZWxZZ3NIK0xPSTNuT24vZmxSTmg2XXXXXXXXXXXXXXXXXXXXXXXHVMSVRrTUYrckMvUGZ6
T2JITUZ0S2F1eU8wQUg2dW52eWNyVzYrQVZ5M0JkNFpURzdSbHJ4R1hsd0QyQjlQdW5mTFBWNi9HaWNmUlRFQ2VKYUUvNlJvTW82ZHRDN3BCVmVHRzNidkRkcE1pckM
2Y3plMjVMcGpYNlU5QXVyUTA4MGowUHc0R2IxOWp5RWwyYldQQWgwa0NvWGlHNDFlWUJmOVdGeWxWQkdJK1I5cVhvOVJjdG9BbHdobGZ4WFRNSHJYKzR2U0xYNVFpaV
UyRExPYW5JTEJENGowK0pXVUZvZXBXeHNOQ2d6WVlVWlkvTlQyS2ZnYU1FKzBuVUVmUGcyUks2TXpDZHoxWGVwQUZpV3YrVUFNSDFtckQ3M2E1dkZLQTBiWGdSLzRKc
0JXXXXXXXXXXXXXXXXXXXXSTjM5dGR5ekZNV2pkRkxMZjZyRTAzRjBVZUFYQ0RQdVJSenAva1NjVDE4a1NXMmFNb201T1ArOFFaZmdnZWNDdnFzMG1p
bDVLVm1ialFsRFNZQWdDRUJSWm9nSGRFNkRiQTAxbEQ1Ujkya2JsRWYvREhiWHdDb0hLWWdNNE9ON1ozQmFJZ1hQK3Z6VFJwMEFoL3o5bmN3Mkd4REVab0NrU0taRlR
Nb2NxWEVsWjE0S21ONmh1M0xpS083UjhjMWxwNW1GaFBvOXFidmNaRU51Ym0zSFBSRVpEblpKYW1aS0lwbGw5eXRGRDJEL2dka3hpSWhzdkFUbUxDZDdsbHlSL3JCQj
I0RVMvTzNkdDEwcTEweUU2OEZyTXFOcGVlMDB4MmlmMWhHWXhVNUoybEJnendNR3JVQXNwYm9GVmcvM0IwODJ2TGMrM0p0ZDZ3S09JWU8wV2RNYWxrbmtHcEoyL0Q2R
VhyOTAzR1EwOW94U0kvUTZvKzhrK0E9PSIsImRhdGFrZXkiOiJBUUVCQUhod20wWWFJU0plUnRKbTVuMUc2dXFlZWtYdW9YWFBlNVVGY2U5UnE4LzE0d0FBQUg0d2ZB
WUpLb1pJaHZjTkFRY0dvRzh3YlFJQkFEQm9CZ2txaGtpRzl3MEJCd0V3SGdZSllJWklBV1VEQkFFdU1CRUVESmtIVDBiMFNPUDRXSWhqdGdJQkVJQTdaM1lHMnZqd0V
jUTY5QkNBeWRpdTFBeTRKZ041b3h2UG80anp0K2VSWGkxQnFOSGZqaWJDNEFtUCtiY1dPY0g3YjRZbGNkcko0eHBqM3JnPSIsInZlcnNpb24iOiIyIiwidHlwZSI6Ik
RBVEFfS0VZIiwiZXhwaXJhdGlvbiI6MTY2NTUzODA0MH0=



A = DOCKER IMAGE REPOSITORY , to be entered in the GLU.Engines Settings below

B = Must match the GLU.Engine name. (See screenshot below; please also note the constraint applied to the GLU.Engine name by AWS, i.e. a name like GLU_USSD_Demo will not work.)

C = DOCKER URL , to be entered in the GLU.Engines Settings below



Please note the constraint applied by AWS on the GLU.Engine name if AWS ECR is to be used.



All the fields are mandatory.

Name | Value | Description
DOCKER URL | 42588888888835.dkr.ecr.us-east-1.amazonaws.com | Path to the docker.io instance (C)
DOCKER USERNAME | AWS | For AWS ECR repos this will always be "AWS"
SKIP DOCKER | false | true or false; when set to false, Docker will be used
DOCKER PASSWORD | {Long string of characters from above} | The token generated by aws ecr get-login-password
DOCKER IMAGE REPOSITORY | gluware | The repository name in Docker (A)






In the example below, when this is configured and a build is executed, the GLU.Console will execute the command below at the end of the build process. This will push the Docker container 1.5-SNAPSHOT to the AWS ECR repo demo.

$ docker push 42558888888535.dkr.ecr.us-east-1.amazonaws.com/gluware/demo:1.5-SNAPSHOT

Useful AWS commands

aws ecr get-login-password | docker login --username AWS --password-stdin 92266666477.dkr.ecr.us-east-1.amazonaws.com
    Used to log in to the AWS instance of the ECR.

aws ecr get-login-password
    If connected to AWS, displays the password which needs to be put in the GLU Application Settings page.

aws ecr list-images --repository-name gluware/demo
    Lists the images held in the repository; also useful to confirm the connection is correct.

Managing Load with Throttles

The implementation of services requires the definition of request limits on the inbound side, and response limits to avoid overloading downstream systems in the architecture. This is crucial for maintaining the availability and performance of GLU.Engines during high workloads.

TPS (Transactions per Second) is a common parameter used to measure these limits, and effective management of throttling is necessary to ensure optimal performance.

GLU.Ware offers two types of Load control throttle mechanisms.

Throttle Type 1: Requests per Time Period

The Throttle feature in GLU provides the capability to regulate the workload of specific endpoints and prevent overloading. It also enables the enforcement of user quota limits, adherence to external service SLAs, and management of applied SLAs on the API.

The Request Manager Control Panel in GLU allows the definition of Throttles on Inbound Requests. To do so, enable the ‘Add Throttle’ option, which opens the Throttle tool. To create a new Throttle, click the green ‘plus’ sign, revealing the Throttle configuration attributes, as illustrated in the screenshot below.


Base Configuration Options:

  • Time Period (milliseconds) – this sets the duration for which the maximum number of requests will apply.
  • Max Requests per Time Period – this sets the maximum number of requests to allow within the set Time Period. It takes a numeric value or a parameter/variable, e.g. 73 or ${maxRequests}

Throttle Type 1 – Example

Time Period = 1000ms (1 second), Max Requests per Time Period = 50

When your GLU.Engine receives its 51st Request within the 1000ms (1 second), that 51st Request can be handled in one of two ways:


Configuration Options:

1 – Queue excess calls – in the above example, the 51st call will be held in memory in a queue in the GLU.Engine until the Time Period completes. This is the ‘default’ option.

2 – Reject excess calls – by checking the ‘Reject Excess Requests’ box, you will be presented with the configuration options for the alternative Throttle option: rather than queuing excess calls, reject them. In this scenario, you are able to define the HTTP Status Code, the Response Content-Type, and the Response Template to use.

If a Reject happens, it happens before anything else occurs in the API. It will be the first thing to happen, and the reject Response Template will supersede any other templates that the API may generate.

Throttle Type 1 Usage Scenarios

It is possible to adjust the Throttle Type 1 setting dynamically, by setting the ‘Max Requests per Time Period’ parameter/variable through an API call.

If you set up a query parameter ${throttleSetting}, it enables controlling the throttle’s maximum requests dynamically.

In this way, you could control the number of requests based on some external influence, such as the number of credits a client has: more credits, more throughput.
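
As a sketch, assuming the API exposes a hypothetical throttleSetting query parameter that is mapped to the ‘Max Requests per Time Period’ field, a caller could adjust the limit on each request:

# Set the throttle limit to 100 requests per Time Period (placeholder endpoint)
$ curl "https://api.example.com/endpoint?throttleSetting=100"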

Throttle Type 2: Concurrent Requests using Thread Pools

The Thread Pool Throttle feature allows you to:

1. Ensure that a specific API does not get overloaded, or

2. Ensure that you do not exceed an agreed SLA with an external service.


Thread Pool Throttles are defined (configured) on the Inbound Request or Outbound Request Handlers.

See the example below, which shows the settings that are set for a call to an orchestration connector “Purchase”.

To help explain how the settings POOL SIZE, MAX POOL SIZE and MAX QUEUE SIZE affect the parallel execution of downstream threads to the “Purchase” connector, the example below is based on a real-world analogy. We hope the analogy helps the understanding.


The Passport office analogy.

This analogy reflects a passport office with clerks who are responsible for dealing with customers’ passport renewals one on one.


People arrive at the office with passport in hand; they queue up outside the main passport processing room and are only let in to have their passports processed when a clerk is available to deal with them.

In the floor plan view above, 2 clerks are processing customers at Desks 1 & 2, and 2 clerks who are off duty will rush out to serve new customers at Desks 3 & 4.

The Tasks, i.e. the API calls being received or generated, are the customers.

Max Queue Size reflects the number of customers in the queue and in the passport processing room being processed. The floor plan view above currently shows 10 customers.

Max Queue Size Zero setting: the passport office also has the ability to set the queue size to zero. If this is the case, when a customer arrives at the passport office and all the clerks are busy, the customer will be told to go away.

Max Pool Size reflects the number of clerks the passport office has available to process customers’ passports; in this case it is reflected by the desks available. Clerks will retire to the back room if they are not needed, i.e. no customers need processing.

Pool Size reflects the minimum number of clerks who are always at the desks to serve customers.


Select Type as either ‘None’ or ‘Parameter’ based. After defining the Action you want to perform, you are presented with the option to ‘Enable Thread Pool’; setting this to ‘TRUE’ will allow you to set the ‘Pool Size’, the ‘Max Pool Size’ and the ‘Max Queue Size’.


Configuration Options:

  • Pool Size – This specifies the number of threads to keep in the pool, even if they’re idle. The thread pool will always contain at least the number of threads specified. The default is 10, which will be applied if the field is left empty.
  • Max Pool Size – This specifies the maximum number of threads to keep in the pool. The thread pool can grow up to at most the number of threads specified, but will release excess threads (above the Pool Size) if not needed. The default is 20, which will be applied if the field is left empty.
  • Max Queue Size – This specifies the maximum queue for holding waiting tasks before they’re executed. Tasks queued beyond the set size will be rejected. The default is 1000, which will be applied if the field is left empty. (See the worked example below.)
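
A worked example: with Pool Size = 10, Max Pool Size = 20 and Max Queue Size = 1000, the GLU.Engine always keeps 10 threads available, runs at most 20 threads concurrently, holds up to 1000 waiting tasks in the queue, and rejects any further tasks that arrive while both the pool and the queue are full.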

Type 2 enables you to limit the number of concurrent transactions (threads) that any Inbound or Outbound Connector is able to process simultaneously. These Throttles are set as Handlers on the Request (Inbound or Outbound).

WARNING

Note: Do not enable SEDA for any orchestration connectors when you use Throttle Type 2. Otherwise, the throttle settings will not work and the throughput will be limited to 162 TPS.

See below a view of how this should be set up,

and an example of how it should not be set up.

INFO Level Log Enrichment

Overview

The GLU.Console provides the capability to configure, in the INFO request and response logs, a tag which can be used to trace transactions as they occur. These tags can consist of hardcoded text labels and/or variables generated in the running GLU.Engine.

In the example below we show how 3 tags have been set up for each API. In the log below we can see how the tags are reflected in the logs.

As can be seen, tags appear on each INFO line after the TraceID.

Also note the first 2 transactions in the log. These show values for “responseCode”, as this parameter contains a value at this point in the transaction execution. The log filter is set as below for this API.

get light: ${responseCode}



These tags and values are associated with all INFO Level logs for each unique transaction.

This enables the payloads of Inbound and Outbound Connectors to be included in the INFO Level Logs. By selecting the ‘Add to INFO Logs’ checkbox within the Request / Response Manager Panels, you can control whether this data is included in the INFO logs.

Considerations

Writing additional data to logs has a performance impact which must be taken into consideration. Performance testing following such changes is advised.

Sensitive data that is flagged to be excluded or masked from the logs (by unchecking the ‘Make visible in logs’ setting) will also be masked in the tags.

GLU.Engine start up errors

Occasionally you may face challenges starting up your GLU.Engine. Below we outline a few of the Errors you might encounter and how to deal with them.

Startup Warning

If your engine starts up but there is a warning which looks like this:


WARN o.a.c.i.c.AnnotationTypeConverterLoader – Ignoring converter type: org.apache.camel.component.cxf.converter.CxfConverter as a dependent class could not be found: java.lang.NoClassDefFoundError: javax/xml/soap/SOAPMessage
java.lang.NoClassDefFoundError: javax/xml/soap/SOAPMessage
    at java.base/java.lang.Class.getDeclaredMethods0(Native Method)
    at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3166)


This is likely to be linked to the wrong Java version running. Please see GLU.Engine Server Specifications for the correct Java version.

Startup Error

This error will occur if the port value is missing in the database connector. See the screenshot from the connector screen below.


You will see this error in the Database logs:

— error — com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Cannot load connection class because of underlying exception: 'java.lang.NumberFormatException: For input string: "null"'.

To resolve, simply update your Connector configuration to include the Port.
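
The underlying cause is easiest to see in the JDBC URL assembled from the connector configuration (illustrative values, assuming a MySQL connector):

# With the Port set, a valid JDBC URL is produced:
jdbc:mysql://db.example.com:3306/gludb

# With the Port missing, the literal string "null" lands where the numeric
# port belongs, triggering the NumberFormatException above:
jdbc:mysql://db.example.com:null/gludb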

GLU.Engine Logging Levels

How to Change GLU.Engine Logging Level

The logging level of a GLU.Engine can be altered through three mechanisms.

The first method involves using the Application Settings within the GLU.Console; however, this requires a complete rebuild and redeployment of the GLU.Engine.

The second method allows for the logging level to be altered without the need for a rebuild and redeployment. This is done through the use of the “changelogginglevel.sh” script located in the GLU.Engine deployment directory.

./changelogginglevel.sh DEBUG


This shell script makes use of one of the GLU.Engine APIs to make the change.

The third method is to call the JMX APIs directly on the server port. This method should be used when running the GLU.Engine in a Docker container.
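
As a sketch of this third method, if the Jolokia endpoint is enabled on the server port (the port, MBean coordinates and logger name below are assumptions based on the standard Logback JMXConfigurator, and may differ per build):

$ curl -X POST http://localhost:9010/jolokia -d '{
    "type": "exec",
    "mbean": "ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator",
    "operation": "setLoggerLevel",
    "arguments": ["global.glu", "DEBUG"]
  }'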

Changing Log Level via the GLU.Console

In Build Manager, Select GLU.Engine Settings: 


Edit “logging” to the desired level:


The “Path” and “Name” of the GLU.Engine logs can be altered to meet specific requirements. To specify the log path as the root directory of the GLU.Engine folder, use a full stop “.” in the “Path” configuration setting.

Why have different Log Levels?

The logging framework utilized by GLU, the de-facto standard in the Java world, classifies log messages into five categories: ERROR, WARN, INFO, DEBUG, and TRACE. These log levels are assigned based on the urgency of the log message, allowing log filtering by level of importance.

In a production environment, it is crucial to have an efficient filtering mechanism for log messages to quickly identify urgent issues leading to potential losses. Mixing urgent log messages with non-urgent ones hampers efficient log analysis.

It is important to understand when to categorise log messages into each level of urgency.

ERROR

The ERROR level should only be used when the application really is in trouble. Users are being affected without having a way to work around the issue.

Someone must be alerted to fix it immediately, even if it’s in the middle of the night. There must be some kind of alerting in place for ERROR log events in the production environment. Often, the only use for the ERROR level within a certain application is when a valuable business use case cannot be completed due to technical issues or a bug.

Take care not to use this logging level too generously because that would add too much noise to the logs and reduce the significance of a single ERROR event. You wouldn’t want to be woken in the middle of the night due to something that could have waited until the next morning, would you?

WARN

The WARN level should be used when something bad happened, but the application still has the chance to heal itself or the issue can wait a day or two to be fixed.

Like ERROR events, WARN events should be attended to by a dev or ops person, so there must be some kind of alerting in place for the production environment.

A concrete example for a WARN message is when a system failed to connect to an external resource but will try again automatically. It might ultimately result in an ERROR log message when the retry-mechanism also fails. The WARN level is the level that should be active in production systems by default, so that only WARN and ERROR messages are being reported, thus saving storage capacity and performance.

If storage and performance are not a problem and our log server provides good search capabilities we can actually report even INFO and DEBUG events and just filter them out when we’re only interested in the important stuff.

INFO

The INFO level should be used to document state changes in the application or some entity within the application. This information can be helpful during development and sometimes even in production to track what is actually happening in the system.

Concrete examples for using the INFO level are:

  • the application has started with configuration parameter x having the value y
  • a new entity (e.g. a user) has been created or changed its state
  • the state of a certain business process (e.g. an order) has changed from “open” to “processed”
  • a regularly scheduled batch job has finished and processed z items.


DEBUG

DEBUG level logging shows the detail of every transaction.

Including:

  • error messages when an incoming HTTP request was malformed, resulting in a 4xx HTTP status
  • variable values in business logic
  • transaction details

Due to performance implications, DEBUG mode should be used sparingly in Production environments. Sparingly means 2-3 hours in a controlled test, switched back to INFO or higher straight after.

TRACE

Compared to DEBUG, it’s pretty easy to define what to log on TRACE. As the name suggests, we want to log all information that helps us to trace the processing of an incoming request through our application.

This includes:

  • start or end of a method, possibly including the processing duration
  • URLs of the endpoints of our application that have been called
  • start and end of the processing of an incoming request or scheduled job.

Log File Settings & Limits

For log file settings and limits, see the GLU.Engine Settings section below.

GLU.Engine Settings

The GLU.Engine Settings interface provides analysts with the capability to define a set of parameters utilized by the GLU.Engine during operation. These settings regulate the GLU.Engine’s interaction with the operational environment.


To configure the settings, activate the ‘GLU Engine Settings’ button and select the desired environment to specify the settings for.


Choose the GLU.Engine required and the GLU.Engine Settings page will open on the General tab. 


Navigate the tabs and choose the settings to change. The following settings can be changed:

  • General settings
    • Server Port
    • switch spring JMX Enabled
    • switch jolokia debug Mode Active
    • switch Management Endpoint Shutdown Enabled
  • Logging level and parameters
  • Docker settings
    • switch on docker usage and distribution
    • all associated docker parameters if docker is switched on
  • Provision customised settings for the GLU.Engine
  • Adjust the JVM settings which the GLU.Engine utilises

General tab

The Server Port or Management Port (used interchangeably) in the context of JMX metrics refers to the port number used for communication between JMX client applications and the JVM (Java Virtual Machine) management interface. JMX (Java Management Extensions) is a Java technology that provides a standard way to monitor and manage Java applications and resources.

The JMX Management Port allows remote monitoring and management of a running Java application through JMX client applications. JMX client applications can use the Management Port to connect to the JVM and retrieve information about the application, such as memory usage, thread counts, and performance metrics, as well as perform management operations, such as garbage collection and thread dumps.

The Management Port is specified as a command line argument when starting the JVM, and its default value is typically set to 9010.


The Server Port field can be set to control which port the GLU.Engine Management APIs use. The server port must be unique for each GLU.Engine running on a server, so it is important to change it if you plan to run multiple GLU.Engines on the same server. If you don’t change the port for each such GLU.Engine, you’ll have conflicts when trying to start up the GLU.Engine.


The Management Endpoint JMX Domain is used to set the URL for the JMX Domain.

Use the tick boxes to 

  • switch Spring JMX to enabled; this makes JMX Metrics available on the set JMX Domain via the Server Port.
  • switch Management Endpoint Shutdown to enabled. (This setting exists to provide an extreme level of security for the shutdown of GLU.Engines; if enabled, the API will work and you can shut down the GLU.Engine via the API call. See Shutdown API Call.)

Note that if your Application Settings set the log path to a sub-folder within the GLU.Engine folder, you will lose / overwrite your logs when you deploy a new build unless you have moved them first. The recommendation is therefore to set the path in Application Settings to a folder outside the GLU.Engine folder.

For details on setting the name of the log file, see GLU.Engine Log Name.

Logging tab


For details on log levels settings see GLU.Engine Logging Levels.

For details of each of the settings in the log tab please refer to  GLU.Engine Logs.

Docker tab

It is possible to switch on the use of Docker and set the Docker settings which are used to push a docker container to a container registry. All fields are mandatory. 

For some container registry systems, such as Azure, if the Docker image repository doesn’t exist it will be created when GLU pushes to the container registry.

For more information on the docker settings please refer to GLU.Engine running in Docker

Custom Settings tab


Where required, it may be necessary to set up custom parameters for use with the GLU.Engine. These settings can be configured in this tab.

JVM tab


The JVM Arguments default to the standard arguments for running the Java Virtual Machine which runs the GLU.Engine; however, with access to these settings it is possible to refine them as required to optimise the performance of the GLU.Engine.

Please refer to  GLU.Engine – Performance-Level Settings for details of how the changes will affect the GLU Engine.

The JVM Arguments will also be applied to Docker if the GLU.Engine is hosted in Docker. The following extract shows how the JVM settings can be viewed using the docker exec command. Use the command docker ps to view the containers.

root@ip-172-31-52-133:~# docker ps
CONTAINER ID   IMAGE                                       COMMAND                  CREATED         STATUS         PORTS                                                                  NAMES
4e71043cef89   gluglobal/glu_data:1.0-SNAPSHOT             "java '-Djava.securi…"   5 minutes ago   Up 5 minutes   9088/tcp, 9195/tcp                                                     data_autoTest.1.4g03oc84p4kbt5b1a8waj5n3y5


Use docker exec -u 0 {CONTAINER ID} ps -ef to see the java call.

root@ip-172-31-52-133:~# docker exec -u 0 4e71043cef89  ps -ef | grep java
    1 root       0:55 java -Djava.security.egd=file:/dev/./urandom -XX:+PrintGCDetails -Xloggc:gc.log -Xms2g -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+UseStringDeduplication -XX:G1HeapRegionSize=32 -XX:G1ReservePercent=15 -XX:InitiatingHeapOccupancyPercent=30 -XX:MetaspaceSize=100M -jar /glu_data-1.0-SNAPSHOT.jar --logging.config=./engine/config/loggingSetting.xml
root@ip-172-31-52-133:~#

Masking Sensitive Data/Parameters in Logs

In situations where the parameters in the messages may contain sensitive information that could pose a security risk if displayed in the logs, the GLU.Engine generates two types of logs that require controlled value representation:

1) logs that include parameters in the ‘PAYLOAD’ and

2) logs that print parameters as ‘PARAM’.

The masking of sensitive data in log entries can be controlled by configuring the PAYLOAD and PARAM log masking.

The PAYLOAD log masking is managed at the transaction level, while the PARAM log masking is managed at the parameter configuration level. It is important to note that while the PAYLOAD log masks only parameters in the PAYLOAD when printed in the log, the mask does not affect the unmarshalled value.

To mask the unmarshalled value, a mask for the full string as it appears in the logs must be added.

The PARAM log values will then also be masked when unmarshalled.

To mask PAYLOAD values, the ‘Mask PAYLOAD Values in Logs‘ field in the Transaction Manager Panel can be used to define the ‘tags‘ for any values that need to be masked, along with the GLU reserved word “GLU_MASK” (e.g. “username”:”GLU_MASK”).

This will replace the value for “username” with “**********”.

Since tags are used, any payload value can be masked, not just the parameter values within the payload.

The tags to be masked can be copied from the payload template configuration and will vary depending on the payload type (e.g. XML, JSON, SQL call, etc.).

NOTE: the GLU_MASK value used is the full line, so if you complete the line entry with “username”: “GLU_MASK”, the value will also be included in the masking of username – e.g. “**********”.
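
As an illustration with a hypothetical payload, a log entry that would otherwise print

{"username":"jsmith","amount":"100"}

will, with "username":"GLU_MASK" configured in ‘Mask PAYLOAD Values in Logs’, be written as

{"username":"**********","amount":"100"}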


To mask PARAM values, the “Mask PARAM Value in Logs” checkbox (which by default is ‘checked’) must be unchecked.

Any payload tag (PAYLOAD logs) or parameter name (PARAM logs) that is configured to be masked will be masked in all logs (INFO, WARN, DEBUG, etc.) associated with a particular transaction.

GLU.Analytics Deployment Guidelines

The GLU.Analytics system runs on a separate server to the GLU.Engine. For testing purposes just a single server is needed; however, for Production use, an analysis of the forecast transaction volumes, log levels, types and retention periods, system availability requirements etc. needs to be conducted in order for GLU to advise on the sizing of the GLU.Analytics server/s.

The underlying enabling technology on which GLU Analytics runs is the Elastic Stack. Depending on client configuration choices, the three components of the Elastic Stack (ElasticSearch, Logstash and Kibana) could be deployed each to their own server.

For Testing purposes, all three components can be deployed to the same VM or server.

Server Specification (Initial)

16 GB RAM 

100 GB SSD disk (or high speed 15k RPM drives) 

1 Dual Core CPU (x86_64)

Operating System: Ubuntu (version 18.04 or later)

Java Virtual Machine (JVM) – Java 8

Java Prerequisites

Java installed – version 1.8

$ java -version
openjdk version "1.8.0_201"
OpenJDK Runtime Environment (build 1.8.0_201-b09)
OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode)

If Java needs to be installed, use this command (assuming yum is supported by the OS):

$ yum install java-1.8.0-openjdk.x86_64

Network Prerequisites

The following ports will need to be opened.

Port | Server | ELK service | Area
9200 | Elasticsearch | Elasticsearch | In Network
5044 | GLU.Engine (Filebeat) | Logstash | In Network
5601 | Kibana | Kibana | Intranet Access (Inside Organisation)
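
Once the servers are provisioned, the port openings can be verified from the relevant hosts; the hostnames below are placeholders:

# From the GLU.Engine host: confirm Logstash is reachable
$ nc -zv logstash.example.com 5044

# From within the network: confirm Elasticsearch responds
$ curl http://elasticsearch.example.com:9200

# From the intranet: confirm the Kibana port is open
$ nc -zv kibana.example.com 5601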

Install Instructions for Elastic Stack

Please refer to the Elastic stack documentation for install instructions. 

ElasticStackInstallGuide

GLU.Engine logs have been tested with the following versions of the Elastic Stack:

Elasticsearch 7.8 [1]

Logstash 7.8 [2]

Kibana 7.8 [3]

Filebeat [4]

GLU.Engine Logs

Overview

Multiple methods are available for modifying the logging settings within the GLU.Engine.

  • All settings can be set through the console at build time via the GLU.Engine Settings.
  • A subset of settings can be configured in the applicationSettings.yml file.
  • A further subset can be set through the GLU.Engine APIs.

It is important to note that changes to log levels within the GLU.Engine can have a significant impact on system performance. As such, it is recommended to only run the GLU.Engine in DEBUG mode during periods of low traffic, or to increase system resource allocations to compensate for the increased logging output. The utilization of the DEBUG mode will result in the generation of substantial log files, which can rapidly consume the allotted log storage space. Given these considerations, it is not advisable to operate the GLU.Engine in a production environment under heavy load for extended durations.

Log file Settings

The GLU.Engine generates a single log file, with historical data being transferred to older log files. The attributes of the file, including the naming convention, can be regulated.

The dynamic name can be customised in the Application Settings file.

The log file is capable of being processed by any log management tool that can handle text-based log files. GLU has verified the consumption of GLU.Ware logs with the Elastic Stack, Dynatrace, and AWS CloudWatch.

See GLU.Analytics Deployment Guidelines for an example of how to integrate to Elastic Stack.

The integration screens provide the capability to configure response payloads to be displayed in the logs as a single line. This enhances the readability of the text-based log files.

See  Masking Sensitive Data/Parameters in Logs section for detail on how to hide any sensitive data from your Logs.

ISO8583 – Q2 Logs

GLU uses the Q2 component to generate ISO8583 logs to provide more detailed information about the processing of ISO8583 messages than is available in the standard GLU.Engine logs. The Q2 logs contain information such as the message type, message format, message fields, and any errors that occurred during processing.

Q2 logs are useful for troubleshooting issues with ISO8583 integrations using the Q2 protocol. They can be used to identify errors in message construction, data formatting, or other issues that may be causing problems with the integration.

The appSettings.yml file

It is possible to configure the various settings in a file which is included with the GLU.Engine.

The file can be found here.

../engine/config/applicationSettings.yml 

If this file is changed, the GLU.Engine must be restarted for the settings to be applied. The table below describes the parameters which can be changed in the file.

Console Dialogue label | Description | appSettings.yml parameter name | Sample Value
Path | Path to the log file. | logging.path | /var/log
File | Name of the log file; ".log" will be concatenated to the end of the file name. | logging.file | gluware
Level | The logging level; for changing logging levels see GLU.Engine Logging Levels. | logging.level.global.glu | INFO
Management Endpoint Shutdown Enabled | Allows the logging levels to be changed in real time, without starting and stopping the GLU.Engine; to allow this, the field must be set to false. See GLU.Engine Logging Levels. | management.security.enabled | false
Max File Size | When the log file's size exceeds this value (in MB), it is saved with a unique identity and a new file is created to write the logs to. | Not in appSettings.yml file | 1000
Max History | The directory/folder the logs are written to can hold at most this number of log files; if the number is exceeded, the oldest file is deleted. | Not in appSettings.yml file | 30
Total Size Cap | The directory/folder the logs are written to cannot exceed this size (in MB); if it does, the oldest file is deleted. | Not in appSettings.yml file | 50000
FileName Pattern | Format of the mask concatenated to the log file name, e.g. gluware.2021-07-29.1.log, where gluware is the file, 2021-07-29 the FileName Pattern, 1 a sequence number given by GLU and .log the extension. | Not in appSettings.yml file | yyyy-MM-dd
Pattern File | The header written on each log line. | Not in appSettings.yml file | %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} – %msg%n
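
Based on the parameter names in the table above, a minimal sketch of the corresponding fragment of applicationSettings.yml might look as follows (the values are illustrative):

logging:
  path: /var/log
  file: gluware
  level:
    global:
      glu: INFO
management:
  security:
    enabled: false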

Log file Detail

The GLU.Engine produces one type of log file.

Type | Log Name | Description
GLU | gluware.log | (logging.level.global.glu)

GLUware.log

The initial lines in the log file will contain the information in the table below. Once the GLU.Engine is running, it is also possible to pull the same information through a GLU API request; see Engine Info.

1. Process settings
   Example: 2018-12-12 08:48:47 [main] INFO global.glu.ware.GluWareApplication - Starting GluWareApplication v1.0-SNAPSHOT on ip-172-31-13-254 with PID 17022 (/home/ec2-user/test/TestApp-1.0-SNAPSHOT/engine/TestApp-1.0
   Description: Date/time stamp, log level and GLU details; the starting application name, the internal IP the application starts on, the process ID (PID) the application runs under, and the path on the server the process runs from.

2. Profile
   Example: 2018-12-12 08:48:47 [main] INFO global.glu.ware.GluWareApplication - No active profile set, falling back to default profiles: default
   Description: Details of the profile used to run the engine.

3. Starting Tomcat
   Example: 2018-12-12 08:48:55 [main] INFO o.a.catalina.core.StandardService - Starting service [Tomcat]
   Description: Tomcat starting for the Engine.

4. Starting Servlet
   Example: 2018-12-12 08:48:55 [main] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/8.5.31
   Description: Servlet engine starting.

5. Initializing Spring
   Example: 2018-12-12 08:48:56 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
   Description: Spring starting.

6. Route Policy
   Example: 2018-12-12 08:48:57 [main] INFO o.a.c.s.boot.CamelAutoConfiguration - Using custom RoutePolicyFactory with id: metricsRoutePolicyFactory and implementation: org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory@75e91545
   Description: Route policy in use.

7. Type converters
   Example: 2018-12-12 08:48:58 [main] INFO o.a.c.i.c.DefaultTypeConverter - Type converters loaded (core: 193, classpath: 30)
   Description: Details on the type converters.

8. Integration Path
   Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Integration Config Path :/home/ec2-user/test/TestApp-1.0-SNAPSHOT/engine/config/flows.json
   Description: Path to the flows.json file holding the configuration of the engine.

9. Application
   Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Application :FlashApp
   Description: Application name.

10. Code
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Client :Flash CODE : [17194]
    Description: GLU client name and unique code reference.

11. Version
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Version :1.0-SNAPSHOT
    Description: Version of the application being run.

12. Spec
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Integeration Spec Name :Integration Spec.[V.1.0]
    Description: Name of the integration being run.

13. Menu builder details
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Menu :There is NO Menu Builder
    Description: Name of the Menu Builder if it is included in the application.

14. Build Date & Time
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Date built / Release :Wed, 12 Dec 2018 10:45:31 +0200
    Description: Date the application was built.

15. Build Environment / Connector details
    Example: 2018-12-12 08:48:58 [main] INFO g.glu.ware.platform.utils.GluLogging - Build Environment :GLUEN3 Conn To STUBSENV
    Description: The environment the build was triggered from.

Log Levels

See the GLU.Engine Log Levels page for details.
