Suppose your GLU.Engine needs to call an HTTP service that is slow to respond, and you do not want the GLU.Engine to block while waiting for the response. You want the GLU.Engine to carry on with other important computation while the slow service responds. In such a case you can mark the message as asynchronous so your GLU.Engine can handle other processing demands while the slow service works through its long-running request.
A GLU.Engine can initiate an outbound message exchange in the Orchestration as either ‘Request only’ or ‘Request-Reply’.
‘Request only’ messaging is when a caller sends a message but does not expect a reply. This is also known as fire-and-forget or event messaging.
‘Request-Reply’ messaging is when a caller sends a message and then waits for a reply. This is like the HTTP protocol we use every day when we surf the web: we send a request to fetch a web page and wait until the reply arrives with the web content.
In GLU a message is labeled with a Message Exchange Pattern that indicates whether it is a Request only or a Request-Reply message. By default, messages are treated as Request-Reply, i.e. synchronous. To treat a message as Request only, flag it as asynchronous using the ‘Asynch’ checkbox in the ‘Endpoint Manager Panel’.
A synchronous exchange is one in which the caller sends a message and waits for a response before continuing. The request blocks the calling thread until the response is received. Synchronous requests are typically used when the response time is short and the results of the request are immediately required for further processing.
Asynchronous requests do not block the calling thread, which is free to continue executing. The response is received at a later time, often through a callback or event handler. Asynchronous requests are used when the response time is long and the results of the request are not immediately required for further processing.
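The difference can be sketched in a few lines of Python. This is an illustration only, not GLU code; `call_service` is a hypothetical stand-in for the slow downstream HTTP service:

```python
import threading
import time

def call_service(payload):
    """Hypothetical stand-in for a slow downstream HTTP service."""
    time.sleep(0.05)  # simulate a slow response
    return f"reply-to-{payload}"

# Synchronous: the calling thread blocks until the reply arrives.
reply = call_service("req-1")  # nothing else runs on this thread meanwhile

# Asynchronous: the call runs on another thread; a callback receives the reply.
results = []
def on_reply(r):
    results.append(r)  # callback invoked when the slow service finally answers

worker = threading.Thread(target=lambda: on_reply(call_service("req-2")))
worker.start()
# ... the calling thread is free to do other work here ...
worker.join()  # for the demo only: wait so we can inspect the result
```

In the synchronous case the thread does nothing useful while `time.sleep` runs; in the asynchronous case that wait happens on the worker thread instead.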
When a GLU.Engine processes an asynchronous transaction, it sends the message to the receiving Endpoint and control returns immediately to the GLU.Engine with an ACK/NACK response. The message itself, however, is processed on a separate thread within the GLU.Engine, the asynchronous thread. This enables the GLU.Engine to continue doing other work while the asynchronous thread is processing the message.
ACK stands for “Acknowledgment” and NACK stands for “Negative Acknowledgment”. In computer networking, ACK is a signal used to indicate that a packet of data has been received successfully and can be used to confirm the receipt of a transmission. NACK, on the other hand, is a signal used to indicate that a packet of data has not been received or has been received with errors and needs to be retransmitted.
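The immediate-ACK pattern can be sketched as follows. All names here (`receive`, `async_worker`, `process`) are illustrative, not GLU APIs; the point is that the acknowledgement is returned before the message is actually processed:

```python
import threading
import queue

inbox = queue.Queue()
processed = []

def process(msg):
    """Hypothetical long-running work done on the asynchronous thread."""
    processed.append(msg.upper())

def async_worker():
    # Runs on a separate thread; handles messages after the ACK was returned.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        process(msg)
        inbox.task_done()

def receive(msg):
    """Receiving side: enqueue the message and acknowledge immediately."""
    try:
        inbox.put_nowait(msg)
        return "ACK"   # received successfully; processing happens later
    except queue.Full:
        return "NACK"  # could not accept the message

threading.Thread(target=async_worker, daemon=True).start()
ack = receive("hello")  # returns "ACK" right away, before processing
inbox.join()            # demo only: wait until processing has completed
inbox.put(None)         # shut the worker down
```

Note that `receive` returns as soon as the message is queued; the caller gets its ACK/NACK without waiting for the work itself.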
Asynchronous transactions do not implement any kind of persistence or recovery if the GLU.Engine terminates while messages are yet to be processed. If you need persistence, some form of queueing such as ActiveMQ or RabbitMQ, or a DB insert could be used.
To provide enhanced asynchronous transactions, GLU can deliver them through the SEDA method. SEDA (Staged Event-Driven Architecture) is a software architecture that enables efficient, scalable processing of events, such as user requests or messages, in a multi-stage pipeline.
SEDA provides a framework for managing and coordinating the flow of events through the pipeline, with features such as flow control, back pressure, and dynamic stage scaling. Thus, SEDA adds a layer of orchestration and management to asynchronous messaging, making it more suited for complex, scalable event-driven systems.
In SEDA, events are processed through a series of stages, each of which performs a specific task. For example, a stage might be responsible for parsing incoming requests, while another stage might handle database access. Each stage runs as a separate thread and events are passed from stage to stage in a concurrent manner, allowing for high parallelism and low latency.
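A minimal sketch of the two stages mentioned above, each running on its own thread and connected by a queue (the stage names and logic are illustrative, not GLU internals):

```python
import threading
import queue

parse_q = queue.Queue()  # feeds stage 1
db_q = queue.Queue()     # connects stage 1 to stage 2
stored = []

def parse_stage():
    # Stage 1: parse incoming raw requests, pass results to the next stage.
    while (raw := parse_q.get()) is not None:
        db_q.put(raw.strip().lower())
    db_q.put(None)  # propagate shutdown to the next stage

def db_stage():
    # Stage 2: "store" parsed requests (stand-in for database access).
    while (item := db_q.get()) is not None:
        stored.append(item)

threads = [threading.Thread(target=parse_stage),
           threading.Thread(target=db_stage)]
for t in threads:
    t.start()
for raw in ["  GET /a ", " GET /B "]:
    parse_q.put(raw)
parse_q.put(None)  # shut the pipeline down
for t in threads:
    t.join()
```

Because each stage has its own thread, stage 1 can already be parsing the next request while stage 2 is still storing the previous one.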
SEDA enables system designers to trade off between throughput, latency, and resource usage to meet specific system requirements.
The SEDA component provides asynchronous SEDA behaviour: messages are exchanged on a Blocking Queue, and consumers are invoked in a separate thread from the producer. A Blocking Queue is a queue data structure that blocks (waits) when attempting to dequeue from an empty queue or to enqueue to a full queue. It coordinates the stages in the event-processing pipeline, providing a mechanism for back pressure and flow control.
Concurrent Consumers: Concurrent consumers are parallel threads of execution that consume messages from a queue in the SEDA model. The idea is to increase processing speed by allowing multiple consumers to process messages at the same time, thereby reducing the overall processing time. The number of concurrent consumers can be specified when setting up the SEDA system and can be tuned to achieve optimal performance. Use this field to set the number of concurrent threads processing exchanges (default value = 1).
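The effect of the Concurrent Consumers setting can be sketched like this (an illustrative model, not GLU code): several threads drain the same queue, so messages are handled in parallel rather than one at a time.

```python
import threading
import queue
import time

work = queue.Queue()
results = []
lock = threading.Lock()

def consumer():
    # Each concurrent consumer pulls messages from the same shared queue.
    while True:
        msg = work.get()
        if msg is None:
            break
        time.sleep(0.01)  # simulate per-message processing time
        with lock:
            results.append(msg * 2)

CONCURRENT_CONSUMERS = 4  # analogous to the Concurrent Consumers field
threads = [threading.Thread(target=consumer) for _ in range(CONCURRENT_CONSUMERS)]
for t in threads:
    t.start()
for i in range(20):
    work.put(i)
for _ in threads:
    work.put(None)  # one shutdown marker per consumer
for t in threads:
    t.join()
```

With 4 consumers the 20 messages finish in roughly a quarter of the single-consumer time, at the cost of losing strict ordering of the results.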
Size: Sets the default maximum capacity of the SEDA queue, i.e. the number of messages it can hold (default value = 1000). Queue size in SEDA messaging refers to the maximum number of messages that can be stored in the queue at any given time. This setting is used to manage memory usage and prevent the system from running out of memory due to a large number of messages being stored in the queue. The queue size can be set to an appropriate value based on the expected rate of message arrival and the processing time required for each message. If the queue size is set too low, messages may be lost if they cannot be processed quickly enough. If the queue size is set too high, memory usage may become a concern.
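The back-pressure behaviour of a bounded queue can be demonstrated directly (illustrative only; the capacity of 2 stands in for the Size setting):

```python
import queue

q = queue.Queue(maxsize=2)  # analogous to the SEDA Size setting
q.put("m1")
q.put("m2")                 # the queue is now full

try:
    q.put("m3", block=False)  # a full queue rejects (or blocks) the producer
    accepted = True
except queue.Full:
    accepted = False          # back pressure: producer must wait, retry, or drop

q.get()                       # a consumer frees a slot...
q.put("m3", block=False)      # ...and the producer can enqueue again
```

A blocking `put` (the default) would simply pause the producer instead of raising, which is how the queue throttles a fast producer to the pace of its consumers.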
When NOT to use SEDA
When would you not use SEDA for an async connection or set of connections?
While SEDA can be a useful pattern for building scalable, high-performance systems, it may not be appropriate for all asynchronous connector integrations. Here are some of the reasons why you may not want to use SEDA for an async connection or set of connections:
Complexity: SEDA can introduce complexity into your system, as you need to manage the different stages and ensure that data is properly processed through each one. If your integration is relatively simple and does not require a lot of processing, SEDA may not be necessary.
Latency: SEDA can add latency to your system, as data must be passed from one stage to another. If your integration requires real-time processing, SEDA may not be the best choice.
Debugging: Debugging a SEDA-based system can be more difficult, as you need to understand how data is being processed through each stage.
Overhead: SEDA requires additional overhead in terms of system resources, as you need to manage the different stages and ensure that data is properly processed through each one. If your integration requires only a minimal amount of processing, SEDA may add unnecessary overhead.
So it’s important to weigh these factors when deciding whether or not to use SEDA for an async connection or set of connections. If your integration is simple, latency-sensitive, or must be easy to debug, you may want to consider an alternative pattern that is better suited to your needs.