
Synchronous vs Asynchronous Messages

There are scenarios where the GLU.Engine needs to call an HTTP service that is slow to respond, and you do not want the GLU.Engine to be blocked while waiting for the response; you want it to carry on with other important computation in the meantime. In such cases you can set the message to be Asynchronous, so the GLU.Engine can handle other processing demands while the slow service works through its long-running request.

In the Orchestration, the GLU.Engine can initiate outbound message exchanges with two main patterns: ‘Request only’ or ‘Request-Reply’:

  • ‘Request only’ messaging: This involves sending a message without expecting a reply. It is often referred to as ‘fire and forget’ or an ‘event message’.
  • ‘Request-Reply’ messaging: This is when a caller sends a message and then waits for a reply. This is like the HTTP protocol that we use every day when we surf the web. We send a request to fetch a web page and wait until the reply message comes with the web content.
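The two patterns can be illustrated with a minimal Python sketch, using plain threads to stand in for the GLU.Engine and a slow downstream service (the function name is hypothetical, not a GLU API):

```python
import threading
import time

def slow_service(payload):
    """Stand-in for a slow downstream HTTP service (hypothetical)."""
    time.sleep(0.1)
    return f"reply-to-{payload}"

# 'Request-Reply' (synchronous, the default): the caller blocks until the
# reply arrives, then continues with the result.
reply = slow_service("order-1")
print(reply)                     # reply-to-order-1

# 'Request only' (fire and forget): the message is dispatched on a separate
# thread and the caller continues immediately, never waiting for a reply.
t = threading.Thread(target=slow_service, args=("order-2",))
t.start()
print("caller continues immediately")
t.join()                         # joined here only so the script exits cleanly
```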

By default, messages in GLU are treated as ‘Request-Reply’, i.e. synchronous. To designate a message as ‘Request only’ (asynchronous), tick the ‘Asynch’ checkbox in the ‘Endpoint Manager Panel’.

This distinction allows the GLU.Engine to efficiently manage processing demands, ensuring smooth execution even when dealing with slow responses from external services.


Synchronous Transactions

A synchronous exchange is where the caller sends a message and waits for a response before continuing. A synchronous request blocks the calling thread until a response is received, meaning that it waits for the response before continuing to execute. Synchronous requests are typically used when the response time is short and the results of the request are immediately required for further processing.


Asynchronous Transactions

Asynchronous requests diverge from synchronous ones by not blocking the calling thread, allowing it to continue executing independently. The response to such requests is typically received at a later time, often facilitated through a callback or event handler. Asynchronous requests find utility in scenarios where the response time is protracted, and immediate processing of results is not imperative.
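The callback style can be sketched in a few lines of Python, with a plain worker thread playing the asynchronous role (`fetch_async` is a hypothetical illustrative helper, not a GLU API):

```python
import threading
import time

def fetch_async(payload, callback):
    """Hypothetical helper: run the request on a worker thread and deliver
    the result later via the supplied callback."""
    def worker():
        time.sleep(0.1)                     # simulated slow response
        callback(f"result-{payload}")
    threading.Thread(target=worker).start()

results = []
fetch_async("report", results.append)       # returns at once; caller not blocked
print("doing other work while the request is in flight")
time.sleep(0.3)                             # give the worker time to finish
print(results)                              # ['result-report']
```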

When a GLU.Engine processes an asynchronous transaction, it sends the message to a receiving Endpoint, which immediately returns an ACK/NACK response to the GLU.Engine. The message itself, however, is processed on a separate thread within the GLU.Engine: the asynchronous thread. This enables the GLU.Engine to continue doing other work while the asynchronous thread processes the message.

In networking terminology, ACK signifies “Acknowledgment,” confirming successful receipt of a transmitted data packet. Conversely, NACK, or “Negative Acknowledgment,” indicates either non-receipt or receipt with errors, prompting the need for retransmission of the data packet.
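The hand-off can be sketched as follows: the receiving side acknowledges immediately and defers the real work to a separate thread. This is a simplified illustration of the pattern, not GLU's actual implementation:

```python
import queue
import threading

work = queue.Queue()

def receive(message):
    """Receiving endpoint: validate, acknowledge immediately, process later."""
    if not message:
        return "NACK"            # not received correctly: caller should retransmit
    work.put(message)            # hand off to the asynchronous thread
    return "ACK"                 # immediate acknowledgment; caller is not blocked

def async_worker():
    """The 'asynchronous thread': drains the queue off the caller's thread."""
    while True:
        msg = work.get()
        if msg is None:          # shutdown sentinel
            break
        # long-running processing would happen here

threading.Thread(target=async_worker, daemon=True).start()

print(receive("txn-42"))         # ACK
print(receive(""))               # NACK
```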

Asynchronous transactions do not implement any kind of persistence or recovery: if the GLU.Engine terminates while messages are yet to be processed, those messages are lost. If you need persistence, some form of queueing such as ActiveMQ or RabbitMQ, or a database insert, could be used.

Asynch-SEDA

For enhanced asynchronous transactions, GLU offers the ability to deliver them through the SEDA method. SEDA (Staged Event-Driven Architecture) is a software architecture that enables efficient and scalable processing of events, such as user requests or messages, in a multi-stage pipeline.

SEDA provides a framework for managing and coordinating the flow of events through the pipeline, with features such as flow control, back pressure, and dynamic stage scaling. Thus, SEDA adds a layer of orchestration and management to asynchronous messaging, making it more suited for complex, scalable event-driven systems.


In SEDA, events are processed through a series of stages, each of which performs a specific task. For example, a stage might be responsible for parsing incoming requests, while another stage might handle database access. Each stage runs as a separate thread and events are passed from stage to stage in a concurrent manner, allowing for high parallelism and low latency.
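A staged pipeline of this kind can be sketched in Python, with one bounded queue and one thread per stage (illustrative only; the stage names mirror the parsing and database-access examples above):

```python
import queue
import threading

# Two stages, each with its own bounded queue and thread: stage 1 "parses"
# incoming requests, stage 2 "stores" them (standing in for database access).
parse_q = queue.Queue(maxsize=100)
store_q = queue.Queue(maxsize=100)
results = []

def parse_stage():
    while True:
        raw = parse_q.get()
        if raw is None:              # propagate shutdown to the next stage
            store_q.put(None)
            break
        store_q.put(raw.strip().upper())

def store_stage():
    while True:
        item = store_q.get()
        if item is None:
            break
        results.append(item)

stages = [threading.Thread(target=parse_stage), threading.Thread(target=store_stage)]
for t in stages:
    t.start()
for raw in [" req-a ", " req-b "]:
    parse_q.put(raw)                 # events enter the first stage
parse_q.put(None)                    # shut the pipeline down
for t in stages:
    t.join()
print(results)                       # ['REQ-A', 'REQ-B']
```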

SEDA enables system designers to trade off between throughput, latency, and resource usage to meet specific system requirements.


The SEDA component provides asynchronous SEDA behaviour, so that messages are exchanged on a Blocking Queue and consumers are invoked in a separate thread from the producer. A Blocking Queue is a queue data structure that blocks or waits when attempting to dequeue from an empty queue, or to enqueue to a full queue. It allows for coordination between stages in the event processing pipeline, providing a mechanism for back pressure and flow control.
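The blocking behaviour is easy to demonstrate with Python's `queue.Queue`, which blocks a producer that tries to enqueue to a full queue, exactly the back-pressure mechanism described above (an illustrative sketch, not GLU internals):

```python
import queue
import threading
import time

q = queue.Queue(maxsize=2)       # bounded queue: enqueueing to a full queue blocks

q.put("m1")
q.put("m2")                      # queue is now full

def slow_consumer():
    time.sleep(0.2)              # consumer is busy for a while
    q.get()                      # dequeuing frees a slot for the producer

threading.Thread(target=slow_consumer).start()

t0 = time.monotonic()
q.put("m3")                      # blocks until the consumer frees a slot: back pressure
elapsed = time.monotonic() - t0
print(f"producer was held back for about {elapsed:.2f}s by back pressure")
```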

  • Concurrent Consumers: Concurrent consumers are parallel threads of execution that consume messages from a queue in the SEDA model. Allowing multiple consumers to process messages at the same time increases processing speed and reduces overall processing time. The number of concurrent consumers can be specified when setting up the SEDA system and tuned for optimal performance. Use this field to set the number of concurrent threads processing exchanges (default value = 1).
  • Size: Sets the maximum capacity of the SEDA queue, i.e. the number of messages it can hold (default value = 1000). Queue size is the maximum number of messages that can be stored in the queue at any given time; it is used to manage memory usage and prevent the system from running out of memory when a large number of messages accumulate. Set it based on the expected rate of message arrival and the processing time required per message: if the queue size is too low, messages may be lost when they cannot be processed quickly enough; if it is too high, memory usage may become a concern.
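Both settings can be mimicked with a bounded queue drained by a pool of consumer threads. This is an illustrative sketch: the constants mirror the default queue capacity and an example consumer count, not real GLU configuration keys:

```python
import queue
import threading
import time

q = queue.Queue(maxsize=1000)    # 'Size': mirrors the default capacity of 1000
CONCURRENT_CONSUMERS = 4         # 'Concurrent Consumers': default is 1; 4 as an example

processed = []
lock = threading.Lock()

def consumer():
    while True:
        msg = q.get()
        if msg is None:          # shutdown sentinel, one per consumer
            break
        time.sleep(0.05)         # simulated per-message processing time
        with lock:
            processed.append(msg)

pool = [threading.Thread(target=consumer) for _ in range(CONCURRENT_CONSUMERS)]
for t in pool:
    t.start()
for i in range(8):
    q.put(f"msg-{i}")            # eight messages drained by four threads in parallel
for _ in pool:
    q.put(None)
for t in pool:
    t.join()
print(len(processed))            # 8
```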



When NOT to use SEDA

When would you not use SEDA for an async connection or set of connections?

While SEDA can be a useful pattern for building scalable, high-performance systems, it may not be appropriate for all asynchronous connector integrations. Here are some of the reasons why you may not want to use SEDA for an async connection or set of connections:

  • Complexity: SEDA can introduce complexity into your system, as you need to manage the different stages and ensure that data is properly processed through each one. If your integration is relatively simple and does not require a lot of processing, SEDA may not be necessary.
  • Latency: SEDA can add latency to your system, as data must be passed from one stage to another. If your integration requires real-time processing, SEDA may not be the best choice.
  • Debugging: Debugging a SEDA-based system can be more difficult, as you need to understand how data is being processed through each stage.
  • Overhead: SEDA requires additional overhead in terms of system resources, as you need to manage the different stages and ensure that data is properly processed through each one. If your integration requires only a minimal amount of processing, SEDA may add unnecessary overhead.



So it is important to weigh these factors when deciding whether or not to use SEDA for an async connection or set of connections. If your integration is simple, latency-sensitive, or needs to be easy to debug, you may want to consider an alternative pattern that is better suited to your needs.
