Serverless Integration Design Pattern

Recently, I was working on one of the integration projects where the client had the following requirements.

  1. It should support millions of transactions per second.
  2. It should auto-scale based on demand/the number of requests.
  3. The client didn't want to procure huge hardware up front.
  4. It should support integration with multiple applications and remain open to future integrations with minimal or no changes.
  5. If an application is down for some time, the framework should have built-in retry logic that runs on a configured schedule.
We analyzed the requirements and came up with the below options for this integration framework.

  1. BizTalk Server
  2. Azure Functions, Logic Apps, and Event Hubs
  3. Third-party integration server

Every option has its own pros and cons. After analyzing the client's requirements against each option, we found option 2, the Azure Functions and Event Hubs approach, the most suitable for this type of integration.

Below are the main features that played a key role in selecting this framework for the integration.

Azure Functions

Azure Functions is a serverless compute service that enables you to run code on demand without having to explicitly provision or manage infrastructure.

Azure Logic App

Azure Logic Apps is a cloud service that helps you automate and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations. Logic Apps simplifies how you design and build scalable solutions for app integration, data integration, system integration, enterprise application integration (EAI), and business-to-business (B2B) communication, whether in the cloud, on-premises, or both.

Azure Event Hubs

Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second.

Cosmos DB

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. With the click of a button, Azure Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure's geographic regions. You can elastically scale throughput and storage, and take advantage of fast, single-digit-millisecond data access using your favorite API among SQL, MongoDB, Cassandra, Tables, or Gremlin. Cosmos DB provides comprehensive service level agreements (SLAs) for throughput, latency, availability, and consistency guarantees, something no other database service can offer.

SQL Azure

SQL Database is a general-purpose relational database managed service in Microsoft Azure that supports structures such as relational data, JSON, spatial, and XML. SQL Database delivers dynamically scalable performance within two different purchasing models: a vCore-based purchasing model and a DTU-based purchasing model.

Let's discuss the processing and data flow in this design pattern.

A partner can send data in two ways:

  1. Post XML files to the Logic App
  2. Post data directly to the Azure Function

A partner that already produces XML files and does not want to change its existing application can send those XML files to the Logic App instead of posting data to Azure Functions directly.

Below is the Logic App process.

  1. The client posts the XML files along with additional fields for the client id and token.
  2. Azure SQL contains the list of clients, client ids, tokens, and supported integration details, e.g. the input schema, XSLT name, and target schema name.
  3. Based on the configuration for a particular client and message type, the Logic App picks the schema and XSLT from the integration account and, after processing, sends the message to Azure Functions for further processing.
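The per-client lookup above can be sketched as follows. This is a minimal illustration, not the actual implementation: a Python dict stands in for the Azure SQL configuration table, and all names (client ids, tokens, schema and XSLT file names) are hypothetical.

```python
# Minimal sketch of the per-client configuration lookup.
# In the article this table lives in Azure SQL; a dict stands in here.
# All client ids, tokens, and file names are hypothetical.

INTEGRATION_CONFIG = {
    # (client_id, message_type) -> transformation settings
    ("client-001", "PurchaseOrder"): {
        "token": "secret-001",
        "input_schema": "PO_Input.xsd",
        "xslt": "PO_To_Canonical.xslt",
        "target_schema": "PO_Canonical.xsd",
    },
}

def resolve_config(client_id: str, token: str, message_type: str) -> dict:
    """Validate the caller and return the transformation config,
    mirroring what the Logic App reads from Azure SQL."""
    cfg = INTEGRATION_CONFIG.get((client_id, message_type))
    if cfg is None:
        raise KeyError(f"No integration configured for {client_id}/{message_type}")
    if cfg["token"] != token:
        raise PermissionError("Invalid token for client " + client_id)
    return cfg

cfg = resolve_config("client-001", "secret-001", "PurchaseOrder")
print(cfg["xslt"])  # the XSLT the Logic App would apply
```

With the config resolved, the Logic App knows which schema to validate against and which XSLT map to run before handing the message to Azure Functions.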

How the Azure Functions Work

  1. Based on the message type, the first Azure Function validates the input message.
  2. After successful validation, it sends the message to the process Event Hub.
  3. Another Azure Function picks up the message and applies the business logic. After processing, it collects the resulting data and sends it to another Event Hub for the database update.
  4. Another Azure Function picks up the message from the dbupdate Event Hub and saves the data into Cosmos DB, Azure SQL, etc.
  5. One more Event Hub is used to post data to other applications asynchronously. Whenever we need to post data to another application, we send a message to the posttopartner Event Hub.
  6. An Azure Function picks up the message from posttopartner and sends it to the partner's endpoint along with the security token, etc.
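The steps above can be sketched as a small in-memory pipeline. This is only an illustration of the data flow under stated assumptions: Python deques stand in for the process, dbupdate, and posttopartner Event Hubs (those hub names come from the steps above), a list stands in for Cosmos DB / Azure SQL, and the validation and business logic are placeholders.

```python
# Illustrative sketch of the function pipeline, with in-memory queues
# standing in for the process / dbupdate / posttopartner Event Hubs.
from collections import deque

hubs = {"process": deque(), "dbupdate": deque(), "posttopartner": deque()}
database = []   # stands in for Cosmos DB / Azure SQL

def ingest(message: dict) -> None:
    """Steps 1-2: validate the input, then forward it to the process hub."""
    if "message_type" not in message or "payload" not in message:
        raise ValueError("invalid message")
    hubs["process"].append(message)

def business_processor() -> None:
    """Step 3: apply business logic, then forward for the database update."""
    msg = hubs["process"].popleft()
    msg["processed"] = True                 # placeholder business logic
    hubs["dbupdate"].append(msg)
    hubs["posttopartner"].append(msg)       # step 5: queue the async outbound post

def db_updater() -> None:
    """Step 4: persist the processed message."""
    database.append(hubs["dbupdate"].popleft())

ingest({"message_type": "PurchaseOrder", "payload": "<po/>"})
business_processor()
db_updater()
print(len(database), len(hubs["posttopartner"]))  # 1 1
```

In the real framework each stage is a separate Azure Function with an Event Hub trigger, so every stage scales independently with the message volume.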

Exception and Retry Management

  1. Every Azure Function that works on an event wraps its code in try-catch blocks.
  2. Inside the catch block, it inspects the error code. If the error relates to partner system connectivity, it inserts the record into a separate retry table in Cosmos DB.
  3. Another Azure Function runs on a schedule and checks whether any unprocessed records are available in the retry table. If so, it picks them up and posts them to the respective Event Hub.
  4. From the Event Hub, the respective Azure Function picks up the failed messages and starts processing them.
  5. Every piece of business logic first checks whether the same message has already been processed, and processes it only if it has not.
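The retry and idempotency checks above can be sketched as follows. This is a simplified model under stated assumptions: a list stands in for the Cosmos DB retry table, a set stands in for the "already processed" lookup, and classifying failures by a connectivity error is a placeholder for the real error-code inspection.

```python
# Sketch of the retry and idempotency flow. A list stands in for the
# Cosmos DB retry table and a set for the "already processed" lookup.

retry_table = []       # failed messages awaiting retry (Cosmos DB in the article)
processed_ids = set()  # ids of messages already handled (idempotency check)

def handle(message: dict, partner_up: bool) -> bool:
    """Process a message; on a connectivity failure, park it for retry."""
    if message["id"] in processed_ids:       # step 5: skip duplicates
        return False
    try:
        if not partner_up:
            raise ConnectionError("partner endpoint unreachable")
        processed_ids.add(message["id"])     # business logic would run here
        return True
    except ConnectionError:
        retry_table.append(message)          # step 2: insert into the retry table
        return False

def retry_sweep(partner_up: bool) -> int:
    """Steps 3-4: the scheduled function re-posting unprocessed records."""
    pending, retry_table[:] = retry_table[:], []
    return sum(handle(m, partner_up) for m in pending)

handle({"id": "m1"}, partner_up=False)   # fails, parked in the retry table
print(retry_sweep(partner_up=True))      # 1  (retried successfully)
```

The idempotency check matters because a message can be re-posted to the Event Hub by the retry sweep after the original attempt partially succeeded; checking the message id before running business logic keeps retries safe.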

I hope this article helps you understand how to achieve high-performance integrations that can handle millions of requests per second with on-demand scaling and serverless technology.

Let me know if you have any queries.