Azure Durable Functions and the Chaining Pattern in Practice

Azure is a great choice for everyone, especially for teams that build on the .NET platform.

Our previous article on Azure was dedicated to Azure with an N-Layered architecture, but in this post I’ll help you understand what Azure Durable Functions and the chaining pattern are, and how to use them in real-world practice!

So, what are you waiting for?! Let's get started…

What are Azure Durable Functions?

Durable Functions is an extension of Azure Functions. With it, you can add stateful behavior to your system and write stateful functions.

We use Durable Functions when we need long-running operations.

The Durable Functions extension supports the following programming models:

  1. Stateful workflows – implemented with orchestrator functions.
  2. Stateful entities – implemented with entity functions.

This article is dedicated to the Orchestrator functions and their practical usage via the Chaining pattern.

Let’s get acquainted with real acceptance criteria (the original requirements from my remote job):

The function should return the following response:

  • Successful response with status URL
  • If the source or destination folder doesn’t exist, the response message should be "Source Folder {name} does not exist." or "Destination Folder {name} does not exist."

When the Function is in Progress

  • Response Message: "Sub-Folders moved: {Funding, Renewals, ...} ; Sub-Folder move in progress: {Tax Administration}"

When the Function is Completed

  • Response Message: "Source folder {name} documents successfully moved with {Merge} Attribute Option. Sub-Folders moved: {Funding, Renewals, ...}"

When Destination Sub-Folder does not exist

Accumulate Error with folder name to show in the Response

Response Message: "Destination Folders {Funding, Renewals, ...} do not exist; Documents from those folders are not moved"

When an exception occurs during an XX-Service call

Response Message: "Error in XX-Service call {methodName}. Folders moved: {List of Folders that are moved}; Error details: {Error details}"

  • After a successful move all Folders and Documents from Source Folder should exist in the Destination Folder
  • After a successful move Folders and Documents should NOT exist in the Source
  • When ALL documents from Sub-Folder are moved, the empty Sub-Folder should be deleted
  • When ALL Folders and Documents are moved without any error, an empty Source Folder should be deleted
  • Level 1 Folders from Source should exist in the Destination
  • Level 1 Sub-Folders from Source should be created in the Destination

Long story short, you need to write Azure functions to move all folders, subfolders, and documents from one folder to another. If the first-level (root) folders from the Source do not exist in the Destination, the process should be terminated; the move can proceed only when those root folders already exist in the Destination. Deeper-level folders that are missing are created by the function itself.

Note that, as with other Azure resources, there is a collection of ready-made templates for scaffolding Azure Functions projects.

The main entry point of an Azure Durable Function is the HttpStart function. Notice that we provide the function's registered call name via the [FunctionName] attribute. This function plays the role of the orchestration entry point in our system. When the function starts, the runtime supplies some default arguments, and through them we pass the required data on to the other functions.

Here are the arguments accepted by our function:

  1. HttpRequestMessage – via the HttpTrigger attribute we set the authorization level, restrict the function to "post" requests (much like [HttpPost] in ASP.NET), and define a custom route for accessing the functionality.
  2. IDurableOrchestrationClient – marked with the [DurableClient] attribute, this argument helps us manage the lifecycle of the durable function. An interesting point is that this object cannot be packaged and passed on to the other activities (we will talk about this a little later in the article).
[FunctionName("HttpStart")]
public static async Task<IActionResult> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "documents/folder/move")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    try
    {
        string stringContent = await req.Content.ReadAsStringAsync();
        JObject obj = JObject.Parse(stringContent);
        // Deserialize the incoming JSON body into our request model
        var request = obj.ToObject<MoveFolderData>();
        log.LogInformation($"Request received: {stringContent}");

        string instanceId = await starter.StartNewAsync("OriginVaultMoveFolderDurableFunctions", request);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

        return new HttpResponseMessageResult(starter.CreateCheckStatusResponse(req, instanceId));
    }
    catch (Exception ex)
    {
        log.LogError(ex, "MoveDocument Failed!");
        return new HttpResponseMessageResult(System.Net.HttpStatusCode.BadRequest, new { Error = ex.Message });
    }
}
The main responsibility of the entry point is to receive the arguments and hand them to the main orchestrator, which here is OriginVaultMoveFolderDurableFunctions. HttpStart parses the JSON body into an object and forwards it via starter.StartNewAsync.

[FunctionName("OriginVaultMoveFolderDurableFunctions")]
public async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var output = new List<string>();
    try
    {
        // We call this method to get the object passed in by the entry point
        var request = context.GetInput<MoveFolderData>();
        context.SetCustomStatus("Checking if can start moving files from Source to Destination...");
        var startResponse = await context.CallActivityAsync<StartMoveDocumentOutput>("HV_CanStartMoving", request);
        if (startResponse.StartMoveDocument.CanStartMoving())
        {
            context.SetCustomStatus("Request approved. Can Start moving from Source to Destination");
            context.SetCustomStatus("Preparing for getting documents from the folder...");
            var getItemsResponse = await context.CallActivityAsync<DocumentListOutput>("HV_GetItemsInFolder",
                new GetItemsInFolderRequest
                {
                    SourceSystem = request.SourceSystem,
                    SourceFolderId = startResponse.StartMoveDocument.SourceFolderId,
                    FolderName = request.Document.Source.RelativeFolderPath
                });
            if (!getItemsResponse.DocumentListResponse.IsFolderEmpty())
            {
                context.SetCustomStatus("Got source folder's documents. Preparing to move...");
                var rsp = new MoveFilesRequest
                {
                    Request = getItemsResponse.DocumentListResponse,
                    MoveFolderData = request
                };
                var _output = await context.CallSubOrchestratorAsync<List<string>>("HV_MoveFiles", rsp);
                output.AddRange(_output);
            }
            else
            {
                string message = "Folder Empty. Nothing to move!";
                output.Add(message);
            }
        }
        else
        {
            context.SetCustomStatus("Error : Request declined. Can't start moving. Source or Destination are incorrect!");
        }
    }
    catch (ServiceException exp)
    {
        string validationErrors = exp.GetErrrorsAsString();
        if (validationErrors.Length > 0) output.Add(validationErrors);
    }
    catch (Exception exp)
    {
        output.Add(exp.Message);
    }
    context.SetCustomStatus("Document moving completed...");
    return output;
}

OriginVaultMoveFolderDurableFunctions is our main orchestrator. We marked it with the [OrchestrationTrigger] attribute, and it works with the IDurableOrchestrationContext interface. It is started by the entry point and receives its arguments through GetInput<T>().

The orchestration context can return data in two forms: output and custom status. By default, we use the output to carry the result, but sometimes we combine the output with SetCustomStatus to publish status-based messages while the orchestration is still running.

There are different patterns we can use to organize an orchestrator’s lifecycle. Depending on the task, we may mix them or apply them one at a time. Here are the durable function patterns you can use in practice:

  1. Fan-out/ fan-in
  2. Function chaining
  3. Async HTTP APIs
  4. Monitoring
  5. Human interaction
  6. Aggregator (stateful entities)

This article is dedicated to the Function chaining pattern.

The Function Chaining pattern

In this pattern, the functions are executed in a chain, each one waiting for the previous. After one function finishes its work and produces a response, the next function accepts that response as its request, and the process continues in the same way. This is where the pattern’s name comes from. The mechanism is similar to the middleware pipeline in ASP.NET Core.
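To make the idea concrete, here is a minimal sketch of the pattern. The activity names Step1–Step3 are hypothetical (they are not part of the project above); each call awaits the previous activity and passes its result on as the next input.

```csharp
[FunctionName("ChainingSketchOrchestrator")]
public static async Task<string> RunChain(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Each activity awaits the previous one and consumes its output as input
    string first  = await context.CallActivityAsync<string>("Step1", "start");
    string second = await context.CallActivityAsync<string>("Step2", first);
    string third  = await context.CallActivityAsync<string>("Step3", second);

    // The final link of the chain becomes the orchestrator's output
    return third;
}
```

If any activity throws, a try/catch around the chain can collect the error, which is how the error messages in our acceptance criteria are accumulated.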


An example of function chaining can be found in our RunOrchestrator function. Notice that the HV_CanStartMoving activity is invoked first via CallActivityAsync. This activity checks whether the folders are ready for the transfer process and returns the identifiers of the source and destination folders as an object.

[FunctionName("HV_CanStartMoving")]
public async Task<StartMoveDocumentOutput> CanStartMoving([ActivityTrigger] MoveFolderData moveFolderData)
{
    var output = new StartMoveDocumentOutput();
    try
    {
        output.StartMoveDocument = await _moveFolderProvider.CanStartMoving(moveFolderData);
    }
    catch (Exception exp)
    {
        // Swallow and return the empty output; the orchestrator's
        // CanStartMoving() check then reports the failure
    }
    return output;
}

After that, the response received from it is sent to the HV_GetItemsInFolder activity as an argument. As you can see, we use one function’s response as the request to the next one.

[FunctionName("HV_GetItemsInFolder")]
public async Task<DocumentListOutput> GetItemsInFolder([ActivityTrigger] GetItemsInFolderRequest request)
{
    var output = new DocumentListOutput();
    output.AddMessage($"Trying to get documents from `{request.FolderName}` folder...");
    try
    {
        output.DocumentListResponse = await _moveFolderProvider.GetItemsInFolderAsync(request.SourceSystem, request.SourceFolderId);
        output.AddMessage($"{output.GetItemsCount()} {request.GenerateSuccessOutputMessage()}");
    }
    catch (Exception exp)
    {
        output.AddMessage(exp.Message);
    }
    return output;
}

Finally, let’s look at the request payload:

    "SourceSystem": "XXXXX",
    "Document": {
        "MoveOption": "Merge",
        "Source": {
            "BaseFolderPath": "Origin - Residential Working",
            "BaseFolderId": "1313131",
            "RelativeFolderPath": "/10009431"
        "Destination": {
            "BaseFolderPath": "Origin - Residential Working",
            "BaseFolderId": "1313131",
            "RelativeFolderPath": "/10000"

As the first part of the response, Durable Functions returns a set of management links and identifiers:

    "id": "ce50495bc31a4f5fb82e956e22f5ca63",
    "statusQueryGetUri": "http://localhost:7076/runtime/webhooks/durabletask/instances/ce50495bc31a4f5fb82e956e22f5ca63?taskHub=TestHubName&connection;=Storage&code;=8x2IdVNL9z5Bu3k3bH93VxXnr03QTbWE0ECeXTViB/2m8pCG28cbRQ==",
    "sendEventPostUri": "http://localhost:7076/runtime/webhooks/durabletask/instances/ce50495bc31a4f5fb82e956e22f5ca63/raiseEvent/{eventName}?taskHub=TestHubName&connection;=Storage&code;=8x2IdVNL9z5Bu3k3bH93VxXnr03QTbWE0ECeXTViB/2m8pCG28cbRQ==",
    "terminatePostUri": "http://localhost:7076/runtime/webhooks/durabletask/instances/ce50495bc31a4f5fb82e956e22f5ca63/terminate?reason={text}&taskHub;=TestHubName&connection;=Storage&code;=8x2IdVNL9z5Bu3k3bH93VxXnr03QTbWE0ECeXTViB/2m8pCG28cbRQ==",
    "purgeHistoryDeleteUri": "http://localhost:7076/runtime/webhooks/durabletask/instances/ce50495bc31a4f5fb82e956e22f5ca63?taskHub=TestHubName&connection;=Storage&code;=8x2IdVNL9z5Bu3k3bH93VxXnr03QTbWE0ECeXTViB/2m8pCG28cbRQ=="

From here, we can follow the step-by-step operation of the Durable Function via statusQueryGetUri and prepare responses for each request. When we open that URL, we get the following:

    "Source and Destination are OK. Can Start Moving Files...",
    "Trying to get documents from `/10009431` folder...",
    "4 Document(s) detected inside `/10009431` folder.",
    "Move Files successfully started...",
    "Warning : An item with the name '3.txt' already exists.. Can't move `3.txt` Document to `/10000`.",
    "Warning : An item with the name '2.txt' already exists.. Can't move `2.txt` Document to `/10000`.",
    "Preparing to find folder : `/Test3` inside Destination...",
    "/Test3 exists in destination!",
    "Trying to get documents from `Test3` folder...",
    "2 Document(s) detected inside `Test3` folder.",
    "Warning : An item with the name 'Test4' already exists.. Can't move `Test4` Folder to `/10000/Test3`.",
    "Warning : An item with the name '2.txt' already exists.. Can't move `2.txt` Document to `/10000/Test3`.",
    "Preparing to find folder : `/Test9` inside Destination...",
    "Warning : ./Test9 doesn't exist in destination!!!"

We prepared the messages shown above with output gathering; we will cover that technique in detail in upcoming articles.

Going back to the code: if the intended business requirements are met, the suborchestrator is invoked via the CallSubOrchestratorAsync method.

Suborchestrators are a mechanism used when a durable function architecture needs more than one orchestrator.
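A suborchestrator is declared exactly like a root orchestrator; only the invocation differs (CallSubOrchestratorAsync instead of an HTTP trigger). Here is a hedged sketch of what the shape of our HV_MoveFiles suborchestrator could look like; the body, the Documents property, and the HV_MoveSingleDocument activity name are illustrative assumptions, not the original implementation.

```csharp
[FunctionName("HV_MoveFiles")]
public async Task<List<string>> MoveFiles(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Input arrives from CallSubOrchestratorAsync in the root orchestrator
    var request = context.GetInput<MoveFilesRequest>();
    var messages = new List<string>();

    // A suborchestrator can call activities just like the root orchestrator does.
    // "Documents" and "HV_MoveSingleDocument" are hypothetical names for illustration.
    foreach (var document in request.Request.Documents)
    {
        string msg = await context.CallActivityAsync<string>("HV_MoveSingleDocument", document);
        messages.Add(msg);
    }
    return messages;
}
```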

In our example, the process of moving files from one place to another is separated into a suborchestrator function, and it is through this function that the main work of the task is executed. In upcoming articles, as a continuation of this one, we will look at the fan-out/fan-in pattern, collecting output messages, and creating custom statuses.