Table of Contents
Introduction
The Real-World Challenge: Healthcare Claims Triage
Starting a Durable Orchestration via HTTP
Passing Input Parameters Securely and Efficiently
Complete Implementation Example
Best Practices for Enterprise Deployments
Conclusion
Introduction
In modern cloud-native architectures, Durable Functions—Azure’s serverless implementation of the orchestrator pattern—are pivotal for managing long-running, stateful workflows. But how do you initiate such a workflow on demand? And how do you safely inject contextual data into it?
These aren’t academic questions. In enterprise systems, especially in regulated domains like healthcare, finance, or logistics, you often need to manually trigger complex, multi-step processes with precise input—such as validating an insurance claim the moment it arrives from a partner portal.
Let’s explore this through a real-world scenario that mirrors production-grade demands.
The Real-World Challenge: Healthcare Claims Triage
Imagine you’re the lead cloud architect at a national health-tech platform. Every minute, your system receives electronic healthcare claims from clinics via a secure REST API. Each claim must undergo:
Patient eligibility verification (external API call)
Fraud pattern analysis (AI model inference)
Provider credential check (database lookup)
Approval or escalation based on risk score
This is a textbook use case for a Durable Orchestration: sequential, stateful, and resilient to transient failures. But the workflow doesn’t run on a schedule—it must start the instant a claim is submitted via HTTP.
That’s where manual triggering comes in.
Starting a Durable Orchestration via HTTP
Azure Durable Functions expose a built-in HTTP management API, but you don’t call it directly in production. Instead, you create a dedicated HTTP-triggered starter function.
This function:
Validates the incoming request
Extracts and sanitizes input
Starts the orchestration
Returns a status query URL for async monitoring
Here’s how it’s done in Python (Azure Functions runtime v4, Python v1 programming model, where the durable client arrives through a `durableClient` input binding named `starter` in function.json):

```python
import logging

import azure.functions as func
import azure.durable_functions as df
from pydantic import BaseModel, validator


# Define the expected input schema (Pydantic v1 validator style)
class ClaimSubmission(BaseModel):
    claim_id: str
    patient_id: str
    provider_npi: str
    amount: float
    service_date: str

    @validator('amount')
    def amount_must_be_positive(cls, v):
        if v <= 0:
            raise ValueError('Claim amount must be positive')
        return v


async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    logging.info("Received new healthcare claim submission.")

    try:
        req_body = req.get_json()
        claim = ClaimSubmission(**req_body)
    except Exception as e:
        return func.HttpResponse(
            f"Invalid input: {str(e)}",
            status_code=400
        )

    # Get the Durable Orchestration client from the durableClient binding
    client = df.DurableOrchestrationClient(starter)

    # Start the orchestration with a deterministic instance ID
    instance_id = f"claim-{claim.claim_id}"
    await client.start_new(
        orchestration_function_name="ProcessHealthcareClaim",
        instance_id=instance_id,
        client_input=claim.dict()
    )

    # Return the management URLs for status polling
    return client.create_check_status_response(req, instance_id)
```
This starter function returns a 202 Accepted with a Location header pointing to the orchestration’s status endpoint—enabling clients to poll for completion.
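The JSON body returned by that status endpoint carries a `runtimeStatus` field, and a client keeps polling until it reaches a terminal value. A minimal sketch of that stop-polling decision (`should_keep_polling` is a hypothetical helper; the status values are the ones documented for the Durable Functions HTTP API):

```python
# Terminal states reported by the Durable Functions status endpoint
TERMINAL_STATES = {"Completed", "Failed", "Terminated", "Canceled"}


def should_keep_polling(status_payload: dict) -> bool:
    """Return True while the orchestration is still in flight.

    status_payload is the JSON body fetched from the status query URL,
    e.g. {"runtimeStatus": "Running", ...}.
    """
    return status_payload.get("runtimeStatus") not in TERMINAL_STATES
```

A client loop would fetch the status URL, call this helper, and sleep-and-retry while it returns `True`.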
Passing Input Parameters Securely and Efficiently
The input passed to start_new() becomes the initial state of your orchestrator. In our case, it’s a structured ClaimSubmission object.
Key considerations:
Never pass raw, unvalidated JSON—always use a schema (e.g., Pydantic).
Avoid sensitive data (like SSNs) in orchestration input; pass tokens or encrypted references instead.
Keep payloads under 60 KB (Azure Storage limit for orchestration messages). For large data, store in Blob Storage and pass a SAS URI.
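A guard for that size ceiling can be sketched as follows; `upload_to_blob_and_get_sas_uri` is a hypothetical callable standing in for an azure-storage-blob upload plus SAS generation, and the envelope shape (`kind`, `claim`, `claim_uri`) is illustrative, not a library convention:

```python
import json

MAX_INLINE_PAYLOAD_BYTES = 60 * 1024  # orchestration message size ceiling


def prepare_orchestration_input(claim_payload: dict, upload_to_blob_and_get_sas_uri) -> dict:
    """Inline small payloads; offload large ones to Blob Storage.

    upload_to_blob_and_get_sas_uri is a hypothetical helper that stores the
    serialized payload and returns a read-only SAS URI for it.
    """
    serialized = json.dumps(claim_payload)
    if len(serialized.encode("utf-8")) <= MAX_INLINE_PAYLOAD_BYTES:
        return {"kind": "inline", "claim": claim_payload}
    blob_uri = upload_to_blob_and_get_sas_uri(serialized)
    return {"kind": "blob_ref", "claim_uri": blob_uri}
```

The orchestrator then checks `kind` on its input and, for `blob_ref`, delegates the download to an activity function.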
The orchestrator retrieves this input via `context.get_input()`:

```python
import azure.durable_functions as df


def ProcessHealthcareClaim(context: df.DurableOrchestrationContext):
    claim_data = context.get_input()  # dict from claim.dict()

    # Step 1: Verify eligibility
    eligibility = yield context.call_activity("CheckPatientEligibility", claim_data)
    if not eligibility["is_eligible"]:
        return {"status": "rejected", "reason": "ineligible"}

    # Step 2: Run fraud detection
    risk_score = yield context.call_activity("AnalyzeFraudRisk", claim_data)

    # Step 3: Validate provider
    provider_valid = yield context.call_activity("ValidateProvider", claim_data["provider_npi"])

    # Final decision
    if risk_score < 0.3 and provider_valid:
        return {"status": "approved", "claim_id": claim_data["claim_id"]}
    return {"status": "escalated", "risk_score": risk_score}


# Register the generator as the orchestrator entry point
main = df.Orchestrator.create(ProcessHealthcareClaim)
```
Each activity function (CheckPatientEligibility, etc.) receives only the data it needs—enforcing least privilege and reducing blast radius.
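For instance, `ValidateProvider` sees only the NPI string. A sketch of a cheap structural pre-check it might run before the actual credential lookup (the check-digit rule is the Luhn algorithm over the NPI prefixed with 80840; the helper name is an assumption, not part of the SDK):

```python
def npi_has_valid_check_digit(npi: str) -> bool:
    """Structural NPI check: 10 digits, Luhn over '80840' + the first 9 digits."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    payload = "80840" + npi[:9]
    total = 0
    # Luhn: double every second digit, starting from the right of the payload
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    check_digit = (10 - total % 10) % 10
    return check_digit == int(npi[9])
```

Failing fast on a malformed NPI spares a round trip to the credentialing database; the real lookup still happens in the activity for NPIs that pass.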
Best Practices for Enterprise Deployments
Use deterministic instance IDs (e.g., claim-{id}) to prevent duplicate processing.
Enable Application Insights for end-to-end tracing across orchestration steps.
Set timeouts on external calls to avoid stuck orchestrations.
Authenticate the starter endpoint using Azure AD or API Management.
Monitor replay logs: orchestrator code must be deterministic—no datetime.now() or random calls!
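On that last point: time-based logic stays replay-safe if deadlines are derived from the replay-stable clock the SDK exposes (`context.current_utc_datetime`) rather than `datetime.now()`. Keeping the arithmetic in a pure helper, as in this illustrative sketch, makes that easy to test:

```python
from datetime import datetime, timedelta


def escalation_deadline(orchestration_now: datetime, sla_hours: int = 24) -> datetime:
    """Pure deadline arithmetic; pass context.current_utc_datetime as
    orchestration_now so the result is identical on every replay."""
    return orchestration_now + timedelta(hours=sla_hours)
```

Inside the orchestrator this would be used as `deadline = escalation_deadline(context.current_utc_datetime)` followed by `yield context.create_timer(deadline)`.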
Conclusion
Manually triggering a Durable Orchestration via HTTP isn’t just about calling start_new(). It’s about building a secure, observable, and idempotent entry point into a mission-critical workflow.
In our healthcare claims scenario, this pattern ensures that every submission—whether from a clinic’s EHR system or a patient portal—kicks off a resilient, auditable, and compliant validation pipeline.
As a senior cloud architect, your job isn’t just to make it work—it’s to make it production-ready from day one. And with Durable Functions, you get statefulness, scalability, and serverless economics in one elegant package.