Introduction
In our previous articles, we built a Task Management AI Agent, secured it with Azure AI Content Safety, and added comprehensive observability with Application Insights. However, we've been using .NET MVC with Razor views for the user interface—a traditional approach that limits our ability to create rich, interactive AI experiences.
Modern AI applications demand responsive, real-time interfaces with features like streaming responses, file uploads, and conversation history. To enable these capabilities, we need to separate our concerns: a robust .NET backend for business logic and a modern frontend framework for the user experience.
What you'll learn
Restructuring a monolithic .NET app into frontend and backend services
Building a Next.js 16 chat interface with React 19 and TypeScript
Implementing conversation persistence with PostgreSQL and dual-database architecture
Deploying the frontend to Azure Static Web Apps with CDN distribution
Setting up CI/CD pipelines for independent frontend and backend deployments
Managing conversation threads with metadata extraction and efficient JSON storage
Why this matters
Better User Experience : Modern UI patterns (ChatGPT-inspired interface, smooth animations, responsive design)
Independent Scaling : Scale frontend CDN separately from backend compute
Developer Productivity : Hot module replacement, TypeScript safety, component reusability
Foundation for Advanced Features : Enables future streaming, voice input, file uploads, and multimodal interactions
Production-Ready Architecture : Separate deployments, better security, improved performance
What we won't cover in this article (intentionally deferred): streaming responses and file uploads (see "Features Deferred to Future Articles" near the end of this article)
Source code : GitHub - TaskAgent with Next.js Frontend
The Problem with Monolithic UI
Our current .NET MVC application works, but has limitations:
Current Architecture Constraints
![.NET Monolithic]()
Challenges
Limited Interactivity : Razor views require full page refreshes
Tight Coupling : UI changes require backend redeployment
Scaling Inefficiency : Can't scale static content separately from compute
Developer Experience : No hot module replacement, limited tooling
Complex State Management : Difficult to manage conversation state client-side
The New Architecture: Frontend and Backend Separation
We'll transform our monolithic application into a modern, distributed architecture:
![New Architecture]()
Key benefits
Independent deployments : Update frontend without touching backend
Global CDN distribution : Fast load times worldwide
Horizontal scaling : Scale databases, backend, and CDN independently
Better security : API-only backend, no mixed concerns
Modern development : TypeScript, hot reload, component libraries
Part 1: Dual Database Strategy
One of our key architectural decisions is using two separate databases for different concerns:
Why Two Databases?
![Dual Database Strategy]()
PostgreSQL for Conversations: Native JSON Support
We use PostgreSQL specifically for its native JSON support , which is perfect for storing conversation threads:
Key advantages
Flexible Schema : AI agent conversations evolve (new message types, metadata)
Property Order Preservation : JSON type preserves property order (critical for $type deserialization)
Indexed Queries : Can create GIN indexes on JSON fields for fast searches
Native Operators : Rich query capabilities ( @> , -> , ->> ) (see the query sketch after this list)
Future-Ready : Easy to add pgvector extension for RAG (semantic search)
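To make the operators concrete, here is a hedged sketch of how they can be used from EF Core with a raw query. It assumes the ConversationDbContext and the default table/column names configured later in this article; because the column uses the json type, it is cast to jsonb before applying the operators.
// Sketch: filter threads by message count using PostgreSQL JSON operators ( -> plus a jsonb cast).
// Assumes EF Core's default table/column names for the ConversationDbContext shown later.
public async Task<List<ConversationThread>> GetLongConversationsAsync(int minMessages)
{
    return await _context.ConversationThreads
        .FromSqlInterpolated($"""
            SELECT *
            FROM "ConversationThreads"
            WHERE jsonb_array_length(
                "SerializedThread"::jsonb -> 'storeState' -> 'messages') >= {minMessages}
            """)
        .ToListAsync();
}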
Conversation Thread Storage Pattern
![Conversation Thread Storage Pattern]()
Each conversation is stored as a single JSON blob with extracted metadata:
// What gets stored in PostgreSQL (SerializedThread column)
{
  "storeState": {
    "messages": [
      {
        "role": "user",
        "contents": [
          {
            "$type": "text",
            "text": "create a task about learning Next.js with medium priority"
          }
        ]
      },
      {
        "createdAt": "2025-11-16T23:03:26+00:00",
        "role": "assistant",
        "contents": [
          {
            "$type": "functionCall",
            "callId": "call_QoZcRh5igDmsVQitvAHBLtbf",
            "name": "CreateTask",
            "arguments": {
              "title": "Learn Next.js",
              "description": "Study the fundamentals of Next.js framework",
              "priority": "Medium"
            }
          }
        ]
      },
      {
        "role": "tool",
        "contents": [
          {
            "$type": "functionResult",
            "callId": "call_QoZcRh5igDmsVQitvAHBLtbf",
            "result": "✅ Task created successfully!"
          }
        ]
      },
      {
        "createdAt": "2025-11-16T23:03:28+00:00",
        "role": "assistant",
        "contents": [
          {
            "$type": "text",
            "text": "✅ Task created successfully!\n**Title:** Learn Next.js"
          }
        ]
      }
    ]
  }
}
Message Types in Thread
role: "user" - User input messages
role: "assistant" + $type: "functionCall" - Agent function invocations (internal)
role: "tool" + $type: "functionResult" - Function execution results (internal)
role: "assistant" + $type: "text" - Final agent responses (visible to user)
Why blob storage?
Microsoft Agents Framework maintains conversation state internally. Rather than duplicating this into relational tables, we store the complete serialized thread and extract metadata for querying.
Why JSON type (not JSONB)? The json type preserves property order, which is critical for System.Text.Json polymorphic deserialization. The Microsoft Agents Framework requires $type to be the first property for correct deserialization. While JSONB offers better performance for queries, JSON ensures compatibility with the framework's serialization requirements.
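To make the ordering requirement concrete, here is a small standalone System.Text.Json example. The content types are illustrative stand-ins, not the Microsoft Agents Framework's own classes:
// Standalone illustration of why "$type" must come first in the JSON object.
using System.Text.Json;
using System.Text.Json.Serialization;

[JsonPolymorphic(TypeDiscriminatorPropertyName = "$type")]
[JsonDerivedType(typeof(TextContent), "text")]
[JsonDerivedType(typeof(FunctionCallContent), "functionCall")]
public abstract record Content;
public record TextContent(string Text) : Content;
public record FunctionCallContent(string CallId, string Name) : Content;

public static class TypeDiscriminatorDemo
{
    public static void Run()
    {
        // Succeeds: "$type" comes first, so the deserializer knows to build a TextContent.
        var ok = JsonSerializer.Deserialize<Content>("""{"$type":"text","text":"hello"}""");
        Console.WriteLine(ok is TextContent); // True

        // Throws by default: the discriminator is not the first property.
        // (.NET 9+ can relax this via JsonSerializerOptions.AllowOutOfOrderMetadataProperties.)
        JsonSerializer.Deserialize<Content>("""{"text":"hello","$type":"text"}""");
    }
}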
Implementation: PostgresThreadPersistenceService
Here's the key service that manages conversation persistence:
// Infrastructure/Services/PostgresThreadPersistenceService.cs
public class PostgresThreadPersistenceService : IThreadPersistenceService
{
private readonly ConversationDbContext _context;
private readonly ILogger<PostgresThreadPersistenceService> _logger;
public async Task SaveThreadAsync(string threadId, string serializedThread)
{
// Extract metadata for search and display
var (title, preview, messageCount) = ExtractMetadataFromJson(serializedThread);
var existingThread = await _context.ConversationThreads
.FirstOrDefaultAsync(t => t.ThreadId == threadId);
if (existingThread == null)
{
var newThread = ConversationThread.Create(
threadId, serializedThread, title, preview, messageCount
);
_context.ConversationThreads.Add(newThread);
}
else
{
existingThread.UpdateThread(serializedThread, title, preview, messageCount);
}
await _context.SaveChangesAsync();
}
private (string title, string preview, int messageCount) ExtractMetadataFromJson(
string serializedThread)
{
using var doc = JsonDocument.Parse(serializedThread);
var root = doc.RootElement;
string title = "New conversation";
string preview = string.Empty;
int messageCount = 0;
if (root.TryGetProperty("storeState", out var storeState) &&
storeState.TryGetProperty("messages", out var messages))
{
messageCount = messages.GetArrayLength();
// Extract first user message as title (max 50 chars)
foreach (var message in messages.EnumerateArray())
{
if (message.TryGetProperty("role", out var role) &&
role.GetString() == "user")
{
title = ExtractTextFromContents(message, 50);
break;
}
}
// Extract last assistant text message as preview (max 100 chars)
preview = ExtractLastAssistantPreview(messages, 100);
}
return (title, preview, messageCount);
}
}
Key features
Automatic title generation from first user message (max 50 chars)
Preview from last assistant text message (max 100 chars)
Efficient upsert logic (create or update)
Metadata extraction without full thread deserialization
Navigates nested JSON structure: storeState → messages → contents[0].text (a sketch of this helper follows)
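The text-extraction helpers referenced in the service aren't shown in full. A minimal sketch of ExtractTextFromContents, assuming it returns the first "$type": "text" entry truncated to maxLength characters, might look like this:
// Hypothetical sketch of the helper used for titles and previews above.
// Walks message.contents and returns the first "$type": "text" value, truncated.
private static string ExtractTextFromContents(JsonElement message, int maxLength)
{
    if (message.TryGetProperty("contents", out var contents))
    {
        foreach (var content in contents.EnumerateArray())
        {
            if (content.TryGetProperty("$type", out var type) &&
                type.GetString() == "text" &&
                content.TryGetProperty("text", out var text))
            {
                var value = text.GetString() ?? string.Empty;
                return value.Length <= maxLength ? value : value[..maxLength];
            }
        }
    }
    return string.Empty;
}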
Database Configuration
We use two separate connection strings in appsettings.json :
{
"ConnectionStrings": {
"TasksConnection": "Server=localhost;Database=TaskAgentDb;Trusted_Connection=true;",
"ConversationsConnection": "Host=localhost;Port=5432;Database=taskagent_conversations;Username=postgres;Password=your-password"
}
}
And two DbContexts in the Infrastructure layer:
// Infrastructure/Data/TaskDbContext.cs - SQL Server
public class TaskDbContext : DbContext
{
public DbSet<TaskItem> Tasks { get; set; }
public TaskDbContext(DbContextOptions<TaskDbContext> options) : base(options) { }
}
// Infrastructure/Data/ConversationDbContext.cs - PostgreSQL
public class ConversationDbContext : DbContext
{
public DbSet<ConversationThread> ConversationThreads { get; set; }
public ConversationDbContext(DbContextOptions<ConversationDbContext> options) : base(options) { }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<ConversationThread>(entity =>
{
entity.HasKey(e => e.ThreadId);
// Store SerializedThread as PostgreSQL json type (preserves property order)
entity.Property(e => e.SerializedThread)
.HasColumnType("json")
.IsRequired();
// Metadata columns for efficient queries
entity.Property(e => e.Title).HasMaxLength(200).IsRequired();
entity.Property(e => e.Preview).HasMaxLength(500);
// Indexes for conversation listing and filtering
entity.HasIndex(e => e.UpdatedAt).HasDatabaseName("IX_ConversationThreads_UpdatedAt");
entity.HasIndex(e => e.IsActive).HasDatabaseName("IX_ConversationThreads_IsActive");
entity.HasIndex(e => e.CreatedAt).HasDatabaseName("IX_ConversationThreads_CreatedAt");
});
}
}
Registration in InfrastructureServiceExtensions.cs :
// Register both DbContexts with their respective connection strings
services.AddDbContext<TaskDbContext>(options =>
options.UseSqlServer(configuration.GetConnectionString("TasksConnection")));
services.AddDbContext<ConversationDbContext>(options =>
options.UseNpgsql(configuration.GetConnectionString("ConversationsConnection")));
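Each DbContext also keeps its own migrations. A minimal sketch, assuming you apply them at startup in Program.cs (the repository may handle schema creation differently):
// Program.cs (sketch): apply pending EF Core migrations for both databases at startup.
// Assumes a migration set exists for each context; adjust if you manage schema differently.
using Microsoft.EntityFrameworkCore;

using (var scope = app.Services.CreateScope())
{
    var services = scope.ServiceProvider;
    await services.GetRequiredService<TaskDbContext>().Database.MigrateAsync();         // SQL Server
    await services.GetRequiredService<ConversationDbContext>().Database.MigrateAsync(); // PostgreSQL
}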
Part 2: Building the Next.js Frontend
With our database strategy in place, let's build the modern frontend that will consume our backend APIs.
Project Structure
We organized the repository as a monorepo with clear separation:
![Project structure]()
Tech Stack Choices
Why Next.js 16?
App Router : Server Components by default (better performance)
React 19 : Latest features, improved performance
Server Components : Reduce JavaScript sent to client
Built-in optimization : Image optimization, code splitting, automatic prefetching
Why TypeScript?
Type safety catches errors at compile time
Better IDE support (autocomplete, refactoring)
Self-documenting code
Easier refactoring as the app grows
Why Tailwind CSS 4?
Utility-first: Fast prototyping
No CSS bloat: Only used classes in production
Consistent design system
Excellent DX: IntelliSense support
Key Frontend Components
1. Chat Interface (ChatGPT-Inspired Design)
The main chat interface integrates the conversation sidebar with the chat area:
// components/chat/ChatInterface.tsx
export function ChatInterface() {
  const [isSidebarOpen, setIsSidebarOpen] = useState(false);
  const {
    messages,
    isLoading,
    threadId,
    handleSubmit,
    sendSuggestion,
    loadConversation,
  } = useChat();

  const hasMessages = messages.length > 0;
  const toggleSidebar = () => setIsSidebarOpen((prev) => !prev);

  // Load the conversation selected in the sidebar, then close the sidebar
  const handleConversationSelect = async (selectedThreadId: string) => {
    await loadConversation(selectedThreadId);
    setIsSidebarOpen(false);
  };

  return (
    <div className="h-screen flex bg-gray-50">
      {/* Conversation Sidebar */}
      <ConversationSidebar
        isOpen={isSidebarOpen}
        onClose={() => setIsSidebarOpen(false)}
        onConversationSelect={handleConversationSelect}
        currentThreadId={threadId}
      />

      {/* Main Chat Area */}
      <div className="flex-1 flex flex-col">
        {/* Header - Always visible */}
        <ChatHeader onToggleSidebar={toggleSidebar} />

        {/* Messages - Scrollable middle section */}
        <div className="flex-1 overflow-y-auto">
          {hasMessages ? (
            <ChatMessagesList messages={messages} />
          ) : (
            <EmptyChatState onSuggestionClick={sendSuggestion} />
          )}
        </div>

        {/* Input Area - Fixed at bottom */}
        <ChatInput onSubmit={handleSubmit} disabled={isLoading} />
      </div>
    </div>
  );
}
Key UX patterns
Dual-pane Layout : Sidebar for conversations, main area for chat
Adaptive Empty State : Centered welcome screen when no messages
Fixed Input : Always visible at bottom (ChatGPT-style)
Independent Scroll : Only message area scrolls
Smart Suggestions : Clickable buttons from AI responses
2. Custom Chat Hook (State Management)
We built a custom hook for complete control over the chat flow:
// hooks/use-chat.ts
export function useChat(options: UseChatOptions = {}): UseChatReturn {
const [messages, setMessages] = useState<ChatMessage[]>([]);
const [isLoading, setIsLoading] = useState(false);
const [threadId, setThreadId] = useState<string | null>(null);
const sendMessageInternal = async (message: string) => {
if (!message.trim() || isLoading) return;
// Optimistic update: Add user message immediately
const userMessage: ChatMessage = {
id: `temp-${Date.now()}`,
role: "user",
content: message,
createdAt: new Date().toISOString(),
};
setMessages((prev) => [...prev, userMessage]);
setIsLoading(true);
try {
// Call backend API
const response = await sendMessage({ message, threadId });
// Update threadId if new conversation
if (response.threadId && !threadId) {
setThreadId(response.threadId);
options.onThreadCreated?.(response.threadId);
}
// Add assistant response
const assistantMessage: ChatMessage = {
id: response.messageId,
role: "assistant",
content: response.message,
createdAt: response.createdAt,
metadata: { suggestions: response.suggestions || [] },
};
setMessages((prev) => [...prev, assistantMessage]);
} catch (err) {
// Rollback on error
setMessages((prev) => prev.slice(0, -1));
options.onError?.(err);
} finally {
setIsLoading(false);
}
};
return { messages, isLoading, threadId, sendMessageInternal, ... };
}
Why custom implementation?
Full control over request/response cycle
Optimistic updates with rollback on error
No external SDK dependencies
Callback system for thread creation
Prepared for future streaming migration
3. Conversation Management
Separate hook for managing the conversation list:
// hooks/use-conversations.ts
export function useConversations(options = {}): UseConversationsReturn {
const [conversations, setConversations] = useState<ConversationThread[]>([]);
const [isLoading, setIsLoading] = useState(false);
const [currentThreadId, setCurrentThreadId] = useState<string | null>(null);
const loadConversations = async () => {
setIsLoading(true);
try {
const response = await listThreads({
page: 1,
pageSize: 20,
sortBy: "UpdatedAt",
sortOrder: "desc",
});
setConversations(response.threads);
} finally {
setIsLoading(false);
}
};
const loadConversation = async (threadId: string) => {
const response = await getConversation({
threadId,
page: 1,
pageSize: PAGINATION.CONVERSATION_PAGE_SIZE,
});
setCurrentThreadId(threadId);
return response;
};
const deleteConversation = async (threadId: string) => {
await deleteThread(threadId);
setConversations((prev) => prev.filter((c) => c.id !== threadId));
};
return { conversations, isLoading, loadConversations, deleteConversation, ... };
}
Key features
Automatic sorting by last updated
Pagination support for large conversation lists
Optimistic UI updates on delete
Current thread tracking
4. Type-Safe API Client
Centralized API client with consistent error handling:
// lib/api/chat-service.ts
import { API } from "@/lib/constants";
const API_BASE_URL = API.BASE_URL; // From constants.ts
// Generic fetch wrapper with error handling
async function apiFetch<T>(
endpoint: string,
options: RequestInit = {},
errorMessage = "Request failed"
): Promise<T> {
try {
const response = await fetch(`${API_BASE_URL}${endpoint}`, {
headers: { "Content-Type": "application/json", ...options.headers },
...options,
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({
error: "NetworkError",
message: `HTTP ${response.status}`,
}));
throw new ApiError(errorData.message, response.status, errorData);
}
// Handle DELETE (204 No Content)
if (response.status === 204) return undefined as unknown as T;
return await response.json();
} catch (error) {
if (error instanceof ApiError) throw error;
throw new ApiError(error instanceof Error ? error.message : errorMessage);
}
}
// Export individual functions
export async function sendMessage(request: SendMessageRequest) {
return apiFetch<SendMessageResponse>(
"/api/Chat/send",
{ method: "POST", body: JSON.stringify(request) },
"Failed to send message"
);
}
export async function listThreads(request: ListThreadsRequest = {}) {
const params = new URLSearchParams({
...(request.page && { page: request.page.toString() }),
...(request.pageSize && { pageSize: request.pageSize.toString() }),
...(request.sortBy && { sortBy: request.sortBy }),
...(request.sortOrder && { sortOrder: request.sortOrder }),
});
return apiFetch<ListThreadsResponse>(
`/api/Chat/threads?${params}`,
{ method: "GET" },
"Failed to list threads"
);
}
export async function deleteThread(threadId: string) {
return apiFetch<void>(
`/api/Chat/threads/${threadId}`,
{ method: "DELETE" },
"Failed to delete thread"
);
}
Key patterns
Generic apiFetch wrapper reduces code duplication
Custom ApiError class with status code and response details
Type-safe request/response interfaces from @/types/chat
API URL from centralized constants (set via NEXT_PUBLIC_API_URL )
Backend API Endpoints
The .NET backend exposes these REST endpoints for the frontend:
// WebApp/Controllers/ChatController.cs
[ApiController]
[Route("api/[controller]")]
public class ChatController : ControllerBase
{
private readonly ITaskAgentService _taskAgent;
private readonly ILogger<ChatController> _logger;
// Send message (non-streaming)
[HttpPost("send")]
[ProducesResponseType(typeof(ChatResponse), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(ErrorResponse), StatusCodes.Status400BadRequest)]
public async Task<IActionResult> SendMessageAsync([FromBody] ChatRequest? request)
{
if (request == null || string.IsNullOrWhiteSpace(request.Message))
{
return ErrorResponseFactory.CreateBadRequest(
ErrorCodes.INVALID_REQUEST,
ErrorMessages.MESSAGE_REQUIRED
);
}
var response = await _taskAgent.SendMessageAsync(request.Message, request.ThreadId);
return Ok(response);
}
// List conversations with pagination, sorting, and filtering
[HttpGet("threads")]
[ProducesResponseType(typeof(ListThreadsResponse), StatusCodes.Status200OK)]
public async Task<IActionResult> GetThreadsAsync(
[FromQuery] int page = 1,
[FromQuery] int pageSize = 20,
[FromQuery] string? sortBy = "UpdatedAt",
[FromQuery] string? sortOrder = "desc",
[FromQuery] bool? isActive = null)
{
var response = await _taskAgent.GetThreadsAsync(
new ListThreadsRequest
{
Page = page,
PageSize = pageSize,
SortBy = sortBy,
SortOrder = sortOrder,
IsActive = isActive
}
);
return Ok(response);
}
// Get conversation history with pagination
[HttpGet("threads/{threadId}/messages")]
[ProducesResponseType(typeof(ConversationHistoryResponse), StatusCodes.Status200OK)]
public async Task<IActionResult> GetConversationHistoryAsync(
string threadId,
[FromQuery] int page = 1,
[FromQuery] int pageSize = 50)
{
var response = await _taskAgent.GetConversationHistoryAsync(threadId, page, pageSize);
return Ok(response);
}
// Delete conversation
[HttpDelete("threads/{threadId}")]
[ProducesResponseType(StatusCodes.Status204NoContent)]
public async Task<IActionResult> DeleteThreadAsync(string threadId)
{
await _taskAgent.DeleteThreadAsync(threadId);
return NoContent();
}
}
Key features
RESTful design with standard HTTP verbs
Pagination support for scalability
Type-safe DTOs shared between frontend and backend (illustrative shapes sketched below)
Error handling with appropriate status codes
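The DTO definitions aren't reproduced here. Based on the fields the frontend consumes in use-chat.ts (threadId, messageId, message, createdAt, suggestions), their shapes are roughly as follows; the property names are inferred, so treat this as a sketch rather than the exact contract:
// Inferred request/response shapes for the /api/Chat/send endpoint (sketch).
public record ChatRequest(string Message, string? ThreadId);

public record ChatResponse(
    string ThreadId,
    string MessageId,
    string Message,
    DateTimeOffset CreatedAt,
    IReadOnlyList<string>? Suggestions);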
Part 3: Azure Deployment Strategy
Now that we have separate frontend and backend, we need independent deployment pipelines.
Azure Resources Overview
Resource Group: rg-taskagent-prod
├── Azure Static Web Apps (Frontend)
│ └── Name: stapp-taskagent-prod
│ ├── SKU: Standard (supports custom domains, auth)
│ ├── Region: Auto (global CDN)
│ └── GitHub integration: Enabled
│
├── Azure App Service (Backend)
│ └── Name: app-*****
│ ├── SKU: B1 (Basic, Linux)
│ ├── Runtime: .NET 10
│ └── Region: Central US
│
├── Azure SQL Database (Tasks)
│ └── Name: sql-*****/TaskAgentDb
│ ├── SKU: Basic (5 DTU)
│ └── Storage: 2 GB
│
├── Azure Database for PostgreSQL (Conversations)
│ └── Name: psql-*****
│ ├── SKU: Burstable B1ms (1 vCore, 2 GiB RAM)
│ ├── Storage: 32 GiB
│ └── Database: taskagent_conversations
│
├── Azure OpenAI Service
│ └── Deployment: gpt-4o-mini
│
└── Azure AI Content Safety
└── Endpoint: contentsafety-taskagent-prod
Azure Deployment Setup
Prerequisites
Step 1: Create Azure PostgreSQL Database
Detailed setup guide : See Azure PostgreSQL Flexible Server documentation
Create an Azure Database for PostgreSQL Flexible Server with these settings:
SKU : Burstable B1ms (1 vCore, 2 GiB RAM, ~$18/month)
Storage : 32 GiB
Version : PostgreSQL 18
Database Name : taskagent_conversations
Configure Authentication
Authentication Method : PostgreSQL authentication only
Admin Username : Enter a secure admin username
Password : Enter a secure password (min 8 characters, complexity requirements apply)
Confirm Password : Re-enter the password
Configure Networking
Review + Create
Click "Review + create"
Verify settings
Click "Create"
Create Database
After deployment, click "Go to resource"
In the left menu, select "Settings", then click "Databases"
Click "+ Add"
Database Name : taskagent_conversations
Click "Save"
Get Connection String
In left menu, select "Connect"
Copy the "ADO.NET" connection string format:
Host=psql-*****.postgres.database.azure.com;Port=5432;Database=taskagent_conversations;Username=*****;Password={your_password};Ssl Mode=Require;
Important : Replace {your_password} with the password you set when configuring authentication
Save this connection string - you'll need it for backend configuration
Why Burstable tier? Perfect for applications with variable workloads. CPU credits accumulate during idle periods and burst during active use.
Step 2: Update Backend Configuration
Add the PostgreSQL connection string to your backend's App Service configuration using the Azure Portal:
Navigate to Your App Service : In the Azure Portal, open your App Service ( app-***** )
Configure Connection String : Under Settings , open "Environment variables" and switch to the "Connection strings" tab, then click "+ Add"
Add PostgreSQL Connection :
Name : ConversationsConnection
Value : Paste the connection string from Step 1 (with your actual password)
Host=psql-****.postgres.database.azure.com;Port=5432;Database=taskagent_conversations;Username=*****;Password=YourActualPassword;Ssl Mode=Require;
Type : Select "PostgreSQL" from dropdown
Click "OK"
Apply Changes :
Your backend will automatically pick up this connection string at runtime; values set in App Service override the ConversationsConnection entry in appsettings.json .
Step 2.5: Configure CORS for Production
To allow the Next.js frontend (Azure Static Web Apps) to communicate with the backend (Azure App Service), we need to configure Cross-Origin Resource Sharing (CORS).
Why configure in App Service Configuration instead of appsettings.json?
Security : Sensitive URLs not committed to Git repository
Flexibility : Change URLs without redeployment
Azure Best Practice : Use App Service Configuration for environment-specific values
No code changes : Update configuration through Azure Portal only
Backend Configuration
Configure CORS in App Service :
In Azure Portal, navigate to your App Service: "app-*****"
In left menu under Settings , click "Environment variables"
Switch to "App settings" tab
Click "+ Add"
Add CORS Configuration :
Name : Cors__AllowedOrigins__0
Value : Your Static Web App URL, for example https://[your-app].azurestaticapps.net (no trailing slash)
Important Notes :
Use double underscore __ to represent nested configuration keys (the standard ASP.NET Core environment-variable convention)
Array index starts at 0 for the first origin
For multiple origins, add Cors__AllowedOrigins__1 , Cors__AllowedOrigins__2 , etc.
Apply Changes :
Click "Apply" at the bottom
Click "Confirm" in the popup warning
The App Service will automatically restart with new configuration
Verify CORS Implementation :
The backend already reads this configuration (in PresentationServiceExtensions.cs ):
// Configure CORS for Next.js frontend
string[] allowedOrigins = configuration
.GetSection("Cors:AllowedOrigins")
.Get<string[]>() ?? ["http://localhost:3000"];
services.AddCors(options =>
{
options.AddDefaultPolicy(policy =>
{
policy
.WithOrigins(allowedOrigins)
.AllowAnyHeader()
.AllowAnyMethod()
.AllowCredentials();
});
});
How it works
Development : Uses appsettings.json → http://localhost:3000
Production : Uses App Service Configuration → https://*.azurestaticapps.net
Azure App Service automatically maps Cors__AllowedOrigins__0 to Cors:AllowedOrigins[0]
No code deployment needed - configuration takes effect immediately after restart
Common CORS Errors
"No 'Access-Control-Allow-Origin' header" : Verify App Service configuration key is exactly Cors__AllowedOrigins__0
"Credentials flag is 'true', but CORS is not allowed" : Verify AllowCredentials() in policy
Preflight (OPTIONS) fails : Verify AllowAnyMethod() in policy
Still getting errors after configuration : Check App Service logs - restart may be required
Step 3: Create Azure Static Web Apps Resource
Create an Azure Static Web App (Standard tier) with deployment source set to "Other" for manual GitHub Actions setup.
After creation
Copy the deployment token from Azure Portal → Static Web App → Manage deployment token
Add it as GitHub Secret: AZURE_STATIC_WEB_APPS_API_TOKEN
Add backend URL as GitHub Secret: NEXT_PUBLIC_API_URL
Detailed setup : Azure Static Web Apps documentation
Step 4: Create GitHub Actions Workflow
Create .github/workflows/frontend.yml with this configuration:
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
    paths:
      - ".github/workflows/frontend.yml"
      - "src/frontend/task-agent-web/**"
      - "!src/frontend/task-agent-web/**/*.md"

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    name: Build and Deploy
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Setup pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 9

      - name: Install dependencies
        run: |
          cd src/frontend/task-agent-web
          pnpm install --frozen-lockfile

      - name: Build Next.js
        env:
          NEXT_PUBLIC_API_URL: ${{ secrets.NEXT_PUBLIC_API_URL }}
        run: |
          cd src/frontend/task-agent-web
          pnpm build

      - name: Deploy to Azure Static Web Apps
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "src/frontend/task-agent-web/out"
          output_location: ""
          skip_app_build: true
          skip_api_build: true
Key configuration
Node.js 20 (required for Next.js 16)
pnpm package manager
Environment variable injection at build time
Path trigger (runs only when the workflow file or files under src/frontend/task-agent-web/ change)
Important : The workflow injects NEXT_PUBLIC_API_URL during the build step, as Next.js static exports require environment variables at build time (not runtime).
Deployment Flow
Once configured, deployments happen automatically:
![Deployment Flow]()
Benefits
Independent deployments : Frontend updates don't require backend redeployment
Faster iterations : Frontend-only changes deploy in ~2 minutes
Reduced risk : Smaller, focused deployments are easier to rollback
Parallel development : Frontend and backend teams can work independently
Part 4: Testing and Monitoring
Test the deployed application
Open your Static Web App URL: https://[your-app].azurestaticapps.net
Try these prompts: for example, the one used earlier in this article: "create a task about learning Next.js with medium priority"
Test conversation management: start a new conversation, reopen a previous one from the sidebar, and delete a thread to confirm the list updates
![Chat interface]()
Monitor with Application Insights:
Live Metrics : Real-time request monitoring
Transaction Search : Detailed traces for each chat interaction
Failures : Error tracking and diagnostics
Features Deferred to Future Articles
Streaming Responses : Microsoft Agents Framework supports streaming via InvokeStreamingAsync() , but wiring it up adds complexity (SSE endpoints, chunk parsing). We'll add this in a dedicated article.
File Upload : Requires Azure Blob Storage, multimodal prompts, and additional security. Coming in "Voice and Multimodal Interactions" article.
Key Takeaways
What We Achieved
Separated frontend and backend : Independent deployments, better scalability
Modern chat interface : ChatGPT-inspired UI with React 19 and Next.js 16
Dual database architecture : SQL Server for tasks, PostgreSQL for conversations
Conversation persistence : Efficient JSON storage with metadata extraction
Global CDN distribution : Fast frontend delivery via Azure Static Web Apps
CI/CD pipelines : Automated deployments for both frontend and backend
Production-ready infrastructure : Proper security, monitoring, and cost optimization
Architecture Benefits
Independent scaling : CDN, backend, and databases scale separately
Better developer experience : Hot reload, TypeScript, modern tooling
Improved performance : Server Components, code splitting, edge caching
Foundation for advanced features : Ready for streaming, file uploads, voice input
Cost-effective : Pay only for what you use, optimize each component independently
What's Next
With our modernized frontend and backend separation, we're ready for the next phase of enhancements. In the upcoming articles, we'll add:
AI Search for semantic task search and RAG patterns
Multi-agent orchestration for complex task scenarios
Voice and multimodal interactions (including file uploads)
Related Articles in This Series