Deciding Between a Normal LLM and AI Agents
Before diving deeper into the Agent Framework, it is important to understand when a normal LLM is sufficient and when an AI Agent becomes necessary. The right choice depends on the complexity, duration, and nature of the task, and choosing the right approach early helps reduce complexity, cost, and maintenance while delivering better results. A quick rule of thumb for selecting an approach:
Single-step, text-only task: use a normal LLM
Multi-step, tool-driven, or stateful task: use an AI Agent
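The rule of thumb can be captured in a tiny helper. This is a hypothetical sketch for illustration only; the `Task` fields and `choose_approach` function are invented here, not part of any framework:

```python
from dataclasses import dataclass

@dataclass
class Task:
    multi_step: bool = False   # needs more than one prompt-response cycle
    needs_tools: bool = False  # must call APIs, databases, files, etc.
    stateful: bool = False     # must remember results between steps

def choose_approach(task: Task) -> str:
    """Any agent-like trait tips the decision toward an AI Agent."""
    if task.multi_step or task.needs_tools or task.stateful:
        return "ai_agent"
    return "normal_llm"
```

For example, `choose_approach(Task())` yields `"normal_llm"`, while `choose_approach(Task(needs_tools=True))` yields `"ai_agent"`.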
When to Use a Normal LLM
Use a normal LLM when the problem can be solved with a single well-crafted prompt. These scenarios are usually simple, self-contained, and stateless. If one prompt can produce the final answer, a normal LLM is the right choice.
Key Characteristics
Minimal setup and configuration
Low latency and cost
Easy to debug and maintain
No need for orchestration, memory, or state
Typical Use Cases
One-time text generation (emails, summaries, explanations)
Simple Q&A or conversational responses
Code generation or code review
Translation, rewriting, or classification tasks
Reasoning that completes within a single prompt-response cycle
When to Use AI Agents
Use AI Agents when the task requires planning, decision-making, tool usage, or multiple steps. Agents are designed for problems where the model must think, act, observe results, and adapt. If the task requires iteration, memory, or actions beyond text generation, an AI Agent is the better choice.
Core Capabilities
Maintain state across multiple steps
Decide what action to take next
Recover from errors or failed steps
Interact with tools (APIs, databases, files, web)
Coordinate with other agents or humans
Typical Use Cases
Multi-step problem solving
Tasks involving external tools or systems
Decisions based on intermediate results
Long-running or stateful workflows
Human-in-the-loop approvals or validations
Multi-agent collaboration (planner, executor, reviewer)
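The capabilities above boil down to a think-act-observe loop. Below is a minimal plain-Python sketch of that loop, with a stubbed decision step where a real agent would call an LLM; the function names and the `tools` dictionary are assumptions made for this illustration, not Agent Framework APIs:

```python
def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    """Minimal think-act-observe loop: keep state, pick an action,
    invoke a tool, recover from failures, and adapt."""
    state = {"goal": goal, "observations": []}  # state persists across steps
    for _ in range(max_steps):
        # Think: decide the next action from the state so far (stubbed here).
        action = "lookup" if not state["observations"] else "finish"
        if action == "finish":
            return f"Done: {state['observations'][-1]}"
        # Act: invoke a tool, recovering from errors instead of crashing.
        try:
            result = tools[action](goal)
        except Exception as exc:
            result = f"error: {exc}"
        # Observe: record the result so the next decision can use it.
        state["observations"].append(result)
    return "Gave up after max_steps"
```

A single prompt-response call has none of this machinery, which is exactly why stateless tasks are cheaper to serve with a normal LLM.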
What is Agent Framework?
Agent Framework brings together AutoGen’s intuitive abstractions for single-agent and multi-agent patterns with Semantic Kernel’s enterprise-ready capabilities, including thread-based state management, strong type safety, filters, telemetry, and broad support for models and embeddings.
Going beyond a simple unification, Agent Framework introduces workflow constructs that give developers precise control over how multiple agents interact and execute. It also provides a powerful state management layer designed for long-running processes and human-in-the-loop use cases.
In essence, Agent Framework represents the evolution and convergence of both Semantic Kernel and AutoGen into a unified, next-generation platform.
Agent Framework provides two core sets of capabilities:
AI Agents: Self-contained agents powered by large language models that interpret user input, invoke tools and MCP servers to carry out actions, and generate responses. These agents support multiple model providers, including Azure OpenAI, OpenAI, and Azure AI.
Workflows: Graph-based orchestration mechanisms that link multiple agents and functions to execute complex, multi-step processes. Workflows enable type-driven routing, hierarchical composition through nesting, checkpointing for resilience, and request–response flows to support human-in-the-loop interactions.
How to choose between AI Agents and Workflows?
If your use case requires strict adherence to a sequence of events and pre-defined rules, use Workflows; otherwise, use AI Agents.
A Workflow example is checking out a cart item, which requires following steps in order: check the inventory for the desired quantity, reduce the item count in the inventory, temporarily reserve the item for the user, and so on. Each step must adhere to strict rules.
An AI Agents example is creating a social media campaign. It requires a combination of agents, mixing image-generation and text-generation agents to produce an email, a social media post, and so on.
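The checkout example can be sketched as a strict, ordered pipeline in plain Python. This is an illustration of the idea only, not Agent Framework code; the in-memory `inventory` and `reservations` dictionaries are stand-ins for real systems:

```python
def check_inventory(inventory: dict, item: str, qty: int) -> None:
    """Rule 1: the desired quantity must exist before anything else happens."""
    if inventory.get(item, 0) < qty:
        raise ValueError(f"insufficient stock for {item}")

def reduce_inventory(inventory: dict, item: str, qty: int) -> None:
    """Rule 2: decrement stock only after the check has passed."""
    inventory[item] -= qty

def reserve_for_user(reservations: dict, user: str, item: str, qty: int) -> None:
    """Rule 3: hold the item for the user."""
    reservations[user] = {"item": item, "qty": qty}

def checkout(inventory: dict, reservations: dict,
             user: str, item: str, qty: int) -> dict:
    """Each step must succeed before the next runs; the order is fixed."""
    check_inventory(inventory, item, qty)
    reduce_inventory(inventory, item, qty)
    reserve_for_user(reservations, user, item, qty)
    return reservations[user]
```

Because the order and rules are non-negotiable, this shape maps naturally onto a Workflow rather than a free-form agent.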
Code Walkthrough – AI Agents
Setup environment
uv venv
uv pip install agent-framework --pre  # the current version is in preview
To run a file, use uv run <filename>; in this case, uv run agentfw.py.
Below are a few simple examples demonstrating how to use an agent with the Microsoft Agent Framework, Ollama, and a Mistral AI model. The file is agentfw.py.
A simple implementation
import asyncio
from dotenv import load_dotenv
from agent_framework.ollama import OllamaChatClient

load_dotenv()

async def main():
    client = OllamaChatClient(model_id="ministral-3:3b")
    agent = client.create_agent(
        name="greet_agent",
        instructions="You greet everyone with funny message in 10 words",
    )
    response = await agent.run("Hi")
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
Output:
"Hey there! Hope your day is as chaotic as my coffee—stay awesome!" 😄☕
Single agent for multi turn conversations:
import asyncio
from dotenv import load_dotenv
from agent_framework.ollama import OllamaChatClient

load_dotenv()

async def main():
    client = OllamaChatClient(model_id="ministral-3:3b")
    agent = client.create_agent(
        name="greet_agent",
        instructions="You greet everyone with funny message in 10 words",
    )
    # Reuse the same thread so the agent retains conversation state across turns.
    thread = agent.get_new_thread()
    response = await agent.run("Hi", thread=thread)
    print(response)
    response = await agent.run("What is 1+1?", thread=thread)
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
Output:
"Hey there, Earthling! Hope your day’s as fun as my sarcasm today!" 😄
"Two! Or did you mean a question about my existential dread?" 😄
A multi-agentic system
Install the durable agents extension:
uv pip install agent-framework-azurefunctions --pre
This multi-agentic system is tightly coupled with Azure Functions, so an Azure subscription is required to utilize its full capabilities. Below is a simple demonstration.
import asyncio
from dotenv import load_dotenv
from agent_framework.ollama import OllamaChatClient
from agent_framework.azure import AgentFunctionApp
from agent_framework import (
    AgentRunResponse,
    WorkflowBuilder,
    AgentExecutor,
)

load_dotenv()

# ----------------------------
# Simple Greeting Agent App
# ----------------------------
client = OllamaChatClient(model_id="ministral-3:3b")
agent = client.create_agent(
    name="greet_agent",
    instructions="You greet everyone with funny message in 10 words",
)
app = AgentFunctionApp(agents=[agent])

# ----------------------------
# Workflow: Greet + Fortune
# ----------------------------
async def main():
    client = OllamaChatClient(model_id="ministral-3:3b")
    greet_agent = AgentExecutor(
        client.create_agent(
            name="greet_agent",
            instructions="You greet everyone with funny message in 10 words",
        )
    )
    fortune_agent = AgentExecutor(
        client.create_agent(
            name="fortune_agent",
            instructions="You have to tell fortune of today in 20 words",
        )
    )
    workflow = (
        WorkflowBuilder()
        .add_chain([greet_agent, fortune_agent])
        .set_start_executor(greet_agent)
        .build()
    )
    results = await workflow.run("Hi, I am Varun")
    for event in results:
        if isinstance(event.data, AgentRunResponse):
            print("----- START -----")
            print(type(event.data))
            print(event.data)
            print("------ END ------")

if __name__ == "__main__":
    asyncio.run(main())
Output:
No outgoing edges found for executor fortune_agent; dropping messages.
----- START -----
<class 'agent_framework._types.AgentRunResponse'>
"Varun! Welcome—may your day be as chaotic as your coffee!" 😄☕
------ END ------
----- START -----
<class 'agent_framework._types.AgentRunResponse'>
Today: Unexpected detours, a lucky charm, and a surprise—just like you! 🌟
------ END ------
The learning is that not every AI problem needs an agent. A normal LLM is the most efficient choice for single-step, stateless, text-only tasks where one well-crafted prompt can produce the outcome. AI Agents become essential when tasks involve multiple steps, decision-making, memory, tool usage, or long-running interactions. Workflows add another layer of control when strict sequencing, business rules, and reliability are non-negotiable. Microsoft’s Agent Framework unifies these paradigms by combining the flexibility of agents with the predictability of workflows, enabling developers to build scalable, maintainable, and production-ready AI systems. The real skill is not using agents everywhere, but knowing when simplicity is enough and when orchestration is required.
Thanks for reading till the end. I hope this was insightful.