
Microsoft Agent Framework .NET 1.0

The first serious operating model for agentic systems in .NET


Microsoft Agent Framework 1.0 is not just another wrapper around an LLM endpoint. Microsoft is positioning it as the successor to the agent work that previously lived across Semantic Kernel and AutoGen, and the core 1.0 release is described as battle-tested, stabilised, and supported with backward compatibility going forward. At the same time, the current documentation and package still show some mixed signals, with some Learn pages still carrying preview notes and several surrounding packages still using prerelease versioning, so you need to approach 1.0 as a stable core with an environment that is still settling around it.

That tension is exactly why this release is worth understanding. Up to now, most .NET teams interested in agents have been forced into one of two bad choices. They either built thin prompt wrappers and called them agents, or they jumped into orchestration patterns without a solid runtime model for sessions, tools, middleware, durability, hosting, and protocol interoperability. Agent Framework is Microsoft’s attempt to give .NET teams a real application model for this space, not just a demo model. The official overview reduces the framework to two big ideas, agents and workflows, but that simple split is the right place to start because it tells you what the framework is trying to solve and what it is not.

The first capability is agents. In Microsoft’s model, an agent is the runtime boundary around an LLM-backed interaction. It can process input, call tools, work with MCP servers, manage conversational context, and produce responses. The second capability is workflows. Workflows are graph-based compositions that let multiple agents and functions cooperate in explicit execution paths with routing, checkpointing, and human-in-the-loop support. That separation is crucial. An agent is where reasoning and tool use happen. A workflow is where execution policy happens. If you blur those two concerns, your design gets messy fast.

The most important sentence in the current overview is also the most sobering one: if you can write a function to handle the task, do that instead of using an AI agent. That is not a disclaimer buried in the docs. It is the design principle that should govern every production use of the framework. Agents are for open-ended interpretation, ambiguous intent, context-sensitive decisions, and tool-guided reasoning. Workflows are for controlled multi-step execution. Plain .NET code is still the right answer whenever the work is deterministic. Developers who ignore this end up building expensive, slow, flaky systems that would have been better as ordinary application code.

Why .NET needed this framework

The old split between Semantic Kernel and AutoGen was always awkward in practice. Semantic Kernel leaned toward enterprise concerns such as filters, telemetry, structure, and service integration. AutoGen pushed harder into multi-agent patterns and orchestration. Microsoft Agent Framework combines those lines of thought into one model, explicitly calling out session-based state management, type safety, middleware, telemetry, and graph-based workflows as core features. That is a much better fit for how real .NET systems are built, because .NET teams care less about agent demos and more about stable abstractions, observable behaviour, and predictable hosting.

It is also good that the framework is not pinned to one model provider. The current providers overview for .NET calls out Azure OpenAI, OpenAI, Foundry, Anthropic, Ollama, GitHub Copilot, Copilot Studio, and custom providers. Underneath that, the framework leans heavily on Microsoft.Extensions.AI.IChatClient for chat-client-based scenarios, which is a very .NET way to structure the problem because it favours composition, dependency injection, decoration, and provider interchangeability over magical SDK lock-in.

This gives you a clean mental model. Your application owns the domain. The agent owns open-ended reasoning. Tools own deterministic side effects. Workflows own execution order and recovery. Hosting owns transport. That is much healthier than the common anti-pattern where the prompt tries to own everything at once.

The right mental model before you write a single line of code

You should think of Agent Framework as a layered execution pipeline, not as a chatbot library. The pipeline documentation is one of the most useful parts of the current material because it shows where the framework expects you to plug in behaviour. At the outer layer, agent middleware and telemetry wrap the run. Then the raw agent resolves context providers and gathers per-run middleware. Then the chat client pipeline handles model calls, tool invocation, and provider-specific communication. After that, responses flow back through the same layers, and context providers are notified so history can be stored.

That architecture is cool for two reasons. First, it stops you from shoving every concern into prompts. Second, it gives you multiple places to enforce policy. You can intercept a whole run, a tool invocation, or the low-level model call. That means content filtering, redaction, budget enforcement, audit logging, and approval are all first-class runtime concerns instead of brittle prompt instructions that the model may or may not obey. Microsoft’s middleware docs explicitly call out logging, security validation, error handling, and result transformation as intended use cases.

Once you see the framework this way, a lot of design questions become easier. Should this compliance rule live in the prompt, in a tool, or in middleware? Usually middleware. Should this long-running business process be one giant agent? Usually no, it should be a workflow. Should a database write be done by the model? Never. It should be a deterministic tool or plain application code invoked under policy.

The first serious agent in .NET

The quickest way into the framework on .NET is still a single agent backed by a Foundry or Azure OpenAI-style chat client. The current quickstart shows AIAgent created from AIProjectClient and invoked with either RunAsync or RunStreamingAsync. It also shows AgentSession for multi-turn memory, which is where things start to become genuinely useful for applications instead of toy prompts.

The example below is a pattern-focused draft based on the current .NET docs. It reflects the present API shape, but you should still verify exact package and namespace combinations in your chosen provider stack because the surrounding ecosystem is moving faster than the core 1.0 announcement.

using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

// Build an IChatClient from the project client, then wrap it in an agent.
IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

AIAgent agent = new ChatClientAgent(
    chatClient,
    """
    You are a motor insurance intake assistant.
    Collect the facts needed to open a claim and list what is still missing.
    """,
    "IntakeAssistant");

// A session carries multi-turn context across runs.
// Verify the exact session factory method name against your package version.
AgentSession session = agent.GetNewSession();

Console.WriteLine(await agent.RunAsync(
    "My name is Patrick. I have a comprehensive policy and a cracked windscreen.", session));

Console.WriteLine(await agent.RunAsync(
    "What details do you still need from me?", session));

The important thing here is not the syntax. It is the contract. The agent owns conversation flow and reasoning. The session owns continuity. The instructions define role and boundaries. The rest of your system should decide what the agent is allowed to do. That means tools for deterministic lookups, middleware for governance, and application code for actual business operations.

Sessions are not a nice-to-have, they are the centre of the design

A lot of engineers still treat conversational state as a UI concern. Agent Framework does not. The storage documentation makes it clear that storage controls where conversation history lives, how much history is loaded, and how reliably sessions can be resumed. The built-in model distinguishes between local session state, where full history lives in AgentSession.state, and service-managed storage, where the service owns the conversation and the session points to it via a service session identifier.

That distinction has architectural consequences. Local session state is fine for narrow services, internal automation, or short-lived calls where your app can carry the history. Service-managed storage becomes more attractive when your provider offers durable server-side conversation management. The trade-off is control. If you keep state locally, you own truncation strategy, persistence, encryption, residency, and replay. If the service manages it, you gain convenience but must be much more deliberate about compliance, retention, cross-border concerns, and audit boundaries. Microsoft explicitly warns that third-party servers and agents can affect where your data flows, which is not a footnote for regulated teams. It is a design constraint.

The practical lesson is simple. Do not start with “How do I make the model remember?” Start with “Who should own conversation state in this system?” That question belongs to your architecture, not to prompt engineering.
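When the answer to that question is "the application owns it", it helps to make the ownership explicit in code. The sketch below is purely illustrative application code, not a framework API: the interface, record, and method names are mine, and it exists only to show what owning truncation, persistence, and retention actually looks like.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical app-owned conversation store. Nothing here is an Agent Framework
// type; it shows the decisions you take on when state lives with your application.
public sealed record StoredTurn(string Role, string Text, DateTimeOffset At);

public interface IConversationStore
{
    // Your code decides persistence, encryption, and residency.
    Task AppendAsync(string conversationId, StoredTurn turn, CancellationToken ct = default);

    // Your code decides the truncation policy: maxTurns bounds what is
    // replayed into the next agent run, instead of an unbounded history.
    Task<IReadOnlyList<StoredTurn>> LoadRecentAsync(string conversationId, int maxTurns, CancellationToken ct = default);
}

With an abstraction like this, retention and replay are testable application concerns rather than side effects of wherever the session object happens to live.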

Middleware is where expert teams separate themselves from demo teams

Agent Framework’s middleware model is one of its strongest features. The current docs describe three distinct middleware types: agent run middleware, function-calling middleware, and IChatClient middleware. That split is not accidental. It mirrors the real places where production failures happen. Sometimes you need to reject a run before it reaches the model. Sometimes you need to wrap a tool call in approval or audit logic. Sometimes you need to instrument the raw model call itself.

A strong .NET design will use middleware to enforce runtime policy instead of treating prompts as law. Prompts are guidance. Middleware is policy. If a model tries to call a dangerous tool, you want code, not wording, to decide whether that happens. If a model request might leak sensitive content, you want code, not wording, to redact or block it. If a request exceeds cost thresholds or token budgets, you want code, not wording, to route it elsewhere.

Here is the kind of middleware pattern that makes sense in a real service.

using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

static async Task<AgentRunResponse> BudgetAndLoggingMiddleware(
    IEnumerable<ChatMessage> messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent innerAgent,
    CancellationToken cancellationToken)
{
    var joined = string.Join(Environment.NewLine, messages.Select(m => $"{m.Role}: {m.Text}"));

    // Reject oversized runs before they ever reach the model.
    if (joined.Length > 12_000)
    {
        throw new InvalidOperationException("Input too large for this agent. Route to batch workflow instead.");
    }

    Console.WriteLine($"[{DateTimeOffset.UtcNow:u}] Agent run starting. Session: {session?.Id}");

    var response = await innerAgent.RunAsync(messages, session, options, cancellationToken);

    Console.WriteLine($"[{DateTimeOffset.UtcNow:u}] Agent run completed. Messages returned: {response.Messages.Count}");

    return response;
}

And then you attach it with the builder pattern the framework documents for agent run middleware.

var guardedAgent = agent
    .AsBuilder()
    .Use(runFunc: BudgetAndLoggingMiddleware, runStreamingFunc: null)
    .Build();

That pattern lines up directly with the current middleware API guidance, including the agent builder flow and the fact that middleware forms a chain around agent execution.

The deeper point is that this makes Agent Framework feel like normal .NET. You are not throwing your engineering discipline away because the application happens to use an LLM. You are applying the same discipline through a runtime that actually gives you seams to plug into.

Tools and MCP are where agent systems either become useful or dangerous

Without tools, most agents are just articulate text generators. With tools, they become operational. Agent Framework supports both traditional function tools and MCP-based tools, but the capability matrix is not uniform across providers. The current tools documentation shows that function tools are broadly supported, while features such as tool approval, code interpreter, file search, web search, hosted MCP tools, local MCP tools, and image generation depend on the provider and on whether you are using chat completions, responses, assistants, Foundry, Anthropic, or something else.

That matrix is one of the most important things to understand before committing to a provider strategy. If your design assumes file search, hosted MCP, or approval flows, you cannot pick a provider first and discover the limitations later. The framework gives you a unified programming model, not identical capability under every backend. That is a meaningful difference. Uniform API shape is not the same as uniform execution semantics.

MCP support is especially interesting because it gives Agent Framework a standard way to connect models to external tools and context sources. The current docs describe local MCP tool support and position it as a standardised way for agents to access external tools and services. Common servers include GitHub, filesystem, and SQLite examples. That makes MCP a strong fit when you want reusable tool surfaces and standardised external capability boundaries rather than a pile of one-off function definitions.
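To make the local MCP path concrete, here is a sketch of attaching a filesystem server's tools using the official ModelContextProtocol C# SDK. The transport options, server command, and directory path are assumptions to verify against the SDK version you install.

using ModelContextProtocol.Client;

// Connect to a local MCP server over stdio (here the filesystem example server).
var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new StdioClientTransportOptions
    {
        Name = "filesystem",
        Command = "npx",
        Arguments = ["-y", "@modelcontextprotocol/server-filesystem", "/data"],
    }));

// Each MCP tool surfaces as an AIFunction, so the resulting list can sit
// alongside ordinary function tools in an agent's tool collection.
var mcpTools = await mcpClient.ListToolsAsync();

The point of the sketch is the boundary: the MCP server owns the capability, your code owns which of its tools an agent is allowed to see.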

Here is the design rule I would use in production. If the operation is a business capability owned by your application, expose it as a deterministic function tool with narrow parameters, validation, and audit. If the capability comes from an external standardised tool ecosystem, MCP is attractive. If the operation has side effects that matter to the business, add approval, policy, or workflow checkpoints around it. Do not let “the model decided to do it” become your control plane.
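As an illustration of that rule, a deterministic business capability exposed as a narrow function tool might look like this. AIFunctionFactory.Create comes from Microsoft.Extensions.AI; the policy lookup, its names, and the returned payload are hypothetical.

using System.ComponentModel;
using Microsoft.Extensions.AI;

// Hypothetical deterministic lookup. Validation happens in code,
// before anything reaches the model's control.
[Description("Returns the status of a policy given its exact policy number.")]
static string GetPolicyStatus(
    [Description("Policy number in the form POL-00000.")] string policyNumber)
{
    if (!policyNumber.StartsWith("POL-", StringComparison.Ordinal))
    {
        throw new ArgumentException("Invalid policy number format.", nameof(policyNumber));
    }

    // Deterministic application code: database lookup, audit log entry, etc.
    return """{ "status": "active", "cover": "comprehensive" }""";
}

AITool policyTool = AIFunctionFactory.Create(GetPolicyStatus);

The tool then goes into the agent's tool list, with approval or middleware policy layered around any invocation that has side effects.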

Workflows are the feature that turns this from an agent library into an application framework

Single-agent systems are useful, but the real power of Microsoft Agent Framework is in workflows. The framework’s workflow material describes graph-based execution, sequential orchestration, handoff, edges, executors, checkpoints, and human-in-the-loop patterns. The official sequential docs show agents processing work in turn and passing their full conversation history forward. The handoff docs describe a mesh-style topology where control can be transferred between agents without a central orchestrator. This is the part of the framework that matters most for serious systems.

That distinction between sequential and handoff is not academic. Sequential orchestration is a pipeline. You know the order in advance. It is ideal for transforms, review chains, structured enrichment, and staged analysis. Handoff is a delegation model. Control moves based on context. It is better for expert routing, triage, and conversational ownership changes. If you pick the wrong pattern, you either over-constrain the system or let it wander. Microsoft’s handoff documentation explicitly contrasts handoff with agent-as-tools, and that is a useful separation. In handoff, ownership moves. In agent-as-tools, the primary agent remains in charge.

The checkpoint model is particularly strong. The workflow checkpoint docs explain that checkpoints are created at the end of supersteps and capture executor state, pending messages, requests and responses, and shared state. You can then restore a run from a checkpoint or rehydrate into a new run. For long-running or approval-heavy business processes, this is far better than stuffing everything into one endless conversation thread.

Here is a workflow-shaped example in .NET that follows the current sequential orchestration pattern shown in the docs, but reframes it into a business scenario a .NET team might actually use.

using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

var intakeAgent = new ChatClientAgent(
    chatClient,
    """
    You triage incoming underwriting submissions.
    Extract the key facts and list missing information.
    """,
    "IntakeAgent");

var riskAgent = new ChatClientAgent(
    chatClient,
    """
    You assess underwriting risk.
    Classify the case as low, medium, or high risk and explain why.
    """,
    "RiskAgent");

var summaryAgent = new ChatClientAgent(
    chatClient,
    """
    Produce a final summary for a human underwriter.
    Use plain language. Include only facts present in the conversation.
    """,
    "SummaryAgent");

var workflow = AgentWorkflowBuilder.BuildSequential([intakeAgent, riskAgent, summaryAgent]);

var input = new List<ChatMessage>
{
    new(ChatRole.User, "Small commercial property in Cork. Prior water damage claim two years ago. Re-roofed in 2024.")
};

CheckpointManager checkpointManager = CheckpointManager.CreateInMemory();

await using var run = await InProcessExecution.RunStreamingAsync(workflow, input, checkpointManager);

await run.TrySendMessageAsync(new TurnToken(emitEvents: true));

await foreach (var evt in run.WatchStreamAsync())
{
    switch (evt)
    {
        case AgentResponseUpdateEvent update:
            Console.Write(update.Update.Text);
            break;

        case SuperStepCompletedEvent superStep:
            Console.WriteLine("\nCheckpoint captured at superstep.");
            break;

        case WorkflowOutputEvent output:
            Console.WriteLine("\nWorkflow completed.");
            Console.WriteLine(output.Data);
            break;
    }
}

This is where Agent Framework starts to feel like workflow infrastructure rather than prompt chaining. The run is explicit. The events are explicit. Recovery is explicit. That is what mature systems need.

Hosting is protocol, not business logic

The hosting story is another area where Agent Framework shows more maturity than most agent libraries. The Learn documentation separates hosting from agent behaviour and presents multiple exposure paths. In ASP.NET Core, the framework provides hosting libraries to register agents and workflows with dependency injection. The docs show AddAIAgent, AddWorkflow, in-memory session store configuration, workflow-to-agent conversion, and protocol adapters. The framework can expose agents via A2A, OpenAI-compatible endpoints, AG-UI, and Azure Functions durable hosting.

This is the right design. Your agent should not care whether it is called by an internal service, a browser client, another agent, or a standards-based protocol endpoint. Your hosting layer should adapt transport into the agent runtime, not the other way around. Microsoft’s own hosting material describes those libraries as protocol adapters around AIAgent, which is exactly the correct abstraction.

A stripped-down ASP.NET Core pattern looks like this.

using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Agents.AI;

var builder = WebApplication.CreateBuilder(args);

var endpoint = builder.Configuration["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

var deploymentName = builder.Configuration["AZURE_OPENAI_DEPLOYMENT_NAME"] ?? "gpt-4o-mini";

IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

builder.Services.AddKeyedSingleton("chat-model", chatClient);

// Register the agent with DI so protocol adapters can expose it.
// Verify the exact AddAIAgent overload in your hosting package version.
builder.AddAIAgent(
    "underwriting-assistant",
    instructions: "You help underwriters triage incoming submissions.");

var app = builder.Build();

That example mirrors the current hosting model, including keyed chat client registration and AddAIAgent. The interesting part is not the ceremony. It is what comes next. You can expose that same agent through A2A for agent-to-agent communication or through OpenAI-compatible endpoints for clients that already speak Chat Completions or Responses. The OpenAI integration docs explicitly position Chat Completions as the simpler stateless compatibility path and OpenAI-compatible hosting as a way to present your agent behind familiar APIs. The A2A integration docs position agent cards, message exchange, long-running tasks, and inter-framework interoperability as first-class concerns.

All of this matters because protocol interoperability is where many agent systems will live or die. Internal teams will not all standardise on the same framework. If you can expose an agent as A2A and consume another one through the same protocol, you are buying yourself architectural room to evolve.

Azure Functions durable hosting is where this becomes very interesting for serverless .NET developers

For teams already deep in Azure Functions, the durable hosting story is compelling. Microsoft’s Azure Functions integration docs describe durable task-based hosting with built-in HTTP endpoints, orchestration-based invocation, state persistence, and automatic scaling. The functions host can be configured with ConfigureDurableAgents(options => options.AddAIAgent(agent)), which turns an Agent Framework agent into a durable hosted service.
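Based on the ConfigureDurableAgents call quoted above, a minimal Functions isolated-worker host might be wired as follows. Everything around that single call is an assumption: the host builder shape is the standard isolated-worker pattern, and `agent` is an AIAgent built elsewhere (see the earlier examples). Verify against the durable hosting package before use.

using Microsoft.Extensions.Hosting;

// Sketch only: turns an existing AIAgent into a durable hosted service
// with built-in HTTP endpoints and persisted state.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureDurableAgents(options => options.AddAIAgent(agent))
    .Build();

await host.RunAsync();

The design choice here is that durability belongs to the host, not the agent: the same AIAgent can run in-process, behind ASP.NET Core, or durably in Functions without changing its definition.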

That is a strong fit for long-running conversations, approval workflows, batch-style agent execution, and event-driven systems where you want serverless economics but still need a durable agent runtime. It is also a natural fit for the kinds of systems .NET teams actually own, such as claims intake, underwriting triage, document analysis, support automation, and compliance review. The framework’s durable angle gives you a much cleaner story for pause and resume than trying to bolt durability onto an in-memory chat loop.

The design caution here is the same one I would apply to any serverless workflow. Do not make Azure Functions the place where you improvise architecture. Keep deterministic work deterministic. Keep tool calls narrow. Keep state boundaries explicit. Use checkpoints and durable execution to support business process needs, not as an excuse to blur concerns.

What the framework gets right, and where you still need to be careful

The biggest thing Agent Framework gets right is that it treats agent systems like software systems. You see that in sessions, middleware, checkpointing, protocol adapters, and workflow graphs. This is not a prompt toy wearing enterprise clothes. It is clearly designed for teams that need composition, hosting, recovery, and policy.

It also gets the provider story mostly right. By leaning on IChatClient and separating provider choice from agent and hosting patterns, Microsoft gives .NET teams a path to avoid hard lock-in. That does not remove provider differences, but it does mean your application architecture can stay more stable than your inference backend.

Where you still need to be careful is ecosystem maturity. The 1.0 announcement is real, and the core package is on NuGet as 1.0.0, but a number of adjacent packages and docs still carry prerelease markers or preview language. Workflows, durable task support, A2A hosting, and some integrations are still visibly in that transition zone. That does not make them unusable. It means you should lock versions carefully, validate examples against the specific provider stack you choose, and expect some documentation drift while the platform converges.

The other risk is cultural, not technical. Teams will still be tempted to use agents where they should use code. No framework can save you from that. Microsoft’s own guidance says as much. The right way to use Agent Framework is to narrow the surface area where the model is allowed to think, surround it with middleware and tools, and let workflows handle explicit process. If you do that, you get a powerful application model. If you do not, you get a more elaborate way to be unpredictable.

Where I think Microsoft Agent Framework .NET 1.0 fits best

I would use it when the problem has all four of these traits. The task has genuine ambiguity. The system benefits from tool use. The process needs state across turns or stages. The application needs real hosting and governance rather than a notebook demo. That includes support assistants with approvals, document triage pipelines, guided data collection, internal operations copilots, multi-step research workflows, and controlled delegation across specialist agents.

I would not use it for plain CRUD, deterministic workflows that can already be expressed cleanly in code, or “we want AI somewhere in the architecture” projects with no clear reasoning boundary. In those cases, ordinary .NET remains the better answer. That is not a knock on the framework. It is exactly the discipline the framework itself is asking you to keep.

Microsoft Agent Framework .NET 1.0 is the first time Microsoft’s agent story feels like it has an application architecture behind it rather than just a set of AI demos. The real value is not that you can create a chatty assistant in a few lines. You could already do that. The value is that you now have a coherent runtime model for agents, sessions, tools, middleware, workflows, checkpoints, and hosting in the same .NET-shaped world. That is what makes the release significant.

If you are a .NET engineer, the right way to think about this framework is not “How do I build an agent?” It is “Where does non-deterministic reasoning belong in my architecture, and how do I constrain it so the rest of the system stays reliable?” Microsoft Agent Framework 1.0 gives you a much better answer to that question than the ecosystem had a year ago.

REF: https://learn.microsoft.com/en-us/agent-framework/get-started/