
Your M365 Declarative Agents Just Got Smarter But You Need to Retest Them

Jason Webster · April 2, 2026 · 3 min read

Every declarative agent running in your M365 tenant just got better. Microsoft upgraded the model powering Declarative Agents in M365 Copilot to GPT-5.2, and if your organization has agents in production, there are changes you need to watch for that may be hard to spot at first.

What Changed

Microsoft upgraded the underlying model to GPT-5.2, which brings four improvements over what was running before:

Reasoning. The model handles multi-step logic more reliably. Agents that were making errors on complex conditional workflows (where the right action depends on several inputs in combination) will perform better. Structured reasoning was a known weak point in earlier versions.

Tool use. Agents call external tools and APIs more accurately. Fewer hallucinated parameters, better handling of tool responses, and more consistent behavior when chaining multiple tool calls in a single session.

Document analysis. The model extracts structured information from unstructured documents more precisely. Agents built to process contracts, reports, or intake forms will produce cleaner outputs with fewer missed fields.

Structured output. JSON and schema-constrained outputs are more reliable. If your agents write to a downstream system or feed data into a pipeline, the formatting consistency has improved.

What's the Issue and What to Do Now?

Microsoft is also warning customers that the upgrade is automatic. If your agents depend on consistent reasoning, a new model, even a better one, may produce different outputs.

Validate your critical workflows. Identify the workflows your organization relies on most heavily and run them end to end. Compare outputs to what you were getting before. Look specifically for changes in tone, structure, tool call behavior, and any downstream system writes. A more capable model can interpret instructions differently than a less capable one did, and some prompts will need adjustment. Add additional checks if needed to catch drift from your original testing.
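One lightweight way to run that comparison is a golden-output check: replay saved prompts and diff the new responses against outputs captured before the upgrade. The sketch below is a minimal illustration, not a Copilot API; `run_agent` is a hypothetical placeholder for however your organization invokes the agent (a test harness, a Graph call, or a manual export).

```python
import difflib

def run_agent(prompt: str) -> str:
    # Placeholder: replace with your actual agent invocation.
    # Here it just returns a canned response for illustration.
    return '{"status": "approved", "amount": 1200}'

def compare_to_golden(prompt: str, golden: str) -> list[str]:
    """Return a unified diff between the pre-upgrade (golden) output
    and the output the upgraded model produces for the same prompt."""
    new = run_agent(prompt)
    return list(difflib.unified_diff(
        golden.splitlines(), new.splitlines(),
        fromfile="golden", tofile="new", lineterm=""))

# Golden output captured before the model change
golden = '{"status": "approved", "amount": 1200}'
diff = compare_to_golden("Process invoice INV-1042", golden)
if diff:
    print("Drift detected:")
    print("\n".join(diff))
else:
    print("Output matches pre-upgrade baseline")
```

Even a plain text diff like this surfaces structural drift quickly; for agents with strict formats you would parse and compare fields instead of raw lines.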

Let your agent builders know. Anyone in your organization who has built or maintains a declarative agent needs to know the model changed. They should run their own validation and flag anything behaving differently. Agents in Copilot Studio are not self-monitoring.

Check structured output agents first. Agents that write to databases, ticketing systems, or downstream APIs carry the most risk if behavior shifts. The tighter the output requirements, the more likely you are to see issues with a model change.
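For those tight output contracts, a strict parse-and-validate step between the agent and the downstream write catches formatting drift before it lands in a database or ticket queue. A minimal stdlib-only sketch, assuming a hypothetical ticket-creation agent whose output must match a fixed field set (the field names and allowed values here are invented for illustration):

```python
import json

# Hypothetical contract for a ticket-creation agent's output
REQUIRED_FIELDS = {"title": str, "priority": str, "assignee": str}
ALLOWED_PRIORITIES = {"low", "medium", "high"}

def validate_ticket(raw: str) -> list[str]:
    """Return a list of validation errors; an empty list means the
    payload is safe to hand to the downstream system."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], ftype):
            errors.append(f"wrong type for {field}")
    if data.get("priority") not in ALLOWED_PRIORITIES:
        errors.append(f"unexpected priority: {data.get('priority')!r}")
    extra = set(data) - set(REQUIRED_FIELDS)
    if extra:
        errors.append(f"unexpected fields: {sorted(extra)}")
    return errors

# A conforming response, then one where a field name has drifted
print(validate_ticket('{"title": "VPN down", "priority": "high", "assignee": "netops"}'))  # []
print(validate_ticket('{"title": "VPN down", "severity": "high", "assignee": "netops"}'))
```

Rejecting and logging a malformed payload is almost always cheaper than cleaning bad records out of a downstream system after the fact.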

Update your benchmarks. If you've been measuring agent accuracy or output quality against a baseline, reset that baseline now. The previous numbers aren't meaningful comparisons once the model changes. You need to reestablish your baseline and measure against the new normal.
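Resetting a baseline can be as simple as re-running your evaluation set against the upgraded model and recording the new pass rate as the reference point. A hypothetical sketch, where `evaluate` stands in for whatever check your team runs per test case:

```python
import json
from datetime import date

def evaluate(case: dict) -> bool:
    # Placeholder: run the agent on case["prompt"] and compare the
    # result to case["expected"]. Here, a trivial stand-in check.
    return case["expected"] == case["prompt"].upper()

eval_set = [
    {"prompt": "abc", "expected": "ABC"},
    {"prompt": "def", "expected": "DEF"},
    {"prompt": "ghi", "expected": "XYZ"},  # a case the agent fails
]

passed = sum(evaluate(c) for c in eval_set)
baseline = {
    "model": "GPT-5.2",
    "date": date.today().isoformat(),
    "pass_rate": passed / len(eval_set),
}
# Store this record; future runs are measured against it,
# not against numbers gathered under the old model.
print(json.dumps(baseline))
```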

Why This Matters for Enterprise Teams

Most organizations that have built declarative agents in M365 Copilot built them when the underlying model was less capable. Some of those agents were designed with workarounds: explicit instructions to handle edge cases the old model got wrong, extra tight prompts to generate more consistent output, manual validation steps added because the model couldn't be trusted on certain tasks.

Those workarounds might no longer be necessary. More capable reasoning and better structured output mean agents can be simplified: fewer instructions, fewer guardrails, cleaner prompts. Review whether those checks and balances are still earning their keep, and simplify where they aren't.

Probably the coolest thing about this progress is how quickly something that couldn't be done even months ago is now possible. Factor change and evolution into your AI plans. Have a way to review, upgrade, or replace legacy solutions that are no longer necessary as models improve.

Microsoft Announcement: https://techcommunity.microsoft.com/blog/microsoft365copilotblog/microsoft-365-copilot-declarative-agents-are-getting-smarter-with-gpt%e2%80%915-2/4504774