Why We Went All-In on Microsoft Foundry
by DeeDee Walsh, on Mar 31, 2026 10:53:52 AM
And what it means for enterprises sitting on millions of lines of legacy code.
If you're modernizing legacy applications at enterprise scale (we're talking millions of lines of Access, PowerBuilder, VB6, and WinForms code), your AI platform choice is more than academic. It's the difference between a modernization engine that ships production code and a science project that produces impressive demos and nothing else.
We chose Microsoft Foundry as the backbone of our VELO platform. Not because Microsoft asked nicely. Because when you look at what Foundry actually delivers versus the alternatives, the decision isn't close.
Here's the calculus behind that choice.
The Problem We're Solving
GAPVelocity AI modernizes legacy enterprise applications. Our clients are running business-critical systems built on Access databases with hundreds of thousands of lines of VBA, PowerBuilder applications spanning millions of lines of code, massive VB6 workloads, and .NET Framework monoliths that have been accumulating technical debt since before some of their current developers were born.
VELO, our agentic AI modernization platform, is way more than a syntax translator from one language to another. It reasons about application architecture, decomposes business logic, maps data access patterns, generates modern Blazor and C# code, and validates the output, autonomously. That requires orchestrating multiple AI agents across multiple models, grounded in deep technical context, with the governance controls that enterprise clients demand before they hand over the keys to their production codebases.
We needed a platform that could do all of that. Not a model API with a billing dashboard stapled on.
What Tipped the Decision
1. Model Diversity Without Model Lock-In
Legacy modernization isn't a one-model problem. Different tasks demand different strengths: high-reasoning models for complex architectural decomposition, faster models for repetitive syntax translation, and specialized models for code generation and validation.
Foundry's catalog gives us access to over 11,000 models, including first-party access to both OpenAI and Anthropic's Claude family. That's the difference between being locked into a single provider's roadmap and being able to route each task to the best model for the job. Foundry's Model Router handles this dynamically, optimizing across cost, performance, and quality in real time.
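To make the routing idea concrete, here is a minimal sketch of task-to-model routing. The task names mirror the ones in this post; the tier labels, table, and fallback behavior are our own illustrative assumptions, not the Model Router's actual logic or API.

```python
# Hypothetical sketch of task-based model routing. Tier names and the
# routing table are assumptions for illustration only.

ROUTING_TABLE = {
    # modernization task        -> model tier suited to it
    "architectural_decomposition": "high-reasoning",
    "syntax_translation":          "fast-throughput",
    "code_generation":             "code-specialized",
    "validation":                  "code-specialized",
}

def route(task_kind: str, default: str = "high-reasoning") -> str:
    """Pick a model tier for a task; fall back to the strongest tier
    when the task is unrecognized."""
    return ROUTING_TABLE.get(task_kind, default)
```

A production router would weigh cost and latency per request rather than use a static table, but the core design choice is the same: the task, not the platform, picks the model.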
No other cloud platform offers first-party OpenAI and Anthropic. That alone narrows the field considerably.
2. Agent-First Architecture
VELO is an agentic platform. Our Autonomous Engineer squads aren't single-prompt, single-response interactions; they're multi-step, multi-agent workflows where specialized agents handle discrete phases of the modernization process: code analysis, pattern recognition, architectural mapping, code generation, test generation, and validation.
Foundry was purpose-built for this. The Agent Service provides the orchestration runtime. Multi-agent workflows support the sequential, parallel, and conditional branching patterns our modernization pipelines require. Hosted Agents abstract the infrastructure so our engineering team focuses on modernization logic, not Kubernetes configurations.
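The sequential-with-conditional-branching pattern described above can be sketched in a few lines. Phase names follow this post; the functions, data shapes, and retry logic are illustrative assumptions, not the VELO implementation or the Agent Service API.

```python
# Minimal sketch of a sequential modernization pipeline with a
# conditional validation branch. Everything here is illustrative,
# not actual VELO or Foundry Agent Service code.

def analyze(src):
    # Code-analysis phase: extract context from the legacy source.
    return {"src": src, "patterns": ["data-access"]}

def generate(ctx):
    # Code-generation phase: emit modern output from the context.
    return {**ctx, "code": f"// modern C# for {ctx['src']}"}

def validate(ctx):
    # Validation phase: a stand-in check on the generated output.
    return {**ctx, "valid": "modern C#" in ctx["code"]}

def run_pipeline(src, max_retries=1):
    """Run phases sequentially; regenerate once if validation fails."""
    ctx = analyze(src)
    for _ in range(max_retries + 1):
        ctx = validate(generate(ctx))
        if ctx["valid"]:
            break
    return ctx
```

A real orchestrator adds parallel fan-out (e.g. analyzing modules concurrently) and persistent state between phases, which is exactly the infrastructure a hosted runtime abstracts away.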
Most competing platforms bolted agent capabilities onto what started as model-hosting services. Foundry's agent-first architecture means we're building on a foundation designed for what we're actually doing, not adapting our architecture to fit someone else's afterthought.
3. The Microsoft Ecosystem Advantage Is Real
We could pretend ecosystem doesn't matter. We'd be lying.
Our clients live in Microsoft environments. Their data sits in Azure. Their teams work in Microsoft 365 and Teams. Their security runs through Entra ID and Defender. Their compliance requirements map to Microsoft's certification portfolio: SOC 1/2/3, ISO 27001, HIPAA, and notably ISO/IEC 42001:2023, the AI Management Systems standard.
Foundry inherits all of this natively. We don't bolt on identity management. Entra ID is built in. We don't negotiate separate compliance certifications. They're inherited from the platform. Our agents get Entra Agent IDs, which means every automated action in a client's modernization pipeline is auditable, permissioned, and governed by the same identity infrastructure their IT teams already manage.
For our Microsoft co-sell partnerships, this matters even more. VELO on Foundry is a solution built on the platform Microsoft is investing billions in. That alignment accelerates every co-sell conversation we have.
4. Enterprise Governance That Doesn't Slow You Down
Here's the tension every AI platform has to resolve: developers want freedom to move fast; enterprise IT and security teams want control. Most platforms pick a side.
Foundry's two-tiered architecture, Foundry Resource at the governance level, Projects at the development level, actually solves this. IT sets the guardrails (approved models, networking policies, cost thresholds, security baselines) at the resource level. Our development teams build freely within those boundaries at the project level. Nobody's waiting on a ticket to try a new model. Nobody's deploying an agent that bypasses security policy.
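The two-tier split boils down to a simple invariant: project-level requests are evaluated against resource-level policy. Here is a hedged sketch of that idea; the policy fields and names are assumptions for illustration, not Foundry's actual policy schema.

```python
# Illustrative sketch of two-tier governance: IT owns the resource-level
# policy, projects operate freely within it. Field names are assumptions.

RESOURCE_POLICY = {
    "approved_models": {"gpt-family", "claude-family"},
    "monthly_cost_cap_usd": 50_000,
}

def is_allowed(model_family: str, projected_cost_usd: float,
               policy=RESOURCE_POLICY) -> bool:
    """A project-level request passes only inside the IT guardrails."""
    return (model_family in policy["approved_models"]
            and projected_cost_usd <= policy["monthly_cost_cap_usd"])
```

The point of the pattern is that the check runs automatically at deployment time, so neither side files a ticket: developers see an immediate pass/fail, and IT changes one policy object instead of reviewing every project.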
The Control Plane, which hit general availability at Ignite 2025, provides fleet-wide observability across every agent in production. For our clients running modernization at scale, this means centralized monitoring of quality, performance, safety, and cost across every VELO agent engaged in their project.
5. The Economics Work
We're not going to pretend cost wasn't a factor. A Forrester Total Economic Impact study published in February 2026 validated a 327% ROI over three years for organizations using Foundry, with a payback period under six months. The biggest driver: a 35% improvement in developer productivity.
For a modernization platform like VELO, developer productivity isn't an abstraction. It directly translates to faster project delivery, lower cost per application modernized, and better margins. Foundry's consumption-based pricing means we're not paying for idle capacity, and Agent Commit Unit pre-purchase plans give us cost predictability at scale.
The platform itself is free. We pay for what our agents actually consume. For a business model built on delivering fixed-price modernization engagements, that cost transparency is essential.
What We Evaluated (And Why We Passed)
We looked seriously at every major alternative. Here's our assessment:
Amazon Bedrock is solid for organizations already deep in AWS. But its agent orchestration is less mature, it lacks first-party Anthropic and OpenAI access on the same platform, and the governance story doesn't match Foundry's Control Plane. More importantly, our clients aren't AWS shops; they're Microsoft shops. Building on Bedrock would have added an ecosystem translation layer to every engagement.
Google Vertex AI excels at custom ML/MLOps. If you're training models from scratch, it's excellent. But we're not training models. We're orchestrating them in complex agentic workflows. Vertex AI's agent capabilities are growing but aren't as purpose-built as Foundry's. And the enterprise productivity integration (M365, Teams, Copilot Studio) doesn't exist.
OpenAI's enterprise platform is interesting but evolving. It offers fast access to the latest OpenAI models, but it's a single-provider ecosystem. For a platform like VELO that needs model diversity and dynamic routing, betting everything on one model family, however capable, is a strategic risk we aren't willing to take.
Databricks is excellent at what it does (data engineering and analytics), and we see it as complementary, not competitive. Foundry's native Databricks connector means our clients with Lakehouse architectures can ground VELO agents in their existing data infrastructure.
Caveats
We'd be the wrong partner to work with if we weren't transparent about the trade-offs.
Several of Foundry's most compelling features (Hosted Agents, Foundry IQ, multi-agent workflows, Memory) are still in public preview. We've built our architecture to be resilient through the preview-to-GA transition, but it's a reality every organization building on Foundry needs to plan for.
The platform has also been through three name changes in two years (Azure AI Studio → Azure AI Foundry → Microsoft Foundry), which creates documentation debt and occasional confusion. We've absorbed that complexity so our clients don't have to, but it's worth noting for anyone doing their own evaluation.
And the MCP Server, the interface for connecting agents to external tools, currently operates with a public endpoint and doesn't support Private Link. For some production scenarios, that's a constraint worth tracking.
What This Means for Our Clients
When an enterprise engages GAPVelocity AI to modernize their legacy applications with VELO, they're not getting a team of developers manually rewriting code with AI. They're getting an agentic modernization platform built on the most comprehensive enterprise AI infrastructure available, with the model diversity, orchestration depth, governance controls, and Microsoft ecosystem integration that large-scale modernization demands.
Foundry isn't perfect. No platform is. But for what we're building, autonomous modernization at enterprise scale, it's the right foundation by a significant margin.
And unlike the legacy systems our clients are modernizing, it's getting better every month.
GAPVelocity AI is the AI modernization business unit of Growth Acceleration Partners, specializing in autonomous legacy application modernization powered by the VELO platform on Microsoft Foundry. To learn more about how VELO transforms legacy Access, PowerBuilder, VB6, Clarion and .NET applications into modern cloud-native solutions, contact us.