Early generative AI usage was defined by asking LLMs to write poems and summarize meetings that probably should have been emails.
At GAPVelocity AI, we transitioned from basic experimentation to a full-scale Modernization Factory, dragging legacy codebases into the modern era. Through our work (which we call VELO Labs), we’ve identified five realities for any engineering team looking to move past the “Hello World” stage and into production-grade AI adoption.
Here are five lessons from the front lines of 2025:
1. The Worker-Judge Architecture
The biggest lie of early GenAI was that a "perfect prompt" existed. We learned that a single-shot prompt is just a prayer; an Agentic Workflow is a strategy.
- The Lesson: Stop trying to get one model to do everything.
- The Technical Shift: We moved to a Worker-Judge architecture. One agent (the Worker) generates the code, while a second agent (the Judge), trained specifically on architectural constraints and standards, tears it apart. If the Judge "smells garbage," the code is sent back with a stack trace and a list of grievances. This creates a Self-Healing Loop where the AI is forced to fix its own mistakes by looking at actual compiler feedback rather than just guessing again.
2. Build Factories, Not Chatbots
You can’t chat your way through a 500-table database migration. In 2025, we realized that the "holy grail" of modernization isn't an AI that understands code. It's a deterministic pipeline where the AI is just one of many specialized nodes.
- The Lesson: Wrap the non-deterministic nature of AI in cold, hard, deterministic engineering.
- The Technical Shift: Our pipelines now follow a rigid 4-phase factory model:
- Phase 1: Deterministic schema extraction.
- Phase 2: Automated generation of the data access layer (e.g., Entity Framework).
- Phase 3: The "UI/Logic Shift" where the LLM finally earns its keep by converting legacy logic into modern components.
- Phase 4: The Judge/Self-Healing loop to ensure the final product actually builds.
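The four phases above can be wired together as a pipeline in which only one node is non-deterministic. The sketch below is illustrative: every function body is a placeholder (the real phases would call schema readers, code generators, an LLM, and a build), but the shape of the factory is the point.

```python
# Illustrative 4-phase factory: deterministic phases wrap the single
# non-deterministic (LLM) node. All implementations are placeholders.

def extract_schema(db_url: str) -> dict:
    # Phase 1: deterministic schema extraction (stubbed).
    return {"tables": ["customers", "orders"]}

def generate_data_access(schema: dict) -> list[str]:
    # Phase 2: deterministic generation of the data access layer.
    return [f"class {t.title()}Repository: ..." for t in schema["tables"]]

def llm_ui_logic_shift(legacy_module: str) -> str:
    # Phase 3: the only non-deterministic node (stand-in for an LLM call).
    return legacy_module.replace("WebForms", "Blazor")

def build_gate(artifacts: list[str]) -> bool:
    # Phase 4: deterministic gate — does the final product build?
    return all(artifacts)

def run_factory(db_url: str, legacy_module: str) -> list[str]:
    schema = extract_schema(db_url)
    artifacts = generate_data_access(schema)
    artifacts.append(llm_ui_logic_shift(legacy_module))
    if not build_gate(artifacts):
        raise RuntimeError("Build gate failed; route back to Phase 3")
    return artifacts
```

Because Phases 1, 2, and 4 are deterministic, a failure can only originate in one place, which is what makes the self-healing loop tractable.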
3. Give Your AI "Eyes" (The MCP Revolution)
If you’re still copy-pasting code into a chat window, you’re living in the past. Denying an LLM direct access to your codebase’s structure puts up unnecessary barriers.
- The Lesson: Context requires structured tooling, not just more text.
- The Technical Shift: We’ve fully leaned into the Model Context Protocol (MCP). By building an MCP server foundation, we give our agents direct access to the Abstract Syntax Tree (AST) of a project. Instead of the LLM "guessing" what’s in a file, it uses deterministic tools to query the codebase, asking for specific interfaces or repository implementations, and then executes a refactor plan project-wide.
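To make the idea concrete, here is the shape of one such deterministic tool. The text describes .NET projects behind an MCP server; this Python analogue (using the standard-library `ast` module) only illustrates what "querying the AST instead of guessing" looks like.

```python
import ast

# Illustrative analogue of a deterministic codebase-query tool: rather
# than pasting a file into a chat window, the agent calls a tool that
# reports exactly what the AST contains.

def list_definitions(source: str) -> dict[str, list[str]]:
    """Deterministically report the classes and functions in a module."""
    tree = ast.parse(source)
    defs = {"classes": [], "functions": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            defs["classes"].append(node.name)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defs["functions"].append(node.name)
    return defs
```

An agent calling a tool like this gets a ground-truth answer ("this file defines `OrderRepository`") instead of a plausible hallucination, which is what makes project-wide refactor plans safe to execute.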
4. AI as a Tool-Maker (The Meta-Engineering Shift)
Asking an LLM to fix a missing semicolon is like using a flamethrower to light a candle. It's overkill and remarkably prone to "accidental" fires.
- The Lesson: Use GenAI to write deterministic tools, then let the tools fix the code.
- The Technical Shift: Instead of having the AI touch the code directly for repetitive syntax fixes, we use GenAI to write Roslyn Analyzers and Code Fixers. The AI identifies the pattern of a bug across thousands of legacy files and generates a deterministic fixer. We then run that fixer across the solution, combining AI's pattern-matching with the C# compiler’s certainty.
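The Roslyn analyzers described above are C#; the Python sketch below only illustrates the division of labor. The rule itself is a hypothetical example of what an LLM might emit after spotting a repeated bug, and once emitted, the rule (not the LLM) is what edits the code.

```python
import re

# Sketch of the meta-engineering shift: the LLM's job ends once it has
# produced a deterministic rule; the rule is then run across the
# solution with no further AI involvement.

# Hypothetical LLM-generated rule: legacy files compare with `== None`
# where `is None` is required.
FIXER = (re.compile(r"==\s*None\b"), "is None")

def apply_fixer(source: str) -> str:
    """Apply the deterministic fix to one file's contents."""
    pattern, replacement = FIXER
    return pattern.sub(replacement, source)
```

Running `apply_fixer` over thousands of files is cheap, repeatable, and auditable in a way that thousands of individual LLM edits never are.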
5. "Intelligence" is an Infrastructure Problem
Public LLM updates are a nightmare for production workflows. Every time a provider "improves" their model, your specialized prompts break.
- The Lesson: You must own your model versions and your tribal knowledge.
- The Technical Shift: We solved "prompt drift" by moving to Azure AI Foundry, allowing us to snapshot specific model versions and fine-tune them on our own "golden" manual corrections. Furthermore, we realized our best documentation was hidden in years of Slack tribal knowledge. We built Knowledge RAGs that vectorize thousands of internal documents, turning obscure legacy bugs into searchable, actionable context for our agents.
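The retrieval half of a Knowledge RAG can be sketched in a few lines. This toy version uses a bag-of-words vector and cosine similarity in place of a real embedding model and vector store, which is enough to show how tribal knowledge becomes searchable context.

```python
import math
from collections import Counter

# Toy RAG retrieval: production systems would use a real embedding
# model and vector store; bag-of-words cosine similarity stands in.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

The retrieved snippets are then injected into the agent's context, turning "someone explained this bug in Slack in 2019" into an answer the agent can actually use.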
The 2026 Outlook: Trust the Pipeline
The core lesson of 2025 is simple: Don’t trust the AI; trust the pipeline. We’ve evolved from treating AI as a fun image generator to treating it as compiler-integrated middleware. If you want to survive the next year of engineering, stop writing prompts and start writing tools.
VELO Labs contributors to the GAPVelocity AI research include:
William Quesada, Director of Engineering
Cesar Castañeda, Solutions Architect
Claudio Umaña, Advanced Software Engineer
Santiago Arango, AI & Cloud Transformation Architect
Esteban Alvarado, Product Owner
Robert Encarnação, Software Archaeologist / Solution Architect
Darren Day, Principal Software Engineer