Vibe Coding in .NET

by DeeDee Walsh, on May 11, 2026 5:27:08 PM

The Production Traps AI Will Walk You Into 

C# compiles. That doesn't mean it's safe to ship. Nine recurring patterns where AI-generated .NET code quietly breaks, and what experienced developers do instead.

.NET has 25 years of accumulated convention behind it. Every "use IHttpClientFactory," every "don't block on async," every "DbContext is scoped" exists because somebody (usually somebody on a production support rotation) got burned by the alternative. Those conventions aren't elegant. They're scar tissue.

AI models that generate C# have read all that scar tissue. They've also read everything that came before it: the unsafe samples on old blogs, the Stack Overflow answers from 2012 that nobody updated, the toy console apps that "work fine on my machine." When you ask a model to write you a service, you get a weighted average of all of it.


Sometimes that average is great. Sometimes it ships you new HttpClient() inside a loop and exhausts your sockets on the first quiet Tuesday afternoon.

This isn't a critique of vibe coders. It's a gap problem. AI is fluent in the perfect path of C#: the code that runs when everything works. Production is mostly the imperfect path, and .NET's imperfect path has very specific shapes. Below are nine of them, in roughly the order you're most likely to hit them.

1. The async deadlock

The single most expensive line of AI-generated C# is .Result.

 // AI will write this. It will compile. It will deadlock.
public string GetUserName(int id)
{
    return _userService.GetUserNameAsync(id).Result;
}


On classic ASP.NET (Framework) and on any UI thread, this deadlocks: the synchronous wait blocks the thread, the awaited continuation needs that thread to resume, and nothing moves. The fix is uncomfortable for vibe coders because it requires changing the calling signature, and async is contagious: make the caller async, propagate Task up, and let it bubble.
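For contrast, the fixed version of the method above is shorter than the broken one:

```csharp
// Async all the way: the signature changes, but nothing ever blocks.
public async Task<string> GetUserNameAsync(int id)
{
    return await _userService.GetUserNameAsync(id);
}
```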

The same applies to .Wait() and Task.Run(...).Result. If you see any of those three in a code review, treat them as red flags until proven safe.

A related trap: async void. Acceptable for event handlers. Unacceptable anywhere else, because unhandled exceptions in an async void method tear down the process instead of bubbling up. AI generates async void more often than it should because the training data is full of WinForms event handlers.
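A minimal sketch of the distinction (the handler and DoWorkAsync names are illustrative):

```csharp
// Acceptable: event handler signatures require void.
private async void Button1_Click(object? sender, EventArgs e)
{
    await DoWorkAsync();
}

// Everywhere else, return Task so the caller can await it and
// observe failures. An exception thrown inside an async void method
// has no Task to land on and can tear down the process instead.
private async Task DoWorkAsync()
{
    await Task.Delay(100);
}
```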

2. The HttpClient socket exhaustion

This is the canonical .NET bug, and AI ships it constantly:

 // AI will write this. Looks reasonable. Will exhaust sockets.
public async Task<string> CallApi(string url)
{
    using var client = new HttpClient();
    return await client.GetStringAsync(url);
}


The problem: even when disposed, the underlying TCP connections linger in TIME_WAIT for up to four minutes. Call this method in a loop, or under load, and you'll run out of ephemeral ports. Production goes down with errors that look unrelated to the code that caused them.

The fix is IHttpClientFactory, available since .NET Core 2.1. Register it once, inject it everywhere:

 services.AddHttpClient();

public class MyService(IHttpClientFactory factory)
{
    public async Task<string> CallApi(string url)
    {
        var client = factory.CreateClient();
        return await client.GetStringAsync(url);
    }
}


Same idea applies to DbContext, SqlConnection, and anything else that wraps a pooled resource. The pool is the point. Bypassing it is the bug.

3. The captured-scope DI trap

DbContext is registered as scoped by default. That's correct for a request-per-scope web app. It becomes a footgun the moment you inject it into something with a longer lifetime, such as a singleton, a hosted background service, or a cached factory:

 // AI will write this. It will work for a while. Then it won't.
public class CleanupService : BackgroundService
{
    private readonly MyDbContext _db;
    
    public CleanupService(MyDbContext db) => _db = db;
    
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await _db.Users.Where(u => u.Expired).ExecuteDeleteAsync(ct);
            await Task.Delay(TimeSpan.FromHours(1), ct);
        }
    }
}


The hosted service is a singleton. The DbContext it captured at construction time is now alive for the lifetime of the process. Its ChangeTracker will accumulate entities until memory runs out, and any error in one cycle poisons every subsequent one. (In the Development environment the default host's scope validation will usually catch this at startup; in Production, validation is off by default, so the bug ships.)

The correct pattern is to resolve a scope explicitly inside the loop:

 public class CleanupService(IServiceScopeFactory scopeFactory) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            using var scope = scopeFactory.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            await db.Users.Where(u => u.Expired).ExecuteDeleteAsync(ct);
            await Task.Delay(TimeSpan.FromHours(1), ct);
        }
    }
}


The same rule applies to anything stateful: HttpContext, repositories, UserManager. If it's scoped, it has to be resolved inside the scope, not captured.

4. Entity Framework's quiet performance cliff

AI writes EF Core queries that look idiomatic and run fine in development with eleven seeded rows. They fall over in production for two reasons.

N+1 queries. The model frequently generates code that looks like a single query but isn't:

 var orders = await db.Orders.ToListAsync();
foreach (var order in orders)
{
    Console.WriteLine(order.Customer.Name);  // one query per order
}


If lazy loading is on, this is N+1. If lazy loading is off, this is a NullReferenceException. Either way, the fix is explicit: Include(o => o.Customer), or a projection that selects only the fields you need.
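Either fix collapses the loop to a single round trip. A sketch, assuming the Orders/Customer model from above:

```csharp
// Requires: using Microsoft.EntityFrameworkCore;

// Eager load the navigation: one SQL query with a join.
var orders = await db.Orders
    .Include(o => o.Customer)
    .ToListAsync();

// Or, better for read-only display: project only the fields you need,
// which also skips materializing full entities.
var names = await db.Orders
    .Select(o => new { o.Id, CustomerName = o.Customer.Name })
    .ToListAsync();
```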

Tracking everything by default. EF tracks every entity you query so it can detect changes. That's expensive and unnecessary for read-only paths. AI rarely adds AsNoTracking() on its own:

 // For a read-only view, this is wasteful
var users = await db.Users.Where(u => u.Active).ToListAsync();

// This is what you actually want
var users = await db.Users.Where(u => u.Active).AsNoTracking().ToListAsync();


Default to AsNoTracking() for reads. Only track when you intend to modify.

5. The Pokémon catch

catch (Exception ex) and a swallow. It's the most expensive single habit in vibe-coded .NET, because the bug is happening but you can't see it:

 // AI loves this. Don't.
try
{
    await DoTheThing();
}
catch (Exception ex)
{
    _logger.LogError(ex.Message);  // no stack trace, no context
}

Two problems. First, logging ex.Message strips the stack trace, which is the only part of the exception that tells you where to look. Use LogError(ex, "context") so the logger captures the full exception. Second, catching Exception should be deliberate, not reflexive. Catch the specific exceptions you can handle and let the rest propagate. A top-level exception handler in middleware can deal with the unexpected ones.
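Both fixes together, sketched against the example above (HttpRequestException and the userId parameter are illustrative):

```csharp
public async Task SyncUserAsync(int userId)
{
    try
    {
        await DoTheThing();
    }
    catch (HttpRequestException ex) // only what this code can meaningfully handle
    {
        // Pass the exception object itself: the logger then captures the
        // full stack trace and inner exceptions, not just Message.
        _logger.LogError(ex, "Upstream call failed while syncing user {UserId}", userId);
    }
    // Anything else propagates to the top-level exception middleware.
}
```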

6. Secrets in appsettings.json

AI helpfully writes a connection string with the password inline and commits it. Don't.

In development, use dotnet user-secrets. In production, use Azure Key Vault, AWS Secrets Manager, or whatever your platform offers. The Configuration system in ASP.NET Core layers them automatically: appsettings.json for non-secrets, secrets provider for the rest, environment variables on top. No code change required; just configuration.
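In practice that looks like this in development (the "ConnectionStrings:Default" key is illustrative; environment variables use __ as the section separator):

```shell
# Run once per project; stores secrets outside the repo
dotnet user-secrets init

# The key mirrors the appsettings.json structure
dotnet user-secrets set "ConnectionStrings:Default" "Server=db;User Id=app;Password=local-only"

# In production, the same key arrives from the platform instead
export ConnectionStrings__Default="supplied-by-key-vault-or-platform"
```

The reading code never changes: builder.Configuration.GetConnectionString("Default") resolves whichever layer wins.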

The same rule applies to JWT signing keys, API keys, encryption keys, and OAuth client secrets. If it would be embarrassing on a billboard, it doesn't belong in source control.

7. The null-forgiving operator as a warning silencer

C# 8 introduced nullable reference types. The compiler now warns when you dereference something it can't prove is non-null. The correct response is usually to fix the underlying nullability. AI's response is frequently the null-forgiving operator:

 // AI will sprinkle ! to silence warnings
return user!.Email!.ToLower();


Every ! is a runtime NullReferenceException waiting for the wrong input. Use them sparingly and only when you have a reason; typically when an API's nullability annotations are wrong and you know better. Otherwise, the warning is real. Handle the null case.
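A sketch of what handling the null actually looks like (the User type is illustrative):

```csharp
public static string NormalizedEmail(User? user)
{
    // The warning was telling the truth: either of these can be null.
    if (user?.Email is null)
        throw new ArgumentException("User must have an email.", nameof(user));

    // After the check, flow analysis knows both are non-null: no ! needed.
    return user.Email.ToLowerInvariant();
}
```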

8. Newtonsoft.Json by default

Newtonsoft.Json was the right answer for fifteen years. It's still a valid choice. But AI defaults to it because the training data is heavily weighted toward older code, and on a modern .NET 8+ project there's no good reason to add it unless you have a specific feature need.

System.Text.Json is the built-in default, is dramatically faster, supports source-generated serialization for AOT and trimming, and doesn't add a dependency. Unless you're parsing something unusual or need a specific Newtonsoft converter, prefer it.
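For the common case, the built-in API is all you need (the Person record is illustrative):

```csharp
using System.Text.Json;

// Web defaults: camelCase property names, case-insensitive reads.
var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);

var json = JsonSerializer.Serialize(new Person("Ada", 36), options);
var back = JsonSerializer.Deserialize<Person>(json, options);

// Records round-trip through their constructor: no attributes required.
record Person(string Name, int Age);
```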

The same applies to a handful of other "old defaults" the model reaches for: WebClient instead of HttpClient, BinaryFormatter for anything at all (it's been deprecated for security reasons), DateTime.Now instead of DateTime.UtcNow for anything stored or compared across time zones.

9. Controllers when minimal APIs would do

This one is less a bug than a missed feature. AI generates ASP.NET Core endpoints as controller actions almost reflexively, because the training corpus is full of MVC tutorials. For a service that's mostly endpoint-per-operation with no shared filters or model binding, minimal APIs are shorter, faster to start, and easier to test:

 app.MapGet("/users/{id:int}", async (int id, MyDbContext db) =>
    await db.Users.FindAsync(id) is { } user
        ? Results.Ok(user)
        : Results.NotFound());


Controllers still earn their keep for complex routing, conventional model binding, and large API surfaces. But "I need an endpoint that takes an id and returns a user" doesn't need a controller, and AI will usually give you one anyway.

Adjacent: AI under-reaches for primary constructors, file-scoped namespaces, collection expressions, records for DTOs, and source generators for regex, JSON, and logging. Every one of those is in modern C#. The model keeps writing 2018-era code because most of its training data is 2018-era code.
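A quick sampler of those features in one file (names are illustrative; targets .NET 8 / C# 12):

```csharp
// File-scoped namespace (C# 10): no indentation tax.
namespace Orders.Api;

using System.Text.RegularExpressions;

// Record DTO (C# 9): value equality and immutability in one line.
public record OrderDto(int Id, string Customer, decimal Total);

// Primary constructor (C# 12) + collection expression (C# 12).
public class OrderCache(TimeProvider clock)
{
    private readonly List<OrderDto> _recent = [];

    public void Add(OrderDto order) => _recent.Add(order);
}

// Source-generated regex (C# 11): compiled at build time, AOT-friendly.
public static partial class Validation
{
    [GeneratedRegex(@"^[A-Z]{2}\d{6}$")]
    public static partial Regex OrderNumber();
}
```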

The scale point

A vibe-coded .NET microservice can be totally fine. The risk profile is small. If it breaks, you redeploy.

A vibe-coded modernization of a legacy enterprise application is a different category of risk. Past a certain size (usually somewhere in the hundreds of thousands of lines) the model can't hold the whole system in context. It can't tell you that the function it's converting from VB6 is also called by a stored procedure, a Crystal Report, and a batch job that runs on the third Friday of the quarter. It can produce C# that compiles, passes its tests, deploys cleanly, and quietly breaks something nobody will notice for six weeks.

That's the same wall every DIY AI modernization effort hits. Vibe coding works at prototype scale and breaks at enterprise scale, and the failure mode is silent until it isn't.

The shorter version

You don't need to memorize a list. You need one habit: every time the AI hands you C# code, ask one question before you accept it. What happens when this goes wrong?

If you're dealing with vibe code and need some thoughtful advice, give us a shout.

Topics: .NET, AI
