The Poke and Hope SDLC

Have you ever witnessed an engineer so far down the rabbit hole that they have lost the ability to think critically through an error or problem? It can happen to the best of us: we are too tired, the cognitive load is too great, and we mentally quit. We fall into the trap of changing a setting or a variable, refreshing, and hoping the situation has fixed itself. You find yourself in the Poke and Hope SDLC, every iteration slowly eroding your beautiful code until you no longer know what was or is working, what you have changed, or why.

AI coding is like that. We start off with the serotonin hit of progress and surprise as the AI fools us into thinking it's smart and strategic. It seems to understand what I want to do, and it knows everything! But then we get an error. No worries, I throw the error back at the AI and it confidently tells me it's fixed the problem. No issues, it's all fixed. Rinse and repeat, on and on, until my code is littered with console statements and multiple workarounds. I get to the point where I have no idea what's changed, and neither does the AI. He/she confidently grinds on! Poke, Hope, Poke, Hope, until even it gives up!

Act 1

It was all going so well with a simple webapp coded up by Lovable.dev. I then moved into Claude Code to finish off the details. Deployed to Vercel and life was good. But I had a nagging feeling that I should check the security of my app.

  1. Did it really store my OpenAI keys in the browser?
  2. Did it really store my GitHub keys in the browser?

Act 2

Please act like a security analyst and review my codebase looking for vulnerabilities. Document these into a Markdown file so that we can work through them slowly, securing the application.

Genius move, I thought. I was the greatest AI whisperer there was. I heard the echoes:

It's only good for prototypes. It's not secure. It can't do complex enterprise solutions.

But I thought I was like Lucius Verus, battling the Parthians whilst bending an army of AI agents to my will.

But alas, the Poke and Hope loop soon appeared. Back and forth between GitHub Provider settings, Supabase reconfigurations, more console logging, more Vercel logging, refactoring, more code—and yet the AI tells me every time that it's got it, it knows the problem. I even try a different model and tool, but nothing. After a couple of hours of lazily watching YouTube as the AI does my bidding, I get to the point where I realise it's time.

Time for me to do the work. Time to actually read the docs and work out what the code does. Time to rummage back into the depths of my brain and try to remember what the PKCE (Proof Key for Code Exchange) auth flow actually is!

It's probably a little unfair to blame the AI entirely, as it all fell apart when I started to split the application from a simple SPA into a front-end and back-end solution with multiple API calls. But that is a real-world example: it did a sterling job of reading the documentation for the different services and then incorrectly telling me how to configure them.

Somewhere along the line I went from a working solution to a broken one. I had made too many large leaps of AI-generated refactoring, with too little testing to know where the error crept in. I had gone from my mini loop of plan, build, test to code, code, code as I got high on the power.

Act 3 - The cull

please remove all authentication code so that we can start again.

Quick and simple and back to where we started.

Insights

I have always had an uneasy feeling about Single Page Applications: too much code sitting on the client side, complexity shifted from the backend to a frontend that requires a coding ninja to debug, and multiple layers of abstraction bending JavaScript (the do-anything programming language) to do anything.

#kiss Keep it simple, stupid

Don't let the coding power take over. Keep to a micro Plan, Build, Test loop: build with a test harness and boundaries, and deploy through pipelines to "production" from hello world onwards. Don't let the code drift.

#continuousvalidation Compiler Validation

I come from the old days when types saved lives. At some point the AI decided to switch from TypeScript to plain old JavaScript. Love it, but types give another layer of constraints and validation, which matters when you are trying to eliminate surprises. As I watched all the errors the AI kept hitting, the realisation landed: a compiler would shorten the feedback loop and catch the basic mistakes early.
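
A minimal sketch of the kind of basic mistake a compile step catches; the names and shapes below are hypothetical, not taken from the actual app:

```typescript
// Hypothetical shape for a stored provider key; names are illustrative only.
interface ApiKeyConfig {
  provider: "openai" | "github";
  key: string;
}

// Show only the last four characters of a stored key.
function maskKey(config: ApiKeyConfig): string {
  return `${config.provider}: ****${config.key.slice(-4)}`;
}

// In plain JavaScript the typo below ("providr") and the missing `key` would
// only surface at runtime, as yet another console error for the AI to poke at.
// With TypeScript, `tsc` rejects the call before anything is deployed.
//
// maskKey({ providr: "openai" }); // compile-time error: unknown property, missing `key`
```

Strict types plus a compile step in the pipeline turn a whole class of Poke and Hope iterations into errors that surface before anything reaches the browser.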

#tdd TDD

I have a working hypothesis that tests are the answer. Anecdotally, AI tools are not great at testing, but we know they can produce huge amounts of code. My prompts are loose instructions, not specifications. Should I be writing out my expectations as TDD cases and getting the LLM to solve those problems?
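
A sketch of what that could look like, assuming a Vitest-style runner and a hypothetical validateApiKey helper; the failing tests become the specification, and the LLM's only job is to make them pass:

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical module: the LLM is asked to implement it so these tests pass.
import { validateApiKey } from "./validateApiKey";

describe("validateApiKey", () => {
  it("accepts a well-formed OpenAI key", () => {
    expect(validateApiKey("sk-abc123xyz", "openai")).toBe(true);
  });

  it("rejects an empty key", () => {
    expect(validateApiKey("", "openai")).toBe(false);
  });

  it("rejects a key for an unknown provider", () => {
    expect(validateApiKey("sk-abc123xyz", "not-a-provider")).toBe(false);
  });
});
```

The tests stay small and human-written, so the generated code can churn underneath them without everything drifting.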

Learning Goals

  1. ❌ Can I use VibeCoding to build a full webapp with integration and sign in?
  2. Using LLMs for content generation.


written by a human