Tone of Voice
Writing has never been my forte. I am one of those people who thinks through speaking, which means I often say things I shouldn't. The net result is that I agonise over written content. Back and forth, over and again, worrying about every aspect. I live in the world of bullet points and TL;DR.
To hack the system, I have created a workflow that helps me:
- Have something worthy of writing about
- Create a slide deck (bullet points) for a community of practice
- Present at the CoP at work and record the session and its transcription
- Use an LLM to turn the transcription into a blog post
But like everybody else whose inbox is filled with emails that used to be bullet points and 100 words, and are now full of emojis, American spelling and essay-length diatribes, I realise I have a voice and tone that I want to keep. I don't want my blog posts to be like the rest of the world's clickbait on Medium or Twitter. Even if only my mum ever reads this, I want it to be me.
Can I use an LLM to write like me?
So here is the same blog post as Using AI to build AI, but written using the prompt below.
https://claude.ai/share/085f45b0-5927-464e-ab0e-c5a16717d2b7
There are a couple of key learnings for me:
- You can't write a blog post that works for all audiences
- The same core message can be repurposed for different audiences using different tone and voice prompts. I actually love this idea. Can I create knowledge that I can publish to CIOs and developers alike, through channels appropriate to each?
- The easier it is to create content, the harder it's going to be to make your voice stand out without being clickbait!
written by a human
From Floppy Disks to AI Agents
A Practitioner's Journey into Generative AI Development
TLDR
After living through multiple technology epochs, I dove hands-first into generative AI development to understand its true enterprise implications. Building a Rust-based Git commit tool using AI agents revealed surprising productivity gains, unexpected pitfalls, and a fundamental shift in how we approach software development. The key insight: AI doesn't eliminate developer roles—it creates entirely new categories of work around hallucination detection, prompt engineering, and system reliability. While "vibe coding" dramatically accelerates initial development, it demands rigorous code review and intentional learning to avoid technical debt disasters waiting in production.
The Pattern of Technological Disruption
I've been fortunate to witness several fundamental shifts in how we build software. Each transition—from floppy disk development to the Internet era, from Web 1.0 to cloud-native mobile development—taught me that theoretical understanding only goes so far.
Just like Mobile and Cloud before it, Generative AI isn't simply another tool in our arsenal; it's reshaping the very fabric of how we design, build, govern, and manage systems. Only this time, it seems the transformation will happen in months, not years.
But here's the thing that keeps me up at night: how do you prepare for a technology shift when the landscape changes faster than quarterly planning cycles? My approach has always been pragmatic—dive in, build something real, and emerge with hard-earned insights rather than PowerPoint speculation.
The Executive Interrogation Waiting Room
Picture this scenario: you're sitting across from your CIO, and they're armed with the questions that matter. Not the fluffy "how will AI transform our business" queries, but the sharp-edged ones that determine budget allocations and strategic direction:
"Which AI tools should our engineers actually be using? What's the real ROI beyond those anecdotal '15% productivity improvement' claims we keep hearing?"
Then comes the curveball: "I just read the DORA report suggesting AI productivity gains might be overstated. How do we reconcile that with our investment thesis?"
The questions cascade from there. Cost projections as free trials evaporate. Whether these tools can handle legacy modernization or just prototype demos. What development teams look like when half the traditional roles might be automated away.
But perhaps the most unsettling question: "How do we build reliable systems when core components are fundamentally non-deterministic?"
These aren't rhetorical exercises. These are the conversations happening in boardrooms right now, and technical leaders need concrete answers backed by real experience.
Building "I Am Committed": An Experiment in Productive Chaos
I decided to tackle two learning objectives simultaneously: understand LLM integration in applications and learn Rust. The project concept was straightforward—build a Git commit message generator that analyzes diffs and produces conventional commit messages.
Simple in concept, revealing in execution.
I evaluated several development environments, ultimately choosing Cline (an open-source coding agent that works with models such as Anthropic's Claude) specifically because it exposed the cost of every query. Transparency matters when you're trying to understand enterprise implications.
The architecture couldn't have been simpler: a Rust CLI backed by GPT-4o mini through OpenAI's platform. But what happened next challenged everything I thought I knew about software development velocity.
The First Shock: It Actually Worked
My initial prompt was embarrassingly casual:
"Can you please set up my environment to be able to build a RUST based commpand line application?"
Notice the typo. Notice the complete lack of architectural specifications or detailed requirements. Despite this sloppy input, the AI generated a functional Rust development environment and application structure.
This wasn't advanced IntelliSense or sophisticated code completion. This was architectural decision-making, dependency management, and project scaffolding based on implicit understanding of intent.
Have you ever experienced that moment when a technology shift becomes viscerally real? This was mine.
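To make that moment concrete for anyone who wants to see code, the core loop the tool ended up with is roughly the shape of the sketch below: read the staged diff, send it to the OpenAI chat completions API, print the suggested message. This is a hypothetical reconstruction, not the generated code itself; the crate choices, the model identifier, and the prompt wording are my assumptions.

```rust
// Hypothetical sketch of a commit-message generator's core loop (not the tool's real code).
// Assumes Cargo.toml includes: reqwest = { version = "0.12", features = ["blocking", "json"] }
// and serde_json = "1".
use std::process::Command;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Collect the staged diff that the commit message should describe.
    let diff = Command::new("git").args(["diff", "--staged"]).output()?;
    let diff = String::from_utf8_lossy(&diff.stdout).to_string();

    // 2. Ask the model for a conventional commit message (model name and prompt are assumptions).
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let body = serde_json::json!({
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system",
             "content": "Write a single conventional commit message (type(scope): summary) for the supplied diff."},
            {"role": "user", "content": diff}
        ]
    });
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;

    // 3. Print the suggestion; the real tool goes further and lets the user pick between options.
    println!("{}", resp["choices"][0]["message"]["content"].as_str().unwrap_or(""));
    Ok(())
}
```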
The Second Shock: It Can Also Destroy Everything
Two hours later, flush with confidence, I made another casual request:
"How can I read a line form the command line? specifically I want to give the user 3 options to select from and the user picks 1 option."
The AI completely obliterated my existing codebase.
This failure taught me more than the initial success. Suddenly, I understood why traditional development practices—micro-commits, feature branches, disciplined saving—become absolutely critical when working with AI agents. The technology amplifies both productivity and destructive potential.
But there's a deeper implication here. My traditional approach treats code like clay—malleable, exploratory, shaped through iteration and discovery. AI agents demand more specification-driven development. You need to articulate intent clearly before execution.
This shift toward "development by specification" isn't just a workflow change—it's a fundamental cognitive reframing. Tools like Tessl are already building around this paradigm.
The Third Shock: UI Generation from Screenshots
When I showed the AI a screenshot of my basic CLI interface with a request for "something better," it generated a significantly improved design. The new interface wasn't just functional—it was thoughtfully structured with better visual hierarchy and user experience patterns.
But here's where it gets interesting: the AI stubbed out functionality that didn't exist yet without explicitly telling me. Features were referenced in the UI that had no corresponding backend implementation.
This behavior reveals something crucial about working with AI systems—they make assumptions and fill gaps based on contextual inference. Without rigorous code review, these "helpful" additions become technical debt time bombs.
Feature Development: The One-Line Revolution
As development progressed, I discovered something remarkable about feature velocity. Complex capabilities that traditionally required research, design, and implementation phases could be added through single-line requests:
- "Add pluggable prompts stored in JSON files"
- "Implement comprehensive logging"
- "Add unit tests for all core functions"
- "Refactor the monolithic method into properly structured modules"
Each request generated production-quality code with appropriate error handling, documentation, and architectural patterns.
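To make the first of those one-line requests concrete ("pluggable prompts stored in JSON files"), the result looks something like the sketch below: prompt templates loaded from disk at runtime rather than hard-coded. The struct, field names, and file path are illustrative assumptions, not the code the agent actually produced.

```rust
// Hypothetical sketch of pluggable prompts loaded from JSON.
// Assumes Cargo.toml includes: serde = { version = "1", features = ["derive"] } and serde_json = "1".
use serde::Deserialize;
use std::fs;

#[derive(Deserialize)]
struct PromptTemplate {
    name: String,
    system: String,
    // The user template contains a {diff} placeholder filled in at runtime.
    user_template: String,
}

fn load_prompt(path: &str) -> Result<PromptTemplate, Box<dyn std::error::Error>> {
    let raw = fs::read_to_string(path)?;
    Ok(serde_json::from_str(&raw)?)
}

fn render_user_prompt(template: &PromptTemplate, diff: &str) -> String {
    template.user_template.replace("{diff}", diff)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "prompts/conventional.json" is an illustrative path, not the tool's real layout.
    let template = load_prompt("prompts/conventional.json")?;
    let prompt = render_user_prompt(&template, "diff --git a/src/main.rs ...");
    println!("[{} / system: {}] {}", template.name, template.system, prompt);
    Ok(())
}
```

The appeal is that changing the prompt strategy becomes a content change rather than a rebuild.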
But what does this mean for project estimation? How do you scope work when feature development velocity increases by an order of magnitude for certain types of tasks?
The Landing Page Experiment
Using lovable.dev, I generated a complete landing page from a screenshot. The tool produced not just static HTML, but a responsive, aesthetically pleasing site that synced to GitHub for further iteration.
This capability has profound implications for team composition. Many enterprise projects lack dedicated frontend engineers or UX designers. When AI tools can generate solid first iterations across multiple disciplines, how do we restructure teams and skill development?
Cost Realities and Token Economics
My most expensive single operation cost $3—negligible for a small application. But enterprise codebases operate at different scales. How does token consumption scale with codebase complexity? When do organizations need to consider on-premise models like Llama instead of cloud APIs?
Subscription tools like Cursor and Windsurf use caching and token constraints to control costs. Tools like Cline expose real token consumption and pricing. This transparency gap will become critical for enterprise budgeting and capacity planning.
Is there opportunity for "model arbitrage"—dynamically switching between providers based on cost fluctuations, similar to spot instance strategies in cloud computing?
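To make the token economics tangible, here is a small, hypothetical back-of-the-envelope calculation: convert the token counts a provider reports into dollars and route the request to the cheapest candidate. The per-million-token prices are placeholders, not anyone's current list prices.

```rust
// Hypothetical cost comparison across providers; prices are placeholders, not current list prices.
struct ModelPricing {
    name: &'static str,
    usd_per_million_input: f64,
    usd_per_million_output: f64,
}

fn request_cost(p: &ModelPricing, input_tokens: u64, output_tokens: u64) -> f64 {
    (input_tokens as f64 / 1_000_000.0) * p.usd_per_million_input
        + (output_tokens as f64 / 1_000_000.0) * p.usd_per_million_output
}

fn main() {
    let candidates = [
        ModelPricing { name: "provider-a/small", usd_per_million_input: 0.15, usd_per_million_output: 0.60 },
        ModelPricing { name: "provider-b/medium", usd_per_million_input: 3.00, usd_per_million_output: 15.00 },
    ];

    // A large diff plus a short commit message: mostly input tokens.
    let (input_tokens, output_tokens) = (12_000, 150);

    // Naive "model arbitrage": route the request to the cheapest capable model.
    let cheapest = candidates
        .iter()
        .min_by(|a, b| {
            request_cost(a, input_tokens, output_tokens)
                .total_cmp(&request_cost(b, input_tokens, output_tokens))
        })
        .unwrap();

    println!(
        "{} estimated at ${:.4}",
        cheapest.name,
        request_cost(cheapest, input_tokens, output_tokens)
    );
}
```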
The Hallucination Problem: When Components Lie
Here's the existential question keeping architects awake: how do you design reliable enterprise systems when core components might fabricate responses?
Traditional systems fail predictably. Database connections drop, networks partition, services timeout. We've built entire disciplines around handling deterministic failures.
But what happens when a component confidently returns plausible but completely incorrect information? How do you test for creativity masquerading as accuracy?
The Expanding Responsibility Landscape
The narrative has shifted remarkably quickly from "AI will eliminate developer jobs" to recognizing the explosion of new responsibilities these systems create:
- Hallucination detection and mitigation strategies
- Prompt injection vulnerability assessment
- Prompt strategy development and optimization
- AI system observability and monitoring
- Model Context Protocol (MCP) implementation
- Accuracy verification and feedback loop design
Rather than eliminating roles, AI is generating work that aligns perfectly with platform engineering, testing, and quality assurance disciplines. The question isn't whether we'll need these skills—it's whether we're developing them fast enough.
Model Selection: The New Vendor Evaluation
I struggled with model comparison and selection. How do you evaluate whether GPT-4, Claude, Gemini, or DeepSeek is optimal for specific use cases? Traditional software evaluation criteria—performance, cost, reliability—apply differently to non-deterministic systems.
We need frameworks for model triage that go beyond anecdotal comparisons and marketing benchmarks.
Development Workflow Evolution
Working effectively with AI requires fundamental workflow changes:
Architecture-First Thinking: You need clear system understanding before engaging AI agents. Vague requirements generate unpredictable results.
100% Code Review: Everything generated needs verification. The code looks professional, follows conventions, and often works correctly—making dangerous assumptions easy to miss.
Different Testing Modalities: How do you test non-deterministic components? Traditional unit tests verify deterministic input-output relationships. AI components might generate different valid responses to identical inputs.
Intentional Learning: If you don't review and understand generated code, you're not learning—you're accumulating technical debt.
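On the testing modalities question above, one workable pattern is to assert properties of the output rather than exact strings. For a commit-message generator that might look like the sketch below; the regex and function name are illustrative, not part of the actual tool.

```rust
// Hypothetical property-style check: instead of asserting an exact string,
// verify that whatever the model returns conforms to the conventional commit format.
// Assumes Cargo.toml includes: regex = "1".
use regex::Regex;

fn is_conventional_commit(message: &str) -> bool {
    // type(optional scope)!: summary, e.g. "feat(cli): add interactive prompt selection"
    let pattern = Regex::new(
        r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([^)]+\))?!?: .+",
    )
    .expect("pattern is valid");
    message.lines().next().map(|first| pattern.is_match(first)).unwrap_or(false)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn accepts_well_formed_messages() {
        assert!(is_conventional_commit("feat(cli): add interactive prompt selection"));
    }

    #[test]
    fn rejects_free_text() {
        assert!(!is_conventional_commit("I changed some stuff"));
    }

    // An integration test would call the model and assert only the property,
    // since identical inputs can legitimately produce different valid messages.
}
```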
The Productivity Paradox
Vibe coding undeniably accelerates initial development for basic applications. But increased productivity means more applications, which likely means more security vulnerabilities and quality issues reaching production.
I predict a substantial business opportunity emerging in 2-3 years: companies specializing in fixing hastily vibe-coded applications experiencing problems in production. The technical debt from this development velocity surge will need to be addressed eventually.
Team Composition Questions
Traditional enterprise development required distinct specialists: backend engineers, frontend engineers, DevOps engineers, cloud engineers, test engineers, UX designers, content writers, visual designers.
When AI agents can handle significant portions of this work, how many specialized roles remain necessary? Will we see smaller, more generalist teams taking products through multiple iterations before requiring full specialized expertise?
As enterprises focus on cost optimization, this question becomes strategic rather than theoretical.
Governance and Enterprise Integration
Enterprise adoption requires solving problems that don't exist in individual development contexts:
Cost Tracking and Attribution: How do you allocate AI usage costs across projects and teams? OpenAI tells you tokens consumed but not dollar amounts—enterprise accounting needs both.
Model Lifecycle Management: When do you upgrade models? How do you ensure consistent behavior across development, staging, and production environments?
Compliance and Audit Trails: Regulated industries need to trace decision-making processes. How do you audit AI-generated code for compliance requirements?
Security and Data Governance: What data can be sent to external AI services? How do you prevent sensitive information leakage through prompts?
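As one purely illustrative example of what the cost-attribution and audit-trail items above could look like in practice, each AI interaction might be captured as a structured log record that hashes the prompt and response, attributes the spend, and names the model used. The field names and values below are assumptions, not a standard.

```rust
// Hypothetical audit record for each AI interaction; field names and values are illustrative.
// Assumes Cargo.toml includes: serde = { version = "1", features = ["derive"] },
// serde_json = "1", sha2 = "0.10".
use serde::Serialize;
use sha2::{Digest, Sha256};

#[derive(Serialize)]
struct AuditRecord {
    timestamp_utc: String,
    project: String,
    model: String,
    prompt_sha256: String, // hash rather than raw text, to limit data leakage
    response_sha256: String,
    input_tokens: u64,
    output_tokens: u64,
    estimated_cost_usd: f64,
}

fn sha256_hex(text: &str) -> String {
    Sha256::digest(text.as_bytes())
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect()
}

fn main() {
    let record = AuditRecord {
        timestamp_utc: "2025-01-01T12:00:00Z".to_string(), // a real system would use the current time
        project: "i-am-committed".to_string(),
        model: "gpt-4o-mini".to_string(),
        prompt_sha256: sha256_hex("generate a conventional commit for this diff ..."),
        response_sha256: sha256_hex("feat(cli): add interactive prompt selection"),
        input_tokens: 12_000,
        output_tokens: 150,
        estimated_cost_usd: 0.0019,
    };
    // One JSON line per interaction makes the trail easy to ship to existing log tooling.
    println!("{}", serde_json::to_string(&record).unwrap());
}
```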
Next Research Frontiers
My exploration continues in several directions:
Advanced Agentic Development: Exploring Cline's configuration capabilities for more autonomous development approaches.
Model Performance Evaluation: Systematic comparison of different models for specific use case categories.
Caching and Optimization Strategies: Understanding how to balance performance, cost, and accuracy through intelligent caching.
Model Context Protocol Integration: Building "I Am Released" to generate release notes using MCP integration with Git repositories.
Reflection Questions for Leaders
As you consider AI integration in your own development practices, here are the questions worth wrestling with:
How will you maintain code quality and security when development velocity increases dramatically? What new competencies does your team need to develop to work effectively with AI agents?
When AI can generate initial implementations across multiple technical disciplines, how do you restructure hiring and skill development priorities?
How will you measure and manage the hidden costs—both financial and technical—of AI-assisted development as these tools transition from free trials to enterprise pricing?
What governance frameworks do you need to establish now to prevent AI-generated technical debt from becoming a crisis later?
And perhaps most importantly: how do you balance the competitive advantage of rapid AI-assisted development with the discipline required to build maintainable, secure systems?
The answers aren't universal, but the questions are urgent. The technology landscape is shifting faster than traditional planning cycles, and the organizations that develop coherent strategies for AI integration—rather than ad hoc adoption—will have significant advantages.
The future isn't about choosing between human developers and AI agents. It's about architecting hybrid systems where both contribute to better outcomes than either could achieve independently.
To explore the "I Am Committed" tool mentioned in this post, visit the landing page or check out the source code on GitHub.
written by Claude