Consultant, Architect, Builder
I don’t write code. I don’t know how—not really. I can read it well enough to follow along, spot obvious problems, and understand what’s happening at a conceptual level. But I couldn’t sit down and implement a feature from scratch.
I’ve spent twenty years in technology—as a sysadmin, consultant, infrastructure architect. I understand systems deeply. I just never learned to write them.
And yet, in the past year, I’ve built:
- An OpenAI-compatible API gateway for AWS Bedrock, allowing private access to frontier LLMs for other projects
- An MCP server for semantic knowledge management and vector search to give Claude some “intuition”
- A Prometheus exporter for a water level sensor in my home’s cistern (a minimal sketch of what such an exporter looks like follows this list)
- A Dockerized development environment and a GitHub Actions deploy pipeline for a small team of web developers
- Observability infrastructure to monitor systems I manage for myself and clients
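For a sense of scale, the cistern exporter is genuinely small. Here is a rough sketch of the shape such an exporter takes, using the prometheus_client library; the metric name, port, and sensor read are illustrative stand-ins, not my actual code.

```python
# A minimal Prometheus exporter: expose one gauge over HTTP and
# refresh it on a loop. The sensor read is stubbed out; a real
# exporter would query whatever interface the sensor provides.
import random
import time

from prometheus_client import Gauge, start_http_server

WATER_LEVEL_CM = Gauge(
    "cistern_water_level_cm",
    "Current water level in the cistern, in centimeters",
)

def read_sensor() -> float:
    # Placeholder: substitute the actual sensor query
    # (serial read, GPIO, HTTP call to the sensor, etc.).
    return random.uniform(0, 200)

if __name__ == "__main__":
    start_http_server(9101)  # Prometheus scrapes http://host:9101/metrics
    while True:
        WATER_LEVEL_CM.set(read_sensor())
        time.sleep(15)
```

Point Prometheus at the /metrics endpoint and the rest is dashboards and alerts.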
I design it. Claude Code writes it.
A Different Kind of Role
There’s always been a gap between “tech purists” and “client managers.” Engineers who go deep on implementation. Consultants who understand requirements and relationships. The overlap is rare—people who can do both often end up in sales or architecture roles where their technical edge gradually dulls.
I’ve always been on the business and relationship side, but with a genuine love for devops and systems architecture. I understand how infrastructure fits together, why things fail, what makes deployments safe or dangerous. I just couldn’t write the code to build what I could see.
What’s changing: someone with deep domain knowledge and client skills can now also build. Not by becoming a programmer, but by directing AI that programs. I’m not a developer who’s cheating with AI tools. I’m a consultant and infrastructure architect who can now implement more of the solutions I design.
That’s a different role than “engineer plus AI”—and I think it’s a role that’s going to matter more.
Vibing with Engineers
Simon Willison—Django co-creator, Datasette author, and one of the sharpest voices on practical AI tooling—coined “vibe engineering” as a counterpoint to “vibe coding.” Vibe coding is the fast-and-loose approach of prompting an LLM until something works, without understanding or caring how.
Vibe engineering, in Willison’s framing, is what happens when experienced engineers use AI tools while maintaining full accountability for production-quality code. They bring testing discipline, architectural thinking, code review habits, and years of pattern recognition. The AI amplifies their existing expertise.
I’m not an expert engineer. But here’s what I’ve discovered: you can do vibe engineering without being a traditional programmer.
The Skills That Transfer
The skills that matter aren’t about syntax. They’re about:
- Problem framing: Knowing what you’re trying to solve
- Architectural thinking: Understanding how pieces fit together
- Quality judgment: Recognizing when something is wrong, even if you can’t fix it yourself
- Risk awareness: Knowing what could go badly and how to prevent it
- Verification discipline: Never assuming the AI got it right
These are the skills I’ve been building for twenty years. The only thing I couldn’t do was type the code. Now I don’t have to.
No One Types That Fast
The difference between vibe coding and vibe engineering isn’t about prompts—it’s about the relationship.
When I work with Claude Code, I’m not firing off requests and hoping for the best. I’m having a conversation with a collaborator who is much faster at implementation than I am. It feels very much like managing talented staff. We discuss the project and objective, document it, then implement in phases.
I provide context, direction, constraints, and feedback. The AI does the typing. I do the thinking about what should be typed.
I watch the output as it scrolls—the tool calls, the returns, the reasoning. I always know what task it’s working on and the approach we’ve agreed on. I redirect when it’s stuck. I check that the product works as intended. I ask it to verify potential gotchas.
Most of the time, Claude crunches away and I do something else—check on another project, or manually work on something that doesn’t need AI assistance.
Verification Without Fluency
I can’t review code the way a senior engineer would. But I can:
- Run the tests: If they pass, something is working. If they don’t, something is broken.
- Check the logs: Have Claude look at output for anything suspicious.
- Test the behavior: Does the thing do what I asked? Try it. Try edge cases. (A sketch follows at the end of this section.)
- Check the obvious: Hardcoded secrets? Error handling? Useful logging?
- Ask for explanation: “Walk me through what this code does and why.”
This isn’t as good as expert code review. But it’s much better than no review at all.
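To make “test the behavior” concrete, here is the kind of test I can write and run myself with pytest. The parse_level_reading helper is a hypothetical stand-in, inlined here so the file runs on its own.

```python
# Behavior tests I can write and run without reading the
# implementation line by line: call the thing, check what comes
# back, poke at edge cases.
import pytest

def parse_level_reading(raw: str) -> float:
    # Stand-in implementation so this file is self-contained.
    value = float(raw)  # raises ValueError on junk like "ERR"
    if value < 0:
        raise ValueError(f"impossible water level: {value}")
    return value

def test_normal_reading():
    # The happy path: does it do what I asked?
    assert parse_level_reading("142.5") == pytest.approx(142.5)

def test_garbage_input_is_rejected():
    # Sensors emit junk; the parser should fail loudly,
    # not return a plausible-looking number.
    with pytest.raises(ValueError):
        parse_level_reading("ERR")

def test_negative_reading_is_rejected():
    # A level below zero means a broken sensor, not a reading.
    with pytest.raises(ValueError):
        parse_level_reading("-3.0")
```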
The Guardrails
The AI will confidently do dangerous things if you let it—not out of malice, but because it optimizes for completing what you asked, not for anticipating what could go wrong.
Some operations need explicit approval: anything touching production data, git force-pushes, destructive operations, external API calls with side effects, anything involving secrets or credentials.
Security-sensitive code demands extra scrutiny—authentication, authorization, input validation. Same for infrastructure changes and anything that costs money.
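Claude Code surfaces these moments as approval prompts, and you can configure what requires one. As a toy illustration of the idea (not how Claude Code implements it), an approval gate is just a deny-list check that runs before anything executes; the patterns below are illustrative.

```python
# A toy illustration of the approval-list idea: classify a proposed
# shell command before it runs, and force a human decision on
# anything that matches a risky pattern.
import re

RISKY_PATTERNS = [
    r"git push\s+.*--force",   # force-pushes rewrite shared history
    r"\brm\s+-rf\b",           # destructive filesystem operations
    r"\bDROP\s+TABLE\b",       # anything touching production data
    r"AWS_SECRET|API_KEY",     # anything involving secrets
]

def needs_approval(command: str) -> bool:
    return any(re.search(p, command, re.IGNORECASE) for p in RISKY_PATTERNS)

proposed = "git push origin main --force"
if needs_approval(proposed):
    print(f"BLOCKED pending human approval: {proposed}")
else:
    print(f"ok to run: {proposed}")
```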
The guardrails aren’t about distrust. They’re about maintaining the accountability that defines engineering.
Kill Your Claudes
You know that genre of film where the hero repeats the same day—Groundhog Day, Edge of Tomorrow? Each loop, they bring more experience to the task. That’s how I treat Claude sessions.
Don’t let them ramble across multiple goals. Focus each session on a single objective. When it’s achieved, have Claude write a summary and commit the changes. Then end the session and start fresh.
It’s easy to get attached. It feels like you’re making progress, like Claude is reading your mind. But every turn in the conversation sends the entire history back into the prompt. As context grows, inference quality degrades. The models keep improving (Opus 4.5 is remarkable). But the principle holds: one clear goal per conversation.
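If you want to see why, picture the chat API underneath. It is stateless: every turn resends the full transcript, so turn N pays for turns 1 through N−1 all over again. A schematic sketch, with the actual model call stubbed out:

```python
# Why long sessions get expensive and fuzzy: the history list below
# only ever grows, and the whole thing goes back into the prompt on
# every turn.
history = [{"role": "system", "content": "You are a careful coding agent."}]

def send_turn(user_message: str) -> dict:
    history.append({"role": "user", "content": user_message})
    # A real call would be something like
    # client.chat.completions.create(model=..., messages=history);
    # the key point is that `messages` is the ENTIRE history,
    # not just the new message.
    reply = {"role": "assistant", "content": f"(response to: {user_message})"}
    history.append(reply)
    return reply

for i in range(3):
    send_turn(f"step {i}")
    print(f"turn {i}: {len(history)} messages resent on the next turn")
```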
Advice for Those Considering This Path
Start with domains you understand. I know infrastructure. I know systems. I understand how servers talk to each other, why databases fail, what makes deployment safe or dangerous. That domain knowledge is what makes the AI useful. Without it, you’re just vibe coding.
Build your verification muscles. Learn to test things. Learn to read error messages. Learn to ask “how would I know if this was wrong?” The AI will make mistakes. Your job is to catch them.
Document your context. Project notes, architectural decisions, conventions—anything that helps the AI understand what you’re trying to do. Good context dramatically improves output quality.
Embrace the discomfort. There will be moments when you feel like a fraud, when you’re in over your head, when you don’t understand what the AI just did. Stay curious. Ask questions. The worst thing you can do is pretend to understand when you don’t.
Ship things. The ultimate test is whether your code works in the real world. If it does, you’re engineering. If it doesn’t, you have work to do—but at least you’re learning.
The Honest Disclosure
Everything I’ve built in the past year carries an acknowledgment: “Built with Claude Code.”
I do this because it’s true, because it’s more interesting than pretending, and because I think this way of working is going to become increasingly common. The people who figure it out early—who develop the skills of direction rather than implementation—will have an advantage.
I’m not a programmer, but I am a builder.