
AI in Software Projects: The Reality Behind the Hype
Developers are split between those with AI superpowers and those locked out. Stakeholders expect 10x speed. Costs are ballooning. And the SDLC still exists. Here's what's actually happening with AI in software delivery.
Everyone's talking about AI transforming software delivery. Some teams are already running with Claude Code and Copilot. Others haven't touched it yet. And somewhere in a boardroom, a stakeholder just asked why your project isn't going twice as fast.
Let's talk about what's actually happening.
The Developer Divide Nobody's Talking About
Walk into any tech company today and you'll find two very different realities coexisting on the same team.
On one end, you have developers who are deep into AI-assisted development — using tools like GitHub Copilot, Cursor, or Claude Code to write boilerplate in seconds, generate test cases, debug faster, and navigate unfamiliar codebases with ease. For them, AI isn't a gimmick anymore — it's muscle memory.
On the other end, you have developers who haven't touched these tools at all — not by choice, but because their company hasn't approved the subscriptions, IT has locked down external tools, or there's a blanket policy of "no AI-generated code" for compliance reasons.
The result? Wildly uneven teams. A senior developer on a restricted corporate laptop is technically "slower" than a junior who got access to Claude Code. The skill gap is being redrawn in real time, and companies that aren't paying attention will feel it in their hiring pipelines and delivery velocity — but not in the ways they expect.
The uncomfortable truth: AI access is fast becoming a form of developer privilege. Teams without it aren't just missing out on productivity — they're being measured against benchmarks set by teams that have it.
What Stakeholders Think Is Happening
Here's the story playing out in boardrooms and sprint reviews:
"AI can write code now. So if a feature used to take 2 weeks, it should take 3 days. We need to revisit our estimates."
It's not an unreasonable instinct. If the tool writes the code faster, the math seems simple.
Except the math isn't simple.
Software delivery has never been bottlenecked purely by the speed of typing code. It's bottlenecked by:
- Understanding the problem — requirements gathering, business analysis, stakeholder alignment
- Designing the solution — architecture decisions, trade-offs, tech debt considerations
- Writing the code — yes, this part is genuinely faster with AI
- Reviewing the code — arguably slower now, because AI-generated code needs careful scrutiny
- Testing — unit tests, integration tests, UAT, regression
- Deployment and stabilisation — CI/CD pipelines, environment configs, hotfixes
AI meaningfully accelerates one or two of those stages. The rest remain largely the same — and some are getting more complex, not less.
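The bounded effect of speeding up one stage can be sketched with a quick Amdahl's-law-style calculation. The phase share and speedup below are illustrative assumptions, not measured data:

```python
# Amdahl's-law-style sketch: accelerating one SDLC phase has a bounded
# effect on total delivery time. The numbers here are illustrative guesses.

def overall_speedup(phase_share: float, phase_speedup: float) -> float:
    """Total speedup when a phase taking `phase_share` of delivery time
    becomes `phase_speedup` times faster and everything else stays fixed."""
    remaining = 1.0 - phase_share
    return 1.0 / (remaining + phase_share / phase_speedup)

# Suppose coding is 25% of delivery time and AI makes it 4x faster:
print(round(overall_speedup(0.25, 4.0), 2))  # ~1.23x overall, not 4x
```

Even if the coding phase became infinitely fast, the overall speedup in this scenario would cap out at 1/(1 − 0.25) ≈ 1.33x — which is why "2 weeks becomes 3 days" doesn't follow from faster typing.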
The expectation gap is real and it needs to be managed. Engineering leaders need to have honest conversations with stakeholders before the pressure distorts sprint planning into something undeliverable.
The Cost Conversation Nobody Wants to Have
AI tools aren't free. And when used carelessly, they aren't cheap either.
A developer who leans on a hosted LLM for every small decision — generating a 50-line function, then asking for a review, then asking for a refactor, then asking for documentation — can rack up significant token costs in a day. Multiply that across a 20-person team and you're looking at a bill that can surprise finance teams who signed off on "just a few API subscriptions."
The hidden cost factors:
- Model selection matters. Using GPT-4o for tasks a smaller model handles fine is like hiring a consultant to send your emails.
- Context window overuse. Stuffing 20,000 tokens of codebase into every prompt for tasks that need 500 tokens is wasteful.
- Iteration loops. AI doesn't always get it right the first time. Multiple prompt-response cycles compound costs quickly.
- Team-wide adoption without guardrails. Without usage policies, different developers use AI in wildly different (and expensive) ways.
Smart organisations are starting to treat AI model usage like cloud resources — with budgets, monitoring, and right-sizing conversations. The ones who don't will find AI tooling becoming a new line item that's hard to justify without a rigorous ROI story.
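A back-of-the-envelope spend estimator makes the right-sizing conversation concrete. Every number below — prompt volume, token counts, and per-million-token prices — is a hypothetical placeholder; substitute your provider's real rates:

```python
# Rough token-spend estimator for a team. All prices and usage figures
# are hypothetical placeholders -- plug in your provider's real rates.

def monthly_cost(devs: int, prompts_per_day: int, tokens_per_prompt: int,
                 price_per_million: float, workdays: int = 21) -> float:
    """Estimated monthly spend in dollars for a team of `devs`."""
    tokens = devs * prompts_per_day * tokens_per_prompt * workdays
    return tokens / 1_000_000 * price_per_million

# 20 devs, 40 prompts a day, 8k tokens per prompt (codebase context
# stuffed in), at a hypothetical $10 per million tokens:
print(f"${monthly_cost(20, 40, 8_000, 10.0):,.0f}/month")

# Same workload with right-sized 500-token prompts on a cheaper model:
print(f"${monthly_cost(20, 40, 500, 1.0):,.0f}/month")
```

The two scenarios differ by more than two orders of magnitude for the same number of prompts — which is the whole argument for budgets, monitoring, and model right-sizing.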
The "We'll Ship Faster" Trap
This is the one that's going to hurt the most teams in 2026 and beyond.
AI can make certain parts of development dramatically faster. Boilerplate generation, CRUD APIs, unit tests for well-defined functions, documentation, code translation — these are genuine wins. A developer can produce in 2 hours what might have taken a day.
But here's what doesn't compress:
Preparation time. Before a single line of code is written (AI or otherwise), someone needs to define what "done" looks like. User stories, acceptance criteria, API contracts, data models, edge cases — none of this writes itself. AI can assist, but it can't replace domain knowledge and stakeholder conversations.
Review time. AI-generated code can look correct and be wrong in subtle ways. It can introduce security vulnerabilities, ignore project conventions, misunderstand business logic, or generate technically valid code that fails in the specific context of your system. Reviewers need to be more vigilant, not less.
Test automation gaps. Even the best test suites don't cover 100% of scenarios. AI can help write tests, but it can only test what you tell it to test. The unknown unknowns — the production edge cases, the user behaviours nobody anticipated — still slip through. Faster development without proportionally faster and more thorough testing is just shipping bugs faster.
The SDLC doesn't disappear. Planning → Design → Development → Testing → Deployment → Review. You can accelerate individual phases, but skipping them or compressing them arbitrarily creates technical debt, bugs, and rework that costs more time in the end than was "saved" at the start.
The honest message to stakeholders: AI makes good teams better. It doesn't make broken processes fast.
What Realistic AI Adoption Actually Looks Like
The teams getting genuine value from AI in software delivery right now are doing a few things right:
- They treat AI as a pair programmer, not a replacement developer. The human is still driving — AI is accelerating the journey, not choosing the destination.
- They've updated their review processes. Code review checklists now include AI-specific red flags. Reviewers are trained to look for hallucinated logic, over-engineered solutions, and missed context.
- They haven't changed their test coverage expectations. If anything, they've raised the bar — because faster code generation should mean more time available for testing, not less.
- They're measuring outcomes, not output. Lines of code per day is a vanity metric. Cycle time, defect escape rate, and deployment frequency are what matter.
- They're being honest about the learning curve. Getting good at AI-assisted development takes time. Early productivity gains are often followed by a plateau as teams figure out how to use the tools well.
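The outcome metrics above are cheap to compute from data most teams already have. A minimal sketch, using entirely made-up sample records:

```python
# Sketch of outcome metrics over a sprint, using made-up sample records.
from datetime import date
from statistics import median

# (started, deployed) date pairs for completed work items -- sample data.
items = [(date(2025, 3, 3), date(2025, 3, 7)),
         (date(2025, 3, 4), date(2025, 3, 6)),
         (date(2025, 3, 5), date(2025, 3, 12))]

cycle_times = [(done - start).days for start, done in items]
print("median cycle time:", median(cycle_times), "days")  # days: 4, 2, 7

# Defect escape rate: bugs found in production vs. all bugs found.
escaped, caught_pre_release = 3, 27
print("escape rate:", escaped / (escaped + caught_pre_release))

# Deployment frequency: deploys per working day over the sprint.
deploys, working_days = 9, 10
print("deploys/day:", deploys / working_days)
```

Tracked over time, these three numbers tell you whether AI adoption is actually improving delivery — in a way that lines of code per day never will.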
The Bigger Picture
AI in software development is real, valuable, and here to stay. But it's not a magic accelerator that rewrites the rules of software delivery.
The teams that will win aren't the ones that use AI to go faster. They're the ones that use AI to go better — fewer bugs, cleaner architecture, faster feedback loops, more time for the genuinely hard problems that require human judgment.
Stakeholders, engineering managers, and developers all need to land on the same page: AI is a multiplier, not a shortcut. Used well, it raises the ceiling. Misused — through unrealistic expectations, poor governance, or uneven access — it creates new problems while solving old ones.
The hype will settle. The question is how much damage gets done before it does.
Have thoughts on this? I'd love to hear what AI adoption actually looks like on your team — whether it's working, where it's falling short, or what's surprised you.