I Went All-In on AI. The MIT Study Is Right.
My all-in AI experiment cost me my confidence
You’ve seen the MIT study. 95% of corporate AI initiatives FAIL.
You’ve probably shared it in meetings, posted about it on LinkedIn, used it to justify your AI concerns. But do you know why that number is so high? I do. Because I lived it.
I spent three months becoming part of that 95% on purpose.
My Three-Month Experiment in Failure
As a fractional CTO and advisor, I kept getting the same question: “How should we use AI in our engineering teams?” I could have given the standard consultant answer about augmentation and efficiency. Instead, I decided to find out what actually happens when you go all-in.
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Twenty-five years of software engineering experience, and I’d managed to degrade my skills to the point where I felt helpless looking at code I’d directed an AI to write. I’d become a passenger in my own product development.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.
The Pattern Every Failed Initiative Follows
The company gets excited about AI. Leadership mandates AI adoption. Everyone starts using AI tools. Productivity metrics look great initially. Then something breaks, or needs modification, or requires actual judgment, and nobody knows what to do anymore.
The developers can’t debug code they didn’t write. Product managers can’t explain decisions they didn’t make. Leaders can’t defend strategies they didn’t develop.
Everyone’s pointing at their AI tools saying, “It told me this was the right approach.”
During my experiment, I found myself in constant firefighting mode. Claude Code would generate something, it would be slightly off, I’d correct it, it would make the same mistake again, I’d correct it again. I was working harder than if I’d just written the code myself, but with none of the learning or skill development.
Bob Galen watched me go through this and called it perfectly in our latest podcast: “Who owns that product, Josh? You or Claude Code?” The answer was Claude Code. I’d abdicated ownership while telling myself I was being innovative.
The Right Balance (That Few Achieve)
The formula should be AI + HI, where HI (Human Intelligence) is larger than AI. What’s actually happening in those 95% of failures? It’s AI with a tiny bit of human oversight, if any.
When AI helps you write better code faster while you maintain architectural understanding—that’s augmentation. When AI writes code you don’t understand—that’s abdication.
When AI helps you analyze customer feedback while you make product decisions—that’s augmentation. When AI tells you what to build next—that’s abdication.
When AI helps you write better faster while maintaining your voice—that’s augmentation. When AI writes for you in a voice that isn’t yours—that’s abdication.
I know the difference because I’ve been on both sides. The abdication side feels easier initially. You’re shipping more! You’re moving faster! Then you realize you’re not actually in control anymore, and when something goes wrong—and something always goes wrong—you’re helpless.
The Masters We’re Losing
We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.
Bob and I represent something that might disappear: masters of our craft who learned by doing, failing, debugging, and doing again. We have 25+ years of accumulated scar tissue that tells us when something’s about to go wrong, why that architectural decision will haunt you, and what that customer feedback really means.
You can’t prompt your way to that knowledge. You can’t download that experience. You have to earn it. And if you’re letting AI do the work, you’re not earning anything except a dangerous dependency.
Your Abdication Audit
Time for a little uneasiness. Look at your recent work:
Can you explain every decision in detail without referencing what AI suggested? Could you do your job tomorrow if all AI tools disappeared? Are you getting better at your craft, or just better at prompting? When something breaks, is your first instinct to fix it or to ask AI to fix it?
If you’re squirming, you’re part of the 95%.
The Challenge
For the next week, pick one core skill of your job. Just one. Do it without any AI assistance. Write code without Copilot. Make product decisions without ChatGPT. Write strategy without Claude.
Feel that discomfort? That’s not incompetence. That’s your actual skill level revealing itself. That’s the gap between who you are and who you’ve been pretending AI makes you.
Now you have a choice. You can close that gap by developing your actual skills, using AI as a training partner rather than a replacement. Or you can keep abdicating, keep telling yourself you’re being innovative, and become part of that 95% failure rate.
The companies that will thrive aren’t the ones with the best AI tools. They’re the ones whose people use AI to become better, not to become lazier. They’re the ones where humans own the decisions, own the code, own the strategy, and use AI as an amplifier, not an autopilot.
I spent three months learning this the hard way. I let AI own my product development and almost lost myself as a developer. Don’t make my mistake. Don’t become another statistic in that 95%.
Own your craft. Use the tools. Don’t let the tools use you.
Stay courageous,
Josh Anderson
The Leadership Lighthouse
P.S. MIT’s study isn’t an outlier. Gartner, McKinsey, and others are finding similar failure rates. The pattern is consistent: abdication fails, augmentation succeeds. The question is: which side of that divide are you on?
P.P.S. I received a TON of fantastic feedback, both here and on LinkedIn, asking for more detail about my approach. Here’s the follow-up article with those details:
How I Built a Production App with Claude Code
After I published “I Went All-In on AI. The MIT Study Is Right.”, my inbox exploded. That article came from three months of forcing myself to use only Claude Code, AKA the experiment that nearly cost me my confidence as a developer.


Quite so. This is an example of a far older phenomenon: automate something, and the corresponding skill set and experience atrophy. If the only experience is with the automated system, the skill set is never acquired. It’s a problem far older than LLMs. The epitaph on Air France 447 was a training and experience shortfall in manual flying.
Those who cheat in courses by copying others' work may, if undetected, acquire a grade. They will not, however, have achieved the knowledge and mental skills that are the intended end product of the course.
The skills pipeline is necessary. One does not become a senior professional without the underlying experiences. The challenge is to develop and maintain an effective talent and skills development pipeline. It is a problem that long predates LLM availability.
Good piece. You might enjoy Lisanne Bainbridge's 1983 article "Ironies of Automation," which explains the problem we are facing well. But there is also a real risk that people who know how to write production code want to keep working the way they did before, and not figure out new ways of working that augment – as you aim for – the human's intelligence.
Also, in software teams, wouldn't it be typical to find code you don't quite get, and have to spend time figuring it out, because it was written by others and poorly documented?
In any case, expertise in software architecture will likely become ever more valuable when the agents roll in. Thanks for posting!