Market Impact: 0.5

Software Developers Say AI Is Rotting Their Brains

Tickers: META, GOOGL, MSFT, RDDT
Tags: Artificial Intelligence, Technology & Innovation, Company Fundamentals, Management & Governance, Corporate Guidance & Outlook, M&A & Restructuring

Tech executives say AI now generates a growing share of code at companies like Google, Microsoft, Meta, and Anthropic: Google says AI writes 75% of its new code, Microsoft up to 30%, and Anthropic says most of its teams produce 90% of their code with AI. But the article highlights developer backlash over flawed output, rising technical debt, and skill erosion, while AI-driven efficiency claims have largely been used to justify layoffs, including Meta's planned 10% workforce cut, Microsoft's voluntary retirement offer to 7% of its U.S. workforce, and Snapchat's 16% staff reduction.

Analysis

The market is still pricing AI as a clean productivity flywheel, but the more interesting signal is a widening gap between executive narrative and operating reality. If internal AI use is mostly substituting for junior labor while adding review overhead, the near-term beneficiaries are not the mega-caps' own customers but the vendors selling the picks-and-shovels layer: model hosts, inference infrastructure, code review and security tooling, and workflow orchestration.

That creates a second-order risk for the big platform names: they can claim efficiency gains today while quietly accumulating technical debt that shows up 6-18 months later as slower release cadence, higher defect rates, and more expensive remediation. For META, MSFT, and GOOGL, the incremental negative is not that AI usage is fake; it is that the productivity dividend may arrive later and prove more capital-intensive than consensus expects. If managements keep citing headcount reduction as evidence of AI ROI, the market could capitalize those savings into margin forecasts and underwrite overly aggressive operating leverage for 2026.

The more fragile part is governance: when companies broadly mandate AI-generated code, any security incident or major outage will force an immediate reassessment of how much of the "efficiency" was actually risk transferred onto users and future maintenance budgets.

The contrarian angle is that this is likely a timing issue, not a thesis killer. Enterprises usually overestimate first-round productivity gains and underestimate the compounding effect once the tools are embedded into standards, tests, and review layers. That means the selloff risk is best expressed tactically over the next one to three quarters, while the longer-term structural adoption case remains intact. The key variable is whether AI becomes a complement to engineers or a low-quality replacement that degrades output quality faster than it saves labor costs.