Why AI's Productivity Promise Feels Exhausting
By Jordan Hauge — Published November 3, 2025 — Category: Product Management, Artificial Intelligence, Workplace Productivity, Human-AI Collaboration
Product managers are saving 6-8 hours weekly with AI, but feeling more exhausted. Research reveals four hidden costs of AI productivity: trust calibration tax, cognitive load paradox, always-on burden, and the productivity Jevons paradox.
I saved eight hours a week when I started using AI for product management.

Specification writing that used to take me 12 hours now takes 4-6. Requirements documentation that consumed entire afternoons gets done before lunch. User stories that required careful deliberation practically write themselves.

So why the hell do I feel more tired?

The productivity gains are real. The exhaustion is also real. They're not contradictory - they're two sides of the same transformation.

This wasn't supposed to be how it worked.

The promise was simple: AI handles the grunt work, you focus on strategy, everyone wins. I'd reclaim hours for deep thinking, long-term planning, maybe even leave work at a reasonable hour.

Instead, I'm working just as much. My calendar is still full. And at the end of each day, I'm mentally wiped in a way I wasn't before.

I spent four years tracking my time allocation as a product manager, documenting the shift from pre-AI workflows (2020-2021) to AI-augmented ones (2024-2025). The quantitative gains are real and measurable: a 50% reduction in specification writing time, a 25% increase in hours allocated to strategic work. My productivity metrics look fantastic.

But here's what the spreadsheets don't capture: the new forms of exhaustion that come with those gains.

After talking with dozens of other product managers and diving into emerging research on human-AI collaboration, I've identified four hidden costs that explain why AI's productivity promise feels less like freedom and more like a different kind of burden.

The Trust Calibration Tax: A Thousand Micro-Decisions You Didn't Know You Were Making

Here's a situation that happens to me several times a day. Claude generates a feature specification. It looks good. Probably good. Maybe good?

- Do I ship it as-is?
- Do I spot-check a few sections?
- Do I review every line?
- Do I regenerate it with different context?

That moment of hesitation?
That's the trust calibration tax.

Research from IBM and UC Berkeley on human-AI decision making reveals something intriguing: knowing when to trust or distrust AI recommendations requires constant case-by-case judgment (Zhang, Liao, & Bellamy, 2020). Unlike working with a human colleague, where you develop generalized trust over time, AI systems require what researchers call "appropriate calibration" on every single interaction.

Think about how you work with a trusted teammate. After six months, you know Sarah's work needs minimal review. You know Jake sometimes misses edge cases in his user stories. You've built mental models that reduce decision overhead.

AI doesn't work that way.

The model that nailed your last spec might hallucinate nonsense on the next one. The same prompt that produced clean documentation yesterday might generate verbose garbage today. You can't build reliable heuristics because AI performance varies in ways that don't map to human consistency patterns.

So you're stuck making hundreds of micro-decisions:

- Trust this output or verify it?
- Good enough or regenerate?
- Spot-check or deep review?
- Use as-is or as inspiration?

Each decision feels trivial. Collectively, they're exhausting.

What's worse, research on trust dynamics in human-AI collaboration shows that first impressions create persistent trust patterns that are hard to override, even with contradictory evidence (Pareek, Velloso, & Goncalves, 2024). When AI produces a bad output early in a project, you over-verify everything afterward. When it performs well initially, you under-verify later - and miss errors you should have caught.

You're constantly calibrating, but you can never fully calibrate.

I tracked my own learning curve over six months.
For the first two months, I verified everything, negating most of AI's speed benefits. Months three and four, I started trusting too much and shipped work with subtle errors. By month five, I found some equilibrium - but it required conscious effort on every task.

Research from MIT suggests this isn't unique to my experience. Studies on human-AI collaboration identify trust calibration as one of the most significant cognitive challenges in adopting AI systems (Bansal et al., 2019). The challenge isn't learning to use the tool - it's learning when to trust what it produces.

And unlike learning to trust a colleague, you never fully get there. Because the AI keeps changing. Tools update. Capabilities expand. What worked last month might not work this month.

The tax compounds.

The Cognitive Load Paradox: Why Managing AI Is Invisible Labor

I used to write specifications myself. It was tedious work, but cognitively straightforward: think through requirements, document them clearly, review for completeness.

Now I design context for AI to write specifications. Sounds easier, right?

It's not. Here's what "letting AI handle it" actually involves.

Before the task:

- Determine what context the AI needs
- Decide what I need to tell it versus what it should infer
- Structure the information architecture
- Anticipate where it might go wrong

During the task:

- Monitor output as it generates
- Catch drift from requirements early
- Decide when to let it continue versus restart
- Manage the conversational context window

After the task:

- Validate accuracy at varying levels of depth
- Determine what changes require regeneration versus manual edits
- Assess whether errors are acceptable versus critical
- Decide if the output needs another pass

This is what researchers call "hybrid orchestration": managing workflows that span both human and AI contributions with different capabilities and trust requirements (Bansal et al., 2021).
The problem? This cognitive work is invisible. It doesn't show up in time tracking. It doesn't appear in productivity metrics.

Your boss sees "specification completed in 4 hours instead of 12" and thinks you gained 8 hours of capacity. What actually happened? You traded explicit labor (writing) for cognitive labor (orchestrating). And cognitive labor is often more exhausting.

Research using physiological sensors and eye-tracking technology in human-AI collaboration reveals something striking: measured cognitive workload during AI-assisted tasks often exceeds the workload of performing tasks manually (Hilmi, Hamid, & Ibrahim, 2024). Heart rate variability, electrodermal activity, and other stress markers show higher mental effort even when tasks complete faster.

Why? Because you're managing two parallel processes:

1. The actual work (requirements analysis, strategic thinking)
2. The AI coordination layer (context design, output validation, error detection)

Cognitive Load Theory tells us that working memory has limited capacity - roughly 7±2 pieces of information at once (Sweller, 1988). When you write specifications manually, your cognitive load is concentrated on the requirements themselves. When you orchestrate AI to write specifications, you're splitting cognitive resources between the requirements and managing the AI system.

And here's the kicker: unlike manual work, where effort scales linearly with task size, AI orchestration creates fixed overhead on every task. Small tasks carry proportionally huge cognitive loads. You spend as much mental energy setting up context for a simple user story as you do for a complex feature spec.

The speed improvement is real. But the cognitive cost? Also real. And it's harder to see, harder to measure, and harder to recover from.

I used to leave work mentally tired from writing. Now I leave work mentally tired from orchestrating.
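As a rough sketch of why small tasks feel disproportionately heavy, the fixed-overhead effect can be put into a toy cost model. The overhead and speedup numbers below are illustrative assumptions, not measurements:

```python
# Toy model: manual effort scales with task size, while AI-assisted work
# adds a fixed orchestration overhead (context design, monitoring,
# validation) on every task. All numbers are illustrative assumptions.

ORCHESTRATION_OVERHEAD_H = 1.0  # assumed hours of context + validation per task
AI_SPEEDUP = 3.0                # assumed speedup on the writing itself

def manual_hours(task_size_h: float) -> float:
    """Manual effort: just the task itself."""
    return task_size_h

def ai_assisted_hours(task_size_h: float) -> float:
    """AI-assisted effort: fixed overhead plus the sped-up task."""
    return ORCHESTRATION_OVERHEAD_H + task_size_h / AI_SPEEDUP

for size in (1.0, 4.0, 12.0):  # small story, medium doc, big spec
    saved = manual_hours(size) - ai_assisted_hours(size)
    print(f"{size:>4.0f}h task: AI-assisted {ai_assisted_hours(size):.1f}h, "
          f"saved {saved:+.1f}h")
```

Under these made-up numbers, a one-hour user story actually takes longer with AI in the loop, while a 12-hour spec nets about seven hours saved: the fixed overhead dominates small tasks and washes out on large ones.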
The exhaustion feels different - more fragmented, more pervasive, harder to shake.

The Always-On Burden: How AI Broke the Boundaries Agile Built

Remember Scrum ceremonies? I used to complain about them. The retrospectives that felt like therapy sessions. The planning poker sessions that dragged on. The daily standups that always ran over.

But here's what those ceremonies actually provided: boundaries.

Sprint planning meant two weeks of committed scope. You knew what you were building. The scope was locked. You had permission to say "we'll address that next sprint" without guilt.

Daily standups created a rhythm. Problems surfaced once per day, not continuously. You could batch coordination overhead instead of context-switching constantly.

Retrospectives forced reflection at defined intervals. You weren't expected to optimize continuously - just periodically.

Agile methodology's time-boxed structure wasn't just about managing work. It was about managing attention. About creating spaces where you could focus. About giving people permission to not be responsive every single moment.

AI broke that. Not intentionally. Not maliciously. But definitely completely.

Here's what changed: AI doesn't respect sprint boundaries. It's always available. It doesn't need standups to communicate. It doesn't batch its feedback for retrospectives.

When a developer hits a blocker at 8pm, they can ask Claude for help. When a designer needs input at 6am, they can query GPT-5. When a stakeholder wants to see options on Sunday afternoon, they can generate them instantly.

And here's the insidious part: when AI makes work possible at any hour, it creates pressure to actually do work at any hour.

Research on "always-on" workplace culture reveals the psychological mechanism behind this. When office workers spend 90% of their time at their desks (up from 30% in the pre-digital era), the expectation of constant availability becomes normalized (Perceptyx, 2024).
The "digital leash" created by smartphones and Slack was bad enough. AI supercharges it. Because now it's not just about responding to messages. It's about the fact that you could be making progress on any task at any moment. The barrier to productivity is lower than it's ever been.

This creates what researchers call "invisible burnout": exhaustion that doesn't show up in performance metrics but infiltrates how you interact, decide, and function (Othman & Conbere, 2025). You're not obviously overworked. Your hours might be reasonable. But the cognitive load of continuous availability wears you down in ways that are hard to articulate.

I found myself checking Claude at 10pm to unblock tomorrow's work. Reviewing AI-generated specs during breakfast. Queuing up research prompts while walking my dogs.

And here's the thing: it felt productive. It felt efficient. I was "winning" at the productivity game. But by Thursday afternoon, I was toast. Not from overwork in the traditional sense. From the absence of boundaries. From the erosion of defined "on" and "off" states.

The Mayo Clinic's research on workplace burnout identifies blurred work-life boundaries as one of the primary drivers of exhaustion (Swensen & Shanafelt, 2024). When your workplace and personal space are the same, and your AI coworker is always available, those boundaries effectively disappear.

Agile gave us permission to defer. To say "not in this sprint." To batch our responsiveness. AI doesn't respect those boundaries.

Continuous Adaptive Development - the framework I'm working to define through my research - describes AI-augmented workflows that need to solve for this.
But right now, most organizations are just layering AI tools onto existing structures without rebuilding the boundary mechanisms that made Agile sustainable. We optimized for flow without considering that humans need friction to rest.

The Productivity Jevons Paradox: Why Saving Time Means Doing More

In 1865, economist William Stanley Jevons observed something counterintuitive: when coal-powered steam engines became more efficient, coal consumption didn't decrease - it exploded. Better efficiency didn't reduce demand. It amplified it. Cheaper energy meant more applications, more uses, more consumption.

This pattern repeats throughout technological history. Personal computers didn't reduce office work. Email didn't reduce communication overhead. Smartphones didn't reduce information processing demands. Each efficiency gain unlocked latent demand that exceeded the capacity we saved.

AI is doing the same thing to knowledge work.

I saved 8 hours weekly on specification writing. That should mean 8 hours for strategy, right? Wrong. Here's what actually happened:

Week 1: I used those hours to write better specifications. More detail. More edge cases. Higher quality.

Week 2: Stakeholders noticed the quality improvement. They asked for specifications on three additional features we'd normally defer.

Week 3: With specs completing faster, we accelerated prototyping. More prototypes meant more feedback cycles. More refinement work.

Week 4: Leadership increased roadmap expectations. If specs complete in half the time, we can deliver twice as much, right?

I thought AI would give me my evenings back. Instead, my capacity expanded to fill the scope available.
And the scope available is infinite. Within a month, my saved time had been consumed by expanded scope.

This is the productivity Jevons paradox in action, and research confirms it's not just my experience. Stanford's 2024 AI Index Report documented that knowledge workers using AI assistants are completing more work in the same timeframe - but critically, they're not working less (Brynjolfsson, Li, & Raymond, 2024). The productivity gains are real. The time savings are real. But they're being reinvested immediately into more work rather than more rest.

Research from BetterUp Labs and Stanford Social Media Lab quantifies the downstream effects: 40% of US desk workers report receiving low-quality AI-generated work ("workslop") from colleagues, spending an average of 2 hours per incident resolving issues that shouldn't exist (BetterUp Labs, 2025). For a 10,000-person company, this creates $9 million in annual wasted expenses. The productivity multiplier creates a quality dilution that consumes the time it saved.

And here's the psychological trap: when an hour of your effort yields what used to take a day, rest starts feeling like a loss. Every hour not spent working feels wasteful. The opportunity cost of downtime becomes infinite.

Researcher Arush Sharma describes this as a "strange psychological trap: we're more powerful than ever, but feel increasingly inadequate" (Sharma, 2025). The baseline keeps moving. When everyone can do 10x more, you're not ahead - you're just keeping up.

This is the part nobody warned us about.

Because here's the brutal reality: AI doesn't just handle existing tasks faster. It makes previously impossible tasks possible. Analysis you couldn't do. Research you couldn't complete.
Scenarios you couldn't model. Every efficiency gain reveals ten new things you could be doing.

The research on this is stark. A Danish study tracking 25,000 workers found that despite AI tool adoption, there were virtually no changes in wages, working hours, or employment levels (Humlum & Vestergaard, 2024). Workers are more productive on paper. But from a macro view, they're just doing more work for the same compensation. The productivity gains accrue to the output, not to the worker.

What Actually Works: Navigating the Exhaustion Without Losing the Benefits

So what do we do? Because here's the thing: I'm not going back to manual specification writing. The efficiency gains are real. The quality improvements are measurable. AI genuinely makes me better at my job.

But sustainable? That's the question we haven't solved yet.

After experimenting for a year and talking with dozens of product managers in similar situations, here are the strategies that seem to actually work.

1. Name the New Work

The first step is recognizing that AI coordination is real work. "Letting AI handle it" sounds passive. It's not. Context design, output validation, and trust calibration are genuine cognitive labor. When you account for it in estimates and workload planning, the invisible becomes visible.

When leadership asks "why did that take 6 hours if AI wrote it in 15 minutes?" you can explain: "Because I spent 5 hours designing the context architecture and validating the output."

Organizations that track AI orchestration time separately from task completion time see more realistic planning and less burnout (Gartner, 2024).
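One lightweight way to make that split visible is to log orchestration time as a first-class field next to each task. A minimal sketch - the class and field names here are my own invention, not from the cited Gartner work:

```python
from dataclasses import dataclass

# Minimal sketch of tracking AI orchestration time separately from
# generation time, so the "invisible" coordination work shows up in
# planning data. Field names and sample values are illustrative.

@dataclass
class TaskLog:
    name: str
    generation_h: float     # wall-clock time the AI spent producing output
    orchestration_h: float  # context design, monitoring, validation

    @property
    def total_h(self) -> float:
        return self.generation_h + self.orchestration_h

logs = [
    TaskLog("checkout feature spec", generation_h=0.25, orchestration_h=5.0),
    TaskLog("user story batch", generation_h=0.1, orchestration_h=1.2),
]

orchestration_share = (sum(t.orchestration_h for t in logs)
                       / sum(t.total_h for t in logs))
print(f"Orchestration share of total time: {orchestration_share:.0%}")
```

This turns the answer to "why did that take 6 hours if AI wrote it in 15 minutes?" into a number in the data rather than an awkward conversation.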
2. Build Trust Calibration Frameworks

You'll never fully eliminate trust calibration overhead, but you can systematize it. I built a decision tree for myself:

- High-risk output (customer-facing, security-related, compliance-sensitive): full verification required
- Medium-risk output (internal specs, docs, analysis): spot-check key sections plus sanity validation
- Low-risk output (brainstorming, drafts, research summaries): use as-is after a quick scan

Research on appropriate trust in AI shows that explicit frameworks reduce cognitive load and improve accuracy (Wischnewski, Krämer, & Müller, 2023). You're not making a fresh decision every time; you're applying a consistent protocol.

Does it still require judgment? Yes. But it's one judgment (risk categorization) instead of ten (should I check this? what about this section? how deep should I review?).

3. Protect the Boundaries Agile Gave You

AI's always-on nature is seductive. Resist it. I implemented hard rules:

- No AI work after 7pm (I'm working on it, ok!)
- No AI work before 7am
- No "quick prompts" during family time (do not validate this with my wife!)
- One day per week with zero AI usage

Are there exceptions? Of course. But treating them as actual exceptions, not the norm, makes a psychological difference.

Research on workplace boundaries shows that constantly being "on" leads to exhaustion, and that boundaries allow you to recharge and return to work refreshed and focused (University of Kentucky HR, 2025).

Continuous Adaptive Development doesn't mean continuously working. It means adapting how you work while protecting when you don't.
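The decision tree from strategy 2 can be reduced to a small lookup, which is the whole point: one risk judgment in, one consistent review protocol out. A sketch, with tier definitions taken from the list above and the function name my own:

```python
# Sketch of the risk-tier decision tree as a lookup table, so trust
# calibration becomes one judgment (the tier) instead of many per output.
# Tier definitions follow the article; names are illustrative.

VERIFICATION_PROTOCOL = {
    "high":   "full line-by-line verification",               # customer-facing, security, compliance
    "medium": "spot-check key sections + sanity validation",  # internal specs, docs, analysis
    "low":    "quick scan, use as-is",                        # brainstorming, drafts, summaries
}

def verification_step(risk_tier: str) -> str:
    """Map a single risk judgment to a consistent review protocol."""
    return VERIFICATION_PROTOCOL[risk_tier]

print(verification_step("medium"))
```

The payoff isn't the code, it's the constraint: once an output is categorized, the depth of review is no longer a fresh decision.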
4. Resist Jevons Paradox Through Explicit Scoping

The hardest part: saying no to expanded scope even when you technically have capacity. When AI makes something possible, the default answer doesn't have to be yes.

I started having explicit conversations with leadership about the difference between:

- Capacity: what we technically could deliver
- Strategic value: what we should deliver
- Sustainable pace: what we can deliver without burning out

Research on the productivity J-curve suggests that AI's true economic value takes years to materialize because it requires complementary organizational changes (Brynjolfsson, Rock, & Syverson, 2017). Organizations that race to capture short-term efficiency gains often sacrifice long-term sustainability.

Your saved time is not automatically available time. It's opportunity time that requires conscious allocation.

5. Redefine What "Productive" Means

This is the philosophical shift that matters most. Traditional productivity measured outputs per hour. AI multiplies outputs, so by that metric, we're all incredibly productive.

But what if we measured:

- Quality of strategic decisions made
- Depth of customer understanding developed
- Strength of relationships built
- Sustainability of pace maintained
- Innovation created rather than tasks completed

When I shifted my mental model from "how much did I produce?" to "how much value did I create?" the exhaustion eased. Not because the work changed. Because the framing changed.

AI is a tool for value creation, not task completion. When you optimize for the wrong metric, you end up exhausted and unsure why.

The Uncomfortable Truth: This Is Just the Beginning

Here's what pulls me into that deep rabbit hole of negativity and fear: AI capabilities are improving monthly. What feels like manageable cognitive overhead today might be overwhelming in six months. The tools I've built equilibrium with will change.
New capabilities will create new forms of exhaustion I haven't anticipated. And we're all figuring this out in real time with no playbook.

The research on AI productivity is only now catching up to practitioner experience. Most studies still focus on task completion speed, not cognitive sustainability. Most analyses measure output gains, not exhaustion costs. We're in the middle of a massive societal experiment, and the data won't be clear for years.

What I do know: the productivity promise is real. The exhaustion is also real. They're not contradictory - they're two sides of the same transformation.

AI doesn't reduce work. It changes the nature of work. And that change comes with costs we're only beginning to understand.

My 50% reduction in specification writing time is genuine. My 25% increase in strategic work allocation is measurable. Those numbers belong in the "wins" column. But the trust calibration tax, the cognitive load paradox, the always-on burden, and the productivity Jevons paradox? Those belong in the "costs we need to account for" column.

The question isn't whether AI makes us more productive. It clearly does. The question is whether we can sustain that productivity without burning out. And right now, I'm not sure anyone has the answer.

What Comes Next

I'm treating this article as the start of a conversation, not the end of one.

If you're a product manager experiencing this same exhaustion - if your productivity metrics look great while you feel progressively more tired - you're not alone. This isn't a personal failing. It's a systemic challenge we're all navigating.

The framework I'm developing - Continuous Adaptive Development - describes how AI changes product workflows. But it needs an equally important counterpart: Continuous Adaptive Recovery.

We figured out how to work continuously.
Now we need to figure out how to rest continuously. Because the alternative is a future where we're all incredibly productive right up until we're not. And I don't think that's the future any of us actually want.

This article is part of an ongoing research series examining the transformation of product management in the age of generative AI. If you're interested in participating in the research, email jordan@jamcreative.co.