Productivity: What Changes When AI Does More of the Work
AI improves productivity most when engineers can constrain the task, provide context, and verify output quickly. The productivity question is less “does AI write code?” and more “does it shorten the validated path from intent to working software?”
High-value productivity uses
- GitHub Copilot, Cursor, Claude Code, OpenAI Codex, Amazon Q Developer, Gemini Code Assist, JetBrains AI Assistant, Windsurf, and Tabnine help draft functions, glue code, examples, CLI scripts, migrations, and repetitive changes.
- Sourcegraph Cody, repository-aware IDEs, and agentic CLI tools help engineers summarize unfamiliar code, trace call paths, identify owners, and build a mental model faster.
- AI is strong at generating first-draft unit tests, API usage examples, docstrings, README updates, release notes, and PR summaries — all work that often blocks throughput but is easy to review.
- Agents can apply repeated edits across files, convert APIs, modernize syntax, and propose dependency upgrades when backed by test suites and human review.
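The "easy to review" point about first-draft tests can be made concrete. A minimal sketch of the kind of draft an assistant typically produces — the `slugify` helper and its tests are hypothetical, not from any particular tool:

```python
# Hypothetical helper plus assistant-style first-draft tests.
# Everything here is illustrative; the reviewer's job is only to
# check that the cases and expected values make sense.

import re

def slugify(title: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("a -- b") == "a-b"

def test_slugify_empty_input():
    assert slugify("") == ""

# Draft tests are cheap to run inline while reviewing.
test_slugify_basic()
test_slugify_collapses_separators()
test_slugify_empty_input()
```

Tests like these rarely prove correctness on their own, but they cost seconds to read and reject, which is why this kind of output tends to be net-positive for throughput.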
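A repeated cross-file edit of the kind described above can be sketched as a small script; the deprecated `fetch_user` / replacement `get_user` names are assumptions for illustration, and the safety comes from the test suite and review step, not the rewrite itself:

```python
# Sketch of a scripted repo-wide rename, the pattern an agent might
# propose for an API conversion. fetch_user/get_user are hypothetical.

import re
from pathlib import Path

OLD_CALL = re.compile(r"\bfetch_user\(")  # deprecated call site (assumed name)
NEW_CALL = "get_user("                    # replacement (assumed name)

def rewrite_tree(root: Path) -> int:
    """Apply the rename to every .py file under root; return count changed."""
    changed = 0
    for path in root.rglob("*.py"):
        src = path.read_text(encoding="utf-8")
        dst = OLD_CALL.sub(NEW_CALL, src)
        if dst != src:
            path.write_text(dst, encoding="utf-8")
            changed += 1
    return changed

# After running: execute the project's test suite (e.g. `pytest -q`)
# and send the diff for human review before merging. The mechanical
# edit is only trustworthy because those gates sit behind it.
```

Regex-based rewrites like this are deliberately dumb; for edits that depend on syntax rather than text, a parser-backed codemod tool is the safer equivalent of the same pattern.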
Measured evidence is mixed
| Finding | Interpretation |
|---|---|
| Copilot controlled experiment: developers completed a bounded JavaScript task 55.8% faster. | Strong evidence for local coding acceleration; not proof of enterprise delivery acceleration. |
| McKinsey: some coding tasks up to twice as fast; estimated 20–45% direct productivity impact depending on activity. | Large potential, especially in coding, documentation, and task support; requires workflow redesign. |
| METR RCT: experienced open-source maintainers took 19% longer on tasks in familiar, mature repositories. | AI can add review burden and context mismatch on complex, high-context work. |
| Uplevel study: throughput did not automatically increase and bug rate rose ~41% in one enterprise sample. | Tool access alone is not an adoption strategy; quality gates matter. |