How AI will change SW Engineering

Quality: More AI-Generated Code Raises the Bar for Verification

AI can improve quality when it increases test coverage, catches obvious issues before human review, explains risks, and shortens feedback loops. It can also degrade quality if it increases code volume without stronger validation.

Why this changes the profession: as AI lowers the cost of producing code, verification becomes the bottleneck. Test strategy, code review judgment, security thinking, observability, and incident learning become more central to engineering identity.

Quality use cases

Testing

Diffblue generates Java unit tests; mabl and Applitools support AI-assisted end-to-end and visual testing; Launchable uses predictive test selection to speed CI feedback.
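Whatever tool generates the tests, the thing to ask for is the same: boundaries and failure modes, not just the happy path. A minimal hand-written illustration in Python (`parse_port` is a hypothetical function under test):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port from a string, rejecting out-of-range values."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Happy paths, plus the edge cases a generated suite should cover:
# boundary values, off-by-one inputs, and malformed strings.
for raw, expected in [("80", 80), ("1", 1), ("65535", 65535)]:
    assert parse_port(raw) == expected

for raw in ["0", "65536", "-1", "http", ""]:
    try:
        parse_port(raw)
    except ValueError:
        pass  # expected: each of these must be rejected
    else:
        raise AssertionError(f"accepted invalid port: {raw!r}")
```

The invalid cases are the ones AI assistants tend to skip unless prompted; asking explicitly for failure modes is what raises coverage quality, not just coverage numbers.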

Code review

GitHub Copilot, CodeRabbit, Qodo, and GitLab Duo can summarize diffs, flag bugs, suggest tests, and enforce standards before human reviewers spend attention on a change.

Security

Snyk DeepCode AI, SonarQube, GitHub Advanced Security, and GitLab security workflows use AI-assisted explanation, prioritization, and fix suggestions for vulnerabilities.

Operations

Datadog Watchdog, Dynatrace Davis AI, and Braintrust support anomaly detection, root cause analysis, incident summarization, and LLM application observability.
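The anomaly detection in these platforms is far more sophisticated, but the core idea can be sketched with a z-score over a metric series; the threshold and the flat-baseline data are illustrative:

```python
import statistics

def anomalies(series: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of points whose z-score exceeds the threshold.

    A toy stand-in for what platform anomaly detectors do at scale
    over metrics such as latency or error rate.
    """
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# A latency series that is steady, then spikes once.
latency_ms = [100.0] * 30 + [900.0]
```

Production systems add seasonality, windowing, and learned baselines on top of this, but the engineering value is the same: machines watch the stream so humans only look at the deviations.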

Quality pattern

  1. Generate tests before or with code. Ask AI for edge cases and failure modes, not only happy paths.
  2. Require deterministic checks. Lint, type-check, run unit and integration tests, scan for security issues, and run evals for AI features.
  3. Use AI review as pre-review. It should reduce human reviewer load, not replace judgment.
  4. Track escaped defects and rework. More PRs are not useful if more bugs reach production.
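Steps 2 and 3 amount to a deterministic gate that runs before any human looks at the change. A minimal sketch in Python; the specific tools (ruff, mypy, pytest) are illustrative placeholders for whatever your stack uses:

```python
import subprocess
import sys

def run_gate(checks: list[list[str]]) -> list[str]:
    """Run each check command; return the commands that failed."""
    failed = []
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            failed.append(" ".join(cmd))
    return failed

if __name__ == "__main__":
    # Illustrative tool choices; substitute your project's linter,
    # type checker, test runner, security scanner, and eval harness.
    checks = [
        ["ruff", "check", "."],
        ["mypy", "src"],
        ["pytest", "-q"],
    ]
    sys.exit(1 if run_gate(checks) else 0)
```

The point of keeping the gate deterministic is that AI review output can vary between runs; the gate cannot, so it is what merges are allowed to depend on.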

Risk: AI can produce plausible but subtly wrong code. Quality improves only when teams strengthen the surrounding engineering system: small changes, clear ownership, tests, architecture, observability, and security review.
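Step 4 of the pattern is just arithmetic on data most teams already collect. A sketch of the two ratios worth watching; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class PeriodStats:
    merged_prs: int       # total PRs merged in the period
    escaped_defects: int  # production bugs traced to this period's changes
    rework_prs: int       # PRs that only revert or fix earlier PRs

def quality_signal(s: PeriodStats) -> dict[str, float]:
    """Normalize defects and rework by throughput, so rising PR volume
    cannot mask declining quality."""
    return {
        "escaped_defect_rate": s.escaped_defects / s.merged_prs,
        "rework_ratio": s.rework_prs / s.merged_prs,
    }
```

If AI assistance doubles merged PRs but these ratios climb, the extra volume is cost, not progress, which is exactly the trap the pattern above guards against.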