The AI-Era Trap of 'Workslop': How Do We Ensure Quality in AI Coding and AI Code Review?
In software development, AI coding assistants are being adopted at a rapid pace. Coding itself has become faster, and AI code review has recently arrived as well, seemingly promising dramatic gains in productivity.
However, a new challenge is emerging behind the scenes: Workslop. This term, coined in Harvard Business Review, refers to “AI-generated output that appears complete at first glance but is actually low quality and requires extra effort to fix.”
This workslop has the potential to quietly but surely erode quality and productivity in software development environments through two aspects: AI coding and AI code review. This article examines the dangers of workslop from these two perspectives and explains specific approaches to continue producing high-quality software while coexisting with AI.
AI coding tools generate code at remarkable speed, but that code is not always optimal: it may contain subtle bugs or unhandled edge cases, duplicate logic that already exists elsewhere, or sit awkwardly against the codebase's existing design. Accepting and merging such output with the mindset of "it should be fine because AI wrote it" is exactly how workslop gets embedded into the system. It becomes new technical debt, raises the cost of future changes, and ultimately drags down the whole team's productivity.
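To make the risk concrete, here is a contrived sketch of what such "looks done, isn't done" code can look like; the function and its flaws are invented for illustration, not drawn from any real codebase.

```python
from datetime import datetime

def average_recent_order_value(orders: list[dict]) -> float:
    """Return the average value of orders placed in 2020 or later."""
    total = 0.0
    for order in orders:
        # Re-parses the date string inline even though a shared date utility
        # (hypothetically) already exists elsewhere in the codebase; quiet
        # duplication like this is how technical debt accumulates.
        placed = datetime.strptime(order["date"], "%Y-%m-%d")
        if placed.year >= 2020:
            total += order["amount"]
    # Divides by *all* orders rather than only the ones counted above, so the
    # result is wrong whenever older orders are present, and an empty list
    # raises ZeroDivisionError: edge cases a happy-path test never touches.
    return total / len(orders)
```

Nothing here trips up a casual read, which is precisely the point: workslop is only cheap until someone has to debug it.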
On the other hand, AI code review can also become a breeding ground for workslop. An AI reviewer will happily produce comments such as suggestions to rename a variable, add a null check, or fill in a missing docstring: points that are rarely wrong, but often superficial.
The problem arises when human reviewers accept these comments at face value. Thinking "AI already pointed things out, so the review is covered," they skip the deeper, more essential questions that humans should be asking: Why is this change necessary? Is it architecturally appropriate? Has future extensibility been considered?
Far from improving review quality, this turns review into an empty formality. It is another serious form of workslop in the development process.
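A contrived sketch of the contrast (the snippet and the comments are invented for illustration): the remarks an AI reviewer typically leaves, next to the questions a human reviewer should still be asking.

```python
def checkout_total(prices: list[float], is_member: bool) -> float:
    # An AI reviewer's comments on code like this tend to stay at the surface:
    #   "Consider a more descriptive name than 'p'", "Add a docstring",
    #   "Guard against an empty list". All fair, none decisive.
    total = sum(p for p in prices)
    if is_member:
        total *= 0.9  # 10% member discount, hard-coded
    return total

# The questions a human review still needs to raise go deeper:
#   - Should pricing rules live in this layer at all, or in a dedicated pricing module?
#   - Is a hard-coded 10% consistent with how other promotions are configured?
#   - What breaks, and where, when this discount rule inevitably changes?
```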
So what should we do to reap AI's benefits while avoiding the workslop trap? Here are five specific approaches:
First, and most important, is mindset. AI is merely a powerful "Copilot"; final judgment and responsibility always rest with the human developer. Never accept AI-generated code or review comments blindly; make it a habit to evaluate them critically with your own eyes.
Second, AI output quality depends heavily on input quality. When generating code, describe requirements and constraints in detail through comments; when requesting an AI review, spell out the background, purpose, and specific concerns in the pull request description. This raises the odds that the AI produces higher-quality, context-appropriate output, as the sketch below illustrates.
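As a minimal sketch of what "detailed requirements through comments" can look like, compare an underspecified request with one that states rules and edge cases up front; the function, field names, and rules here are hypothetical, chosen only for illustration.

```python
# Underspecified: the assistant has to guess the merge rules, and it will guess something.
#   "Write a function merge_customer_records(a, b)."

# Context-rich: the requirements below leave far less room for the output to drift,
# and they double as criteria for reviewing whatever the assistant produces.
def merge_customer_records(primary: dict, duplicate: dict) -> dict:
    """Merge a duplicate customer record into the primary record.

    Requirements (hypothetical, for illustration):
      - Keep the primary record's 'id' and 'created_at' unchanged.
      - For 'email' and 'phone', keep the primary's non-empty value; only fill
        gaps from the duplicate.
      - Combine 'tags' from both records, preserving order and dropping repeats.
      - Do not mutate either input; return a new dict.
    Edge cases:
      - Optional keys ('phone', 'tags') may be missing from either record.
    """
    merged = dict(primary)
    for field in ("email", "phone"):
        if not merged.get(field) and duplicate.get(field):
            merged[field] = duplicate[field]
    tags: list = []
    for tag in primary.get("tags", []) + duplicate.get("tags", []):
        if tag not in tags:
            tags.append(tag)
    merged["tags"] = tags
    return merged
```

The same idea applies on the review side: a pull request description that states the background, the intent, and the specific points you are worried about gives an AI (or human) reviewer something concrete to check the change against.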
Third, position AI review as a first-pass screening. Human reviewers should then concentrate on the essential aspects AI struggles to judge: design philosophy, architectural consistency, business-logic validity, and long-term maintainability.
Fourth, establish team guidelines for AI tool usage and build a shared understanding to prevent confusion: what must be confirmed before merging AI-generated code, how AI review comments should be handled, and so on. Make it explicit what can be entrusted to AI and where human responsibility begins.
Fifth, and paradoxically, mastering AI makes developers' own fundamentals more important than ever. The programming ability, algorithmic knowledge, and design skill needed to judge the quality of AI-generated code and to modify and improve it appropriately are the best weapons for detecting workslop.
AI has the power to transform our development processes dramatically. But accepting that power uncritically risks unknowingly piling up mountains of workslop and eroding the very productivity it promised.
Neither AI coding nor AI code review is inherently good or evil. What matters is that we developers use AI wisely and keep taking responsibility for final quality ourselves. The key to high-quality software development in the AI era is building a collaborative relationship in which AI is a trusted partner and humans focus on the more creative, essential challenges.
That’s all from the Gemba, where we aim to eliminate workslop.