What a Machining Study Tells Us About the Future of Coding with AI Agents

A 2023 paper published in the Journal of Manufacturing Processes set out to answer a seemingly narrow question: how does a machinist’s behavior differ when working on a conventional (manual) milling machine versus a CNC machine? The answer turns out to be surprisingly relevant far beyond the machine shop — particularly for software engineers navigating the rise of AI coding agents.

The Study

Researchers at Rochester Institute of Technology designed a multi-modal study in which a trained machinist fabricated the same part — a leaf spring shackle — across four consecutive production runs on both conventional and CNC milling machines. They tracked everything: eye movements during design evaluation, dual-camera video of every production run, self-reported confidence, fatigue, and nuisance scores, dimensional quality measurements, and in-depth post-process interviews (Jarosz, Chen, & Liu, 2023).

The results were striking.

Manual Work Breeds Deeper Learning

During conventional machining, the participant changed their cutting strategy between every single run. They eliminated unnecessary fixturing, merged operations, split complex pocketing steps, and even moved reaming to a different machine — all to achieve better quality and reduced complexity. The share of operations changed between consecutive runs ranged from 14% to 50%.

During CNC machining? The strategy was modified once — between the first and second parts — and then held constant for the remaining runs. The changes were minor: adding deburring cycles and a finish pocketing pass.

The conventional process was harder, more fatiguing, and more annoying (as reflected in the participant's self-reported nuisance scores, which were consistently higher). But it was also where the real learning happened. Confidence grew from 3/5 to 5/5 across manual runs. It stayed flat at 4/5 for every CNC run.

The Correlation That Matters

Perhaps the most telling finding: in conventional machining, the more effort the participant invested — measured by cutting time, measurement time, and perceived nuisance — the better the parts came out. The Pearson correlations between these factors and part quality all exceeded 0.9.
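For readers unfamiliar with the metric: Pearson's r measures linear association between two variables on a scale from −1 to 1, with values above 0.9 indicating a very strong positive relationship. A minimal sketch of the computation, using hypothetical effort and quality numbers (the paper's raw per-run data are not reproduced here):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values for four runs (illustrative only, not the study's data):
effort = [3.1, 3.8, 4.4, 5.0]       # e.g. combined cutting + measurement time
quality = [0.72, 0.81, 0.90, 0.97]  # e.g. normalized dimensional accuracy
print(pearson_r(effort, quality))   # a value close to 1 for near-linear data
```

With only four production runs per condition, correlations this strong are suggestive rather than conclusive, which the authors' case-study framing acknowledges.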

For CNC machining, no statistically significant correlations between effort and quality were found. The human’s engagement level simply mattered less to the outcome.

Sound Familiar?

If you replace “conventional machining” with “writing code by hand” and “CNC machining” with “using an AI coding agent,” the parallels are hard to ignore.

When developers write code manually, they’re deeply engaged in every decision. They reason about architecture, debug line by line, and build intuition through repeated struggle. Each iteration of a solution teaches them something. It’s harder and slower, but it builds the kind of foundational understanding that proves invaluable when things break.

When developers delegate to coding agents, execution is faster and more consistent, but the human’s role narrows to supervision, evaluation, and high-level direction — much like the CNC machinist who spends 38% of their time watching a screen versus 5% in conventional mode. The strategy stabilizes quickly. There’s less iteration, less improvisation, and less learning-by-doing.

The study found one more detail worth noting: during CNC production, the participant used the sound of the cutting process to adjust parameters — a skill they had developed through manual machining experience. Experienced developers who learned their craft by hand bring a similar intuition to evaluating AI-generated code. They can sense when something is off, even in an automated workflow. But that intuition had to be built somewhere.

The Takeaway

The researchers concluded that conventional machining remains essential for workforce training because it promotes deeper engagement, richer strategy development, and more iterative learning. The machining industry, despite all its automation, hasn’t abandoned manual training — and there’s a good reason for that.

The software engineering community is in the middle of its own version of this transition. AI coding agents are here, they’re powerful, and they’re only getting better. But this study from a seemingly unrelated field offers a useful reminder: the hard, manual, sometimes tedious process of doing things yourself isn’t just an inefficiency to be optimized away. It’s where expertise is forged.

The question isn’t whether to adopt AI agents — it’s how to ensure that the people using them have enough hands-on experience to know when the machine is wrong.


Reference: Jarosz, K., Chen, Y.-T., & Liu, R. (2023). Investigating the differences in human behavior between conventional machining and CNC machining for future workforce development: A case study. Journal of Manufacturing Processes, 96, 176–192. https://doi.org/10.1016/j.jmapro.2023.04.037