---
title: >-
  Deskilling is my biggest fear about AI use (that isn't continued environmental
  and economic degradation).
date: 2026-04-22T20:23:20.278Z
slug: 2026-04-22-deskilling
author: Thomas Wilson-Cook
tags:
  - AI
  - essay
---

Let's put aside the fact that AI is definitely a bubble ([paywall link](https://www.wheresyoured.at/premium-how-much-of-the-ai-bubble-is-real/)). And also that the energy and physical infrastructure required to build the systems we're being told we absolutely need *is* going to harm us ([link](https://www.theguardian.com/environment/2025/apr/09/big-tech-datacentres-water)). Let's set aside that despite reducing reliable access to clean water and electricity, the companies sucking up the water and power don't need to record and report *any* kind of information about e.g. energy use that would let anyone guess at that impact *before* it happens ([pdf](https://arxiv.org/pdf/2511.17179)). Let's *definitely* set aside that the ultra-wealthy companies promising to build unfathomable warehouses for computers that last either six years ([if you like AI](https://siliconangle.com/2025/11/22/resetting-gpu-depreciation-ai-factories-bend-dont-break-useful-life-assumptions/)) or three-to-four years ([if you're more honest](https://www.techstories.co/what-happens-when-ai-hardware-dies/)) will continue to accumulate wealth if (when) the bubble pops, and someone else can figure out what to do with the buildings.

Let's set all that aside.

In the last six months I have seen the output from generative AI systems (namely Claude) act in ways that I would happily describe as "competent". I've seen it do that in a lot of contexts, like designing system architecture, reviewing code, or generating and improving documentation to a level that can match the quality of people I have worked with. I don't think the output beats any individual person across *all* metrics (e.g. it won't out-write a good technical writer), and I think the quality of output is correlated with the quality of input, plus some dose of randomness. Despite all that, it does a remarkably good job a remarkable amount of the time.

This is a) because AI companies have singled out my profession (that of building software) as one to optimise for, and b) because the artefacts produced by my job (code, documentation, etc.) are perfect fodder for the underlying large language model technology.[^1]

But even before this recent industry focus on software development — could it be the money pile is threatening to run out? — we've known that generative AI tools can help software developers feel more productive. A study from 2023 ([pdf](https://arxiv.org/pdf/2303.08733)[^2]) analysed posts on Stack Overflow and GitHub, and found that developers felt generative AI was helping them write code faster. A study from 2025 ([link](https://www.sciencedirect.com/science/article/pii/S0950584925000904)[^3]) based on eighteen interviews with software-related professionals draws the conclusion that

> contemporary GenAI systems extend beyond mere code-assistance, offering additional support for assisting in architectural design decisions or other conceptualization tasks and improved code analysis, enabled by natural language interaction

Let's keep on the research train for a minute. In a paper from May 2025, researchers analysed survey responses (700 responses from a pool of ~12,000 developers at IBM) from developers who were using the company's in-house generative AI coding assistant ([pdf](https://dl.acm.org/doi/pdf/10.1145/3706599.3706670)[^4]).

In this IBM paper, buried under the second-to-last sub-heading, they mention (sandwiched between some other findings) that several respondents were worried about losing their own abilities. How many said this? Don't worry about it.

The quote they cite from a respondent is: "I suspect we’re all going to get a lot stupider, doing a worse job of maintaining larger amounts of worse code". I adore the long-running tradition of software folks who stay pessimistic about sort of everything - whether that comes from anarchism or egalitarianism, or just an edge-lord-y tendency. The researchers' response:

> technological acceptance takes time, and new technologies have a learning curve before people can use them effectively

_**fart noises**_, big thumbs down.

Scared that things are going to get worse? Get with the times, dummy! We're building the future, get your loser ass in here, let me tell you (a software developer) about technological acceptance[^5].

I think it's important we consider the research findings because journalists, or the companies that hire people to write things that resemble journalism, keep showing us they can't be trusted to act in any kind of normal way about AI. They can't be chill.

2025 was declared the "year of the AI agents". In [this IEEE piece](https://spectrum.ieee.org/2025-year-of-ai-agents) they interrogate whether 2025 *was* the year of the agent, and tell us "the answer is yes, absolutely—or no, not at all. It depends on who you ask".

*fart noises intensify*. Super helpful, thanks IEEE!

The nuance of their argument (don't read their piece) is "people *want* agents to exist". Yeah, I'm sure people have a lot of interest in replacing people who have to be paid and also sleep and take weekends off with machines that can do everything and never rest. That would be cool. Does that make the answer "yes and no"? No, it makes the answer no. It was a year where business people imagined how good the world could be.

2025 was, at most, the year of imagining magic computers, increased C-suite profits, and mass layoffs for everyone else. Cool!

Look, sorry, I was making a point about how actually I'm one of the chill ones.

The research gives us a (slightly) more impartial, reasoned assessment of what's happening with AI tooling and software development — corporate-funded studies aside. There are more barriers to publishing strong findings, and more social consequences for lying.

All this is to say that I am concerned that using AI will fundamentally de-skill the individual practitioners of our craft - and I'm not seeing that concern taken seriously. I have seen this referred to in the literature as *skill decay*, *deskilling*, or simply *loss of skills*. For now, I just mean a general decrease in one's professional skills.

I see the exact reasons that people praise AI — quick code generation, navigation of existing codebases and existing conventions — as the same pathways to de-skilling. This makes me worried, because now the solution to the problem becomes a human one. That means, realistically, it's down to the organisation to decide their own culture. And I just don't have faith that's going to work. Remember when we all realised that agile delivery patterns are objectively more efficient than waterfall, and we stopped doing waterfall everywhere?

*Remember?* Remember how we could say to the executives and customers "sorry we can't give you a firm timeline for delivery of what you're asking for, but we'd love to ship you 10% of what you think you need to see how that would actually work in practice" and they went "well, that's sensible, it *would* be pretty risky to only put this in my hands after it's 100% perfect! And also, if you need to deliver a bit faster so that you can respond to unforeseeable disasters, that probably *is* better for the long-term health of the whole team's output".

Now imagine you're a software developer in the real world again, writing code with your best friend (who is a predictive language model). Why consider the syntax or tools available to you, if there's ghost text[^6] that appears in your editor within a second of you pausing your typing to think? Why consider the appropriate place, abstraction, or boundary for the code you want to add if it just gets added for you? Why think about the highest value test cases to add if they're all added equally in one fell swoop as you tell your robot friend 'write tests for this method'?

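To make that last point concrete, here's a hypothetical sketch (the function and tests are invented for this post): ask an assistant to 'write tests for this method' and you tend to get a pile of interchangeable happy-path cases, while the one case that needed actual thought goes unasked.

```python
import pytest

# A hypothetical discount function, invented for this example.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# The blanket suite an assistant tends to generate: many cases,
# all exercising the same happy path.
def test_discount_10_percent():
    assert apply_discount(100.0, 10) == pytest.approx(90.0)

def test_discount_25_percent():
    assert apply_discount(100.0, 25) == pytest.approx(75.0)

def test_discount_50_percent():
    assert apply_discount(100.0, 50) == pytest.approx(50.0)

# The high-value case a human pauses over: what *should* happen past
# the boundary? As written, the function happily pays the customer.
def test_discount_over_100_percent():
    assert apply_discount(100.0, 150) == pytest.approx(-50.0)
```

The first three tests pass and tell you almost nothing; the last one also passes, which is exactly the problem.
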
I worry that AI takes away the "messy middle" of doing work, as well as the opportunity to reflect on work.

I also have a second thought about AI taking the aesthetics of expertise without the nuance of *doing* expertise. AI is a confident speaker, but I have seen it struggle with longer reasoning, despite what companies and people will tell you about their "advanced reasoning" models. I have seen it outright ignore evidence during a reasoning chain, and I have seen it acknowledge evidence and then disregard it. Both this month, both with frontier coding-optimised models.

The tone of the output reads exactly the same, regardless of whether the model has ingested enough context to provide an informed opinion. The opinion *presents* itself as informed. And what am I going to do, double-check everything? What would be the point in having the AI?

The problem is there's a difference between a parallel rewrite of existing functionality and a subtle evolution of existing code. Both achieve the same outcome in the short term. Novice software technicians fall into this trap a lot early in their careers.

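Here's a hypothetical illustration of that difference (names invented for this post). The outputs are identical today; only one of them leaves the codebase with a single source of truth.

```python
# Existing code: the one place that knows how to normalise a username.
def normalise_username(name: str) -> str:
    return name.strip().lower()

# Subtle evolution: extend the existing function, so every caller
# picks up the new rule.
#
#   def normalise_username(name: str) -> str:
#       return name.strip().lower().replace(" ", "_")

# Parallel rewrite: what an assistant often produces instead, a
# near-duplicate scoped to the one call site it was looking at.
def normalise_username_for_signup(name: str) -> str:
    return name.strip().lower().replace(" ", "_")
```

Both versions behave the same for signup today; the divergence only costs you later, when the rule changes in one place and not the other.
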
The AI is taking the aesthetics of expertise, like technical wording and justifications, and suggesting things that are, on balance, wrong.

Not only are *we* the ones who live with the consequences of these kinds of mistakes — e.g. unruly code, higher AI bills as the required token context gets larger — but over time we'll stop noticing them. We won't have opinions because we won't have the experience to inform them. It'll all be code that looks the same from a distance. Something either won't smell *off*, or we won't be given the chance to sniff the air around the code, because we're just letting the agent rip, while we orchestrate twelve other agents and cling desperately to the job we managed to not get laid off from (for now).

How long until there's a whole generation of developers who haven't been given the chance to develop the skills that then *get* decayed?

I don't think of myself as someone who's anti-generative AI. Despite the destruction of economic, environmental, and intellectual property systems. I lead a team of software engineers and we're finding our way through the evolving social norms that come with adopting it day-to-day, just like any other tool. Thank goodness that IBM researcher explained technology adoption to me.

I encourage my colleagues to use it where appropriate, and I let people know when (and how) I'm finding it useful so that we're not all fumbling in the dark.

As part of my PhD in education I did a lot of work on formative assessment ([wikipedia](https://en.wikipedia.org/wiki/Formative_assessment)) - assessments which aren't graded, but aim to give learners the chance to reflect on their own proficiencies and processes. There's a lot of research dating back decades that shows us how improving metacognition — one's knowledge of one's own thinking or knowledge — is correlated with expertise and performance. Formative assessment is a great tool to help improve metacognition.

The problem is it's hard, and people sort of resist doing it. It can feel pointless, especially for novices. Perhaps because it can feel separate to the "main" activity of doing. I'm not paid to metacognate[^7] - I'm paid to build software.

Also I think most people don't know the word *metacognition*, but I think most people who get skilled at something will have an intuitive understanding of it. At the very least you've met someone with poor metacognition: someone who thinks they actually learn better if they watch TV and do it at the same time.

To my mind the increased pace of delivery from generative AI isn't going to free up more time for reflection and evaluation. It's going to be seen as a tool to deliver feature after feature, even though we know that maintenance of existing software accounts for far more complexity than delivering new features. And we're going to be stopped from *feeling* that source of complexity, but it's not going to just go away.

To my previous point, I don't think that we have the cultural norms to fight against this, and I don't think the people who sell us AI have any incentive to help.

It's a complicated, fascinating profession - and I don't think predictive language models will change that. No matter how elaborately you string them together, or how much you subsidise their cost to the end user.

---

[^1]: David Pierce did a recent (April 2026) piece on this ([link](https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic)), writing "AI coding seems like the first truly mainstream AI use case — not to mention the first potentially great AI business". Personally I don't think any business model that only works so long as you offer the customer a product at half the price it costs you to manufacture it ([link](https://sacra-pdfs.s3.us-east-2.amazonaws.com/anthropic.pdf)) can be called "potentially great", but Pierce makes a good point about why a company could disproportionately care about software.

[^2]: Zhang, Beiqi, et al. "Practices and challenges of using GitHub Copilot: An empirical study." arXiv preprint arXiv:2303.08733 (2023).

[^3]: Banh, Leonardo, Florian Holldack, and Gero Strobel. "Copiloting the future: How generative AI transforms software engineering." Information and Software Technology 183 (2025): 107751.

[^4]: Weisz, Justin D., et al. "Examining the use and impact of an AI code assistant on developer productivity and experience in the enterprise." Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 2025.

[^5]: Also, 64% of respondents in this IBM survey said they wanted to use the in-house generative AI coding tool because of a "responsibility to try IBM products", and 60% because their managers expected them to use it. You'd think if something was *really* good, there'd be other reasons to use it.

[^6]: The AI-generated suggested text that appears at e.g. 50% opacity in your text editor, that you normally accept by hitting TAB.

[^7]: sorry to my non-native English-speaking readers - I tried to make a verb of the process "to do metacognition", and *metacognate* felt right. This isn't a real word; if you use it without supervision you could actually be in danger.