Is software de-skilling programmers?

One of the themes of “The Great Forgetting,” my essay in the new issue of The Atlantic, is the spread of de-skilling into the professional work force. Through the nineteenth and twentieth centuries, the mechanization of industry led to the de-skilling of many manual trades, turning craftsmen into machine operators. As software automates intellectual labor, there are signs that a similar trend is now reaching white-collar workers, from accountants to lawyers.

Software writers themselves don’t seem immune from the new de-skilling wave. The longtime Google programmer Vivek Haldar, responding to my essay on his personal blog, writes of the danger of de-skilling inherent in modern integrated development environments (IDEs) like Eclipse and Visual Studio. IDEs automate many routine coding tasks, and as they’ve grown more sophisticated they’ve taken on higher-level tasks like restructuring, or “refactoring,” code:

Modern IDEs are getting “helpful” enough that at times I feel like an IDE operator rather than a programmer. They have support for advanced refactoring. Linters can now tell you about design issues and code smells. The behavior all these tools encourage is not “think deeply about your code and write it carefully”, but “just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.”
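
To make that concrete for readers who don’t program, here is a rough sketch, in Python and with invented names, of the sort of mechanical transformation an IDE’s refactoring tools can now apply in a keystroke: an “extract method” pass that tidies up a sloppy first draft.

    # A sloppy first draft: one function mixes validation, arithmetic, and formatting.
    def report(prices):
        total = 0
        for p in prices:
            if p < 0:
                raise ValueError("negative price")
            total += p
        return "Total: $" + str(round(total, 2))

    # After an automated "extract method" refactoring, the summing logic gets its
    # own named function; the tool makes the change, not the programmer.
    def sum_valid_prices(prices):
        total = 0
        for p in prices:
            if p < 0:
                raise ValueError("negative price")
            total += p
        return total

    def report_refactored(prices):
        return "Total: $" + str(round(sum_valid_prices(prices), 2))

    print(report_refactored([1.50, 2.25, 3.00]))  # prints: Total: $6.75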

Haldar is not dismissing the benefits of IDEs, which, he argues, can lead to “a cleaner codebase” as well as greater productivity. His comments point to the essential tension that has always characterized technological de-skilling: the very real benefits of labor-saving technology come at the cost of a loss of human talent. The hard challenge is knowing where to draw the line—or just realizing that there is a line to be drawn.

Photo by Nathan Bergey.

19 thoughts on “Is software de-skilling programmers?”

  1. Raj

    Well, the intent behind some of these IDE-like environments is to help model more and more complex problems. For example, back in the day when semiconductor chips had only a few transistors, designers used to lay them out by hand (I kid you not), but as the transistor count exploded, you now have automated tools taking care of all the low-level routing and interconnect work, so that designers can focus on more complex system-level and network-level problems. Same with programming. In this context, I do not consider it de-skilling. It is more like thinking at higher layers of abstraction, so that you can better model the complexity of whatever it is you are trying to solve.

    Raj

  2. Anon

    The only person concerned with this is the programmer, who is merely a labor-saving device for an employer who needs the code for a business purpose of his or her own. The employer doesn’t care if you use an IDE, IEEE, ABC, or XYZ. They just want the code done. So really it’s all a matter of perspective.

  3. Lisa Spangenberg

    I am not a programmer. While I can generally parse well-written and well-organized code, I do so much as I would read a complicated book in a language in which I am not fluent.

    From the point at which I first became involved in hiring programmers and software engineers some twenty or so years ago, I made it a habit to ask potential hires to walk me through code they had written, explaining what each section did, and I asked to see code they had commented.

    Many could do neither.

    I didn’t hire them.

  4. Premek

    While I agree with the “higher abstraction level” argument, the point of the post is very clear and real — I should first master a technique like refactoring using “just my head and hands” in order to understand its assumptions, its workings, and its effects; only then can I fully appreciate the increased efficiency and quality brought by the tools and use them as their master.

  5. Robert Impey

    I’m not sure that this is a problem. In the end, it’s the capability of the software that is an asset. Having to write it is a cost. If a computer can complete a task as well as or better than the most skilled programmers, we can all move forward.

    I’ve come across old-school developers who refuse to write unit tests. They regard them as a safety net for morons who can’t predict what their code will do. Unsurprisingly, these developers get lots of practice debugging their code and end up being very skilled at it. For my part, I prefer writing tests and having the computer confirm that my code is running correctly rather than relying solely on my skill at reasoning about code in advance and having to spend a lot of my time honing my debugging skills.
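
    (For illustration, a minimal sketch of such a safety net, using Python’s built-in unittest; the function under test is invented for the example.)

        import unittest

        def word_count(text):
            # An invented function, standing in for whatever code is under test.
            return len(text.split())

        class WordCountTest(unittest.TestCase):
            def test_simple_sentence(self):
                self.assertEqual(word_count("the cat sat"), 3)

            def test_empty_string(self):
                # The computer, not my intuition, confirms the edge case.
                self.assertEqual(word_count(""), 0)

        if __name__ == "__main__":
            unittest.main()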

  6. Thomas

    Hi,

    I completely agree with the author. Another area where I have noticed a great deal of de-skilling is when people learn programming the object-oriented way from the start. Since they never had to think about doing complex things with only data and functions (C, ASM), they did not develop the skill of analyzing problems down to the deepest possible level. I noticed it when I had to work with such programmers on DSP code. It was very difficult for them to understand the more abstract code, since it is further away from real-world, object-based thinking.

    Thomas

  7. Rob

    Thomas’ comment above is interesting, in that it reminds me of two things:

    Firstly, people have been saying for a long time that “programmers these days” don’t understand how the underlying systems work. This is a fairly well-known meme by now.

    Secondly, it’s mostly bollocks. Programming is still a field where solving a problem just opens up more problems to solve – you figured out how to make networking easy enough that anyone can write networked apps, now you need to figure out how to handle the load this is putting on your servers, and so on.

    Also, IDEs haven’t actually got all that much more powerful in the last 30 years. I do most of my coding these days using Emacs, a piece of software several years older than I am. We’re still digesting ideas Alan Kay had years ago, and there’s enough of a backlog to keep us going for a long time yet.

    Finally, most of the problems in software development aren’t really problems to do with churning out “code”; they’re about understanding the problem domain and selecting the appropriate technologies to solve whatever problem you’re given. IDEs are a very, very long way off being able to do anything more than help humans overcome drudge-work in those areas.

    I think it’s interesting to wonder whether the sheer scope of the applications of programming (pun intended) means that de-skilling is unlikely, because each advance opens up more new opportunities than it removes, at least for the foreseeable future. The programmers who find their skills (knowledge of how to do the drudge work that IDEs can remove) made redundant are also the people best placed to take advantage of the new opportunities, because just understanding how computer systems work is vital, and they have that skill. I must, of course, admit the possibility of wishful thinking on my part, but this seems plausible at the very least.

  8. R.Carey

    I see it.

    The tool was meant to increase productivity. The craftsman can leverage it for great things; meanwhile the newcomer can use it for quick entry and not feel the need to become a craftsman. Before these tools (from IDEs to digital cameras), one had to exert disciplined practice to become skilled. These advanced tools make it easy to be “good enough.”

    Ironically, the tools that should allow us to go deeper and do more instead allow us to spend less time doing the same. Imagine what a motivated and disciplined person could do if he pushed on to become a craftsman while leveraging the tools. (Too bad there is no tool to instill motivation and discipline.)

  9. Nick Post author

    Thanks for the thoughtful comments.

    re: “each advance opens up more new opportunities than it removes”

    This is a common defense of automation in pretty much all fields. (See the quote from Alfred North Whitehead in my Atlantic piece.) And it’s a good defense, as it’s very often true. But there are also a couple of counterarguments:

    1. At some point, as the capabilities of automation software advance, the software aid begins to take over essential tasks – sensing, analysis, diagnosis, judgment making – and the human shifts to more routine functions such as input and monitoring. In other words, with the automation of skilled work there’s a point at which there are no “higher-level tasks” for the human to climb to.

    2. Lower-level tasks may be seen as mere drudge work by the experienced expert, but they can actually be essential to the development of rich expertise by a person learning the trade. Automation can, in other words, benefit the master, but harm the apprentice (as R. Carey suggests in the preceding comment). And in some cases even the master begins to experience skill loss by not practicing the “lower level” tasks. (This has been seen among veteran pilots depending on autopilot systems, for example.)

    Not being a programmer myself, I would be interested in hearing whether you think either of those factors applies to coding.

    Nick

  10. Walter Hehl

    I think this is a general consequence of the development of science and technology: moving to higher levels (in software, in abstraction – in general, hiding the complexity from the non-professional). This implies, unfortunately, that fewer and fewer people “understand” what is inside – and everything becomes magic. That makes it hard for them to distinguish between reality and fantasy, science and pseudoscience.

  11. Raj Karamchedu

    Nick: 2. Lower-level tasks may be seen as mere drudge work by the experienced expert, but they can actually be essential to the development of rich expertise by a person learning the trade.

    It is not that. The so-called “lower-level” tasks in programming teach you something, but not a whole lot. The “lower-level” tasks in programming are, in my book, different from the so-called “lower-level” tasks in other manual labor. In other sorts of labor and work, you continue to enjoy the work and derive satisfaction from it whether the task is low-level or not. But in programming it is different. You don’t learn anything after a while. It’s roughly like this: you learn the alphabet of the language, learn how to construct sentences, learn to read fiction and poetry, and now some of us don’t exactly enjoy going back to writing exercise sentences from the grammar book, do we? We want to write sentences, paragraphs, and poems of our own, dense with meaning and what not. We are exercising the complexity of the language, but are we not also moving to higher levels of abstraction? How is this any different?

    Raj

  12. Rob

    Nick, since you asked, I do have a theory about this! To answer your points directly first though:

    1. …with the automation of skilled work there’s a point at which there are no “higher-level tasks” for the human to climb to…

    On a long enough timeline, the survival rate for all professions must decline to zero. But I’m not convinced that software development is advancing particularly quickly towards this point, and there are counteracting forces which are creating new higher-level tasks almost as fast as the lower ones can be eliminated.

    2. Lower-level tasks may be seen as mere drudge work by the experienced expert, but they can actually be essential to the development of rich expertise by a person learning the trade.

    I do recognise this, and I’ve sometimes found it hard to train people junior to myself because they don’t know about some of the things that were taught to me as being fundamental. However, I’ve concluded that in many cases they just don’t need that knowledge. A good mindset and a grounding in good software design principles is much more durable than “how-to” knowledge. It’s also the case that IDEs and other tools really don’t transform the software development process as much as Vivek Haldar’s quote suggests – it’s still about typing stuff into a text editor and checking to see that the end result is what you thought it would be. We only really need the kind of tools Haldar mentions because some programming languages are painfully verbose and “refactoring” code in these languages is just incredibly boring and low-value as a way of spending one’s time. One could easily achieve the same effect by adopting better languages, but nobody would call that ‘de-skilling’.

    My grand theory to explain why de-skilling isn’t happening is that software development is a world in which systems have very clear levels of abstraction, and in which it is a strongly-held principle that the lower levels of a system should not become visible to the higher levels (there are exceptions, of course, but the general principle holds). Systems which fail to adhere to this principle are described as “leaky abstractions”, which is very much a pejorative. To print some text to the screen, it is not necessary to know how the computer does this, only to know that there is a reliable mechanism for doing so, and you can compose your own abstractions on top of this (for example, printing whole documents). Software developers learn that building software is a process of trying to get the messy, lower-level stuff to behave in a regular, predictable way so that we can reason about it using more tractable high-level concepts.
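
    (A toy sketch of that layering, in Python with invented names: each layer uses only the interface of the layer below it, and knows nothing about how that layer does its job.)

        # Layer 0: the built-in print() hides the OS, terminal driver, and encoding.

        # Layer 1: a line-printing abstraction built on print(); it neither knows
        # nor cares how print() reaches the screen.
        def print_line(text, width=40):
            print(text.ljust(width))

        # Layer 2: a "document" abstraction built on layer 1; it never touches
        # print() directly at all.
        def print_document(title, paragraphs):
            print_line(title.upper())
            print_line("-" * len(title))
            for paragraph in paragraphs:
                print_line(paragraph)

        print_document("Hello, world", ["Written many layers above the bare metal."])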

    Even the most basic program such as “Hello, world” operates at many levels above the “bare metal”, abstracting away questions of physics, electronics, the von Neumann architecture (memory, CPUs and so forth), encoding of text into binary form, loading of executable code into memory, operating systems, video output and many other things. And the canonical “Hello, world” was written in C, which is nowadays considered to be low on the ladder of abstraction.

    Thus a modern developer creating web applications is standing on the shoulders of giants, and those giants are standing on particularly tall mountains which jut forth from a world which sits astride four enormous elephants and a, if not quite infinite, still quite considerable, stack of turtles. Our intuition (expressed by Thomas) is that this shouldn’t work, and that the incredible ignorance that web developers have of the vital matters of memory management, interrupt request handlers, multi-threading and concurrency, combined with their inability to whip out a debugger and start combing through a hex dump of their application’s memory, should render them thoroughly incapable of useful work, but this turns out to be quite wrong. It’s shaped by our experience with the world of physical stuff, where different rules apply.

    For every bit of complexity that gets taken away from developers, a near-equal amount is made available at higher levels of abstraction. Modern developers don’t need to think so much about how to tightly manage the resources of a single machine, and probably wouldn’t be capable of doing so, but they often have to worry about how to manage the resources of whole networks of machines working together, something that the older generation worked hard to make possible. It’s also true that they don’t have to know about low-level resource management, and this is a good thing.

    In my view, the reason this works is that software development really does occur in a “virtual reality” in which the abstraction between different levels of a system is fairly rigorously policed and systems are explicitly constructed with this goal in mind. Computer systems are designed so that it’s OK if you don’t know what’s going on at the lower level most of the time (I appreciate that this is not always the case, but the cases where lower-level concerns predominate are rare and can be tackled by specialists). You’re building stuff that works out of lego bricks of code, and so long as that code behaves as advertised you really don’t need to know how it works. You’re never really harmed by knowing more (although it can sometimes cause premature optimisation, as you spend too much time mucking about in the lower levels) and in general it’s a good thing to be aware of what sits beneath your code, but it’s unfeasible and unnecessary to expect someone to know, say, more than three levels of abstraction either side of their normal area of work[1].

    This contrasts with ordinary reality, where the abstractions are more likely to be leaky, and ignorance of the lower levels of reality can be deeply problematic. A lot of academic disagreement (in particular anything which is described as “humanities vs. science”[2]) is actually a disagreement about whether or not a lower level of abstraction is important at higher levels, either because it constrains the set of valid theories at higher levels (say, in the way that the laws of thermodynamics constrain theories of climate, or prove perpetual motion impossible) or because the lower level is theorised to have some kind of emergent property which actively shapes the higher level subject (say, the attempts to find microfoundations for macroeconomics, the role of evolution in psychology or the relevance of neuroscience to sociology). Ignorance of the lower levels can leave you out in the cold if it turns out that your theories conflict with more robust theories that they’re supposed to rest upon. On a more prosaic level, we might assume that our bathrooms present abstractions over systems of plumbing, but those often turn out to be leaky in both senses of the word and we’re in trouble if we don’t know something about how they work.

    Thus de-skilling in real-world scenarios is problematic for the people being de-skilled, because the knowledge they give up is more valuable than the capabilities and opportunities they gain in exchange. It would be risky to build a nuclear power station without knowing everything about how it works down to the level of physics, and once the knowledge is lost it’s very hard to get it back again. In contrast, you can build Instagram without necessarily knowing much at all about electronics or even operating systems[3]. Of course, at an individual level it can still be a problem if your particular technical specialism ends up being made obsolete, but three factors work in your favour here: the new stuff is often built on top of the old stuff that you already know inside-out; secondly, the new stuff is genuinely new so there’s not a lot of competition for jobs and you’ve already proven your abilities by working in the field; finally, there’s a fractal nature to computer systems design which means that the patterns we employ at higher levels of abstraction are pretty similar to the patterns that occur at the lower levels. For the price of a bit of discomfort and some time spent with a book and computer terminal, you can gain enough useful knowledge to offset the losses you suffer from the obsolescence of your old knowledge.

    In saying all of this, I’m making software engineering sound a lot more impressive than it is. From the inside, these clean divisions between abstraction layers don’t look quite so perfect, and the general consensus is that the industry has failed, big-time, to deliver on the promise of composable software. Such complaints have existed for a long time, and are often made by the most respected people in the field. In this view, the last 40 years has been a huge missed opportunity to make programming easier (if you only click one of the preceding links, make it that one, but be prepared to spend 40 minutes watching the video). I wouldn’t disagree with any of the specific examples, but I think that the fact that software engineers care so much about this in the first place, and that they feel entitled to care about it, leads to systems that are much closer to the ideal of perfect abstraction than would be possible in other circumstances.

    Will this process of new capabilities compensating for sunk-cost knowledge come to an end? I suppose it might. According to Marc Andreessen’s “software is eating the world” theory, we might guess that the industry can only really expand by cannibalizing external reality, taking more stuff from the real world and putting it into computers where it becomes part of the domain of the software engineers, fuel for another round of development, and there’s a limit to how much of the world remains to be eaten. Maybe we’ll create computers that can write better software than we can, but I wouldn’t hold my breath waiting for that (about 15 years ago I was told that going into software development was a mistake because this kind of AI was imminent; this turned out to be bad advice!). All told, I can’t see de-skilling as a serious problem, but then I’ve just quoted someone making a bold prediction about the future who was probably just as sure in his view as I am in mine.

    Well, that turned out a bit longer than I planned. Sadly, text editors can’t yet tell me how to make my comments more concise.

    [1] The term “full-stack developer” has been used to describe people who have supposedly complete knowledge of the systems they work with but, as that link shows, this really just means knowledge of things a few levels above and below what most of their peers know.

    [2] In the real world, we’re generally not constructing abstractions, we’re reconciling them, trying to figure out the missing layers between thermodynamics and climatology, or linguistics and poetry. Sometimes scientists get a bit over-excited and think that their new abstraction layer provides the key to understanding something that the humanities has already provided an account of, and a disagreement ensues; generally the scientists should be a bit more humble. Equally, sometimes the humanists deny the desirability or even the possibility of integrating lower-level knowledge with higher-level knowledge, dismissing it as “reductionism”. You can see Steven Pinker and Leon Wieseltier arguing past each other on this subject here.

    [3] You’ll need to know, say, how to use and administer a Linux server. But you won’t need to know your way around the Linux kernel; the iOS devices your app code is running on also do a fairly good job of shielding you from OS internals, by design.

  13. Micha

    At the moment I am doing an apprenticeship as a software developer in Germany. What we learn is, in most cases, how to use Visual Studio. Except for PHP and SQL, we have never built anything without VS. This surely makes it easier to write applications, but I feel like I do not really learn anything in school.
    Creating the whole GUI is just clicking, dragging, and resizing. I have no real idea what is in the code behind.
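
    (For illustration: roughly what a drag-and-drop designer writes for you behind the scenes. This is a hand-written sketch in Python/tkinter, not the C# code-behind that Visual Studio actually generates, but the idea is the same: every click, drag, and resize becomes a line of code somewhere.)

        import tkinter as tk

        # What the designer hides: create widgets, set positions and sizes,
        # wire up event handlers.
        root = tk.Tk()
        root.title("Form1")
        root.geometry("300x120")

        label = tk.Label(root, text="Name:")
        label.place(x=10, y=12)

        entry = tk.Entry(root, width=25)
        entry.place(x=60, y=10)

        def on_click():
            label.config(text="Hello, " + entry.get())

        button = tk.Button(root, text="OK", command=on_click)
        button.place(x=10, y=50)

        root.mainloop()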

  14. Walter Hehl

    This is a great chain of comments, in particular from you, Rob:
    No problem with the length of your text.
    Thanks for the “full-stack developer” notion – I think this is a good description of a real SW professional! Of course, some center of knowledge is natural.
    Two historical remarks:
    First, I remember IBM’s early telco programs, “BTAM” and its predecessors: every aspect of the system was handled in ONE program – line protocols, character translations, line errors, seat reservations, buffer overflows, what have you. Later, layers of functions and expertise became defined, leading to TCAM, SNA, etc. At least the chief programmer was full-stack (though he did not explicitly know it), and later the communication architects were.

    Second, a smart measure of the developer’s work:
    I remember the IBM measure of a developer’s work, “Function Points” (FP).
    This is the number of non-trivial decisions made during programming.
    I suppose that the number of FP per person-year is a constant for a good developer (it has been for the last 40 years) – maybe by definition. It is just that the decisions are now made at a higher level…

    It is impossible to overestimate these lessons for society in general: SW is THE model discipline for handling complexity everywhere – often directly, as other fields converge with software, sometimes only “philosophically” or by analogy.
    Take medicine as an example: we need a “full-stack medicine” and a “full-stack doctor” (with a center of knowledge).
    Full-stack knowledge “anchors” the upper methods to the basics and stabilizes the whole system. Otherwise you get a flood of pseudo-medicines (alas, we have them)… And we know that hormones from deep below in our body’s stack can cause a psychic storm…

    I believe that people in general can learn a lot from our experience in SW engineering and the shift in skills – SW systems are gigantic, but they must work (yes!), and even perform! And if not, it is visible.

  15. Nick Post author

    Rob,

    Thanks.

    You write: “Systems which fail to adhere to this principle are described as ‘leaky abstractions’, which is very much a pejorative.” The link is to an old Joel Spolsky post. But Spolsky’s point is that all abstractions, or at least all “nontrivial” abstractions, are to some degree leaky, and hence the less you know about the lower levels of abstraction, the more trouble you’ll be in when those levels begin to leak into the level you’re working at. Spolsky writes:

    “The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying ‘learn how to do it manually first, then use the wizzy tool to save time.’ Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.”

    He seems to be suggesting that there is indeed a real danger of deskilling if one learns on and depends on coding tools that make it unnecessary (at least until problems arise) to understand what’s going on beneath the abstraction. That also implies that the more levels of abstraction you have, the more rickety the whole system becomes, even though most of the time it presents a very good illusion of stability.
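
    (To make the “leak” concrete, a contrived sketch in Python: a convenience wrapper that pretends a remote page is just a string, until the network misbehaves and the lower layer bleeds through to a caller that was promised it could ignore networking.)

        from urllib.request import urlopen

        def fetch_text(url):
            # Pretend the network isn't there: "just give me the page as a string."
            with urlopen(url, timeout=5) as response:
                return response.read().decode("utf-8", errors="replace")

        # The abstraction holds right up until the layer below misbehaves: no
        # connection, a slow server, a DNS failure. Then the caller is suddenly
        # handling networking details it thought it could ignore.
        try:
            page = fetch_text("http://example.com/")
            print(len(page), "characters fetched")
        except OSError as err:  # URLError, timeouts, and DNS errors all surface here
            print("The abstraction leaked:", err)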

    Also: as Micha’s comment suggests, there’s an existential angle here, too. Even if everything works fine, the coder’s sense of not really understanding what’s going on behind the veneer of automation can lessen the fulfillment gained from the work. I think this was also one of Haldar’s points when he described feeling like an “IDE operator.”

  16. Nick Post author

    By the way (this is a trivial but perhaps telling example), in order to put that link to Spolsky’s piece into my prior comment (it was stripped out in cutting and pasting), I had to enter the “a href” tag by hand. I could do that because when I started blogging eight years ago, the tool I used required all tags to be handcoded. If I started blogging today using WordPress, which automates the composition of tags (and entirely hides the tags from view in normal circumstances), I would likely not have known how to enter that link. That simple task would have been baffling to me.

  17. Max

    Nick, I don’t agree with the point of the post. The main difference, in my opinion (which other folks have already alluded to), is that in programming, as the simple things are abstracted away, the higher-order problems that were previously impossible become the requirements for the next project. This is radically different from piloting an aircraft. The domain of piloting is finite, because the end goal does not change — you need to get a plane from point A to point B. So as the software becomes more sophisticated, eventually the entire thing could be automated. With software development, on the other hand, the goals continue to become more complex. Programs have progressed from being able to do only simple arithmetic to modern-day search engines. I assure you that Google would not be possible without higher-level-of-abstraction languages and tools (lots of which they built themselves, by the way). So far, the trend seems to continue.

    Also, on the subject of abstractions being leaky — yes, they are. And if sufficient care is not applied, the system does become rickety. Good software is distinguished from bad in part by its ability to deal with failures in the lower levels of abstraction. As I type this comment, there are lots of layers between my browser and roughtype.com — the browser, the OS, the network drivers, the hardware of my PC, to name a few. If I yank the network cable from my machine, the “perfect network” abstraction the browser relies on breaks down, but the machine does not shut down, the OS does not bluescreen, the browser does not crash — they are all able to deal with the situation… Is it simple? No (depending on the application, dealing with various error conditions can be 50–100% of the code written for the main logic path). Is it doable? Yes. The proof, as usual, is in the pudding. As I am sure you know, the world around us is increasingly run by computers, and it does not appear to become more rickety as time goes on and complexity increases. Ergo, in the aggregate, the leaky abstractions are not able to destabilize the whole system. :)
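
    (A minimal sketch of what that defensive code looks like in practice, in Python; the retry count and fallback text are invented for the example. The higher layer assumes the lower one will sometimes fail, and degrades gracefully instead of crashing.)

        import time
        from urllib.request import urlopen

        def fetch_with_fallback(url, retries=3, fallback="(content unavailable)"):
            # Try the network a few times; degrade gracefully rather than crash.
            for attempt in range(retries):
                try:
                    with urlopen(url, timeout=5) as response:
                        return response.read().decode("utf-8", errors="replace")
                except OSError:
                    time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s...
            return fallback  # the error path that can dwarf the happy path

        print(fetch_with_fallback("http://example.com/")[:60])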
