The Evolution and Industrialisation of Software Development

An examination of the evolution of software development in the context of the evolution of previous technological advances.

Although we're talking about software development, a subject at the forefront of modern technology, we need to start by looking further into the past. To understand the evolution of software development, it helps greatly to understand how other technologies we now take for granted evolved.

At a high level there is an observable common pattern. A technology begins as a niche that requires specialised skills; over time, its production becomes ever more subdivided and specialised, while its use becomes more widely adopted and deskilled.

Let me illustrate with some examples, starting with the most primitive of technologies: fire. If we cast our minds back to the point where mankind was able to create fire, rather than rely on finding it in nature, we see that starting a fire was a complex and time-consuming affair. It required specialist knowledge and materials, such as the use of flint, or a bow drill and straw. However, over time, this process evolved to the point where creating fire is now as simple as a flick of the thumb. For the average person to be able to own a lighter, specialised knowledge has been required across several domains, from the design of the object all the way through to mass production.

A more recent example of this is the ability to navigate. First carried out through celestial familiarity, the act of navigation required specialist knowledge: the ability to read a map, the position and movement of the stars, and the use of specialist equipment such as a sextant. Today, anyone can navigate simply by dictating their destination to Google, Siri, or Alexa and then following an arrow on a screen. At the same time, the device that makes this possible required a huge amount of specialised knowledge across several domains: cartography, electronic engineering, astronautics (don't forget the satellites used for GPS), and of course, software.

Hopefully, this illustrates a recurring pattern in the evolution of technology throughout human history. Over time, adoption of a technology increases as the skill set required to use it shrinks. At the same time, creating the products for mass use demands ever more knowledge and skill, concentrated in ever fewer, more specialised people.

For the avoidance of any doubt, I am very much in favour of this evolution of technology. Without the deskilling and wider adoption of technology, we would be unable to advance further and build upon the technology that came before.


You don’t have to look very hard to see the same patterns emerging in software development. Even within the span of my own career, a mere 20 years, an exponential increase in specialisation is easily visible.

As late as the early 2000s it was easy to find development roles, even in relatively large organisations, that required the developer not only to write the code, but also to test it and support it in production (building and deploying the code was a given). This is still common in small companies and start-ups, but I'm going to ignore that; in a start-up it's common for the same person to do the marketing and accounts too. One of the earliest specialisations to emerge was the separation of writing the code from testing and quality analysis, and then from subsequent support. Even today, most organisations that grow beyond a handful of developers will evolve a similar structure fairly quickly (the wider adoption of agile methodologies and automated testing is slightly changing that now, but that's a topic that merits an article of its own).

💡
We have developed the language to describe and talk about different roles; the distinctions between a developer, a tester, and support staff are universally understood

Most importantly though, as a society, we have developed the language to describe and talk about these different roles. The distinctions between a developer, a tester, and support staff are universally understood.

Of course, this is only the start of the specialisation of software development. The developer's responsibility has reduced further still as we introduced operations and saw the subsequent emergence of roles such as DevOps Engineers and Site Reliability Engineers (SREs). From the other side of the spectrum, business or functional analysts (roles that later evolved into product owners, product managers, and UX designers) now define what needs building. Importantly, again, the evolution of these roles brings with it changes to our language and a common understanding of what each role brings to the table. Even so, in all honesty, the difference between a DevOps Engineer and an SRE is a lot less clear to most people than the difference between a software developer and a software tester. I certainly wish that more recruiters and consultancies actually knew the difference.

You may argue that these are not aspects of software development but merely skills complementary to the act of software creation. Even if we accept this (though I'd counter that all these tasks were carried out by people categorised as software developers at one point), within the confines of software development there are still specialisations, the most obvious being UI/front-end developers versus server-side developers. Server-side development is itself often broken down into specialities such as real-time, low-latency, big data, or high-performance computing.

Hopefully, the above examples illustrate, without much scope for doubt, that software development as a technology is evolving in much the same way as other technologies have in the past: through increased specialisation. With that in mind, I'd like to switch focus to the other side of the evolutionary path, which may be more difficult to accept: deskilling.


There are some examples of this that I hope are easy to see and accept. Let's take the #NoCode movement. Ten years ago, if I wanted a website, knowledge of HTML (and maybe CSS and JavaScript too) and of web servers, in order to create and deploy the site, was non-negotiable. There were limited choices; you either paid someone with the knowledge to build one for you or you acquired the knowledge yourself. Nowadays, if I want a website, there are numerous #NoCode solutions available that would let me generate a website, accept payments, integrate social media, and even manage stock, all without knowing how to spell HTML, let alone write a line of code.

Again, for the avoidance of any doubt, I think that deskilling is a great progression and highly important for the further advancement of technology. I also think that examples like this one of the evolution of software development are easy to accept. The level of knowledge required to create a website has been transformed from a closed, specialist skillset to one open to anyone who is computer literate. Importantly, it is also evident that the user of a #NoCode product is not a software developer. That makes this example of the deskilling of software development easy to accept (for software developers).

If we are to compare software development to carpentry (humour me please), initially the only way to create a chest of drawers (website) was to be a skilled carpenter and learn to cut and join wood. You can still do that today of course, or you could buy something from Ikea and put it together yourself without knowing the first thing about using a chisel.

In terms of the corresponding evolution of our language, we do not identify users of #NoCode products as developers, any more than you'd identify yourself as a carpenter having assembled your new Ikea wardrobe.

This isn't the only way in which software development is being deskilled. A fundamental principle in software development is the reuse of existing code where possible. It makes no sense to write the same code again when it is far more efficient to reuse it. This perfectly rational approach leads logically to the creation of libraries of software to be reused across different projects, products, and applications. Not only does this increase the speed at which software can be developed (by not having to write the same code again), it also means that the developer can specialise further and doesn't need to understand, in any detail, the functionality provided by the library.
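
As a minimal sketch of what this looks like in practice (my own illustration; the URL is just a placeholder), consider fetching a web page in Java. The standard library's HttpClient hides sockets, TLS handshakes, redirects, and HTTP framing entirely:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchExample {
    public static void main(String[] args) throws Exception {
        // The library handles connection management, TLS, and protocol
        // details on the caller's behalf.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com")) // placeholder URL
                .build();

        // One call replaces what was once pages of socket-handling code.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```

The developer who calls this API needs no idea how TCP connections are established or how certificates are validated; that knowledge now lives with the far smaller group of specialists who maintain the library.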

Let me try to illustrate this with a (somewhat contrived) example. Imagine two developers, let's call them Dilbert and Alice, both of whom are creating microservices running on Kubernetes, serving a REST API for use by a customer-facing website like Amazon, eBay, Etsy, or Mom & Pop Store.

For simplicity, let's even stipulate that both Dilbert and Alice write Java code. Dilbert is quick to pick up new patterns and can write a new microservice simply by throwing together a Spring Boot project and adding the appropriate controllers and service classes. He is very efficient at using existing libraries, knows the APIs, and can therefore rapidly create new applications built in this way.
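
To make that concrete, here is a minimal sketch of the kind of endpoint Dilbert might throw together (the GreetingService class, the /greeting path, and the Greeting record are invented for illustration, and the usual spring-boot-starter-web dependency is assumed):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// A complete, runnable microservice: the framework supplies the embedded
// web server, the threading model, the routing, and the JSON serialisation.
@SpringBootApplication
@RestController
public class GreetingService {

    public static void main(String[] args) {
        SpringApplication.run(GreetingService.class, args);
    }

    // GET /greeting?name=World -> {"message":"Hello, World"}
    @GetMapping("/greeting")
    public Greeting greeting(@RequestParam(defaultValue = "World") String name) {
        return new Greeting("Hello, " + name);
    }

    // Jackson (bundled with Spring Boot's web starter) turns this into JSON.
    public record Greeting(String message) {}
}
```

A few annotations give Dilbert a running web service; everything beneath them is somebody else's specialised knowledge.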

Alice is also familiar with Spring Boot. In addition, however, she has a working knowledge of networking, the Linux operating system, containers, and Helm, an understanding of the limitations of Spring Boot, and she knows how to profile and tune the JVM… you get the idea.
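
As a flavour of the beneath-the-framework knowledge Alice draws on, here is a sketch (again my own illustration, not a technique the scenario prescribes) using the JVM's standard management API to inspect heap usage and garbage-collection activity at runtime:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmStats {
    public static void main(String[] args) {
        // Heap usage as reported by the JVM's own memory manager.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used: %d MiB of %d MiB committed%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20);

        // Per-collector counts and accumulated pause time.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Figures like these are where an investigation starts when a service is mysteriously slow or running out of memory, which is precisely the territory in which the two developers diverge.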

Both Dilbert and Alice are capable of working with their product owner and building a new API to deliver a new feature for the product. They even have CVs that look broadly similar. However, the similarities end there. While Dilbert would quickly be out of his depth if the requirements were more complex, or if he suddenly had to troubleshoot a client unable to connect to the service, Alice would probably be quite capable of moving beyond the initial requirement and getting her hands dirty with the underlying code in the libraries and tools.

The increased prevalence of development tools, libraries, and frameworks coupled with plentiful cheap (sometimes free) online learning resources has led to a much-needed growth in the number of developers closer in profile to Dilbert than Alice.

If we return to our carpenter analogy, Alice would be a carpenter, but Dilbert is certainly not just someone putting together Ikea furniture. Perhaps Dilbert could be an Ikea Hacker? I admit my analogy may be a little tenuous here, but overall, I believe the principle is sound.

The problem here is that we refer to both Dilbert and Alice as developers. We have not evolved to a point where we have language that differentiates them. You might think this is just a question of the level of experience or ability, but I have come across this distinction many times, from new graduate hires to seasoned developers with decades of experience. Let me change the scenario just a little: what if Alice were a developer on the Spring Boot framework itself?

Today we still refer to both as software developers, coders, software engineers, and other synonyms. Just for fun, let's call the kind of role Dilbert performs Software Assembly and refer to Alice's role as Software Engineering (trying here to mirror the distinction between technicians and engineers in other engineering disciplines). Perhaps the phrase software developer can serve as an umbrella term that covers both.

To provide another analogy: when building a house, we don't employ a team of civil engineers or architects. We engage the services of a builder, who in turn no doubt employs several other trades. The builder would never be able to build that house if it wasn't for the input of the civil engineers who provide the calculations for the safe sizing of materials and spans, nor without the input of the architect who provides the designs. It's also worth remembering that you'd be no closer to building the house with only a team of civil engineers or architects; most wouldn't know one end of a trowel from the other. The builder then uses prefabricated items (be they bricks, lumber, steelwork, or electrical cable) to construct the house. Software is slowly but surely evolving along the same pattern, but in our current state we still require the civil engineers and architects to put the house together.


💡
The lack of differentiative language for these skills leads to numerous problems

The lack of differentiative language for these skills leads to numerous problems. I'll try to illustrate again with some examples. When a developer sets out to create a new library, whether they explicitly think about it or not, they write that code with a target audience of other software developers in mind, i.e. other people with the same skill set as themselves. Contrast that with the developers of tools like Wix and Squarespace: the developers of those products set out to create something for use by people with no ability to code. That was easy, because we have language that distinguishes users from developers. Without equivalent language for software assemblers, the developers of tools, libraries, and frameworks have no way of deliberately designing for them; they can only build for "developers" like themselves.

Understanding that we have another category of user, the software assembler, would allow the development of better tools that specifically target and cater for this sector. Because we cannot yet express this, what we usually end up with instead are leaky abstractions and the need to understand what happens beneath the covers. Better tools would remove the need for most companies to hire software developers at all; they would require only software assembly. Put another way, recognising this effect would accelerate it.

If we accept my hypothesis, there are significant implications for how we build software in the future, from the hiring and composition of the right teams, to career progression and professional growth, and even to the education required (or expected) for certain jobs.

For example, we have reached a point where a university degree is not a prerequisite for a career in software development. However, I imagine that further down this evolutionary path this will diverge: a degree will once again become necessary for software engineering (as opposed to assembly), and those degrees will become more specialised, quite likely with an increased mathematics component.


One of the questions I set out to answer in writing this, a question I speak to many of my clients about regularly and one that comes up on forums and social media too, is why it is getting progressively harder to hire good developers. The answer is, of course, complex and multi-faceted; my previous article tried to provide one reason in the context of investment banking. I think this process of software industrialisation is also part of the answer. The reason it's getting harder to hire good developers is that we as developers are trying to make the job of developers easier, and rightly so. As a result, new developers coming into the industry are no longer forced to acquire the same level of experience or knowledge to be productive and useful.

Alongside this, there has been a massive rise in learning material available to developers that essentially teaches people how to use new frameworks, libraries, and tools. However, there is a complete dearth of material that teaches people how to actually innovate and be creative (I've previously talked about the importance of creativity here), or anything on the fundamentals of what happens when software hits the metal: networking, memory management and allocation, CPU utilisation. Couple this with a desire to always hire software engineers and not assemblers, even when the more appropriate hire would be an assembler, and we find ourselves in our current predicament.

When viewed in this context, it's little surprise that hiring good developers is becoming harder, and this trend will only continue. That is partly a good thing, but understanding what's happening, and understanding the different roles in software development, will not only make it possible to hire better teams, but will also lead to the creation of better software engineers and better software assemblers.