Since the dawn of the mainframe era in the 1950s through to the two most recent computing platform transitions – mobile and cloud – Software 1.0 has remained at the heart of the technology stack, helping create one of the largest markets in the world, dominated by companies such as Microsoft, Apple, and Alphabet.
Historically, the cost of computing power and the limited number of software developers were the main obstacles to broader digitisation. However, the emergence of Software 2.0 has radically redefined what software can achieve, collapsing the costs of building highly customised intelligent applications and, in turn, unlocking a market poised to expand more than tenfold over the coming two decades.
The transition from ‘Software 1.0’ to ‘Software 2.0’ represents a paradigm shift in the way software is developed and deployed, moving away from traditional structured programming towards machine learning and neural networks.
The disruptive consequences are hard to overstate: the entire computing stack is being reinvented to focus on building artificial intelligence (AI) as opposed to traditional software, which has significant ramifications for software, hardware and data processing companies.
Software 1.0 is what we all interact with on a daily basis, whether that be through Microsoft Word or Excel, Salesforce’s CRM portal, or Workday’s HR management system.
Here, humans write explicit code – thousands and thousands of lines – to instruct the computer how to act in every given situation (also known as ‘deterministic software’).
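To make the contrast concrete, here is a purely illustrative sketch of Software 1.0 logic (the expense-approval rules and function below are hypothetical, not drawn from any particular product): a human developer spells out every branch of behaviour in advance.

```python
# Software 1.0: a hypothetical, hand-written rule for approving an expense claim.
# Every branch of behaviour is explicitly coded by a human developer.

def approve_expense(amount: float, category: str, has_receipt: bool) -> bool:
    """Deterministic logic: the same inputs always produce the same answer."""
    if not has_receipt:
        return False
    if category == "travel" and amount <= 500:
        return True
    if category == "meals" and amount <= 50:
        return True
    return False

print(approve_expense(320.0, "travel", True))  # True
print(approve_expense(75.0, "meals", True))    # False: the hand-written rule caps meals at 50
```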
The underlying unit of compute for Software 1.0 is the CPU (the central processing unit, sold predominantly by Intel and AMD); the most entrenched operating system is, of course, Microsoft Windows.
Over the past decade, as every company became a software company and ‘software as a service’ business models became popularised, these incumbents have been fantastic investments. Customers have been locked into their ecosystems, revenue has been sticky and competition has struggled to break down the walled gardens of Software 1.0.
That is now changing. Traditional enterprise software companies are being challenged for the very first time by a cohort of companies built on Software 2.0 from the outset.
Software 2.0, a concept first introduced by Andrej Karpathy in 2017, is driven by machine learning, with an AI model infused into the software.
This type of software is capable of deciding the best course of action by itself: large datasets define desirable behaviour and neural network architectures provide the skeleton of the software code, with the model weights determined through the machine learning process.
Programming is done through high-level instructions or by providing examples, and the system automatically translates instructions into executable code or model behaviours.
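By contrast, a minimal Software 2.0 sketch of the same hypothetical expense-approval task might look like this: labelled examples define the desired behaviour, a simple model provides the skeleton, and training determines the weights (the data, features and learning rate below are invented purely for illustration).

```python
# Software 2.0: instead of hand-writing rules, we show the system labelled examples
# and let training determine the model weights. A minimal sketch in plain NumPy;
# the dataset and feature choices are hypothetical.

import numpy as np

# Hypothetical past examples: [amount (in 100s), has_receipt] -> approved (1) or rejected (0)
X = np.array([[3.2, 1], [0.5, 1], [7.5, 1], [2.0, 0], [0.4, 1], [6.0, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)

# The "skeleton" of the software: a logistic model whose weights start out unknown.
w = np.zeros(X.shape[1])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: gradient descent adjusts the weights to match the desired behaviour.
for _ in range(2000):
    p = sigmoid(X @ w + b)           # current predictions
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss with respect to the weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The learned weights now encode the behaviour; no human wrote the decision rules.
new_claim = np.array([1.0, 1])       # a new, unseen example
print(f"Approval probability: {sigmoid(new_claim @ w + b):.2f}")
```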
The underlying unit of compute here is the GPU (the graphics processing unit, sold predominantly by Nvidia), which is needed to accelerate computations and enable real-time processing of complex tasks that were previously impractical; the de facto operating system is also Nvidia’s.
Why does this matter?
Software 2.0 challengers are offering superior products at a fraction of the cost of Software 1.0 incumbents. This comes at a time when chief information officers (CIOs) and company executives are scrutinising their IT budgets in order to invest in AI and inject productivity across their businesses.
Software 2.0 is built on accelerated computing – a GPU-based architectural innovation pioneered by Nvidia – which is 100x faster and 98% cheaper than traditional compute based on CPU architectures. You cannot run AI on traditional compute. As a result, these Software 2.0 challengers are dramatically undercutting the price points of legacy software providers.
The best way to contextualise this price differential is to think of the cost of Software 2.0 as mirroring the cost of inference (i.e. AI deployment). The cost of inference fell by 95% in 2024, driven by OpenAI’s model progress, and we are already seeing tangible cost reductions in 2025 aided by DeepSeek’s innovations. As the cost of inference continues to plummet, so too does the cost of building AI software applications.
Much ink has been spilled over recent weeks about potential disruption to the AI infrastructure and hardware layer as AI models become more cost efficient, but we believe the most potent disruption occurs to what is built on top of these models – barriers to building software are disintegrating at an eye-watering pace.
How much better is Software 2.0 versus Software 1.0?
It turns out, a lot. While the costs of Software 2.0 are collapsing in line with inference costs, its capabilities are simultaneously improving in line with model reasoning capabilities.
In the first half of 2024, AI was capable of automating 20% of what a human software engineer could achieve (measured by SWE-bench – a benchmark of the tasks the average human software engineer performs).
By the end of 2024, this moved up to 50% with the launch of OpenAI’s o1 reasoning model. With OpenAI’s newly launched o3 model, this moves to 73%. This means agentic software is more capable – AI agents can now accomplish around three quarters of these tasks, and we expect this to reach 90% by the end of the year.
As Software 2.0 continues to reshape the industry, the competitive landscape for enterprise software is evolving at an unprecedented pace. The ability to harness AI-driven software is quickly becoming a differentiator between companies that adapt and those left behind.
While the opportunities are immense, so too are the risks – investors must carefully assess which businesses are positioned to benefit from this shift and which may struggle to keep up. In this environment, the decision over what not to hold can be as important as what to hold.
Storm Uru and Clare Pleydell-Bouverie are fund managers in the Liontrust Global Innovation team. The views expressed above should not be taken as investment advice.