As spring 2021 kicked in, it got me thinking about how underlying beliefs and assumptions shape our thinking, and how even these need to be reviewed every once in a while. Hence, I thought I would undergo some ‘mental’ spring cleaning. I remember that whilst at university, sadly well over 10 years ago now, I learnt about three laws: Moore’s Law, Nielsen’s Law and Kryder’s Law.

All three have been key to the underlying architecture that has spurred on the march of progress in the digital age, and they have certainly informed my thinking when considering potential digital products and services.

Rather than simply revisit these laws and recite, with awe, the orders of magnitude that have been achieved, I decided to apply first-principles thinking. I did this to understand the limitations of where we are going, discover any blind spots in my own thinking and, hopefully, provide some inspiration as to what to expect over the next five to ten years.

In doing so, I’ve shared my observations and thoughts below, which I hope will, at a minimum, provide some food for thought…

First Principles

Before we touch on the three laws, let’s recap what first-principles thinking is. A first principle is:

a basic proposition or assumption that cannot be deduced from any other proposition or assumption

https://en.wikipedia.org/wiki/First_principle

This approach originated in the Aristotelian school of philosophy and is essentially the application of a set of questions in order to determine the root cause of a problem. A modern-day Lean management equivalent, which you can apply quickly, is the 5 Whys.

Although they are referred to as laws, only the laws of physics are unbreakable. As humans we have a tendency to conflate patterns with laws, and consequently we can get a little ahead of ourselves when making predictions. By applying this approach I am going to try to mitigate, as far as possible, any fallacies in my own thinking.

Now that I’ve cleared that up, let’s focus on the three laws, starting with ‘the daddy’ of them all – at least in the tech world – Moore’s Law.

Moore’s Law

It was first identified by Gordon Moore, the co-founder of Intel, in 1965 and refined in 1975. He observed that the number of transistors on an affordable CPU doubles roughly every two years, and predicted that this trend would continue. This doubling leads to exponential growth in processing power: if you spend $1 today, in roughly two years’ time that same $1 will buy you a CPU with twice as many transistors, which is often loosely described as the chip being twice as fast.
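
To put the compounding into perspective, here is a minimal, purely illustrative sketch in Python. It assumes a 1971 baseline of roughly 2,300 transistors (the Intel 4004) and a strict two-year doubling cadence; real chips have not followed the curve this neatly, so treat the output as a feel for the exponent rather than a history lesson.

```python
# Illustrative only: projected transistor counts under a strict
# "doubling every two years" reading of Moore's Law, starting from a
# notional 1971 baseline of ~2,300 transistors (roughly the Intel 4004).
BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300

def moores_law_estimate(year: int) -> int:
    doublings = (year - BASE_YEAR) / 2
    return int(BASE_TRANSISTORS * 2 ** doublings)

for year in (1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{moores_law_estimate(year):,} transistors")
```

Run it and the 2021 figure lands in the tens of billions, which is the right order of magnitude for today’s largest chips; the point being that the growth is exponential, not linear.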

This doubling explains why the cost barriers of the 1960s meant that computers were only owned by large multinational organisations, governments or universities, and why today computer chips are embedded in, or offered within, devices as part of the Internet of Everything: smartphones, smartwatches, cars, speakers, lights and so on.

What are the limitations?

However, applying first-principles thinking demonstrates that this trend is not going to last forever. The primary bottleneck is that we are unable to make transistors smaller than an atom. Presently, the smallest chips are built on a 7 nanometre (nm) process, with plans to build 5nm and 3nm chips in the next couple of years. For context, 5nm is roughly the size of a single haemoglobin molecule, while atoms, depending on their size, measure a fraction of 1nm.

As outlined in this article from MIT Technology Review, Jim Keller, Intel’s head of engineering, is bullish, arguing that there are more than 100 variables which could keep the law alive, from 3D architectures to new transistor designs, or software developers simply being more efficient with the hardware they are given.

The article goes on to describe how a research team led by Neil Thompson, an economist based at MIT’s AI and computer science lab, was able to improve the execution time of a program written in Python from seven hours down to 0.41 seconds. This was achieved by switching to C and configuring the software to fully utilise the 18 cores on the CPU.

Keeping with the theme of first principles, you may be asking why C outperforms Python. In layman’s terms, it comes down to how the computer turns the code from text into binary and performs the necessary calculations to return a result. Python is dynamically typed and interpreted: it is easier for a programmer to write, but it carries far more computational overhead at runtime. C is harder to write, but it compiles down to instructions the CPU can execute directly. I concede, from a purist standpoint, that optimising the software on a CPU which hasn’t doubled in capacity still provides benefits, though it wouldn’t strictly be in keeping with Moore’s original statement.
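
To make the interpreter-overhead point concrete, here is a small, purely illustrative sketch (not the MIT team’s actual benchmark): the same matrix multiplication written first as plain Python loops, where every multiply-add passes through the interpreter, and then handed to NumPy, which delegates the arithmetic to compiled code.

```python
import random
import time

import numpy as np

N = 200  # kept small so the pure-Python version finishes in a few seconds

def matmul_python(a, b):
    """Naive matrix multiply: every operation goes through the interpreter."""
    n = len(a)
    result = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                result[i][j] += aik * b[k][j]
    return result

a = [[random.random() for _ in range(N)] for _ in range(N)]
b = [[random.random() for _ in range(N)] for _ in range(N)]

start = time.perf_counter()
matmul_python(a, b)
print(f"interpreted Python loops: {time.perf_counter() - start:.3f} s")

# Same arithmetic, delegated to compiled, vectorised routines (BLAS).
a_np, b_np = np.array(a), np.array(b)
start = time.perf_counter()
a_np @ b_np
print(f"compiled backend (NumPy): {time.perf_counter() - start:.3f} s")
```

On a typical laptop the gap is already two to three orders of magnitude, and that is before adding the parallelism across cores that the MIT team exploited.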

Thompson, however, warns that this also signals that the benefits of Moore’s Law may no longer be enjoyed at a general level. He cites the explosion of interest in deep learning and AI applications, which favour GPUs because of their ability to handle parallel operations, as well as the ASIC chips used for cryptocurrency mining. This supports the argument that there will be a greater focus on specialising software and chip architectures towards solving specific business problems, wherever the most money resides.

This hardware conundrum has led researchers in the AI field to consider using analogue ‘black box’ solutions or a hybrid approach as a way of overcoming this hurdle.

Is it the end of the line?

In short, we may not yet be at the end of Moore’s Law, but taking into account that there are fewer manufacturers of high-end chips, and that those producers are experiencing delays in industrialising new production methods, it would appear that we are reaching the maturity of the existing silicon-based technology S-curve.

Interestingly, Ray Kurzweil identified Moore’s Law as the fifth paradigm of computation to provide exponential growth since 1890. Kurzweil tracked the progress in performance across mechanical calculating devices, Alan Turing’s relay-based ‘Robinson’ machines, vacuum tubes, transistor-based machines and the integrated circuits which have made Intel and Gordon Moore famous. The essay was written in 2001 and goes on to predict that Moore’s Law would come to an end in 2019.

So, if Moore’s Law is part of a bigger trend, what are the candidates to continue this trend and provide a sixth paradigm?

Over the last couple of years there has been growing noise around developments in quantum computing. For all the hype, though, it is still in an R&D phase, and even with a significant breakthrough it is unlikely to be available for general adoption anytime soon, as it will require new approaches to writing software and algorithms in order to exploit the benefits of quantum entanglement.

Graphene, the wonder material first isolated in 2004, has made progress in the last seventeen years: it features in sports clothing and equipment, in barrier coatings used on the hulls of ships and, in the world of tech, in the Huawei X10 phone as part of its cooling system. However, production at a scale that could compete with silicon is still some way off, and existing methods suffer from quality issues, even when buying the same product from the same company.

So rather than revolution should we be looking for evolution?

If so, the front runner to complement silicon, and over time potentially replace it, is the compound semiconductor. These can combine up to four elements and have the potential to be 100 times faster than silicon, to operate at a lower voltage and to be more resistant to heat. What sways my opinion is that they are already in use in 5G-enabled phones and domestic lasers (CD, DVD and Blu-ray players), and if you have a wireless mouse it is more than likely to house one too.

The reality is that silicon is unlikely to disappear anytime soon. Hence, I would argue that we are in a transitional state between two technology S-curves, silicon and compound semiconductors, and that we will know by 2025 whether compound semiconductors are the heir apparent.

Nielsen’s Law

Similar to Moore’s Law, Nielsen’s Law is concerned with bandwidth, stating that a high-end user’s connection speed grows by 50% annually – a trend that can be demonstrated over the last 36 years. The key phrase here is ‘high-end users’, whereas Moore’s Law has been for the masses. The reason this growth in bandwidth doesn’t automatically benefit the average user is that if you buy a computer that is twice as fast, your software runs twice as fast, whereas if you get a modem that is twice as big, web pages do not load twice as fast.

The speed of the internet is a function of the individual user’s connectivity as well as the infrastructure and bandwidth of the content provider. Hence, why pay a premium if it doesn’t provide a discernible benefit until two or three years later, when the mainstream catches up?

Essentially this is a ‘build it and they will come’ problem. Once the infrastructure is in place, developers will design services that maximise the available capacity, content providers will utilise them and mainstream users will want to consume them. A couple of years ago I was thankful to be able to stream SD video; now, with a high-speed connection and a UHD monitor, my expectation is to stream video in 4K.

Where are we today?

To appreciate where we are globally, I stumbled across the Speedtest Global Index, which provides a brilliant overview and data set to aid our understanding of global broadband speeds.

As of March 2021, the global average fibre broadband speed is 97.52 Mbps; this has grown by 29% in the last 12 months. This rate of growth needs to be considered in the wider economic context of the period, factoring in the impact of COVID-19 on global supply chains and the national lockdowns that were imposed.

However, combining data from this site with worldpopulationreview.com, we can see that only 32% of the world’s population live in markets with access to speeds above this average; the remaining 68% do not.

Interestingly, given the greater need to work from home over the last twelve months, 62% of the world’s population live in countries whose average level of broadband connectivity (>50 Mbps) can enable this capability. It should be noted that these values are country or city-state averages; it doesn’t mean that all of the 4.7 billion people living in these geographies can afford the service. If we take a conservative view and assume 60% can achieve these speeds, this equates to 2.8 billion people, or 37% of the global population.

Applying a 50% annualised growth rate to this data set, and not factoring in forecast population growth, we can expect 83% of the world’s population to have access to average broadband speeds of greater than 55 Mbps by 2024, with the world average hitting gigabit speeds by 2028.
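
As a sanity check on that gigabit claim, here is a rough, illustrative projection that simply compounds the March 2021 global average at Nielsen’s 50% a year. It ignores population growth, affordability and the fact that real growth rates vary by market and by year.

```python
# Rough projection of the global average broadband speed, compounding the
# March 2021 figure at Nielsen's 50% annual growth rate. Illustrative only.
AVERAGE_2021_MBPS = 97.52
ANNUAL_GROWTH = 0.50
GIGABIT_MBPS = 1_000

speed = AVERAGE_2021_MBPS
for year in range(2021, 2030):
    note = "  <- gigabit average" if speed >= GIGABIT_MBPS else ""
    print(f"{year}: {speed:7.1f} Mbps{note}")
    speed *= 1 + ANNUAL_GROWTH
```

On these assumptions the average crosses the gigabit threshold in 2027, comfortably inside the ‘by 2028’ claim; slow the growth to the 29% actually observed over the last year and it slips to the early 2030s.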

This may appear optimistic, as it assumes fibre broadband being rolled out across the globe, which is notoriously expensive to deploy. However, 5G is already being rolled out in 30% of the world’s countries. It promises download speeds of up to 20 Gbps and upload speeds of 10 Gbps, and by the end of 2020 it had already demonstrated download speeds greater than 500 Mbps in the UAE.

Also in contention to extend the global coverage of the internet is SpaceX’s Starlink project, which plans to put circa 30,000 satellites into orbit and has already launched 10% of that number, promising download speeds of 300 Mbps by the end of 2021. This new entrant will disrupt global broadband markets and, once fully operational, will likely claim the big investment banks and trading floors as customers.

Why? Simply because light travels faster through a vacuum than it does through the glass of a fibre-optic cable.

The estimated round-trip time of a data packet from London to New York over undersea cables is 80 milliseconds; Starlink could move the same data packet in an estimated 43 milliseconds. THIS IS A BIG DEAL. Not only will every time-sensitive industry on the planet pay a premium to ensure that when they make a decision it is actioned straight away, but London to New York is a relatively short trip in data-packet terms; the longer the distance, London to Singapore for example, the greater the benefit.
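
The physics can be sketched in a few lines. The figures below are assumptions for illustration: a great-circle distance of roughly 5,600 km between London and New York, light at about 300,000 km/s in a vacuum and roughly two-thirds of that in optical fibre. Real routes are longer and add switching and handover overhead on top of these theoretical floors.

```python
# Back-of-the-envelope round-trip latency floor for London <-> New York.
DISTANCE_KM = 5_600                    # approximate great-circle distance
C_VACUUM_KM_S = 299_792                # speed of light in a vacuum
C_FIBRE_KM_S = C_VACUUM_KM_S * 2 / 3   # light slows to ~2/3 c in glass fibre

def round_trip_ms(distance_km: float, speed_km_s: float) -> float:
    """Time for a signal to travel there and back, in milliseconds."""
    return 2 * distance_km / speed_km_s * 1_000

print(f"fibre floor:  {round_trip_ms(DISTANCE_KM, C_FIBRE_KM_S):.1f} ms")
print(f"vacuum floor: {round_trip_ms(DISTANCE_KM, C_VACUUM_KM_S):.1f} ms")
```

That gap of roughly 19 ms between the two theoretical floors, before any routing overhead, is the physical advantage that the 80 ms versus 43 ms comparison is built on.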

What are the limitations?

Within the known laws of physics the maximum data transfer rate is referred to as the maximum entropy flux. I openly admit that I don’t fully understand the mathematics that underpin this law, but have provided the links for more learned individuals to attempt to decode it. Essentially, what I take away from this is that we are nowhere near these limitations.

However, the real bottleneck we are likely to encounter in the near term is energy consumption and heat. Essentially, as we build bigger and faster networks they consume more power, though there are existing research programmes looking to address these issues, the use of spin lasers being one example.

What the combination of these existing technologies (fibre, 5G and satellites) does demonstrate is that gigabit internet speeds across the globe are a real possibility before 2030.

This level of connectivity alone will enable new business models and more immersive experiences, both in collaborative work settings and socially, supported by tools such as haptic interfaces and augmented reality. It is also likely to have a significant impact on economic migration, education, real estate prices… the list goes on.

Kryder’s Law

This is the youngest of the three laws, coined in 2005 by Mark Kryder, then CTO of Seagate, who observed that magnetic-disk areal storage density was increasing at a rate exceeding that of Moore’s Law. This led Kryder to predict that by 2020 a 40TB disk drive would cost about $40.

Obviously, this hasn’t been achieved. As of December 2020 Seagate had started to ship 20TB HDDs, with forecasts of a 50TB model to be made available in 2026. The nearest comparable deal for the $40 price point is 5TB of cloud storage per annum.
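
To see how far short reality fell, here is a quick, illustrative calculation. The 2005 baseline of roughly 0.5TB for a flagship desktop drive is my assumption; the 40TB prediction and the 20TB drives shipping in 2020 are the figures above.

```python
# Rough comparison of the growth rate Kryder's prediction implied versus the
# rate actually achieved. The 2005 baseline capacity is an assumption.
BASELINE_2005_TB = 0.5     # assumed flagship HDD capacity in 2005
PREDICTED_2020_TB = 40     # Kryder's prediction for 2020
ACTUAL_2020_TB = 20        # what Seagate was shipping by the end of 2020
YEARS = 2020 - 2005

implied = (PREDICTED_2020_TB / BASELINE_2005_TB) ** (1 / YEARS) - 1
achieved = (ACTUAL_2020_TB / BASELINE_2005_TB) ** (1 / YEARS) - 1

print(f"annual growth the prediction implied: {implied:.1%}")
print(f"annual growth actually achieved:      {achieved:.1%}")
```

On that assumed baseline the prediction needed roughly 34% compound growth in drive capacity per year; the industry managed closer to 28%. Respectable, but not Kryder’s curve.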

What are the limitations?

Within the laws of physics, the Bekenstein bound is the upper limit on the information that can be contained within a finite region of space holding a finite amount of energy. More pressing for HDDs since Kryder’s prediction, however, has been how fast the platters can spin: the faster they spin, the more energy they consume and the more heat they generate, which impacts the longevity of the drive.
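
For a sense of just how far away that physical limit is, here is a small sketch of the Bekenstein bound, I ≤ 2πcRm / (ħ ln 2) bits, for a region of radius R metres containing mass m kilograms. The example radius and mass, chosen to be roughly drive-sized, are purely illustrative.

```python
import math

# Bekenstein bound on the information content of a region of radius R (m)
# containing mass m (kg): I <= 2 * pi * c * R * m / (hbar * ln 2) bits.
C = 299_792_458            # speed of light, m/s
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    return 2 * math.pi * C * radius_m * mass_kg / (HBAR * math.log(2))

# Illustrative figures, roughly the size and mass of a small drive.
limit_bits = bekenstein_bound_bits(radius_m=0.05, mass_kg=0.1)
drive_bits = 20e12 * 8  # a 20TB drive, in bits

print(f"Bekenstein bound: ~{limit_bits:.1e} bits")
print(f"20TB drive:       ~{drive_bits:.1e} bits")
```

The bound comes out at around 10^41 bits against roughly 10^14 bits for a 20TB drive, which is why the practical constraints of heat and cost bite long before physics does.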

Since 2005, SSDs have become a more desirable storage medium due to their smaller form factor, higher speed and lower energy consumption, all of which enabled the rise of mobile computing following the launch of the iPhone in 2007. The main blocker to mainstream SSD adoption over HDD has been cost: presently they are around twice as expensive as HDDs per gigabyte, though prices are trending downwards.

Looking at when Kryder made his prediction, it was arguably towards the end of the HDD technological S-curve. From a commercial perspective, consumers have since had more storage options, with cloud storage baked into the price of existing software packages (OneDrive, Dropbox, iCloud etc.).

Combine this with greater connectivity and the Pareto principle (80% of the time you will use 20% of the applications on your device) and it would appear that storage volume is not as important as speed of access, set against the fixed limits of human attention spans. Hence, a hypothesis could be that market demand has largely been satisfied, leaving less urgency for R&D to achieve Kryder’s predictions for either HDD or SSD.

With one eye on the future, a potential successor to the SSD is RRAM, which is more energy efficient, faster and has a greater areal storage density – 1TB fitting into the size of a postage stamp. This compounding of technological S-curves, I suspect, could resuscitate Kryder’s Law.

I have not been able to validate it, but there are reports that TSMC is to start producing RRAM chips to gauge market reception, so we might not have to wait too long to find out.

So what? Why is this important?

These laws have been the driving force of the digital age, providing a regular, deflationary cadence of progress. In isolation, from a human perspective, each law appears to progress in a linear fashion, akin to tectonic plates slowly edging across the seabed. The reality is that these laws demonstrate exponential growth, something which we as humans struggle to comprehend, resulting in earthquakes and tsunamis of innovation. If you don’t believe me, go and check the market cap of Apple compared to the FTSE 100, or the combined performance of the FAANG stocks.

Essentially, for anything impacted by all three laws, the underlying cost effectively trends towards $0, given a long enough timeline. Anyone who sold CDs or DVDs in the ’90s can attest to this: streaming content, be it through Spotify or Netflix, in a higher-quality format and with access to more content than anyone has time to consume, is now offered for a monthly fee.

The deflationary nature of this progress is something we need to be mindful of, given that our economic models are configured for inflation and have amassed significant amounts of debt. For more information and thoughts on this I recommend reading (or listening to) Jeff Booth’s The Price of Tomorrow.

Tracking their continued progress, and understanding any known limitations or technical hurdles, allows entrepreneurs, investors and product managers to predict when certain capabilities will become a reality and, as such, provides a timescale for when markets will be primed for the introduction of new, disruptive business models. The ‘creative destruction’ that enabled the FAANG organisations to become world leaders provides the same opportunities for tomorrow’s visionaries.

Final thoughts…

From the perspective of the laws of physics, we are nowhere near the limits of what is theoretically possible in computing.

However, there are near-term roadblocks for both Moore’s Law and Nielsen’s Law, the latter of which concerns me less from an engineering and technical perspective. The underlying trend of Moore’s Law is in a transitional state, moving from silicon to compound semiconductors, or it will become obsolete for general computing as greater focus is poured into specialised hardware. The reality of the situation is likely to become clearer by 2025.

Although Kryder’s predictions did not come to fruition, the introduction of RRAM is likely to reignite the trend, akin to Ray Kurzweil’s demonstration of overlapping technologies continuing the same exponential growth curve.

Personally, reviewing all three laws, the brightest star at present is Nielsen’s, predicting high-speed global internet connectivity midway through the decade. As much as there is a large focus on AI and its impact on the jobs of the future, I actually think that connectivity is going to be the greater disruptive force over the next 5-10 years.

For mature markets it will usher in the possibility of ever more immersive UI and UX, as development teams and product managers squeeze every available megabit of bandwidth offered up to them. For online gaming and eSports it will only further cement growth, all of which will require a greater number of datacentres to support the demand.

The bigger prize, though, is that the majority of the population, viewed as consumers, prosumers or workers, are not yet living in geographies with average internet speeds greater than 55 Mbps – I have chosen this number because it is the rate at which studying and working from home, or wherever is convenient, becomes possible. Even in developed economies where these speeds can ‘theoretically be accessed’, they are not affordable for all. This, however, is going to change.

If you live in an area with access to affordable high-speed broadband, congratulations: you are probably enjoying the benefits of streaming video, information overload and the frustrations of misinformation and social media. But you are in the minority, sitting on the crest of the tsunami triggered by the aforementioned tectonic plates. The oncoming storm surge and its implications have not yet been fully appreciated, due to the limitations of our event horizon.

So, as much as we think we have witnessed wide-scale digital transformation, the greatest levels of societal, educational and economic transformation, with previously inconceivable markets forged by the entry of new actors holding a greater variety of cultural and societal norms and perspectives, are yet to come.

This for me is a stark reminder of William Gibson, who said:

“The future is already here – it’s just not evenly distributed.”

– William Gibson
The Economist, December 4, 2003