Trading our Capabilities for Ever More Complex Computer Models
14 September 2021
A few years ago, I observed that the wealthiest people on the planet all had one thing in common: they seemed to have a significant amount of control over a very large amount of compute power, or to be in close proximity to someone with that control.
It's stuck with me ever since. Sometimes I relay it to other people, partly in jest, but lately it's seemed a bit more "real".
I recently came across the ideas of Jaron Lanier, and while they very much triggered my confirmation bias, what he had to say feels pretty spot on to me.
A computer gives you a lever that lets you exploit information more effectively than you could with your mind alone. The more compute power you have at your disposal, the more information you can process in a given amount of time and the faster you can make decisions.
When computers were first introduced, they were expensive. The economics of buying compute meant that it was reserved for our wealthiest industries.
First, manufacturing companies exploited computers to better model their supply chains. High-frequency traders built better financial models that allowed them to make more money. Insurance companies modeled healthcare diagnoses and survival rates. Wal-mart modeled their inventory levels and supply chains.
Then computers became cheap enough to apply these same techniques to less important problems of modeling the real world: social interactions and consumer taste. And software began to eat the world.
Software companies like Google and Facebook were soon created, building their fortunes and their futures on the assumption that these models would continue generating wealth.
Naturally, because these companies were able to exploit information so much more effectively, they could take their profits and re-invest them in compute power and better models, widening their advantage at a pace few others can keep up with.
We said the incumbents were faced with the Innovator's Dilemma as "disruptive innovation" forced them to adapt or die. In reality, these startups were largely just building better, more complex models on bigger and bigger computers than the incumbents could economically justify based on past performance.
Prior to the computer, the models these companies used existed, to one degree or another, inside the heads of all of their stakeholders. This is why people tended to stay at companies for so long: companies valued the knowledge inside their employees' heads. We have since succeeded in separating these models from that compute substrate, effectively giving the biggest computers "infinite time" and unbounding the complexity of these models.
This can be felt when interacting with these large companies. When things go wrong, there is often a sense of helplessness because the person you contact doesn't have the ability to fix the model, if you can even contact a human at all.
But we have also incurred other trade-offs that aren't frequently talked about.
These models have to be constantly updated by humans, but in this new world, only a select few have permission to update these models, and that permission is increasingly governed by politics inside these companies, and out.
Complex models are more likely to break. In the blink of a bat sneezing on a human in Wuhan, reality can change instantly, but the models do not.
In the world of models, we have to first invalidate our model of the world and convince the gatekeepers of that model that something is wrong before we can collectively re-orient. And because so many egos are built on the assumption that the model will continue functioning, that is an incredibly difficult thing to do until the model breaks entirely and we are abruptly forced to face just how far from reality our models took us, and how fast. Often, when it gets to this point, we say they are "too big to fail," and instead of invalidating their models, we continue to pretend it was a one-time thing and that they are still just as realistic.
Bunnie Huang points out the difference between our American economy and the one in Shenzhen:
To sort of illustrate what this means as a social phenomenon, it's what I call capability over inventory. Let's say you walk into a store and say you want to buy a 1.8m USB cable.
And the clerk in the store says, "we have 2m and 1.5m, how about you just get this 2m cable and you just coil up this extra bit, and you're good to go?"
And you say, "No, no, I really want a 1.8m USB cable. Can you make one for me?"
That clerk will be like, "You're crazy. Why would I make one of these for you? I have in my inventory, right here, a cable that can work if you are just willing to accept this little bit of a compromise."
In an inventory-based society, essentially, your thought processes, your ideas, your creativity are constrained by what you can get on the shelf. And the people who control those channels effectively can control how you structure life, what you innovate, how you do things. Right?
Now if you go to an ecosystem like they have in Shenzhen, they have people who have spools of wire, people who have cable endings, and they have people with machines like this, which put the endings on the cable.
If you know that guy who has that machine, who has that capability, you say, "Hi, I'd like to have a 1.8m cable."
Guy goes over to the machine, types in 1.8m, ca-chunk, out comes a 1.8m cable, quantity one.
This is what I call a capability-based ecosystem, where instead of looking at what inventory exists out there and what kind of bits and pieces you have to put something together, you can actually go and get that thing built to exactly the right spec for your idea.
Inventory is based on these models. We've essentially based our entire economy on models that might be invalidated at the drop of a hat. In my opinion, these models being invalidated is a very real concern post-COVID-19.
We've traded our capabilities for better and better models of reality in order to stretch our dollars further. But, we can never perfectly model reality, because that would require a computer the size of the universe. And as far as I know, we don't have a spare one of those lying around.
But bigger, more complex models don't generate wealth. Bringing new capabilities to more of humanity generates wealth. Better models only move wealth around.
I think this answers the question of where all the productivity went. We partly gamed the metrics to look like we were producing more than we were, and the gains we did have were mostly captured by these computer models.
The entrepreneurs that built these giant computers masquerading as tech companies hacked the hackers. Around the same time the original computer models were being created for supply chains, the hacker ethos emerged. A core tenet of that ethos is that "information wants to be free."
This impacted things in two ways:
- Expecting bits to be free effectively gave consumer-facing Internet companies a single possible revenue model: predicting and manipulating our behavior via advertising, by building better and better models of us all.
- Encouraging people to do open-source work for free, which companies could leverage more effectively with their bigger computers, further entrenched the incumbents (see: app stores).
But remember: the amount of compute power under their control continually increases as they re-invest their profits. These forces combine to effectively create a "power law distribution" of wealth, an assumption that the entire industry of venture capital is based upon.
If the distribution of wealth in your society follows a power law instead of a normal distribution, you have effectively removed the possibility of a middle class.
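To make that claim a bit more concrete, here is a minimal sketch, not from the original post, comparing how much of total wealth the "middle" of a society holds under a roughly normal distribution versus a heavy-tailed power law (Pareto) distribution. The distributions, the parameters, and the 20th–80th percentile definition of "middle" are all illustrative assumptions.

```python
# Minimal sketch: share of total wealth held by the middle 60% of households
# under a normal vs. a power-law (Pareto) wealth distribution.
# All parameters are illustrative assumptions, not empirical estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical "normal" society: wealth clustered around a mean, clipped at zero.
normal_wealth = np.clip(rng.normal(loc=100, scale=25, size=n), 0, None)

# Hypothetical power-law society: Pareto tail (alpha ~ 1.16 gives a rough 80/20 split).
pareto_wealth = (rng.pareto(a=1.16, size=n) + 1) * 10

def middle_share(wealth, lower=0.20, upper=0.80):
    """Fraction of total wealth held by households between the given percentiles."""
    w = np.sort(wealth)
    lo, hi = int(lower * len(w)), int(upper * len(w))
    return w[lo:hi].sum() / w.sum()

print(f"middle 60% share, normal distribution: {middle_share(normal_wealth):.1%}")
print(f"middle 60% share, power-law (Pareto):  {middle_share(pareto_wealth):.1%}")
```

With these particular parameters, the middle 60% holds roughly its proportional share of the total under the normal distribution, while under the Pareto tail most of the total concentrates in the top few percent and the middle's slice shrinks dramatically, which is the "no middle class" effect described above.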
This shrinking middle class, being replaced by computer models, is what I believe to be causing the conflict that everyone would point to as the primary symptom when you ask them, "What is the single biggest problem in our society?"
Based on this, I think these are actions that might be taken:
- In the short term, invest in the largest computers and the best computer models, in order to fund:
  - Competition in the form of decentralized, self-hosted software projects that erode the moats of the largest tech companies.
  - Legislation that forces the owners of more than a specific amount of compute power not to use their ownership to favor any particular computer model, and that requires computer models to be open source.
  - Legislation to remove accredited investor limits, which prevent more people from benefiting from the success of these computer models and make the power law distributions more dramatic.