Be Enhanced or Be Absorbed
In the following, I'll make the case that we need to develop our systems with the understanding that the quantity and quality of intelligence on our planet are going to increase. In this world, well-designed systems will be enhanced, others will be augmented, and the remainder will be absorbed into the models.
We stand at the dawn of a new age of escaped genies, destined (absent defeaters) never to return to their lamps. And the genies are becoming more powerful by the day. In this world, what can an employee, a CEO, or a startup founder be certain of? Standing at the base of any exponential (or at least a sigmoid with a long runway ahead) casts doubt on almost anything we can say about the future, and this is especially true when the exponential relates to the quantity and quality of intelligence in the world.
Accepting this premise is one thing, but understanding how we can best navigate this world is another.
How the future looks
We should assume, given current progress, trend lines, and the vested interests of large corporations, that:
- full multi-modality across text, images, video, sound and actions will become standard
- future models will be capable of planning and solving long horizon, complex tasks
- creativity and ability to solve novel, unseen problems will improve, so that AI can invent new things and solve new tasks
- autonomous agents will be released onto the internet and run indefinitely in pursuit of an objective, rather than acting like chatbots that terminate once they finish generating a response
- a non-trivial proportion of internet traffic will be autonomous agents performing their duties
- navigating the internet and other systems (e.g. operating systems, applications) will have first-class support in future models
What are the practical implications of living in this timeline? My claim is that existing systems and organisations will either be absorbed into the models or enhanced by them.
Next steps
It feels somewhat intimidating to live in the era of generally intelligent machines suddenly starting to automate large swathes of work. But for the foreseeable future, I think there are practical and useful things we can do to retain some sense of normalcy.
1. Migrate upwards in the layers of abstraction
Pay attention to what aspects of your systems are being automated and leverage those abilities to do more work.
Naive next-token prediction has already begun to change areas like graphic design, copywriting, and customer service. But this is not to say those jobs no longer exist. Instead, if the humans doing them can notice which aspects of their work can now be done with AI-based systems, and adapt their workflows to leverage these new capabilities, they can transform their potential replacement into a huge multiplier of their productive output.
For example, an analyst of the future might go from scouring the internet and building graphs to analysing the outputs of agents doing that same work, effectively becoming a supervisor and quality-assurance officer of the machine that produces the output, rather than the person who produces the output themselves.
2. Increasingly leverage machine reasoning in software systems
Let's imagine we want to extract payment details from human-written emails, like bank details or PayPal addresses, and that we receive a thousand of these emails per day.
In the modern context we have two options:
- We could write complex pattern-matching software that looks for every conceivable form of payment details
- Alternatively, we could make an API call and have a language model emit JSON containing the required details
I'd argue that we should increasingly opt for the latter option, because it benefits from the improvements in model performance that will inevitably arrive in ~3 months, whilst the former does not. The first option requires long periods of development and teams of software engineers, and adds complexity to our organisation, whilst the latter can be done in an afternoon and start producing results. If the results are bad at first, we can iterate, initially by trying prompt engineering techniques (e.g. providing correct and incorrect examples).
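To make this concrete, here is a minimal sketch of the second option in Python. The prompt wording, the JSON schema, and the `call_model` stub are all illustrative assumptions, not any particular provider's API; in production, `call_model` would wrap a real language-model API call.

```python
import json

# Illustrative prompt: the exact wording and the field names are assumptions,
# and would be iterated on (e.g. with correct/incorrect examples) in practice.
PROMPT_TEMPLATE = (
    "Extract any payment details from the email below. "
    'Respond with only a JSON object containing the keys "method", '
    '"account_name", "account_number", and "sort_code", '
    "using null for anything not present.\n\nEmail:\n{email}"
)

def extract_payment_details(email_text, call_model):
    """Ask a language model for payment details as JSON.

    `call_model` is any callable taking a prompt string and returning the
    model's text response -- injected so the wrapper stays provider-agnostic.
    """
    response = call_model(PROMPT_TEMPLATE.format(email=email_text))
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # Models occasionally emit malformed JSON; return None so the
        # caller can retry or fall back.
        return None

# Stubbed model for demonstration only -- a real deployment would call an
# LLM API here instead of returning a canned response.
def fake_model(prompt):
    return (
        '{"method": "bank_transfer", "account_name": "J. Smith", '
        '"account_number": "12345678", "sort_code": "12-34-56"}'
    )

details = extract_payment_details(
    "Hi, please pay J. Smith, account 12345678, sort code 12-34-56.",
    fake_model,
)
print(details["method"])  # prints: bank_transfer
```

Injecting the model call as a parameter keeps the extraction logic testable offline, and swapping in a newer model later is a one-line change, which is precisely how the system inherits future improvements in model performance.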
This is not to say that there is no value in more traditional deterministic systems that provide greater powers of interpretability. It just needs to be acknowledged that when done with thought and care, language models can provide a 95% solution with minimal negative externalities for a fraction of the time and effort.
Conclusion
When building new systems, like companies or software, we should be cognizant of how they might look in a world where my predictions in the bullet point list above come to fruition.
The labs are working as hard as they can on the next breakthroughs, throwing billions of dollars at the greatest minds and machines. They are not slowing down, and we should not expect the generality of their systems to slow down either. We'd be wise to observe the shift already taking place in today's systems in the presence of basic AI models, and to consider that in a world of increasingly competent exploitation of increasingly competent machines, our current systems are going to be put under strain, warp, and eventually either evolve or break. It has only just begun.