Excited by the possibilities of artificial intelligence, we aim to build the cleverest software agents and robots, the smartest possible programs that serve us. But what if this approach were the opposite of what we should aim for? What if we instead aimed to build a set of AI components of the minimum cleverness necessary for a given task, an infrastructure layer on which other AI systems would then be built?
When the Internet's protocols were designed many decades ago around the TCP/IP family of standards, their architects realized that it would be impossible to predict the kinds of applications that would later be built on top of those protocols. So rather than forecasting the applications that would emerge and the features they would need, they agreed on the minimum set of protocol components able to support the richest set of applications, whatever creative developers came up with in the future. That approach has been transformative.
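The layering principle is easy to see in code. The sketch below, a toy example and not anything from the original design documents, shows that TCP offers nothing but a reliable byte stream; the meaning of the bytes, here a trivial echo convention, lives entirely in the application built on top:

```python
import socket
import threading

# TCP imposes no structure on the data it carries; every application
# protocol (HTTP, SMTP, or this toy echo convention) is layered on top.

def server(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)          # raw bytes from the transport layer
        conn.sendall(b"ECHO " + data)   # the "protocol" lives in the app

listener = socket.create_server(("127.0.0.1", 0))  # minimal transport endpoint
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    print(client.recv(1024).decode())   # prints "ECHO hello"
```

Nothing in the transport layer had to anticipate the echo convention; any other application could have been built on the same minimal interface.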
Something similar should happen with AI: completing the hard task of working out the minimum set of support services that the AI layer should provide uniformly to everyone. If we do, within a few years we will see new AI applications flourish, just as applications on the Internet flourished thanks to the same approach, adopted many decades ago.
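What might such a minimal AI layer look like? The essay does not specify its services, so the following is purely an illustrative guess: an interface deliberately kept as narrow as TCP's byte stream, with hypothetical primitives such as `complete` and `embed`, on top of which applications compose their own behavior.

```python
from abc import ABC, abstractmethod
from typing import Sequence

class MinimalAILayer(ABC):
    """Hypothetical sketch of the narrowest interface an AI infrastructure
    layer might expose, leaving all application semantics to higher layers,
    much as TCP exposes only a byte stream. The two services chosen here
    are assumptions for illustration, not a proposal from the source."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Map text to text, with no task-specific behavior baked in."""

    @abstractmethod
    def embed(self, text: str) -> Sequence[float]:
        """Map text to a vector, so applications can build search,
        clustering, or memory on top."""

# An application such as a translator would be composed from these
# primitives rather than built into the layer itself.
def translate(ai: MinimalAILayer, sentence: str, language: str) -> str:
    return ai.complete(f"Translate into {language}: {sentence}")
```

The design choice mirrors the Internet analogy: the layer stays minimally clever, and the cleverness of any particular application lives above it.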