The evolution of software vs. hardware solutions

On the AGI mailing list there are several interesting threads, and recently a request by Eliezer Yudkowsky caught my eye: he was asking for the source of a quote. I forwarded the request to Geordie Rose of D-Wave, who replied quickly and in depth.

The request was for a source supporting the claim that

“a 1974 era computer running 2007 algorithms would beat a modern computer running 1974 algorithms”

and indeed that appears to be the case.
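To see why this is plausible, here is a back-of-envelope sketch in Python (my own illustration, not from the thread; the doubling period, complexity classes, and problem size are all assumptions chosen for the example). It compares the hardware speedup implied by Moore's-law-style doubling between 1974 and 2007 with the algorithmic speedup of replacing an O(n²) method with an O(n log n) one on a large input.

```python
import math

# Hardware side: Moore's-law-style doubling every ~18 months
# (an assumption used only for this rough illustration).
years = 2007 - 1974
hardware_speedup = 2 ** (years / 1.5)             # 2^22, roughly 4 million x

# Algorithm side: replacing an O(n^2) method with an O(n log n) one
# on a problem of size n (n is an arbitrary, illustrative choice).
n = 10**9
algorithm_speedup = (n * n) / (n * math.log2(n))  # roughly 33 million x

print(f"hardware speedup 1974->2007: {hardware_speedup:.2e}")
print(f"O(n^2) -> O(n log n) at n=1e9: {algorithm_speedup:.2e}")
```

At this problem size the algorithmic factor already exceeds the hardware factor, so the old machine running the new algorithm finishes first; for small inputs, of course, the comparison flips.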

About a year ago I asked Marvin Minsky whether he saw an exponential increase in the power of software to solve problems, as we see in hardware. He said that over the decades he did indeed see a positive trend, but his assessment was that of a linear increase at most. We would need a good deal more data points than just two to confirm whether he is right or wrong. (My guess is that he is wrong, and that the increase is more than linear. See also below.)
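A minimal sketch of why two data points settle nothing (the two ‘software power’ measurements below are invented purely for illustration): a linear model and an exponential model can both be fitted exactly through any two points, so neither Minsky's hypothesis nor mine can be ruled out.

```python
# Two hypothetical measurements of "software power" at two times
# (the values are invented purely for illustration).
t0, p0 = 1974, 1.0
t1, p1 = 2007, 50.0

# Linear model p(t) = a*t + b, fitted exactly through both points.
a = (p1 - p0) / (t1 - t0)
b = p0 - a * t0

# Exponential model p(t) = c * r**(t - t0), fitted exactly through both points.
r = (p1 / p0) ** (1 / (t1 - t0))
c = p0

for t in (t0, t1):
    print(t, a * t + b, c * r ** (t - t0))  # both models reproduce the data
```

Only measurements taken between the two endpoints could separate the curves.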

What we see in hardware is the relentless exploitation, by computers running automated CAD systems, of a given architecture's possibilities. When there is a jump, it is because, as the current technology becomes exhausted, one of the groups exploring alternatives gets lucky, and it is their solution that becomes the basis on which computers once again plan the next generation's hardware. Software development, on the other hand, is seldom automated: code generators have never become widespread, as their output was difficult to optimize further, and re-writing by hand was deemed the better choice. This means that any given software architecture is exploited less systematically, and the more widely spaced improvements are more likely to be jumps into new paradigms.

I was wondering why one would undertake the masochistic exercise of running a slower algorithm on faster hardware, or a faster algorithm on slower hardware. The second is common when changing platforms, where energy consumption imposes the use of less power-hungry CPUs: for example, when moving a given set of applications from the PC to a mobile platform. The first is, in my opinion, an even more important event, because the relatively less optimized set of solutions often brings higher readability for humans, a richer set of programming and debugging tools, IDEs, and so on. The latest generation of languages, like Ruby, is a testimony to this: their expressivity and readability widen their circle of adoption, even if they are not optimal. Sometimes it is possible to achieve both highly optimized execution and high expressivity and readability, as in Mathematica, for example.

What reliable and widely used software can do, while staying manageable, has definitely increased over the years. It is certainly not bug free, and now often hides behind a permanent ‘beta’ tag, but the threshold of a few million lines of code, seen 10-15 years ago as the maximum possible, has been greatly exceeded.
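To make the expressivity point concrete, here is a small comparison in Python rather than Ruby (entirely my own example): the high-level version states its intent in a single expression, while the hand-rolled version spells out mechanics the reader must reconstruct.

```python
from collections import Counter

TEXT = "the quick brown fox jumps over the lazy dog the fox"

# Expressive, high-level version: the intent is visible at a glance.
def word_counts_expressive(text):
    return Counter(text.split())

# Lower-level version: the kind of hand-rolled loop one writes when
# a language (or a performance constraint) offers nothing higher-level.
def word_counts_manual(text):
    counts = {}
    for word in text.split():
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

print(word_counts_expressive(TEXT))
print(word_counts_manual(TEXT))
```

Both produce the same counts; it is the first kind of code that widens a language's circle of adoption.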

As an anecdotal aside, among those who have been busy thinking about how to improve software productivity and reliability is Jaron Lanier, the inventor of the term ‘virtual reality’, whose company selling gloves and goggles for immersive virtual reality was called VPL Research. ‘VPL’ stood for Visual Programming Languages, because he felt that in order to achieve better software throughput we needed new interfaces, and that immersive VR would provide them. (This is not Second Life style object creation, where the code is still written traditionally; Jaron saw the programmer juggling algorithms in 3D…)
