Liveblogging the Singularity Summit 2007 – Day Two – morning

I arrived a few minutes late, having miscalculated my appetite at the breakfast place, which, being popular, put our orders in a queue. 🙂 Peter Norvig’s keynote had already started; as I entered the Palace of Fine Arts Theater, he was saying “So now back to AGI”, which I assume means he spent the first part of his talk on premises, or on information about Google, etc.


09.20 Peter Norvig, Director of Research, Google: “
If you can solve a lot with data, go ahead and do that! Introduce the model only when data doesn’t help.
Probabilistic truths, against the current state of the web. This changes how you do testing, with continuous updating.
AGI prerequisites – components are more important than solving the whole problem at once:
(and progress is being made, so I am optimistic) probabilistic first-order logic, hierarchical representation and problem solving, and learning over the above, with lots of data, online, efficiently.
We were surprised by how game-theoretic this has become.
When we started, we thought we would be making a copy of the web, and people would come and look things up in our index. Now we see that the things we do influence the optimizers, then we change, and so on…
We are coevolving with the web!”

10.10 J. Storrs Hall
“How do we build laws that work under conditions we cannot even grasp? It would be like asking Hammurabi to prevent the Enron scandal!

If we put ironclad constraints on our robots today, they will not hold up, since the robots will come to know the world better than we do…

By 2050 all corporations will be run by their Management Information Systems, and they will be required by law to be built so that their first law is ‘Make a Profit’!

Law 1. A robot shall Understand.
(Socrates said “there is no greater good than knowledge, and no greater evil than ignorance”)
Robots will understand memetic evolution
(because evolution is where morals come from)
(The superintelligent AIs of the future will be built entirely of Human Ideas)
(Ideas should compete. Bodies should cooperate.)
ESC: Evolutionarily Stable Conscience. Robots understanding their morality and evolving it.
Law 2. A robot shall be Open Source.
We already live in a world largely run by artificial information-processing structures which have no conscience: Corporations and Governments. They have an open-source motivational system: auditing, because money is their emotion. The less transparent they are, the less we can trust them.
Law 3. A robot shall be Economically Sentient.
Law 4. A robot shall be trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, reverent, and shall do a good turn (Lord Baden-Powell, 1857–1941, founder of the Boy Scouts). Robots shall be Boy Scouts! :)”


10.36 Peter Thiel speaking about investing in a world where the possibility of the Singularity exists.
“How do you invest in the world as a whole?
The basic intuition about a world with a Singularity scenario, with extremes of good and bad, is that the tails of the bell-curve distribution of possible outcomes are much fatter.
If you are somebody predicting the end of the world, then even if you are right you will not make a lot of money!
Consequently you have no choice but to bet on the positive outcomes.
The things you choose are not going to be exactly right.
You would expect the world to be full of manic booms and busts.
All the conventional theories say that the markets should be getting smoother, but that is not what we are seeing.
One way to see them is that these fluctuations represent different bets on the Singularity, or on proxies for it like globalization.
How does someone like Warren Buffett invest in view of the Singularity? Over the last ten years his holdings have shifted towards insurance and reinsurance. That is very significant.”
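
Thiel’s “fatter tails” remark can be made concrete with a small numeric aside of my own (not something shown in the talk). Assuming a Student-t distribution with 3 degrees of freedom as a stand-in for a heavy-tailed world of outcomes, rescaled to the same variance as a standard normal, a few lines of Python compare how much probability each model puts on extreme events:

```python
# Illustrative sketch, not from Thiel's talk: compare tail probabilities of a
# thin-tailed (standard normal) and a heavy-tailed (Student-t, df=3) model.
# The df=3 choice is an assumption for illustration; the t is rescaled to unit
# variance so the comparison with the normal is apples to apples.
import math
from scipy.stats import norm, t

df = 3
scale = math.sqrt((df - 2) / df)      # Student-t(df) has variance df/(df-2); rescale to 1

for k in (2, 3, 5):
    p_thin = 2 * norm.sf(k)           # P(|X| > k) under the standard normal
    p_fat = 2 * t.sf(k / scale, df)   # same threshold under the unit-variance t(3)
    print(f"|X| > {k}: normal {p_thin:.1e}   t(3) {p_fat:.1e}   ratio {p_fat / p_thin:,.0f}x")
```

At the same variance, the heavy-tailed model makes 5-sigma events thousands of times more likely than the normal does, which is one way to read his point that manic booms and busts stop being anomalies.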


10.50 Charles Harper, Templeton Foundation
“What does a slug know of Mozart?
Epistemology follows Ontology. There can be state changes. The best example is the Chimp-Human differentiation.
Our culture is social, based on a linguistic ontology.
How serious is the “dilemma of power”? We are much slower at evolving social and cultural solutions for properly handling the technological advances that we create.
I don’t have the answer, but I think that it matters a lot, and solving it is very hard.
How important is “transformation of desire”?
“The Hungry Soul” by Leon Kass
“Evolutionary Dynamics” by Martin Nowak”


11.35 Q&A session with the morning’s speakers.
J. Storrs Hall: “It has been the subtext that it is possible to build machines that are smarter than humans. I would like to suggest that it is then also possible to build machines that are more moral than humans, and that it is desirable to do so.”
Peter Thiel: “We must not presume that regulating AI research in the US would stop it everywhere. I almost hope that we could push the accelerator and arrive at AGI faster, because my intuition is that we would then maximize the probability of its use for good.”

12.00 Lunch. Spoke with R.U. Sirius, Kevin Kelly, Ronald Bailey, and others. Shot some photos that I hope turned out rather nice (the World Future Society asked for permission to use them, which they didn’t need, since all my photos are licensed under the Creative Commons Attribution license, which allows free use as long as attribution is given): for example, Peter Norvig of Google and Barney Pell of Powerset sitting together on the floor having lunch.

Follow the afternoon of the second day of the Singularity Summit in my next post.