During lunch, R.U. Sirius interviewed Kevin Kelly. I am looking forward to listening to that podcast soon.
13.06 Michael Lindsay of the X Prize Foundation shows a video on the foundation, which funded the prize for private space travel won by Burt Rutan’s SpaceShipOne team. The foundation is expanding into new areas, and Michael presents its initiative in education, with a focus on educational software.
“The most difficult part is picking the target goal. I want to ask you a question: how many of you think that a 2-sigma improvement, where pupils using these tools in Algebra 1 perform better than 97.7% of the classroom, cannot be achieved within 10 years?” A lot of hands are raised, which is a little surprising.
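(An aside from me, not from the talk: the 97.7% figure is simply the standard normal CDF evaluated two standard deviations above the mean, which a couple of lines of Python confirm.)

```python
# Where the 97.7% comes from: the share of a normal distribution lying
# below a score two standard deviations above the mean, i.e. Phi(2).
from statistics import NormalDist

print(f"{NormalDist().cdf(2):.3%}")  # -> 97.725%
```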
Michael says that they are not sure if they want to set up the prize, which to me is absurd. If they have the means, they should definitely try: there can be little more important than equipping humans through education to better cope with the world around them.
13.35 Steve Jurvetson – The Dichotomy Between Designed and Evolutionary Paths to AI Futures
He asks the audience which approach is going to produce the first AGI; two thirds favor the evolutionary approach, to which he also subscribes.
“Evolving an artificial brain will create one that is as inscrutable as the human brain.
There will never be a solution to the problem of the opacity of evolved systems: just as with Wolfram’s cellular automata, there is no shortcut to executing them.
You cannot just upload the brain, extracting it from the entirety of the sensory environment.”
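(A minimal sketch of the point Steve is alluding to, mine rather than his: for an elementary cellular automaton such as Wolfram’s Rule 30, no known shortcut gives you the state at step n; you have to execute all n steps and watch.)

```python
# Rule 30, the standard example of computational irreducibility: the only
# known way to learn what the automaton looks like at step n is to run
# steps 1..n. There is no closed-form shortcut.
RULE = 30

def step(cells: list[int]) -> list[int]:
    """Apply one Rule 30 update (cells outside the array are treated as 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```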
He shows EvolvedMachines and genetic-programming.org.
“Grand engineering challenge: the unification of approaches.” (He mentions Wolfram, D-Wave, Kurzweil, Kelly.)
14.15 Christine Peterson – Preparing for Bizarreness – Open Source Physical Security
“Stick around, prepare for risks.
Last year I addressed IT risk; this year I am addressing physical risk: chemical, nuclear, bio, nano.
It is a very scary world ahead. The best strategy is to assume that AGI will arrive on both sides, defence and offence.
The strategies that do not work are the top-down, near-term ones of the DoD and the DHS.
The challenge is balancing security, privacy, and freedom, and it is not going well; airline security is an example.
Can we take the principles of Open Source, as practiced in software, out into the physical world?”
14.30 James Hughes – Waiting for the Great Leap… Forward?
“As a sociologist, I see greater-than-human intelligence around me every day: organizations are meta-human, they make decisions, and they act every day.
AGI will be radically alien.
Friendly AI should be attempted, but humans should be Friendly too, and maybe that will also be a consequence of our attempt.
Morals are editable.
We have the ability to determine outcomes, and we have to recognize our millennialist cognitive biases in both the good and the bad direction.
Emergent artificial life forms living in our information systems might not reach human-level intelligence or beyond; maybe they will stop at the level of cockroaches, but… cockroaches are very, very annoying!
The Storm worm has implemented automated defensive mechanisms that have been called frightening by experts.
From a policy point of view, in cybersecurity, international, national, and local regulations are all necessary.
We need to talk to the people designing future versions of the Wassenaar agreements to agree on what the future threats are.
Detecting and controlling AI threats will require machine help.
AI will require IA (Intelligence Amplification). Some humans are actually intelligent, and already friendly!
Maintaining the mammalian brain as a control mechanism.
Need for a new social contract around social provision, labor, wages, education, and retirement.”
15.00 Eliezer Yudkowsky
“Somebody came to me yesterday asking if I was a creationist, because I said that it was impossible for a butterfly to evolve. I want to be on record that I am not a creationist, I am an evolutionist. What I said was that evolving a butterfly is a very inefficient way of getting a butterfly!”
“By the power of laziness, which is a programmer’s great advantage, you achieve results that would be impossible just by hard work.
Deep Blue’s unpredictable moves had a predictable consequence: Deep Blue winning.
A smart AI researcher should always ask “Could an AI solve the problem I am trying to solve? How am I thinking about the problem now?”
Humans can think about AI theory, so an AI of comparable ability should be able to as well.
An AI that can output AI theory can swallow itself, and become a reflective AI.
This is where the Intelligence Explosion comes from.
AIs do not necessarily form a natural class, so their motives will be rather varied. We need to reach out into the space of possible minds, and pick the stable self-modifying trajectory of a Friendly AI.
There is a difference between terminal values and instrumental values (means to an end).
In designing a moral agent we need to think about the trajectory that it will describe.
We need an ethical bulldozer, because we all together are not smart enough to shovel the stars.
We need a Friendly AI capable of helping us solve Friendly AI theory.
The last level of laziness.”
15.20 Q&A
With a very good remark about the need to get our own house in order.
16.00 Ray Kurzweil by videoconference from the UK, where he is at Aubrey de Grey’s SENS conference.
He makes a very short introduction and responds mainly to Norvig, with various remarks defending the exponential gains in technology, the economy, biology, etc. (Evidently Peter, at the beginning of his speech, which I missed, attacked some of his claims.)
I wasn’t taking notes while I stood in line at the microphone to ask a question about his current views on open and open-source approaches versus regulatory or elitist ones, with a view towards the Singularity Institute making policy recommendations. He confirmed that he favors an open-source approach, but said there will be a role for proprietary data sets. He did not address the policy/political angle of my question.
Ray is filming a documentary called “The Singularity is Near”, intertwined with the story of Ramona, the female avatar he created, as she progresses into the future, starts to fight for her legal rights, and tries to pass the Turing test… The film is due in 2008, about a year from now.
17.00 Tyler Emerson, the Executive Director of the Singularity Institute, gives the closing remarks of the conference: educational outreach; research, with a grant program for students all over the world; and the development of a slideshow to be given at universities.