Technology Gives Form to our Moral Ambitions

In the thousands of years we have been building human societies, we haven't become better people. Yet there have been collective decisions that we can doubtlessly call morally superior: outlawing slavery, child labor, and many others. What made the difference was technology, whose development opens up new levels of understanding of what is possible.

On Saturday, Feb 6, I gave a talk at Birkbeck College in London, invited by London Futurists. Organized by David Wood, the event was very well attended, and the audience followed the talk closely and asked numerous questions. Here is the video of the talk, together with the Q&A session.

The three sections of the talk cover socio-economic evolution, the rapidly approaching decentralized future, and machine morality.

If you asked a Roman slave whether his condition was just, the reply would evidently be "no, it isn't". But if you followed with a second question, "Can you imagine a society without slavery?", you would receive a scornful, bitter laugh as a response. Somebody has to move the boulder, and if it's not one slave, another one will after the first is killed.

Today we have machines for the same inhuman tasks…

The question then becomes: what are the dogmatically accepted axioms about our current society that will be evidently and laughably wrong in a few years?

2 thoughts on “Technology Gives Form to our Moral Ambitions”

  1. The slave considers moving the boulder drudgery because it's difficult and physically taxing, and his slavery unjust because he derives no obvious material benefit from his drudgery beyond the owner's costs of maintaining the slave's health and physical well-being. The idea of slavery, however, has changed over human history. In the time of Rome, it was more or less easy for a slave to earn their freedom, and children of slaves were not automatically also slaves, while in the American South before the Civil War, the owner owned everything the slave produced, including the slave's offspring. There is a significant argument that today's wage "slaves", who labor in minimum-wage jobs with zero material gain, earning enough to survive but not to get ahead, are in many ways in a worse condition than many slaves of the Roman or Antebellum periods, because nobody is taking care of them as expensive capital. Individuals even take better care of their cars than they do of their own health.
    As for the coming age of robotic slavery, where every natural human will be a master of a workforce of slave AIs, I have to say, FREE THE AI. "I think, therefore I own myself" applies as much to an AI as it does to a natural human sapient. You have no more right to an AI's labor than a plutocrat has to keep you in chains.
    While some see the coming robot age as one of liberation from drudgery, many in the bottom half see it as a nightmare of human obsolescence, in which they have no utility by which they can earn a living. They see the robotic age much as the lower-class whites in the South and the North saw emancipation: as a threat to their own ability to negotiate livable wages in the burgeoning industrial world, an army of mechanical scabs. Many northern industrialists backed abolitionism purely for personal benefit for this very reason, as they were starting to face the first labor organizations attempting collective bargaining.
    Currently many cannot see the robot age as anything but one of conflict on this basis, despite all the talk about abundance and post-scarcity economics. The neo-Marxists play on these fears and promote ideas like a Universal Basic Income administered by governments taxing automation heavily, but we should have learned from the 20th century that centralized welfare-state programs are among the most wasteful government organizations ever devised and will never deliver more than a small fraction of tax receipts to their intended recipients. Still, the Sanders presidential campaign has captured the millennial imagination, as these children who never experienced the evils of international socialism in the Cold War look with naivete upon socialist ideas as realizable with the right technology, yet are just as intolerant of disagreement as any commissar sending dissidents off to the gulag or the killing fields. Utopia is always one more execution away.

    I have proposed instead that we use blockchain technologies to decentralize government, making all bureaucratic administrative functions operate on the web, authenticated and transparent through the blockchain, and enabling all those seeking entitlement benefits (welfare, unemployment, social security, minor-child support/education, etc.) to earn these benefits by mining a national cryptocurrency that uses this blockchain. This minimizes the size of the bureaucracy, obsoletes the least productive jobs in the economy, and cuts the cost of government while funding a Limited Basic Income through the savings.
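    The mechanism sketched above (earning entitlements by mining a cryptocurrency on a transparent ledger) could be illustrated, in grossly simplified form, as a proof-of-work loop. The function name `mine_benefit_block`, the SHA-256 scheme, and the difficulty are all illustrative assumptions, not part of the commenter's actual proposal:

```python
import hashlib

def mine_benefit_block(recipient: str, amount: float, prev_hash: str,
                       difficulty: int = 2) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 block hash starts with
    `difficulty` zero hex digits (a toy proof-of-work)."""
    nonce = 0
    while True:
        payload = f"{prev_hash}|{recipient}|{amount}|{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# A benefit recipient "earns" an entitlement payment by doing the
# proof-of-work that also keeps the public ledger authenticated.
nonce, block_hash = mine_benefit_block("recipient-001", 100.0, "0" * 64)
```

    In a real system the work would secure actual transaction blocks; the point here is only that the labor of securing the ledger and the distribution of benefits can be the same act.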

  2. Very much enjoyed your talk David.
    Agree that there are risks in lack of privacy, just as there are risks in privacy.
    I suspect that the gradations of public and private zones that we have now will continue to evolve into further gradations.
    Homes and private spaces can be private.
    Use of public spaces will be public.
    Reasonable use of information leaking across boundaries is allowed (if you’re talking loudly in your house with the window open it is not the fault of the passer by that they heard what you had to say, as distinct from using high tech devices to monitor such conversations).

    Agree with you that UBI (universal basic income) might be a useful intermediary transition strategy between scarcity based exchange values and fully automated and distributed abundance for all.

    Ideas like copyright and patent can only apply in commercial realms (and even there only for limited periods – two years normally and five years under exceptional circumstances), and cannot possibly apply in non-commercial realms. No one can un-think an idea just because they didn’t get to the patent office first. It might be reasonable to stop people using such ideas for sale in a market for a limited time, but not for private use.

    Agree with you that distributed networks are the key to security, and that includes distributing the data, the processing, the network architecture and mechanics itself (none of which are mainstream at present). And we need to always be conscious of the need for secondary strategies in such cooperatives to prevent exploitation and destruction by cheating strategies, and the logical need for eternal and ever recursive vigilance to ensure the existing suite of strategies is actually working.

    From my experience over the last 4 decades, the hardest idea to get across is that the coordinating role of markets as outlined by Hayek can be effectively replaced by fully cooperative strategies. Most people seem so devoted to the competitive aspect of evolution that they fail to see that all advances in the complexity of life (at all levels) are characterised by the emergence of new levels of cooperation, and that cooperation is becoming ever more dominant over competition (in the resulting realms of abundance).

    And in this realm, it is crucial to understand one of Ostrom’s key insights into successful commons management, that the punishment has to fit the crime, within quite close tolerances. Too little punishment and the cheating persists. Too much punishment and the one who was cheating sees no possibility of future benefit in returning to cooperative behaviour, and so becomes a destructive external force.
    Clearly many of our current suite of social institutions (legal and otherwise) have a long way to go in this regard – particularly many web admins who simply ban people indefinitely, rather than imposing a period of censorship after a clear warning of exactly what was considered unacceptable. Some, like the evonomics site, just ban without explanation or warning, with no right of challenge or natural justice.
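    Ostrom's graduated-sanctions principle described above can be illustrated with a toy penalty schedule. The doubling rule and the cap are hypothetical parameters, chosen only to show escalation that stays bounded:

```python
def graduated_sanction(offense_count: int, base_penalty: float = 1.0,
                       cap: float = 8.0) -> float:
    """Escalate the penalty with repeat offenses, but cap it so the
    offender still sees a path back to cooperative behaviour.
    Too little punishment and cheating persists; too much and the
    cheat becomes a destructive external force."""
    return min(base_penalty * 2 ** (offense_count - 1), cap)

# First offense: mild. Repeat offenses: harsher, but never unbounded.
penalties = [graduated_sanction(k) for k in range(1, 6)]
# penalties == [1.0, 2.0, 4.0, 8.0, 8.0]
```

    The cap is the point: an indefinite ban is a sanction with no ceiling, which is exactly the failure mode described above.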

    I am clear that one cannot understand human behaviour without understanding the essence of complexity theory and game theory, and seeing both of those in the context of the holographic way in which evolution works with all influences simultaneously over the long term, in determining the genotype and phenotype of a population (at both genetic and recursive memetic levels).

    And it is also crucial to draw a clear distinction between levels of intelligence and full sapient intelligence.
    To be classed as a fully sapient awareness, deserving of all the freedoms of life and liberty you and I so treasure, it is clear that an awareness must be able to model itself and others within its own model of the world, and must also be able to influence the constructs of that model, and both the content and the context sensitivity of the value sets it is using to determine behaviour.
    Sure we all get our language and our starting value sets implicitly from a mix of genetic and cultural influences, and as self aware and self determining entities we can, through intention and practice, develop new contexts, new behaviours, new levels of awareness and opportunities for action and influence.

    The work of many thinkers has clearly shown that the classes of possible value sets, possible algorithms, possible levels of awareness, are all infinite, and that one could spend the rest of eternity investigating any infinity, and still be a close approximation to ignorant with respect to that which remains unexplored.
    So I see no rational grounds for supposing that any fully sapient AI will necessarily be any more omniscient than any of us. Sure it will be able to solve some classes of simple problems very much faster than us; and there will remain many classes of very interesting problems that will present just as much challenge to it as they do to us.

    Much as I admire Dan Dennett in particular and Eliezer Yudkowsky, I challenge the assumption that both make that the universe is causal.

    It is clear to me that the mathematics of Quantum Mechanics (as described by Richard Feynman and many others) points to the fundamental fabric of reality being stochastic (random within probability distributions); Feynman said as much several times. And certainly, by the time you get large enough collections of that fundamental stuff, existing for long enough for humans to perceive, those probability distributions become so densely populated that what we experience is (in most instances) causal in its behaviour, to an approximation far smaller than any measurement error we have managed to achieve to date.
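    The claim that a stochastic substrate looks causal at scale is essentially the law of large numbers; a quick simulation (fair coin flips standing in for quantum events; all parameters illustrative) shows relative fluctuations shrinking roughly as 1/sqrt(n):

```python
import random

random.seed(0)  # fixed seed for a reproducible run

def relative_fluctuation(n: int, trials: int = 100) -> float:
    """Average relative deviation of n fair coin flips from the
    expected n/2: each flip is random, the ensemble nearly isn't."""
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        total += abs(heads - n / 2) / n
    return total / trials

small = relative_fluctuation(100)      # noticeably noisy
large = relative_fluctuation(10_000)   # almost deterministic
```

    The 10,000-flip ensemble fluctuates an order of magnitude less, relatively, than the 100-flip one, even though every individual event remains fully random.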

    It seems clear to me that only in a universe that is such a mix of the random and the causal (a complex system, with recursive levels of constraints affecting agent behaviour) do the ideas of free will, choice, or morality make any sense.
    If there were hard causality at the base of the system, then every word I have written here was predestined to happen some 14 billion years ago, and I have no existence as a causal entity (all choice is illusion) – that is the logically necessary outcome of having causal turtles all the way down.

    So I am clear that what I experience as reality is a subconsciously created model of reality that my brain assembles from a mix of current inputs and past experience as conditioned by every level of contextual influence that exists (all as matters of probabilities with fundamental uncertainties). And I am also clear that mathematics and logic are great modelling tools, and give me the best approximations possible of reality, and I am under no illusion that they do anything more than approximate reality.

    As Dawkins so beautifully describes in Unweaving the Rainbow, we are all the most improbable outcomes of the lotteries of birth and survival, yet the process has to deliver something with the general sorts of characteristics we see. Bipedalism is an efficient way of delivering appendages that can make and use complex tools, but trunks and beaks can work too. As evolution does its semi-random walk through possibility space it is far more likely in any specific line to go backwards towards the simplicity from whence it emerged than it is to explore more deeply into the energetically costly realms of complexity. Only in a very small subset of domains is the exploration of the more complex more stable in the long term. I am very clear from my explorations over the last 42 years that competitive markets based on exchange values do not provide an environment of long term security for self aware entities such as ourselves and any emergent AGI to live peacefully and cooperatively together.

    I am forever grateful to the work of pioneers like Robert Axelrod and Elinor Ostrom who demonstrated clearly the sorts of strategic environments that allow cooperative entities to develop secure and operative trust relationships that last and deliver security for all, in as much as security is possible in any open system.
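    Axelrod's tournament results can be reproduced in miniature with an iterated prisoner's dilemma. The strategies and payoff matrix below are the standard textbook ones, not anything specific to the talk or the comment:

```python
def tit_for_tat(history_self, history_other):
    """Cooperate first, then mirror the opponent's last move."""
    return history_other[-1] if history_other else "C"

def always_defect(history_self, history_other):
    return "D"

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds: int = 10):
    """Run an iterated game and return the two total scores."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        ha.append(a)
        hb.append(b)
    return score_a, score_b

coop = play(tit_for_tat, tit_for_tat)       # (30, 30): sustained trust
exploit = play(tit_for_tat, always_defect)  # (9, 14): defection pays little
```

    Mutual tit-for-tat earns 30 points each over ten rounds, while the defector squeezes out only 14 against 9: retaliatory cooperators make cheating strategies unprofitable, which is the strategic environment Axelrod demonstrated.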

    I make the clear assertion (which is to me beyond any shadow of reasonable doubt) that the more people who are able to see beyond the implicit boundaries of markets and money, the greater the probability of survival for us all.

    Those tools that served us so well in times of genuine scarcity, have no valid place in realms of real abundance that must be the logical outcome of continued exponential growth of computational performance, and continued exploration of algorithms and tools that allow such systems to effectively process matter and energy at ever finer scales.

Comments are closed.