Review: Digital Humanism

For our June talk, SELHuG committee member Tony Brewer, a former IT strategy specialist, gave a presentation on Digital Humanism – shaping a future for people and robots. Trevor Moore reports.

Slides of the presentation plus links to other information are at the bottom of the review. 

The catalyst for Tony’s talk came from his attendance last year at the Vienna Biennale, the Manifesto for which bore the intriguing title What Do We Want? Dimensions of a New Digital Humanism. The Vienna Biennale Circle have developed a set of eleven questions that together seek to answer the wider question: How do we want to live in a digital world?

Before we could sensibly look at those questions, Tony set our brains whirring by giving an outline of some of the key concepts involved in any discussion in this arena.  Fundamentally, he defined ‘intelligence’ as ‘the ability to perceive or deduce information and place it in context to accomplish complex goals’.

Artificial Intelligence

So what about Artificial Intelligence (AI)? Tony described an ascending ladder of AI: it begins with ‘Narrow AI’ (AI aimed at a limited task, such as driving a car); continues through ‘Strong AI’ (or Artificial General Intelligence, equivalent to human intelligence in working out tasks and so on); and culminates in ‘The Singularity’ (AI equal to humans in all respects, but with the ability to self-learn and thus potentially to develop ‘super-intelligence’ beyond the human level). A big question for many at the meeting was the extent to which computers can learn ‘emotional intelligence’.

Achievement of The Singularity

The Singularity is the level of intelligence that robots have in those sci-fi movies where they take over the world, not always with encouraging results. No-one knows when this level of AI will be achieved, but some forecasters suggest it could arrive as soon as 2050.

The Singularity could spell the end of the need for human invention – humans could then just put their feet up and let the robots get on with it… but hang on, isn’t that a little scary? What if they started developing things that went against the best interests of humans? We came to ‘Perils and Precautions’ later…

The Singularity is a real possibility because of the development of so-called ‘neural networks’ that mimic the neurons and synapses of the human brain. These have enabled computers to become self-learning – to such an extent that DeepMind’s AlphaGo Zero system taught itself to play the complex Chinese board game ‘Go’ to a level at which no existing human or digital competitor could beat it.
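Neither the talk nor this review is about code, but for readers curious what ‘self-learning’ means in miniature, here is a purely illustrative Python sketch (not from Tony’s slides, and nothing like AlphaGo Zero’s real implementation). A tiny network’s weights – its artificial ‘synapses’ – are nudged repeatedly until it has taught itself a simple pattern (XOR):

```python
import numpy as np

# Toy network: 2 inputs -> 4 hidden units -> 1 output.
# The weights play the role of synapse strengths.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                   # how wrong is it?
    # Backpropagation: push the error back through each layer
    # and nudge every 'synapse' to reduce it slightly.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ grad_hid)
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))   # should end up close to [[0], [1], [1], [0]]
```

No one tells the network the rule; it works the pattern out for itself by trial and error, which is the principle – vastly scaled up – behind systems like AlphaGo Zero.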

One of the main benefits of neural networks is that they have enabled computers to analyse vast amounts of data – called Big Data, no doubt to impress us all with its scale. There is therefore no longer any need merely to sample data to reach conclusions (as, say, opinion pollsters do – and we know how good they are…) – you can instead analyse the whole lot and let the system identify the correlations itself.
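Again purely as an illustration (the columns and figures below are invented, not anything from the talk), ‘analysing the whole lot’ can be as simple as computing the correlation of every pair of columns across the full dataset and letting the strong ones stand out:

```python
import numpy as np

# Synthetic 'Big Data' in miniature: a million made-up records.
rng = np.random.default_rng(1)
n = 1_000_000
age = rng.uniform(18, 90, n)
income = 10_000 + 400 * age + rng.normal(0, 5_000, n)  # linked to age
shoe_size = rng.normal(9, 1.5, n)                      # linked to nothing
data = {"age": age, "income": income, "shoe_size": shoe_size}

# No sampling: correlate every pair of columns over all n rows.
cols = list(data)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r = np.corrcoef(data[a], data[b])[0, 1]
        print(f"{a} vs {b}: r = {r:+.2f}")
```

Run it and the age–income link leaps out (r close to +1) while shoe size correlates with nothing: the system has identified the correlations itself, with no pollster-style sampling involved.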

Perils and Precautions

Tony split the risks into three main categories:

Internal Perils – these could come from the way in which the system is set up in the first place. There may be unconscious bias in those developing the technology (a team of under-thirty male hipsters would adopt a different approach from a team of mature female specialists). Ensuring human control would be paramount.

External Perils – these would be social and economic, such as the loss of routine manual jobs. But there could also be significant risk of misuse: the prospect of self-learning systems controlling weapons of mass destruction – the so-called LAWS (lethal autonomous weapons systems) – is not an attractive one.

Existential Perils – when The Singularity occurs, we would be heading for a world in which super-intelligence exists, way beyond a human’s capability.  How could we ensure it develops in a benign way?

The Eleven Questions

Having explained the background, Tony then asked us to split into smaller groups to consider the Vienna Biennale questions, which are as follows:

1. How do we want to be human?

2. What do we want from technology?

3. How do we want to live together?

4. What do we want for our planet?

5. How do we want to consume?

6. What do we want to learn?

7. How do we want to work?

8. How do we want to dwell?

9. Which digital fundamental rights do we want?

10. Which rights do we want for robots/artificial intelligence?

11. How do we want to deal with a superintelligence?

To see how the groups responded to these questions in the feedback session, see here.

You can see the exhibition manifesto of the 2017 Vienna Biennale here.

The slides for Tony’s presentation can be seen here.