Sunday, January 7, 2018

Matt Ridley: Artificial intelligence will be a symbiosis, not a replacement

In the early 1960s, at the Massachusetts Institute of Technology, there was a disagreement about what computers would achieve. One faction, led by John McCarthy and Marvin Minsky, championed “artificial intelligence”, believing that computers would gradually replace human beings. The other, led by Norbert Wiener and JCR Licklider, the man who oversaw the creation of the internet’s precursor, championed “human-computer symbiosis”, believing that computers would augment human beings.

“Man-computer symbiosis is an expected development in co-operative interaction between men and electronic computers,” wrote Licklider in a crucial essay published in 1960. “It will involve very close coupling between the human and the electronic members of the partnership.” In his arresting analogy, computers would be to us as fig wasps are to fig trees: symbiotic partners.

Looking around us today, it is clear that Licklider was right. Augmentation rather than replacement triumphed, says the historian Walter Isaacson in his 2014 book The Innovators, especially after it was taken up by the hackers, hobbyists and hippies of the west coast. By 1968, at what came to be known as the “Mother of All Demos”, the visionaries Stewart Brand and Douglas Engelbart were demonstrating to an audience in San Francisco such symbiotic concepts as the cursor and the mouse.

We are in the middle of a hype cycle about AI, and I think Licklider will be proved right this time too. The AIs we already use, though we do not call them that, are augmenting people, not replacing them. My smartphone recognises the faces of my family, adds to maps the names of restaurants or theatres it spots in my diary, and re-routes me around traffic congestion: it is my symbiont, not my nemesis. Note that AI is assisting consumers even more than producers.

The same symbiosis will be true of the AIs coming in the near future. At a Microsoft lab I have watched experimental systems do in seconds what takes a radiologist hours: delineating an organ on a series of scans in preparation for cancer treatment. At Google’s DeepMind in London, algorithms are preparing to save the search engine company a fortune in energy bills by rethinking its electricity distribution system.

Full replacement, by contrast, is proving much harder than augmentation. Even on motorways, the transition from a driver at the wheel, heavily assisted by adaptive cruise control, automatic braking, lane keeping and so on, to a computer chauffeur is massively problematic. Imagine: you have not driven a car for months, you are inside one doing 70mph on the M6, dozing gently, when a snowstorm blinds the sensors and an electronic voice says: “I’m handing back control to you.” That is roughly what happened over the Atlantic in 2009 to two Air France pilots who had many hours of flying behind them but little experience of flying by hand, because most of those hours had been spent with the autopilot on. They crashed.

Even where automation has replaced human beings, it has increased employment through spillover effects: more productive workers can afford to buy more goods and services, which creates more jobs for those who provide them. The economist William Baumol observed that employment actually rises in low-skill occupations when high-skill ones are automated.
In a report published last month, the Institute for Public Policy Research agrees that this has happened: “The evidence suggests that consumers have used their higher incomes to purchase relatively more services . . . low-skill service occupations (typically also low-tech) such as food service workers, cleaners, or recreational workers, have grown rapidly in recent decades.”

The IPPR thinks that this will continue with the next wave of AI: “There is likely to be tremendous potential for the productivity dividends of technological change to be redirected to the consumption of social goods and infrastructure, and expanding employment in the provision of these services . . . it is possible that technological change will also negatively impact on employment in services. But it is unlikely.” We have heard warnings of mass unemployment with every wave of automation since the threshing machine. In 1964 a US presidential committee of inquiry set up by Lyndon Johnson foresaw huge job losses because of “potentially unlimited output by systems of machines which will require little co-operation from human beings”.

However, the IPPR raises the prospect that AI will create more inequality, as the lowest-skill jobs and the highest-skill ones increase, while the middle-skill jobs decline: more managers being driven by more Uber drivers, but fewer taxi drivers and secretaries. Such job polarisation does show up in the statistics over the past two decades.

Yet it is the lawyers, professors and accountants whose jobs are under threat. Even if the jobs of carers and cleaners can be partly automated, I suspect that will increase the demand for their services. If, say, robots could do some of the work of looking after old people more cheaply, then people would be able to afford more carers to supply the rest of the need. The IPPR says that “the risk is therefore less mass joblessness and more the ‘paradox of plenty’. Technological change would make society richer in aggregate. However, capital-biased economic change would create a problem of distribution: those who can provide labour but do not own capital might have inadequate means of making a reasonable living.”

Here too history teaches a reassuring lesson. Automation has already shifted vast amounts of income from labour to capital. How has society responded? By sharing the work more equally. Consider, for example, the fact that Britain has very low unemployment right now. Yet because of shorter working hours, longer holidays, longer periods in education and longer retirement, the proportion of life that the average Briton will actually spend at work, as opposed to sleeping, consuming, learning, on holiday or in retirement, has shrunk dramatically, from about 25 per cent a century ago to about 10 per cent today. That is evidence that the benefits of automation are being fairly shared. If the percentage falls to 7 per cent or 5 per cent thanks to further symbiosis between computers and people, then everybody can gain.

Matt Ridley, a member of the British House of Lords, is an acclaimed author who blogs at www.rationaloptimist.com

