Tuesday, 7 April 2015

A Future So Bright, You Have to Wear Shades?

When the End Comes, All That Will Be Left Is Us

Today, I came across this interview given by Apple co-founder Steve Wozniak.  IMHO, Wozniak was the real brains behind The Fruit Factory, whereas Steve Jobs was the guy who understood what the market wanted, or, perhaps more accurately, who told the market what it should want.

The Woz was being interviewed by an Australian journal after a recent announcement that he had applied for and received permanent residency in that country.  His son lives in Sydney, and Woz has apparently long fostered a desire to "live and be buried" in the Land Down Under.

Among the topics Wozniak held forth on was his increasingly dim view of the future of mankind in a world of artificial intelligence.  He joins an increasing list of impressive minds (Stephen Hawking, Elon Musk) warning us of the risk of summoning the demon, as they say.

The basic idea is quite simple and familiar to anyone who has seen one of the various films in the catalogue of dystopian futures (The Terminator franchise, Logan's Run).  Humanity creates computers and/or robots with true AI; the machines, not being subject to the same biological limits as human beings, quickly become "smarter" and faster than their creators, and subsequently become our overlords.

With catastrophic consequences:
Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently,
Woz imagines a few alternatives for human beings:
Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice.
Aside from the fact that one ought to treat one's pets "really nice" irrespective of how our future turns out, I remain unconvinced of the proposition of real "AI."  I've written before about how I view the threat of AI, but suffice it to say that I am an adherent of John Searle's argument against "strong AI."  Essentially, machines will never really be thinking or understanding in the sense that people commonly describe; rather, they will be made to simulate these processes.

But Wozniak, and Musk, and certainly Hawking are to be listened to when they warn of these risks.  Of course, machines do not need to do more than simulate intelligence with reasonable effect.  The problem here is what responsibilities we abdicate to machines: how much autonomy we give them, rather than how "smart" they are.

A more pressing question I would pose to Wozniak et al. is the immediate future of a workforce where machines can simulate the jobs we do.  A couple of recent publications, including the book Our Kids: The American Dream in Crisis by Harvard researcher Robert Putnam, examine the reality that winning and maintaining a place in the great American middle class has become increasingly challenging.  It's well-reported that wages have been more or less stagnant since about 1972, and that the trend is accelerating and reaching larger cohorts of Americans.

Many reasons are offered - the usual suspects of racism, corporate rapacity, educational deficiencies.  But what to make of the reality that machines that can simulate human beings with ever-greater skill can plainly replace us?  The argument since the rise of machines has been that automation is part of creative destruction - the automobile put the buggy whip maker out of business, but created jobs for the mechanic.  The ATM reduces our need for bank tellers, but requires people who can make, program, and maintain the devices.

The central problem with this argument is the assumption that there is no upper limit to human abilities; that we will forever be able to create new occupations.  That does not seem to me a sustainable view.  

John Derbyshire wrote in a book entitled (without irony) We Are Doomed:
The assumption here is that like the buggy-whip makers you hear about from economic geeks, like dirt farmers migrating to factory jobs, like the middle-class engineer of 1960, the cube people of today will go do something else, creating a new middle class from some heretofore-despised category of drudges. But… what? Which category of despised drudges will be the middle class of tomorrow? Do you have any ideas? I don’t. What comes after office work? What are we all going to do? The same thing Bartleby the Scrivener did, perhaps, but collectively and generationally.
What is the next term in the series: farm, factory, office…? There isn't one. The evolution of work has come to an end point, and the human race knows this in its bones. Actually in its reproductive organs: the farmer of 1800 had six or seven kids, the factory worker of 1900 three or four, the cube jockey of 2000 one or two. The superfluous humans of 2100, if there are any, will hold at zero. What would be the point of doing otherwise? [emphasis mine]
Machines that can function as lawyers or doctors - they will need people to make, train, and maintain them.  But I suspect not on a 1:1 basis.  Likely not on a 10:1 or 100:1 basis.  That's an awful lot of smart, educated people who are going to have to find something to do.  

If current trends (e.g., the guy with graduate degrees working as a salesman at Macy's) hold, as bad as such a future will be for the educated, it's going to be cataclysmic for those lower down the education scale.  Someone capable of graduating from high school, or perhaps completing a couple of years of community college, is going to find that he is competing for jobs with men and women who are much smarter than he is.

The "solutions" (universal pre-school, 'free' community college) are going to bump into biological realities.  And fast.

More from Woz, who spent a few years as a teacher after he became independently wealthy:
Computers in schools were very new when I was teaching, and they didn't really succeed. They didn't change how smart we'd come out thinking; we're just more powerful at getting answers and knowing things by using the internet
The idea that methods or tools will make people "smarter" is not grounded in reality.  These tools increase the reach of our existing abilities.  They extend them.  But they do not change their nature.  Better running shoes allow human beings to run faster; they cannot make us fly.

And in this case, the machines will always be able to carry out "mental" tasks faster than we can.

So I am not terribly concerned about the threat of AI to humanity.  The economic challenges posed by "smart" machines are going to be nasty, and they are going to arrive much sooner.  Some argue that they've arrived already.

I suggest that people like Wozniak and Musk should be much more concerned about the immediate future of human beings rather than the ultimate fate of humanity.
