Wednesday 27 September 2017

Humanity on the Treadmill. Or, in the Blender


Came across an interesting "study" to be carried out by Y Combinator, the self-proclaimed "start-up incubator" here in the San Francisco Bay Area. The president of that outfit, Sam Altman (who bears no resemblance to the comical Erlich Bachman on the HBO show "Silicon Valley" - AVIATO!), announced about a year ago that he was interested in testing not how companies behave, but how people do.

The idea on Altman's mind is to run a small-scale test of the impact on people (and the feasibility) of introducing a "Universal Basic Income."

Briefly, the UBI is a concept by which an entity (most likely, the state) provides all of its citizens with a basic floor income. The idea is not new, and has been floated (and endorsed) by the likes of the uber-libertarian Milton Friedman. Under the scheme, every citizen would be granted a fixed monthly or annual stipend, irrespective of any sort of work.

It's an idea whose appeal, I find, increases the more I consider:

  1. The growing role of automation in delivering goods and services
  2. The rising competitive challenge from countries beyond our shores, where wage demands and standards of living sit well below what is necessary here to be considered "middle class"
  3. The increasing concentration of wealth at the top
  4. The ascent of near-human artificial intelligence
  5. My own empirical observation of the clash between the rising skills needed to "make it" and an apparent dystopian devolution in the skills actually present (well-educated, skilled people have fewer children at later ages, while people lacking education and skills have larger families at younger ages; think of the central premise of the movie "Idiocracy").

I think a lot - perhaps more than is healthy - about what is going to happen as the machines and the underclass grow. I've written more than once about my particular views with respect to AI. But in a nutshell:

  1. I reject the notion that machines will ever replicate human intelligence in anything more than a simulation (good)
  2. Artificial intelligence will not be a perfect simulation (good)
  3. It doesn't have to be (uh-oh)

Steve Wozniak some years ago, in talking about the future of machines, put it this way:
Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice.
The erstwhile mathematical and political blogger John Derbyshire several years ago described what is happening in the workplace and beyond in a somewhat dystopian view that has stuck with me since first I read it:

The assumption here is that like the buggy-whip makers you hear about from economic geeks, like dirt farmers migrating to factory jobs, like the middle-class engineer of 1960, the cube people of today will go do something else, creating a new middle class from some heretofore-despised category of drudges. But… what? Which category of despised drudges will be the middle class of tomorrow? Do you have any ideas? I don’t. What comes after office work? What are we all going to do? The same thing Bartleby the Scrivener did, perhaps, but collectively and generationally.
What is the next term in the series: farm, factory, office…? There isn't one. The evolution of work has come to an end point, and the human race knows this in its bones. Actually in its reproductive organs: the farmer of 1800 had six or seven kids, the factory worker of 1900 three or four, the cube jockey of 2000 one or two. The superfluous humans of 2100, if there are any, will hold at zero. What would be the point of doing otherwise? [emphasis mine]
Machines that can function as lawyers or doctors will still need people to make, train, and maintain them. But I suspect not on a 1:1 basis. Likely not even on a 10:1 or 100:1 basis. That's an awful lot of smart, educated people who are going to have to find something else to do.

The current trends are scary (a guy with graduate degrees working as a salesman at Macy's). It's going to be ugly even for the educated. Worse, as bad as such a future will be for the educated, it's going to be cataclysmic for those lower down the education scale. Someone capable of graduating from high school, or perhaps completing a couple of years of community college, is going to find that he is competing for jobs with men and women who are much smarter than he is.

The "solutions" (universal pre-school, 'free' community college) are going to bump into biological realities.  And fast.

If you want to see a real horror movie, forget about a guy in a hockey mask. Check out this video, entitled "Humans Need Not Apply."



This is where the UBI may come in.

The EU is already looking at the future - a vote was taken this past year in Brussels to examine taxing robots as they enter the workforce, as a means of providing for the human workers they will displace.

The research that Altman proposes will provide randomly selected people with $1,000 per month over five years. Those given the money will not be required to do anything in return for it, and at the beginning and end of the 60 months they will be interviewed about their behaviours and choices. Did you work? Doing what? What did you spend the money on?

How did you pass the time?

The sociological research question implied is: without work, will our lives have purpose? Is it an intrinsic part of humanity to create things? To do things other than entertain ourselves? What the ultimate abandonment of any sort of work will do to how people see themselves is a critical question.

But the question that goes unasked is this: what sort of impact will it have on the tiny number of 'producers'? Producers not in the sense that Republicans talk of "makers and takers," but those whose job it will remain to come up with ideas and visions. That shrinking set of individuals will potentially have enormous power and control.

I'm reminded of the image of the future from H.G. Wells's The Time Machine. The protagonist - never actually named - travels far into the future and encounters two races of creatures. One, the Eloi, look like perfectly formed, beautiful human beings, but they cannot speak and possess little more than the sort of intelligence one might expect of a domesticated animal.

The horrible truth is revealed, of course.

In our real future, under a UBI, when the overwhelming majority of people won't have to actually do anything to survive, will we decline as the Eloi did?

And what sort of Morlocks will tend to us? 

Steve Wozniak resolves to be extra nice to his own dog. But will our future Morlocks view us with compassion?

The history of mankind is not encouraging.
