Thursday, 9 April 2015

The Race to the Bottom to Accelerate?

Who Will "Win" this Race in the Valley?

I've recently returned "home" to the San Francisco Bay Area after a couple of years living in Paris, France.  During my exile, I kept a very loose eye on the news and developments, which, as Eliza Doolittle ("My Fair Lady") correctly summed up - "without much ado, we can all muddle through without you" - carried on in my absence.

It seems that the "culture" of the Valley has become more widely and acutely discussed.  Picking apart its ins and outs has become almost a sport.  The "bro" culture.  The rush to make huge sums of money.  The rise (in the past) and triumph (in the present) of the "nerds."  Some of the analyses have been more accurate than others.  In particular, the most aggressively ignorant meme is that companies in Silicon Valley are "too white and male," a claim that is plainly belied by even a high-school level knowledge of statistics.  For example, the hot tech companies (e.g., Facebook, Twitter, Google) have workforces that are approximately 50% white, in a country where more than two of three people are white.  But then, it's not 'sexy' to publish the more truthful headline that "Tech Companies Are Too Asian."

Now, I do not subscribe to this sort of phony claim that a successful company is "too" anything - if the best minds overselect for Asians, then it stands to reason that a successful company would overselect for Asian employees.  You fish in the lake where the fish are.

The rhetoric has gotten shriller and the volume louder about why the tech world is insufficiently welcoming to women.  "Gender" bias is the hottest topic (aside: the abuse of the word "gender" is just one more surrender in the steady retreat of linguistic standards.  Why people are squeamish about using the real word, "sex," escapes me.)

The recent case of Ellen Pao, a former employee at the archetypal venture capital (VC) firm of Kleiner Perkins, has roiled the Valley.  Pao was enmeshed in a nasty battle with her former employer, ostensibly because she failed to be made a partner in the firm, complained about it, and eventually got fired.  Pao sued Kleiner in a multi-million dollar "gender" (sic) discrimination suit, which she eventually lost.

Setting aside the sense of self-worth of a 30-something with delusions of grandeur, the case has touched off many arguments in the Land of Lean In.

Pao made news today as the CEO of Reddit, a social media site headquartered up the peninsula in San Francisco.  Under the rubric of looking to level the pay gap between male and female employees at Reddit, Pao announced that Reddit will no longer engage in salary negotiations as part of its hiring process, citing data that men are more likely than women to negotiate pay, and to be more aggressive (and more successful) when they do.
Men negotiate harder than women do and sometimes women get penalized when they do negotiate. So as part of our recruiting process we don’t negotiate with candidates. We come up with an offer that we think is fair. If you want more equity, we’ll let you swap a little bit of your cash salary for equity, but we aren’t going to reward people who are better negotiators with more compensation.
At first blush, this strategy seems like a blow for equality.  

But is it?  

As I see it, the end-game here is to depress salaries stealthily.  Pao and Reddit appear to be championing equality (which, in a narrow sense, they are), and they will likely draw kudos for the effort.  But this equality is likely to come not because women are going to see more money, but because male employees are going to see less.

If one thinks about the issue for more than three seconds, it's obvious, isn't it?  What sort of employee has more leverage to demand higher pay, the entry or mid-level engineer, or the candidate being sought for upper management?  Did Pao herself accept the first offer from Reddit, or did she negotiate her pay and equity? 

The de facto outcome of this move, if duplicated, will almost surely shift more of the income away from the middle (who will no longer be able to negotiate for more pay) and towards the top (who have far more leverage to expect or demand more pay) - or worse, to C-suite employees whose pay is set by boards of puppets controlled by their friends.

Women are being used here as part of a long-term strategy, either consciously or unconsciously, to undermine wages.  It's a trend that is not new.  I've long believed that, if proper analyses were conducted on the wage structure in the developed world, the entry of women in large numbers into the workforce would almost surely be a significant factor in wage stagnation/suppression.  

We hear often about how real wages have been flat since 1980 - a date conveniently chosen because of the election of Ronald Reagan.  In fact, wages began flattening about a decade earlier, just after women began to move in larger numbers into the US workforce.

As the chart above shows, real wages closely paralleled productivity right up until the early 1970s, and then split.  Household income has continued to grow - slowly - but real hourly wages actually fell.  How is it possible that household income ticked up, but wages fell?

I haven't done the modelling, but I would be highly curious to see the results of anyone who has.  

The laws of supply and demand cannot be repealed - if the supply of workers is increased significantly, what effect is that likely to have on wages?  

Pao's efforts are just the latest salvo.  And make no mistake; it's not stochastic.

A year ago, the big tech employers in the Bay Area (Apple, Google) settled a lawsuit over their collusion not to recruit from each other's workforces, an arrangement that depressed wages for tech workers.

Negotiation over pay relies on leverage - if you're a highly valuable candidate, you always have the option to say, "No.  I don't accept this offer" and walk out the door.  But if the company has a gentleman's agreement that competitors also will not negotiate or "poach" employees, that leverage is gone.

The leaders of these companies are not stupid - Pao is not stupid, with degrees from Princeton and Harvard - so they surely must see where this is going.  

In this case, to more money for people like Ellen Pao, who, rather than being called out as the rapacious businesswoman she actually is, will be championed as a fierce feminist fighter.


Tuesday, 7 April 2015

A Future So Bright, You Have to Wear Shades?

When the End Comes, All That Will Be Left Is Us

Today, I came across this interview given by Apple co-founder Steve Wozniak.  IMHO, Wozniak was the real brains behind The Fruit Factory, whereas Steve Jobs was the guy who understood what the market wanted - or, more accurately put, who told the market what it should want.

The Woz was being interviewed by an Australian journal after a recent announcement that he had applied for and received permanent residency in that country.  His son lives in Sydney, and Woz has apparently long fostered a desire to "live and be buried" in the Land Down Under.

Among the topics Wozniak held forth on was his increasingly dim view of the future of mankind in a world of artificial intelligence.  He joins a growing list of impressive minds (Stephen Hawking, Elon Musk) warning us of the risk of summoning the demon, as they say.

The basic idea is quite simple and familiar to anyone who has seen one of the various films in the catalogue of dystopic futures (the Terminator franchise, Logan's Run).  Humanity creates computers and/or robots with true AI; the machines, not being subject to the same biological limits as human beings, quickly become "smarter" and faster than their creators, and subsequently become our overlords.

With catastrophic consequences:
Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently,
Woz imagines a few alternatives for human beings:
Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice.
Aside from the fact that one ought to treat one's pets "really nice" irrespective of how our future turns out, I remain unconvinced of the proposition of real "AI."  I've written before about how I view the threat of AI, but suffice it to say that I am an adherent of John Searle's argument against "strong AI."  Essentially, machines will never really be thinking or understanding in the sense that people commonly describe them; rather, they will be made to simulate these processes.

But Wozniak, and Musk, and certainly Hawking are to be listened to when they warn of these risks.  Of course, machines do not need to do more than simulate intelligence with reasonable effect.  The problem here is what responsibilities we abdicate to machines - how much autonomy we give them, rather than how "smart" they are.

A more pressing question I would pose to Wozniak et al. is the immediate future of a workforce where machines can simulate the jobs we do.  A couple of recent publications, including the book Our Kids: The American Dream in Crisis by Harvard researcher Robert Putnam, examine the reality that winning and maintaining a place in the great American middle class has become increasingly challenging.  It's well-reported that wages have been more or less stagnating since about 1972, and that the trend is accelerating, affecting ever larger cohorts of Americans.

Many reasons are offered - the usual suspects about racism, corporate rapacity, educational deficiencies.  But what to make of the reality that machines that can simulate human beings with greater skill can plainly replace us?  The argument since the rise of machines is that automation is part of creative destruction - the automobile put the buggy whip maker out of business, but created jobs for the mechanic.  The ATM reduces our need for bank tellers, but requires people who can make, program, and maintain the devices.

The central problem with this argument is the assumption that there is no upper limit to human abilities; that we will forever be able to create new occupations.  That does not seem to me a sustainable view.  

John Derbyshire wrote in a book entitled (without irony) We Are Doomed:
The assumption here is that like the buggy-whip makers you hear about from economic geeks, like dirt farmers migrating to factory jobs, like the middle-class engineer of 1960, the cube people of today will go do something else, creating a new middle class from some heretofore-despised category of drudges. But… what? Which category of despised drudges will be the middle class of tomorrow? Do you have any ideas? I don’t. What comes after office work? What are we all going to do? The same thing Bartleby the Scrivener did, perhaps, but collectively and generationally.
What is the next term in the series: farm, factory, office…? There isn't one. The evolution of work has come to an end point, and the human race knows this in its bones. Actually in its reproductive organs: the farmer of 1800 had six or seven kids, the factory worker of 1900 three or four, the cube jockey of 2000 one or two. The superfluous humans of 2100, if there are any, will hold at zero. What would be the point of doing otherwise? [emphasis mine]
Machines that can function as lawyers or doctors will need people to make, train, and maintain them.  But I suspect not on a 1:1 basis.  Likely not even on a 10:1 or 100:1 basis.  That's an awful lot of smart, educated people who are going to have to find something to do.

If the current trends (e.g., the guy with graduate degrees working as a salesman at Macy's) hold, as bad as such a future will be for the educated, it's going to be cataclysmic for those lower down the education scale.  Someone capable of graduating from high school, or perhaps of completing a couple of years of community college, is going to find that he is competing for jobs with men and women who are much smarter than he is.

The "solutions" (universal pre-school, 'free' community college) are going to bump into biological realities.  And fast.

More from Woz, who spent a few years as a teacher after he became independently wealthy:
Computers in schools were very new when I was teaching, and they didn't really succeed. They didn't change how smart we'd come out thinking; we're just more powerful at getting answers and knowing things by using the internet
The idea that methods or tools will make people "smarter" is not grounded in reality.  These tools increase the reach of our existing abilities.  They extend them.  But they do not change their nature.  Better running shoes allow human beings to run faster; they cannot make us fly.

And in this case, the machines will always be able to carry out "mental" tasks faster than we can.

So I am not terribly concerned about the threat of AI to humanity.  The economic challenges posed by "smart" machines are going to be nasty, and they are going to arrive much sooner.  Some argue that they've arrived already.

I suggest that people like Wozniak and Musk should be much more concerned about the immediate future of human beings rather than the ultimate fate of humanity.

Wednesday, 1 April 2015

Days of Future Past

Who says "you can't go back?"  

We are about to find out if the old adage is true or not, as recently, I changed jobs - and companies - to take an exciting new position.  The plus side: more responsibilities, more opportunities, more visibility.  More money.

But there is a price to everything, and in this case, that price means giving up the final few months I had living in Paris.  

It was a very tough choice. 

Needless to say, Paris is a fabulous place to live; we've enjoyed just about everything from the food, the history, the culture, and the architecture to the perks of living in the centre of Europe, a location that has allowed us to visit a dozen countries in Europe and Africa.

However, life forces choices, and being a grown-up means, sometimes, making decisions that you'd rather avoid.  As the sub-text of this blog paraphrases John Lennon, life is what happens when you're making plans.

Thus, I've had to say "adieu" to ma vie en rose dans la ville de lumière after a couple of years as an adopted Parisian.  (My wife and son get a temporary stay, as they will be coming along at the end of his school year this summer).

The fall will be difficult, but it will not be fatal.  We're coming back to the US, and in a twist of fate, it is a real homecoming of sorts.  We are moving back to the San Francisco Bay Area, where our son was born and where both my wife and I lived nearly all of our adult lives.

We are no longer going to be, as the title of this blog states, San Jose Refugees.

It is going to be an interesting transition - we've been gone from central California for nearly a decade now.  Just about everything has changed.  I'm no longer young, but decidedly middle-aged, a fact my son reminds me of with frequency and glee.  For a chunk of the previous time, I was single, and for virtually all of it, childless.  I have responsibilities that I hadn't, interests that I didn't, and aches in places I was unaware of.  The BMW convertible is gone.  No; a Honda Odyssey is not on the cards.  A sensible sedan likely is, however.

I visited my old neighbourhood this weekend - it looks very different of course.  Nothing is more constant in the Bay Area than change.  There are many new, fancy high-rise apartments in San Jose on lots where once stood one- and two-storey, mid-century eyesores.

Today, a shipment of personal items arrived at my temporary, corporate apartment from Paris after some delays at the customs office.  The foreman of the delivery crew asked me if in the past, I had lived on 11th St in San Jose, which of course I had.

Turns out, the guy lived next door to me 20 years ago.  

He was 11 at the time. Now grown, he has three children of his own.  

The world is, indeed, small, even if it's not as "flat" as Tom Friedman would have you believe.

Can you go back?  We are about to find out.

Back to the What?

It's Now 2015, and STILL No "Mr Fusion"

One of the fun things about being a parent is getting to re-live certain moments of your youth with neither guilt (due to unabashed indulgence in some of the less-than-adult pursuits) nor nostalgia.  (NB: recall that the root of the word "nostalgia" is Greek meaning a pain - 'algia' - one feels when remembering one's home - "nostos").  

Had a trip down the guilt-free memory lane recently watching the 1985 movie (I hesitate to call it a classic) Back to the Future with my nine-year-old.  Some of the jokes are not as funny as I remember them being, some of the plot twists (Libyan terrorists?) seem terribly dated, and the special effects often reach a level of cheesiness that makes Kraft Dinner look downright healthy.

One thing struck me, though, and that is, it is now 2015.  The movie was released in the summer of 1985 - nearly exactly 30 years ago.  

One of the chuckle-inducing themes of the story is that the protagonist goes back 30 years to 1955, and we all get to laugh at how primitive, corny, and backwards the people in the 1950s seemed.  Gee, my parents were square, huh?  Glad that I'm not like that.  

Did The Men of Texaco really come running out to service the Chevy when it pulled in?  Did the kids really say things like "swell" and "dreamboat?"

Well, the laugh is on me, as it is now my turn.  

I am sure my own folks had the same feeling, but wow.  Was 1985 really that long ago?  It hardly seems possible.  

As I think about all the "modern" items in the 1985-era McFly household (boom boxes, Sony Walkmen, touch-tone telephones, floppy disk drives, and cassette tapes), it does in many ways seem a different world.  Who could have imagined then the iPhone or wireless internet?  Or the internet, for that matter, which in those days was still a figment of Al Gore's imagination.

I am pretty sure my son - who has lived his entire life in an era where CD technology is largely in the rear-view mirror - regards the artefacts of my youth as Indiana Jones-worthy antiquities.  He's not yet weighed in on feathered hairstyles, parachute pants, or "Wake Me Up Before You Go-Go," but he does at least show some appreciation for Pac Man.

At the end of the movie, Doc Emmett Brown returns from the future in a flying car powered by trash converted to energy in "Mr Fusion."  Despite all the advances of the 30 years between Hill Valley circa 1985 and today, we still have not achieved flying cars.

Peter Thiel was, in this respect, correct.

Party on, Garth.

Monday, 9 February 2015

Whose "Boyhood" Is It?

I don't watch the Academy Awards each year, and in fact, most of the time, I have not seen even one of the movies nominated.

This year, however, I had a chance to see one of the critics' top picks, the work of Texas film-maker Richard Linklater entitled "Boyhood."  It's the story told over 12 years of the life of a young boy growing up in quasi-rural Texas.

The "angle" of the movie, and it is a clever one in my view, is that it was shot in sort-of real time.  Each year, the crew and cast would get together for a few days and shoot various scenes to capture the life of the lead character, Mason.

You almost literally see the kid grow up.

The life of little (and not-so-little) Mason - divorced parents, a mother who goes through several serial marriages, a dash of alcoholism, and despite all of that, relative calm - is unlike my own, but the movie to me was fascinating.  And as the father of a kid who is about 10 per cent of the way along the journey shown on film, captivating.

And excellent.

Linklater and the crew have, for the efforts, been nominated for many awards, and even the US President Obama has noted that the film is his favourite of 2014.

Oddly, it has come under attack from various quarters.

The Atlantic attacked "Boyhood" because it too narrowly focused on the youth of a white American kid, noting that the experiences of Mason are non-universal.

It's an odd criticism, really.  As someone who was, himself, once a young, white American, the film doesn't reflect my youth, either.

But do we expect, or even ask, movies to speak to the experiences of us all?  Last year's "12 Years a Slave" touched not a single one of my life's experiences, either, and in fact, slavery has been outlawed in the US for 150 years.

A second criticism, from the Wall St Journal, focuses on the crypto-sexism of the movie's point of view.  Apparently, as Mason goes through his life, that of his older sister fades into the background.  The movie becomes, for two feminist writers at Columbia University, a sort of Millennial Ophelia cri de cœur - a way of showing how society discourages women's voices.

The reality, as I see it, is that it would be odd for a movie called "Boyhood" to focus on the older sister, who about half-way through the film is off to college anyway.  The authors make a number of other errors, but again, I don't see why a film Linklater made about - ostensibly - his own experiences needs to be a vehicle for universal expression.

It's been said, more than once, that the average colour of a rainbow is white.  The current need to ensure that everyone and everything is represented risks turning unique works like "Boyhood" into a Kraft Dinner of bland, pointless pap.

Finally, from some corners, the film is attacked because there is no central crisis nor conflict.

But in a film about the life of a young kid, isn't that the point?  John Lennon said, once, that life is what happens when you're making plans.  Here, Mason is remarkable not because he discovers a comet, or invents the internet, or overcomes adversity with his own bare hands and the pluck of a teacher who left a lucrative corporate career to 'save' disadvantaged youths.

In other words, there is no Michelle Pfeiffer.

What we get instead is not melodrama or Karate Kid show-downs, but a real life of a sort.

I suppose that is what really, in the end, made the film so excellent for me.  It doesn't need a "Lifetime" hero waiting to swoop in and save the day.  As the story closes, as Mason heads off to the next stage of his life, in fact, one is left to ask if the day has been "saved," and "from what?"  One doesn't know what will become of Mason.  One only sees from where he has come.

Just like the rest of us.

Monday, 2 February 2015

Bang the Drum, Ringo

One of the things I enjoy in my leisure time is running, something I've been up to for 20 years now.  I try to mix up my routines a bit to add a little variety, but also attempt to have some day-to-day reproducibility to allow me to do some benchmarking.

I'm a mathematician by trade, and thus a significant chunk of my waking (and even some of my non-waking) mental energy is devoted to numbers.  To paraphrase Pooh-Bah from The Mikado:  "I cannot help it; I was born sneering."

Living as I have for the past couple of years in Paris, my courses take me past some familiar icons, but a significant piece is around the nearby Parc Monceau.  It's a pleasant, English-style garden with a fake pyramid, a Roman colonnade, huge plane trees, and a loop course.  It works well, as one loop around the park is almost exactly one kilometre, and thus I can run up to the park, do a few laps around it, and then head home to complete my 10k.

Parisians have taken to jogging - a surprising thing to say for someone who, just a few years ago, was laughing as Nicolas Sarkozy's running routine was derided as too Anglo-Saxon.

The French, despite being seen as avant-garde and progressive, are in fact quite a socially conservative, conformist lot.  I wrote last summer about my observation that virtually everyone running in Parc Monceau ran in the same direction, circling the park in an anti-clockwise sense.  When I headed the opposite direction, I was greeted with stares that ranged from bewilderment to shock to, in some cases, disapproving angst.

It was almost like the scene from Midnight Express where Brad Davis decides to march against the direction of the other prisoners.

WELLL... it turns out that there is, in fact, a method to the madness.

In the local Direct Matin this morning, the daily "Savez-Vous..." question and answer section asked about why in track and field, the runners always circle the track anti-clockwise.

According to the article, the direction is not par hasard but instead is rooted in brain physiology.  When the modern Olympic Games were revived in the late 19th century, the tracks ran clockwise.  The athletes complained.

According to the article, the brain's centre of balance resides in the left hemisphere, and thus the right side of the body dominates for most people.  When running clockwise, the eyes, legs, and balancing mechanism are turned opposite to where our internal gyroscopes are needed.

The article went on to note that, in events where people run clockwise rather than anti-clockwise, the body feels more stress, and times are slower by an average of two seconds per 400 metres.  For a race like the "metric mile" (1600 m, four laps of the track), that adds up to roughly eight seconds - a significant obstacle.
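The arithmetic behind the penalty is easy to sanity-check.  A minimal sketch (the two-seconds-per-lap figure is the article's number as reported, not something I've verified):

```python
# Rough cost of running clockwise, using the article's reported figure
# of ~2 seconds lost per 400 m lap (their number, not independently verified).
PENALTY_PER_LAP_S = 2.0
LAP_M = 400

def clockwise_penalty_s(distance_m: float) -> float:
    """Estimated extra seconds when a race of distance_m is run clockwise."""
    return (distance_m / LAP_M) * PENALTY_PER_LAP_S

for d in (400, 800, 1600, 10_000):
    print(f"{d:>6} m: ~{clockwise_penalty_s(d):.0f} s slower")
```

Over a 10k, the same figure would compound to nearly a minute, which perhaps explains why the stares in Parc Monceau were so pointed.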

It also explains, I think, why open-skate sessions, among other things, require skaters to circle anti-clockwise.

So here, the French desire for conformity has science to back it up.

Now, if we could only answer why they always wear black and continue to see smoking as glamourous...

Tuesday, 13 January 2015

Silicon Valley Under the Dome (Again)

I have been living away from the so-called Silicon Valley for almost a decade now; I live in Paris, and thus I don't miss life back there terribly, but I do try to keep up with the scuttlebutt.  Over the years, I've written several posts about the Bay Area, where I spent most of my adult life.  For a blogger, the Bay Area provides steady fodder for discussions about innovation (or the lack of it), demographics and the impact they have, the evolution of the word "entrepreneur", and social issues like privilege and gentrification.

One of the hot topics right now in the Valley - aside from how 'hot' (and difficult) the housing and jobs markets are (really, issues with nearly predictable cycles - you could cut and paste San Jose Mercury News articles from 1997, 2005, and 2014 almost verbatim) - are the questions of what "makes" a startup/tech company successful, and why the rewards seem to be going to a statistically skewed group.

Further evidence for the maxim that to err is human, but to really muck up an analysis requires a human from Harvard.  One from the Harvard Business School is a daily double.

To wit, this article in the recent Harvard Business Review (motto: "Mis-measuring the social sciences since 1922").  The click-bait title of the article, "The Myth of the Tech Whiz Who Quits College to Start a Company," poses itself as a sort of myth-busting piece in the vein of Malcolm Gladwell.  The article sets up as its pins three 'common myths' about tech founders: that they are young, that they are technically trained, and that they were graduated from a prestigious, local university.

One is confronted immediately with the inherent contradiction that the title (that tech founders are drop-outs) contrasts with popular myth three.  But let's set that to the side.

Unsurprisingly, HBR notes that "the data tell a different story." Unsurprising, because if the data from the analysis supported the story, I reckon that the article would not have been published.

The mythos is summarised un-succinctly in the following narrative form:
The verdict follows a familiar line: for better or worse, successful tech sectors are products of young entrepreneurs, who disrupt whole industries without ever having worked in them. 
These founders, in turn, are invariably portrayed as technical experts. Science, technology, engineering, and math (STEM) education is now at the center of entrepreneurship policy, and cultivating technical talent has become an important goal of the White House’s Office of Science and Technology Policy (OSTP).
Where better to get that technical education than at a great local university? Stanford is the classic example, with hundreds of future Silicon Valley entrepreneurs passing through its Palo Alto campus. A university of this caliber not only creates great talent, the theory goes, but also helps a region to retain it. It makes sense, then, to assume that without a world-class university nearby, a city’s tech sector cannot thrive.
The authors of the story, who work for a consulting company called Endeavor Insight, turn to data on the New York tech sector that are available on public sites such as LinkedIn, AngelList, and Crunchbase in an attempt to analyse the veracity of the myths.  A sample of 1600 "tech founders" in New York City forms the analysis cohort.

The first item to fall, it turns out, is the straw-man about being a college drop-out - the authors note that dropout founders are "the exception, not the rule."  No numbers are given, but any reasonable analysis would have to look at the numbers in comparison to some sort of control.  Of course, it's unlikely that the majority of founders would be college dropouts (despite the fantasies of, e.g., Peter Thiel), but how does the distribution of dropouts vs. degreed founders in tech compare to the tech workforce in general?  To the population of founders of non-tech companies?  Any sort of reference?
This conclusion is an example of the sound of one hand clapping.

The next item under the microscope is the question of whether tech founders are particularly young.  The conclusion: yes, they are young, but 'seldom fresh out of school' (whether dropping out or not - whoops).  The data are presented as a histogram below:

There are a handful of problems with this analysis.

First, the authors offer no objective definition of what "young" means.  Myth Number Two is stated as "they are young."  In fact, they are young, so HBR have failed to knock over their own straw-man.  But what defines "young?"  A post hoc definition of "fresh out of school" is offered.

Second, from a statistical point of view, the authors use the average, when plainly the distribution is quite skewed.  That average (31 years) is being pulled to the right by a group of people clearly at least a standard deviation and a half above the mode of the distribution.  When talking about the myth of the "typical" tech founder, does it make sense to look at the average age of a skewed distribution, or at where the bulk of the data are?

Put another way, the average colour of a rainbow is white.  It's an inappropriate measure here.
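To see the point concretely, here is a quick sketch with synthetic, right-skewed "founder ages" - made-up data for illustration, not the HBR sample - showing how a long right tail drags the mean above the median:

```python
# Synthetic, right-skewed "founder ages" (illustrative only - not HBR's data).
# A log-normal bump added to a floor of 22 mimics a mode in the mid-20s
# with a long right tail of older founders.
import random
import statistics

random.seed(0)
ages = [22 + round(random.lognormvariate(1.2, 0.7)) for _ in range(10_000)]

mean_age = statistics.mean(ages)
median_age = statistics.median(ages)

print(f"mean   = {mean_age:.1f}")
print(f"median = {median_age:.1f}")
# The right tail pulls the mean above the median, so the mean
# overstates the age of the "typical" member of the sample.
```

For any distribution skewed this way, the median (or mode) describes the "typical" case far better than the mean does.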

The median of this distribution is around 27 or so.  That's a bit younger than the median age of a professional baseball player (28.8), and a bit older than the median age of an NFL player (25.5).  The 'typical' founder of a tech company is younger than the 'typical' professional baseball player, and a bit older than the 'typical' NFL athlete.

And third, there is no comparator.  How, for example, do the tech founders stack up against similar, non-tech company leaders?

The whole "analysis" is incredibly sloppy, and fails even to support the claims made by the authors.

The next "myth" attacked is that tech founders are heavily STEM (science, technology, engineering, and mathematics) trained.  Surprisingly, only 35% majored in one of the STEM fields:

Tech founders are also much less technical than conventional wisdom leads us to believe. We divided New York City tech founders’ college majors into two categories: STEM (science, technology, engineering, and mathematics) and non-STEM, and found that just 35% studied STEM fields, while 65% majored in something else. In fact, these founders were more likely to study political science than electrical engineering or math.
Is this a reasonable analysis?

According to data published recently by NPR, about 2.5% of US graduates in 2010 took their major in computer science.  1% were maths majors.  Engineering was 5%.

By comparison, business and economics degrees were held by 25% of college graduates.  History and humanities were 5%.

It's obvious that the founders of successful tech companies are far, far more likely to derive from scientific disciplines, when one controls for the sample pool, than from business or psychology.
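Controlling for the pool is simple arithmetic.  Using the figures quoted above (HBR's 35% founder share; NPR's 2010 graduation shares, treated here as rough, illustrative base rates), STEM graduates turn out to be heavily over-represented among founders:

```python
# Over-representation of STEM among tech founders, relative to the
# pool of graduates.  Figures are those quoted in the text.
stem_share_of_founders = 0.35                    # HBR: 35% of founders
stem_share_of_graduates = 0.025 + 0.01 + 0.05    # CS + maths + engineering

ratio = stem_share_of_founders / stem_share_of_graduates
print(f"STEM graduates are over-represented among founders by {ratio:.1f}x")
```

A ratio of roughly four to one is hardly evidence that founders are "much less technical than conventional wisdom leads us to believe."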

Worse, the examples given to illustrate the point call into serious doubt the definition of "tech" used by HBR.  In making their case, the authors cite Alexandra Wilson (MBA founder of Gilte Group, an "e-commerce business") and Neil Blumenthal, founder of on-line eyeglass retailer Warby Parker.

It's worth pointing out that neither Gilte Group nor Warby Parker is a "tech" company.  They are essentially marketing companies.  Gilte provides an on-line platform for consumers to purchase luxury brands; Wilson (in fact, not the founder, but rather, one of four co-founders) brought to the company her experience with brands like Bulgari.  One of the other co-founders is a man named Dwight Merriman, who provided the code and oversaw the actual TECH.

Both Warby Parker and Gilte Group may be highly successful companies, but calling them 'tech' is a stretch at best.  FedEx delivers eyeglasses and clothes; I would not call it an ophthalmologist or a design house.

As an aside, as I have written many times before, including most recently here, Silicon Valley has changed from a place where real new ideas and technology were created, into a place that is largely slick marketing masquerading as innovation.  It used to be made of companies like HP or Intel; it's now Yo dot Com and Twitter.  I asked then, and still ask, is the Valley out of big ideas?

As Peter Thiel famously said of the way actual innovation is slowing, "We wanted flying cars, instead we got 140 characters."

One place where HBR get it 'right' is that they are moving the discussion of what an "entrepreneur" is away from the false idea one gets in reading the self-congratulatory press releases out of Palo Alto and back to what a real entrepreneur really is.  Living in France and speaking French, I find it clear that an actual entrepreneur is a person who is in the middle of bringing together the ideas, marketing, funding, and operations to produce the product.  It's not, despite what Stanford undergraduates think, a guy with a brilliant idea and the technical chops to realise it.

And ultimately, what the failed "analysis" that HBR offers reveals is not that the founders of tech companies are not young, not technical, and not tied to a university, but rather, that today - perhaps as yesterday - there is an enormous gulf between an idea and a successful business.  And this is where the MBA comes in.  That is the actual role of the entrepreneur.

Tech companies - even Gilte and Warby Parker - need to have a tech head to survive (in both cases, at least one of the co-founders was, in fact, a young STEM graduate).  There is also always going to be a guy with an MBA talking about synergies, share of voice, and channels.

Monday, 12 January 2015

Daddy, How Come there Are SO Many People Named "Charlie?"

"Daddy, see that man over there?  And that one?  And that woman?  They all aren't really named Charlie, are they?  And why are they all wearing name-tags that say 'Je suis Charlie'?  Are these people all going to a first day of school or something?"

No, son.  In fact, I doubt any of them is actually named "Charlie."  And they aren't going to a 'Welcome Back to School' day at all.  They are going to a march. A march in support of Charlie Hebdo, and to show support for freedom.

"Huh?  Who is Charlie Hebdo?"

Charlie Hebdo is the name of a satirical magazine; last week, some men went to their office with guns and attacked them because they didn't like what the magazine had printed.

"Oh; my teacher was talking about that this week in class.  Our school had a moment of silence.  I guess that this is what they were talking about.  What did Charlie Hebdo print that made these guys so angry, and why did they shoot the cartoonists?  And what is 'satire' anyways?"

Well, son; satire is a kind of political way of talking, where you try to show how silly some ideas that people take really seriously actually are.  Remember Gulliver's Travels?  Where Gulliver, a giant, is tied down by hundreds of tiny people?  It doesn't look like a serious book, but it was, and in this case it was not laughing at little people; instead, it was pointing out how lots of people with small minds use hundreds of tiny rules to stop great people from doing things.  That was a kind of satire of rigid society in England in those days.

Here, the cartoonists of Charlie Hebdo were using satire to make fun of a group of people who try to use their own religious beliefs to control others.  The people whose beliefs were mocked got angry, and they decided to kill the cartoonists, partly to punish them, and also to make other cartoonists afraid so that they would not make similar cartoons in the future.

"I see.  Do you think that it will work?  Will cartoonists stop drawing cartoons because it may make these guys mad again?"

No.  I don't think it will work.

THAT is why these people are marching.  They want to show to those who think it is OK to control what other people think by threatening them that they will fail.  Here, the French people are saying: "No.  I am not going to let your threats scare me into being quiet.  I have a right to think and to say and to draw cartoons as I like.  You think that hurting these people is going to control how I think, and I say to you that it won't."

"OK.  So it's important then for all these people to stand together.  But why do they say 'Je suis Charlie'?"

They say "je suis Charlie" not because they agree with Charlie Hebdo, or even like what Charlie Hebdo had to say.  What they are saying is that, like Charlie Hebdo, we all stand up together for the freedom of thought.  Though our ideas are different, our values are not.  If you want to attack the cartoonists at Charlie Hebdo because you don't like what they say, then you have to attack me, too.

"Hmmmm... OK.  But you know, I've heard you complain when newspapers or other people make fun of your beliefs.  My teacher even said that Charlie Hebdo in the past has made fun of our religion.  They shouldn't make fun of people, should they?  You don't think it's OK to say things that hurt your feelings, do you?  You don't even like their cartoons, after all."

Son, this is the point.  I don't like it when people say things that hurt me.  I don't think it's good when they say things that insult other people.  It's generally not a good idea to go around trying to offend people, and the world generally is a better place if people try to be nice to each other instead of trying to be mean.

But this is important for you to understand.  Because I don't like something, it doesn't mean I am going to tell someone else that they can't say it.  I can ask them not to say mean things.  I can control what I say and try to be nice - though I know at times, I am certainly going to say things other people do not like.  But if Charlie Hebdo or anyone else writes something that someone is hurt or offended by, then that person should respond explaining why they think that Charlie Hebdo's comments are wrong.

If there are bad ideas, and you think your ideas are better, then let the other guy have his say, tell people why you think he is wrong, and then let people decide who is right. That is how "the market place of ideas" works.

If you do not like an idea, you pick up a pencil, you do not pick up a gun.

And this is why it is so important for everyone to be at this march.  Freedom of thought and freedom of speech mean defending ideas that you don't like.  It's easy to stand up for viewpoints you agree with - the cartoonists were not attacked because they drew nice cartoons about puppies.  That sort of speech does not need anyone to defend it.

No.  Freedom of thought and freedom of speech exist to protect ideas that are not popular.

And never, never, forget this: what is popular now may not be tomorrow.  If someone tries to ban speech or to attack a cartoonist that they disagree with today - even one you don't like or agree with - well, they might be coming after you some day.

So, it's really good then, that so many people are going to go out and stand up for Charlie. 

Even if they aren't named Charlie.  

Thursday, 8 January 2015

Another Shooting: Where Do We Go From Here?

Warning: Drawing Kills
As just about everyone knows by now, the city of Paris was struck yesterday by its most serious terrorist attack in 50 years, measured by the loss of life.  In the end, 12 people were executed by at least two and possibly three well-armed, disciplined, and obviously well-trained assassins.  There were eight cartoonists, a couple of editors, and two policemen killed.

Because, apparently, the magazine had dared to print cartoons that radical Islamists in France and undoubtedly abroad as well found offensive.

The weekly journal Charlie Hebdo, which takes its name partially because it was an outlet for Peanuts cartoons (hence, Charlie) and partially due to its frequency of publication ("hebdo" in French is short for hebdomadaire, or weekly), has long been an instrument of satire, some plainly in questionable taste.  Its targets were somewhat indiscriminate; one week it would be President Hollande.  The next, Pope Francis.  Frequently, it poked fun at Islamists.

The blow is like a crash of thunder in a country whose motto is "liberté, égalité, fraternité" ("liberty, equality, and brotherhood").  The killing of satirists is a chilling attack on free speech and free thought in a very real way, unlike the sorts that are often held out as examples elsewhere.

I am struck by the reaction on a couple of points.  First, the hypocrisy, to a point, of the US government, which now seems ready to stand, maudlin shoulder to maudlin shoulder, with a magazine that just three years ago it attacked publicly; in the words of then spokesman Jay Carney:
We are aware that a French magazine published cartoons featuring a figure resembling the prophet Muhammad, and obviously we have questions about the judgment of publishing something like this,
Another truly bizarre, through-the-looking-glass reaction can be had from Nicholas Kristof, columnist for the New York Times.  Like the famous "#IllRideWithYou" forelash against imagined anti-Moslem violence in Australia following another terrorist attack in Sydney last month (it's not even accurate to call it a backlash, as it turns out the story that provoked the incident was made up almost out of whole cloth), Kristof talks about Islamophobia and how people jumped to conclusions that the perpetrators were Islamist when others (Christians, Jews, right wingers) had grievances as well.

Kristof has become almost a caricature of himself.  His comment about why people concluded it was Islamist radicals ("(w)e don't know exactly who is responsible") stands in direct conflict with the facts.  It was *immediately* known that the killers shouted Islamist slogans ("Allahu akbar," "On a vengé le prophète Mohammed" - "We have avenged the prophet Muhammad"), and they were described by eyewitnesses immediately.

I live in Paris and this was documented on social and other media within minutes.  In fact, it turns out that the perpetrators were exactly who we thought that they were, and also in fact, at least one was known to the police for radical activity.

Kristof is making excuses and projecting in a nauseating and clueless way.  Why would anyone conclude that the killers were Jewish or Christian?  This column would not have been written by a sane, informed, non-tendentious writer.  The information was available.  He works for a newspaper, for Christ's sake.

People are asking what it all means to France, and there is right now a lot of bravado about standing together in solidarity and the importance of freedom.

As a foreigner living in France, I am going to take a contrarian view and say that in the long run, it will likely mean little more than a continued erosion of freedom of speech here.

Politicians like François Hollande will issue mawkish statements about solidarity with the writers, and the confrères of Charlie Hebdo will replace their banners with black ink as an homage, but in the end, the key players are not going to do more than STOP printing cartoons and other material they think is scandalous.

Recall that France, unlike some other nations, does not have real freedom of the press.  Writers and actors are prosecuted for making statements that "incite hatred" (witness the fracas last year with the "comedian" Dieudonné Mbala Mbala).  Even today, one of the two leading papers here in Paris (Le Figaro) has re-tweeted several times how people can report racist or offensive tweets or posts on social media.

I expect that we will see more - not less - of this.

One thing further to keep in mind is this: a few years back, when Jyllands-Posten in Denmark printed a cartoon deemed "offensive" to some Muslims, newspapers in France - Le Monde and Le Figaro included - refused to show the cartoon even in stories about the controversy.  Charlie Hebdo was one of a very small number who ran the cartoon.

Sadly, I think that that is likely to be what this event will mean ultimately here.

And I am less concerned right now about the rise of xenophobia or votes for Marine Le Pen and more concerned about self-censorship.  And surely more concerned that 12 people, including two policemen, lost their lives.

Wednesday, 7 January 2015

Does "Economics" go with "Health?"

The Doctor Won't See You Now
I was reading a story today that was recently broadcast on the US NPR network.  Reprinted was an interview from the show "Fresh Air" with Steven Brill, a lawyer and writer who has recently focused on the impact of health care costs in the US.  

Brill in 2013 wrote a 24,000-word cover article for Time magazine entitled "Bitter Pill: Why Medical Bills Are Killing Us," in which he outlined the ways by which (he sees) hospitals and others in the health care business rig the US health care market to enrich themselves whilst at the same time driving up costs and reducing access to medical care.  Brill has recently extended his earlier essay into a full-length book, broadening its targets to include the political battles around the Affordable Care Act ("Obamacare"), including pharmaceutical company lobbyists.

The interview with Brill is fascinating, and I highly recommend reading it.  It puts the work into a more personal context by framing the book around a recent incident where Brill had been diagnosed with a heart condition that required surgery - expensive surgery - to fix. 

The piece really frames well the essence of the situation - and one that, as a health economist, takes up the lion's share of my waking day.

That issue is that people in general, and Americans in particular, pay an enormous amount of money for "health care" without really understanding what they are paying, what the risks and outcomes are, and how to evaluate them.  Brill, a lawyer and not a doctor, health economist, or medical ethicist, doesn't really frame it this way, but that's more or less what comes across.

The quote that took my interest most directly was near the top:
"At that moment I wasn't worried about costs; I wasn't worried about a cost benefit analysis of this drug or this medical device; I wasn't worried about health care policy," Brill says. "It drove home to me the reality that in addition to being a tough political issue because of all the money involved, health care is a toxic political issue because of all the fear and the emotion involved."
"A patient in the American health care system has very little leverage, has very little knowledge, has very little power," Brill says. [emphasis mine]
This is really an insoluble issue from my vantage point. 

In my reading, his first point is either vacuously true or vacuously false, depending on how you look at it.  ANY consumer of any product, in deciding whether to purchase the product, is empirically, if sub-consciously, making a cost-benefit analysis.  It doesn't matter whether it's an iPad, a candy bar, "the clapper," or open-heart surgery.  If you decide to buy the iPad, you are implicitly deciding that the value of having the device is greater than the money you are handing over to get it, and thus, by extension, than the other items you will not be able to afford by making this purchase.  By electing for heart surgery, you are making the same implicit calculation - the surgery itself carries risks of death on the operating table, as well as costs you will have to bear, even if only in the deductible, co-pay, and other expenses you will incur.

In this sense, there is nothing fundamentally different about choices for consuming a "health care service" versus other choices.

On the other hand, there are fundamental flaws as well - if you are insured, then the overwhelming majority of the actual, economic costs of your choice are obscured from view, and most of the health risks of the procedure are poorly (at best) understood, and thus your choice is made in an environment of unequal information.  

Doctors (the providers) will, in almost every case, have an enormously larger reservoir of information than patients (the consumers) on the true risks, benefits, and costs.  This information asymmetry means that, while you surely are making a cost-benefit analysis, that analysis is sufficiently ignorant that typical "market" analytics do not really apply.

Does it Even Make Sense To Talk about the "Health Care Market?"

What comes out in reading Brill's piece to me is multifaceted.

First, his basic argument is freighted with opinions that he would masquerade as fact.  He makes an incredible statement that "(t)he insurance companies are not really the bad actors in this movie."  This is contrary to the back-story that, when he got his bills from United Health Care (his insurer), there were 36 different letters in 36 different envelopes, filled with conflicting, confusing, and contradictory information.  The climax of this thread is reached when Brill, in the course of working on his book, confronts the CEO of UHC with a bill that states that there is $0 billed, $0 paid (by the insurer), and yet $154 owed by the patient.  The CEO has no way to explain - or even understand - this situation.

The insurers, he says, are incompetent and terribly managed, and create and send out bills that they cannot understand, and yet they are "victims" just like everyone else.

I really don't know what more to say about this than the fact that the insurers are, in essence, the real customers of health care in the US.  THEY are what in health economics are called the payers.  They negotiate the prices; they decide what medicines and treatments you as the end consumer will be able to have access to.

It is close to axiomatically true that, in an economic system, where the negotiator is incompetent, terribly managed, disorganised, and ignorant of the costs and benefits, then the price structure is going to be distorted.  Perhaps, terribly distorted.

Brill goes on to make some observations that are, to be kind, wrong.

He makes the following claim that simply isn't so:
But they're (the insurance companies) sort of stuck in the same ditch we're in, which is being forced — unlike the payers for health care in any other developed country on the planet — being forced to pay uncontrolled, exorbitant prices and high profits that are generated by nonprofit hospitals and by drug companies and medical device makers.
It's wrong on two counts.  The first is the claim that foreign payers - in this sense, he is talking about single-payer systems like the UK or France - "control" prices in a way that American ones cannot.  In fact, prices are negotiated in most single-payer systems in ways that are not fundamentally different from the way they are in the US with large payers like UHC.  France, which is often cited as offering a model for high-quality care at lower cost, goes through a quite complex system of cost-benefit analysis wherein the medical value and innovation are evaluated, and then price negotiations, based on the level of need, benefit, and innovation established, occur.

The drug companies are not forced to accept the prices negotiated; if they do not, they simply do not make the products available in France.  Similar systems exist in the other markets.  Different countries assess value differently, and hence, prices are not uniform across Europe, nor is access to treatments.  For example, I would far prefer to get a highly aggressive form of cancer in Germany than in the UK.

In the US, large payers have a similar tool available - UHC can easily decide that the value offered by a treatment is not sufficient for the price at which it is offered, and thus decide not to reimburse it.

The difference is that in France, there is a single payer, so it has larger influence on the price.  In the US, if UHC does not choose to reimburse a product, one of its rivals - perhaps Aetna or Kaiser - will.  One presumes that an insurer that provided access to more, and more expensive, treatments would likely have higher premiums and higher deductibles.  Consumers could then, hypothetically, make a choice on whether the additional cost is worth the additional coverage.

In short, US healthcare payers have precisely the same tools to control costs at their disposal as their European peers.  They choose not to deploy them precisely because they know their customers - people who are ready to demand Cadillac service at a Chevy price.

This is where the problem of information asymmetry comes in, and where something like comparative effectiveness could be a big help.  Other countries with "health technology assessment" (HTA) systems in place - like France, Canada, or the UK - make data-driven analyses of the costs and benefits of competing treatments.  The process is not perfect, and it is not entirely "transparent," but it provides a somewhat information-based way to assess what to reimburse and what not to.

When such a system was discussed as part of the ACA in the US, it was quickly shot down under a fusillade of remarks about "death panels."

Worse still, Brill talks about "exorbitant" prices and "exorbitant" profits.  Indeed, the prices of some treatments are eye-popping.  The current enfant terrible is Gilead Sciences, which recently launched a treatment called "Harvoni" that provides a near 100 per cent cure rate for chronic hepatitis type C (hep-c).  The price tag is close to $100,000.

Is that, however, "exorbitant?"  Are Gilead's profits "humongous?" 

These are intrinsically subjective questions.  $100,000 is a lot of money.  I would bet that the house Brill lives in (he made $45 million selling one of his companies), if offered at $100,000 would not be called an "exorbitant" price.  

In a sentence, "price" and "value" are not the same thing.  They're related, but the relationship is not necessarily a strong one.

This is the sort of question that economics is set up to answer, and why health economics exists.  But consider Brill's own thought exercise on the price of health care and whether it's "reasonable":
The first way to look at it, which is certainly the way I was looking at it the morning after my surgery and ... eight days later when I walked out of that place a healthy person, is that those people saved my life. So in that sense, would I beg, borrow and steal or insist that my insurance company beg, borrow and steal to pay for all that? Yes. Were the people there highly professional, highly skilled? Did they care a lot about me? Yes. So in that sense, it's reasonable.

Your ability to live - the most basic urge of any living thing - is directly related to the care you get when you are sick.  And given that Brill would "beg, borrow, and steal" to access care that he admits "saved (his) life," is economics really an appropriate tool?  Is it a useful one?

In classical economic theory, demand for a product is either partially or completely elastic (the higher the price, the lower the demand) or partially or completely inelastic (demand is constant irrespective of price).  Health care demand is highly inelastic with respect to price.  If you *need* something to live, you are not going to reduce your demand for it unless price distortions become extreme.
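The elastic/inelastic distinction can be sketched with the standard arc-elasticity formula and some invented demand figures (the goods and numbers below are purely hypothetical; the convention is that |E| > 1 means elastic and |E| < 1 means inelastic):

```python
def price_elasticity(q0, q1, p0, p1):
    """Arc elasticity of demand: % change in quantity / % change in price,
    using midpoints so the result is symmetric in direction."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Double the price of each hypothetical good and observe demand.
# A luxury gadget: buyers walk away when the price doubles.
gadget = price_elasticity(q0=1000, q1=300, p0=500, p1=1000)
# A life-saving drug: demand barely moves, whatever the price.
drug = price_elasticity(q0=1000, q1=950, p0=500, p1=1000)

print(f"gadget elasticity: {gadget:.2f}")  # |E| > 1: elastic
print(f"drug elasticity:   {drug:.2f}")    # |E| < 1: inelastic
```

The drug's near-zero elasticity is exactly the point: when demand will not respond to price, price stops carrying information about value.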

In this sense then, the most basic law of economics does not apply to medical care.  Precisely because the products have value, their prices cannot follow a free market scheme.  

This turns the price/value relationship on its head.  Unlike say, an iPod or a CD or a type of automobile, value is decoupled from the argument of cost.  THIS has distortive effects on policy and on the language of the discussion.

If one steps back from the heat of the argument, this is obvious - Harvoni costs $100,000, an "exorbitant" price.  Recently, a cel from an earlier Tintin cartoon sold for many times that price.  The first has the power to save lives, while the second (as much as I like Tintin comics) has absolutely no intrinsic value.

The standard argument companies make for their high prices is that these are needed to finance R&D (itself a not completely honest answer); seldom does a company like Gilead respond that the price is high because the value of what they offer is high, which is the more honest answer.

But it is an unpalatable one, because we claim that we cannot place a value on life.  That claim is itself plainly false (cf. the arguments of Princeton bio-ethicist Peter Singer), but it is instructive.  The insistence that, since the absolute value of medical interventions is so high, price must be decoupled from value means that economic analyses of an economic question are treated as inappropriate.

Unfortunately, we live in a world where we measure what we value, and we act on what we measure.  And thus, healthcare systems - even in European social democracies - have no alternative but to turn to cost-benefit analyses when assessing how to allocate their resources.