Friday roast: man, machines and the gaping void

By Den Howlett, July 23, 2015
Summary:
Brian Sommer's swipe at machine driven discrimination got me thinking about the reality of man/machine decision making. It's not pretty.

Hand crafted ukulele

The man/machine interface is a topic of current discussion with much allure: the promise of alleviating the pain of getting at business-critical information in as close to real time as possible. I am of the view that this is a dream, an illusion, and a dangerous one. Earlier in the week, Brian Sommer opined about the risks and dangers of over-reliance on machine driven HR analytics. He made the point:

I’ve heard from a lot of HR people in high turnover industries (e.g., food service and retail), that they should be able to target recruits with highly probable retention characteristics. What they’re really saying is that it is okay for them to discriminate if a math model told them it’s okay to do so.

IBM partner Vijay Vijayasankar, who prefers to opine on Facebook, came back with:

The trouble with all predictive algorithms is that people who use it do not understand clearly what the boundary conditions and limitations are. When something has 60% probability of not happening, it still can happen. So when we take binary decisions based on such odds - we are just kidding ourselves. Brian makes an interesting observation on discrimination by algorithms - spot on. Except human bias discriminates all the time and usually in unpredictable ways. Algorithms may have all the human biases, but can be changed when it is found out. We can't really do that with humans.
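Vijay's point about odds is easy to demonstrate with a short simulation (a hypothetical illustration of mine, not something from his post): if a model says an event has a 60% probability of not happening, and we harden that into a binary "it won't happen" decision, we will still be wrong roughly four times in ten.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def decision_error_rate(p_event: float, trials: int) -> float:
    """Treat 'the event is less likely than not' as a binary decision
    that the event won't happen, and measure how often that is wrong."""
    wrong = sum(1 for _ in range(trials) if random.random() < p_event)
    return wrong / trials

# A model gives an event a 40% chance, i.e. a 60% probability of NOT happening.
# Acting as if it simply won't happen is still wrong about 40% of the time.
error_rate = decision_error_rate(p_event=0.40, trials=100_000)
print(f"'won't happen' decision is wrong {error_rate:.1%} of the time")
```

The math model hasn't discriminated or decided anything; the person who rounds 60% up to certainty has.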

I think Vijay is far too simplistic in his approach - but unintentionally. Here's why.

In an incredibly long but massively insightful piece about the future of robotics, John Markoff, who has been following this space for many years, prefaces his discussion with this set of observations:

We're at this point where over the last three or four years there's been a growing debate in our society about the role of automation, largely forced by the falling cost of computing and sensors and the fact that there's a new round of automation in society, particularly in American society. We're now not only displacing blue-collar tasks, which has happened forever, but we're replacing lawyers and doctors. We're starting to nibble at the top of the pyramid.

He then gets to the meat of the argument, pointing first to the many advances that have been made in pattern recognition for example but then hits the homer that pretty much everyone else has missed:

What hasn't happened is the other part of the AI problem, which is called cognition. We haven't made any breakthroughs in planning and thinking, so it's not clear that you'll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

More prosaically in my world, I fail to see how we can defer to algorithms for any decision that requires a weighing of probabilities where humans are involved, where cognitive skills are needed or where 'art' comes into the equation. This was the root of a discussion I had with Vishal Sikka not long after he became CEO at Infosys.

At the time, he was early in the development of his thinking around machine learning and automation as it applies to mundane tasks in outsourced business processes. I asked him very specifically whether he could envisage a day when machines would replace the 'real' work that we do when problem solving. I remember he looked at me as though I'd asked the most stupid question of the day. No! was the answer.

Remember at this point that Sikka earned his PhD in artificial intelligence and has worked with those technologies. In short, he deserves our respect.

The difficulty for many of us, I think, is that we so easily get wowed by what appear to be tectonic shifts or massive disruption. For example, we look at the way Uber is impacting transportation and look forward to a driverless world as though it is some sort of remarkable achievement. And while I have little doubt we will see significant impacts over time, it's not that remarkable. If anything, I would argue that technology in that space is heading towards a logical conclusion. We may have imagined it as far back as the silent movie era but it is a world apart from - say - teleporting.

But the whole thing was brought home to me when my brother came to visit earlier in the week. He's a master luthier who has practised his art for over 50 years. Starting in a wood shed as a nine year old, he's been building first guitars and later ukuleles by hand pretty much all of his life.

He gave me one of his pieces. I had never seen one in real life, only on video. Nothing can prepare you for the sheer joy of holding something that has been beautifully hand crafted and appreciating the workmanship that goes into these instruments. It has a sense of magic that allowed me to appreciate why musicians drool over certain makers.

'Mount Snowden' headstock detail

We talked about the impact of machines on his craft and on other crafts like toolmaking, my father's skill. My brother correctly pointed out that while you can get a perfectly serviceable instrument out of a machine, you can't get the quality of sound that comes from hand crafting, nor the kind of finish or balance that makes holding a hand crafted instrument so pleasurable. Having seen and heard the ukulele he brought me, I have no doubt as to that simple set of truths.

And so it is in many forms of endeavor.

We can teach machines many things - CNC has been an ongoing boon among manufacturers for many years, displacing the crafts that allowed for the creation of things like motor vehicles. It will no doubt continue to do so, but there are cognitive limitations that, as Markoff points out, we haven't come close to tackling. Another example serves well: AdTech.

In what I believe was a long overdue discussion about crappy websites, Jon Reed dove into the topic of programmatic advertising. I don't know about programmatic but I do know it is problematic. Ben Thompson, to whom Reed defers, says:

...arguably the biggest takeaway should be that the chief objection to Facebook’s offer — that publishers are giving up their independence — is a red herring. Publishers are already slaves to the ad networks, and their primary decision at this point is which master — ad networks or Facebook — is preferable?

Are they? If you accept Thompson's apocryphal argument then it would seem that algorithms have well and truly screwed up the publisher world and that it's all downhill to Facebook. I don't buy that. I rarely hear anyone tell me that the real-time ads they see on Facebook (as an example) are entirely relevant to their interests. Isn't that what programmatic AdTech should be delivering?

And while the demise of some or many media types may be imminent, that isn't wholly true across the entire industry. Not by a very long way. Overcoming the hurdles does require technology, but it also requires a very large dose of craft. That only comes from human imagination, experimentation, filtering, curation (some of which can be automated) and failure, coupled with a willingness to move on.

That's something no machine is able to do - yet.

But I think the bigger point is that too many people look at the world as it is, see the potential allure of technology now and even into the near future but fail to imagine the world as it has yet to exist. As such, they get caught up in the hype traps of today while remaining blinkered to possibilities that have yet to be imagined.

In short, the tendency is to refactor today in light of automation and so on, believing that it is breakthrough innovation when in reality what we see is incremental change or destructive disruption. So yes, bring on your predictive models, yes, continue with automating mundane and repetitive tasks. But please, let's not try to lobotomize our human ability to problem solve.

Featured image: © Olivier Le Moal - Fotolia.com

Disclosure: Infosys is a premier partner and a personal consulting client of the author.