Enterprise hits and misses – Google’s diversity problem and AI’s industry adoption issues

SUMMARY:

This week: why Google’s diversity problem is our problem, and boiling down AI adoption by industry. Your whiffs include some satirical awards, as well as my own fact check failure.

Lead story – Google’s diversity problem – learning from the fallout – articles by Den Howlett and Madeline Bennett

Google had a no good/very bad day (sidenote: best children’s book ever?) – or week – as an engineer put out a scathing, controversial letter on Google’s culture, and was subsequently fired.

Whether or not the firing was the correct decision, this situation will be defined by how Google handles the critiques raised. It’s a shame this letter fell into gender stereotypes that distracted from an underlying point about a “monoculture” that doesn’t tolerate dissent. What will Google do if someone else speaks out about Google’s workplace culture while avoiding “Code of Conduct” flags and prejudicial language? Limiting this to lambasting Google is foolish. As Den says:

So many of these stories become ‘mots du jour’ and then quickly fizzle until the next blow up. My sense is that organizations need to view these as opportunities for improvement that can be credibly reported back into the public domain.

Diversity is a core principle at diginomica but the question has always been: how are hearts and minds actually changed? How do we create diverse teams while learning from uncomfortable, or dissenting views? If tarring and feathering scapegoats on social media qualifies as the answer, then as John Lennon once said – you can count me out.

Bennett’s argument is well-articulated, but what seals the deal for me is hearing personally from so many women who felt excluded from tech careers, about the blatant sexism they encountered in those careers, and about the pay inequities they continue to experience. Combine that with the superior performance of diverse teams. To me, the practical argument is the most compelling: when we fail those who are excluded, we fail ourselves.

Diginomica picks – my top four stories on diginomica this week

Vendor analysis, diginomica style. Here are my top three choices from our vendor coverage:

Jon’s grab bag – Chris tried to make an appointment with Doctor Robot, but found its bedside manner lacking – for now (Calling Doctor Robot – a ten point prescription for AI-enabled social care). Stuart asks: Is the FUD factor fading? Genpact CEO reckons decision-makers are more upbeat. Hmm… wondering how the Genpact CEO feels about the saber-rattling between the U.S. and North Korea?

Best of the rest

Lead story – Putting AI hype through the industry grinder – by several smart peeps

myPOV: One thing lacking in the AI hype circus is how AI is faring in the gut check of vertical adoption. Three bloggers bit off a chunk of that:

  • McKinsey’s State Of Machine Learning And AI, 2017 – Louis Columbus pulls together AI data that includes an industry breakdown. Turns out high tech, automotive and financial services are the leading adopters of AI tech, with laggards including travel, healthcare and construction.
  • Cognitive Computing: Getting Clear on Definitions – Lora Cecere, a former Gartner analyst who has lived and breathed the hype cycle, breaks down AI’s supply chain potential as an analytical evolution, with plenty of “buyer beware” caveats. Leave it to Cecere to ask the question, “Can you clarify the business problems that you solve?” C’mon Lora, how are vendors supposed to sell AI tech when you ask questions like that?
  • Rage against the machines: is AI-powered government worth it? – Another industry with AI dilemmas is the public sector. Maëlle Gavet writes on the World Economic Forum’s blog about five dangers when algorithms make/change policy. This may not be the “threat to democracy itself” that Gavet believes, but it’s a sobering view worth an intellectual grapple (“algorithms don’t do nuance…”).

Honorable mention

Whiffs

Losing my religion award – A Tai Chi master who claimed to have supernatural powers got a very unsupernatural butt-whipping in a martial arts fight.

Bite the hand that feeds award – In one of the greatest celebrity endorsement meltdowns of all time, Helen Mirren admits L’Oreal moisturiser ‘probably does f— all’ – during a L’Oreal panel where she appeared as a paid endorser.

The “celebrities-are-cool” thing has gone too far award – Japanese chicken take-out chain offers human sweat-flavored sauce.

The “we need pageviews too” linkbait title award (to the BBC) – Sperm count drop ‘may lead to human extinction’

Internet of Things is changing western civilization award – A Fridge Dumped in Levenshulme Has Its Own Twitter Account. Where are we in the IoT hype cycle again?

Chatbots need lawyers too award – Chinese chatbot vanishes after spurning Communist Party. Then, the follow-up: Chinese chatbots apparently re-educated after political faux pas.

And the “Hey, don’t leave us out of the gender absurdity headlines” award goes to, of course, Uber.

Finally, last week’s hits and misses noted a weird story about Facebook researchers taking a bot offline after it developed its own language to communicate with other machines, which I referred to as “creepy interesting.”

Well, this week, Snopes did a fact check: Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language? They do a pretty thorough debunking of the story, which of course qualifies as an embarrassing, hypocritical self-whiff on my part. That said, Snopes is a bit too casual about the one true part of the story: the bots did develop their own shorthand communication, so I stand by the “creepy interesting” comment. The rest was completely wrong. And on we go…

 If you read an #ensw piece that qualifies for hits and misses, let me know in the comments as Clive always does.

Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed. ‘myPOV’ is borrowed with reluctant permission from the ubiquitous Ray Wang.

Image credit - Cheerful Chubby Man © RA Studio, Happy Children © Anna Omelchenko, Waiter Suggesting Bottle © Minerva Studio, Overworked Businessman © Bloomua, at the seaside © olly - all from Fotolia.com.

Disclosure - SAP, Oracle, Workday, New Relic and Salesforce are diginomica premier partners as of this writing.

    1. esteban kolsky says:

      here is the thing my friend.

      what the computers did? is the same thing we do when we learn. we tend to ignore things that become superfluous and slow down the rate at which we learn. its basic cognition, been known for a while.

we tell computers to learn (thus the oft-improperly-used-and-abused ML moniker) but then when they do we freak out.

      what if we handled that with humans?

you go learn how to code, but if you exceed my parameters for coding – or my level of knowledge for coding – I will shut you down.

this is when it gets interesting… the key here is not to shut them down, but to provide sufficient constraints and boundaries so that the actions that result from that learning are not outside of what we feel comfortable with.

      but stopping the learning bc it developed their own language? please… same thing we do. its basic cognition – just getting going. this is what is going to slow down the spread of AI + ML in the organizations: lack of understanding more than anything else.

just saying — and not chastising you, since you’re the reporter here, but the many people who freak out bc computers do what they are supposed to do / what we tell them to do.

as my favorite t-shirt of late says – drink some coffee, put on some gangsta rap, and handle it.

      1. Jon Reed says:

Esteban, I wish I could argue with you on this to make for a better blog debate, but I don’t really disagree with anything you’ve said.

I still stand by my comment that it’s creepy interesting that computers can at this point learn to talk to each other, albeit in rudimentary ways that get jobs done. And it turns out, reading into it, that this has been happening for a while – it’s not a new thing, just something I was unaware of. Nothing alarmist about it, but I stand by my view that such things warrant monitoring and discussion.

        As for this: “but stopping the learning bc it developed their own language? please… same thing we do. ”

Just to clarify for readers, the misreported stor(ies) I riffed on last week claimed that this research was stopped because of this, but that’s not the case. See the Snopes article for the full details.

        – Jon

    2. says:

On a different tack, and connecting some dots to some other discussions: as we think about diversity, how does that play with the idea of circles of advisors?

If we talk the diversity talk, do we walk the diversity walk by conscientiously expanding our inner circle to include more diverse opinions?

      It’s a small idea, but it might have a big impact on truly embracing the power of viewpoints formed from different frames of reference.

      Probably a discussion for another day.

      1. Jon Reed says:

        A discussion worth having 🙂

Agreed – the risk in an “inner circle” is you create your own echo chamber, human filter bubble, or monoculture. I think it takes a willingness to throw yourself into new and uncomfortable situations and make diverse connections across fields, disciplines, and geographies.

        But that’s the short version and I won’t pretend to have all the answers to that one.

        – Jon