Enterprise hits and misses - HR takes on AI, and machine learning raises the enterprise security stakes

By Jon Reed, March 24, 2019
Summary:
This week - A big change in tone for HR and AI. Debate - are AI and machine learning assets to enterprise security? And: the unknown unknowns of digital transformation. Your whiffs include highly personalized LinkedIn job offers.

Lead story - AI and digital must put people back at the center of work - Unleash19 by Phil Wainewright

MyPOV: For a little while there, humans were out of fashion. "AI" was going to save us from ourselves - or at least from our productivity misadventures. But as Phil reports from Unleash19, humans are back in fashion. He quotes Slack CTO Cal Henderson:

We can use tools and technology to help do a lot of the heavy lifting … and that moves HR professionals into a role where their time is spent better on the things that computers are bad at and humans are very good at — like relationship building and time spent one-on-one with employees.

Phil warns: we're not in the clear yet.

Artificial intelligence is so powerful and reliable that the decision makers who direct it need to make their choices very carefully — and we must all stay alert to their role.

This might sound obvious to some, but Phil sees a welcome shift in tone:

I noticed a marked change in the maturity of the discussion at this Unleash event compared to previous ones. No one is expecting digital transformation or artificial intelligence to deliver change unless the human dimension is addressed. That’s a welcome shift, and one that underlines the key role HR professionals will play in ensuring these technologies deliver value to the enterprise.

Phil's piece acknowledges the reality of automation and the skills implications. From my side, I'd like to see HR taking more collective responsibility for the training needed. HR needs to make clear the employment risk of passivity. Assurances about the value of human creativity and empathy are great, but I can think of plenty of corporate environments where one or both is in short supply. Let's see HR take a central role in that conversation.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:

A few more vendor picks, without the quips:

Jon's grab bag - Den weighed in on Brexit hanging over an SAP customer show in Brexit chaos - the business verdict from SAP Business ByDesign customers. He also found plenty to gnaw on via a podcast roundup with Vinnie Mirchandani, whose most recent SAP book is ready for debate and perusal: SAP Nation 3.0 - Manifest Destiny, a review and conversation.

Martin ponders cloud at a crossroads in Cloud technology – increasingly unimportant in important ways. Finally, Neil wades into the potent IT issues implicated in the recent tragic Boeing crashes, and some broader IoT implications (Sensory deprivation - 'garbage in, garbage out' revisited for the IoT era).

Best of the rest

AI and cybersecurity - hackers versus enterprise AI - articles by Satish Abburi and Juned Ghanchi

MyPOV: Nothing bugs me more than the lazy, gimmicky marketing proposition that "AI and machine learning will change enterprise security for the better." Shouldn't we start with the acknowledgment that bad actors will have access to the same tools? In Hacker AI vs. Enterprise AI: A New Threat, Satish Abburi does just that. Abburi writes:

Automated penetration testing using ML is a few years old. Now, tools such as Deep Exploit can be used by adversaries to pen test their targeted organizations and find open holes in defenses in 20 to 30 seconds — it used to take hours. ML models speed the process by quickly ingesting data, analyzing it, and producing results that are optimized for the next stage of attack.

Why is the playing field so level? Abburi says that cloud computing and affordable CPUs/GPUs allow black hats to become experts wielding AI/ML toolsets. I'd add that state-sponsored attackers have deep pockets to throw at this as well. Abburi:

When combined with AI, ML provides automation platforms for exploit kits and, essentially, we're fast approaching the industrialization of automated intelligence to break down cyber defenses that were constructed with AI and ML.

That doesn't mean AI is irrelevant to security, or that all companies are equally vulnerable. If anything, Abburi believes this should motivate companies to "operationalize" AI security to better contend with false positives and negatives. If tech is an equalizer, our processes, people, and diligence are the difference - for good or for bad.

Over at The New Stack, Juned Ghanchi comes to a similar conclusion in The Possibilities of AI and Machine Learning for Cybersecurity. Ghanchi believes we run the risk of AI over-automating security, taking humans too far out of the loop:

When, to save repetitive work and all the resources, we depend on AI-powered systems and undermine the combination of machines and human expertise, the automated security systems remain vulnerable to threats.

It's refreshing to read these sober takes - they dovetail with Phil Wainewright's point on the benefits of a more mature AI conversation.

Other standouts

  • Exploring the “Unknown Unknowns” in IT-enabled Digital Transformation Estimates - solid piece by John Belden of UpperEdge, with bonus points for bringing Donald Rumsfeld's (in)famous unknown unknowns into the digital transformation convo. Nice counter-intuitive point on being wary of low estimates ("low estimates result in digital transformation failures"). Oh, and a blind spot warning: "Recognizing and acknowledging your own blind-spots can be tough to swallow, but wouldn’t you rather understand your blind-spots before you start your journey rather than when you reach the precipice of a failure?"
  • Boeing 737 Max: Software patches can only do so much - I've held off on the Boeing crashes to avoid exploiting the tragic, but there are plenty of tough software and IT lessons to be derived here. Some of them are in this piece by Jason Perlow (autoplay-video-yuck alert).

Fewer standout enterprise blogs than usual this week. I won't add substandard pieces just to take up space here. Get busy bloggin', folks!

Whiffs

So a Russian perfume company suffered a setback. They were compelled to remove a fragrance called 'Sexual Harassment' after social media backlash. If the Brexit petition Den blogged about doesn't come through, thank goodness there's a plan C: Uri Geller tells PM: 'I am going to stop Brexit telepathically'.

AT&T took another much-deserved trip through the 5G bullshit meter overload PR spank tunnel as well (AT&T’s “5G E” is actually slower than Verizon and T-Mobile 4G, study finds).

Meanwhile, I took a jab at a journalist who is jazzed up about the convenience of an airport facial recognition system in China:

I don't know if this system is opt-in or not, but don't you get that powerless feeling that most facial recognition systems will be imposed on us, rather than offered as an opt-in convenience? Yes, there will be reassuring safety and security rationalizations offered up - as the line in the sand keeps blurring. Funny how when it blurs, it moves in on us.

Finally, I took another potshot at LinkedIn's finely-honed ad personalization:

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does.

Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed. 'myPOV' is borrowed with reluctant permission from the ubiquitous Ray Wang.