Enterprise hits and misses - AI's security risks get a wake-up call, Walmart raises the e-commerce stakes, and Google Gemini steps in it

Jon Reed Profile picture for user jreed February 26, 2024
Summary:
This week - AI poses security risks beyond the scope of most enterprise IT teams. Google Gemini takes a PR beating, but what can enterprises learn from gen AI guardrails-gone-wrong? Your whiffs include a supersized dose of ChatGPT aimed at McDonald's.

loser-and-winner

Lead story - The downside of going gangbusters on AI - new forms of security risk

Martin came back from Dynatrace Perform with a deeper take on AI security - and where the new risks lie. So why is AI posing a different type of security risk? Martin:

The fact that the AI systems will be working together, with output from one becoming input for another, the possibility is high that new ways of injecting malicious data and code can be found, and do it in ways that the AI systems can be used to mask what is happening.

Problem #2? Most companies won't have this type of security specialist in house. Martin quotes Maria Markstedter, CEO and founder of Azeria Labs:

That's the real challenge, because the ability to identify abuse and misuse of these AI systems within your respective product or platform will be the responsibility of your security team, not your data analysts. We're talking about a completely different system here, so we're talking about new attacks, new types of vulnerabilities. And all of these new attacks and vulnerabilities require you to have an understanding of data science, and how AI systems work but at the same time, a very deep understanding of security, threat modelling and risk management. Because you can't find vulnerabilities in a system that you don't fully understand.

Nor are these skills easily acquired externally. So where should enterprises go from here? No easy answers, but redefining data security would be a good step:

This, Markstedter suggested, requires a significant re-think of the fundamentals of data security, because model data is just data at the end of the day, and this needs to be protected just as much as business-sensitive data. Attacking a model through its data inputs is an important way of exploiting access and authorisation.

For companies with in-house data science teams, another viable next step is to mash these teams together for in-depth security design/planning. Longer term, Martin concludes, a new type of security pro is on deck:

It does require the evolution of a new class of specialist – half security expert and half data scientist – with the whole being more valuable than the sum of the parts.

I would add: enterprise should not treat the consumer-facing "big AI" leaders as the role models here. From OpenAI to Microsoft to Google, these AI vendors are erring on the side of releasing AI tools prematurely to gain PR attention and market share, with less concern over the consequences (I'll get to Gemini shortly). However, for enterprise leaders, a "most fast/worry later" approach to AI is a one way ticket to hanging out on LinkedIn threads, updating the ol' "Open to Work" job profile.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here's my three top choices from our vendor coverage:

Jon's grab bag - Back to this week's theme: Mark Chillingworth found that cybersecurity topped digital transformation in recent UK CIO surveys: UK tech leaders prioritize cybersecurity in 2024. Derek parsed the issues with UK's Online Safety Act in Communications regulator Ofcom faces multiple challenges in making the UK the ‘safest place in the world to be online’. Finally, Madeline documented the smart city and sustainability convergence in Building for sustainability - how Seattle City Council and DPD are taking a lead on smart buildings to counter CO₂ emissions.

Best of the enterprise web

Waiter suggesting a bottle of wine to a customer

My top seven

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis - The Verge reports on Gemini's extremely comical/cringe/awkward attempts to erect guardrails on its gen AI bot. Wired's piece on what went wrong gets closer to the mark: the limitations of gen AI are on display here:

When combined with the limitations of AI models, that calibration can go especially awry. 'Image generation models don’t actually have any notion of time,' says Luccioni, 'so essentially any kind of diversification techniques that the creators of Gemini applied would be broadly applicable to any image generated by the model. I think that’s what we’re seeing here.'

Guardrails are well-meaning, but the same vast training data that makes gen AI bots compelling is also its PR bog pit. Guardrails are bandaids for tech limitations with no current answers, despite what AI apologists evangelists might protest. Oh and by the way, the absence of any guardrails isn't so swell either, unless you happen to make your living in illegal industries.

The enterprise lesson? Make sure your know exactly what data your LLMs are trained on. I'd much rather contend with the pros/cons of a finance-specific LLM than try to regulate the output of a model trained beyond its enterprise purpose. If this means more reliance on vendors that have built their own models versus open source AI model experiments with or without guardrails, so be it (More on this next week).

Overworked businessman

Whiffs

Okay, your stock price is supersonic, but c'mon:

This seems a bit over the top:

Meanwhile, Brian sent me this whiff: This Guy Scammed McDonald's For 100 Free Meals Using ChatGPT. My only question? If you "win" 100 free McDonald's meals, did your body really win?

And in the "no comment" whiffs division:

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

Loading
A grey colored placeholder image