KubeCon + CloudNativeCon - why the engineer’s challenge is getting everyone speaking the same language on AI

Chris Middleton, March 22, 2024
Summary:
We hear from different user companies how AI is presenting real challenges to engineering teams – at least, when it comes to considering factors beyond the technical.


At the KubeCon + CloudNativeCon event in Paris this week – where 12,000 developers, strategists, vendors, and users of open-source tech were packed into the Paris Expo Porte de Versailles – there was no escape from artificial intelligence. After 18 months of Gen-AI hype, its mindshare dominance was inevitable.

So, what are the implications for users – in this context, developer teams and software engineers – of the symbiotic relationship between cloud-native systems and AI? How can they go about managing innovative projects at express speed, when talking the language of business may not be their strongest suit?

That was the topic for an entertaining KubeCon panel, which presented a range of different user case studies. So, where did these cloud-native adopters stand in the big picture of AI adoption? 

Sergiu Vasile Petean is Director of Cloud Engineering and Operations for insurance company Allianz Direct, which describes itself as “a technology company with an insurance licence”. Petean is also a DevOps ambassador, and a Technical Advisory Board member for event organizer the Cloud Native Computing Foundation (CNCF).

He said:

I see a lot of service providers selling AI like there's no tomorrow. So, the hype: it's real. But you're going to end up with some deceptions and disappointments soon. It [AI] was already here, but now we recognize that, and it's going to become even more important for everyone else.

So, now it's only logical for each one of us to educate ourselves as much as we can and plan for the future. But that’s something we did before with cloud, of which we [Allianz] were early adopters. So, that gave us an advantage because we educated ourselves, and when things got real, we managed to capture that value.

The same has to happen on AI. We need to train, to advocate – and get ready to be disappointed, maybe, at the beginning. But the value is just going to grow, and whoever gets early into the game is going to capture it.

Allianz Direct certainly makes great play among its end-user customers of being at the cutting edge, with its “love for data” and “disrupt now – and never stop!” messages online. But does this mean that others should jump on the hype train, shouting ‘Me too!’?

Jinhong Brejnholt is Chief Cloud Architect, Global Head of Cloud and Container Platform Engineering, at a company in a more restrained sector of financial services: Danish investment house Saxo Bank, which offers an online trading platform.

She said:

It’s the same for us, but it’s still early days for our highly regulated industry to adopt AI. We are looking at it and starting to play around with it. However, I'm super happy that the reference architecture white paper was published today [see my first report].

But she added:

I hope there's more maturity and control around this area, so we can actually utilize the technology without there being too much of a concern. But we are super looking forward to putting more action into it.

A journey ahead

Hans Kristian Flaatten is Platform Engineer and tech lead at the Norwegian Labor and Welfare Administration, the largest part of Norway’s government. Flaatten was more circumspect than his peers, but apparently also more open to use cases.

He said:

As a government we are quite… I wouldn't say ‘restrictive’, exactly, but it's ‘let's see how it plays out’. But from a technical point of view, we have already started using generative AI tooling in order to help development and operations, from more of a higher-level, case-supporting standpoint.

But I believe we are still far away from the very strict requirements of, for example, informing our users, the public, what the criteria were that influenced an outcome – if they were applying for disability benefits, for instance. The correctness there, in [that hypothetical use case], needs to be according to the law, and not a best-guess effort!

But AI as a supporting tool, that's what we are currently looking into. Can we use it to summarize very complex cases? Can we use it to describe audio? Can we use it to translate? We have a lot of translators, so that's definitely an area worth looking into. 

So, it's definitely on our radar. We do have strategies in place for how to start looking into it, but there is, I believe, quite a journey ahead of us. Let’s put it like that.

Full marks for candour. However, a cynic might form the impression of end-user organizations buying the hammer then looking for a nail, largely because of peer pressure and industry hype.

Dr Gualter Barbas Baptista is Chief Advisor on Platform (the software kind!) Strategy and Enablement for Germany’s national railway provider, Deutsche Bahn, where he has been working with Kubernetes, observability, and more, for several years.

He said:

We are certainly going to experience a transformation in the way we work. But, as with any disruptive technology, there will be conflicts emerging, but also innovation. But it's not possible anymore to think of a future in technology without incorporating the possibilities that AI is providing us.

So, we have started this [our AI journey] at Deutsche, where we are exploring it, both for assisting in software development, and other AI tools. But there are a lot of challenges. Intellectual property is one of them. A lot of legal issues too, with many legal teams working on this. So, I hope that we are able, in the end, to take the steps forward to incorporate it.

A complex ecosystem

But with the hype cycle surrounding generative AI, and many organizations rushing into me-too deployments, are there any missing standards or specifications that would help those developers who are tasked with somehow making it work?

Allianz Direct’s Petean said:

It’s regional; we have different complexities. Look at Europe and its experts in compliance, in regulation. Sometimes they go too far, sometimes they need to correct course, and that maybe takes too much time. [See our recent feature on the EU’s AI Act]

So, having more clarity there [would be a good thing], and maybe engaging on more than just the local level, by having policies that are really global and can be embraced by all regulators, so we have a common base for conversations.

Because, it's very complicated when you have an AI Act here, a US rule there, then something else in Asia, but you have a global rollout of your solutions. It's so hard to navigate. And then you have to employ different teams, specific to different regions, and they also have to talk to each other. That makes everything very hard and very slow.

But everything else is moving fast, so we have to be fast. Somehow you have to move at the speed of the future, which is hard to do [in this environment]!

The CNCF itself could both accelerate this process and create more alignment, he suggested. 

Saxo Bank’s Brejnholt added:

There are some missing standards between different toolings in the landscape, both in open source and commercial products. So, standardization around data portability, that really needs to be looked into. 

Also, different tool sets. If you do similar things, standardize it, so it's easier for users to choose one or the other!

Norway’s Flaatten said:

Again, as a government, data locality and data sovereignty are paramount. So, it's highly unlikely that we would send any of our user data outside to some black box, for example. And hopefully, [our data] would not be trained upon to improve the model [by an AI vendor]. 

But from a purely technical and development perspective, as developers, that [particular scenario] is not a problem. We embrace open source. Most of our source code is open. 

So, using generative AI to improve on the source code isn't really a problem. But using generative AI to, say, interpret sensitive health records and health data about any individual, that would be problematic. Especially if that data isn't kept private. That’s a huge concern.

[An earlier conference session] also highlighted the biases and problematic sides of generative AI. But AI can be, and will be, a helpful tool, but we are still a long way from using it on the kinds of data that I have in my systems.

Deutsche Bahn’s Baptista summarized the developer’s and engineer’s AI challenge well when he said:

Data is, of course, one of the foundations for having proper models. But another thing that is very important is having the appropriate platforms. 

Platforms are, wherever you are, the foundation on which people of different skills and expertise can come together and cooperate. Because what we have with AI, currently, is a highly explorative process. It's an evolving technology. That means we need to provide basic standards that people can interact with, while also being able to expand.

In the DevOps culture, ML Ops, and so on, the data engineer, the data scientist, and the platform engineer are all cooperating in an ecosystem to enable innovation and progress.

Balancing innovation with usability

So, where are organizations finding the most complexity and difficulty in building these multidisciplinary teams – which include the C-suite as well as developers and engineers? Such teams need to not only cooperate, but also collaborate. Or, as one speaker put it later at the conference, how can the Avengers with all their different superpowers find a way of working together?

Allianz’s Petean, for one, was candid about the reality:

We are extremely disconnected! We went through a digital transformation, and it took us about eight months to educate the stakeholders that are consuming our platform, to find the common language, to find the right motivation to get people to even want to be educated. 

That was extremely hard for us, especially when you don't have the power balance in the company. You have business [functions] in some organizations that are way stronger than, say, ‘the boring IT’. So, it's hard to have a strong voice and be taken seriously sometimes. 

Sometimes you need a crisis to be taken seriously! Luckily, we have had enough of those. […] Then you have decisions based on honesty and truth, and not on politics and personal agendas.

Saxo Bank’s Brejnholt was also frank about the challenges:

The cloud-native space moves superfast. So, for any organization, if you want to step up and stay on top of it, you have to move fast, too. But how do you convince the executive team that, yes, we need more engineers? Or that we now need a DevEx team, because the platform we just built, it’s just too hard to use?

So, balancing innovation with usability is hard. Then getting the message out to the entire organization is also hard. You need to get buy-in from the top to [the bottom], or from the developer to your CEO and CTO – from your peers to the executive team.

My take

An entertaining if indiscreet session, offering insights into the engineering and personal challenges lurking beneath the hood of the AI hype cycle. But one UK developer and start-up founder was not impressed by the CNCF’s and KubeCon’s large-scale focus on AI this year. He told me later that “it felt like a betrayal” and an event he had “never been to before”, despite years of attendance. He added that lots of delegates, privately, feel the same. Perhaps KubeCon’s record-breaking scale in 2024 is a sign it is getting too big for its once-bespoke boots?
