Let's calm down, get educated, take responsibility - Smith Point CEO Keith Block on AI and the need for leadership
- AI is one of the focus areas for new VC firm Smith Point Capital. CEO Keith Block discusses the potential of such tech and what needs to be done to address concerns around it.
If you're sitting out on Smith Point and you're looking out into the ocean, you see a giant wave called AI coming, which will be the next wave of digital transformation.
So says Keith Block, co-founder and CEO of new venture capital firm Smith Point Capital, named after a remote part of Nantucket Island off the coast of Massachusetts. As noted in a previous article, the company has been set up to offer enterprise experience and advice to tech start-ups to help them to scale their businesses.
Block identifies four main categories of the tech landscape as informing the firm’s investment thesis - data, workflow/process, platforms and, of course, AI. Given the current levels of hype around generative AI in particular, it’s a fair assumption that the Smith Point experts will be seeing a lot of pitches in the AI field.
But that in turn throws up the question of what exactly constitutes genuine AI? Does it mean all things to all people? Or does it have differing interpretations and applications? It’s like digital transformation as a concept, suggests Block:
Classically when we get back to digital transformation, everybody has a different definition of what digital transformation is. I think the important thing is that, however you want to define digital transformation, we are still embarked on this journey right now. There's a massive movement to the cloud, whether it's infrastructure, security, development, operations, environments, applications - horizontal and vertical. We're not done with this one yet. I'm not going to venture a guess that we're halfway through, but we've still got a long way to go.
So it is with AI, he adds:
Again, just like digital transformation, everybody will have a version of what AI is. I don't think people are well educated on it. I don't think it's well understood. I think we all just need to calm down and get educated on what this means.
For his part, Block offers up a three-phase vision of AI:
One is kind of reporting the news, really BI. Think about companies like Tableau as an example - a two dimensional expression of the data, but it just reports the news. It doesn't do any analysis. There's no diagnostics, there's no predicting the future. That's sort of phase one. Another phase is the diagnostic and predictive phase. Then there's a third phase now, which is really things like the generative AI component.
In that second phase sits an unnamed Smith Point client that the firm is currently working with to lead its Series 3 funding. Block says:
What this company is doing is they have an incredible AI technology, built on a gaming engine, that differentiates because they don't just tell you the news. They say, 'Why did this happen? When will it happen again?'. And they do it with a visualization technique that is put into 3D. The ability to process and display and explain massive amounts of data is now available to them.
There is an incredible collaboration component. So, you could be in London, I could be in New York, [someone else] could be in San Francisco, and we all have our VR headsets on and we're doing 3D modelling and asking 'what if?' questions together in the same room, which is incredibly powerful. We're super-excited about the technology and the applicability is tremendous when you think about all the potential use cases.
For anybody who has a fleet, as an example, uptime is really what makes a difference to these companies in terms of bottom-line profitability. Just moving the needle a few points by predicting uptime, bringing the right level of skills, the right level of inventory, which this company can do, is pretty amazing. That just opens your mind to all the use cases where AI won't displace humans.
Good or bad?
That last point is important given the ongoing paranoia about the negative impact AI - and automation in general - might have on the future of work. AI is going to be a very, very powerful force, argues Block, but adds:
I don't think it's necessarily a power for bad. I think it's a power for good. I don't think it's going to replace us.
There will always be a role for humans, he states:
The way I like to think of it is, where there's lots of data, and potentially lots of structure, I think AI is really good. But when you start introducing human judgement, that's not where AI is now, and I think it's going to be a long time, if ever, before AI replaces human judgement.
A classic example would be driverless cars. We still have not figured that out, or even driver-assisted cars when you think about it. If they were powered by AI, who would own the algorithm? Is it the driver? Is it the car? Is it the insurance company? God forbid you're in a situation where there's going to be an accident and judgement needs to be applied. Whose judgement is it? These are very, very complex issues. So, I think AI is great for starter stuff. I think it's great for predictions and diagnostics. But judgement? That's a long, long, long way away, if ever.
That’s a pragmatic viewpoint, but it’s undeniably the case that there’s an increasingly wide divide at present among tech leaders on whether AI is something to be feared or not. Bill Gates says it isn’t; Elon Musk sees the robots coming to get us. A rational debate is needed, suggests Block:
You know, we need a little excitement in our life. What's another schism, right? I think cooler minds will prevail. I don't think AI is well understood. I think a lot of this is we have to wait and see. I will tell you the area that I do worry about with AI is the whole issue of ethical use, and notifications and warnings. There’s a lot of bad actors out there. There's a lot of bad data. We can see that generative AI, without judgement, without filtering, without regulation, could be bad. Imagine the impact of AI on elections. We've already kind of seen bad actors impacting elections around the world, certainly in the United States.
The best minds in the world should really be getting together and talking about ethical use of AI and ethical use of data, and get ahead of this and work in collaboration with the private sector. I would love the President of the United States and our leadership here to get together with the current and future AI superpowers and talk about this. Responsibility and leadership and judgement really matter. I could see this being bad, but I'd like to see us move aggressively ahead of it.
I don't think we should kill AI. I don't think we should take a pause. But I think that we need to organize and mobilize. I would even go so far as to say there needs to be a Manhattan Project, if you will, in the United States, where we really do get the best and brightest talking with the government and actually take some action with it. Don't politicise this. This is not about politics.
It is, however, a global issue - the same questions are being asked in, for example, the European Union, which, on past form when it comes to tech and data regulation, is likely to take a more hardline approach than the US. This, in turn, could lead to a Balkanization of approaches to AI in terms of what's deemed acceptable and ethical use.
We’ve seen this before, agrees Block, around data privacy and data protection:
The Europeans were way ahead of the United States on data privacy with GDPR. There seems to be a trend - every day [in the US], there's a new call for less centralized government and more states' rights. Every state should have its own policy. It's already playing out on social issues in the United States. There's just no leadership. We need leadership.
AI doesn't mean this is the end of technology. We're never going to stop innovating. It's human nature to innovate and to create. That's who we are.
A positive thought to hang onto as the debate around AI gets ever more febrile.