Generative AI and your job - where will you end up?

Cath Everett, July 6, 2023
Summary:
Although the ChatGPT debate is currently focused on whether white-collar jobs will be destroyed, ironically one of the key reasons for adoption is labor shortages.


There has been great debate recently over the threat posed to white-collar jobs by generative AI tools, such as ChatGPT. But according to researchers at Stanford University and France’s emlyon business school, predictions that AI algorithms will replace professional posts are "based on abstract and naive notions" of such roles and how they are performed.

The premise is that machines are unable to replicate the value of "relational expertise" - or expertise that is generated from, and applied when, working with others in an organisational context - as it is not "extractable or codifiable". As a result, professional roles are more likely to change rather than be destroyed, the researchers say.

Other experts are not quite so convinced though, believing the situation to be rather more nuanced. Helen Poitevin is a Distinguished Vice President Analyst at research and consultancy firm Gartner HR. She says:

There’s a tendency to say ‘AI is good at this’ and ‘humans are good at that’ so let’s keep them, which means the debate becomes ‘what’s safe from AI?’ But we forget that humans are choosing how to work with technology - and even if it’s not perfect, they’ll still go with it for cost reasons. It’s a comparable situation to photocopiers years ago – people didn’t want to go to another floor to have experts photocopy documents for them so they’d use the machines on their own floor even though they were lower quality. It’s about convenience and ease of access, and that will drive more people to AI too.

Another consideration here is that while roles will undoubtedly change due to technology, there are also likely to be fewer of them to go around. Applied Futurist Tom Cheesewright explains:

It’s necessary to differentiate between ‘jobs’ and ‘work’. Jobs are made up of a collection of tasks so, in some ways, the researchers are right. No machine can do all the tasks making up a job as some are too complex and relationship-driven and can’t be extracted or coded. On the other hand, jobs aren’t indivisible, discrete units. So really there’s no such thing as a solicitor or accountant – they’re just an arbitrary collection of tasks based on years of experience of what works best.

A key point in this context is that this “arbitrary collection of tasks” does not and will not remain static. Cheesewright continues:

If a new means of production comes along, it doesn’t mean the nature of a job won’t change. Today you may need a rounded set of skills to be a solicitor but that doesn’t mean you couldn’t take the job apart and redistribute tasks. It’s not that you’d be replacing solicitors with AI. You just don’t need as many of them, as AI will do a chunk of the work.

Will AI shift the employer/employee balance of power?

Jens Loehmar, Chief Technology Officer for Continental Europe and the DACH region at Workday, believes that in the current context this situation can only be a good thing. He explains:

The bigger picture here is that companies across Europe are experiencing a huge skills gap fuelled by demographic change in the form of an ageing population, which over the next 10 or 20 years could have a significant impact on the ability of the economy to grow. So using AI to enhance how they work will enable companies to eliminate inefficiencies and be more productive in the long run. It’ll enable them to grow despite skills shortages and demographic changes.

But the question is: could this situation shift the balance of power, which has rested with employees since the so-called Great Resignation, back to employers over time? And could we end up witnessing an AI-enabled ‘Great Redundancy’ for those unwilling to conform to their employers’ requirements in that other great debate du jour, hybrid working and the return to the office?

The challenge here is that while many employers appear keen to see staff return at least some of the time, just as many employees are actively or passively resisting such mandates. The subject hit the headlines again last month when Google warned employees that it would crack down on anyone failing to return at least three days a week. The tech giant also indicated that compliance would be monitored using data gleaned from office badges and attendance levels would henceforth be included in all performance reviews.

But Poitevin thinks it unlikely that employers, including Google, would take such sanctions further and use AI as a stick to beat staff back into the office. Instead, she believes it will be the ongoing competitiveness of the labour market that ironically generates the largest shift in circumstances:

Talent constraints will push companies to automate more, as will a desire to be more effective with generative AI. The problem is that the more AI is used to automate parts of jobs, the harder it becomes to attract people as most don’t want to do work that’s too automated. So companies turn to more automation to fill the shortfall, and it becomes a vicious circle.

Cheesewright agrees:

What we’ll end up with is a smaller workforce, but one that is highly assisted by technology, leveraging uniquely human assets to do the highest value work. Historically, the workforce model was pictured as pyramid-shaped, with lots of graduates at the bottom who could be promoted into management. But the Boston Consulting Group showed the model changing to a rocket ship where the wings consist of AI. The issue is, if you’ve not got a pyramid, what happens to everyone else?

Dealing with the knock-on effects

But the situation will have other knock-on effects beyond just white-collar job losses. Poitevin explains:

New roles and business models will emerge, but they’ll also create a race for performance and increased performance expectations for individuals. You can compare it to runners over the last few decades as they’ve increasingly used technology to perform better. The time required to run 100 yards has changed substantially, which in turn changes what good looks like and what people expect. So there’ll be a need for speed in terms of responsiveness and providing certain experiences, and the pressure will be high if people can’t use the tools available to get ahead of others. They’ll be out of the game and running round the block for their own exercise.

As to just how many roles are likely to change, she compares the situation to that of pilots today. While in the past they used to fly aeroplanes manually, the focus of the role has now shifted to one of reading electronic monitors. As Poitevin points out:

You have to be very skilled at using these tools and it requires high levels of performance. But humans are creative in different ways, and some will be heavily impacted by this shift. So I agree with Carl Frey’s book ‘The Technology Trap: Capital, Labor and Power in the Age of Automation’ – there’ll be a negative impact for some people and there’ll be fallout in society. Over time, certain sub-groups will raise productivity levels and it’s highly likely to be positive for society overall. But in the short term, certain groups will be very negatively impacted in their ability to thrive and work effectively in society. Frey’s book pointed to high school or secondary school educated employees who have been in the workforce for about 10 to 15 years and whose jobs will disappear before they’re ready to stop working.

The upside is that it is still uncertain how long it will take for generative AI applications to be adopted in any consistent way by organizations, which means there is still time to take positive action. Cheesewright explains:

It’s not clear how long it’ll be before generative AI will pass muster. It’s certainly not ready to provide enterprise-scale applications yet without an enormous skills and time input. But it is clear that it’ll lead to an acceleration of inequality. Ultra-efficient robots can do nothing but concentrate capital, which is why regulation, including around tax policy, is important. Although AI is currently creating rather than destroying jobs as people train systems and the like, in future there’ll absolutely be disruption of certain industries.

To address this situation, he believes there are two options. On the one hand, the state can accept it has to pick up the slack and take responsibility for retraining and reskilling the workforce. Concurrently though, it would also need to change the benefits system, education policy and tax regime. On the other hand, Cheesewright adds:

The state could use a combination of cultural change and regulation to place more of a burden on employers to both pay and train people appropriately. Employers still aren’t investing enough in the people they already have, as opposed to going out on the open market. But, ultimately, the point is that the social contract has to sit somewhere. The problem is that, like climate change, no one is currently grappling with these issues in any serious way, which includes understanding what all of this could do to the economy and how we could best support it.

My take

While heated debate about AI has been going on for years, the consensus is that the impact on those in work, those put out of work, and the wider economy will be significant. Despite this, there remains a lack of concrete action in terms of regulation or trying to mitigate the potentially severe societal impacts – and all the while deployments will continue to spring up like mushrooms for reasons of cost, convenience and, ironically, labor shortages.
