AI is currently too expensive to take most of our jobs, find MIT researchers

Derek du Preez, January 24, 2024
Summary:
A new study from MIT FutureTech aims to understand whether our fears about AI coming for our jobs are warranted, by factoring in the cost of deploying the technology.

(Employees on a conveyor belt leaving their jobs, a metaphor for the Great Resignation © Aleutie - Shutterstock)

Given the recent advancements in artificial intelligence (AI), and how rapidly the likes of ChatGPT have permeated the public consciousness, there has been a wave of concern about what impact these technologies will have on our jobs and work. It’s not an entirely new concern, of course. The threat of AI and robots taking over the world and rendering us all useless lumps of flesh has been a favorite theme of science fiction storytellers for decades. However, given the onset of Large Language Models, their ability to replicate human speech, and how widely these tools are being used, people are becoming increasingly worried that this dystopian future is closer than we think. 

And to be fair, some of this concern is understandable, given announcements from some large companies that they are cutting jobs thanks to advancements in AI. However, widespread job displacement and structural changes to the workforce as a result of AI are different to a select few organizations introducing sophisticated automation to save on costs. The former would mean that a significant proportion of us - particularly white collar workers - face an imminent threat to our employment. 

And whilst some of the owners of the companies touting these tools would like to believe the technology can replicate what it means to be human, I think a lot of us would argue we aren’t quite there yet. 

So, how worried should we be about AI coming for our jobs? That is the focus of some new research from MIT FutureTech, which doesn’t just look at the capability or effectiveness of AI in carrying out tasks end-to-end, but also acknowledges the costs involved in developing and deploying such technology. In other words, even if the technology exists for companies to replace human workers en masse, is it actually cost effective to do so? 

In short, the answer is no. Researchers at MIT found that even if it is technically feasible to replace certain tasks end-to-end with a machine and/or algorithm, the technology is still too expensive for almost all organizations to consider it. 

The paper - titled ‘Beyond AI exposure: Which tasks are cost-effective to automate with computer vision?’ - takes the use of computer vision as an example of an AI-enabled technology that could be used in the field today, assessing its current and future impact on job displacement. 

It provides a hypothetical example of a bakery considering whether to use computer vision to visually check its ingredients, to ensure they are of sufficient quality (e.g. unspoiled). Currently, bakers carry out this check by eye; in theory, it could be automated by adding a camera and training a system to detect food that has gone bad. 

Using publicly available labor data, the study notes that checking food quality comprises roughly 6% of the duties of a baker. For a small bakery with five bakers earning typical salaries, the potential labor savings from automating this task come to $14,000 per year. The study notes that this amount is “far less than the cost of developing, deploying and maintaining a computer vision system” and that it is not economical to substitute human labor with an AI system in this scenario. 
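To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope calculation. The per-baker salary is an assumption - the article gives only the 6% share and the $14,000 result, so the figure below is inferred to make those line up:

```python
# Back-of-the-envelope version of the bakery example above.
# Assumption: the per-baker salary is inferred from the 6% task share and
# the $14,000 result; it is not stated in the article or the paper.
num_bakers = 5
task_share = 0.06          # food-quality checks as a share of a baker's duties
annual_salary = 46_700     # assumed typical salary per baker (USD)

labor_savings = num_bakers * task_share * annual_salary
print(f"Potential annual labor savings: ${labor_savings:,.0f}")  # ~$14,000
```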

Of course, this is just one small example. However, the study found that this is quite typical for many scenarios. The researchers said: 

We find that only 23% of worker compensation “exposed” to AI computer vision would be cost effective for firms to automate because of the large upfront costs of AI systems. 

The economics of AI can be made more attractive, either through decreases in the cost of deployments or by increasing the scale at which deployments are made, for example by rolling-out AI-as-a-service platforms, which we also explore. 

Overall, our model shows that the job loss from AI computer vision, even just within the set of vision tasks, will be smaller than the existing job churn seen in the market, suggesting that labor replacement will be more gradual than abrupt.

Path to cost-effectiveness

The overall conclusion of the research is that, at present, the AI technology considered here is far too expensive for most firms to consider for most tasks. The researchers even modeled an aggressive ‘bare-bones deployment setup’, which assumes free data, free compute and only minimal engineering effort. Even in that scenario, which is quite unrealistic, the amount of economically attractive firm-level automation only increases to 49% (still a minority of firms). 

However, the paper does also consider ‘paths to AI proliferation’, taking into account changes in the current economic inputs that could accelerate the rate of job displacement. It highlights two important ways in which the attractiveness of AI could be substantially increased. The researchers write: 

The first is deployment scale, finding ways for AI systems to automate more labor per system. The second is development costs, inventing less expensive ways to build AI systems. Here, we explore how these changes would affect the pace of AI deployment. 

However, even considering these factors, they find challenges to proliferation. The report states: 

Over time, changes in the cost of AI systems or the scale at which they are deployed have the potential to increase automation. Scale can be gained either by firms getting larger (e.g. through more market share) or through the formation of AI-as-a-service operations. 

The former effect is unlikely to be meaningful in the short term, because it would require too great a redistribution of firm sizes in the economy. The latter, where AI system development costs could be offset by deploying the system across many firms, would make many more systems economically attractive, but it would likely require industry collaborations or policy initiatives to enable data sharing across companies. 

If this were to happen, it would also imply a major restructuring of industries, as tasks are separated out from firm operations to third-party providers. The economic advantage of machines will also improve as computer vision deployments become cheaper. 
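The amortization logic behind that AI-as-a-service argument is worth spelling out: a provider builds the system once and spreads the development cost across many client firms. Here is a toy sketch of that mechanism - all figures below are invented for illustration and are not from the paper:

```python
# Toy illustration of AI-as-a-service economics: one provider builds the
# system once and spreads the development cost across many client firms.
# All figures are assumptions for illustration, not taken from the paper.
dev_cost = 500_000        # assumed one-off cost to build the vision system
amortize_years = 5        # assumed useful life of the system
per_firm_deploy = 5_000   # assumed annual per-firm deployment/maintenance cost
annual_savings = 14_000   # labor savings per firm (the bakery figure above)

for n_firms in (1, 10, 100):
    annual_cost = dev_cost / (n_firms * amortize_years) + per_firm_deploy
    verdict = "attractive" if annual_savings > annual_cost else "not attractive"
    print(f"{n_firms:>4} firm(s): ${annual_cost:,.0f} per firm per year -> {verdict}")
```

A single firm bears the full development cost and the economics fail; spread across a hundred firms, the same system becomes attractive - which is exactly why the paper flags scale as a path to proliferation.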

And notably, it states:

But even with rapid decreases in cost of 20% per year, it would still take decades for computer vision tasks to become economically efficient for firms.
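A rough calculation shows why even a steep decline rate plays out over decades. Assuming a constant 20% annual fall in costs (an illustration of the quoted claim, not the paper's actual model), a system that is some multiple too expensive today breaks even only after:

```python
import math

# Rough illustration of the quoted claim: even with costs falling 20% per
# year, closing a large cost gap takes decades. Assumes a constant decline
# rate, so after t years a system costs (0.8 ** t) of today's price.
annual_decline = 0.20

for cost_gap in (10, 100, 1000):  # how many times too expensive today
    years = math.log(1 / cost_gap) / math.log(1 - annual_decline)
    print(f"{cost_gap:>5}x too expensive -> ~{years:.0f} years to break even")
```

A system that is 100x too expensive today - not unusual, given the bakery example - would take roughly two decades to become economical at that rate.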

My take

A healthy dose of realism from a study that appears to be backed up with sufficient data. Just because we can doesn’t mean we will.

I think during this adjustment period of AI adoption, rather than asking ‘Is AI coming for our jobs?’, we should be asking ‘What do we want AI to do for us?’. At the moment it feels like the people and companies developing AI are doing so with a select few interests in mind, without too much concern for the impact on what makes us human. Not to mention that studies are already pointing to capital income and wealth inequality always increasing with AI. We need to rid ourselves of the notion that ‘the market dictates everything’ when we already know that not to be true. Regulation and civil society can play a role in shaping this - and we shouldn’t assume that the interests of a few at the helm reflect the needs of an already fragmented society. 
