AI kills résumés - what's the plan now?

Brian Sommer, November 7, 2023
AI isn't the silver bullet that's going to positively transform the résumé as we know it...


I know AI, especially Generative AI, is really hot right now. Yes, these tools can do some amazing things (e.g., write code, translate books, etc.), make people more productive and businesses run leaner. 

Yet, with every new technology, innovation or change, there can be unintended consequences. Some of those may be foreseeable and others will blindside us. Humans get surprised because we become so enraptured with the novelty of the new that we tend to drop some of our critical thinking while we’re oohing, fawning and obsessing over today’s shiny new tech.

We need to take a deep breath and consider the impact that new AI tools will have on entire systems (e.g., a company, its ecosystem of employees, its suppliers, etc.) instead of taking a very narrow look at how an AI utility impacts a specific function (e.g., résumé ranking) or user. We need to see these new capabilities and technologies in a broader context to understand the impact they will really have. We need to do this because employees, suppliers, companies, governments, customers, technologies and more are interdependent entities. Actions or changes by one constituency can trigger a myriad of other changes, and smart leaders watch out for these knock-on effects. Only fools rush in blindly.

Let’s look at the Generative AI tools that help HR/recruiters craft better or more inclusive job descriptions, as well as the tools that help job seekers tailor their résumés so as to improve their chances of getting a screening interview.

A jobseeker’s gen AI View

Suppose you’re a jobseeker and you’ve developed a pretty good résumé. But since every potential employer has slightly different position descriptions out on job boards, you will want to tailor your résumé to each unique position. In that way, you can maximize the number of potential screening interviews you can snag. But, all of that tailoring takes time.

An AI tool that lets you point it at the open position’s description and suggests what else your résumé should contain sounds great. What can go wrong with that?

Plenty as it turns out:

  • Some bad actors will also use this technology to pile on a lot of ‘experience’ into their résumés. (For anyone who thinks this won’t happen, just remember all of those keyword stuffing efforts people put into résumés to trick ATS systems.) That additional content may or may not be truthful/legitimate but the effect will nonetheless be to knock your résumé down in the rankings. You won’t get the screening interview even though you are more qualified than the cheater/stuffer.
  • The bad actor threat is actually worse than what appears above. In the past, a keyword stuffer might be lazy or time constrained and might miss some synonyms or keywords. There was at least a chance your résumé could still persevere if it mentioned other redeeming qualities. Now, with a gen AI tool, that same bad actor needs only a few seconds to magically transform their mediocre résumé into pure gold. 
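The ranking dynamic in these bullets is easy to demonstrate. The sketch below is a deliberately naive, hypothetical ranker (not any real ATS product’s algorithm): it scores a résumé purely on keyword overlap with the job description, which is exactly the mechanism a stuffer exploits.

```python
import re

# Toy illustration only -- not any real ATS's algorithm. This naive ranker
# scores a résumé by how many of the job's keywords appear in its text.

def keyword_score(resume: str, job_keywords: set[str]) -> int:
    """Count how many of the job's keywords appear anywhere in the résumé."""
    words = set(re.findall(r"[a-z]+", resume.lower()))
    return len(job_keywords & words)

# Hypothetical job posting keywords and two hypothetical résumés.
job_keywords = {"python", "sql", "agile", "kubernetes", "leadership"}

honest = "Five years building Python data pipelines backed by SQL."
stuffed = "python sql agile kubernetes leadership"  # pure keyword stuffing

print(keyword_score(honest, job_keywords))   # 2 -- only the skills actually used
print(keyword_score(stuffed, job_keywords))  # 5 -- a 'perfect' score, zero substance
```

With a gen AI tool doing the stuffing, producing that second résumé takes seconds, which is why honest candidates fall in such rankings.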

What’s happening in this scenario is that:

  • AI rewards the lazy (and is that who you want to hire for your firm?).
  • AI helps those incapable or unwilling to write a well-crafted, thoughtful résumé and cover letter (and do you want people at your firm who can’t write even a page or two of copy well?).
  • AI helps the mediocre jobseeker rise to the level of a great candidate (and who consciously wants to hire mediocre folks when they really want the best and brightest?).
  • Recruiting will drown in a sea of undifferentiated, similar résumés that all sound the same (and how does that save recruiters’ time?).
  • Recruiters will need new ways to divine which jobseekers are truthfully representing themselves versus some manufactured ideal.

Bottom line: Résumé volumes will increase, the quality of résumés may decline, and it will become harder than ever to identify great talent.

A recruiter’s gen AI View

Now, suppose you’re a recruiter and you’ve developed the guts of a pretty good job/position description. You might even have used a previous one you wrote but tailored it to this new opening. You decide to use a gen AI tool to fortify your handiwork with suggestions it can offer based on the job descriptions out there on the Internet (or in its training databases). 

In a matter of seconds, the tool provides you with a much fuller job description. It took your ½-page effort and expanded it to 1.5 pages. It’s well-written (mostly) and recommends you add specific skills and other verbiage it found on similar job descriptions posted by other employers. It’s good stuff and you incorporate over 90% of the suggested copy. What can go wrong with this?

Plenty as it turns out:

  • The new job description may totally miss the cultural, employment brand and other interpersonal aspects of your firm and how people need to mesh well with others. The tool may find lots of skills, competencies, etc. to add to the description, but how can it know those soft-side points that are critical to your firm’s culture, career longevity, employee experience, etc.? It can’t.
  • The tool has just created a job description that any of your competitors could have written. There’s nothing here that isn’t being said/asked for by other employers. How do you win the war for talent with generic, average, undifferentiated, similar position descriptions? You don’t. 
  • The tool is recommending you lard up your job description with a bunch of ‘requirements’ that are not prioritized for your firm. For example, while it may be great to have a programmer fluent in JSON who also has leadership skills, a PMI certification and more, which of these are in the top three of CRITICAL needed skills? I don’t know. The AI tool doesn’t know and neither does the applicant.
  • This job description on steroids will likely generate a lot of applications of people who are, in actuality, overqualified for what they’ll really be doing, are too expensive for the work to be completed, or, have padded their résumés to get a screening interview. None of those are satisfactory results.

What’s happening in this scenario is that:

  • Recruiting is NOT getting the candidates it actually wants and can afford to pay.
  • Some great candidates will balk at applying as they feel the recruiter is apparently looking for someone with impossible or superhuman skill sets. They won’t waste their time applying for a position they perceive is unwinnable.
  • Jobseekers won’t want to tailor their résumé for this position as the job description is already longer than their résumé. It’s a lot of work to do for a low-percentage potential payoff. They would rather focus their energies on open positions whose job descriptions are more rooted in a real-world situation.  

Bottom line:  Creating job descriptions with aggregated data may not (and likely will not) generate the job seeking candidates you actually want to attract.

Losers abound

So, when you apply systems thinking to these and other new AI ‘solutions’ you will see: 

  • Many unintended consequences
  • Unplanned behaviors by certain constituents/actors
  • A need for new measures AND countermeasures to be created and embedded within the tech, processes, controls and more

In the recruiting example, everyone is a loser – except for those jobseekers who use AI to game the recruiting process. Call them what you like: shrewd, conniving or cheats – but that’s the human race and they will act in a variety of manners from angelic and ethical on one extreme to despicable on the other. 

The losers above were:

  • The people who earnestly took the time to tailor their résumé to align with a (mostly machine-generated) job description. It’s a shame that another AI tool, a résumé ranking or ATS solution, will ensure that no human ever even glances at any but a percent or two of them. Readers should note that this problem is quite real today and may only get worse when AI ‘helps’ these processes.
  • The companies who hired suboptimal candidates who got their résumés moved up the ranking with some help from AI.
  • All jobseekers who don’t have years and years of experience.
  • Recruiters who thought this tech would make their jobs easier and more productive. 
  • Jobseekers who bothered to do more than the bare minimum. AI isn’t helping the best and brightest move to the top of the queue, so why should a great jobseeker work hard on polishing a résumé that will get overlooked because some charlatan loaded up theirs with dubious claims and experiences?

So, at the end of the day, Recruiters get a lot of résumés/applications of people they can’t possibly look through. The résumés they get are of doubtful provenance and usefulness. The employer doesn’t get the best and brightest people to hire/interview. The best candidates get the shaft. 

AI tools are on track to ruin the résumé and these incremental AI tools aren’t improving the status quo. They’ll make it worse.

Why didn’t AI provide huge benefits? The simplest answer is that people used an incremental (not fundamental and strategic) approach to improving Recruiting. You can’t bolt a V-8 engine to a horse-drawn carriage and expect great results. You need to think holistically and purposefully so that your efforts really will deliver the intended results. Right now, we have a lot of vendors slapping AI on old software/processes and expecting massive, transformative benefits to ensue.  It might not work or it might not be the right tool to use to solve the problem. 

We need new ideas before we assume AI is the answer to everything. 

As an example, let’s look at the recruiting problem from a different approach. Maybe a different perspective can help us see if AI is even needed or if it can do something different. This example looks at Plum.

Plum – a very different approach to talent acquisition/optimization

The strategic problem today is that few companies are actually winning the war for talent. There are a couple major challenges they face but using old approaches and solutions are NOT moving the needle. Specifically, firms have a supply problem (i.e., too many firms have over-fished the talent pool they’ve been fishing in for years/decades) and a pastoral care problem (i.e., they aren’t doing enough to retain/develop/utilize the people they already have). 

We need to find technologies (not just AI) that solve one or both of those strategic challenges. And, we need to think more expansively, not incrementally. Bits and pieces of AI sprinkled on certain process steps might lead one to expect (but not necessarily realize) specific benefits, even if we can work through the unintended consequences.

What we need to do is to dramatically re-imagine what recruiting and talent management could look like. One of the more radical approaches I’ve seen has been developed by Plum. 

Plum doesn’t make the résumé/application the first and key input in talent acquisition. Instead, it has developed a set of choices prospective hires must work through. There is no right or wrong answer to these, but they do indicate which people possess certain talents, behaviors, etc. Plum claims that it can condense 140 hours of job evaluations into eight minutes.

Plum starts by getting operations, hiring and other managers to identify the key behavioral requirements of specific jobs. The assessment tool looks at KBIs (key behavior indicators). Plum goes a step further by recommending interview questions based on the position requirements. According to Plum:

The assessment the job seeker/employee completes never changes. Each jobseeker or employee completes a single universal online assessment that is matched against the key behavioral requirements of different jobs. That information helps firms match people to the jobs where they have the greatest alignment with the behavioral requirements of the specific job.


Employers ask ALL jobseekers to take the assessment. Candidate assessment results get matched to the requirements that managers are seeking. The best matched persons move forward in the recruiting process. Only after the best matched people are identified do recruiters look at résumés or applications. 
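The match-then-shortlist flow just described can be sketched in a few lines. To be clear, this is a hypothetical illustration of the general idea, not Plum’s actual (proprietary) scoring: each candidate’s universal behavioral assessment is compared against the behavioral requirements managers set for the role, and only the top matches proceed to a résumé review.

```python
from math import sqrt

# Hypothetical sketch -- NOT Plum's actual algorithm. Trait names, scores
# and the use of cosine similarity are all illustrative assumptions.

def match_score(candidate: dict[str, float], role: dict[str, float]) -> float:
    """Cosine similarity between a candidate's scores on the role's
    required behavioral traits and the role's requirements (0-1 scale)."""
    traits = role.keys()
    dot = sum(candidate.get(t, 0.0) * role[t] for t in traits)
    c_norm = sqrt(sum(candidate.get(t, 0.0) ** 2 for t in traits))
    r_norm = sqrt(sum(v ** 2 for v in role.values()))
    return dot / (c_norm * r_norm) if c_norm and r_norm else 0.0

# Behavioral requirements a hiring manager set for one role (illustrative).
role = {"teamwork": 0.9, "adaptability": 0.8, "persuasion": 0.3}

# Every candidate completed the same universal assessment once.
candidates = {
    "A": {"teamwork": 0.85, "adaptability": 0.75, "persuasion": 0.4},
    "B": {"teamwork": 0.2, "adaptability": 0.3, "persuasion": 0.95},
}

# Rank by behavioral fit; résumés are only read for the top of this list.
shortlist = sorted(candidates, key=lambda n: match_score(candidates[n], role), reverse=True)
print(shortlist)
```

Cosine similarity is just one plausible choice of match function here; any measure that rewards alignment on the role’s key behaviors would serve the same purpose in this sketch.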

Let’s be clear about one key point. Plum is not evaluating people solely on hard skills (e.g., Revenue Recognition experience) but rather on their ability to be successful in the role/firm. Plum has found that better matches along these other criteria dramatically increase retention rates (77%). It has also learned that customers value these matches, as 93% of hiring managers would rehire people matched by Plum.

Jobseekers seem to like the process and the results. Some even post their Plum profile on LinkedIn.

On the talent retention front, Plum can facilitate an assessment of all employees within a firm. In this scenario, a company can learn which employees are in roles they will excel in, which ones could be moved into other positions they’d love even more, and which persons are a bad fit and should be re-assigned or get some re-skilling help. This capability maps cognitive skills, social science, and personality traits to all available (or existing) positions. The goal of greater retention is realized when people are placed into positions they love and excel at.

Plum can also add value in composing teams, in identifying training and development efforts, and in succession planning.

Plum’s customer list includes Whirlpool, Hyundai and Deloitte. 

My take

Businesses are being presented with a number of new AI-powered capabilities today but not all of these are guaranteed to be winners. New tech isn’t always good tech.  Buyers have a fiduciary responsibility to their employer to thoroughly vet any new technology and ensure that it delivers value, mitigates risks and functions in an ethical, fair and reasonable manner. 

Bolting AI onto an existing process might add value in some situations but not in others. The recruiting example above certainly triggers a number of concerns.  Buyers would be wise to consider all manner of solutions today from old-school approaches, cutting edge solutions, AI-enhanced solutions and everything in between. Your best solution may not be on the bleeding edge or it may represent a radical, not incremental, approach to solving your firm’s thorny business or strategic problems.

Finally, tech buyers must bring a more nuanced thought process that considers how other actors will utilize new technologies. While some will use them ethically, others will not. That means your firm will need to create countermeasures to deal with the new risks, actions, etc. that others will bring as a result of these new technologies.

Good luck!
