Welcome back! This is the second part in our series on Higher, our recruiting platform that we describe as:
A talent ranking system that is human-centric, altruistic and powered by an ergonomic incentive network
Part 1: Talent Ranking Evolved: Introducing Higher
Part 2: Human-centricity and the limits of AI
Our last article covered Talent Ranking; in this one I’m going to cover Human-Centric. So first…
Why Human-Centric?
This is a fair question to ask. Why do we believe human-centricity is important? Well, first we need to understand a little about the alternative: Artificial Intelligence or Machine Learning (referred to as AI for the rest of this article).
Where do we already have AI in the world of talent acquisition?
Candidate sourcing
Given a job spec, an AI will search talent pools like LinkedIn and find ‘matching’ candidates, typically using natural language processing, e.g. does their profile mention the skills we’re looking for?
Candidate filtering
Given some candidates and a job spec, an AI can filter those candidates for best fit. Much like the sourcing AI, it will use natural language processing to find terms in the candidates’ CVs.
Reference checking
Given a candidate, their CV and their social profiles, an AI can check references already given, or even request references from relevant contacts, e.g. finding a previous manager listed on LinkedIn and sending them an automated request for a reference.
Candidate-role-fit
Given a candidate and many open positions in, say, a group of companies, an AI can help select the role the candidate is the best fit for. As with sourcing and filtering, it uses terms to create matches.
Candidate ranking
Given a job spec, candidates and their CVs, an AI can create a ranking based on the available data, again most likely using natural language processing techniques to extract terms to rank on.
… among others.
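Strip away the vendor language and most of these tools reduce to the same operation: pull terms out of a job spec, pull terms out of a CV, and score the overlap. Here is a deliberately minimal sketch of that idea in Python; the sample data, the tokenisation and the scoring are illustrative assumptions on my part, not any vendor’s actual implementation.

```python
import re

def extract_terms(text):
    """Lowercase the text and split it into a bag of word-like terms."""
    return set(re.findall(r"[a-z0-9+#]+", text.lower()))

def match_score(job_spec, cv):
    """Fraction of the job spec's terms that also appear in the CV."""
    spec_terms = extract_terms(job_spec)
    return len(spec_terms & extract_terms(cv)) / len(spec_terms)

job_spec = "Senior engineer with Python, Kubernetes and AWS experience"
candidates = {
    "Candidate A": "Ten years of Python and AWS, led a platform team of eight",
    "Candidate B": "Embedded C developer, strong systems and hardware background",
}

# Sourcing, filtering, role-fit and ranking all reduce to sorting by this score.
for name in sorted(candidates, key=lambda c: match_score(job_spec, candidates[c]), reverse=True):
    print(name, round(match_score(job_spec, candidates[name]), 2))
```

Real products dress this up with synonym lists, weighting and fancier models, but the lens is the same: whatever happened to make it into the text.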
Why do we have them? Simple: to deal with volume. We’ve gone from roughly 120 applicants per role 10 years ago to roughly 250 last year (on average). Technology has been employed to help manage the load, but it was the wrong technology, and I’m going to outline the fundamental reasons why.
A Small Lens and A Chef’s Menu
Imagine you’re in a dark room and someone has magically taken away all your senses apart from touch. There’s something in the room with you, and you don’t get to leave until you work out what it is.
Okay, you think, this is weird. You start feeling around on the ground and touch something cold and smooth. You keep feeling around… Now it’s warmer and rough and feels as though it’s connected. You follow the shape of it further, it’s still warm and rough, but now you need to stretch your arm to continue the outline. The contour continues beyond your reach, but you get the sense that it connects to something smooth and cold again on the other end. Moving over, you find a rough, elongated protrusion, then something hairy, something wet and warm and, finally, something thin and flexible.
You could be here for hours, possibly even days.
What you’ve just experienced can be likened to a small lens through which you view the world. In this case, a lens of pure physicality.
AIs are limited by lenses as well. Their view of our rich, multi-dimensional world is highly constrained. More specifically, AI in the talent world can only see candidates through their digital artefacts: CVs, digital profiles, form data and so on.
Here’s another example. You’re the owner of a restaurant and you need a new chef. Now would you employ one purely based on their written menu, without tasting any of their food?
I’d hazard a guess not. Why? Because we have a saying:
the proof is in the pudding
Imagine the state of our restaurants if all owners hired chefs purely through their written menus. What do you think would happen? What would the food be like?
Chefs who could write and present menus well would prosper over those who couldn’t. But is writing and presenting a menu the measure of being a chef? It’s certainly part of it, yet it says nothing about their ability to manage stock or run a kitchen, let alone what their food tastes like.
CVs are the menus of our domain. They capture something, but I don’t hire people to write CVs.
Hopefully you can see where I’m going with all this. We have two irreducible problems:
- AI “sees” a tiny fraction of our multi-dimensional world, and the lens it looks through can only read our digital artefacts.
- Our digital artefacts are only representations of the lived and experienced world, and can never be anything but. They’re the menu, not the chef.
(An elephant, if you were wondering)
Search and SEO
Another angle through which to examine the world of AI is its effectiveness.
I’m going to take one of the best examples of AI we have, Google Search. Have you ever wondered why SEO (search engine optimisation, the practice of becoming more visible in Google Search) exists?
The best engineers in the world have spent decades working on one of the greatest AI achievements in human history… and it still requires a whole additional industry (current market size ~$40 billion) to make it work effectively.
So, given that Google has had billions of dollars to invest in their primary product and it still requires an industry to better optimise or game it, how well do you really think existing and emerging AI systems in the talent world work? And before you answer, bear in mind that they’ve had far less investment.
In fact, we now know the answer. Harvard Business School put out a report on #hiddenworkers, in which they name the AI in Applicant Tracking Systems as a large contributor to keeping #hiddenworkers hidden. On average, over 50% of applicants are filtered out of any AI-driven hiring campaign. The factors came down to:
- Term-based exclusions. “We’re looking for x, y and z”, and so anyone without x, y or z in their CV is automatically filtered out.
- Inferred criteria. “We’re not interested in candidates who have had employment gaps”, and so any candidate with a gap in their CV is filtered out without a chance to appeal.
This wouldn’t be a problem if those candidates genuinely weren’t suitable (read: if the AI were doing what it was intended to do), but unfortunately, as the report points out, these systems are filtering out perfectly viable candidates.
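To make those two factors concrete, here is a crude sketch of the sort of rule such a filter applies. The required terms, the six-month gap threshold and the data shapes are all assumptions for illustration, not taken from any real ATS product; the point is how mechanically a viable candidate gets dropped.

```python
from datetime import date

REQUIRED_TERMS = {"python", "kubernetes", "aws"}  # "we're looking for x, y and z"
MAX_GAP_MONTHS = 6                                # inferred "no employment gaps" rule

def months_between(earlier, later):
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def passes_filter(cv_text, employment):
    """Term-based exclusion plus an inferred 'no gaps' criterion."""
    terms = set(cv_text.lower().split())
    if not REQUIRED_TERMS <= terms:
        return False  # missing x, y or z: automatically filtered out
    jobs = sorted(employment)  # list of (start_date, end_date) pairs
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return False  # a gap in the CV: filtered out, no chance to appeal
    return True

# A strong candidate who took a 15-month career break is rejected outright.
cv = "python aws kubernetes terraform team lead"
history = [(date(2015, 1, 1), date(2018, 6, 1)), (date(2019, 9, 1), date(2024, 1, 1))]
print(passes_filter(cv, history))  # False
```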
There is also now a growing market of ATS optimisation experts, much like the SEO experts: they will help candidates adjust their CVs to be more visible to your ATS AI, for a price. I’ve even advised something similar myself. Friends have come to me asking how to land jobs, and one of the first things I tell them is to tailor their CV specifically to the role at hand and ditch anything irrelevant. I even keep three versions of my own CV: as a manager, as an engineer and as an entrepreneur.
Trust and Verification
At this point we’ve established that:
- AI can only “see” through digital artefacts
- These artefacts only represent a tiny slice of the real world
- AI efficacy isn’t really what’s promised
What I haven’t yet mentioned is trust. Even if you could combine the best sensors we have with the fastest computers in the world, install the best AI we can conjure and assemble all the data you could ever want, you’d still have issues of trust. How can you know that an artefact (say, a CV) is an accurate portrayal of the real world?
The answer right now is you cannot.
Candidates can always alter their artefacts to their advantage or disadvantage because there is no tether between the real world and the artefact itself — at least not without someone doing some extra work.
That’s to say, if I put on my CV that I worked at Google as a Senior Software Engineer between 2015 and 2017 (spoiler: I didn’t), only those willing to check my references all the way back to that period would ever suss me out. And even then I could socially engineer my way around the problem: give my mate’s number as that of my manager at the time.
It comes down to simple arithmetic: the benefit I gain by association with Google (high) far outweighs my cost of falsifying the CV (low), while the cost of the due diligence required to catch it (very high) means few will ever check.
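As a purely illustrative back-of-the-envelope calculation (every number below is invented), the incentive looks something like this:

```python
# Invented numbers; only the shape of the comparison matters.
cost_to_falsify = 1            # an afternoon editing a PDF
benefit_if_believed = 50       # interviews, offers, salary uplift from "Google" on the CV
chance_of_being_caught = 0.05  # few employers check references that far back

expected_gain = benefit_if_believed * (1 - chance_of_being_caught) - cost_to_falsify
print(expected_gain)  # 46.5: falsifying "pays" until verification becomes much cheaper
```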
And, finally, there’s a large group of people who don’t even recognise their own capabilities and downplay them in their artefacts. We’ll come back to them in a later article.
Human-centricity and Higher
All this isn’t to say AI is useless. There are many specific applications of AI that outperform human experts many times over, but those specific tasks aren’t nearly as complex as real-time social interactions.
We as individuals have the ‘machinery’ to navigate this complex landscape, and once we have navigated it we carry a memory of it: first-hand experience, not after-the-fact, designed-for-purpose artefacts.
Our issues, then, lie in enabling, capturing and communicating the right experiences from the right people at the right time — and empowering the consumer to understand and trust them quickly… at scale.
Every solution before has failed in at least one of these aspects. Higher is built to overcome them all, and furthermore to benefit people outside of the hiring process at the same time. We’ll cover that in the next article, on Altruism.