How to predict high performers in recruitment
Spotting potential and hiring well is something almost everyone thinks they are good at. We are proud of our successes and tend to brush over or explain away the hires that did not quite work out. Most businesses do not track hiring performance in any meaningful way, but that is starting to change. With that in mind, and with 100 years of data, what are the best ways (statistically) to actually spot a great hire?
Nineteen different methods were evaluated in a broadly cited and well-regarded paper. Some of the methods, like graphology (analysis of handwriting), have quite rightly been relegated to the dustbin of recruitment history. Others are essentially the status quo for most businesses (unstructured interviews and assessment centres, for instance), and the comparison throws up some interesting results.
In this case, predicting performance means percentage increases in output, increased monetary value of output, and increased learning of job-related skills. In other words: producing more output, of higher value, while picking up new skills faster than colleagues.
The way this is measured is predictive validity. This is a statistical term for the extent to which one thing consistently predicts another. A value of 1 is a perfect correlation, 0 is no correlation whatsoever, and -1 is a perfect inverse correlation (anyone who does well on the test is guaranteed to be rubbish at the job).
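To make the idea concrete, here is a minimal sketch (with entirely made-up numbers) of how a predictive validity coefficient is computed: it is simply the Pearson correlation between candidates' selection scores and a later measure of their job performance.

```python
# Illustrative only: hypothetical screening scores and later performance ratings.
# Predictive validity is the Pearson correlation between the two series.
test_scores = [62, 75, 58, 90, 70, 84, 55, 78]
performance = [3.1, 3.8, 2.9, 4.5, 3.5, 4.2, 2.7, 3.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: always between -1 and 1."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(test_scores, performance)
print(round(r, 2))
```

With real hiring data the coefficient is far lower than in this toy example; the point is only that a validity of .5 means "test score explains a meaningful chunk of later performance", not a guarantee for any individual candidate.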
So how do different methods look? Well, something like this:
We can see that there are 3 clear winners out front: structured interviews, work samples and cognitive ability tests (think general intelligence tests like numerical reasoning, for example).
The weaker methods are equally interesting, mostly because of how widely used they are: years of experience, interests, reference checks (i.e. what previous employers thought), years of education; basically, the things on a CV. Also interesting is how poorly unstructured interviews perform compared to their structured counterparts.
Let’s have a closer look at the top 3 in more detail.
1) Work Samples
A work sample is a little simulation of the job to be done. For example, a work sample for an engineer could be: ‘Here is a broken motor. Fix it’. On the other hand, if you are applying to a corporate job, it might involve drafting an email to the CEO and making recommendations on the basis of a particular data set.
Unsurprisingly, if someone is good at doing the job they will be… err, good at doing the job. Bit of a no-brainer that this would be more predictive than most methods. The downside, however, is the time and expense to administer and mark them. Or at least it was: companies like Applied have created awesome platforms for distributing and marking work samples en masse, leading to incredible conversion and retention rates of potential applicants, all in an unbiased, CV-blind way. Definitely expect to see more of this.
2) Cognitive Tests
Full transparency on the product push: Mapped is a cognitive testing platform, so forgive the bias. However, the data speaks for itself; if you isolate managerial and professional jobs, the validity goes up to .58, putting cognitive tests firmly out in front. There really is no cheaper or quicker way to make an initial assessment, especially en masse.
Some companies have pulled back out of concerns about how these tests affect diversity, a valid concern and the reason Mapped exists. The great thing about them, though, is that not only are the downsides not inevitable, but there is nothing more predictive.
3) Structured Interview (and comparison with unstructured interview)
This is interesting; it’s not that interviewing can’t be effective, it’s just that most organisations are doing it wrong, i.e. running unstructured interviews in conjunction with nothing else.
An unstructured interview (i.e. the type of interview done by most organisations) has no fixed format or set of questions. Responses to individual questions are usually not scored, and there is usually only an overall rating (or, even worse, a fuzzy “I like/don’t like them”). This is a funny one because, whereas work samples and cognitive tests admittedly require some expertise and often outside help (although I maintain it is worth it), there is no reason any organisation can’t get this bit right.
All you need to do is ask some well-thought-out questions (the same to everyone) and mark candidates on their individual responses. If you want to know the optimal number of interviews to make a solid judgement, Google have worked out that it is 4, and that the interviewers should not all be senior to the candidate.
What about assessment centres, years of experience, interests and reference checks? You don’t have to abandon them. Assessment centres, for example, might not be as good as structured interviews, but sometimes you have to make a cost/time trade-off against the ideal.
However, you can get exciting and predictive results by combining different methods sensibly. For example, if you use cognitive tests + work samples as the initial screen, validity goes up to .63. If you then add in extra data like structured interviews or assessment centres run by genuine pros, you are definitely on the right track for consistently predicting top performers.
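One simple way to combine methods (a hedged sketch, not how the research papers derive incremental validity, which uses multiple regression) is to standardise each score and average them into a single composite for ranking candidates. All names and numbers below are hypothetical:

```python
# Illustrative sketch: combine two screening scores (made-up numbers) into
# one composite by converting each to z-scores and averaging them, so that
# neither method dominates just because it uses a bigger scale.
cognitive = [62, 75, 58, 90, 70, 84, 55, 78]     # e.g. cognitive test scores
work_sample = [70, 68, 60, 88, 74, 80, 58, 72]   # e.g. work sample marks

def z_scores(xs):
    """Standardise a list: mean 0, standard deviation 1."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

composite = [(a + b) / 2 for a, b in zip(z_scores(cognitive), z_scores(work_sample))]

# Rank candidate indices by composite score, best first.
ranked = sorted(range(len(composite)), key=lambda i: composite[i], reverse=True)
print(ranked[:3])
```

Equal weighting is an assumption for illustration; in practice you would weight each method by its measured validity for the role in question.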
For most organisations it’s about balancing cost/time against results, plus the ego of the occasional self-identified wunderkind who is sure they can spot a winner every time. Who knows, maybe they really can! Remember, though: hundreds of hours of interviewing, and hiring the wrong people, is the most expensive approach of all.
If you would like to see how adding cognitive tests to your recruitment process can help you consistently hire better performers, then get in touch or request a demo with Mapped.
The data is from Frank L. Schmidt, one of the primary researchers in this field, who wrote the paper referred to in the chart in 1998 here; there was an update in 2016 here which reconfirmed the results.