What is Mapped and why was it developed?

Mapped is an aptitude testing platform that evaluates numerical and analytical reasoning skills. The initial content is aimed squarely at consulting, finance and other analytically demanding businesses. It was designed from the ground up to be more challenging, varied and relevant, while also being more diversity-friendly and more enjoyable for candidates.

Claiming to make a numerical reasoning test harder and more enjoyable at the same time might seem like a stretch. Making it more diversity-friendly may not make much sense either (what could be more impartial than an analytical test?). Hopefully both claims will make more sense by the end of this introductory post.

Why did we make Mapped?

Mapped’s co-creator, Freshminds, runs graduate programmes on behalf of some of the best consulting, technology and finance firms. As part of these programmes, they have used many testing providers with thousands of candidates over many years.

Every product they tried had its own quirks, but all of them threw up the same consistent challenges that Freshminds wished to address:

  1. The effect they had on diversity was troubling. The data showed that pass rates for men and women differed by 8-20% across all the providers Freshminds had tried or encountered through third-party client data. The absolute gap shrinks at the very top and bottom of the range, but still leaves men more than twice as likely to pass (for example, pass rates of 10% for men and 4% for women make a gap of only six points, yet leave men two and a half times as likely to pass). This is not great if you already have a minority of female applicants, and seemed inexplicable when the gap between men and women achieving A/A* in Maths GCSE (a UK school qualification) is only 1.8%. The same pattern holds across different ethnic and socio-economic groups.

  2. Freshminds were not convinced they were testing what counts. Existing products either created a challenge by relying on specific maths knowledge (a “who can remember more from school” competition) or created huge time pressure for relatively simple questions. They did not think speed of calculator use was very interesting. Rapid mental calculation was more so, but not enough by itself. It seemed there was a lot more to test, even within the narrow band of numerical reasoning and problem solving.

  3. Candidates dislike them. Freshminds knew this anecdotally, but surveys established that 94% of candidates had a negative impression of traditional testing, and 85% saw no relevance to their future work. If you are competing for the best graduates, you want to reduce friction wherever possible. The common thread was that candidates felt the tests were arbitrary and bore little or no relation to how good they would be at a given job.

Why not just get rid of them?

Some companies have. Several top graduate schemes in finance and the broader commercial world have done away with them altogether. It’s a potential solution, but not a perfect one.

  1. Many jobs rely on a certain degree of analytical/numerical reasoning but are not specialist numerical roles. If you won’t test for it, the only other way to check is to fight over the same small, demographically skewed STEM pool as everyone else. A solution, perhaps, but you will miss out on some of the best candidates (if McKinsey can find English Literature grads who are sufficiently numerate for their analyst roles, you probably can too), and all the unique skills and perspectives they might bring.
  2. Even if you avoid the pitfalls of 1, losing testing is a real blow for recruitment. Distinguishing between thousands of near-identical CVs of young people, whose achievements will (through no fault of their own) be both limited and largely determined by their social background, is at best difficult, and can become both arbitrary and highly vulnerable to unconscious bias. Adding objective data points is always valuable.

What we need is an objective test that lets you open your doors wide to people from any degree subject or institution, whilst maintaining or even raising standards, and that fixes the problems outlined above: the huge diversity gap, the lack of relevance, and candidate dislike.

When Freshminds met Applied

Improving the user experience and commercial relevance would not be straightforward, but the solutions seemed reachable. The diversity concern, a substantial attainment gap between different groups (most dramatically between men and women), was a bigger challenge.

Freshminds met Applied, a technology company dedicated to removing unconscious bias from recruitment processes. It is also the first technology spin-off from the Behavioural Insights Team (BIT), a well-known group of behavioural scientists and experimental psychologists using “nudge theory” to effect positive behavioural change.

Applied worked with Freshminds to understand how concepts like stereotype threat, risk appetite, competitive confidence and other internal biases were distorting the results. The diagnosis was becoming clearer, and the solutions were not without precedent.

Freshminds and Applied teamed up to build Mapped.

Mapped

As shown above, Mapped had three primary problems to solve going into its 2017 pilot at four global firms. This is the approach it took.

  1. ‘Testing what counts’. Freshminds interviewed their network of analysts, strategists, managers, partners and CEOs at some of the best consulting, finance and technology firms. They asked what was required numerically and analytically on a day-to-day basis in graduate positions. The answers were highly consistent, and surprisingly not explicitly covered by any single existing product:
  • Basic numerical skills (calculations/percentages/ratios)
  • Interpreting data, and understanding its implications
  • Problem-solving and analytical thinking
  • Error spotting and attention to detail

We worked with talented psychologists and business leaders to create content that tested these areas, and we explicitly told candidates what we were testing and why.

  2. A positive candidate experience. We know that a lot of people will never love being tested, but the experience can at least be improved. In our research and pilot programmes, some key themes came out. This is by no means everything, but the feedback has been hugely positive, with the vast majority of candidates preferring Mapped to traditional tests.

  • Candidates want feedback: what were they good at, and what were they bad at? Each candidate gets access to a pool of practice tests and, after completion, detailed results on how they did and areas to work on.
  • They want to feel it’s relevant, not just a hoop to jump through. We’ve made a big point of communicating what is being tested and why.
  • The accidental result of testing what counts is a variety of questions that much more closely reflect interviews. Candidates like that.
  • We’re treating candidates like customers of a service. Applied created a beautiful, user-friendly interface, and we take individual feedback from every single candidate to gather comments and continually improve the platform.

  3. Diversity. Mapped was piloted with four firms in late 2017, and the results were significantly positive: diversity outcomes improved against existing products, in some cases halving the existing pass rate gap. By responding to the thousands of hours of testing data, we are aiming to make it even better in 2018. We’re looking forward to sharing our findings on this.

The Future

In the future, we will be looking at how we can take the same methodology and expand into other areas: for example, a more general graduate test for less analytical roles, one for apprentices, a verbal reasoning test, and possibly a test for engineers. These will develop in line with the needs of organisations that would like to take part in further piloting and be at the forefront of new methods of hiring top talent.

For now, we are really proud to launch more formally, and excited to see where this new venture will take the graduate market and beyond.