AI pre-screening tools have taken hold in large organisations in under five years. Yet data on their real-world effects remains scarce, scattered, and often produced by the vendors themselves.
What independent studies say
A 2024 analysis by the University of Geneva, covering 12 Swiss companies that use applicant-tracking systems (ATS) with automatic scoring, found a clear pattern: candidates from top French grandes écoles are systematically ranked above otherwise-equivalent profiles trained at less well-known institutions.
The reason is mechanical: the models were trained on the same companies’ past hiring decisions, which already disproportionately favoured those schools.
What companies that notice are doing
Three documented approaches:
- Impact audit: statistically checking whether the tool produces different exclusion rates by gender, geographic origin or type of education
- Hybrid scoring: using AI for an initial filter, but maintaining a quota of “off-score” files reviewed manually
- Evaluator rotation: not letting the same manager systematically validate the algorithm’s recommendations
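The impact audit described above can be sketched in a few lines. This is an illustrative example, not the methodology of the Geneva study: the group labels, the sample data and the use of the "four-fifths rule" threshold are all assumptions chosen to show the shape of the check.

```python
# Impact audit sketch: compare selection rates across groups and flag
# any group falling below 80% of the best group's rate (the common
# "four-fifths rule" heuristic). All data here is illustrative.

from collections import defaultdict

def selection_rates(records):
    """records: list of (group, passed_screen) pairs. Returns rate per group."""
    totals = defaultdict(int)
    passed = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """True if a group's selection rate is at least threshold * best rate."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Hypothetical screening outcomes by type of education
records = ([("grande_ecole", True)] * 40 + [("grande_ecole", False)] * 10
           + [("other", True)] * 20 + [("other", False)] * 30)

rates = selection_rates(records)   # grande_ecole: 0.8, other: 0.4
flags = four_fifths_check(rates)   # other fails: 0.4 / 0.8 = 0.5 < 0.8
```

The same comparison can be run on any protected attribute the audit covers (gender, geographic origin); the check itself is deliberately simple so that it can be rerun every time the model or the applicant pool changes.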
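The hybrid-scoring approach can also be sketched. The function name, parameters and quota sizes below are hypothetical, a minimal illustration of the idea: keep the AI's top-ranked files, but reserve a fixed quota of below-cutoff files for manual review.

```python
# Hybrid scoring sketch: AI score drives the initial filter, but a fixed
# quota of "off-score" files is sampled from below the cutoff for human
# review. Function name and quota sizes are illustrative assumptions.

import random

def hybrid_shortlist(candidates, scores, k_top=20, k_offscore=5, seed=0):
    """Return (top-scored files, random quota of below-cutoff files)."""
    ranked = sorted(candidates, key=lambda c: scores[c], reverse=True)
    top = ranked[:k_top]
    rest = ranked[k_top:]
    rng = random.Random(seed)  # seeded for reproducible audits
    offscore = rng.sample(rest, min(k_offscore, len(rest)))
    return top, offscore
```

The random sample is what gives the audit its value: if manually reviewed off-score files are regularly judged hireable, that is direct evidence the automatic cutoff is excluding good candidates.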
None of these approaches is perfect. All involve an operational cost that the tools promised to eliminate.
Key takeaway
AI screening tools encode and amplify the biases present in historical hiring data. Corrective approaches exist, but they require ongoing effort, which is the opposite of the efficiency gain originally sold.
