How robot decisions can be freed from bias
By Anjana Ahuja
There is bigotry among the bots. Algorithms used to make life-changing decisions, whether rejecting job applicants, identifying prisoners likely to reoffend, or flagging a child at suspected risk of abuse, have been found to replicate real-world biases, most controversially along racial lines.
Now computer scientists believe they have a way to identify these flaws. The technique supposedly overcomes a Catch-22 at the heart of algorithmic bias: how to check, for example, that automated decision-making is fair to both black and white communities without users explicitly disclosing their racial group. It allows the parties involved to encrypt and exchange just enough data to discern the useful information, while keeping sensitive details hidden inside the computational to-ing and fro-ing. The work, led by Niki Kilbertus of the Max Planck Institute for Intelligent Systems in Tübingen, was presented this month at the International Conference on Machine Learning in Stockholm.
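For readers curious how "computational to-ing and fro-ing" can keep data hidden while still yielding an answer, below is a minimal Python sketch of one standard building block of such schemes, additive secret sharing. It illustrates the general idea only, not the researchers' actual protocol: a company and a regulator each hold one random-looking share of every person's group label, combine their shares locally with the publicly known decisions, and exchange only aggregated results, so group-level approval rates emerge without any individual's group ever being revealed. All names and data here are hypothetical.

```python
# A toy two-party sketch (not the Kilbertus et al. protocol) of checking a
# fairness statistic without either party seeing individual group labels.
import secrets

P = 2**61 - 1  # a large prime; all shares live in the field Z_P


def share(bit: int) -> tuple[int, int]:
    """Split a 0/1 sensitive attribute into two shares.
    Each share alone is uniformly random and reveals nothing."""
    r = secrets.randbelow(P)
    return r, (bit - r) % P


# Hypothetical data: the model's decisions are known to both parties;
# the sensitive group labels are secret-shared and never exchanged.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = favourable outcome
groups    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]   # sensitive attribute (secret)

company_shares, regulator_shares = zip(*(share(g) for g in groups))


def local_aggregates(my_shares):
    """Each party multiplies its shares by the *public* decisions locally,
    producing additive shares of sum_i d_i*g_i and of the group size."""
    s_positive = sum(d * s for d, s in zip(decisions, my_shares)) % P
    s_group = sum(my_shares) % P
    return s_positive, s_group


cA, nA = local_aggregates(company_shares)
cB, nB = local_aggregates(regulator_shares)

# Only these aggregate shares cross the wire; individual labels never do.
positives_in_group1 = (cA + cB) % P   # number of favourable outcomes, group 1
size_group1 = (nA + nB) % P           # number of people in group 1

total_positives = sum(decisions)
n = len(decisions)
rate1 = positives_in_group1 / size_group1
rate0 = (total_positives - positives_in_group1) / (n - size_group1)

print(f"Positive rate, group 1: {rate1:.2f}")
print(f"Positive rate, group 0: {rate0:.2f}")
print(f"Demographic parity gap: {abs(rate1 - rate0):.2f}")
```

Run on the toy data above, the script reports a gap between the two groups' approval rates, yet neither party ever held a complete copy of anyone's group label; the real system uses heavier cryptographic machinery, but the privacy logic is the same.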