2018-09-26
How robot decisions can be freed from bias

By Anjana Ahuja

There is bigotry among the bots. Algorithms that are used to make life-changing decisions — rejecting job applicants, identifying prisoners likely to reoffend, even removing a child at risk of suspected abuse — have been found to replicate biases in the real world, most controversially along racial lines.
Now computer scientists believe they have a way to identify these flaws. The technique supposedly overcomes a Catch-22 at the heart of algorithmic bias: how to check, for example, that automated decision-making is fair to both black and white communities without the user explicitly disclosing their racial group. It allows parties to encrypt and exchange enough data to discern useful information while keeping sensitive details hidden inside the computational to-ing and fro-ing. The work, led by Niki Kilbertus of the Max Planck Institute for Intelligent Systems in Tübingen, was presented this month at the International Conference on Machine Learning in Stockholm.
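To make the idea concrete, below is a minimal illustrative sketch (not the researchers' actual protocol, which relies on full secure multi-party computation) of how additive secret sharing lets two audit parties check a fairness metric such as demographic parity without either party ever seeing an individual's sensitive attribute. All names here (share, audit_demographic_parity, PRIME) are hypothetical, and the assumption that model predictions are public to both parties is a simplification made only for this sketch.

```python
import random

# Sketch: each user splits their 0/1 sensitive attribute into two random
# additive shares mod a large prime. Party A (e.g. an auditor) and party B
# (e.g. the model owner) each receive one share, so neither learns any
# individual's attribute; only aggregate counts are ever reconstructed.

PRIME = 2**61 - 1  # large Mersenne prime; shares live in the field Z_PRIME


def share(bit):
    """Split a 0/1 attribute into two random additive shares mod PRIME."""
    s1 = random.randrange(PRIME)
    s2 = (bit - s1) % PRIME
    return s1, s2


def audit_demographic_parity(sensitive, predictions):
    """Estimate P(pred=1 | a=1) and P(pred=1 | a=0) from secret-shared attributes.

    sensitive   : list of 0/1 attributes held by users (never revealed in the clear)
    predictions : list of 0/1 model outputs, assumed public to both audit parties
    """
    n = len(sensitive)

    # Users send one share to each party.
    shares_a, shares_b = zip(*(share(a) for a in sensitive))

    # Each party aggregates locally over its own shares only.
    count_a = sum(shares_a) % PRIME
    count_b = sum(shares_b) % PRIME
    pos_a = sum(s * y for s, y in zip(shares_a, predictions)) % PRIME
    pos_b = sum(s * y for s, y in zip(shares_b, predictions)) % PRIME

    # Only these aggregates are exchanged and combined.
    group1 = (count_a + count_b) % PRIME        # number of people with a = 1
    group1_pos = (pos_a + pos_b) % PRIME        # positive decisions among them
    group0 = n - group1
    group0_pos = sum(predictions) - group1_pos

    rate1 = group1_pos / group1 if group1 else float("nan")
    rate0 = group0_pos / group0 if group0 else float("nan")
    return rate1, rate0


if __name__ == "__main__":
    random.seed(0)
    a = [random.randint(0, 1) for _ in range(1000)]   # sensitive attribute
    y = [random.randint(0, 1) for _ in range(1000)]   # model decisions
    r1, r0 = audit_demographic_parity(a, y)
    print(f"positive rate | a=1: {r1:.3f}, a=0: {r0:.3f}, gap: {abs(r1 - r0):.3f}")
```

In this simplified setting, each party reveals only two sums of shares, from which the group sizes and group-wise positive rates can be reconstructed; no individual's attribute can be recovered from either party's view alone.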

