Troubling study says AI can predict who will be criminals based on facial features

Janet | 2016/12/09

Abstract: The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction. This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.

The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction. This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.


  Such physiognomic claims were long ago abandoned as pseudoscience. Not so in the modern age of artificial intelligence, apparently: in a paper titled "Automated Inference on Criminality using Face Images," two Shanghai Jiao Tong University researchers say they fed "facial images of 1,856 real persons" into computers and found "some structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle." They conclude that "all classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic."
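Stripped of the physiognomy, what the paper describes is ordinary supervised classification: a vector of numeric measurements per face, a binary label, and a model fit to separate the two. The following is a minimal pure-Python sketch of that mechanism on entirely synthetic data; the feature names and every number are invented for illustration, and the study's actual classifiers were more sophisticated than this toy logistic regression:

```python
import math
import random

random.seed(0)

# Three toy "measurements" per face, echoing the features the paper cites
# (lip curvature, eye inner-corner distance, nose-mouth angle). All values
# below are synthetic; nothing here validates the paper's claims.
FEATURES = 3

def make_example(label):
    # Deliberately separable data: label 1 shifts every feature up by 0.5.
    return [random.gauss(0.5 * label, 0.2) for _ in range(FEATURES)], label

train = [make_example(i % 2) for i in range(200)]

def sigmoid(z):
    # Clamped to avoid math.exp overflow on extreme inputs.
    if z > 30:
        return 1.0
    if z < -30:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    return sigmoid(bias + sum(w * xi for w, xi in zip(weights, x)))

# Plain stochastic gradient descent on the logistic loss.
weights, bias, lr = [0.0] * FEATURES, 0.0, 0.5
for _ in range(300):
    for x, y in train:
        err = predict(weights, bias, x) - y
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

accuracy = sum(
    (predict(weights, bias, x) > 0.5) == bool(y) for x, y in train
) / len(train)
print(f"training accuracy: {accuracy:.2f}")
```

Because the synthetic classes are built to be separable, the toy model scores well, which is exactly the point of the critique: a classifier will happily find a boundary in whatever data it is given, and high accuracy says nothing about whether the labels themselves were meaningful.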


  In the 1920s and 1930s, the Belgians, in their role as occupying power, put together a national program to try to identify individuals’ ethnic identity through phrenology, an abortive attempt to create an ethnicity scale based on measurable physical features such as height, nose width and weight.


  This can’t be overstated: The authors of this paper — in 2016 — believe computers are capable of scanning images of your lips, eyes, and nose to detect future criminality.


  The study contains virtually no discussion of why there is a "historical controversy" over this kind of analysis — namely, that it was debunked hundreds of years ago. Rather, the authors trot out another discredited argument to support their main claims: that computers can’t be racist, because they’re computers.


  "Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc. Besides the advantage of objectivity, sophisticated algorithms based on machine learning may discover very delicate and elusive nuances in facial characteristics and structures that correlate to innate personal traits."
