AI experts attempt to block publication of study on neural network that claims to ‘predict’ criminality, saying it’s based on ‘unsound scientific premises’

  • The skeptics say the study is based on shoddy scientific research
  • They’re calling for Springer to rescind its commitment to publish the study 
  • Proponents say the facial recognition algorithm could have applications in law enforcement 

More than 1,000 researchers and academics are calling on Springer to reconsider its plan to publish an upcoming study on a neural network that claims to ‘predict’ criminality. 

In an open letter published this week, the group, whose members include experts in statistics, machine learning and artificial intelligence, law, sociology, history, communication studies and anthropology, cautions Springer, the publisher of Nature, against publishing the study in an upcoming book series.

The study itself claims that automated facial recognition software can serve as a ‘predictive policing’ tool for law enforcement, identifying criminals before they commit crimes.

‘We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,’ said study co-author and Harrisburg University professor Roozbeh Sadeghian in a statement. 

‘This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.’

Among the concerns raised in the letter are what the skeptics call the ‘unsound scientific premises, research, and methods’ used to support predictive policing.

Additionally, the letter cites facial recognition technology’s well-documented difficulty in accurately identifying people of color, which the signatories say could disproportionately affect black communities.

In 2018, a test by the American Civil Liberties Union of Amazon’s facial recognition software, called Rekognition, incorrectly matched 28 members of Congress to mugshots, many of them people of color.

The letter also argues that the criminal justice data used to ‘identify’ criminals is unreliable, since the judicial system itself is often skewed.

‘Let’s be clear: there is no way to develop a system that can predict or identify “criminality” that is not racially biased — because the category of “criminality” itself is racially biased,’ states the letter.

Because of those concerns, the letter asks Springer to publicly rescind its offer to publish the study and to explain the criteria used to evaluate it. 

The signatories are also requesting that Springer issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledge its role in ‘incentivizing such harmful scholarship in the past.’