A coalition of AI researchers has published an open letter calling out scientific publisher Springer Nature over a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” presents face recognition technology that, according to its original press release, is meant to predict whether a person is a criminal.
It was developed by researchers at Harrisburg University and was due to be presented at a forthcoming conference.
The letter, which cites the work of leading Black AI scholars, debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It lists three demands:
- That Springer Nature rescind its offer to publish the study
- That it issue a statement condemning the use of statistical techniques such as machine learning to predict criminality, and acknowledge its role in incentivizing such research
- That scientific publishers commit to not publishing similar papers in the future
The letter, sent to Springer Nature on Monday, was written by five researchers at MIT, Rensselaer Polytechnic Institute, McGill University, and the AI Now Institute. The document quickly gathered more than 600 signatures, and counting, across the AI ethics and academic communities, including Meredith Whittaker, cofounder of the AI Now Institute, and Ethan Zuckerman, former director of the Center for Civic Media at the MIT Media Lab.
The letter highlights the authors’ goal: to demonstrate a systemic issue with the way scientific publishing incentivizes researchers to perpetuate unethical norms.
“This is why we keep seeing race science emerging time and again,” said Chelsea Barabas, a PhD student at MIT and one of the letter’s coauthors. “It’s because publishers publish it.”

“The real significance of this Springer piece is that it’s not unique whatsoever,” echoed Theodora Dryer, a postdoctoral researcher at AI Now and another coauthor. “It’s emblematic of a problem and a critique that has gone on for so, so long.”
In response to the letter, Springer Nature said it would not publish the paper. “The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings,” it said.
“After a thorough peer review process the paper was rejected.” Harrisburg University also took down its press release, stating that “the faculty are updating the paper to address concerns raised.”
Harrisburg University and a coauthor of the paper declined requests for comment and for a copy of the original paper. Meanwhile, the letter’s signatories say they will continue to push for the fulfillment of their second and third demands.
Since the death of George Floyd, which sparked widespread condemnation of racist violence against Black people around the world, the AI and tech industries have faced criticism over their roles in reinforcing structural racism.
During the week of June 8, for example, IBM, Microsoft, and Amazon all announced the end or partial suspension of their face recognition products. The moves were the culmination of two years of advocacy by researchers and activists demonstrating the link between these technologies and the overpolicing of minority communities. The open letter is the latest development in this movement toward greater ethical accountability in AI.
“We really wanted to contribute to this growing movement,” said Sonja Solomun, the research director of the Centre for Media, Technology, and Democracy at McGill University. “Particularly when we look outside our windows and see what’s going on right now in the US and globally, the stakes are just so high.”