Facial Recognition Software Can Now Identify People Even If Their Face Is Covered!
A facial recognition system can identify someone even if their face is covered up.
The Disguised Face Identification (DFI) system uses an AI network to map facial points and reveal the identity of people.
It could eventually help to pick out criminals, protesters, or anyone who hides their identity by covering themselves with masks, scarves or sunglasses.
The software could also spell the end of public anonymity, sparking privacy concerns from one academic, who has labelled it ‘authoritarian’.
“This is very interesting for law enforcement and other organisations that want to capture criminals,” said Amarjot Singh, a researcher at the University of Cambridge who worked on DFI.
“The potential applications are beyond imagination.”
Led by Mr Singh, the international team of scientists published their research on the pre-print server arXiv.
DFI uses a deep-learning AI neural network that the team trained by feeding it images of people using a variety of disguises to cover their faces.
The images had a mixture of complex and simple backgrounds to challenge the AI in a variety of scenarios.
The AI identifies people by measuring the distances and angles between 14 facial points – ten for the eyes, three for the lips, and one for the nose.
It uses these readings to estimate the hidden facial structure, and then compares this with learned images to unveil the person’s true identity.
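The matching step described above can be sketched in code. This is an illustrative toy, not the authors' implementation: the coordinates, the feature vector of pairwise distances and angles, and the nearest-neighbour comparison are all assumptions for demonstration. In the actual system, the 14 keypoints would be produced by the trained neural network, and identification would be learned rather than a simple distance lookup.

```python
import math

# Hypothetical 14 (x, y) facial keypoints, laid out as the paper describes:
# ten around the eyes, three on the lips, one on the nose.
probe = [(30, 40), (35, 38), (40, 40), (55, 40), (60, 38), (65, 40),
         (32, 45), (38, 45), (57, 45), (63, 45),   # eyes
         (42, 80), (48, 83), (54, 80),             # lips
         (48, 60)]                                 # nose

def features(points):
    """Flatten every pairwise distance and angle into one feature vector."""
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            feats.append(math.hypot(dx, dy))   # distance between the pair
            feats.append(math.atan2(dy, dx))   # angle between the pair
    return feats

def match(probe_points, gallery):
    """Return the gallery identity whose stored keypoints best fit the probe.

    `gallery` maps an identity name to its 14 stored keypoints; the best
    fit is the smallest squared difference between feature vectors.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(features(a), features(b)))
    return min(gallery, key=lambda name: dist(probe_points, gallery[name]))
```

For example, matching the probe against a two-person gallery picks the identity with the closer facial geometry; in a real system the gallery would hold enrolled, undisguised faces.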
In early tests, the algorithm correctly identified people whose faces were covered by hats or scarves 56 per cent of the time.
This accuracy dropped to 43 per cent when the subjects were also wearing glasses. The work is still in its early stages, and the algorithm needs to be fed more data before it can be deployed in the field.
Despite these hurdles, Mr Singh told Inverse: “We’re close to implementing it practically.”
The DFI team have called on other researchers to help develop the technology using their datasets of covered and uncovered faces.
The research, which has not yet been peer reviewed and is still awaiting publication, has sparked controversy after some raised concerns over privacy rights.
Dr Zeynep Tufekci, a sociologist at the University of North Carolina, posted the research to Twitter, claiming that the AI is ‘authoritarian’.
She tweeted: “The authors claim the system works about half the time even when people wear glasses. And this is just the beginning; first paper.”
“And this is maybe the third or fourth most worrying ML paper I’ve seen recently re: AI and emergent authoritarianism. Historical crossroads.”
“Yes, we can & should nitpick this and all papers, but the trend is clear. Ever-increasing new capability that will serve authoritarians well.”
The DFI team will present their research at the IEEE International Conference on Computer Vision Workshop in Venice, Italy, next month.
Pass it on: New Scientist