
How safe and accurate is the automated face recognition technology installed at entrances?

Police and security forces around the world are testing automated face recognition systems as a tool for identifying criminals and terrorists. But how accurate is this technology, and how easily could it, and the artificial intelligence (AI) on which it relies, become tools of persecution and oppression?

System features

Imagine that a suspected terrorist is sent on a suicide mission into a densely populated city center. If he detonates a bomb, hundreds of people could die or be seriously injured. Scanning faces in the crowd via video surveillance can pick out the attacker and automatically compare his features with photographs in a database of known terrorists or “persons of interest” to the security services.

If a match is found, the system raises the alarm, and rapid-response counter-terrorism forces are dispatched to the scene, where they “neutralize” the suspect before he can set off the explosives. Hundreds of lives saved, thanks to modern technology.
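The workflow described above boils down to comparing a face captured on camera against stored reference images and raising an alarm only when the similarity clears a threshold. Below is a minimal sketch of that idea, assuming faces have already been converted into numeric embeddings; the function names, vectors and the 0.8 threshold are hypothetical placeholders, not any vendor's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding, watchlist, threshold=0.8):
    """Return the watchlist identity most similar to the detected face,
    but only if the similarity clears the alert threshold."""
    best_id, best_score = None, -1.0
    for identity, stored_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None  # None means no alarm

# Hypothetical usage with placeholder embeddings:
watchlist = {"person_of_interest_1": np.array([0.1, 0.9, 0.3])}
detected_face = np.array([0.12, 0.88, 0.31])
print(check_against_watchlist(detected_face, watchlist))
```

The threshold is the crux: set it too low and innocent passers-by trigger alarms; set it too high and real targets slip through.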

Possible problems in practice

But what if the face recognition (FR) technology got it wrong? What if the person flagged was not a terrorist at all, but an ordinary person who simply had the bad luck to resemble the attacker? An innocent life would be destroyed because experts placed too much faith in a flawed system. And what if that person were you?

This is just one of the ethical dilemmas raised by face recognition and the artificial intelligence that underpins it. It is genuinely difficult to train machines to “see”, recognize and distinguish objects and faces. Not long ago, researchers working in computer vision, as the field is sometimes called, tried to tell raisin buns apart from chihuahuas, an exercise that became a kind of litmus test for the effectiveness of the technology.

Face recognition difficulties

Computer scientists Joy Buolamwini of the MIT Media Lab (and founder of the Algorithmic Justice League) and Timnit Gebru, technical co-lead of Google's ethical AI team, have shown that face recognition systems struggle to distinguish between men and women when their skin is dark enough. The systems quite often mistook dark-skinned women for men.

“About 130 million adult Americans are already in face recognition databases,” Dr. Gebru said in May at the AI for Good Summit in Geneva. “But the original datasets contain mostly white people, among whom men are heavily over-represented.” As a result, when the systems try to recognize people with darker skin, a huge proportion of errors still occur along lines of skin type and gender.
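The finding described above comes from evaluating accuracy separately for each demographic subgroup rather than as a single overall figure. A minimal sketch of such a disaggregated evaluation follows; the records and labels are made-up placeholders, not the researchers' data.

```python
from collections import defaultdict

# Hypothetical prediction records: (demographic subgroup, predicted, actual).
records = [
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for subgroup, predicted, actual in records:
    counts[subgroup][0] += int(predicted != actual)
    counts[subgroup][1] += 1

# An overall accuracy number would hide the gap that shows up per subgroup.
for subgroup, (wrong, total) in counts.items():
    print(f"{subgroup}: error rate {wrong / total:.0%}")
```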

Use of technology

The Californian city of San Francisco recently banned the use of face recognition by transport and law enforcement agencies, acknowledging its imperfections and the threat it poses to civil liberties. But other cities in the United States and elsewhere in the world continue to trial the technology.

In the UK, for example, police forces in South Wales, London, Manchester and Leicester are testing the technology, to the alarm of civil liberties organizations such as Liberty and Big Brother Watch, which are concerned by the number of false matches the systems produce.

Mistakes and fears

In practice, this means that innocent people are wrongly flagged as potential criminals. According to Dr. Gebru, such problems should worry everyone: the stakes involved in these preventive and predictive measures are high.

Given that black Americans make up 37.5% of all prison inmates in the United States (according to the Federal Bureau of Prisons) while accounting for only 13% of the US population, poorly designed algorithms trained on the currently available datasets can predict that black people are more likely to commit a crime. You don’t need to be a genius to understand what this could mean for policing and social policy.
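The worry here is mechanical rather than mysterious: a naive model trained on skewed historical data simply learns that skew and hands it back as a “risk” prediction. A toy illustration of the effect, using made-up numbers rather than real statistics:

```python
# Made-up historical records: a naive model that only learns base rates from
# past arrest data will reproduce whatever imbalance that data contains.
historical = {
    "group_a": {"arrests": 375, "population_share": 0.13},
    "group_b": {"arrests": 625, "population_share": 0.87},
}

total_arrests = sum(g["arrests"] for g in historical.values())
for name, g in historical.items():
    learned_risk = g["arrests"] / total_arrests  # what the naive model predicts
    print(f"{name}: predicted share of crime {learned_risk:.0%}, "
          f"population share {g['population_share']:.0%}")
```

The model's output looks like a prediction about people, but it is really just a restatement of the imbalance already present in the records it was trained on.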

More recently, researchers at the University of Essex concluded that the matches used in trials by the London police were wrong 80% of the time, which could potentially lead to serious miscarriages of justice and infringements of citizens' right to privacy.
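To make concrete what an “80% wrong” figure means: it refers to the share of alerts the system raised that turned out to be the wrong person, not the share of all faces scanned. A worked example with hypothetical counts:

```python
flagged_matches = 50       # faces the system flagged as watchlist hits
verified_correct = 10      # hits later confirmed to be the right person

false_matches = flagged_matches - verified_correct
false_match_rate = false_matches / flagged_matches
print(f"Share of alerts that were wrong: {false_match_rate:.0%}")  # -> 80%
```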

One Briton, Ed Bridges, brought a legal challenge against South Wales Police over its use of face recognition technology after his image was captured while he was out shopping, and the UK Information Commissioner, Elizabeth Denham, has expressed concern over the lack of a legal framework governing the use of the technology.

But such fears have not stopped the tech giant Amazon from selling its Rekognition FR tool to US police forces, despite a half-hearted shareholder revolt that ultimately came to nothing.

Prospects

Amazon says it is not responsible for how customers use its technology. But compare that stance with Salesforce, the customer relationship management company, which has developed its own image recognition tool called Einstein Vision.

“Face recognition technology may be appropriate in a prison to track prisoners or prevent gang violence,” an ethical AI expert at Salesforce told the BBC. But when the police wanted to use it together with cameras when arresting people, the company judged that inappropriate. In such cases, the argument goes, one should ask whether AI should be used at all in certain scenarios, and face recognition is one example.

The technology is now also used by the military, with vendors claiming their software can not only identify potential enemies but also recognize suspicious behavior.

