Facial Recognition Tech Tested By UK Police Was Wrong 96% Of The Time

Facial recognition is highly flawed. Activist Post has consistently reported on studies finding that the technology’s accuracy isn’t all it’s marketed to be. Now a watchdog observing UK Metropolitan Police trials has stated that as many as 96 percent of the technology’s scans misidentified members of the public as potential criminals — including a 14-year-old black child in a school uniform who was stopped and fingerprinted by police — according to a Big Brother Watch press release.

In eight trials in London between 2016 and 2018, the technology gave “false positives” that wrongly identified individuals as crime suspects when an individual passed through an area with a facial recognition camera.
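The headline figure is a simple ratio: of all the alerts the system raised, what fraction turned out to be wrong? A minimal sketch of that arithmetic, using hypothetical counts rather than Big Brother Watch’s actual figures:

```python
# Hypothetical alert counts chosen to illustrate a 96% rate;
# these are NOT Big Brother Watch's real numbers.
total_alerts = 100        # times the system flagged a passer-by as a suspect
confirmed_matches = 4     # alerts that actually matched a wanted person

false_positives = total_alerts - confirmed_matches
false_positive_rate = false_positives / total_alerts

print(f"Misidentification rate: {false_positive_rate:.0%}")
```

Note that this is the share of *alerts* that were wrong, not the share of all faces scanned — a system can scan thousands of faces, raise only a handful of alerts, and still have nearly every alert be a misidentification.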

Big Brother Watch, the watchdog organization that received the data through a freedom of information request, demanded police drop using the technology. Big Brother Watch further warned of the Orwellian consequences of using it, arguing that it “breaches fundamental human rights protecting privacy and freedom of expression.”

“This is a turning point for civil liberties in the UK. If police push ahead with facial recognition surveillance, members of the public could be tracked across Britain’s colossal CCTV networks,” Director Silkie Carlo said. “For a nation that opposed ID cards and rejected the national DNA database, the notion of live facial recognition turning citizens into walking ID cards is chilling.”


Further, according to Big Brother Watch, police scored a 100 percent misidentification rate in two separate deployments at the Westfield shopping centre in Stratford, London. It is a horrifying thought that this technology is now being used to harass citizens as they shop.

Of course, facial recognition technology is also being tested in UK supermarkets for the first time, to verify the age of customers buying alcohol and cigarettes at special self-checkout machines, as Activist Post reported.

According to the Telegraph, the company responsible for the supermarket devices is NCR, a U.S. firm that makes self-checkout machines for Asda, Tesco, and other UK supermarkets.

NCR has announced the integration of facial recognition technology from Yoti with its “FastLane” tills within supermarkets.

FastLane tills are currently used by UK retailers Tesco, Sainsbury’s, Marks & Spencer, Boots, and WHSmith. While not all of these retailers will take part in the pilot program, it’s important to note how widespread the technology could become.

Meanwhile, hundreds of retail stores — soon to be thousands — are investigating another biometric facial recognition software product called FaceFirst, which builds a database of shoplifters as an anti-theft measure, Activist Post reported.

FaceFirst is designed to scan faces from as far as 50 to 100 feet away. As customers walk through a store entrance, a video camera captures repeated images of each shopper and chooses the clearest one to store.

The software then analyzes that image and compares it to a database of “bad customers” that the retailer has compiled; if there is a match, the software sends an alert to store employees that a “high risk” customer has entered the door.
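The flow described above — capture repeated frames, keep the clearest, compare against the retailer’s “bad customer” database, alert staff on a match — can be sketched in a few lines. The function names, data shapes, and similarity threshold below are illustrative assumptions, not FaceFirst’s actual API:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_frame(frames):
    """Pick the sharpest of the repeated captures of one shopper."""
    return max(frames, key=lambda f: f["sharpness"])

def check_watchlist(embedding, watchlist, threshold=0.8):
    """Compare a face embedding against the retailer's 'bad customer'
    database; a returned entry represents a 'high risk' alert to staff."""
    for entry in watchlist:
        if cosine_similarity(embedding, entry["embedding"]) >= threshold:
            return entry
    return None
```

The threshold is where the privacy concern bites: set it low and the system alerts on anyone who merely resembles a watchlisted person, which is exactly the kind of false positive the police trials produced.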

The future of shopping seems to have biometric scanners written all over it — a worrying prospect for privacy advocates.

Several privacy advocacy groups, attorneys, and recently even Microsoft — which markets its own facial recognition system — have raised concerns over the technology, pointing to issues of consent, racial profiling, and the potential for law enforcement to use images gathered through facial recognition cameras as evidence of criminal guilt.

“We don’t want to live in a world where government bureaucrats can enter in your name into a database and get a record of where you’ve been and what your financial, political, sexual, and medical associations and activities are,” Jay Stanley, an attorney with ACLU, told BuzzFeed News about the use of facial recognition cameras in retail stores. “And we don’t want a world in which people are being stopped and hassled by authorities because they bear resemblance to some scary character.”

The technology currently has a lot of problems; Activist Post recently reported that, according to the ACLU, Amazon’s own facial “Rekognition” software erroneously identified 28 members of Congress as people who have been arrested for crimes. Maybe the technology was trying to tell us something? But then it should have labeled more than just African American members of Congress as criminals — unless the technology has a racial bias, or perhaps this is just more evidence of how inaccurate it is.

Activist Post previously reported on another test of facial recognition technology in Britain that resulted in 35 false matches and one erroneous arrest. The technology is demonstrably far from foolproof.

Many readers likely laughed at this writer’s seemingly paranoid take on facial recognition technology; vindication came swiftly, however, when Amazon announced it wanted to create a “Crime News Network” to monitor neighborhoods with its Ring doorbell facial recognition cameras. At this point, they are literally recreating George Orwell’s 1984 or reinventing the East German Stasi.

Amazon employees opposed to the company selling facial recognition technology to the government have protested the decision. More than 20 shareholder groups have sent letters to Amazon CEO Jeff Bezos urging him to stop selling the company’s face recognition software to law enforcement.

“We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,” the shareholders, which reportedly include Social Equity Group and Northwest Coalition for Responsible Investment, wrote. “We are concerned sales may be expanded to foreign governments, including authoritarian regimes.”

Another letter was sent in January 2019, organized by Open Mic, a nonprofit focused on corporate accountability, and filed by the Sisters of St. Joseph of Brentwood; both letters warned that the technology poses “potential civil and human rights risks.”
