

In 2014, GetKidsInternetSafe founder Dr. Tracy Bennett wrote an article on artificial intelligence (AI) facial recognition and the potential dangers of such technology. Fast-forward six years to 2020, and many of her predictions have proven true, along with developments no one could have anticipated. AI facial recognition has boomed to the point that many companies mine our social media data to increase profits. Big tech is willing to capitalize on us even when it is not in our best interest. For a glimpse into the scary future possibilities of privacy invasion and the trampling of our civil rights, check out what’s happening in China in today’s GKIS article.

AI facial recognition has come a long way in the past few years, especially since engineers began using artificial neural networks. These networks are loosely modeled on the human brain. They consist of connected nodes, called artificial neurons, that transmit signals to one another. Once a node receives a signal, it processes it and relays the result to the nodes connected to it. A neural network can take almost any type of information as input. In facial recognition, an image of a face is entered, and the AI marks each feature as a nodal point, collecting more data with each image.
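To make the node-and-signal idea concrete, here is a toy sketch of how an artificial neuron works. This is a simplified illustration only; the feature names, weights, and numbers are made up for the example and bear no relation to any real facial recognition system.

```python
import math

def neuron(inputs, weights, bias):
    # Each node sums its weighted incoming signals...
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...then squashes the sum through an activation function
    # (a sigmoid, which maps any number into the range 0..1)
    return 1 / (1 + math.exp(-total))

# Toy "nodal points": measurements between facial features,
# normalized to 0..1 (illustrative values, not real data).
eye_distance, nose_width, jaw_angle = 0.62, 0.41, 0.77

# Two hidden nodes each process all three nodal points,
# then feed their outputs to a single output node.
h1 = neuron([eye_distance, nose_width, jaw_angle], [0.9, -0.4, 0.3], 0.1)
h2 = neuron([eye_distance, nose_width, jaw_angle], [-0.2, 0.8, 0.5], -0.3)
match_score = neuron([h1, h2], [1.2, 0.7], -0.5)

print(round(match_score, 3))  # a similarity-style score between 0 and 1
```

A real system chains millions of these nodes and learns the weights from many example images, which is why each additional photo makes the recognition more accurate.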

Facebook uses neural networks and processes over 350 million new pictures daily. Amazon offers a paid service called Rekognition that customers can use for image and face analysis. Clearview AI is a more controversial service that many in Silicon Valley have opposed due to its privacy implications. Clearview AI scrapes social media platforms and has acquired over 3 billion pictures for its database. When a Clearview search returns a match, the user gets facial data AND a link to the social media accounts where the facial data was acquired. Many have concerns that this takes the privacy breach a step further.

Beneficial Ways Facial Recognition is Being Used

  • AI has led to the recovery of many missing children who have been sex trafficked or sexually exploited.
  • Taylor Swift’s security team used facial recognition at her concerts to see if any of her stalkers were in the audience.
  • Law enforcement uses AI to identify people who cannot or will not identify themselves, like people with severe mental illness or people under the influence of drugs. With a three-minute turnaround time, law enforcement saves a ton of money and time so they can focus on other crimes.

Controversial Ways Facial Recognition is Being Used

  • A man seen stealing beer at a CVS in New York City looked a lot like Woody Harrelson. The police entered a picture of Woody Harrelson into facial recognition technology and found a match. Although police were able to locate and apprehend the suspect, this technology could implicate the wrong person with similar facial geometry.
  • People of color are more likely to be misidentified because facial recognition systems are less accurate at distinguishing people with darker skin.
  • The government could enable continuous surveillance of certain individuals, as is happening in China. China uses facial recognition to track the Uighurs, a largely Muslim minority, and to monitor all Chinese citizens through a social credit score.

Dystopian Surveillance

AI advancements worry people who fear one day living in a dystopian surveillance state. In such a society, all citizens would be tracked and privacy would cease to exist. One might think that with the civil rights protections in the United States we are not at risk. I wonder if Chinese citizens have concerns…

China has more AI facial recognition CCTV cameras than any other country in the world and is a prime example of dystopian surveillance. The Chinese government claims to use AI to lower crime and encourage prosocial behavior through a social credit system run by a company called Sesame Credit. They contend that the system encourages citizens to behave in a socially appropriate manner, and that if someone is a good citizen, they have nothing to hide and the cameras should not be a concern.

Specifically, under Sesame Credit, if a Chinese citizen is caught on camera doing anything considered not “socially appropriate,” like jaywalking, littering, smoking, or buying too much alcohol or too many video games, their social credit score decreases. A low social credit score may mean they cannot purchase airline or train tickets or book certain hotels, or they may be barred from certain schools and jobs. Citizens can also have their dog taken away if it isn’t walked on a leash or becomes a public disturbance. Blacklisted citizens must also register on a public blacklist, which typically results in social stigmatization, and a parent’s score can affect other family members, for example by preventing kids from being accepted to private schools.

Public shaming is a big part of the social credit system. Pictures of blacklisted and low-scoring citizens are shown on TikTok, pictures and videos with names play on public LED screens, and addresses are displayed on a map on WeChat.

People with good social credit scores appreciate the system because they get rewarded. Perks include discounts on hotels, entertainment, and energy bills, as well as no-deposit bike rentals. High scorers also get into better schools and gain access to better jobs. Users on dating apps are required to enter their social credit score; good scores get more dates.

Ways Citizens Can Raise Their Scores

  • Donating to college funds for poor students
  • Caring for elderly or disabled people
  • Repaying a loan even if the bank canceled it

How the United States is Implementing Social Credit

The U.S. has not implemented AI as comprehensively as China, but it is used in some industries. For example, life insurance companies in New York are allowed to review a person’s public social media accounts for signs of risky behavior and base that person’s premium on what they find. In fact, a 2020 survey found that 98% of hiring professionals run background checks on new hires, and 79% have disqualified a job candidate over unfavorable social media content.

There is also a company called PatronScan, designed to help restaurants and bars manage customers. By scanning an ID at entry, it can spot fake IDs and flag troublemakers, and a shared list is visible to all PatronScan customers. The problem is that the judgment of what constitutes a “troublemaker” is subjective and may land someone on the list unfairly, without their consent.

Rideshares like Uber and Lyft collect reviews of both drivers and riders, and a poor rating may result in a customer being refused a ride. Airbnb likewise reviews both hosts and renters. Many hosts refuse to rent to certain people based on past renter reviews, and hosts with poor reviews may not get booked.

China is a prime example of the dangers of AI facial recognition and how it can affect our privacy and freedoms. There is not yet much legislation limiting how AI can be used in the United States, and there is a need to push for it. Like the frog in the pot, people adapt so willingly to advancing technology that it’s difficult to recognize the possible consequences.

For information and safety tips on keeping you and your family safe, we highly recommend Dr. B’s Cybersecurity and Red Flags supplement. In an age where technology advances at such a fast rate, it is important to stay informed about current technological risks and how to prevent them.

Thank you to CSUCI intern Andres Thunstrom for co-authoring this article.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.

Mom,
Clinical Psychologist,
CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Photo Credits

Photo by Burst on Pexels
Photo by Pixabay
Photo by Gamefication

Works Cited

Campbell, C. (2019). How China Is Using “Social Credit Scores” to Reward and Punish Its Citizens. Time. https://time.com/collection/davos-2019/5502592/china-social-credit-score/

Harwell, D. (2019, July 9). Facial-Recognition Use by Federal Agencies Draws Lawmakers’ Anger. The Washington Post. https://www.washingtonpost.com/technology/2019/07/09/facial-recognition-use-by-federal-agencies-draws-lawmakers-anger/

Hill, K. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Mckeon, K. (2020, April 28). 5 Personal Branding Tips for Your Job Search. The Manifest. https://themanifest.com/digital-marketing/5-personal-branding-tips-job-search

Thorn. (2020). Eliminate Child Sexual Abuse Material from the Internet. https://www.thorn.org
