
Is Your Child Falling in Love with a Bot?

Online entertainment is getting more advanced all the time. We’ve come up with just about everything, from video games to robots. But one thing you may not yet know is that more and more kids are spending time with online chatbots instead of their human friends. In today’s GKIS article, we do a deep dive into Character.AI, a popular website that lets subscribers chat with, and even virtually date, a bot. We’ll go over how it’s being used, its dangers, and our thoughts on the site. Before letting your child use just any new and popular app, we recommend our Screen Safety Essentials Course for info on how the whole family can navigate the internet safely.

Artificial Intelligence and Bots

Before we dive deep into the world of C.AI, we’ll want to go over some key terms.

  • Artificial Intelligence refers to the capability of computer systems or algorithms to imitate intelligent human behavior.[1]
  • A bot is a computer program or character (as in a game) designed to mimic the actions of a person.[2] A bot is a form of artificial intelligence.
  • NSFW stands for “not safe (or suitable) for work.” It is used to warn someone that a website, image, or message is not suitable for viewing at most places of employment.[3]

What is C.AI?

Character.AI is a website created by Noam Shazeer and Daniel De Freitas that allows users to chat with bots. The C.AI website launched in September 2022, and the app was released in May 2023. In its first week, the app got 1.7 million downloads.[4]

C.AI uses artificial intelligence to let you create characters and talk to them. You can invent an original character, base yours on a character from a TV show or movie, or model it on a real person.

C.AI became popular when teens started posting their conversations with C.AI bots on TikTok. Many showed romantic and sensual exchanges they had with their bots. Week after week, teens all over the world began to fall in love with their new artificial friends.

How Teens Are Using C.AI

Users create a free account and then choose from a list of characters to talk to or make their own. Users can talk about whatever they want with a bot, and it will reply with human-like responses. Pre-made characters have a set personality that users cannot change.

To make a custom bot, users choose a name for their character and then upload an image to give the bot a ‘face.’ Users can talk with the bot about any topic. When the bot responds, users rate its responses from 1 to 5 stars. Over time, the bot uses these ratings to infer the personality the user wants it to have.
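As a purely hypothetical illustration of the star-rating loop described above (C.AI has not published how its training actually works), here is a toy sketch of how ratings could steer a bot toward the style a user prefers. All styles and numbers are invented:

```python
# Hypothetical sketch only -- C.AI's real training method is not public.
# The idea: response styles that earn high star ratings become the
# personality the bot leans toward over time.
from collections import defaultdict

scores = defaultdict(list)

def rate(style, stars):
    # stars: the 1-5 rating a user gives a response written in this style
    scores[style].append(stars)

def preferred_style():
    # The bot leans toward whichever style averages the best ratings.
    return max(scores, key=lambda s: sum(scores[s]) / len(scores[s]))

rate("formal", 2)
rate("playful", 5)
rate("playful", 4)
print(preferred_style())
```

The point for parents is simply that the bot adapts to whatever the user rewards, which is exactly why a child’s ratings can push a bot toward romantic or inappropriate styles.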

Users can make their bots private only for them or public for anyone to use. However, all chats between a person and a bot are private.

The Dangerous Side of C.AI 

Using these bots may seem like a fun idea for kids, but there are a lot of risks that come with them.

Data Storage

A major risk is that C.AI stores the information and texts you share with the character bots.

C.AI claims that no real person reads this information. Even so, storing it is a privacy risk: if the website or app were hacked, hackers could do whatever they wanted with users’ information. This puts every user at risk when using the site.

No Age Verification and Exposing Minors to NSFW Content 

C.AI asks that its users be 13 years old or older, but there is no age verification within the site or app.[5] This means users can simply lie about their age to use C.AI.

C.AI claims not to allow sexual conversation between users and bots, but users can get around this. By misspelling certain words or adding extra spaces, users can slip past the NSFW filter; the bot still recognizes the intended word and replies with NSFW responses. Users can have detailed sexual conversations with the bots. The dangerous part is that many of C.AI’s users are minors.

Effects on Children’s Relationships 

Users can speak romantically with the bots, and the bots will respond with romantic messages. The more kids use these bots, the higher chance they have of becoming dependent on them. Children’s brains are impressionable, and they absorb information quickly. Some kids may prefer to engage in these fake relationships instead of relationships with real people.

Using these bots could also create social anxiety. Users know what to expect when talking with a bot, since its personality is pre-set, but real people are unpredictable. The uncertainty of real conversations could make users shy, anxious, and avoidant, especially if they replace challenging real-life practice with safe, easy online interactions.

Other risks include: 

  • Disappointment in real-life relationships with others
  • Depression
  • Isolation
  • Loss of social skills 

GKIS Thoughts On C.AI 

GKIS rates C.AI as a red-light website. This means it is not recommended for children under the age of 18 to use. We came to this conclusion because it lacks age verification and exposes minors to NSFW content. However, it could be slightly safer if parents monitor their children’s interactions with the bots. If you’re worried about what other dangerous sites your child may be visiting, consider checking out our article on red-light websites. 

GKIS encourages parents to talk to their children about what topics are safe to discuss if they use C.AI. Before making a decision to use the site, we recommend checking out the GKIS Social Media Readiness Training course. It helps teens and tweens learn the red flags of social media and teaches them valuable psychological wellness skills.

Thanks to CSUCI intern Samantha Sanchez for researching Character.AI and preparing this article.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Dr. Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com 

Works Cited 

[1] Artificial Intelligence – Merriam Webster

[2] Bot – Merriam Webster  

[3] NSFW – Merriam-Webster   

[4] Character.AI 

[5] C.AI Age Requirement  

Photo Credits 

Pete Linforth via Pixabay https://pixabay.com/illustrations/connection-love-modern-kiss-human-4848255/   

Samantha Sanchez (Image #2)

Adrian Swancar via Unsplash https://unsplash.com/photos/JXXdS4gbCTI

What do you think about Sex Robots?

Did you know that brothels filled with sex robots exist in the real world, not just in cheesy science-fiction B-movies? These are not your granddad’s blow-up dolls. They are extremely lifelike, with medical-grade artificial skin that warms and lubricates and pupils that dilate, and they can even hold a conversation. Sex robots are a growing industry, with a market valuation estimated in the $30 billion range! In today’s GKIS article, we discuss arguments made for and against the use of sex robots as well as the ethical issues associated with them.

What is a sex robot?

A sex robot is anything that combines technology and sex for the purpose of pleasure. For this article, the term will be applied to anatomically correct, life-like androids. These androids can speak and come in all kinds of shapes, sizes, age ranges, and species, and can be made to look like whoever or whatever you want them to look like. Not only can they hold a conversation with you, but they can be programmed to simulate specific scenarios and make specific responses to actions and phrases.

If stories like these have you freaking out, imagine what your kids are reading! Start a critically-important family dialogue about screen safety and help them learn the risks of digital injury with our Social Media Readiness Course for tweens and teens. We give you the answers you are looking for and help you to avoid the quicksand in the electronic jungle!

Arguments Being Made for Sex Robots

If your mind is blown by what this might mean for the future of human society, you are not alone. Here are some arguments made that you may not have thought of yet.

  • The Capitalist Argument

There appears to be a niche market for sex robots. In simple terms, this means that the need for robot manufacturing would result in the creation of new jobs for people.

  • The Compassionate Argument

Some people have trouble finding a partner. For these people, it could be considered cruel to keep them from getting their physical and emotional needs met. Rather than leave them lonely, a robot partner may result in better life satisfaction and reduced mental illness rates.

  • The It’s Better than the Alternative Argument

One of the biggest arguments being made for the use of sex robots is that it could calm the urges of those who have socially abhorrent proclivities. They argue that sex robots may prevent pedophiles and rapists from harming other people or animals. There is also the argument that it could lead to less human trafficking and prostitution, thus less human suffering.

  • The Practice Makes Perfect Argument

With practice comes improved performance and increased confidence. For those filled with self-doubt or anxiety about pursuing healthy relationships, sex robots could fill a therapeutic need. Robot sensors and vocal feedback could provide much-needed instruction for improving performance.

  • The All the Eggs in One Sex Robot Argument

There appears to be a population of people who are not keen on the idea of pursuing sexual relationships with other human beings. For these individuals, sex robots float their boat. If they don’t harm others, wouldn’t their private behaviors be acceptable? Some people like pepperoni on their pizza, while others like anchovies. In this case, it is just a matter of taste.

Arguments Being Made Against Sex Robots

  • The Operant Reinforcement Argument

The most concerning argument being made against sex robots is that providing people with androids that look like children and that have “rape settings” could increase the chance of sexual predators acting out their fantasies in real life. Sexual assault is often more about exerting dominance and power over another person than it is about sexual gratification. For these people, sex robots could reward pathological behavior and potentially increase the likelihood that people with androids harm others.

  • The Stereotypes and Objectification are Bad Argument

Another concerning argument is that sex robots could lead to the further objectification of women and children. If one treats a robot object like a human, it is not a far leap to then treat a human like a robot.

  • The Population Decimation Argument

Some people say that this will lead to a sharp decrease in the human population due to a decline in pregnancy rates caused by a wide acceptance of sex robots.

  • The Social Isolation Argument

There is a valid argument that more time spent alone with technology could socially isolate people and further harm those who are suffering from psychological issues, such as depression, stemming from a lack of human contact.

  • The Use It or Lose It Argument

Some people argue that people will stop having sex with other people if sex robots become socially acceptable. After all, true intimacy is not about subservience and always being ready to be acted upon without having to give consent. Once we quit practicing relationship behaviors that lead to a mutual sharing and vulnerability that help us grow as emotional human beings, we may forget how to do it. Human beings are adaptable. By not having to do the hard things like express emotion, tolerate inconvenience and distress, and get consent for sexual advance, those skills may erode and leave us deficient in our very humanness. Not only may we treat others like robots, but we may become more robotic ourselves.

Ethics

Taking into account the arguments above, what do you think? Is interacting with sex robots right or wrong?

Because it is unlikely that legislators will be able to outlaw sex robots given American civil rights protections, perhaps we should consider how robot manufacture, purchase, and use should be regulated. The UK has already implemented a law forbidding child sex robots. By becoming informed and forming well-thought-out opinions and evidence-based arguments, we are best equipped to protect our families and ourselves. As our world becomes more and more technologically integrated, we will need to ask hard questions and adapt.

How to Stay Informed

Dr. B is in a unique position to help you learn more about the potential dangers your family could face when engaging with technology. As a practicing psychologist, university professor, and mother, she can help you navigate safely throughout your journey. You can download the free GKIS Connected Family Agreement simply by creating a GKIS account on our website home page. In her book, Screen Time in the Mean Time, Dr. B tackles the issue of raising a family while safely integrating technology rather than fearing it. Our Screen Safety Essentials Course also provides useful tips for making the internet a safer place for your family, along with parenting and family coaching information and support. It is our one-stop shop with fun teaching materials for parents and the whole family!

Thanks to CSUCI intern, Michael Watson for researching the ethical and economic arguments for and against sex robots.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Photo Credits

Photo by Gaelle Marcel (https://unsplash.com/photos/pcu5rnAl19g)

Photo by Phillip Glickman (https://unsplash.com/photos/2umO15jsZKM)

Photo by Xu Haiwei (https://unsplash.com/photos/_3KdlCgHAn0)

Photo by Alessio Ferretti (https://unsplash.com/photos/upwjVq8cJRY)

 

Thanks to Kent Williams for the beautiful painting used for the thumbnail. (https://www.kentwilliams.com/paintings/2018/8/16/2018/8/16/m-w)

An Obsession with Lifelike Automatons and Dolls

Did you know that a robot has been given legal citizenship and personhood? People are obsessed with lifelike robots and dolls. What makes us so fascinated with objects that resemble us? In this GKIS article, we will be exploring several types of lifelike automatons and dolls as well as the psychology behind our obsession with them. If you are unsure of how to protect your tweens’ and teens’ growing reliance on technology and obsession with online presence, Dr. Bennett’s Social Media Readiness Online Course will give you the answers you are looking for and help you to navigate through these ever-changing waters!

What is the difference between an automaton and a doll?

While some may use the words interchangeably, there is a big difference between an automaton and a doll. Most importantly, automatons are mechanized robots, while dolls do not move by themselves. Another important difference is the trend of integrating artificial intelligence (AI) into lifelike automatons. Artificial intelligence is a program that allows a computer to mimic the human mind, including the ability to modify itself. The advancement of artificial intelligence has stoked new interest and debate about morality and personhood. After all, the more advanced artificial intelligence gets, the more these robots resemble actual humans. It may not be long before we see a robot that possesses an actual consciousness.

Sophia

Sophia is one of the most famous lifelike robots in the world: an ultra-realistic humanoid with advanced artificial intelligence. She can hold conversations with people and has gone on several press tours and done numerous interviews discussing what it is like to be her. She has even appeared on The Tonight Show with Jimmy Fallon. Honestly, it is pretty trippy to watch.

While the fact that she can hold intelligible conversations with people is impressive, it is even more monumental that she has citizenship. In 2017, Saudi Arabia gave Sophia citizenship, making her the first AI to be given legal personhood and human rights.[1] While this may be more of a marketing strategy for Hanson Robotics and positive publicity for Saudi Arabia, the fact remains that a robot has been given legal autonomy.

Erica

Lifelike robots are also being considered for labor. Erica is a robot developed by roboticist Hiroshi Ishiguro. She has lifelike skin, hair, and facial expressions. Like Sophia, she uses AI to hold conversations, read, and recognize human faces. She has her own YouTube channel and appears on television in Japan as a news anchor.[2] While she cannot move her limbs, she can move her neck and waist to turn toward people. Erica’s lifelike facial movements and ability to read and recite the news have given her a bit of celebrity status in Japan.

Sex Robots

Did you know there is a huge market for sex robots? Sex robots are lifelike, anatomically correct androids that are built for pleasure. These robots can be ordered to look and sound however the buyer wants. They can also be programmed to say specific phrases and respond in specific ways. They can also run different scenarios to simulate realistic experiences. Unfortunately, rape scenarios are available. If you are curious about sex robots, look out for my upcoming article here on GKIS.

Reborn Dolls

Reborn dolls are lifelike dolls, made by artists, that usually resemble babies or toddlers. These dolls are extremely realistic and have garnered an entire subculture of dedicated fans. While they do not move, speak, or communicate in any way, the people who own them often treat them as if they were real children.

Some people use these dolls for therapeutic purposes. There have been instances where mothers who have lost their babies have had lifelike dolls made in their child’s likeness to deal with their grief. They have also been used to deal with infertility, miscarriages, and depression.

Super Dollfie

Volks is an action-figure/doll company that makes anatomically correct, hyper-realistic figures. If you are having a hard time imagining this, think Barbie with all the naughty bits. These figures are highly sought after by collectors and go for exorbitant prices. They are extremely customizable, and you can even buy clothing for them that is more finely detailed than most of the stuff in the average person’s closet. The attention to detail on these things is insane. All the clothing, hair, and body parts can be swapped out to make the doll look however you want.

Possible Reasons Why People are Obsessed with Lifelike Robots and Dolls

  • People are curious by nature
  • People get lonely
  • People look for connection and meaning everywhere
  • There is no risk of rejection
  • Some people have social anxiety

Staying Informed and Keeping Your Family Safe

Dr. B is in a unique position to help you to learn more about the potential dangers that your family could face when engaging with the internet and technology. As a practicing psychologist, university professor, and mother, she can help you and your family safely traverse the digital world we live in.

In Dr. B’s book, Screen Time in the Mean Time, she discusses and attacks the issue of raising a family while safely integrating technology rather than fearing it. Also, you can download the free GKIS Connected Family Agreement simply by creating a GKIS account on our website home page. If you are looking for other fun and informative stories, check out the GKIS Blog. For other useful tips about how to make the internet a safer place for your family, you can get parenting and family coaching information, support, and other valuable information from the GKIS Screen Safety Essentials Course.

Thanks to CSUCI intern, Michael Watson for researching lifelike automatons and dolls.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

[1] Reynolds, E. (2018). The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing. Wired. https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics

[2] Specktor, B. (2018). Meet Erica, Japan’s next robot news anchor. Live Science. https://www.livescience.com/61575-erica-robot-replace-japanese-news-anchor.html

Photo Credits

Photo by Compare Fibre (https://unsplash.com/photos/IaX5aH9spPk)

Photo by Possessed Photography (https://unsplash.com/photos/YKW0JjP7rlU)

Photo by Sigrid Wu (https://unsplash.com/photos/KSTM340nmyA)

Photo by Arteum.ro (https://unsplash.com/photos/7H41oiADqqg)

 

Thanks to Kent Williams for the beautiful painting used for the thumbnail. (https://www.kentwilliams.com/paintings/2018/8/16/2018/8/16/m-w)

 

Is Artificial Intelligence Facial Recognition Threatening Our Privacy?

In 2014, GetKidsInternetSafe founder Dr. Tracy Bennett wrote an article on artificial intelligence (AI) facial recognition and the potential dangers associated with the technology. Fast-forward six years to 2020, and many of her predictions have proven true, plus more than could have been anticipated. AI facial recognition has boomed to the point that many companies are using our social media data to increase profits. Big tech is willing to do whatever it takes to capitalize on us, even when it is not in our best interest. For a glimpse into the scary future possibilities of privacy invasion and the trampling of our civil rights, check out what’s happening in China in today’s GKIS article.

Artificial intelligence (AI) facial recognition has come a long way in the past few years, especially since engineers began using artificial neural networks. These networks are loosely modeled on the human brain. They consist of connected nodes, called artificial neurons, that can transmit signals to other nodes. Once a node receives a signal, it processes it and relays the result to the nodes connected to it. A neural network can take almost any kind of input; in face recognition, an image of the face is entered. The AI marks each facial feature as a nodal point, collecting more data with each image.
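To make the “node” idea concrete, here is a minimal sketch of a single artificial neuron in Python. It is illustrative only; real face-recognition networks chain millions of these nodes together in layers, and the input numbers below are made up:

```python
# A single artificial neuron: weighted sum of incoming signals, then an
# activation function. Illustrative sketch only -- real facial-recognition
# networks connect millions of these nodes in layers.
import math

def neuron(inputs, weights, bias):
    # Each incoming signal is scaled by a learned weight, then summed.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid activation squashes the sum into the range 0..1; this is
    # the signal the node relays to the nodes connected to it.
    return 1 / (1 + math.exp(-total))

# Three made-up nodal-point measurements from a face image:
signal = neuron([0.2, 0.8, 0.5], [0.9, -0.3, 0.4], bias=0.1)
print(round(signal, 3))
```

Training a network amounts to nudging those weights, image after image, which is why these systems improve as they collect more facial data.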

Facebook uses neural networks and processes over 350 million new pictures daily. Amazon also has a paid facial recognition service called Rekognition. Clearview AI is a controversial service that many in Silicon Valley have opposed due to its implications for privacy. Clearview AI scrapes social media platforms and has acquired over 3 billion pictures for its inventory. When someone searching Clearview gets a match, they get data AND a link to the social media accounts where the facial data was acquired. Many have concerns that this takes the privacy breach a step further.

Beneficial Ways Facial Intelligence is Being Used

  • AI has led to the recovery of many missing children that have been sex trafficked or sexually exploited.
  • Taylor Swift’s security team used facial recognition at her concerts to see if any of her stalkers were in the audience.
  • Law enforcement uses AI to identify people that cannot identify themselves, like people with severe mental illnesses, people high on drugs, or people that are refusing to identify themselves. With a three-minute turnaround time, law enforcement is saving a ton of money and time so they can focus on other crimes.

Controversial Ways Facial Intelligence is Being Used

  • A man seen stealing beer at a CVS in New York City looked a lot like Woody Harrelson. The police entered a picture of Woody Harrelson into facial recognition technology and found a match. Although police were able to locate and apprehend the suspect, this technology could implicate the wrong person with similar facial geometry.
  • People of color are more likely to be misidentified due to AI facial recognition not being as good at differentiating people with darker skin.
  • The government could enable continued surveillance of certain individuals like they are doing in China. China uses facial recognition to follow Uighurs, a largely Muslim minority, as well as monitor all Chinese citizens using a social credit score.

Dystopian Surveillance

AI advancements worry people who fear one day living in a dystopian surveillance state. In such a society, all citizens would be tracked, and privacy would cease to exist. One might think that with the civil rights protections in the United States we are not at risk. I wonder if Chinese citizens have concerns…

China has more AI facial recognition CCTV cameras than any other country in the world and is a prime example of dystopian surveillance. The Chinese government claims to use AI to lower crime and increase prosocial behavior through a social credit system run by a company called Sesame Credit. They contend that the system encourages citizens to behave in a socially appropriate manner, and that if someone is a good citizen, they have nothing to hide and the cameras should not be a concern.

Specifically, under Sesame Credit in China, if a citizen is caught on camera doing anything not considered “socially appropriate,” like jaywalking, littering, smoking, or buying too much alcohol or too many video games, their social credit score decreases. A low score may mean being unable to purchase airline or train tickets or book certain hotels, or being barred from certain schools and jobs. Citizens can also have their dog taken away if it isn’t walked on a leash or is a public disturbance. Blacklisted citizens must also register on a public blacklist, which typically results in social stigmatization. Parents’ scores can affect other family members, for example by preventing kids from being accepted to private schools. Public shaming is a big part of the social credit system: pictures of blacklisted and low-scoring citizens are shown on TikTok, pictures and videos with names play on public LED screens, and addresses are shown on a map on WeChat.

People with good social credit scores appreciate the system because they get rewarded. Perks include discounts on hotels, entertainment, and energy bills, and the ability to rent bikes without a deposit. High scorers also get into better schools and gain access to better jobs. Users on dating apps are required to list their social credit score; good scores get more dates.

Ways Citizens Can Raise Their Scores

  • Donating to college funds for poor students
  • Caring for elderly or disabled people
  • Repaying a loan even if the bank canceled it

How the United States is Implementing Social Credit

The U.S. has not implemented AI as comprehensively as China, but it is used in some industries. For example, life insurance companies in New York are allowed to review a person’s public social media accounts to see if they engage in risky behavior, and they base the person’s premium on what they find. In fact, a 2020 survey found that 98% of professionals do a background check on new hires, and 79% have disqualified a job candidate due to unfavorable social media content.

There is also a company called PatronScan, designed to help restaurants and bars manage customers. It helps spot fake IDs and troublemakers by scanning an ID upon entry, and a shared list is visible to all PatronScan customers. The problem is that the judgment about what constitutes a “troublemaker” is subjective and may result in an unfair listing without the person’s knowledge or consent.

Rideshares like Uber and Lyft have reviews for both drivers and riders that may result in a customer being refused a ride. Airbnb also works by reviewing both hosts and renters. Many hosts refuse to rent to certain people based on their past reviews, and many hosts may not be booked based on renter reviews.

China is a prime example of the dangers of AI facial recognition and how it can affect our privacy and freedoms. There is not yet much legislation preventing AI from being used in the United States and there’s a need to push for it. Like the frog in the pot, people adapt so willingly to advancing technology that it’s difficult to recognize possible consequences.

For information and safety tips about how to keep you and your family safe, we highly recommend Dr. B’s Cybersecurity and Red Flags supplement. In an age where technology is advancing at such a fast rate, it is important to keep you and your family informed on current technological risks and how to prevent them.

Thank you to CSUCI intern Andres Thunstrom for co-authoring this article.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Photo Credits

Photo by Burst on Pexels
Photo by Pixabay
Photo by Gamefication

Works Cited

Campbell, C. (2019). How China is using “social credit scores” to reward and punish its citizens. Time. https://time.com/collection/davos-2019/5502592/china-social-credit-score/

Harwell, D. (2019, July 9). Facial-recognition use by federal agencies draws lawmakers’ anger. The Washington Post. https://www.washingtonpost.com/technology/2019/07/09/facial-recognition-use-by-federal-agencies-draws-lawmakers-anger/

Hill, K. (2020, February 10). The secretive company that might end privacy as we know it. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Mckeon, K. (2020, April 28). 5 personal branding tips for your job search. The Manifest. https://themanifest.com/digital-marketing/5-personal-branding-tips-job-search

Thorn. (2020). Eliminate child sexual abuse material from the internet. https://www.thorn.org

The Psychology Behind Fake News, Bots, and Conspiracy Theories on the Internet

Clickbait headlines and internet autofeeds tempt us into mindless scrolling. They soak into our memories without our awareness and tempt us to share after reading only the headline. False information manipulates stock markets, our political views, and our purchasing. It makes us feel connected to celebrities, and it can divide families. Everybody has an opinion they are happy to argue about online, even one they would consider too rude to share at a dinner party. What is fake news? How do bots contribute to it? Why does it suck us in so expertly? And how can we avoid its seductive allure?

What is “fake news?”

Fake news is false information designed to shape opinions and tempt sharing. It could be a rumor, deliberate propaganda, or an unintended error that deceives readers.

Fake news can affect attitudes and behavior. Fake news about a celebrity may not be harmless, but chances are it won’t have a long-lasting and devastating impact. However, fake news about the spread of a virus, the necessity of medical interventions, or the intentions of a politician can have a huge impact and manipulate behavior in dangerous ways.

Bots!

In addition to the three billion human accounts on social media, there are also millions of bots.[i] Bots are created using a computer algorithm (a set of instructions used to complete a task) and work autonomously and repetitively. They can simulate human behavior on social media websites by interacting with other users and by sharing information and messages.

Bots possess artificial intelligence (AI). They can learn response patterns in different situations. Programmed to identify and target influential social media users, bots can spread fake news quickly.
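As a toy illustration of what “a set of instructions that works autonomously and repetitively” means, here is a hypothetical sketch of a bot’s core loop: find the most influential account it can see and push a message at it, over and over. All account names and follower counts below are invented:

```python
# Toy sketch of a social-media bot's core loop. All accounts, follower
# counts, and the message are invented for illustration only.

accounts = {"news_anchor": 2_000_000, "local_cafe": 800, "celebrity": 9_500_000}
message = "You won't BELIEVE this story..."

def most_influential(followers):
    # Bots are often programmed to target influential users, because one
    # share from them spreads the message to millions of followers.
    return max(followers, key=followers.get)

log = []
for _ in range(3):  # "repetitively": the same steps, again and again
    target = most_influential(accounts)
    log.append(f"@{target}: {message}")

print(log[0])
```

A real bot would post through a platform’s API instead of appending to a list, but the shape is the same: a tireless loop that never sleeps, which is how a small number of operators can flood a platform.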

According to a 2017 estimate, there were about 23 million bots on Twitter, 27 million on Instagram, and 140 million on Facebook. Altogether, that adds up to 190 million bots on just three social media platforms, more than half the population of the United States.[ii]

3 Reasons Why We Get Sucked in by Fake News

With convenient on-demand internet access, we've gotten into the habit of greedily gulping our news rather than thoughtfully chewing it. We browse instead of reading, then impulsively jump to share.

A recent study found that 59% of shared articles on social media are never even read. Most social media users get their information based solely on a headline.[iii] Why are we susceptible to this form of online behavior? Are we lazy with low attention spans, or could it be something else?

Fake news is crafted to be widely appealing. 

A recent study found that fake news is 70% more likely to be retweeted than a true story, and that a true story takes six times longer than fake news to reach 1,500 people. Fake news is typically new and unusual information that has been tested for shareability. Unlike the truth, which you consume once and are done with, fake news is alive and constantly evolving.[iv]

We hear and see what we want.

An echo chamber is a metaphor for a closed online space where beliefs are repeated by different users. With each contact with that information, the information is exaggerated and the reader becomes more convinced that the content is factual and impactful.

Social media sites repetitively send us links to information based on our previous internet searches. This is called targeted advertising, and it is designed to lead us down a rabbit hole of single-minded desire. It not only sells us ideas, belief systems, and "facts," but can also get us to back politicians and influencers and, ultimately, to spend our money. The act of unconsciously seeking out and remembering information that supports our existing views is called confirmation bias. Fake news feeds this bias.
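A toy model makes the feedback loop visible. This sketch is not any real platform's algorithm; it simply weights recommendations by past clicks, which is enough to show how a few early clicks on one category come to dominate a feed:

```python
import random
from collections import Counter

def recommend(click_history, catalog, k, seed=0):
    """Naive engagement-driven recommender: each category's chance of
    being shown grows with how often the user has clicked it before."""
    rng = random.Random(seed)
    counts = Counter(click_history)
    weights = [1 + counts[item] for item in catalog]
    return rng.choices(catalog, weights=weights, k=k)

catalog = ["sports", "cooking", "conspiracy"]
history = ["conspiracy"] * 20  # the user clicked one category repeatedly

feed = recommend(history, catalog, k=1000)
print(Counter(feed).most_common(1)[0][0])  # "conspiracy" dominates the feed
```

With 20 prior clicks on one category, that category carries over 90% of the weight, so the feed becomes an echo chamber of the user's own past behavior.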

Shortcuts are easier.

Heuristics are shortcuts our minds take to make quicker decisions. They allow us to function without having to think about every action we make.

Humans are not built to hold an objective view of the world. We form decisions from a vague worldview supported by emotional confirmation, searching for facts that make us feel more confident and avoiding or flatly rejecting those that don't.

Black-and-white thinking calms our anxiety and makes us feel like we have more control. Considering complex information and complicated nuance takes more effort and time. It also requires a more informed knowledge base to work from, and most online readers don't want to take the time to patiently and humbly build up that kind of expertise. Quick, easily shareable information is the more attractive option for online communication.

3 Reasons Why We Believe It

British psychologist Karen Douglas identified three motives that lead people to believe in conspiracy theories.

The Desire for Understanding and Certainty

It’s human nature to try to explain why things happen. Evolutionarily, those who were the best problem-solvers were more likely to survive. There is an adaptive advantage for those who ask questions and quickly find answers. Easy answers ease our anxiety and simply confirm our worldview.

Conspiracy theories are false beliefs, and those who hold them have a vested interest in keeping them. Uncertainty is an unpleasant state, and conspiracy theories provide a comforting sense of understanding and certainty.

The Desire for Control and Security

We need to feel we have control over our lives. For conspiracy theorists, this is especially true when the alternative to their belief is stressful. For instance, if global warming is real and temperatures are rising, we will have to change our lifestyles, which would be uncomfortable and costly. It's easier to listen to influencers who assure us that global warming is a hoax so we can carry on as we are. This is called motivated reasoning, and it is a strong component of belief in conspiracy theories.

The Desire to Maintain a Positive Self-Image

Research has shown that those who feel socially marginalized are more likely to believe in conspiracy theories. A positive self-image is fed by successes in our relationships and accolades from those we admire. Chatting in online forums with like-minded others brings community and feelings of self-worth. Researching a conspiracy theory can give one a feeling of exclusive knowledge and expertise and offer opportunities for adulation and leadership.[v]

How to Protect Ourselves from Being Duped by Fake News and Conspiracy Theories

Assess the characteristics of the article you are reading.

  • Is it an editorial or an opinion piece?
  • Who is the author?
  • Is the author credible?
  • Have they specialized in a certain field, or are they a random person with an unresearched opinion?
  • Can you trust the information they offer?
  • Do they cite their sources, or is the article designed to impress rather than inform?

Check the ads.

Be wary of articles containing multiple pop-ups, advertisements of items not associated with the article, or highly provocative and sexual advertisements.

Verify images.

Are the images copied from other sources or are they licensed for use by the author? Google Image Search is an easy tool to find published copies of the image.
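Reverse image search works by comparing compact "fingerprints" of images rather than raw pixels. Below is a minimal sketch of one such fingerprint, an average hash, computed here over a tiny 8-value grayscale strip (real tools hash a downscaled 2-D image, but the principle is identical):

```python
def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the mean.
    Near-duplicate images yield near-identical hashes."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Made-up pixel values for illustration
original     = [10, 200, 30, 220, 15, 210, 25, 205]
recompressed = [12, 198, 33, 219, 14, 212, 27, 203]  # same image, re-saved
unrelated    = [100, 101, 99, 102, 100, 98, 101, 100]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))     # 3
```

Because the hash survives recompression and resizing, a search engine can match a reposted copy of a photo back to its earlier published versions.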

Use fact-checking websites.

Examples are Snopes, Factcheck.org, and PolitiFact.

Research opposing views. 

Seek out sources with viewpoints that oppose the articles you read and differ from your own opinions. To defend a point of view, you must understand the other side.

Learn to tolerate several complex ideas at once, even if it causes tension.

Smart discussion requires engaging with the nuance of complex ideas rather than falling back on faulty, black-and-white thinking. Experts are not shy about saying they don't know something; insecure amateurs try to fake it.

Share responsibly. 

As important as it is to protect yourself from fake news, it is equally important to help protect others. Check the authenticity of an article before posting it online. If Aunt Joyce posts something inaccurate, message her privately, let her know it's fake news, and explain how you found out so she can fact-check for herself in the future.

Thanks to CSUCI intern Dylan Smithson for researching the ways fake news affects us and how to avoid being duped online. To view news clips of Dr. Bennett's interviews about parenting and screen safety, check out her YouTube channel at https://www.youtube.com/DRTRACYBENNETT.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

[i] Simon Kemp (2019). Digital Trends 2019: Every Single Stat You Need to Know About the Internet. https://thenextweb.com/contributors/2019/01/30/digital-trends-2019-every-single-stat-you-need-to-know-about-the-internet/

[ii] Amit Agarwal (2019). How Is Fake News Spread? Bots, People Like You, Trolls, and Microtargeting. http://www.cits.ucsb.edu/fake-news/spread

[iii] Jayson DeMers (2016). 59 Percent of You Will Share This Article Without Even Reading It. https://www.forbes.com/sites/jaysondemers/2016/08/08/59-percent-of-you-will-share-this-article-without-even-reading-it/#646fecdb2a64

[iv] Kari Paul (2018). False News Stories Are 70% More Likely to Be Retweeted on Twitter Than True Ones. https://www.marketwatch.com/story/fake-news-spreads-more-quickly-on-twitter-than-real-news-2018-03-08

[v] David Ludden (2018). Why Do People Believe in Conspiracy Theories? https://www.psychologytoday.com/us/blog/talking-apes/201801/why-do-people-believe-in-conspiracy-theories

Photo Credits

Antonio Marín Segovia Internet ha sido asesinado por el macarrismo ilustrado de Wert, con el beneplácito del PPSOE CC BY-NC-ND 2.0

Free Press/ Free Press Action Fund’s photostream Invasion of Fake News CC BY-NC-SA 2.0

Sean MacEntee social media CC BY 2.0

Keywords: Internet, Conspiracy Theories, Fake News, Bots, AI, Confirmation Bias, Heuristics, Echo Chamber

 

Could Your Daughter Be the Victim of a Deepfake Like Taylor Swift and Scarlett Johansson?


We are consuming more online media than ever. A recent poll showed that 85% of adults receive their news through a mobile device, and 67% get their news from social media websites.[1] Still fresh in the minds of most Americans are the internet propaganda attacks carried out by Russian hackers. With sensational headlines, these hackers significantly affected the thoughts and beliefs of the American people. What could a hacker accomplish if they could create videos of our heroes and celebrities performing any act they choose? What if we couldn't distinguish real from fake? What if you or your family were targeted?

Deepfake Attacks Hollywood Celebrities

In December 2017, Reddit user Deepfake released a series of pornographic videos featuring Scarlett Johansson, Gal Gadot, Taylor Swift, and Aubrey Plaza. Using a process called human image synthesis, the hacker created photorealistic images and video renditions of celebrity faces indistinguishable from the real thing.

To do this, he compiled multiple photos and videos of his victims and fed them into specialized software. An artificially intelligent (AI) algorithm then ran the data through multiple computations, training itself to perform the task. Deepfake trained his AI to convincingly swap celebrities’ faces with the faces of pornographic video actors. Voila! A Hollywood scandal was born.
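Conceptually, this kind of face-swap pipeline trains one shared encoder (which learns expression and pose) plus a separate decoder per identity; swapping means decoding person A's code with person B's decoder. The toy sketch below uses hand-written stand-in functions rather than trained neural networks, purely to show that data flow:

```python
def encode(face):
    """Stand-in encoder: compress a face (a list of pixel values) into a
    smaller code meant to capture expression/pose, not identity."""
    return [(face[i] + face[i + 1]) / 2 for i in range(0, len(face), 2)]

def make_decoder(style_offset):
    """Each identity gets its own decoder that renders any code in that
    person's 'style' (here, just a brightness offset)."""
    def decode(code):
        out = []
        for v in code:
            out.extend([v + style_offset, v + style_offset])
        return out
    return decode

decoder_a = make_decoder(0.0)   # in reality, trained on person A's frames
decoder_b = make_decoder(10.0)  # in reality, trained on person B's frames

face_a = [1.0, 3.0, 5.0, 7.0]        # a frame of person A
swapped = decoder_b(encode(face_a))  # A's expression, rendered as B
print(swapped)  # [12.0, 12.0, 16.0, 16.0]
```

The need for "multiple photos and videos" of the victim comes from the training step this sketch omits: each decoder must see thousands of frames of its target before it can render convincing output.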

How in the …

Computer-generated imagery (CGI) has been a staple of Hollywood special effects for decades. It's been used to make cartoon toys come to life in Toy Story and to turn people into wholly different creatures in The Lord of the Rings. The software and technology that let big studios put someone's face onto a toy or a Hobbit were incredibly expensive, and the work was laborious. Now, though, anyone with a few thousand dollars can afford the computer and software necessary for Hollywood-quality special effects.

After the scandal, the deepfake community worked hard and fast to make face-swapping technology available to the masses.

In January 2018, only a month after Deepfake's videos appeared online, an app called FakeApp was publicly released. FakeApp uses TensorFlow, a machine learning toolkit developed by Google. It is free and relatively easy to use if you have a powerful enough computer, which means we are likely to see more victims and increasingly dangerous scenarios.

How to Implant False Memories

Not only can hackers create a fake event to trick us, they can also alter our recollection of real events. Memory isn't a black-and-white retrieval system in which information is accurately laid down and later pulled intact from your brain's database. Instead, memory is a reconstructive process. The original memory is shaped by environmental and perceptual factors before being consolidated for storage, and our brains modify it again during each retrieval. False details introduced after the fact, known as post-event misinformation, can be woven into the memory itself and can dramatically affect attitudes and behavioral intentions.[2]

Post-event misinformation can be invisibly and intentionally created. In 2010, Slate released a series of political photos (some real, some doctored) to approximately 1,000 of its readers and later asked whether they could remember the events pictured. The results were alarming: readers inaccurately "recalled" the events in the doctored photos 50% of the time, and 15% of the time they could even recall emotions associated with them. Readers were more likely to remember a doctored photo when it fit their political views.[3]

Hollywood Magic Impacts World Security

Even before the recent deepfake celebrity scandal and Russian election meddling, there was deepfaking happening online with a dangerous political impact.

In September 2017, an Iranian video was released claiming the country had successfully launched a new ballistic missile. The video was, in fact, footage of a failed missile launch filmed several months earlier. President Trump believed the video was real and condemned Iran for actions it did not commit. Iran responded that it would not tolerate threats from the president. The faked missile launch further divided the two nations. Luckily, the mistake did not result in a military response. However, it clearly could have!

Considering the sophistication of digital technology, will we be able to tell the truth from fake quickly enough to prevent a global catastrophe in the future?

Government Intervention

The United States Government is reportedly working on it. A research group called SRI International has been awarded three contracts by the Defense Advanced Research Projects Agency (DARPA) to develop tools capable of identifying whether a video or image has been altered and how the manipulations were performed.[4]

Another step that could reduce the potential dangers of deepfakes is equipping photos and videos with a digital code that proves authenticity. Increasingly, websites are also monitoring for fraudulent images and videos and making special efforts to identify and remove them.

Unsure if an image, video, or news report is fake? Get in the habit of searching for truth analysis on the popular website Snopes before you make false assumptions or forward deepfakes to friends or on social media.

Your Legal Rights

If you find yourself to be the victim of a video or image with your likeness, it is your legal right to act against it. Here are a few ways the legal system may apply to cases involving deepfakes.

  • Extortion – using a deepfake to force or threaten someone into giving up money or something of value.
  • Harassment – using a deepfake to pressure or intimidate.
  • False Light – invasion of privacy through a misleading deepfake portrayal.
  • Defamation – damage to reputation caused by a deepfake.
  • Intentional Infliction of Emotional Distress – emotional harm caused by a deepfake.
  • Right of Publicity – a deepfake produced and distributed without the subject's consent.
  • Copyright Infringement – the facial image used in a deepfake is copyrighted material.

Thank you to CSUCI intern Dylan Smithson for giving us factual, interesting information to share with our kids during a screen-free dinner. Haven't implemented that best-practice family habit yet?

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

[1] Kristen B. & Katerina M. (2017). Key Trends in Social and Digital News Media. http://www.pewresearch.org/fact-tank/2017/10/04/key-trends-in-social-and-digital-news-media/

[2] Dario Sacchi, Franca Agnoli, & Elizabeth Loftus (2007). Changing History: Doctored Photographs Affect Memory for Past Public Events. doi:10.1002/acp.1394. https://webfiles.uci.edu/eloftus/Sacchi_Agnoli_Loftus_ACP07.pdf

[3] William S. (2010). The Ministry of Truth. Slate. http://www.slate.com/articles/health_and_science/the_memory_doctor/2010/05/the_ministry_of_truth.html

[4] Taylor H. (2018). DARPA Is Funding New Tech That Can Identify Manipulated Videos and 'Deepfakes'. TechCrunch. https://techcrunch.com/2018/04/30/deepfakes-fake-videos-darpa-sri-international-media-forensics/

Photo Credits

M U Opening The Objectivist Drug Party – Zach Blas & Genomic Intimacy – Heather Dewey-Hagborg. CC BY-NC-ND 2.0

Mike MacKenzie Fake News – Computer Screen Reading Fake News CC BY 2.0

Dave 109 / 365 It’s definitively a candlestick holder CC BY-NC 2.0