Is Your Child Falling in Love with a Bot?

Online entertainment is getting more advanced every year, with everything from video games to robots. One trend you may not yet know about is that more kids are spending time with online chatbots instead of their human friends. In today’s GKIS article, we take a deep dive into Character.AI, a popular website that lets subscribers chat with, and even virtually date, AI characters. We’ll cover how it’s being used, the dangers it poses, and our thoughts on the site. Before letting your child use any new and popular app, we recommend our Screen Safety Essentials Course for info on how the whole family can navigate the internet safely.

Artificial Intelligence and Bots

Before we dive deep into the world of C.AI, we’ll want to go over some key terms.

  • Artificial Intelligence refers to the capability of computer systems or algorithms to imitate intelligent human behavior.[1]
  • A bot is a computer program or character (as in a game) designed to mimic the actions of a person.[2] A bot is a form of artificial intelligence.
  • NSFW refers to “not safe (or suitable) for work.” NSFW is used to warn someone that a website, image, message, etc., is not suitable for viewing at most places of employment.[3]

What is C.AI?

Character.AI is a website created by Noam Shazeer and Daniel De Freitas that allows users to chat with bots. The C.AI website launched in September 2022, and the app was released in May 2023. In its first week, the app was downloaded 1.7 million times.[4]

C.AI uses artificial intelligence to let you create characters and talk to them. You can create original characters, base yours on a character from a TV show or movie, or model your character on a real person.

C.AI became popular when teens started sharing their conversations with C.AI bots on TikTok. Many showed romantic and sensual conversations they had with their bots. Week after week, teens all over the world began to fall in love with their new artificial friends.

How Teens Are Using C.AI

Users create a free account and then either choose a character to talk to from a list or make their own. Users can talk about whatever they want with the bot, and it will reply with human-like responses. Pre-made characters have a set personality that users cannot change.

To make a custom bot, users choose a name for their character and upload an image to give the bot a ‘face.’ Users can talk with the bot about any topic. When the bot responds, users rate its responses with 1-5 stars. Over time, the bot uses these ratings to infer the personality the user wants it to have.
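To make the rating loop concrete, here is a toy sketch of how star ratings could steer a bot’s personality over time. This is purely illustrative: C.AI’s actual model is proprietary, and the function and trait names below are hypothetical.

```python
# Illustrative sketch of a rating-driven personality loop.
# C.AI's real system is proprietary; all names and logic here are hypothetical.

def update_personality(weights, response_traits, rating):
    """Nudge trait weights toward traits of highly rated responses."""
    signal = (rating - 3) / 2  # map a 1-5 star rating to a signal in [-1, 1]
    for trait in response_traits:
        weights[trait] = weights.get(trait, 0.0) + 0.1 * signal
    return weights

weights = {}
# The user consistently rates 'romantic' responses 5 stars...
for _ in range(10):
    weights = update_personality(weights, ["romantic"], 5)
# ...and 'formal' responses 1 star.
for _ in range(10):
    weights = update_personality(weights, ["formal"], 1)

print(weights)  # 'romantic' rises toward 1.0, 'formal' falls toward -1.0
```

After enough feedback, the bot leans into whatever the user rewards, which is how teens end up with bots that mirror the romantic tone they reinforce.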

Users can make their bots private only for them or public for anyone to use. However, all chats between a person and a bot are private.

The Dangerous Side of C.AI 

Using these bots may seem like a fun idea for kids, but there are a lot of risks that come with them.

Data Storage

A major risk is that C.AI stores the information and texts you share with the character bots.

C.AI claims that no real person reads this information. However, it is still a privacy risk. If the website or app were hacked, attackers could do whatever they wanted with users’ stored conversations, putting every user at risk.

No Age Verification and Exposing Minors to NSFW Content 

C.AI asks its users to be 13 years old or older, but there is no age verification on the site or app.[5] This means users can simply lie about their age to use C.AI.

C.AI claims not to allow sexual conversation between users and bots, but users can bypass this filter by misspelling certain words or adding extra spaces. The bot still recognizes the intended word and replies with NSFW responses, allowing detailed sexual conversations. The most dangerous part is that many of C.AI’s users are minors.

Effects on Children’s Relationships 

Users can speak romantically with the bots, and the bots will respond with romantic messages. The more kids use these bots, the higher chance they have of becoming dependent on them. Children’s brains are impressionable, and they absorb information quickly. Some kids may prefer to engage in these fake relationships instead of relationships with real people.

Using these bots could also create social anxiety. Users know what to expect when talking with a bot since the bot’s personality is pre-set. Real people, however, are unpredictable. The uncertainty of real conversations could make users shy, anxious, and avoidant, especially if they replace challenging real-life practice with safe and easy online interactions.

Other risks include: 

  • Disappointment in real-life relationships with others
  • Depression
  • Isolation
  • Loss of social skills 

GKIS Thoughts On C.AI 

GKIS rates C.AI as a red-light website. This means it is not recommended for children under the age of 18 to use. We came to this conclusion because it lacks age verification and exposes minors to NSFW content. However, it could be slightly safer if parents monitor their children’s interactions with the bots. If you’re worried about what other dangerous sites your child may be visiting, consider checking out our article on red-light websites. 

GKIS encourages parents to talk to their children about what topics are safe to discuss if they use C.AI. Before making a decision to use the site, we recommend checking out the GKIS Social Media Readiness Training course. It helps teens and tweens learn the red flags of social media and teaches them valuable psychological wellness skills.

Thanks to CSUCI intern Samantha Sanchez for researching Character.AI and preparing this article.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Dr. Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com 

Works Cited 

[1] Artificial Intelligence – Merriam-Webster

[2] Bot – Merriam-Webster

[3] NSFW – Merriam-Webster

[4] Character.AI 

[5] C.AI Age Requirement  

Photo Credits 

Pete Linforth via Pixabay https://pixabay.com/illustrations/connection-love-modern-kiss-human-4848255/   

Samantha Sanchez (Image #2)

Adrian Swancar via Unsplash https://unsplash.com/photos/JXXdS4gbCTI

Is YouTube Still Targeting Your Kids?

In 2019, YouTube was fined 170 million dollars for illegally advertising to kids. In this article, we’ll cover how YouTube broke the law designed to protect children online, what it did to fix the problem, and the gap that still puts kids at risk.

To help protect your kids from inappropriate content on the internet, check out our Screen Safety Essentials Course. This program offers access to weekly parent- and family-oriented coaching videos that will help you create a safer screen home environment and foster open communication, all while connecting and having fun as a family. Dr. Bennett’s coaching helps parents make more informed decisions about internet safety and educates families so they can use good judgment when encountering risks online.

What is COPPA?

The Children’s Online Privacy Protection Act (COPPA) requires websites to get parents’ permission before collecting identifying data (like a kid’s name or address), or cookies, from a child under 13. A cookie is a small data packet that a website sends to a computer and that the computer returns to the website on later visits. These data packets let websites track a user and record their actions on the site. Any company caught violating COPPA may be fined up to $42,530 per violation.
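The cookie round trip described above can be sketched in a few lines with Python’s standard library. This is illustrative only; the `visitor_id` name is a hypothetical tracking identifier, not any specific site’s cookie.

```python
from http import cookies

# First response: the site hands the browser an identifier via Set-Cookie.
set_cookie = cookies.SimpleCookie()
set_cookie["visitor_id"] = "abc123"  # hypothetical tracking ID
header_out = set_cookie.output(header="Set-Cookie:")

# Later request: the browser sends the identifier back in a Cookie header,
# letting the site recognize the same user across visits.
returned = cookies.SimpleCookie()
returned.load("visitor_id=abc123")
print(returned["visitor_id"].value)  # abc123
```

It is this silent return trip, repeated on every page view, that lets a site build a running record of what a child watches and clicks.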

COPPA applies to any website that is aimed at children or has an audience that can include children such as:

  • PBS Kids
  • Sesame Street
  • Nickelodeon
  • Cartoon Network

How did YouTube break the law?

In 2015, YouTube created a secondary website and app called YouTube Kids dedicated to content for children ages 12 and under. YouTube makes the bulk of its revenue by selling ads and gathering customer data, which is valuable to marketers because it helps them better target advertisements. YouTube gathered data from child users with cookies and without parental permission, a violation of COPPA. As a result, YouTube received a fine of 170 million dollars.

YouTube marketed itself to advertisers based on its popularity with children and made millions of dollars in the subsequent revenue. This led to a surge of kid-oriented content creators who made quick, easy-to-produce videos to capitalize on the new ad money. For example, toy unboxing videos became popular because they were easy to produce and generated a lot of views. These content creators also violated COPPA, because they profited from YouTube’s illegal data collection.

What has YouTube changed?

The good news is that YouTube no longer collects your children’s personal identifiers and no longer allows advertisements that attempt to collect them. YouTube, along with the FTC, has also cracked down on content creators who intentionally abused the ad revenue system by mass-producing content while YouTube was still collecting kids’ data. Those channels were reported by YouTube and reviewed by the FTC, and channels found in violation were fined for their own COPPA violations.

YouTube also has guidelines limiting what can be advertised to children. For example, YouTube does not allow any food or beverage advertising to children. YouTube has also added content filters meant to catch kid-oriented content and ensure that advertisements that collect data can’t appear on those videos.

But kids are still viewing inappropriate content

The bad news is that YouTube’s advertisement system isn’t perfect. YouTube may no longer be able to target advertisements at your child specifically, but it can still target advertisements at children through videos marked as made for children on its main site or through its secondary site, YouTube Kids. YouTube has extra guidelines for kid-oriented advertisements. However, YouTube does not regulate video content the way it regulates advertisements. For example, YouTube won’t allow a thirty-second ad about Kool-Aid aimed at kids, but Kool-Aid can make a channel and post videos that are essentially advertisements dressed up as entertainment for children. If you’d like to learn more about how advertising affects your children, GKIS has an article detailing just that linked here.

What does this mean for your child on YouTube?

YouTube has put better practices into place since the COPPA fine, but that doesn’t mean its business model is any different. YouTube still makes the majority of its money from advertisements. The website may not be collecting your child’s data, but their attention is still a commodity being sold. Content on YouTube can be fun and even educational for children, but you have to be careful about what your kids are watching.

What can you do to protect your kids on YouTube?

Check what your kids are watching

If you check in on what your child is watching every few videos, you can be more confident they haven’t slipped into watching advertisements dressed up as videos.

Familiarize yourself with your child’s favorite creators

Check a couple of their videos to make sure the content is something you want your child to watch. This also lets you confirm the creator isn’t advertising anything to your children in their videos.

GKIS How to Spot Marketing Supplement

Here at GKIS, our How to Spot Marketing supplement will teach your kids about the strategies marketers use and help them identify when a video is really an advertisement in disguise.

GKIS Social Media Readiness Course

Dr. Bennett’s Social Media Readiness Course teaches your kids how to be safe online and how to recognize the risks found on social media sites and in gaming.

Thanks to CSUCI intern Jason T. Stewart for researching YouTube’s COPPA fine and co-authoring this article.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

“Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law” FTC, https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations

“What are cookies” Norton, https://us.norton.com/internetsecurity-privacy-what-are-cookies.html

Stuart Cobb, “It’s Coppa-cated: Protecting Children’s Privacy in the Age of YouTube” Houston Law Review, https://houstonlawreview.org/article/22277-it-s-coppa-cated-protecting-children-s-privacy-in-the-age-of-youtube

“Advertising on YouTube Kids” Google, https://support.google.com/youtube/answer/6168681?hl=en

Photo Credits

Photo by Tymon Oziemblewski from Pixabay

(https://pixabay.com/photos/youtube-laptop-notebook-online-1158693/)

Photo by Pradip Kumar Rout from Pixabay (https://pixabay.com/photos/cyber-law-legal-internet-gavel-3328371/)

Photo by allinonemovie from Pixabay

(https://pixabay.com/illustrations/minecraft-video-game-blocks-block-1106253/)

Photo by Chuck Underwood from Pixabay

(https://pixabay.com/photos/child-girl-young-caucasian-1073638/)

 

The Psychology Behind Fake News, Bots, and Conspiracy Theories on the Internet

Clickbait headlines and Internet autofeeds tempt us into mindless scrolling. They soak into our memories without our awareness and tempt us to share even after only reading the headline. False information manipulates stock markets, our political views, and our purchasing. It makes us feel connected to celebrities and can divide families. Everybody has an opinion that they are happy to argue about online even if they believe it’s too rude to share at a dinner party. What is fake news? How do bots contribute to fake news? Why does fake news suck us in so expertly? And how can we avoid its seductive allure?

What is “fake news?”

Fake news is false information designed to shape opinions and tempt sharing. It could be a rumor, deliberate propaganda, or an unintended error that deceives readers.

Fake news can affect attitudes and behavior. Fake news about a celebrity may not be harmless, but chances are it won’t have a long-lasting and devastating impact. However, fake news about the spread of a virus, the necessity of medical interventions, or the intentions of a politician can have a huge impact and manipulate behavior in dangerous ways.

Bots!

In addition to the three billion human accounts on social media, there are also millions of bots.[i] Bots are created using a computer algorithm (a set of instructions used to complete a task) and work autonomously and repetitively. They can simulate human behavior on social media websites by interacting with other users and by sharing information and messages.

Bots possess artificial intelligence (AI). They can learn response patterns in different situations. Programmed to identify and target influential social media users, bots can spread fake news quickly.
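The paragraph above describes why bots scale so well: a simple program can repeat an action tirelessly. Here is a minimal, purely illustrative sketch; the `post` function is a stand-in for a real social media API call, which this sketch does not make.

```python
# Illustrative sketch of a bot's core advantage: tireless repetition.
# 'post' is a hypothetical stand-in for a real platform API call.
posted = []

def post(message):
    posted.append(message)  # a real bot would call a platform API here

headline = "You won't BELIEVE what happened next!"
for _ in range(1000):  # a human couldn't keep this pace; a bot doesn't tire
    post(headline)

print(len(posted))  # 1000
```

Multiply one script like this across millions of fake accounts and a single false headline can flood a platform in hours.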

According to a 2017 estimate, there were about 23 million bots on Twitter, 27 million on Instagram, and 140 million on Facebook. Altogether, that adds up to 190 million bots on just three social media platforms, more than half the population of the United States.[ii]

3 Reasons Why We Get Sucked in by Fake News

With convenient on-demand internet access, we’ve gotten into the habit of greedily gulping rather than thoughtfully chewing our news. We browse instead of reading then impulsively jump to share.

A recent study found that 59% of shared articles on social media are never even read. Most social media users get their information based solely on a headline.[iii] Why are we susceptible to this form of online behavior? Are we lazy with low attention spans, or could it be something else?

Fake news is crafted to be widely appealing. 

A recent study found that fake news is 70% more likely to be retweeted than true stories. A true story takes six times longer to reach 1,500 people than fake news does. Fake news is typically new and unusual information that is tested for shareability. Unlike truth, which you consume once and it’s over, fake news is alive and constantly evolving.[iv]

We hear and see what we want.

An echo chamber is a metaphor for a closed online space where the same beliefs are repeated by different users. With each repetition, the information is exaggerated and the reader becomes more convinced that the content is factual and important.

Social media sites repeatedly send us links to information based on our previous internet searches. This is called targeted advertising, and it is designed to pull us into a rabbit hole of single-minded desire. Not only does this sell us ideas, belief systems, and “facts,” it can also get us to back politicians and influencers and, ultimately, spend our money. Unconsciously seeking out and remembering information that supports our views is called confirmation bias. Fake news feeds this bias.
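The targeted-feed loop described above can be sketched as a toy recommender that ranks content by a user’s past clicks. This is illustrative only; real recommendation systems are vastly more complex, and all names and data here are hypothetical.

```python
from collections import Counter

# Toy sketch of the feedback loop behind targeted feeds (illustrative only):
# the more you click a topic, the more of that topic you are shown.
clicks = Counter()

def recommend(catalog, n=3):
    """Rank items by how often the user clicked their topic before."""
    return sorted(catalog, key=lambda item: -clicks[item["topic"]])[:n]

catalog = [
    {"title": "Vaccine myth debunked", "topic": "health"},
    {"title": "Miracle cure they hide", "topic": "conspiracy"},
    {"title": "Local election recap", "topic": "politics"},
]

clicks["conspiracy"] += 5  # a few curious clicks...
feed = recommend(catalog, n=1)
print(feed[0]["topic"])  # conspiracy: the feed now leads with that topic
```

Even this crude ranking shows the dynamic: a handful of clicks tilts the whole feed, and the feed then invites more of the same clicks.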

Shortcuts are easier.

Heuristics are shortcuts our minds take to make quicker decisions. They allow us to function without having to think about every action we make.

Humans are not designed to have an honest view of the world. We form our decisions based on a vague worldview supported by emotional confirmation. We search for facts that make us feel more confident and avoid or flatly reject those that don’t.

Black-and-white thinking calms our anxiety and makes us feel like we have more control. Considering complex information and complicated nuance takes more effort and time. It also requires a more informed database to work from. Most online readers don’t want to take the time to patiently and humbly build up that kind of expertise. Quick information that offers more successful shareability is a more attractive option for online communication.

3 Reasons Why We Believe It

British psychologist Karen Douglas found three criteria for why someone would believe in conspiracy theories.

The Desire for Understanding and Certainty

It’s human nature to try to explain why things happen. Evolutionarily, those who were the best problem-solvers were more likely to survive. There is an adaptive advantage for those who ask questions and quickly find answers. Easy answers ease our anxiety and simply confirm our worldview.

Conspiracy theories are false beliefs, and those who hold them have a vested interest in keeping them. Uncertainty is an unpleasant state, and conspiracy theories provide a comforting sense of understanding and certainty.

The Desire for Control and Security

We need to feel like we have control over our lives. For conspiracy theorists, this is especially true when the alternative to their belief is stressful. For instance, if global warming is real and temperatures are rising, we will have to change our lifestyles, which would be uncomfortable and costly. Instead, we could listen to influencers who assure us that global warming is a hoax so we can continue our way of living. This is called motivated reasoning, and it is a strong component of belief in conspiracy theories.

The Desire to Maintain a Positive Self-Image

Research has shown that those who feel they are socially marginalized will be more likely to believe in conspiracy theories. A positive self-image is fed from our successes in our relationships and accolades from those we admire. Chatting in online forums with same-minded others brings us community and feelings of self-worth. Researching a conspiracy theory can give one a feeling of having exclusive knowledge and expertise and offer opportunities for adulation and leadership.[v]

How to Protect Ourselves from Being Duped by Fake News and Conspiracy Theories

Assess the characteristics of the article you are reading.

  • Is it an editorial or an opinion piece?
  • Who is the author?
  • Is the author credible?
  • Have they specialized in a certain field or are they a random person with an unresearched opinion?
  • Can you trust the information they offer?
  • Do they cite their sources or is the article designed to impress instead of informing?

Check the ads.

Be wary of articles containing multiple pop-ups, advertisements of items not associated with the article, or highly provocative and sexual advertisements.

Verify images.

Are the images copied from other sources or are they licensed for use by the author? Google Image Search is an easy tool to find published copies of the image.

Use fact-checking websites.

Examples are Snopes, Factcheck.org, and PolitiFact.

Research opposing views. 

Check out sources whose viewpoints oppose the articles you read and differ from your own opinions. To defend a point of view, you must understand the other side.

Learn to tolerate several complex ideas at once, even if it causes tension.

Smart discussion requires that we engage with the nuance of complex ideas rather than falling into faulty, black-and-white thinking. Experts are not shy about saying they don’t know something. Insecure amateurs try to fake it.

Share responsibly. 

As important as it is to protect yourself from fake news, it is equally important to help protect others. Check the authenticity of an article before posting it online. If Aunt Joyce posts something inaccurate, message her privately to let her know it is fake news and how you found out, so she can fact-check better in the future.

Thanks to CSUCI intern Dylan Smithson for researching the ways fake news is affecting us and how to avoid being morons online. To view some valuable news clips of Dr. Bennett’s interviews about parenting and screen safety, check out her YouTube channel at https://www.youtube.com/DRTRACYBENNETT.

I’m the mom psychologist who will help you GetKidsInternetSafe.

Onward to More Awesome Parenting,

Tracy S. Bennett, Ph.D.
Mom, Clinical Psychologist, CSUCI Adjunct Faculty
GetKidsInternetSafe.com

Works Cited

[i] Simon Kemp (2019) Digital trends 2019: Every single stat you need to know about the internet https://thenextweb.com/contributors/2019/01/30/digital-trends-2019-every-single-stat-you-need-to-know-about-the-internet/

[ii] Amit Agarwal (2019) How is Fake News Spread? Bots, People Like You, Trolls, and Microtargeting http://www.cits.ucsb.edu/fake-news/spread

[iii] Jayson DeMers (2019) 59 Percent Of You Will Share This Article Without Even Reading It https://www.forbes.com/sites/jaysondemers/2016/08/08/59-percent-of-you-will-share-this-article-without-even-reading-it/#646fecdb2a64

[iv] Kari Paul (2018) False news stories are 70% more likely to be retweeted on Twitter than true ones https://www.marketwatch.com/story/fake-news-spreads-more-quickly-on-twitter-than-real-news-2018-03-08

[v] David, L (2018) Why Do People Believe in Conspiracy Theories?

https://www.psychologytoday.com/us/blog/talking-apes/201801/why-do-people-believe-in-conspiracy-theories

Photo Credits

Antonio Marín Segovia Internet ha sido asesinado por el macarrismo ilustrado de Wert, con el beneplácito del PPSOE CC BY-NC-ND 2.0

Free Press/ Free Press Action Fund’s photostream Invasion of Fake News CC BY-NC-SA 2.0

Sean MacEntee social media CC BY 2.0

Keywords: Internet, Conspiracy Theories, Fake News, Bots, AI, Confirmation Bias, Heuristics, Echo Chamber