The Chicago Journal

Snapchat and AI: a parent’s new headache


With the AI revolution in full swing, it’s no surprise that nearly every technology company wants a piece of the action.

Snapchat, the multimedia instant messaging service, has launched My AI to compete in the field of artificial intelligence.

While the technology has its advantages, it has also become a new source of stress for teenagers and their parents.


What happened?

Lyndi Lee of East Prairie, Missouri, warned her 13-year-old daughter against using the feature.

Lee, a software developer, is concerned about how My AI is seen by younger users.

“It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee.

“I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view.”

“I just think there is a really clear line [Snapchat] is crossing.”


Snapchat’s newest feature was just released.

It is powered by ChatGPT, a platform that offers ideas, answers queries, and interacts with users.

However, there are some significant differences:

  • Users can customize the chatbot’s name
  • Users can design custom Bitmoji avatars for the AI
  • Users can bring the AI into conversations with friends

Interacting with the chatbot may appear to be less transactional than interacting with the ChatGPT website.

It also makes it difficult to tell the difference between engaging with humans and communicating with robots.


Because of privacy issues, Snapchat’s new feature has received scathing criticism in app stores and on social media.

Other users have complained about creepy exchanges and the inability to remove My AI from their chat feed without purchasing a premium subscription.

Despite the fact that some people find the tool useful, the mixed response highlights the risks that businesses face when incorporating generative AI technology into their products, especially for brands with young audiences like Snapchat.

Snapchat was one of the early launch partners when OpenAI made ChatGPT open to third-party firms.

The feature almost immediately prompted families and lawmakers to express new worries.

Democratic Senator Michael Bennet wrote to the CEOs of Snap and other technology companies in March, just after My AI was made available to Snap’s subscription customers.

Bennet is skeptical of the chatbot’s interactions with younger users.

According to reports, the chatbot’s responses could show children how to deceive their parents.

“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote.

“Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

Snap responded recently, saying:

“My AI is far from perfect, but we’ve made a lot of progress.”


Snapchat users have raised concerns since the feature’s official introduction.

One user described his interaction with the chatbot as frightening after it lied about not knowing his location.

After the tone of the conversation shifted, the chatbot revealed that he lived in Colorado.

In a TikTok video, a user named Ariel had My AI compose a song, complete with an intro, chorus, and piano sounds, about what it’s like to be a chatbot.

When she sent the song back, the chatbot denied any involvement, saying:

“I’m sorry, but as an AI language model, I don’t write songs.”

Snapchat said it will continue to develop My AI based on user feedback and implement additional safeguards to keep users safe.

According to the firm, users can also choose not to interact with My AI.

Dropping My AI from conversation streams, on the other hand, requires a paid subscription to Snapchat+.

Some users caved and paid for the subscription just to turn the tool off, then canceled the service.

Teens and chatbots

ChatGPT has previously been criticized for giving misleading information, behaving inappropriately with users, and allowing students to cheat.

Integrating the technology into Snapchat, however, may worsen existing challenges while introducing new ones.

According to New York clinical psychologist Alexandra Hamlet, the parents of several of her patients were concerned about how their teenagers might use Snapchat’s new feature.

Concerns have also been expressed about chatbot guidance, particularly in the context of mental health.

Artificial intelligence technologies can reinforce a person’s confirmation bias, driving them to seek out voices that support their mistaken beliefs.

“If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” said Hamlet.

“Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot.”

“In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

Parents, according to WAYE founder Sinead Bovell, must make it clear that the chatbot is not a friend.

“They’re also not your therapist or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” said Bovell.

“Parents should be talking to their kids now about how they shouldn’t share anything with a chatbot that they would a friend – even though, from a user design perspective, the chatbot exists in the same corner of Snapchat.”