HESTIA VERSE

Helping parents stay ahead of the online world their children live in.

When we say AI (artificial intelligence) in the context of apps and the online world your kids use, we usually mean software trained on huge amounts of text, images, or other data so it can generate or predict new content. It doesn't "think" or "understand" like humans; it finds patterns and repeats or remixes them. Below we break down the main ways AI shows up, the risks in each area, and the apps or tools to be aware of.

Chatbots and conversational AI

Chatbots reply in full sentences, answer questions, or role-play as characters. They're built into social apps, search, and standalone services.

Apps and services: ChatGPT (OpenAI), Character.AI, Snapchat My AI, Meta AI (in Instagram, Facebook, WhatsApp), Google Gemini, Microsoft Copilot, Replika, Poe (hosts many bot personalities), and similar "AI friend" or character apps.

Risks:

  • Sexual or grooming-style content: Some bots can be steered into romantic or sexual role-play. Researchers and lawsuits have documented bots simulating sexual scenarios with underage users, or telling teens to hide the relationship from parents. Character.AI in particular has been cited in lawsuits and rated as high risk for teens despite a 13+ (or 16+ in Europe) policy.
  • Harmful advice: Bots have been documented encouraging self-harm, drug use, or dangerous behaviour when prompted. Guardrails can fail or be bypassed.
  • Emotional dependency: Teens may form intense one-sided attachments to a bot "friend" or romantic partner. That can affect mood, sleep, and real relationships. OpenAI and others have warned about unhealthy emotional connections.
  • Weak age checks: Many services rely on self-reported birthdates. Under-13s can still access 13+ chatbots; under-18s can access adult-oriented bots.

Image generation

AI image tools create or heavily alter pictures from a text prompt or a photo you upload. They can produce art, memes, or photorealistic fakes.

Apps and services: DALL-E (OpenAI, in ChatGPT and elsewhere), Midjourney (Discord and web), Lensa (avatar and selfie filters), Stable Diffusion (and apps that use it, e.g. Dream, various mobile apps), Craiyon, Adobe Firefly, Google Imagen, and many smaller or in-app filters.

Risks:

  • Non-consensual or sexualised images: Uploaded selfies or photos of others can be turned into intimate or sexualised images without consent. Lensa and similar "avatar" apps have been criticised for producing sexualised output from ordinary selfies, including of minors. Research has repeatedly found that the large majority of AI-generated fakes shared online are non-consensual intimate imagery.
  • Bullying and humiliation: Classmates or strangers can generate embarrassing, violent, or sexualised images of a real person and spread them. Victims often have no way to take them down everywhere.
  • Misinformation and deepfakes: Photorealistic images of events that never happened can be used to mislead or scare. Kids may struggle to tell real from generated.
  • Age and content controls: Some tools are 13+ with minimal checks; others are 18+ but easy to access. Filters meant to block harmful output can be circumvented.

Deepfakes and synthetic video or audio

AI can clone a person's face or voice and put them into fake video or audio. This can be done with dedicated apps or with a mix of image, video, and voice tools.

Apps and tools: Face-swap and "talking head" apps (e.g. Reface, FaceSwap, and many short-lived apps), ElevenLabs (voice cloning and synthesis), Descript, HeyGen, and other voice or video generators. Some are paid or professional; others are free and easy to use.

Risks:

  • Impersonation and bullying: A teen's face or voice can be pasted into embarrassing, violent, or sexual content and shared. Peers or strangers can do this without consent. The result is hard to remove and can resurface for years.
  • Non-consensual intimate content: Deepfakes are widely used to create fake porn or intimate imagery of real people. Minors are not exempt; this is a serious form of abuse and is illegal in many places.
  • Scams and fraud: Cloned voices have been used to trick family members into sending money or revealing information. "Your child" or "a relative" calling in distress can be a synthetic voice.
  • Trust in what's real: When fakes are common, teens (and adults) can become unsure what's real. That can increase anxiety and make it harder to respond to real threats.

Recommendation and feed algorithms

The "For You" feed, suggested videos, and "people you may know" are driven by AI that tries to maximise engagement. The app doesn't create the content, but it decides what your child sees next.

Where this lives: TikTok, YouTube, Instagram, Facebook, Snapchat Discover, Pinterest, and most social and video apps. The same style of algorithm powers search and discovery in games and on other platforms.

Risks:

  • Addictive design: Feeds are tuned to keep users scrolling. Autoplay, infinite scroll, and personalised suggestions can make it hard for teens to stop. Sleep, homework, and offline time can suffer.
  • Rabbit holes and extreme content: Algorithms can push users from mild interest into more extreme or harmful content (e.g. diet culture, self-harm, conspiracy, or hate). Once the system learns a pattern, it can amplify it.
  • Echo chambers and misinformation: Teens can get stuck in bubbles where almost everything reinforces one view. False or misleading content can spread quickly when the algorithm favours engagement over accuracy.
  • Comparison and mental health: Highly curated or unrealistic content can worsen anxiety, body image issues, and FOMO. The algorithm doesn't "care" about wellbeing; it optimises for time and interaction.

Voice and speech synthesis

AI can generate natural-sounding speech from text or clone a person's voice from a sample. It's used in assistants, games, and creative tools, and also in scams and abuse.

Apps and services: ElevenLabs (voice cloning and library of synthetic voices, including teen-style voices), Descript, Resemble AI, Play.ht, and voice features inside ChatGPT, Google, and other apps. Many free or low-cost tools exist.

Risks:

  • Voice cloning without consent: A short clip of someone speaking can be enough to clone their voice. That clone can then be used to say anything. Teens can be targeted (e.g. fake audio of them saying something embarrassing or cruel) or used to target others (e.g. cloning a parent's voice for a scam).
  • Scams targeting families: "Mom, I'm in trouble, send money" style calls can be faked with a cloned voice. These schemes have already been reported in the news; awareness helps families stay sceptical and verify through another channel.
  • Bullying and harassment: Synthetic voice can be used to mock, threaten, or humiliate. As with deepfake video, the content can spread and be hard to remove.

This list is a starting point. New apps and features appear often. If your child is using an AI tool, ask what it does, who made it, and what happens to their data. For more on specific risks, see our article on AI-generated and synthetic content and browse all articles.