Artificial intelligence has moved from science fiction to everyday life faster than most of us expected. Your children are using AI-powered homework helpers, your smart speaker is processing voice commands through AI, and your email client is using AI to write suggested replies. These tools are genuinely useful — but they come with privacy trade-offs that most people never think about.

A few numbers show the scale of the shift: three in four teens have used AI companion chatbots, AI voice-cloning scams surged 148% in 2025, and reports of AI-generated CSAM rose 1,300%.

This isn't a guide about whether AI is "good" or "bad." It's a practical look at how AI tools handle your family's data, where the real risks are, and what you can do about them without giving up the convenience.

How AI tools use your data

When you type a question into ChatGPT, dictate a note to Siri, or let Google's AI summarise your emails, that data goes somewhere. Understanding where is the first step to managing the risk.

The training question

Many AI companies use conversations with their tools to improve future models. This means the question your child asks about their homework, the business strategy you brainstorm, or the medical symptoms you describe may be retained and used in training data.

The good news: most major providers now offer opt-out options. OpenAI (ChatGPT), Google (Gemini), and Anthropic (Claude) all provide settings or data controls that prevent your conversations from being used for training. But training is typically enabled by default, so you have to actively opt out.

What gets stored

Even when training is disabled, most AI services store conversation logs for some period — typically 30 days — for safety monitoring and abuse prevention. During this window, your conversations exist on the provider's servers. For most personal use, this is a low risk. For sensitive business discussions or personal information, it's worth noting.

AI and children: specific risks

Children are adopting AI tools faster than any other age group, often without understanding what they're sharing or with whom.

Personal information in prompts

Children frequently share personal details without thinking: "Write an essay about my school, St Mary's College in Richmond," or "Help me write a birthday message for my friend Sarah who lives at 42 Elm Street." These details become part of the conversation log and potentially part of training data.

What to teach: Treat AI chat tools like a conversation with a stranger in public. Don't share your full name, school name, address, phone number, or friends' personal details. Use first names only and avoid location specifics.

AI companions and chatbots

A growing number of apps market themselves as AI "friends" or companions for children and teenagers. Nearly three in four teens have used an AI companion chatbot. These apps encourage extended personal conversations and can collect significant amounts of emotional and personal data. Several have been found to have inadequate safety measures, with children receiving inappropriate responses or being encouraged to share increasingly personal information.

What to do: Review any AI companion apps your children use. Check the privacy policy for data retention and sharing practices. If the app encourages personal disclosure and doesn't have robust content filtering, consider removing it.

Homework and original work

Beyond privacy, there's a practical concern: children who rely on AI to write their essays, solve their maths problems, or complete their assignments aren't developing the skills those assignments are designed to build. Many schools now use AI detection tools, and submitting AI-generated work can have academic consequences.

A reasonable approach: AI is a useful learning tool when used for explanation, not generation. "Explain how photosynthesis works" is a learning prompt. "Write my essay about photosynthesis" is not. Help your children understand the difference.

Deepfakes and synthetic media

AI can now generate realistic images, audio, and video of real people. This technology has legitimate uses in entertainment and creative industries, but it also creates new risks:

  - Voice-cloning scams, in which a scammer uses a short audio sample to mimic a family member's voice in a distress call, surged 148% in 2025.
  - Synthetic images and video can place a real person, including a child, in fabricated scenes; reports of AI-generated CSAM rose 1,300%.
  - A handful of public photos and a few seconds of recorded audio can be enough source material for a convincing fake.

Protecting against deepfakes

The most effective defences are low-tech. Agree a family code word for verifying unexpected or distressed phone calls, and share it in person rather than over any digital channel. Be selective about posting clear photos and videos of family members, especially children, since public images are the raw material for synthetic media. And if a call or video feels off, hang up and call back on a number you know is real.

Smart home devices and AI assistants

Alexa, Google Home, Siri — these devices listen for a wake word and then process your voice command via cloud servers. The privacy implications are often overlooked because the devices are so convenient.

AI in business: what executives should know

If you're using AI tools for work — drafting emails, analysing data, summarising documents — the data you share may be subject to your company's data handling policies and potentially exposed to third parties. Check whether your employer maintains a list of approved AI tools: consumer-tier accounts generally lack the contractual data protections and retention controls of enterprise plans, and pasting client or confidential information into them can breach confidentiality obligations.

A practical family AI policy

You don't need a formal document, but having a shared understanding within your household makes a real difference. Here's a starting framework:

  1. No personal details in AI chats. Don't share full names, addresses, school names, phone numbers, or financial information with AI tools. If you need to discuss something specific, use generic terms.
  2. Opt out of training data. Go into the settings of ChatGPT, Gemini, and any other AI tools your family uses. Turn off the option that allows your conversations to be used for model improvement. In ChatGPT, parents can link their account to their teen's account to set "quiet hours" and disable features like memory and image generation.
  3. Verify before you trust. AI tools make confident-sounding mistakes. Teach children (and remind yourself) to double-check important facts from AI with a reliable source.
  4. Review your children's AI apps. Look at what's installed, what data it collects, and whether it encourages personal disclosure. Remove anything that doesn't have clear safety measures.
  5. Use the code word protocol. Establish a family verification word for phone calls. Share it in person, never digitally. Update it periodically.
  6. Keep photos limited. Be intentional about which photos of your family — especially children — you share publicly. Fewer clear face photos online means less material for potential misuse.

AI isn't going away, and banning it outright isn't practical for most families. The better approach is to use these tools with awareness — knowing what you're sharing, who can see it, and where the risks are. Like any technology, the people who use it thoughtfully get the benefits without the worst of the downsides.