Google is set to introduce its AI chatbot, Gemini, to children under 13, marking a significant shift in how young users interact with artificial intelligence. The rollout, expected next week, will allow kids with parent-managed Google accounts to access Gemini through the Family Link platform.
The company has positioned Gemini as an educational tool, capable of assisting with schoolwork, answering questions, and even helping children craft imaginative stories. To set up access, parents will need to provide their child’s details, including name and birthdate, so the chatbot can deliver an age-appropriate experience.
Google has emphasized that strict safety measures are in place to prevent Gemini from generating harmful or inappropriate content. Additionally, the company has assured parents that children’s data collected through Family Link accounts will not be used to train the AI model.
Despite these safeguards, concerns remain among experts and parents. Child advocacy groups, including UNICEF, have warned that AI chatbots could confuse young users by blurring the line between machine-generated responses and human guidance. Some experts also fear that children may become overly dependent on AI interactions, potentially impairing their ability to form real-world relationships.
Google acknowledges these risks and has urged parents to discuss AI’s limitations openly with their children. The company advises parents to remind kids that Gemini is not a human and that they should avoid sharing personal or sensitive information with the chatbot.
As Google moves forward with its plans, the tech industry and child safety advocates will be watching closely to see whether Gemini becomes a valuable educational companion or a source of unintended consequences.
