Responsible AI Usage Policy
Updated: October 14, 2025
- Purpose
MegaMinds uses artificial intelligence (AI) to enhance learning experiences while protecting the rights, safety, and privacy of all users. This policy explains how we use AI, the principles guiding its use, and the safeguards we have in place.
- How We Use AI
We use AI in the following ways:
- Speech-to-Speech AI NPCs in 3D: Students can interact with non-player characters (NPCs) that use AI to understand speech and respond in real time, enabling immersive learning scenarios.
- Activity Insights for Teachers: Transcripts and other data from student interactions are securely processed by an AI language model to generate summaries and insights for teachers, helping them track progress and engagement. For a full overview, see our Student Data Info Document.
- Principles of Responsible AI Use
We follow these core principles:
- Curated Prompts & Guardrails: Prompts are curated and guard-railed by the MegaMinds team (with the rare exception of teachers who create their own content). Our content creation team carefully reviews the interactions and guardrails for each activity to keep conversations focused on its learning goal. In our experience, word filters and guardrails built around censorship are easily broken. Defining clear goals for each activity allows our algorithms to quickly flag off-topic conversation, rather than attempting to flag everything considered “bad”.
- Privacy & Data Protection: We collect only the data necessary for educational purposes. Personal information is handled in compliance with applicable data protection laws (e.g., GDPR, COPPA, FERPA). No student data is ever used to train, fine-tune, or otherwise improve any external or internal large language models (LLMs).
- Security: All AI data processing occurs through secure, encrypted channels.
- Bias Awareness: We monitor AI outputs for potential bias or inaccuracies and continually improve our models and processes.
- Human Oversight: AI-generated insights are advisory. Teachers remain the final decision-makers in assessing student progress.
- Transparency: Students and teachers are informed when they are interacting with or receiving output from an AI system.
- Age Appropriateness: AI interactions are designed to be safe and suitable for the intended student age group.
- What We Don’t Do
- We do not use AI to make automated decisions that directly affect a student’s grades or opportunities without human review. Insights generated by AI should always be considered with its limitations in mind and never taken as fact.
- We do not sell or share student data with advertisers or unrelated third parties.
- We do not use AI to profile students for non-educational purposes.
- Accountability & Feedback
We regularly audit AI systems for quality, accuracy, and safety. Teachers, students, and guardians are encouraged to report any concerns or unexpected AI behavior to edvard@gomegaminds.com, with tech@gomegaminds.com in copy.
- Updates to This Policy
We may update this policy as our AI technology or regulations evolve. We will notify users of significant changes in advance.