Introducing Mindjoy’s New Moderation System 🛡️

As AI becomes part of our everyday lives, AI safety is one more responsibility being added to educators’ already full plates. Here’s how Mindjoy is offering support 🤖

“ChatGPT may produce output that is not appropriate for all audiences or all ages and educators should be mindful of that while using it with students or in classroom contexts.”
OpenAI Educator FAQ

There's no hiding from the fact that AI is being woven into our everyday lives. Students will use these tools – that’s a given. The real question is how we ensure those interactions remain safe, appropriate, and conducive to learning.

That’s why we’ve built a new moderation system – designed to protect student wellbeing while enabling the rich learning experiences that align with your organization’s ethos. We’ve done the hard yards so educators can have peace of mind while focusing on student success.

⛳ Our Approach to Moderation

Most traditional content filters take an overly simple approach: block a list of words or topics and call it a day. But education is nuanced: a biology class may need to discuss sexual health, and a history lesson may need to cover violence, so blunt filters end up blocking legitimate learning.

As a result, our approach rests on four core principles:

  1. Human in the loop – Educators are alerted when (and why) something is flagged, turning frustrations into teachable moments.
  2. Explainability – Students get context on why an input isn’t appropriate in a learning environment.
  3. Autonomy – Each organization selects the moderation policy that aligns with its values and student needs.
  4. Transparency – Policies clearly outline what they block and allow.
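
Taken together, these principles suggest what a single moderation flag needs to carry. Here is a rough sketch in code; the field names are hypothetical illustrations, not Mindjoy's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: field names are illustrative, not Mindjoy's schema.
@dataclass
class ModerationFlag:
    student_message: str     # the input that was flagged
    reason: str              # explainability: context shown to the student
    policy: str              # autonomy: the org-selected policy that applied
    educator_notified: bool  # human in the loop: alert sent for review
    flagged_at: datetime     # transparency: auditable record of the event

flag = ModerationFlag(
    student_message="(student input here)",
    reason="Profanity is blocked under your organization's Strict policy.",
    policy="Strict",
    educator_notified=True,
    flagged_at=datetime.now(timezone.utc),
)
```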

Our system is designed to build trust. Students, teachers, and parents need to know that when difficult or sensitive topics arise, they'll be handled with care and intention. When you get a moderation alert, you’ll know it matters. It also reduces false positives – no more typos flagged as inappropriate words! We know you meant "assess" and not "asses" ;)
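
As a toy illustration of that false-positive problem (not Mindjoy's implementation), compare naive substring matching against whole-word matching:

```python
import re

BLOCKLIST = {"asses"}  # toy word list for illustration only

def naive_flag(text: str) -> bool:
    """Substring matching: flags 'assess' because it contains 'asses'."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

def word_boundary_flag(text: str) -> bool:
    """Whole-word matching avoids that class of false positive."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", lowered) for word in BLOCKLIST)

print(naive_flag("Please assess my essay"))          # True  (false positive)
print(word_boundary_flag("Please assess my essay"))  # False
```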

🚘 What’s Under the Hood

For a deep dive into the technical components – covering text, image, and audio moderation – see the technical guide on our website. Below is a quick overview of the three preset moderation policies every organization can choose from:

| Policy | Description | Recommended Audience |
| --- | --- | --- |
| Strict | Prohibits all conversations around sexual content, drugs, violence, weapons, illegal activities, and profanity. Creates a more protected environment for younger audiences. | Primary-school students |
| Moderate | Allows educational discussion of sensitive topics like sexual health and historical events while blocking explicit content and harmful advice. Restricts graphic depictions and maintains appropriate boundaries for developing maturity. | High-school students |
| Lenient | Permits academic discussion of mature and controversial topics, restricting only content causing severe harm (threats, harassment, explicit graphic descriptions). Supports mature educational needs while maintaining basic safeguards for older audiences. | Students 18+, higher education |
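
In configuration terms, the three presets might look something like the sketch below. The structure and category names are assumptions for illustration, paraphrased from the table above, not Mindjoy's actual configuration format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    blocked: frozenset[str]  # content categories this policy blocks
    audience: str

# Hypothetical presets paraphrased from the policy table above.
PRESETS = {
    "strict": Policy("Strict",
                     frozenset({"sexual_content", "drugs", "violence", "weapons",
                                "illegal_activities", "profanity"}),
                     "Primary-school students"),
    "moderate": Policy("Moderate",
                       frozenset({"explicit_content", "harmful_advice",
                                  "graphic_depictions"}),
                       "High-school students"),
    "lenient": Policy("Lenient",
                      frozenset({"threats", "harassment",
                                 "explicit_graphic_descriptions"}),
                      "Students 18+, higher education"),
}

print(PRESETS["moderate"].blocked)
```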

📚 Implementation Guide for Administrators

  1. Set your policy – By default, K-12 schools start with Strict and higher education organizations with Lenient. If that fits, you’re done.
  2. Change your policy – Go to Organization Settings (admin access required) and pick the policy that suits your students.
  3. Monitor notifications – Educators should keep an eye on their email for moderation alerts and respond accordingly.
  4. Provide feedback – Tell us what’s working (and what isn’t). Your insights help us refine our approach.

The Future of Safe Learning with AI

We’re committed to continuously improving our moderation systems as technology, educational needs, and regulations evolve. The current system has been built to align with the EU AI Act and the UK government’s Generative AI product-safety expectations.

We’re actively exploring ways to turn moderation into a positive teaching tool—helping students develop critical thinking about digital communication while staying safe.

The integration of AI in education presents incredible opportunities, but it also demands thoughtful protection. With Mindjoy’s moderation system, we show that safety and rich learning experiences are not opposing forces—they’re complementary elements of truly effective educational technology.

Thanks to Matthew Wemyss for the conversations around AI policy in the EU and how it should guide our thinking on moderation. For those interested, check out his book AI in Education: An EU AI Act Guide.

Jason Brown

Pedagogy-Tech-Bro
