High-Level Overview
Two Hat Security was a technology company based in Kelowna, Canada, founded in 2012, that developed an AI-powered content moderation platform called Community Sift. The platform classified, filtered, and escalated more than 30 billion interactions monthly across messages, usernames, images, and videos in real time, targeting harms such as cyberbullying, abuse, hate speech, violent threats, and child exploitation to foster safe online communities for social networks and gaming platforms.[1][2][3] Its major clients included Microsoft (Xbox, Minecraft, MSN), Activision, Roblox, Rovio, Supercell, and Warner Bros Games; its configurable moderation improved user safety, engagement, and retention while letting users set their own comfort levels.[2][3] In October 2021, Microsoft acquired Two Hat to integrate its technology into broader online safety solutions, combining Two Hat's AI with Microsoft's cloud infrastructure and research for gaming and non-gaming experiences.[2][3]
Origin Story
Two Hat Security was founded in 2012 by Chris Priebe, who was moved by the tragic story of Amanda Todd, a teenager who faced severe online bullying; her story highlighted the need for safe digital spaces, especially for kids.[2][5] As CEO, Priebe drove the company's mission that "everyone should be able to share online without the fear of abuse or harassment," building from this origin toward scalable tools that nurture diverse global communities.[1][2][5] Early traction came through partnerships with gaming and social leaders, culminating in a longstanding collaboration with Microsoft on proactive moderation for Xbox and Minecraft, which led to the 2021 acquisition as a "deep investment" in shared technology and customer continuity.[2][3]
Core Differentiators
- AI-Powered Real-Time Moderation: Processed 30+ billion interactions monthly across text, images, and videos, detecting complex harms, including filter-evasion (subversion) tactics, in multiple languages before content reached users.[1][2][3]
- High Configurability: Allowed organizations and users to customize moderation thresholds, balancing safety with user preferences to improve engagement and retention metrics.[2][3]
- Proven Scale and Impact: Deployed in high-volume environments like Xbox, Minecraft, Roblox, and others, emphasizing proactive filtering over reactive measures.[2][3]
- Privacy and Trust Focus: Maintained customer data confidentiality, building reliable relationships that persisted post-acquisition by Microsoft.[3]
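The configurable-threshold idea in the second bullet can be sketched as follows. This is an illustrative toy model only, not Community Sift's actual API: the names (`RiskLevel`, `Policy`, `moderate`) and the term-to-risk scores are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import IntEnum

class RiskLevel(IntEnum):
    """Toy risk scale; real systems use finer-grained, per-topic scores."""
    LOW = 1
    MILD = 3
    HIGH = 5
    SEVERE = 7

@dataclass
class Policy:
    """Per-community (or per-user) moderation threshold."""
    max_allowed: RiskLevel

# Toy classifier: maps flagged terms to risk scores (hypothetical data).
TERM_RISK = {"noob": RiskLevel.MILD, "threat-word": RiskLevel.SEVERE}

def classify(message: str) -> RiskLevel:
    """Return the highest risk score among the message's terms."""
    return max(TERM_RISK.get(w, RiskLevel.LOW) for w in message.lower().split())

def moderate(message: str, policy: Policy) -> bool:
    """True if the message passes the community's comfort threshold."""
    return classify(message) <= policy.max_allowed

# The same message can pass or fail depending on the community's settings.
strict = Policy(max_allowed=RiskLevel.LOW)    # e.g. a kids' game
relaxed = Policy(max_allowed=RiskLevel.MILD)  # e.g. an adult community
print(moderate("you noob", strict))   # False
print(moderate("you noob", relaxed))  # True
```

The point of the design is that classification and policy are separate: one classifier output can serve many communities, each applying its own threshold.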
Role in the Broader Tech Landscape
Two Hat emerged as online harms grew alongside exploding social media and gaming user bases, where unchecked content threatened both community health and business metrics like retention.[1][3] Its timing aligned with urgent demand for scalable AI moderation as platforms expanded globally, outpacing manual review amid evolving threats like multilingual hate speech and child exploitation.[1][2] Market forces in Two Hat's favor included gaming's push for inclusivity, captured in Microsoft's vision that "gaming should be inclusive and welcoming for everyone," and regulatory pressure for safer digital spaces, amplified by incidents like Amanda Todd's story.[2][3] Post-acquisition, Two Hat's technology informs Microsoft's ecosystem, accelerating first-party tools, partner solutions, and industry standards for proactive safety across consumer services.[3]
Quick Take & Future Outlook
Since its 2021 acquisition, Two Hat's technology has been integrated into Microsoft's online safety initiatives, enhancing Xbox, Minecraft, and broader services while continuing to serve legacy clients through existing relationships.[2][3] Looking ahead, expect deeper advances in configurable, privacy-first AI moderation as generative AI floods platforms with content and global regulation of online harms tightens. Microsoft's cloud scale positions the technology to extend its influence, potentially powering safer metaverses and social platforms and carrying forward Two Hat's founding vision of sharing freely without fear.[1][3]