
WHAT WE DO
A community of technology leaders & practitioners.
Building a community of software developers is essential in the fight against digital hate and antisemitism because these challenges are too vast and complex to tackle alone. By bringing developers together, we create a collaborative force capable of designing smarter, more effective tools—whether it’s AI-powered content moderation, decentralized reporting systems, or open-source solutions that empower communities to combat misinformation. A united network of developers can share knowledge, refine algorithms, and build scalable platforms that detect, prevent, and counteract hate speech in real time. This collective effort not only enhances the reach and impact of individual projects but also ensures that technology remains a force for good, fostering safer, more inclusive digital spaces across the internet.
Incubating Innovative Technologies to Address Hate.
New technologies like GraphDB allow us to model hate not just as isolated posts, but as interconnected processes, revealing patterns in how hate speech emerges, spreads, and evolves over time. Unlike traditional databases, which treat content as discrete entries, graph databases map relationships between users, keywords, platforms, and influence networks, offering a dynamic, contextual view of digital hate. By analyzing these connections, we can uncover hidden pathways of radicalization, identify coordinated disinformation campaigns, and detect emerging threats before they escalate. This predictive modeling gives us a crucial advantage—not just reacting to hate after it goes viral, but intervening early, disrupting harmful narratives, and proactively mitigating digital toxicity before it takes hold.
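As a minimal illustration of this relationship-centric view, the sketch below models shared posts as edges between accounts and content, then surfaces account pairs that repeatedly amplify the same material—one simple signal of possible coordination. This is a toy example in plain Python, not our production system; the account names, post IDs, and threshold are hypothetical stand-ins for the kinds of nodes and edges a graph database would hold.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy data: (account, shared_post_id) edges. In a graph
# database these would be relationships between account and content nodes.
shares = [
    ("acct_a", "post_1"), ("acct_b", "post_1"), ("acct_c", "post_1"),
    ("acct_a", "post_2"), ("acct_b", "post_2"),
    ("acct_d", "post_3"),
]

def coordinated_pairs(shares, min_shared=2):
    """Return account pairs that shared at least `min_shared` of the
    same posts -- a simple signal of potential coordination."""
    posts_by_account = defaultdict(set)
    for account, post in shares:
        posts_by_account[account].add(post)
    pairs = {}
    for a, b in combinations(sorted(posts_by_account), 2):
        overlap = posts_by_account[a] & posts_by_account[b]
        if len(overlap) >= min_shared:
            pairs[(a, b)] = overlap
    return pairs

print(coordinated_pairs(shares))
```

In a real deployment the same question would be a single path query over a graph database rather than an in-memory scan, which is what makes the relationship model scale to platform-sized influence networks.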
Thought leadership.
Thought leadership is critical, especially as a technology-first organization, because it allows us to shape the conversation, set industry standards, and drive innovation in the fight against digital hate and antisemitism. In a rapidly evolving tech landscape, merely reacting to problems isn’t enough—we must anticipate challenges, pioneer new solutions, and inspire others to follow our lead. By sharing insights on cutting-edge technologies like AI, GraphDB, and decentralized moderation, we position ourselves as experts and visionaries, influencing policymakers, developers, and platforms to adopt more effective strategies. Thought leadership not only builds credibility and trust but also ensures that we remain at the forefront of ethical, scalable solutions, mobilizing the tech community to create a safer, more inclusive digital world.
Deprogramming Hate.
Recent studies from MIT and the Shoah Foundation reveal that hate is not an unchangeable mindset—it can be deprogrammed through targeted interventions, education, and exposure to counter-narratives. These findings are groundbreaking because they prove that people can unlearn hate, especially when confronted with personalized, data-driven content that challenges their biases. With modern AI technology, we can scale these interventions by identifying individuals at risk of radicalization, analyzing their engagement patterns, and delivering tailored, de-escalatory content at critical moments. Machine learning models can adapt in real time, reinforcing messages of empathy and truth while disrupting harmful echo chambers. By combining behavioral science with AI, we now have the potential to not only detect hate but actively reverse it, transforming the digital landscape into a space of education, understanding, and deradicalization at scale.
Empowering Leaders, Institutions, and Policymakers.
It is essential for technologists to actively engage with policymakers and leaders to bridge the gap between technological capability and regulatory action in the fight against digital hate. We consult for non-profits, institutions, and philanthropic organizations investing in their own technology ventures and infrastructure, helping them allocate funds, or funds of funds, strategically and impactfully as they develop their organizations. Too often, legislation lags behind innovation, leaving decision-makers without the technical understanding needed to craft effective, enforceable policies. By building strong partnerships, we can educate leaders on what AI, data science, and emerging technologies can truly achieve, demonstrating how they can be leveraged to detect, predict, and disrupt hate speech at scale. Collaboration ensures that policies are grounded in technological reality, balancing innovation, ethics, and enforcement to create a more proactive, rather than reactive, approach to stopping the spread of online hate.
Empowering Educators.
Addressing hate with technology at scale is not just about protecting the general population—it’s about safeguarding our children, who are among the most vulnerable to online toxicity. Digital hate infiltrates the platforms where young people spend their time, from social media and gaming communities to chat forums and video content, shaping their worldviews before they have the tools to critically assess misinformation and extremist narratives. By deploying AI-driven moderation, predictive analytics, and real-time intervention systems, we can create safer digital spaces that shield children from radicalization, cyberbullying, and exposure to harmful ideologies. A scalable technological approach ensures that no child is left unprotected, fostering an internet that encourages education, curiosity, and inclusivity rather than division and fear.