AI Safety Research

Multi-Agent AI Safety for Democracy

A nonprofit research organization led by Prof. Zhijing Jin, aiming to advance AI safety and security through rigorous research, threat assessment, and mitigation strategies.

Our Mission

AI systems are rapidly becoming central to economic infrastructure and high-stakes decision-making.

As multi-agent AI systems interact in markets, supply chains, and critical services, they create complex emergent dynamics that are difficult to predict or control. Our research focuses on identifying and mitigating these systemic risks to ensure AI deployment remains safe and aligned with human values as these systems scale.

About Us

EuroSafeAI focuses on research in the areas of AI safety, security, and multi-agent systems.

We advance AI safety and security by developing risk assessments and mitigation strategies, targeting scenarios where AI systems may act contrary to developer intent. We value curiosity, ethics, and a proactive, responsible mindset.

Research Focus

Our Three Pillars

We focus on critical areas of AI safety to ensure advanced systems remain beneficial and aligned with human values.


General AI Safety Research

Our broader agenda includes safeguarding LLMs against harmful use through post-training, interpreting model behavior via internal states, and detecting tendencies toward model misalignment.

Collaborations

Our Collaborators

We collaborate with leading research labs, governments, and foundations to advance AI safety on a global scale.

University of Toronto · ETH Zurich · Max Planck Institute for Intelligent Systems · UK AISI · University of Michigan · Schwartz Reisman Institute, University of Toronto

Join Our Mission

We're looking for talented researchers and professionals who are passionate about ensuring AI benefits humanity.