AI Safety Research
Multi-Agent AI Safety for Democracy
Our Mission
AI systems are rapidly becoming central to economic infrastructure and high-stakes decision-making.
As multi-agent AI systems interact in markets, supply chains, and critical services, they create complex emergent dynamics that are difficult to predict or control. Our research focuses on identifying and mitigating these systemic risks to ensure AI deployment remains safe and aligned with human values as these systems scale.
About Us
EuroSafeAI focuses on research in the areas of AI safety, security, and multi-agent systems.
We advance AI safety and security by developing risk assessments and mitigation strategies. We target scenarios where AI systems may act contrary to developer intent. We value curiosity, ethics, and a proactive, responsible mindset.
Research Focus
Our Three Pillars
We focus on critical areas of AI safety to ensure advanced systems remain beneficial and aligned with human values.
General AI Safety Research
Our broader agenda includes safeguarding LLMs against harmful use through post-training defenses, interpreting model behavior via internal states, and detecting tendencies toward misalignment.
Join Our Mission
We're looking for talented researchers and professionals who are passionate about ensuring AI benefits humanity.