About us
AI Safety Amsterdam (AISA) is committed to mitigating the risks associated with Artificial Intelligence. In a rapidly evolving technological landscape, our mission is to ensure that AI development proceeds with the highest standards of safety and ethics.
Affiliated with the University of Amsterdam, AISA serves as an interdisciplinary platform that brings together researchers, students, and professionals. Our diverse team is united by a shared commitment to making AI safer for everyone.
Our people

Andreea Ioana Chivu
Director, Operations Strategy
I am passionate about the societal impact of AI, and I feel a responsibility to help prevent catastrophic outcomes, especially because I am aware of the risks and of how AI systems are deployed. I strive to develop safe AI systems and to spread awareness to as many people as possible, encouraging a sense of collective responsibility.

Julia Belloni
Co-director, Research Strategy
I want to raise awareness about the risks arising from emerging technologies such as AI, and I am committed to empowering people to act. I am currently pursuing a Master's in AI, and I am interested in further exploring the failure modes of AI models.

Ana Paula Castillo
Co-director, Communication Strategy
I believe AI Safety requires interdisciplinary solutions, and community mobilisation is key. People deserve to feel empowered, and I want to push back against inevitability narratives that discourage collective agency. My background in marketing, communication and behavioural science inspires me to build informed, collaborative communities that can take meaningful action now.

Patrik Bartak
Advisor / Researcher
These days I am primarily focused on technical research, but in the past I coordinated the AI safety initiative in Delft, including facilitating reading groups and the AGI Safety Fundamentals (AGISF) course. I draw on this experience to support Amsterdam's AI safety initiative.
Our activities
- AI Safety Fundamentals: We organize an AI Safety Fundamentals course; sign up here!
- Hackathons: Join us for virtual or local hackathons!
- Talks: Presentations from speakers working on state-of-the-art AI safety research.
- MechInterp Research Group: A small team of MSc AI students works on mechanistic interpretability research, supervised by Leonard Bereska.
- Master's Thesis: We supervise your master's thesis on AI Safety topics. Contact us for more information.
- Weekly Lunch: Join us for our weekly lunch at Cafe Neo!
