

I am a researcher on human-AI interaction who is motivated to improve the world we live in. I love to learn, innovate and discover, and try to do so every day.

What I do

I am a researcher at TNO's department Human-Machine Teaming and affiliated with the Hybrid Intelligence Centre. I research, design and engineer interactions between humans and AI systems such that they achieve their tasks effectively, safely and responsibly. To do so, I search for ways in which AI systems can augment humans and humans can augment AI systems.

My main topic of interest is explainable AI. In particular, what an AI system needs to explain and how it can do so. I believe that explanations are a key enabler of collaboration between humans and AI systems. They aid engineers in understanding the systems they build, they help users understand a system's capabilities, and they help the AI systems themselves to provide more effective support. In my research, these collaborations occur in high-risk contexts, which sets a high standard for explanations. A wrong explanation, whether in content or timing, may result in severe consequences. The right explanation, on the other hand, may prevent such consequences and perhaps even allow the human and AI system to achieve a better outcome.

I lead and advise several projects on explainable AI to enable effective human-AI collaborations. Furthermore, I lead a research program on AI safety, where we develop ways to responsibly design and develop AI systems for high-risk applications. Within the Hybrid Intelligence Centre I lead the lab on applied hybrid intelligence research. My activities include public speaking, guest lectures, maintaining and leading several open source projects, and contributing to relevant conferences and journals. I received my PhD cum laude from Delft University of Technology on explainable AI and won TNO's Excellent Researcher award for combining academic research with applied science.

Here you can read more about my explainable AI research, my work at TNO, and the open source and community projects I am involved in.

Read more

  • XAI for human-AI collaboration

    As a sufficient understanding is needed for humans to use AI systems responsibly and collaborate with them effectively, we need to research what explanations an AI should provide to support this.

    XAI research
  • Community and open source projects

    To ensure AI systems are applied and used responsibly and effectively, I believe that research and industry communities should be brought together.

    Personal projects
  • Research at TNO

    At TNO I lead several projects on the topics of human-AI interaction, explainable AI and AI safety research.

    TNO research