What I do
I am a researcher at TNO's Human-Machine Teaming department and am affiliated with the Hybrid Intelligence Centre. I research, design, and engineer interactions between humans and AI systems so that they achieve their tasks effectively, safely, and responsibly. To do so, I explore how AI systems can augment humans and how humans can augment AI systems.
My main topic of interest is explainable AI: in particular, what an AI system needs to explain and how it can do so. I believe that explanations are a key enabler of collaboration between humans and AI systems. They aid engineers in understanding the systems they build, help users understand a system's capabilities, and help the AI systems themselves provide more effective support. In my research, these collaborations occur in high-risk contexts, which sets a high standard for explanations. A wrong explanation, whether in content or timing, may have severe consequences, whereas the right explanation may prevent them and perhaps even allow human and AI system to achieve a better outcome.
I lead and advise several projects on explainable AI to enable effective human-AI collaboration. Furthermore, I lead a research program on AI Safety in which we develop ways to responsibly design and develop AI systems for high-risk applications. Within the Hybrid Intelligence Centre I lead the lab on applied hybrid intelligence research. My activities include public speaking, guest lectures, maintaining and leading several open-source projects, and contributing to relevant conferences and journals. I received my PhD cum laude from the Delft University of Technology on explainable AI and won TNO's Excellent Researcher award for combining academic research with applied science.