TNO research

At TNO I participate in and lead several projects in the domain of human–AI interaction.

Within TNO I work in the department of Human–Machine Teaming, in the team Human–Agent/Robot Teaming. My research focuses on extending AI systems with the capabilities they need to collaborate effectively and safely with everyone involved. I focus on AI systems that act as decision-support systems (e.g., diagnostic support in healthcare), offer operator support (e.g., remote operators for autonomous sailing), or generally act as virtual agents (e.g., information-gathering agents for the police). In addition, I study how physical autonomous systems should perform their tasks and behave so that they fit into a team of humans and machines.

Below is a list of my current projects and a few highlights of past projects. Also have a look at my other activities, such as my community and open-source projects, and my explainable AI research, which is my main focus.

Current TNO projects

  • Hybrid Intelligence in Practice

    Hybrid Intelligence is about augmenting humans with an artificial intellect, where systems are in symbiosis with humans. This lab aims to apply hybrid intelligence in societally relevant use cases where a system-centered perspective has proved ineffective. One of our first use cases is support systems in healthcare that help clients change their lifestyle for the better.

  • Explainable AI for the military

    How to apply explainable AI responsibly is not yet apparent, as we know little about how to communicate explanations and what their effects are. This project, which I lead, researches how explanations can and should be applied responsibly in high-risk AI applications for the military.

  • Explainable AI for decision support

    This research program, which I lead, investigates the requirements, interaction designs, and technologies an AI system needs to explain itself to system users, auditors, and engineers. The research is embedded in various use cases, from text mining that supports lawyers to diagnostic support for doctors.

    Appl.AI
  • Explainable AI in smart energy systems

    A project in which I advise on the potential of explainable AI research in AI applications that help consumers manage their energy usage at home and help energy suppliers optimize the energy grid for households.

  • AI safety and artificial moral models

    A program to develop a methodology for iteratively designing artificial moral models, enabling responsible high-risk applications of AI as a means to tackle AI safety challenges and achieve value alignment. As the scientific lead, I organize and prioritize the research efforts in developing this methodology.

  • Intelligent operator support system

    Within several European Horizon projects, I lead the research on how AI-based support tools can help operators safely and effectively supervise not one but many autonomous operations in parallel.

    MOSES

Past projects

  • Delegation in human-AI teams

    Research on how humans can effectively, responsibly, and efficiently delegate tasks to autonomous systems and virtual agents in a way that leverages both the intelligence of these systems and agents and unique human capabilities. Under my lead we developed the DASH concept.

    DASH
  • Research roadmap on the operationalization of AI and data science

    Together with researchers from various domains, this project worked towards a roadmap for operationalizing AI and data science. My role was to advise and provide a vision on human-machine teaming and how it should be operationalized. I led a team of AI researchers and practitioners from different research institutes and companies.

  • Research roadmap on Human-Machine Teaming

    A project to establish the second iteration of the roadmap on human-machine teaming research for the Dutch Department of Defense (DoD). Within this roadmap we defined the DoD's research areas and investment priorities to ensure its use of AI and autonomy is effective and responsible.

  • Actionable explanations through causal models

    Those who are subjected to an AI system’s decision need the ability to contest that decision. This research resulted in a method that combines machine learning and causal models to generate explanations that support this ability: they explain what steps the subject can take to influence the AI system’s decision (e.g., change their lifestyle) or how to report concerns effectively (e.g., report a biased decision or a violation of privacy). A minimal code sketch of this idea follows at the end of this list.

  • Operator support on maritime vessels

    Current large maritime vessels have a sophisticated auto-pilot to follow a set course or hold a position, even under extreme weather conditions. Within this project we developed a concept that allows a single operator to supervise this auto-pilot while roaming the vessel. Predictive analytics make this possible: the operator is informed in time to return to the bridge to handle issues and deviations.

    Operator support video
  • Machine learning on EEG data for an improved virtual reality experience

    Latency is an issue for any virtual reality experience: head movements get out of sync with what is projected. With a patented approach combining EEG data and machine learning, we managed to predict up to 0.5 seconds ahead whether someone will move their head and in which direction.
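
The actionable-explanations project above can be illustrated with a small counterfactual search: given a trained classifier and a subject's data, look for a minimal change to actionable features that flips the decision. The sketch below is a toy version of that idea, not the actual TNO method; the scikit-learn model, the lifestyle features, and the greedy search are all illustrative assumptions.

```python
# Toy counterfactual search: nudge "actionable" features until the
# model's decision flips. A sketch only; the features, step sizes, and
# greedy search are hypothetical stand-ins, not the TNO method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical lifestyle data: [exercise hours/week, smoker (0/1)] -> risk.
X = np.array([[0, 1], [1, 1], [2, 1], [4, 0], [5, 0], [6, 0]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = high risk, 0 = low risk
model = LogisticRegression().fit(X, y)

def counterfactual(x, steps, max_iters=20):
    """Greedily apply the step that most lowers the probability of the
    original class, until the predicted class flips."""
    x = x.copy()
    original = model.predict(x.reshape(1, -1))[0]
    for _ in range(max_iters):
        if model.predict(x.reshape(1, -1))[0] != original:
            return x  # a decision-flipping, actionable change
        candidates = [np.clip(x + s, 0.0, None) for s in steps]
        x = min(candidates,
                key=lambda c: model.predict_proba(c.reshape(1, -1))[0, original])
    return None  # no counterfactual found within the budget

subject = np.array([1.0, 1.0])           # 1 h exercise, smoker -> high risk
steps = [np.array([1.0, 0.0]),           # exercise one hour more per week
         np.array([0.0, -1.0])]          # quit smoking
print(counterfactual(subject, steps))    # the change to recommend to the subject
```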

How can humans collaborate better with AI systems?

  • Through explainable AI!

    My explainable AI research contributes to more responsible use of AI systems by providing humans with the understanding they need of how such systems make their decisions.

    XAI research