Progress in artificial intelligence (AI), robotics, conversational agents, connectivity and sensing technology increases the capabilities of physical and virtual agents to operate with ever greater autonomy, possibly as team members, companions or partners of humans. However, the information and decision-making processes of these agents differ fundamentally from those of humans.
Fruitful human-agent collaboration requires the AI to develop meta-reasoning and meta-learning capabilities that match individual human and group needs, and that support lifelong human-agent co-learning.
Harmonizing the joint and interdependent social, cognitive and affective processes in this co-active learning is a major challenge for the research and development community, calling for a rigorous interdisciplinary approach. The social sciences (e.g., psychology) and AI have to join forces to study agents and humans as parts of an evolving, mutually adaptive human-agent collective with hybrid intelligence.
In this way, AI-based technologies can progressively augment teams' capabilities, e.g., by integrating robotic partners into healthcare teams, safety and security teams, and defense teams for improved performance and resilience.
The NWO Gravitation programme Hybrid Intelligence (HI) started, and the HI roadmap was published (it received the journal's best paper of the year award). The first PhD candidates have been appointed and their PhD projects have been worked out. First results will, among other things, be presented and discussed at the first International Conference on Hybrid Human-Artificial Intelligence (HHAI2022), including a workshop and hackathon with key contributions from TNO and TU Delft.
The PhD research of Jasper van der Waa on eXplainable Artificial Intelligence (XAI) and of Emma van Zoelen on Human-AI Co-learning has been further embedded in the TNO APPL.AI programme (particularly the Flagship FATE) and connected to the NWO HI programme. In addition to their project-specific publications and presentations, they have been working on two common research platforms: the Human-Agent Teaming Rapid Experimentation (MATRX) software package and the Wiki Socio-Cognitive Engineering tool (WiSCE). Both platforms support collaborative HI research and are being used by TNO and TU Delft (and others) for this purpose.
In the Delft AI*MAN Lab, Ruben Verhagen developed a taxonomy of concepts in eXplainable AI research. In an expert meeting for the Minister of Justice and Security, a presentation was given on our TU Delft-TNO research on innovative (AI-based) technologies for the prevention and treatment of psycho-trauma and Post-Traumatic Stress Disorder (PTSD). Last but not least, Mark Neerincx is involved in two awarded ELSA labs, for the safety and security (police) domain and for the defence domain respectively (TNO and TU Delft participate in both labs, which start in 2022).
- Nanja Smets (FP7 NIFTi/TRADR projects, MECA project for ESA)
- Rifca Peters (Horizon2020 project PAL robot, ERP Hybrid AI)
- Mike Ligthart (NWO-TTW/KWF, HERO robot project)
- Bernd Dudzik (4TU Human&Technology); obtained his PhD in 2021.
- Franziska Burger (4TU-Human&Technology)
- Jasper van der Waa (RVO Human-AI Teaming, ERP Hybrid AI, Explainable AI project)
- Emma van Zoelen (DO AIO, “Artificial Intelligence for Military Simulations” [AIMS] and “Opleiding & Individuele Training”[OIT] programmes, Learner Modelling project)
- Ruben Verhagen (Delft AI*MAN Lab, Explainable AI in Human-Agent Teams)
- Mani Tajaddini (Gravitation programme Hybrid Intelligence): Design Patterns of Hybrid Intelligence; promotor; PhD, Delft University of Technology.
- Nele Albers (Medical Delta): Computerised Health Coaching; 2nd promotor; PhD, Delft University of Technology.
- Neerincx, M.A., van Vught, W., Blanson Henkemans, O., Oleari, E., Broekens, J., Peters, R., Kaptein, F., Demiris, Y., Kiefer, B., Fumagalli, D., and Bierman, B. (2019). Socio-Cognitive Engineering of a Robotic Partner for Child's Diabetes Self-Management. Frontiers in Robotics and AI, 6:118. https://doi.org/10.3389/frobt.2019.00118
- de Visser, E.J., Peeters, M.M.M., Jung, M.F., Kohn, S., Shaw, T.H., Pak, R., and Neerincx, M.A. (2019). Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams. International Journal of Social Robotics. https://doi.org/10.1007/s12369-019-00596-x
- van der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291, 103404.