EAISI Research Track
How to trust AI?
In the Summit research track, TU/e researchers and partners will present their latest findings around this theme. Amid AI developments that cause doubt or even fear, we deliberately focus on the positive influence AI has, and will have, on our lives and the real world around us. Join the discussions and learn in detail what is happening at the heart of Brainport's AI research.
At EAISI, more than 900 AI researchers work on AI systems where the physical, digital, and human worlds come together. EAISI aims for a better understanding, better designs, better models, and better decisions in the application areas of Health, Mobility, and High-Tech Systems.
Confirmed Speakers 2025
Below are the first confirmed speakers:
Associate Professor Alessandro Saccon of the department of Mechanical Engineering
Assistant Professor Meike Nauta of the department of Mathematics and Computer Science
Assistant Professor Isel Grau Garcia of the department of Industrial Engineering and Innovation Sciences
and Assistant Professor Guang Hu of the department of Mechanical Engineering.

Alessandro Saccon
Can we trust robots to physically touch the open world? The present and the future
Abstract
Quadrupeds jumping on rocky terrains, humanoid robots performing somersaults, robot manipulators folding laundry: it is an exciting time in the field of robotics.
Still, we do not have a robot filling the dishwasher at home or a humanoid robot building a scaffold on a construction site. Where do we really stand? What have been the enablers of the undeniable progress in robotics research in the last five years? And what is still lacking to build trustworthy AI robotics systems for industry and our homes?
The talk will provide a perspective on pixel-to-action robot control, the role of physics simulation, the use of machine learning for providing unprecedented levels of fast but potentially faulty perception and motion planning, and the surge of new computational and mechatronics hardware that is meant to be tolerant to collisions and exploit physical contact with the environment, instead of fearing and avoiding it.
Alessandro Saccon received a PhD in control system theory from the University of Padova, Italy, in 2006. He also holds a degree with honors in computer engineering from the same university. Following his PhD, he held a research and development position at the University of Padova in collaboration with racing motorcycle company Ducati Corse. From 2009 to early 2013, he held a post-doctoral research position at the Instituto Superior Técnico, Lisbon, Portugal, working on geometric numerical methods for solving trajectory optimization problems.
Alessandro has held visiting positions at the University of Colorado, Boulder (2003 and 2005), at the California Institute of Technology (2006), and at the Australian National University (2011). He is the author and co-author of more than 50 peer-reviewed scientific publications across his various research areas.

Mauro Salazar
Autonomous Vehicles, Safety Halo, Mobility Systems, Wellbeing
Abstract
After quite some starts and stops, autonomous vehicles are finally becoming a reality. Recent advances in AI have significantly accelerated their development. Classical perception-planning-control architectures are increasingly being replaced by end-to-end architectures powered by foundational AI models, leading to remarkable levels of performance.
Yet a crucial question arises: how do we guarantee safety in such frameworks?

Meike Nauta
What explainable AI can and cannot deliver
Abstract
Knowing what an AI system has learned, and why it behaves as it does, can contribute to trust.
Explainable AI promises these insights and has already uncovered surprising behavior in real use cases. Yet while explainability methods are crucial for auditing model behavior, they can be misleading or simply wrong, giving users a false sense of security.
In this talk I re-examine what explainable AI can and cannot deliver, and argue for a shift from static, one-way explanations to interactive, interpretable-by-design systems. Early work in computer vision and natural language processing shows the first steps toward AI that is interpretable by design.
Isel Grau Garcia
SOFI: A Sparseness-Optimized Feature Importance method
In our paper, we propose a model-agnostic post-hoc explanation procedure devoted to computing feature attribution. The proposed method, termed Sparseness-Optimized Feature Importance (SOFI), entails solving an optimization problem related to the sparseness of feature importance explanations. The intuition behind this property is that the model's performance is severely affected after marginalizing the most important features while remaining largely unaffected after marginalizing the least important ones. Existing post-hoc feature attribution methods do not optimize this property directly but rather implement proxies to obtain this behavior. Numerical simulations using both structured (tabular) and unstructured (image) classification datasets show the superiority of our proposal compared with state-of-the-art feature attribution explanation methods.
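The sparseness intuition above (marginalizing the most important features hurts performance, while marginalizing the least important barely matters) can be illustrated with a simple permutation-based importance check on toy data. This is an illustrative sketch only, not the SOFI optimization itself; the thresholding "model" and the dataset are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only feature 0 carries signal; features 1-3 are pure noise.
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)

def accuracy(X_in):
    # "Model": threshold on feature 0 (stands in for a trained classifier).
    return np.mean((X_in[:, 0] > 0).astype(int) == y)

def permutation_importance(X_in):
    # Importance of feature j = accuracy drop after marginalizing
    # (here: permuting) column j.
    base = accuracy(X_in)
    drops = []
    for j in range(X_in.shape[1]):
        Xp = X_in.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - accuracy(Xp))
    return np.array(drops)

imp = permutation_importance(X)
# Sparse attribution: essentially the entire drop concentrates on feature 0.
print(np.argmax(imp))  # 0
```

Permuting the noise features leaves accuracy untouched, so their attributed importance is near zero; a sparseness-optimized explanation makes this concentration of importance an explicit objective rather than a by-product.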
Isel Grau received her Ph.D. in Computer Science from Vrije Universiteit Brussel (VUB), Belgium, where her research focused on machine learning interpretability and semi-supervised classification. During her postdoctoral work at the Artificial Intelligence Laboratory of VUB, she collaborated on interdisciplinary projects with organizations such as the Interuniversity Institute of Bioinformatics Brussels, Universitair Ziekenhuis Brussel, and Collibra BV.
Isel has co-authored over 75 peer-reviewed publications and has been actively involved in organizing thematic workshops. She frequently serves as a reviewer for leading AI conferences and journals, including ECAI, BNAIC, LION, and IEEE Transactions on Fuzzy Systems. She has also been a visiting researcher at institutions such as Warsaw University of Technology and the University of Lisbon.

Guang Hu
Pitch: Trusting AI for Sustainable Energy Integration
Our paper examines the critical role of trust in artificial intelligence systems that manage multi-source energy integration in sustainable urban environments. Building on our ongoing machine learning analyses of photovoltaic (PV) and building-integrated PV systems, we investigate how AI can optimize interconnected energy networks. Our design-driven research adopts an interdisciplinary approach to model the complex relationships between diverse generation sources (solar, wind, nuclear, thermal), energy storage systems, and consumption sectors (residential, commercial, industrial) at appropriate urban scales. We demonstrate how these energy sources connect through existing and new infrastructure networks for both electricity and heat distribution. The complexity of these systems necessitates advanced AI-driven analytics for optimal operation—defined through multiple objectives including flexibility, sustainability, and cost-effectiveness. This research addresses the central question: "How can we design trustworthy AI systems that balance automation with human oversight in complex urban energy networks?" We propose a comprehensive framework for establishing trust in AI-powered energy integration hubs, focusing on transparency in decision-making, validation through real-world testing, and human-AI collaborative approaches. Through case studies utilizing multi-source energy data, we demonstrate how properly designed trust mechanisms enhance energy efficiency, grid resilience, and stakeholder acceptance while addressing concerns regarding data privacy, algorithmic bias, and system reliability. We invite collaboration from stakeholders willing to share operational energy system data to further validate and refine our models. 
The findings suggest that trustworthy AI implementation is essential for maximizing the potential of integrated energy systems in sustainable urban development, particularly as cities face increasing challenges from climate change and growing energy demands.
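As a minimal, hypothetical illustration of the multi-source optimization such energy hubs must perform, consider a toy merit-order dispatch that meets demand from the cheapest available sources first. The source names, capacities, and costs below are invented, and the actual framework involves far richer objectives such as flexibility and resilience:

```python
# Each source: (name, capacity in MW, marginal cost in EUR/MWh) -- toy values.
sources = [("solar", 30, 0.0), ("wind", 25, 0.0), ("thermal", 60, 50.0)]

def dispatch(demand):
    """Merit-order dispatch: fill demand from the cheapest sources first."""
    plan, remaining = {}, demand
    for name, cap, cost in sorted(sources, key=lambda s: s[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
        remaining -= take
    return plan

# 70 MW of demand: renewables run at full capacity, thermal covers the rest.
print(dispatch(70))  # {'solar': 30, 'wind': 25, 'thermal': 15}
```

An AI-driven hub replaces this greedy rule with learned, multi-objective optimization over storage, networks, and uncertain generation, which is precisely where the trust questions raised above become acute.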
Guang Hu obtained his Ph.D. with distinction from Tsinghua University in 2019. Prior to joining TU/e, he held positions as a postdoctoral researcher at the Karlsruhe Institute of Technology in Germany and as a researcher at the Paul Scherrer Institut (PSI) in Switzerland, building a strong international research profile.
Guang has published in several respected journals in the fields of computational modeling, machine learning applications, and sustainable/nuclear energy. His research contributions span theoretical advancements in modeling techniques and practical applications for energy systems. He actively participates in international research collaborations and serves as a reviewer for journals in his field, contributing to the academic community's knowledge development and quality assurance.

Zaharah Bukhsh (duo talk)
Assistant professor at Eindhoven University of Technology
Decision-making with deep reinforcement learning
Deep learning has revolutionized data-driven models, enabling the mastery of complex tasks by learning from vast amounts of (labeled) data.
Abstract
Deep reinforcement learning (RL) takes this progress a step further by enabling decision-making under uncertainty, where the goal is not only to perceive and classify but also to take actions and consider their long-term consequences.
In this talk, we will present real-world applications of deep RL in diverse domains, from maintenance planning of underground utilities and quay walls to the collaborative order-picking problem in Vanderlande warehouses. We will show how deep RL empowers intelligent decision-making in complex and dynamic environments.
The talk will conclude with valuable lessons learned from industry experts, shedding light on the challenges and opportunities that arise when applying deep RL in real-world settings.
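The core idea of reinforcement learning, choosing actions by their long-term consequences rather than immediate payoff, can be sketched in its simplest tabular form. This is a toy example, unrelated to the specific maintenance and order-picking systems discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Corridor of 4 states; reward only on reaching the right end (state 3).
# There is no immediate signal elsewhere: value must propagate back
# from the distant goal, which is exactly the "long-term consequences" point.
n_states, n_actions, goal = 4, 2, 3   # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(300):
    s = 0
    for _ in range(100):  # cap episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else 0.0
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if s == goal:
            break

policy = np.argmax(Q, axis=1)
print(policy[:goal])  # right (=1) in every non-goal state
```

Deep RL replaces the table Q with a neural network, which is what makes the approach scale to warehouse-sized state spaces.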

Kasper Hendriks (duo talk)
Innovation engineer at Vanderlande
Decision-making with deep reinforcement learning
Deep learning has revolutionized data-driven models, enabling the mastery of complex tasks by learning from vast amounts of (labeled) data.
Abstract
Deep reinforcement learning (RL) takes this progress a step further by enabling decision-making under uncertainty, where the goal is not only to perceive and classify but also to take actions and consider their long-term consequences.
In this talk, we will present real-world applications of deep RL in diverse domains, from maintenance planning of underground utilities and quay walls to the collaborative order-picking problem in Vanderlande warehouses. We will show how deep RL empowers intelligent decision-making in complex and dynamic environments.
The talk will conclude with valuable lessons learned from industry experts, shedding light on the challenges and opportunities that arise when applying deep RL in real-world settings.

Patricia Kahr
PhD Candidate in the Human-Technology Interaction group.
Performance is Not Everything! Learning What Promotes Trustworthy Human-AI Collaborations: Insights from a Case Study in Logistics Planning
An important area of research on human-AI interaction focuses on promoting trust and reliance in AI decision support systems.
Abstract
Research in this area is thriving, and it is evident that trust and reliance in AI systems are often not well-calibrated for users: experts frequently find it challenging to depend on the advice from such systems or may outright refuse to use their support for decision-making.
To gain a deeper understanding of real-life collaborations, we conducted semi-structured in-depth interviews with logistics planning experts to explore their experiences with their automated planning system.
Our study sheds light on the overall trust dynamics in this area and the conditions under which experts rely on or deviate from AI recommendations. We highlight the importance of incorporating human needs and special circumstances into AI systems to foster more effective and trustworthy collaboration between humans and AI.

Andrii Kompanets
PhD Candidate at the Steel Structures group at TU/e
AI-aided visual inspection of steel bridges
Automating the current bridge visual inspection practices using drones and image processing techniques is a prominent way to make these inspections more effective, robust, and less expensive.
Abstract
In our work, we investigate the development of a novel deep-learning method for the detection of fatigue cracks in high-resolution images of steel bridges.
First, we developed a semi-automatic tool based on a geometric tracking algorithm, which allows fast and accurate pixel-wise annotation of cracks in images. We used this tool to generate a challenging benchmark dataset. We then conducted a series of experiments on this dataset, aiming at a robust, fully automatic deep-learning method for crack segmentation.
Based on these experiments, multiple discoveries were made and novel solutions were proposed, which helped to significantly increase the reliability of the crack segmentation system.
To prove the concept of automatic bridge inspection, we also developed a method to compare our automatic crack segmentation system with the performance of human inspectors.
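One generic way to put automatic segmentations and human annotations on a common scale is per-image intersection over union (IoU). The sketch below is illustrative only and does not reproduce the comparison protocol developed in this work; the tiny "masks" are invented:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # both empty: perfect agreement

# Toy 1D "masks" standing in for crack pixels in an image.
truth = np.array([0, 1, 1, 1, 0, 0])   # reference annotation
model = np.array([0, 1, 1, 0, 0, 0])   # model misses one crack pixel
human = np.array([0, 1, 1, 1, 1, 0])   # inspector over-segments by one pixel

print(iou(model, truth), iou(human, truth))  # 0.666..., 0.75
```

Scoring both the model and human inspectors against the same reference makes the "is the machine at least as reliable as a person?" question quantitative.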

Apoorva Singh
Predictive modeling of large-scale pedestrian dynamics using AI
The current research work, supported by the EAISI EMDAIR project “AICrowd”, aims at quantitatively modeling the dynamics of pedestrian crowds using tools from AI and system identification.
Abstract
It is part of a decade-long endeavor to establish a physical understanding of crowd flows and the capability to model them predictively: an effort at the interface of physics, AI, and engineering.
This understanding is crucial for effective crowd management and designing safe and comfortable urban infrastructure.

Fons van der Sommen
Associate Professor Video Coding & Architectures, TU/e
Robust, self-critical AI for oncology
While the capabilities of modern AI systems keep surpassing expectations for a wide range of applications, their development has mostly been driven by their accuracy on standardized datasets.
Abstract
This empirical approach to model development introduces blind spots in their application, as evidenced by an increasing number of reports of unstable model behaviour under minor changes in the input data.
For healthcare applications, these observations are especially concerning, as unreliable AI output may lead to incorrect diagnoses and treatment.
In this talk, Dr. Van der Sommen will elaborate on the current lack of model robustness in oncology applications and discuss methods that could help mitigate this problem.
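The instability in question can be made concrete with a deliberately crude example: a model with a hard decision boundary whose output flips under a tiny input perturbation. This is illustrative only; real failure modes in medical imaging models are subtler:

```python
import numpy as np

def model(x):
    """Toy classifier with a hard decision boundary at sum(x) = 0."""
    return int(x.sum() > 0)

# An input sitting just on one side of the boundary...
x = np.array([0.001, -0.0005])
# ...flips its predicted class under an imperceptibly small perturbation.
print(model(x), model(x - 0.002))  # 1 0
```

Robustness evaluation asks how often, and how badly, this kind of flip occurs under the perturbations a deployed system will actually encounter, such as scanner or acquisition differences.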

Carlos Zednik
Assistant Professor, TU/e
Cognitive models to understand knowledge-representation in large language models
Harnessing the power of large language models while retaining control over their behavior is one of the central challenges of contemporary artificial intelligence.
Abstract
One way to meet this challenge is to identify and characterize the knowledge these models possess, or the beliefs that drive their behavior.
In this talk, I present an explanatory strategy that combines methodological principles from cognitive science and explainable AI to characterize the knowledge or beliefs that are represented in state-of-the-art transformer architectures.
This strategy centers on cognitive models, which provide not only predictive power over a particular system's behavior, but which also support systematic and targeted interventions on that behavior.

Remco Duits
Head of the Geometric Learning and Differential Geometry group, CASA cluster, Department of Mathematics and Computer Science, EAISI, TU/e.
New Geometric Learning for Medical and Industrial Image Analysis
Our geometrically interpretable networks achieve better classification results in image processing with both less training data and less network complexity.
Abstract
We show that our geometric image processing is more accurate for various medical and industrial image analysis problems, e.g.: 1) segmentation and tracking of complex vasculature, 2) wall-shear stress estimation of vasculature, 3) segmentation of chips, and 4) crack detection in steel bridges.
To get to state-of-the-art solutions for practical medical and industrial image analysis problems, we develop PDE-based Group Convolutional Neural Networks (PDE-G-CNNs).
In PDE-G-CNNs a network layer is a set of solvers of PDEs (Partial Differential Equations). The PDEs are defined on the space of positions and orientations and provide a geometric design of the roto-translation equivariant neural network.
The network consists of morphological convolutions with kernels solving nonlinear PDEs (HJB equations for max-pooling over Riemannian balls), and linear convolutions solving linear PDEs (convection, fractional diffusion). This renders the common but hard-to-interpret (ReLU) nonlinearities obsolete, and they are excluded.
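Schematically, and with illustrative notation not taken from the original papers, one such layer evolves a feature map $U$ on the space of positions and orientations $\mathbb{R}^2 \times S^1$ for a fixed time $T$ via PDEs of the form:

```latex
% convection: shift along a learned direction c
\partial_t U = -\,\mathbf{c}\cdot\nabla_{\mathcal{G}}\, U
% linear fractional diffusion: smoothing
\partial_t U = -(-\Delta_{\mathcal{G}})^{\alpha}\, U
% morphological dilation/erosion: HJB equations replacing max/min-pooling
\partial_t U = \pm\,\|\nabla_{\mathcal{G}} U\|^{2\alpha}
```

Here $\mathcal{G}$ denotes a left-invariant Riemannian metric on $\mathbb{R}^2 \times S^1$ whose learned parameters encode the roto-translation geometry; composing these solvers plays the role that convolutions, pooling, and nonlinearities play in a conventional CNN.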

Josette Gevers (duo talk)
Full Professor TU/e
An AI-based feedback system for team support during crisis events
Wearable technology offers a groundbreaking opportunity to provide teams with real-time feedback to enhance their effectiveness in high-stakes crisis environments, such as medical emergencies.
Abstract
While current wearable technology primarily targets individual users, advancements in AI are unveiling new possibilities for capturing and supporting team dynamics in circumstances where teamwork is essential but often falters.
Our transdisciplinary research leverages insights from psychology, physiology, and computational sciences to analyze coordination dynamics in team interaction signals (e.g., heart rate, skin conductance, speech, motion) and identify (early warning signs of) team coordination breakdowns. We develop and test methods to convert these measurements into real-time feedback to help teams sustain effective team performance.
With studies in serious gaming and medical emergency simulation settings, we aim to generate actionable insights for future wearable applications, ultimately enhancing team functioning and adaptive performance in crisis conditions.
In this presentation, Josette will highlight the critical role of team coordination during crises, and Travis will detail the AI methodologies used to assess and potentially enhance these team dynamics.
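As a rough illustration of turning interaction signals into a coordination indicator, the sketch below computes a rolling correlation between two simulated heart-rate traces and shows it dropping when the signals desynchronize. This is a crude stand-in for the coordination measures used in this research, and all data are synthetic:

```python
import numpy as np

def windowed_sync(a, b, win):
    """Rolling Pearson correlation between two signals: a simple
    proxy for moment-to-moment physiological synchrony."""
    out = []
    for i in range(len(a) - win + 1):
        out.append(np.corrcoef(a[i:i + win], b[i:i + win])[0, 1])
    return np.array(out)

t = np.linspace(0, 10, 200)
hr1 = 70 + 5 * np.sin(t)
hr2 = 70 + 5 * np.sin(t)                        # initially in sync
hr2[100:] = 70 + 5 * np.sin(3 * t[100:] + 2)    # coordination breaks down

sync = windowed_sync(hr1, hr2, win=20)
# Early windows show near-perfect correlation; late windows drop off,
# which a feedback system could surface to the team in real time.
print(sync[:5].mean(), sync[-50:].mean())
```

A deployed system would of course use validated synchrony measures, artifact handling, and multiple signal modalities (speech, motion, skin conductance) rather than a bare correlation.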

Travis Wiltshire (duo talk)
Assistant Professor in the Department of Cognitive Science and Artificial Intelligence, Tilburg University
An AI-based feedback system for team support during crisis events
Wearable technology offers a groundbreaking opportunity to provide teams with real-time feedback to enhance their effectiveness in high-stakes crisis environments, such as medical emergencies.
Abstract
While current wearable technology primarily targets individual users, advancements in AI are unveiling new possibilities for capturing and supporting team dynamics in circumstances where teamwork is essential but often falters.
Our transdisciplinary research leverages insights from psychology, physiology, and computational sciences to analyze coordination dynamics in team interaction signals (e.g., heart rate, skin conductance, speech, motion) and identify (early warning signs of) team coordination breakdowns. We develop and test methods to convert these measurements into real-time feedback to help teams sustain effective team performance.
With studies in serious gaming and medical emergency simulation settings, we aim to generate actionable insights for future wearable applications, ultimately enhancing team functioning and adaptive performance in crisis conditions.
In this presentation, Josette will highlight the critical role of team coordination during crises, and Travis will detail the AI methodologies used to assess and potentially enhance these team dynamics.

Aaqib Saeed (duo talk)
Decentralized AI for Sensing and Health
The ubiquity of interconnected systems has given rise to a world of omnipresent computing, so ingrained in our daily lives that we barely notice our interactions with these platforms.
Abstract
This proliferation of devices, embedded with sophisticated sensors, generates data at an unprecedented scale, presenting both opportunities and challenges for artificial intelligence (AI).
Decentralized AI is emerging as a core solution to effectively harness the power of distributed data and computing resources, enabling the development of data-driven predictive models that form the foundation of the next generation of embedded intelligence.
This talk will focus on our recent efforts in addressing the challenges associated with efficiently utilizing decentralized data in a scalable manner, particularly in the context of healthcare, where decentralized AI holds immense potential for revolutionizing patient care and improving health outcomes.
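A minimal sketch of the decentralized pattern, in the spirit of federated averaging: clients fit a model on local data, and only model weights, never raw data, reach the server. The linear-regression task and all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth model that every client's local (private) data follows.
true_w = np.array([2.0, -1.0])

def client_update(w, n=50, lr=0.1, steps=20):
    """One client: sample private local data, run a few gradient steps,
    and return only the updated weights."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n
        w = w - lr * grad
    return w

w = np.zeros(2)
for _ in range(10):                                   # communication rounds
    client_ws = [client_update(w.copy()) for _ in range(5)]
    w = np.mean(client_ws, axis=0)                    # server averages weights

print(np.round(w, 2))  # approximately recovers true_w
```

In health applications the appeal is exactly this separation: patient data stays on-device or in-hospital, while the shared model still benefits from everyone's data.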

Hareld Kemps (duo talk)
Decentralized AI for Sensing and Health
The ubiquity of interconnected systems has given rise to a world of omnipresent computing, so ingrained in our daily lives that we barely notice our interactions with these platforms.
Abstract
This proliferation of devices, embedded with sophisticated sensors, generates data at an unprecedented scale, presenting both opportunities and challenges for artificial intelligence (AI).
Decentralized AI is emerging as a core solution to effectively harness the power of distributed data and computing resources, enabling the development of data-driven predictive models that form the foundation of the next generation of embedded intelligence.
This talk will focus on our recent efforts in addressing the challenges associated with efficiently utilizing decentralized data in a scalable manner, particularly in the context of healthcare, where decentralized AI holds immense potential for revolutionizing patient care and improving health outcomes.

Jakub Tomczak
Associate professor TU/e
Generative AI Systems
Jakub M. Tomczak is an Associate Professor and the Head of the Generative AI group at the Eindhoven University of Technology. He serves as a Program Chair of NeurIPS 2024. He is the founder of Amsterdam AI Solutions. His research interests are Generative AI, Deep Learning and Probabilistic Modeling.
Abstract
Large Language Models (LLMs) have revolutionized AI systems by enabling communication with machines in natural language. Recent developments in Generative AI (GenAI), such as vision-language models (e.g., GPT-4V) and Gemini, have shown great promise in using LLMs as multimodal systems.
This new line of research leads to Generative AI systems, GenAISys for short, that are capable of multimodal processing and content creation as well as decision-making. GenAISys use natural language as a means of communication and modality encoders as I/O interfaces for processing various data sources.
They are also equipped with databases and external specialized tools, communicating with the system through a module for information retrieval and storage.
This presentation aims to explore and state new research directions in Generative AI Systems, including how to design GenAISys (compositionality, reliability, verifiability), build and train them, and what can be learned from the system-based perspective. Cross-disciplinary approaches are needed to answer open questions about the inner workings of GenAI systems.
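The system shape sketched above, an LLM core with modality encoders as I/O interfaces and a retrieval module mediating access to external data, can be caricatured in a few lines. Every name and behavior here is a hypothetical stand-in, not an actual GenAISys implementation:

```python
# Hypothetical sketch of the GenAISys pattern: encoder -> retrieval -> core.

def encode_text(x):
    """Stand-in for a modality encoder (here: trivial text normalization)."""
    return x.lower()

DATABASE = {"capital of france": "Paris"}   # external knowledge store

def retrieve(query):
    """Retrieval module mediating access to the database."""
    return DATABASE.get(query)

def llm_core(query, context):
    """Stub 'language model': answers from retrieved context only."""
    return context if context else "I don't know."

def genaisys(user_input):
    q = encode_text(user_input)       # I/O interface
    context = retrieve(q)             # tool/database call
    return llm_core(q, context)       # generative core

print(genaisys("Capital of France"))  # Paris
```

The design questions named in the talk (compositionality, reliability, verifiability) concern exactly how these modules are wired together and how much each can be trusted.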

Elena Torta
Collaborative robots for real-world applications: experiences from the EAISI Impuls research projects AMBER and TOWR
How can we increase the level of autonomy of collaborative robots? How can we leverage prior knowledge and machine learning to improve the performance of robotic navigation systems? In this talk we explore these questions and more through recent research results from the EAISI Impuls projects AMBER and TOWR.
Abstract
In this presentation we discuss recent results of the EAISI Impuls research projects AMBER and TOWR, joint TU/e collaborations with Philips (AMBER) and with Lely and Vanderlande (TOWR). In the talk we will show fundamental control and world-modelling techniques and discuss their use in the development of complex navigation strategies for robotic solutions deployed in surgical and agricultural settings.
Emphasis will be given to optimization-based control methods (e.g., MPC) in conjunction with prior knowledge, in the form of CAD or BIM models, as input to the optimization problem. We will also discuss how machine learning and digital twins can provide helpful tools for the design and automatic tuning of the navigation system.
Throughout the presentation we will show how similar techniques can be applied to robots deployed in very different contexts, namely operating rooms and agricultural settings.
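As a minimal illustration of optimization-based receding-horizon control, far simpler than the constrained, model-rich controllers used in AMBER and TOWR, the sketch below steers a 1D double integrator by solving an unconstrained finite-horizon problem in closed form at every step. All dynamics and weights are invented for the example:

```python
import numpy as np

# Double-integrator dynamics: state = [position, velocity].
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x, horizon=15, r=0.1):
    """Solve min sum_k ||x_k||^2 + r*||u_k||^2 over the horizon in
    closed form, then apply only the first input (receding horizon)."""
    n = 2
    F = np.zeros((horizon * n, horizon))   # maps inputs to stacked states
    G = np.zeros((horizon * n, n))         # maps x0 to stacked states
    Ak = np.eye(n)
    for k in range(horizon):
        Ak = Ak @ A                        # Ak = A^(k+1)
        G[k*n:(k+1)*n] = Ak
        for j in range(k + 1):             # x_{k+1} term from input u_j
            F[k*n:(k+1)*n, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
    H = F.T @ F + r * np.eye(horizon)
    u = np.linalg.solve(H, -F.T @ G @ x)   # normal equations
    return u[0]

x = np.array([2.0, 0.0])                   # start 2 m from the target
for _ in range(100):
    x = A @ x + (B * mpc_step(x)).ravel()
print(np.abs(x).max())                     # converged near the origin
```

Real deployments add the ingredients the talk focuses on: state and input constraints, and a world model built from prior knowledge such as CAD or BIM geometry feeding the optimization problem.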


