AI Implementations Track
Timetable
11:00 Coffee break and Expo
11:30 Jos van der Wijst
12:00 Carolien Mazal
12:30 Ray Lange
13:00 Lunch break and Expo
14:15 Noëlle Cicilia
14:45 Giuseppe Garcea
15:15 Benno Beuting
15:45 Rick Scholte

Jos van der Wijst
BG.Legal
Building Legal Trust in AI: Standards, Contracts & Impact Assessments
Join to learn how legal considerations, impact assessments, and emerging standards play a vital role in fostering trust in AI implementations.
View profile on business website
Abstract
As AI technologies rapidly evolve, building trust goes beyond technical robustness — it also requires sound legal foundations. In this session, legal expert Jos brings his deep expertise in data law, AI governance, and intellectual property to the forefront. As a core team member of the NL AI Coalition and a contributor to the Dutch NEN Standards Committee on AI and Big Data, Jos helps shape the legal frameworks and standards that enable responsible AI adoption.
He advises startups, scale-ups, and corporate clients on a wide range of legal topics, from data contracts and IP strategies to AI impact assessments (IAMI) and the legal aspects of M&A transactions involving AI-driven companies. Jos also represents clients in patent litigation, balancing innovation protection with fair market practices.
Join this talk to learn how legal considerations, impact assessments, and emerging standards play a vital role in fostering trust in AI implementations.
Carolien Mazal
Governmental Affairs Manager TomTom
Navigating Trust: TomTom’s journey from Turn-by-Turn Directions to AI-Powered Guidance
How has TomTom managed to gain such a high level of trust? And, as we integrate advanced AI systems into our navigation technology, how can we maintain it? Illustrated with many practical examples, you will hear about TomTom's journey to win its drivers' trust.
“At the roundabout, take the second exit”, “Recalculating route...”, “You have reached your destination”: these are well-known phrases coming from your “TomTom” while driving.
Why do so many people trust these messages? Even though hiccups of navigation devices are a popular small-talk topic, people still tend to follow navigation instructions blindly. When a driver follows our navigation instructions, they're placing their safety, time, and confidence in our hands.
How has TomTom managed to gain such a high level of trust? And, as we integrate advanced AI systems into our navigation technology, how can we maintain it? Illustrated with many practical examples, you will hear about TomTom's journey to win its drivers' trust.


Ray Lange
BDE @ Nebul - Sovereign European Cloud & A.I.
The Role of Private AI, Sovereign Cloud and Data Privacy
Discover how to create an infrastructure that enables 100% private and secure AI processing, where data stays in the location you choose. Learn about the importance of data sovereignty and Private AI, and how to maintain control over sensitive information.
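The session description is brief, so here is a minimal sketch of the core idea behind Private AI: inference requests go to a model server running on infrastructure you control, so prompts and data never leave your own network. The endpoint URL and model name below are hypothetical placeholders for illustration, not Nebul's actual stack; any self-hosted, OpenAI-compatible server would behave the same way.

```python
# Minimal sketch of "private AI": inference stays on infrastructure you
# control. Endpoint and model name are hypothetical placeholders, not a
# real Nebul API; any OpenAI-compatible self-hosted server works the same.
import requests

PRIVATE_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # on-prem, not a public cloud

def ask_private_model(prompt: str) -> str:
    """Send a prompt to a self-hosted model; data never leaves the network."""
    response = requests.post(
        PRIVATE_ENDPOINT,
        json={
            "model": "local-llm",  # hypothetical model identifier
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_private_model("Summarize our data-residency policy."))
```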
Noëlle Cicilia
Co-founder Brush AI | AI Person of the Year
Trustworthy AI: Can you ever trust an untrustworthy technology?
The term "trustworthy AI" is everywhere these days, and rightfully so. AI systems are becoming an integral part of our society, from healthcare to transportation, so they absolutely need to be reliable.
The term "trustworthy AI" is everywhere these days, and rightfully so. AI systems are becoming an integral part of our society, from healthcare to transportation, so they absolutely need to be reliable.
But here's where it gets interesting: many AI systems are inherently untrustworthy by nature. They make mistakes, harbor biases, and behave unpredictably.
This creates a paradox that raises a crucial question: how do we build a stable society on a technology that has this fundamental reliability problem?


Giuseppe Garcea
HW R&D Director Axelera AI
Trust at the core of technology and mission
Axelera AI is growing rapidly and building the future of Edge AI — fast, energy-efficient, and reliable. Edge AI is artificial intelligence that doesn’t live in the cloud, but close to the action: in cameras, machines, and vehicles themselves. During this session, you will discover why trust is at the core of their technology and mission.
Technical Abstract
The Axelera AI Metis platform revolutionizes edge AI computing through an integrated hardware-software solution featuring the Metis AI Processing Unit (AIPU) and the Voyager SDK. Built on Axelera's proprietary SRAM-based Digital In-Memory Computing (D-IMC) engine, the platform performs matrix operations without data movement while supporting INT8 operations and maintaining FP32 accuracy without model retraining. The Metis AIPU delivers 214 TOPS at 15 TOPS/W efficiency, processing 480 fps on YOLOv5s across 16 Full HD streams with 99% of the original model accuracy and 4-5x better efficiency than GPU solutions. Available in a variety of form factors (M.2 cards, PCIe cards, and SBC boards), the platform democratizes edge AI across the automotive, healthcare, industrial, and surveillance markets through breakthrough performance and cost-effectiveness.
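For readers who want to sanity-check the quoted figures, the short calculation below uses only the numbers from the abstract: 480 fps shared across 16 streams works out to a real-time 30 fps per stream, and 214 TOPS at 15 TOPS/W implies roughly 14 W of power draw at peak.

```python
# Back-of-the-envelope check of the Metis figures quoted above.
total_fps = 480             # YOLOv5s throughput across all streams
num_streams = 16            # concurrent Full HD streams
peak_tops = 214             # quoted peak throughput
efficiency_tops_per_w = 15  # quoted energy efficiency

fps_per_stream = total_fps / num_streams                   # 30.0 fps: real-time video
implied_peak_power_w = peak_tops / efficiency_tops_per_w   # ~14.3 W

print(f"Per-stream rate: {fps_per_stream:.0f} fps")
print(f"Implied power at peak: {implied_peak_power_w:.1f} W")
```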
Benno Beuting
CEO at Cordis SUITE
How can we trust AI in safety-critical machine control?
In safety-critical systems, even one faulty AI decision can lead to costly failures or dangerous machine behavior. If we’re cautious about trusting AI, how can we trust it to control complex, high-stakes industrial systems? Discover how Cordis SUITE merges the creativity and productivity of AI with formal verification methods to deliver control software that’s transparent, reliable, and safe by design.
Abstract
Introducing Dreaming Machines
What Are Dreaming Machines? “Dreaming Machines” is our vision of a future where machines autonomously imagine, explore, and evolve—by dreaming.
The Opportunity: Today’s machines typically operate far below their optimal capabilities. By giving them the ability to dream, we unlock their latent potential and initiate a new era of autonomous innovation. Instead of being hardcoded to perform fixed tasks, Dreaming Machines continuously evolve—autonomously imagining, experimenting, and refining how they operate.
A Breakthrough Difference: This limitless innovation through dreaming happens in real-time, while the system remains fully operational—eliminating the need for downtime to unlock new potential.
How does it work? Each Dreaming Machine is virtually cloned and placed within a safe digital playground. In this simulated world, the machine can run thousands of parallel scenarios—exploring new strategies, learning from outcomes, and improving its behavior over time. These simulations happen without any risk to real-world performance, enabling machines to safely push the boundaries of what’s possible.
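The abstract does not spell out the underlying machinery, so the sketch below is only a toy illustration of the pattern it describes: evaluate many candidate control strategies against a simulated stand-in for the machine and keep the best, without ever touching the real system. The simulate cost model is an invented placeholder, not a Cordis SUITE component.

```python
# Toy "dreaming" loop: try thousands of candidate control parameters in a
# simulated world and keep the best. The cost model below is an invented
# stand-in for a digital twin, used purely for illustration.
import random

def simulate(speed: float, accel: float) -> float:
    """Hypothetical digital-twin score: cycle time plus a wear penalty."""
    cycle_time = 100.0 / speed          # faster speed gives shorter cycles
    wear_penalty = 0.5 * accel ** 2     # harder acceleration causes more wear
    return cycle_time + wear_penalty    # lower is better

best_params, best_cost = None, float("inf")
for _ in range(10_000):  # thousands of risk-free virtual scenarios
    candidate = (random.uniform(1.0, 10.0), random.uniform(0.1, 5.0))
    cost = simulate(*candidate)
    if cost < best_cost:
        best_params, best_cost = candidate, cost

print(f"Best (speed, accel): {best_params}, cost {best_cost:.2f}")
```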
Established Low-Code Foundation: Cordis SUITE provides the low-code foundation that makes the Dreaming Machines vision possible. These visual low-code models are based on a subset of UML, providing clear, transparent control logic that engineers can easily verify and refine. On top of this solid foundation, AI analyzes these models and adjusts them to discover better ways to control the system. The Cordis SUITE platform then automatically generates error-free control software from the updated models, resulting in robust, trustworthy control software for industrial automation.
How Do Visual Low-Code Models Help Build Trust? Because the control logic stays graphical and structured, any changes made by AI are transparent and easy for engineers to understand and verify. This makes it clear how the system works and how it evolves, unlike logic buried in lines of hard-to-validate code.
How Does Formal Verification Help Build Trust? To trust AI in safety-critical machine control, there must be certainty that every behavior meets strict safety and functional requirements. This is possible in Cordis SUITE because the low-code models are formal, unlike traditional code, so proven mathematical verification techniques can automatically check every change proposed by AI. By combining AI-assisted control design with formal verification, the trust gap is closed, and self-improving autonomous systems remain predictable, robust, and safe in daily operation.
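Cordis SUITE's actual verification tooling is not described in the abstract, but the principle, mechanically checking every reachable behavior of a formal model against a safety requirement, can be shown with a minimal explicit-state model check. The two-variable machine model below (a motor with a safety guard) is a hypothetical example, not a Cordis model.

```python
# Minimal explicit-state model checking sketch: exhaustively explore every
# reachable state of a small formal controller model and assert a safety
# invariant in each one. The machine model is a hypothetical example.
from collections import deque

# State: (motor_on, guard_open) for a toy interlocked machine.
def transitions(state):
    motor_on, guard_open = state
    if not guard_open:
        yield (True, guard_open)           # start: only with the guard closed
    yield (False, guard_open)              # stop: always allowed
    if not motor_on:
        yield (motor_on, not guard_open)   # move guard: only while stopped

def safe(state):
    motor_on, guard_open = state
    return not (motor_on and guard_open)   # invariant: never run with guard open

initial = (False, False)
seen, frontier = {initial}, deque([initial])
while frontier:                            # breadth-first state-space search
    state = frontier.popleft()
    assert safe(state), f"safety violation in {state}"
    for nxt in transitions(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(f"Verified {len(seen)} reachable states; invariant holds.")
```

Real model checkers scale this exhaustive idea to far larger state spaces, which is what makes it feasible to re-verify every AI-proposed change automatically.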


Rick Scholte
Founder & CEO at Sorama


