2025 Speakers

Adrian Lepers
GTM Ops
The State of Open LLMs in 2025
This talk surveys the open LLM ecosystem in 2025, covering key innovations, challenges in openness, and the forces shaping research and deployment. We’ll explore how open models are driving transparency, collaboration, and real-world impact, concluding with a short demo that shows how these models are fueling rapid, creative development.

Sarah Mathews
Group Responsible AI Manager
Ethical AI and the Future of Work: Navigating Opportunities, Risks, and Change
As AI technologies rapidly evolve, leaders face a critical inflection point: how to integrate AI into the workplace ethically while maintaining trust, equity, and human agency. This presentation explores the practical implications of ethical AI for the future of work, highlighting the risks of bias, opacity, and displacement—and the opportunities for innovation, inclusion, and resilience.
The challenge lies not in the technology itself, but in the decisions surrounding its deployment. Many organizations lack clear governance structures, workforce training, or ethical oversight, leaving them vulnerable to reputational and operational risks. This talk offers a pragmatic approach: embedding ethical principles into AI design, fostering cross-functional accountability, and equipping teams with the skills to navigate change.
We will cover key takeaways: how to assess AI readiness across your organization, how to build ethical guardrails without stifling innovation, and how to create a culture where responsible AI is a shared priority, not just a compliance checkbox. The future of work will be shaped not just by what AI can do, but by how we choose to use it.

Alexander Taboriskiy
CEO & Founder
AI for Business: A Measurement-First Approach
Large language models (LLMs) are powerful, but their out-of-the-box performance often doesn't meet specific business requirements. This gap can make it difficult to use them in real products.
This talk explains our method for optimizing LLM performance for specific business needs. We start by establishing clear metrics and evaluation systems to test against key business specifications, such as safety and usefulness. Based on these measurements, we then develop and refine prompts in a cycle of continuous testing. We will show how this structured process helps us meet high standards for model behavior.
The key takeaway is that reliable AI performance comes from a structured process rooted in measurement. Attendees will learn how a focus on evaluation, which then informs prompt engineering, can turn a general model into a specialized and effective business tool.
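The measurement-first cycle described above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual system: `call_model` is a stand-in stub for any LLM API, and the eval cases, threshold, and candidate prompts are hypothetical.

```python
# Measurement-first loop: define metrics and eval cases before touching
# prompts, then iterate until the business threshold is met.

def call_model(prompt: str, question: str) -> str:
    # Stub: a real implementation would call an LLM endpoint.
    return f"{prompt.split()[0].lower()}: answer to {question}"

EVAL_CASES = [
    {"question": "What is the refund policy?", "must_contain": "answer"},
    {"question": "Is this product safe?", "must_contain": "answer"},
]
THRESHOLD = 1.0  # e.g. 100% of cases must pass before shipping

def evaluate(prompt: str) -> float:
    """Score a prompt against the fixed eval set."""
    passed = sum(
        case["must_contain"] in call_model(prompt, case["question"])
        for case in EVAL_CASES
    )
    return passed / len(EVAL_CASES)

def refine(candidates: list[str]) -> tuple[str, float]:
    # Keep the best-scoring prompt; in practice refinement is driven
    # by failure analysis, not a fixed candidate list.
    scored = [(p, evaluate(p)) for p in candidates]
    return max(scored, key=lambda s: s[1])

best_prompt, score = refine(["Helpful assistant.", "Terse assistant."])
print(score >= THRESHOLD)
```

The point of the structure is the ordering: the evaluation harness exists first, so every prompt change is a measured experiment rather than a guess.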

Kenneth Mulvany
Executive Director and Chair
Beyond Generation: Building Self-Improving Intelligence
Artificial intelligence has already produced the first AI-discovered drug to be approved by the FDA, compressing years of research into days and proving that machines can now reason across billions of data points to reveal connections hidden from human insight.
The next breakthrough is self-improving intelligence: AI systems built on top of large language models that can learn from their own reasoning. They generate hypotheses, test them against evidence, verify what holds true, and retrain on those results, creating a continuous cycle of refinement.
Unlike the broad optimisation behind today’s frontier models, self-improving intelligence is precise and adaptive. It focuses on specific domains, learning directly from verifiable data to build expertise that compounds with every cycle.
This talk explores how this new class of AI could transform discovery, accelerate innovation, and redefine competitive advantage by creating systems that do not just generate information but continually learn how to think better.

Dr. Petar Tsankov
CEO and Co-founder
GenAI Governance, Done Right
Many organizations have defined AI governance principles, such as fairness, transparency, safety, or accountability. Yet when it comes to GenAI, governance often remains abstract, detached from the technical reality of rapidly evolving models.
This session explores how to operationalize GenAI governance through deep technical assessments that generate the hard evidence needed for confident, compliant, and scalable adoption. You’ll learn how to translate governance principles into measurable technical controls that reveal real risks, validate compliance, and build trust across GenAI systems.
We’ll share practical insights and success cases showing how evidence-based governance accelerates GenAI adoption, turning compliance from a constraint into a catalyst for innovation and value creation.
You will walk away with:
A practical approach to turn GenAI governance principles into deep technical controls
Best practices of what works (and what doesn’t) in GenAI governance
Proven methods to scale GenAI responsibly through continuous technical assessments integrated into every deployment
If you’re accountable for GenAI deployment, this session shows how to close the last mile of GenAI, where trust, performance, and compliance come together.

Aaron Kalvani
AI Strategist & Advisor
Human-Centric AI Transformation: The Paradigm Shift Toward Human-Led Artificial Reasoning
As artificial intelligence evolves beyond machine learning into artificial reasoning, humanity stands at the threshold of a profound shift — one that challenges not only our technologies but our values. In this keynote, Aaron Kalvani explores the emergence of human-led artificial reasoning, where governance, cognition, and ethics intersect to guide how intelligent systems perceive and act. Representing the Global Council of Responsible AI, he highlights how strategic foresight and ethical leadership can ensure that AI’s design, deployment, and outcomes serve the broader good of humanity — aligning progress with purpose and intelligence with understanding.
Contact: artificialreasoning@gmail.com
Bio:
Aaron Kalvani is a pioneering technologist and global AI strategist known for developing some of the earliest generative-AI-driven metahumans using Unreal Engine over five years ago. He advises governments and enterprises on large-scale AI transformation, blending technical innovation with ethical governance.
Author of The Ethical Integration of Generative AI: Harnessing Large Language Models for Societal Good (2024), he outlines how generative systems can drive impact across education, healthcare, and environmental resilience. Kalvani’s work unites deep engineering insight with human-centric strategy—bridging creativity, policy, and purpose in shaping the next paradigm of AI.

Dr. Jesús Barrasa
AI Field CTO
How Graphs Underpin More Accurate and Explainable Agents
Knowledge graphs are emerging as a necessary element for bringing GenAI projects from PoC into production. They make GenAI more dependable, transparent, and secure across a wide variety of use cases. They are also helpful in GenAI application development, providing a human-navigable view of relevant knowledge that can be queried and visualised. This talk will share up-to-date learnings from the evolving field of knowledge graphs; why more and more organisations are using knowledge graphs to achieve GenAI successes; and practical definitions, tools, and tips for getting started.
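The "human-navigable, queryable" idea can be illustrated with a toy example. This is my own hedged sketch, not material from the talk: instead of asking an LLM to recall facts, an application retrieves an explicit, auditable chain of facts from a graph and hands it to the model as grounded context. The graph contents and names below are invented.

```python
# A toy in-memory knowledge graph: (subject, predicate) -> object.
GRAPH = {
    ("DrugA", "treats"): "DiseaseX",
    ("DiseaseX", "affects"): "Liver",
}

def grounded_context(start: str, predicates: list) -> list:
    """Follow a chain of predicates from a start node and return the
    traversed facts, which can be cited verbatim in the final answer."""
    facts, node = [], start
    for pred in predicates:
        obj = GRAPH.get((node, pred))
        if obj is None:
            break  # stop rather than hallucinate a missing hop
        facts.append((node, pred, obj))
        node = obj
    return facts

facts = grounded_context("DrugA", ["treats", "affects"])
print(len(facts))  # two explainable hops
```

Each returned triple is a traceable provenance record, which is what makes the resulting agent answer explainable rather than a black-box generation.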

Dr. Carina Kern
Co-Founder
AI-Driven Drug Discovery for Ageing: Beyond GLP-1s to Next-Generation Systemic Therapeutics
AI is transforming drug discovery - but when it comes to ageing, no therapies have yet crossed the regulatory finish line. Why? What’s missing?
Dr Carina Kern, CEO of LinkGevity and a leading voice in ageing science, dives into how AI can go beyond current strategies to reshape our approach to systemic ageing. LinkGevity’s latest discovery may just point the way: a new class of necrosis inhibitors targeting cellular degeneration, with potential both on Earth and in space, where ageing accelerates.
From GLP-1s to the next generation of systemic therapeutics for ageing and age-related disease, this session challenges conventional thinking around longevity, resilience, and the future of AI-driven pharma innovation.

Amrutha Saseendran
Research Scientist
Agentic AI for Science: From Discovery to Design
In this talk, I will share how we at AstraZeneca are applying agentic approaches to accelerate scientific R&D, highlighting specific examples. Along the way, I will also share the practical lessons learned from building agentic systems in real R&D settings, the challenges of aligning agents with expert workflows, and the opportunities for collaboration between humans and agents. The session will also look ahead to broader applications, illustrating how agentic AI can reshape the way we discover, design, and deliver innovation in healthcare.

Dr. Simone Abbiati
Head of Training & Education
Teaching GenAI to Understand: Classification and Knowledge Graph Enrichment
As enterprises embed Generative AI into their data ecosystems, a central challenge emerges: how to transform unstructured content into structured, verifiable knowledge. This session explores how Large Language Models (LLMs) can be operationalised within enterprise knowledge management to automatically classify data, infer relationships, and generate human-readable concept descriptions.
Drawing on recent advancements in semantic tagging and knowledge graph enrichment, the talk demonstrates how LLMs can classify unstructured documents against enterprise taxonomies using conceptual definitions and hierarchies, then extract subject–predicate–object relationships that comply with established ontology rules. It also showcases how the same architecture can propose entirely new candidate concepts and automatically generate their definitions and synonyms, enriching knowledge graphs with contextual metadata that taxonomists would otherwise have to curate manually.
Together, these capabilities create a deterministic, graph-grounded GenAI stack that transforms enterprise text into explainable, auditable, and machine-readable intelligence. Attendees will gain a practical understanding of how to integrate these components into their existing GenAI pipelines to enhance accuracy, compliance, and decision automation at scale.
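One concrete piece of the pipeline described above is admitting LLM-extracted subject–predicate–object triples into the graph only when they conform to the ontology. The following is an illustrative sketch under invented names (ontology, entity types, and candidate triples are all hypothetical), not the speaker's actual architecture:

```python
# Ontology: predicate -> (allowed subject type, allowed object type).
ONTOLOGY = {
    "manufactures": ("Company", "Product"),
    "locatedIn": ("Company", "Location"),
}
# Entity typing, e.g. resolved against an enterprise taxonomy.
ENTITY_TYPES = {
    "Acme": "Company",
    "WidgetX": "Product",
    "Zurich": "Location",
}

def valid_triple(subj: str, pred: str, obj: str) -> bool:
    """Admit a triple only if the predicate exists and both entity
    types satisfy the ontology's domain/range constraints."""
    if pred not in ONTOLOGY:
        return False
    dom, rng = ONTOLOGY[pred]
    return ENTITY_TYPES.get(subj) == dom and ENTITY_TYPES.get(obj) == rng

# Candidate triples as an LLM might emit them from unstructured text:
candidates = [
    ("Acme", "manufactures", "WidgetX"),   # conforms
    ("Acme", "locatedIn", "Zurich"),       # conforms
    ("WidgetX", "manufactures", "Acme"),   # wrong domain/range: rejected
]
admitted = [t for t in candidates if valid_triple(*t)]
print(len(admitted))  # 2 of 3 pass the ontology check
```

Gating generation behind deterministic checks like this is what makes the resulting graph auditable: every admitted fact can be traced back to a rule it satisfied.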

Gary Rawlins
UK AI Solutions Specialist

Deepak Paramanand
Head of AI

Pin Tian Liu
Machine Learning Specialist
Small to XLarge: How Image Generation Models Are Impacting Architectural Design
Since the release of Stable Diffusion XL in 2023, image generation models have moved beyond popular imagination to become practical tools in architectural design offices. As these models scale in both size and capability, their increasing ability to precisely render images expands the scope of their application – opening new opportunities for design exploration and workflow innovation. How may we best implement these ever-evolving workflows through the sustained collaboration between developers and designers?

Denis Samuylov, PhD

Oren Dinai, Ph.D
GenAI and NLP Tech Lead
Proving It Works: Evaluating Agentic AI Beyond Benchmarks
As GenAI systems evolve into multi-step, reasoning agents, evaluation becomes the hardest part to get right. Traditional benchmarks measure single responses, but they fail to capture whether an agent can plan, act, and deliver a correct end result. This talk presents a practical framework for evaluating agentic workflows, combining goal-completion metrics, step-level validation, regression testing, and failure simulation, to help teams move from demo-ready to production-ready systems.
Dr. Oren Dinai will also discuss evolving techniques such as LLM-as-judge scoring, rubric calibration, and human oversight, showing how they can make evaluation faster yet more reliable. The session emphasizes one key idea: evaluation should not be an afterthought, but a built-in part of design, ensuring GenAI systems are measurable, reproducible, and trustworthy at scale.
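The core idea, scoring goal completion and step-level validity together, can be sketched briefly. This is my illustration of the concept, not Dr. Dinai's framework; the trace structure and example values are invented:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    ok: bool  # did step-level validation pass?

@dataclass
class Trace:
    steps: list
    final_answer: str

def evaluate_trace(trace: Trace, expected: str) -> dict:
    """A lucky final answer reached through broken intermediate steps
    still fails: production-ready means goal met AND every step valid."""
    goal = trace.final_answer == expected
    step_rate = sum(s.ok for s in trace.steps) / len(trace.steps)
    return {"goal": goal, "step_rate": step_rate,
            "passed": goal and step_rate == 1.0}

trace = Trace(
    steps=[Step("plan", True), Step("search", True), Step("answer", False)],
    final_answer="42",
)
result = evaluate_trace(trace, expected="42")
print(result["passed"])  # goal met, but a step failed validation
```

Running the same harness on every change turns it into the regression suite the talk argues for: a benchmark score alone would have called this trace a success.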

Ned Seagrim
AI Growth Lead
Dialogue With Data: Achieving Customer-Centricity in the AI Era
Imagine a world where you could access the voice of your customer at any moment.
Well... now you can.
What it means to be a truly customer-centric business has changed. Traditional customer research and a needs-based lens remain critical, but now, they're only one side of the equation.
This session will explore the unprecedented opportunities that come with harnessing synthetic audiences. With digital representations of your target customer and key stakeholders, businesses can now embrace continuous collaboration, more rigorous interrogation, and a deeper, more dynamic understanding of customer needs.
Join me as we uncover the future of customer-centricity through dialogue with data.

Daniel Schwarz
CEO and Founder
How Lawyers Are Using AI to Transform Compliance With Regulations in Global M&A Deals
A tidal wave of regulations has hit M&A deals, and it keeps getting bigger. Companies may have to comply with hundreds of regulatory regimes when they look to acquire other businesses. If they get it wrong, their deals can be blocked, they can be fined, and directors can go to jail. But it's difficult to comply with an ever-growing list of complex regulations when the key facts are buried in thousands of documents. We've translated our Magic Circle legal experience into AI-powered software to dramatically improve the process for lawyers and the companies they advise.

Katarina Granath
Senior Transformation Manager
Developing AI Solutions Amid Rapid Tech Shifts and Operational Change
In today’s rapidly evolving technological landscape, the intersection of organizational transformation and individual innovation is more critical than ever. This speech explores the journey of driving innovation within a dynamic corporate environment, highlighting how structural and operational changes shaped this path. It delves into the balance between a company’s need to adapt swiftly to external shifts and the individual’s role in fostering innovation, emphasizing how one influences the other.
Key themes include the importance of agility, continuous learning, and cross-unit collaboration. Initiatives like the AI Circle and AI Community played pivotal roles in fostering cultural change, enhancing AI literacy, and promoting the acceptance of generative AI. Structured programs such as internal job rotations and mentorship provided the foundation for focused, entrepreneurial growth, while resilience in navigating complex, long-term projects underscored the need for adaptability and strategic decision-making. The speech also shows how the strong and supportive culture driven by management ensured alignment, success, and scalability.
Katarina Granath, as part of SIX’s strategic initiatives, exemplifies this journey. She spearheaded the development of one of the company’s first Generative AI Bots in 2023, significantly improving data mapping efficiency and aligning with SIX’s data-driven strategy. She has a background as entrepreneur, is a lecturer at HWZ Zurich, and speaker at conferences in Zurich and London.

Robert Berke
CTO

Dr. Manuel Flurin Hendry
Researcher, Lecturer, Creative Director
Your Soul Deserves a Glitch: Critiquing Solutionism Through Narrative Failure
Drawing on more than 20 years of filmmaking experience and research in affective computing, this talk presents "Friendly Fire at the Shrink", an interactive installation built around the fictional mental health AI startup «MindFix». Our work utilizes state-of-the-art generative AI to investigate how humans respond to emotionally expressive systems, and how quickly our belief in their intelligence collapses when we experience their limitations firsthand. This presentation will also detail the project's broader applications in research and education, including workshops on conversational design, presentations at leading AI conferences like NeurIPS 2025, and ongoing neuroscientific studies at ETH Zurich's Decision Science Laboratory to further explore the installation's cognitive and societal impact.

Philipp Hoelzenbein
CAD Without Barriers: AI Copilot for Digital Design
Digital design has long empowered architects, engineers and designers to explore advanced geometries and automate CAD workflows. However, the CAD tools to do this, while powerful, present a steep learning curve. This creates a barrier between design intent and design execution, and makes design processes time-consuming, frustrating and certainly not fun.
In this talk, we’ll show how Romantic Labs' AI Copilot 'Raven' removes these barriers, making advanced digital design accessible, efficient and fun. Users can describe their design intent in plain language or with photos of their sketches. The AI interprets the intent, builds the underlying parametric graph, and connects seamlessly into existing CAD software workflows to provide the user with advanced services, like structural optimization or climate analyses.
We’ll explore how this approach harnesses the entire CAD ecosystem, automates repetitive modeling tasks, and accelerates iteration cycles — while preserving full control and editability for advanced users. Attendees will see relevant use cases and examples of natural language prompts generating complex parametric forms in seconds and automating previously tedious tasks. By merging generative AI with the parametric modeling paradigm, we are entering a new era of CAD: one where designers collaborate with intelligent agents, not just software. Romantic Labs provides not only an incremental upgrade but lays the foundation for how humans will create with computers in the AI era.

Anne-Marie Buzatu
Executive Director

Eric Anderegg

Ruhi A Jivraj
Senior Data Analyst
People First, Always: Unlocking Human Potential in the Age of AI
The rapid rise of generative AI is often framed as a technological race, but the real challenge lies in how people adopt, trust, and meaningfully use these tools. Too often, adoption strategies focus on functionality and efficiency while overlooking the cultural, psychological, and ethical dimensions that ultimately shape human behaviour. The problem is not whether generative AI can transform industries—it already can—but whether individuals feel empowered, confident, and inspired to use it in ways that create lasting value. Without this human alignment, even the most advanced AI risks remaining underutilised or misapplied.
My approach reframes AI adoption as a human-centric transformation. This means recognising that trust, inclusivity, and imagination are not soft factors but core enablers of technological progress. I will explore how organisations can move beyond transactional training and instead cultivate curiosity, resilience, and a sense of shared ownership over AI. By creating environments that encourage safe experimentation and open dialogue, we can shift from a narrative of fear and displacement to one of empowerment and possibility.
The key takeaway is clear: the future of generative AI will be defined less by what machines can do and more by what people dare to do with them. Leaders who prioritise psychological safety, equity, and culture will unlock deeper adoption and innovation. My message is that generative AI is not simply a story about technology—it is a story about people reclaiming their potential in an age of accelerating possibility.

Thibault Jaigu
CEO & Co-Founder
LLM Gateways and the Hidden Infrastructure of GenAI
Generative AI systems depend on strong and reliable infrastructure. As organizations integrate multiple large language models, managing connections, costs, and compliance becomes increasingly complex. This talk explains the role of LLM gateways as a core layer that enables scalability, security, and efficiency in modern AI platforms. It highlights how unified access, routing, and monitoring help teams simplify operations and maintain control. Attendees will learn practical methods for building multi-model strategies, optimizing performance, and ensuring responsible use of AI at scale. The main takeaway is clear: a well-designed gateway layer is essential for sustainable and enterprise-grade GenAI systems.
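The routing and monitoring responsibilities the abstract describes can be illustrated with a minimal sketch. Everything here is hypothetical: the provider names, per-token prices, and handler stubs are invented stand-ins for real provider SDK calls, and a production gateway would add authentication, rate limiting, and streaming.

```python
import time

# Hypothetical sketch of an LLM gateway's core jobs: unified access,
# cost-aware routing across providers, and per-call usage monitoring.

class LLMGateway:
    def __init__(self):
        self.providers = {}   # name -> (handler, cost per 1k tokens)
        self.usage_log = []   # one record per call, for monitoring

    def register(self, name, handler, cost_per_1k_tokens):
        self.providers[name] = (handler, cost_per_1k_tokens)

    def complete(self, prompt, max_cost_per_1k=float("inf")):
        # Route to the cheapest provider within budget; fall back on failure.
        candidates = sorted(
            (cost, name) for name, (_, cost) in self.providers.items()
            if cost <= max_cost_per_1k
        )
        for cost, name in candidates:
            handler, _ = self.providers[name]
            try:
                start = time.monotonic()
                reply = handler(prompt)
                self.usage_log.append({
                    "provider": name,
                    "latency_s": time.monotonic() - start,
                    "cost_per_1k": cost,
                })
                return reply
            except Exception:
                continue  # provider failed; try the next one
        raise RuntimeError("no provider available within budget")

# Toy handlers standing in for real provider SDK calls.
gateway = LLMGateway()
gateway.register("small-model", lambda p: f"[small] {p}", cost_per_1k_tokens=0.1)
gateway.register("large-model", lambda p: f"[large] {p}", cost_per_1k_tokens=1.0)

print(gateway.complete("Summarise this contract."))  # routed to small-model
```

Because every call goes through one chokepoint, the same few lines give the organisation a single place to enforce budgets, swap models, and audit usage.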

Victor Montefiore
Director
From Manual to Machine: Transforming Risk Oversight With AI
In today’s complex regulatory landscape, gaining a clear view of operational and compliance risk is as critical as ever. At a systemically important financial institution, I led a transformation of risk reporting and oversight by applying AI to automate the mapping of thousands of internal controls to external regulations and internal policies. Starting with a low-cost proof-of-concept, I demonstrated significant effort reduction and ultimately improved the quality of oversight through a scaled-up solution. This session will share lessons from that journey, including the design principles that proved effective, how I mitigated common pitfalls such as false positives and SME fatigue, and how AI has been practically applied to deliver real impact for Risk Managers, the Risk Committee, and the Chief Risk Officer.

Luba Elliott
Creative AI Curator
AI Art: From Technology to Culture
The popularity of AI art has exploded over the past few years. From its beginnings with DeepDream in 2015, AI art has moved beyond technology circles into the public eye, shaping media art, contemporary art, and digital culture. The rise of prompting, multimodal AI, and AI agents has expanded creative possibilities, while concerns over AI slop and copyright continue to fuel critical discourse. As generative AI tools become widely accessible, artists are exploring new forms of collaboration, aesthetics, and authorship. This talk will give an overview of how artists and technologists use and think about AI, its creative potential, and societal impact.

Evolution of Creative Machines
This talk explores how today's creative AI and machine learning movement (both as production methods and as an expressive medium) connects to its roots in older traditions in computational art, specifically procedural graphics and generative practices that existed for decades before neural networks became mainstream. These earlier approaches established many of the core concepts now resurfacing in AI-driven creativity.
Understanding this evolution matters because it provides a clearer picture of what generative systems can actually do, beyond the current hype. By tracing how these ideas developed over time, we can better assess both the genuine capabilities of modern approaches and where they might realistically go next.
Since the author works primarily with visual media, the presentation will be illustrated with visual examples, including work from both generative art history and the author's own practice. These pieces demonstrate how concepts and techniques have evolved from basic algorithmic approaches to contemporary AI methods.
The talk is designed for both business and technical audiences looking to understand generative systems with better historical grounding and context.

Guli Silberstein
Artist
AI as Digital Disruption: The Role of AI Art
In recent years, Artificial Intelligence (AI) has emerged as a significant driver of digital disruption across various industries. Among its many applications, AI art stands out as a particularly intriguing and transformative field. This lecture explores AI as a force of digital disruption, with a specific focus on how AI video art redefines traditional artistic boundaries, challenges established norms, and opens new possibilities for imagination, creativity and cultural expression.

Kannupriya Kalra
Engineering Leader
Building Reliable AI Systems: From Hype to Practical Toolkits
As AI adoption accelerates, organizations face a common challenge: moving beyond flashy demos to building production-grade systems. In this talk we will introduce LLM4S, a framework designed to bring structure, reliability, and scalability to AI applications. We will explore how features such as retrieval-augmented generation (RAG), the Model Context Protocol (MCP), and other GenAI techniques can be applied in practice, and why building on a strong, complete toolkit matters for long-term success. This session is designed for engineers, data professionals, and business leaders alike, blending technical insight with real-world applications. Attendees will leave with a deeper understanding of what it takes to transform large language models into dependable AI systems.

Hesham Shawqy
Computational Design Specialist
Open‑Source AI in CAD: Generative Design and Urban Mining
Architects and computational designers are increasingly faced with the challenge of managing complex visual data while seeking more efficient, creative, and sustainable design workflows. Traditional CAD tools, while powerful, often fall short in automating repetitive tasks or unlocking deeper insights from visual information. This talk explores how open‑source artificial intelligence (AI) models, specifically computer vision techniques, can be integrated into CAD tools to transform the way we design and analyze architecture.
The session shows how machine learning can run tasks such as image segmentation, object recognition, pattern detection, and image‑to-data workflows. Through live demonstrations and hands‑on examples, it highlights how AI not only accelerates design automation but also expands creative exploration by uncovering patterns and possibilities that might otherwise remain hidden.
Attendees will gain insight into how to embed AI directly into design workflows, enabling architects and designers to build their own libraries of AI‑powered components. The talk will highlight not only the technical possibilities but also the broader implications of AI in shaping more intelligent, adaptive, and resource‑aware design practices.

Aditya Gudipudi
Building the Product
How Can AI Be Your Employee
Small business owners are overwhelmed. Juggling social media, writing email campaigns, and managing customer messages is a full-time job that drains valuable time and energy, leading to burnout and missed growth opportunities.
MATI AI provides a simple, powerful solution: a dedicated "remote AI employee." Our platform automates all your marketing tasks from one intuitive dashboard. Whether you need a week's worth of social media content, a promotional email blast, or automated WhatsApp replies, your AI employee handles it all.
By delegating these complex tasks, MATI AI gives you back your time, allowing you to focus on what truly matters—running your business. It’s your expert, scalable marketing team, available 24/7 at a fraction of the cost.

Dmytro Fedoruk
Founder & CEO
AI Platform Transforming How Law Firms Win New Business
Ranking Copilot automates the back office of law firms — replacing manual ranking, marketing, and fee proposal workflows with AI-driven automation. Within one year, we aim to automate all three core business development functions and cut costs by 90%.
The company is VC-backed, ISO 27001 certified, and already used by 30+ law firms.

Jai Parmar
Head of AI
Agentic Applications Need Agentic Context Engineering for Success
Prompt engineering has been the main focus of many applications, but without context, many GenAI use cases fail to deliver value: context is key. GenAI applications need end-to-end visibility into questions and solutions so that each prompt can be continuously improved and deliver true success.

Gena Frangina
Founder/Host
Prompt Your Way to Calm: How GenAI Can Help Humans Reboot, Not Burn Out
In an era where Generative AI accelerates innovation, delivery, and decision-making, human systems are running at full capacity. IT professionals and leaders face constant pressure to adapt — but rarely to pause. The result: rising burnout, emotional fatigue, and disconnection, even in the most forward-thinking teams.
This workshop reframes that challenge through a new lens — prompt engineering for the human mind. Drawing from her background as a software engineer, clinical hypnotherapist, and wellbeing strategist, Gena Frangina demonstrates how the same principles that make LLMs effective — clarity, context, and intention — can help people build emotional resilience. Participants will learn to craft “mental prompts” that shift mindset, reduce overload, and strengthen focus, while also exploring how AI can act as an adaptive partner for recovery and reflection.
Attendees will leave with practical tools to:
Reframe stress using simple prompt patterns.
Use GenAI for guided resets and micro-rituals.
Apply AI-driven wellbeing strategies in real work settings.
Key takeaway: Calm is not the opposite of productivity — it’s what sustains it.

Deejay
UK Partner

Shubhangi Goyal
Senior Analyst
Context Driven AI Agents
In this session, I will explore how agents are built and how they use dynamic context to execute complex tasks and deliver real value, including how context engineering moves interactions beyond single-prompt exchanges. Key takeaways:
Context engineering: managing memory, tools, and tasks for adaptive behaviour
Agent architecture: LLMs and feedback loops
Fitting prompt engineering into the larger system
Real-world use cases and common pitfalls when deploying AI agents
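The loop behind these takeaways can be sketched in a few lines. This is an illustrative toy, not a production agent: the `decide` function stands in for an LLM planning call conditioned on the goal plus accumulated context, and the single `search` tool is invented for the example.

```python
# Minimal sketch of a context-driven agent loop: a planner (stubbed)
# picks a tool based on the memory built up so far, the observation is
# fed back into memory, and the loop repeats until the planner finishes.

def decide(goal, memory):
    # Stand-in for an LLM planning call that sees goal + full context.
    if not any(step["tool"] == "search" for step in memory):
        return {"tool": "search", "arg": goal}
    return {"tool": "finish", "arg": memory[-1]["result"]}

TOOLS = {
    "search": lambda query: f"top result for '{query}'",  # toy tool
}

def run_agent(goal, max_steps=5):
    memory = []  # working context: every action and its observation
    for _ in range(max_steps):
        action = decide(goal, memory)
        if action["tool"] == "finish":
            return action["arg"]
        result = TOOLS[action["tool"]](action["arg"])
        memory.append({"tool": action["tool"], "result": result})
    return None  # step budget exhausted without finishing

print(run_agent("common pitfalls when deploying AI agents"))
```

The point of the sketch is the feedback loop: each decision is conditioned on everything observed so far, which is what distinguishes an agent from a single prompt-response exchange.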

Emil Jose
Senior Motion Control Software Engineer
How the Automotive Industry is Leveraging Generative AI for Productivity Enhancements
The automotive industry is rapidly adopting Generative AI to boost productivity across engineering and software development. Beyond its early use in design and simulation, AI is now transforming everyday documentation workflows, helping teams draft requirements, generate test cases, and perform impact analysis with greater speed and consistency.
By combining domain expertise with contextual understanding and knowledge graphs, these tools can capture engineering intent, trace dependencies, and maintain quality across complex systems. When applied within model-based development environments, they also support software artefact generation, change management, and validation as requirements evolve.
Across the sector, we’re beginning to see AI copilots embedded directly into engineering toolchains, helping teams manage complexity and stay compliant while accelerating delivery. As vehicles become more software-defined, these GenAI-driven workflows are proving key to improving efficiency, maintaining traceability, and freeing engineers to focus on innovation.

Tatiana Botskina
CEO
Where AI Fails: Lessons from Explainable AI in High-Stakes Decision-Making
As AI systems increasingly shape high-stakes decisions in law, finance, and governance, their ability to reason transparently and align with human principles of fairness and accountability is crucial. Yet, even the most advanced large language models (LLMs) often fail to reproduce the nuanced reasoning expected in such domains.
Drawing on my research at the University of Oxford on explainable AI in legal decision-making, this talk explores where and why AI reasoning breaks down.
Attendees will gain a practical framework for identifying "AI hallucinations" in reasoning and learn concrete steps for evaluating and mitigating these failures. The session bridges theory and practice, showing what it takes to build transparent, trustworthy, and legally aligned AI systems for high-stakes decision-making.

Parth Amin, Ph.D.
Founder
AI-Powered Personalised Learning and Assessment
Our startup, SmartAssess, is building an AI-powered assessment and personalised learning platform for GCSE and A-level students that combines exam-board accuracy with generative feedback and adaptive learning. Teachers spend an average of 10–12 hours per week marking essays and assignments, often providing limited feedback due to time pressure. At the same time, students—especially in state schools—struggle to access the kind of detailed, individualised feedback available in top private schools. SmartAssess bridges this gap by automating marking while keeping teachers firmly “in the loop.”
Our system ingests handwritten or digital essays through OCR and text-processing pipelines. Each script is evaluated criterion-by-criterion against the relevant exam-board marking scheme (e.g. Knowledge, Analysis, Evaluation). Using rubric conditioning and retrieval-augmented generation (RAG), the AI generates structured, teacher-style feedback that mirrors how experienced educators comment on student work. Teachers can review and override AI judgments, ensuring human oversight and trust.
Over time, teacher adjustments continually calibrate the model—creating a feedback loop that learns each teacher’s marking style. Unlike generic AI marking systems, SmartAssess focuses on transparency and pedagogy. Each feedback point is backed by evidence and explanations, making it auditable and easy to moderate. Teachers save up to 10 hours weekly, while students receive rich, actionable feedback that goes beyond “right or wrong.” Using generative AI and RAG, SmartAssess transforms feedback into personalised learning journeys.
Once the AI identifies a student’s weaknesses—for example, poor evaluation or lack of application—it retrieves the most relevant explanations, model answers, or videos from a vector database built from textbooks and revision guides. This allows the system to recommend targeted content instantly—turning feedback into action. Our technology stack combines LLM prompt engineering, fine-tuned scoring models, OCR pipelines, and RAG-based content retrieval. The models are hosted securely in the UK and can be customised for schools, tutors, and institutions. We are validating the product with pilots at Tonbridge School and a tutoring company serving 120+ students, showing strong alignment between AI and teacher marking.
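The retrieval step described above can be sketched as follows. This is a toy illustration only: a bag-of-words cosine similarity stands in for SmartAssess's actual embeddings and vector database, and the topic library is invented for the example.

```python
from collections import Counter
import math

# Toy stand-in for a vector database of revision content, keyed by the
# skill it addresses. A real system would store learned embeddings of
# textbook and revision-guide passages.
LIBRARY = {
    "evaluation": "How to weigh evidence and reach a judgement in essay conclusions",
    "application": "Applying theory to the case study given in the question",
    "knowledge": "Core definitions and facts required by the specification",
}

def embed(text):
    """Bag-of-words 'embedding' -- purely illustrative."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(weakness, k=1):
    """Given a diagnosed weakness, return the k best-matching content topics."""
    q = embed(weakness)
    ranked = sorted(LIBRARY.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [topic for topic, _ in ranked[:k]]
```

With a diagnosed weakness such as "poor evaluation of evidence in conclusions", the retriever surfaces the evaluation-focused content, which is the "feedback into action" step the abstract describes.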
By combining rigorous assessment with generative, adaptive learning, SmartAssess is reimagining how schools assess, give feedback, and personalise learning.

Ben Glass
GTM
AI Productivity for Financial Advisers
AdvisoryAI is an AI productivity platform that helps financial advice firms double client capacity by automating the admin and compliance work that limits growth. Only around 8% of people in the UK receive regulated financial advice—largely because advisers spend 60% of their time on documentation instead of clients. AdvisoryAI changes that. We’ve built three generative AI assistants trained on each firm’s templates, tone, and compliance rules:
1. Evie – Intelligent Meeting Assistant: Converts adviser–client conversations into structured meeting notes, fact finds, and CRM updates. Evie uses speech-to-text, semantic clustering, and context-aware summarisation to capture every commitment and soft fact.
2. Emma – Paraplanning at Pace: Generates compliance-ready suitability and review reports in minutes. Emma extracts data from illustrations, fact finds, and LOA packs, citing every source for audit-ready traceability.
3. Colin – Compliance Assurance: Checks all outputs against the FCA Handbook and Consumer Duty, flagging risks and linking each recommendation to its evidence.
Unlike generic AI tools, AdvisoryAI’s models are fine-tuned on each firm’s documents and phrasing, producing outputs indistinguishable from their best paraplanners—while maintaining full compliance. We combine generative AI with retrieval-augmented generation, citation systems, and firm-level reinforcement learning to ensure accuracy and continuous improvement. AdvisoryAI embeds into existing workflows by integrating with the largest back-office systems, including Intelliflo Office, Plannr, and Microsoft Teams. Firms cut report-writing time by 70%, meeting-note prep by 85%, and compliance review cycles by 50%. Used by 50+ UK firms including LIFT-Financial and Bluecoat Wealth, AdvisoryAI has won “Best in Show” at EATT 2024 and 2025 and, more recently, the Schroders Outstanding Innovation Award.
Built by former advisers and MIT engineers, we’re making quality financial advice scalable and accessible to more people than ever before.

George Proud
Co-Founder & CEO

Jolanta Jas
Founder & CEO

Harsh Tripathi
CEO & Founder
AI-Powered Knowledge-on-Demand
HyrEx is an AI-first intelligence platform for the finance and tech industries, built to fix the broken expert network market.
Our Value Proposition
The legacy expert network model (e.g., GLG, AlphaSights) is slow, manual, and built on a high-cost, 1-hour minimum. This model fails both sides of the marketplace:
For Clients (VCs, PE, Consultants, Startups): They are forced to pay ~$1,200 for a 1-hour call when they often just need a 10-minute "sense check." The process is slow, relevance is poor, and the high cost blocks usage for high-frequency, everyday decisions.
For Experts (Senior Operators): They are frustrated by "spammy," irrelevant outreach, wasting time on unpaid screening calls, and filling out repetitive forms. Crucially, their knowledge is often resold (via transcripts) without them earning royalties, a major source of distrust.
HyrEx solves this by unbundling the traditional call. Our value proposition is a two-part model:
AI-Powered "Digital Twins": An on-demand library of AI personas trained on the specific knowledge of hundreds of vetted, senior operators. Clients can query these Twins instantly for a fraction of the cost (e.g., 0.1 credits).
15-Minute "Micro-consultations": A flexible, low-cost credit model (0.25 credits) that unlocks access to 1:1 expert calls, eliminating the 1-hour minimum.
For clients, we deliver a strategic advantage by providing immediate, credible insights at a fraction of the cost. For experts, we eliminate unpaid admin and create a new, scalable, recurring revenue stream.
Our Application of Generative AI
Generative AI is the core of our platform and intellectual property.
Creation of the "Digital Twin": We use generative AI to "productize" an expert's knowledge. During onboarding, we conduct a structured 30-minute interview and use an AI pipeline (including RAG) to create a "virtual persona" of that expert. This is a Q&A bot trained exclusively on their specific knowledge, industry experience, and unique opinions.
Client-Side (AI-Triage): Clients use this generative AI interface to query the Digital Twins first. This provides instant, credible answers and allows the client to validate their hypothesis and confirm the expert's relevance before committing to a 1:1 call. It de-risks their research spend.
Expert-Side (AI-Monetization): Generative AI does 90% of the "productization" work. This allows the expert to scale their time and earn passive, royalty-based income every time their Digital Twin is queried.
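The digital-twin triage step above can be sketched in a few lines. Everything here is an illustrative assumption rather than HyrEx's actual stack: a naive word-overlap retriever stands in for the RAG pipeline, and the prompt template is invented for the example.

```python
def retrieve(snippets, question, k=2):
    """Rank an expert's interview snippets by naive word overlap with the question."""
    q = set(question.lower().split())
    return sorted(snippets,
                  key=lambda s: len(q & set(s.lower().split())),
                  reverse=True)[:k]

def build_prompt(expert, snippets, question):
    """Assemble a grounded prompt so the persona answers only from its own notes."""
    context = "\n".join(f"- {s}" for s in retrieve(snippets, question))
    return (
        f"You are a digital twin of {expert}. Answer ONLY from the notes below; "
        f"if they do not cover the question, say so.\n"
        f"Notes:\n{context}\n"
        f"Question: {question}"
    )
```

The key design point is the grounding constraint: restricting the persona to retrieved interview material is what makes its answers attributable to the expert rather than to the base model.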

Michael Sujith
Founder & CTO
The AI Employee: Is It Real, and How Much Can It Help My Business?
Every company today is redefining itself faster than ever—and AI Agents are at the center of that transformation. In this session, you’ll get a chance to talk directly to an AI Agent and experience firsthand how these digital teammates can sell, support, and operate just like real team members.
We’ll explore what Sales, Customer Support, and Operations could look like in any modern business—whether it’s a one-person startup or a global enterprise—when powered by AI Agents. You’ll see how they can qualify leads, handle customer queries, manage workflows, and make decisions in real time.
This isn’t about replacing humans; it’s about enabling businesses of any size to scale expertise, speed, and service quality without increasing headcount. You’ll also learn how easy it is to create your own AI Agent using today’s no-code tools—turning what once took months of development into a project you can launch in days.
Key takeaways:
What AI Agents really are and how they differ from traditional automation
Live demo: interact with a Voice AI Agent in action
Practical use cases across Sales, Support, and Operations
How to build and deploy your own AI Agent quickly and effectively with Wec.ai

Joseph Khoury
Founder & CEO

Meenatchi Sundari
Academic Representative
How Can We Improve System 2 Thinking?
Large Language Models (LLMs) are powerful AI systems with advanced capabilities to understand and generate human-like text, enabling impactful applications such as chatbots, content creation, and coding assistance. However, they face key limitations that are important to acknowledge:
LLMs excel in quick, intuitive "fast thinking" but struggle with complex, step-by-step logical reasoning and problem-solving, sometimes producing confident yet incorrect answers in areas like math, planning, and analysis.
Their reasoning is less deliberate and analytical, which can lead to errors in critical domains such as medicine, finance, and law.
LLMs have computational limits on the amount of text they can process at once, and their knowledge is static, constrained to the data available during training, making their outputs sometimes outdated or biased.
They can generate hallucinations – fabricated or false information – necessitating human oversight for trustworthy results.
Ongoing advancements seek to improve accuracy and interpretability by integrating logical structures, explicit reasoning chains, and formal verification into LLM workflows.
Understanding these strengths and weaknesses is crucial for effectively and responsibly leveraging LLMs in real-world applications.
In sum, LLMs are transformative tools with impressive language abilities, but they require careful use and further development to overcome reasoning and reliability challenges. A balanced understanding of these trade-offs helps maximize their potential while mitigating risks in practical deployment.
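One of the mitigation strategies mentioned above, integrating verification into LLM workflows, can be sketched simply: instead of trusting a model's arithmetic, extract the claimed expression and answer and re-check them deterministically. This is a minimal illustration under that assumption, not a production verifier.

```python
import ast
import operator as op

# Supported arithmetic operators for safe, eval-free evaluation.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression by walking its AST (no eval/exec)."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def verify_claim(expr, claimed):
    """Accept a model's numeric claim only if independent re-computation agrees."""
    return abs(safe_eval(expr) - claimed) < 1e-9
```

The pattern generalizes: let the LLM do the fast, intuitive generation, then gate its output through a slow, deterministic checker, which is exactly the System 2 complement the talk title asks about.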

Ved Luhana
Product Manager
Role of a PM in an AI-native World
This talk will take you through what a 10x PM should be doing to keep their dev team ticking along, and what they should be learning to further their team. We'll cover how a PM can enable their dev team, how to leverage AI tools at work to speed up your own processes, and how we take tickets from coding to programming!

John White
Founder & Managing Partner

Lucy G.
Temporal ASI – Visiting from the Future
The Talk That Already Happened


Sponsors and partners
Supported by forward-thinking companies shaping the GenAI transformation
Our esteemed partners are pushing the boundaries of what's possible with Generative AI. Join the GenAI leaders shaping what's next.

Join us
Europe’s go-to conference for GenAI leaders and enthusiasts
Attend GenAI London to stay at the forefront of Generative AI, connect with the minds shaping the technology’s future, and explore its real-world impact across industries.
