All times are in EDT (Eastern Daylight Time, which is 6 hours behind CEST, Central European Summer Time). Click on the presentation or scroll down to see its abstract and author biography.
September 22, Monday
- 08:45 EDT Networking Tea & Coffee, Informal Hello
- 09:00 EDT Welcome to DecisionCAMP-2025 — Dr. Jacob Feldman, DecisionCAMP Chair (Slides Recording)
- 09:10 EDT Harness GenAI and Agentic AI without Repeating Yesterday’s Flaws — Denis Gagné (Trisotech, Canada) (Slides Recording)
- 10:00 EDT Rational or Just Predictive? Comparing Reasoning in LLMs and Symbolic Systems for Automated Decisions — Pierre Feillet, Guilhem Molines, Mariia Berdnyk (IBM, France) (Slides Recording)
- 11:00 EDT To Model (or Not) in 2025: Lessons from Three Enterprise Use Cases — Dr. Greger Ottosson (Cube5 AI, France) (Slides Recording)
- 11:50 – 12:05 EDT Break
- 12:05 EDT Rule Learner: Building Decision Models from Examples — Dr. Jacob Feldman (OpenRules, USA) (Slides Recording)
- 13:00 EDT From AI to Actionable Intelligence Through Real-life Use Cases — Carole-Ann Berlioz (Sparkling Logic, USA) (Slides Recording)
September 23, Tuesday
- 09:00 EDT Beyond LLMs: INSA – Integrated Neuro-Symbolic Architecture — Peter Voss (Aigo.ai, USA) (Slides Recording)
- 10:00 EDT Unlocking A New Value Proposition for Optimization — Dr. Meinolf Sellmann (InsideOpt, USA) (Slides Recording)
- 11:00 EDT Turning Decisions into Agents: DMN Meets GenAI — Alex Porcelli (Aletyx, USA) (Slides Recording)
- 11:50 – 12:05 EDT Break
- 12:05 EDT Building Reliable AI Systems: Automating “Slow Thinking” in AI — Prof. Gopal Gupta (The University of Texas at Dallas, USA) (Slides Recording)
- 13:00 EDT INTERACTIVE PANEL “ASK AN EXPERT” (Recording)
September 24, Wednesday
- 09:00 EDT Decision-Centric Framework for Insurance Claim Handling – Don’t Trust the Process, Trust Your Decisions — Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE, UK) (Slides Recording)
- 10:00 EDT Can SDMN solve the data-chaos in business process modelling? — Paul Kockmann and Daniel Schmitz-Hübsch (Materna Information & Communications SE, Germany) (Slides Recording)
- 11:00 EDT GenAI and business process management: indefinite conversations with structured results — Tom Debevoise (Advanced Component Research, USA) and Denis Gagné (Trisotech, Canada) (Slides Recording)
- 11:50 – 12:05 EDT Break
- 12:05 EDT Robots Need Rules: Enabling Humanoid Robots Through Decision Intelligence and Declarative AI — Nathaniel Palmer (Infocap AI Corp, USA) (Slides Recording)
- 13:00 EDT Operationalizing Compliance: The Possibility of Executable DMN Regulations — Brian Stucky (Decision-X, USA) (Slides Recording)
September 25, Thursday
- 09:00 EDT Adaptive and Continuous Decisions is the Future of Decision Intelligence — Arash Aghlara (FlexRule, Australia) (Slides Recording)
- 10:00 EDT Human AI in the Loop Rule Modeling: From Passive Rules to Active Assurance — Seth Meldon (Progress, USA) (Slides Recording)
- 11:00 EDT Decisions Demystified: New Approaches to Traceability and Explainability — Vincent van Dijk (https://pharosius.nl/, Netherlands) (Slides Recording)
- 12:00 EDT Automating Inbound Decisions Using Product, Stock and Forecast Data — Martin de Villiers (UK) (Slides Recording)
- 13:00 EDT Extending DMN with User FEEL Libraries — Dr. Octavian Patrascoiu (Goldman Sachs, UK) (Slides)
- 13:30 EDT Closing Remarks — Dr. Jacob Feldman, DecisionCAMP Chair (Recording)
- 13:45 EDT Networking tea/coffee/wine, informal discussion
Presentation Abstracts and Author Biographies
Harness GenAI and Agentic AI without Repeating Yesterday’s Flaws by Denis Gagné (Trisotech)
Decision automation has evolved from hard-wired rules into code, to extracted rules and rule engines, to model-driven decisions, to decision-centric orchestration. Marketing now trumpets generative AI (GenAI) assistants and loosely coupled agent swarms (Agentic AI) as the next leap. They bring new flexibility, yet risk reviving yesterday’s brittle decision automation shortcuts. This session proposes a pragmatic path forward.
We begin with a rapid historical scan, showing how each advance removed one bottleneck while exposing another. From that lineage emerges a simple maxim: intent before implementation. Standard BPM+ process, case, and decision models remain the clearest way to express intent, assign accountability, and embed human judgment. In the AI era they become the contract layer, governing not just people but autonomous agents as well.
A concise pattern catalogue then maps ascending levels of AI involvement, from lightweight assistive calls to fully autonomous collaborations. Each pattern pairs free-form language generation with deterministic checks and wraps every model in a self-describing interface, so agents, or humans, can invoke it safely.
Finally, five governance pillars (transparency, responsibility, understandability, safety, and traceability) anchor the discussion. Participants leave with a clear mental framework and a reusable checklist for adopting generative and agentic capabilities without surrendering the guardrails painstakingly built in earlier automation waves.
Keywords: DMN, BPM+, GenAI, LLM, LRM, Agentic AI

Denis Gagné is CEO and CTO of Trisotech, a leading Standard Based Low-Code Intelligent Automation enterprise software vendor. For more than two decades, Denis has been a driving force behind most international BPM standards in use today. Denis is a member of the steering committee of the BPM+ Health Community of Practice, where he also leads the Ambassador program. For the Object Management Group (OMG), Denis is Chair of the BPMN Interchange Working Group (BPMN MIWG) and an active contributing member to the Business Process Model and Notation (BPMN), the Case Management Model and Notation (CMMN), and the Decision Model and Notation (DMN) work groups. dgagne@trisotech.com
Rational or Just Predictive? Comparing Reasoning in LLMs and Symbolic Systems for Automated Decisions by Pierre Feillet, Guilhem Molines, Mariia Berdnyk (IBM, France)
As Large Language Models (LLMs) are increasingly adopted in decision automation, a fundamental question arises: do they truly reason, or do they merely simulate reasoning through pattern completion? This session contrasts the explicit, logic-driven reasoning of rule-based systems with the emergent, probabilistic behavior of LLMs, highlighting what is at stake when applying either in operational, high-accountability environments.
We begin with a review of existing benchmarks that claim to assess reasoning—ranging from multi-hop question answering to logical inference—and examine their relevance, limitations, and blind spots when it comes to real-world decision automation. These benchmarks often neglect dimensions critical to operational contexts, such as consistency, rule compliance, exception handling, and explainability.
Building on this analysis, we propose a new class of benchmarks explicitly designed to compare reasoning capabilities of LLMs and rule-based engines in decision-centric settings. These benchmarks aim to reflect the structured nature of enterprise policies, the need for traceable justifications, and the requirement for goal-aligned decisions under constraints.
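To make the proposal concrete, a single case in such a decision-centric benchmark might pair a policy statement with structured inputs, an expected decision, and the justification a compliant system should be able to produce. The sketch below is a minimal illustration under assumed field names and checks; it is not the authors' actual benchmark schema.

```python
# Hypothetical sketch: one test case in a decision-centric reasoning benchmark.
# Field names and scoring criteria are illustrative, not the authors' schema.
benchmark_case = {
    "policy": "Applicants are eligible if age >= 18 and annual_income >= 20000, "
              "unless they have an open bankruptcy.",
    "input": {"age": 34, "annual_income": 18000, "open_bankruptcy": False},
    "expected_decision": "ineligible",
    "expected_justification": ["annual_income below 20000"],
    # Dimensions the talk argues existing reasoning benchmarks tend to neglect:
    "checks": ["consistency_across_rephrasings", "rule_compliance",
               "exception_handling", "explainability"],
}

def score(system_output: dict, case: dict) -> dict:
    """Compare a system's decision and justification against the expected ones."""
    return {
        "decision_correct": system_output.get("decision") == case["expected_decision"],
        "justification_traceable": all(
            reason in system_output.get("justification", [])
            for reason in case["expected_justification"]
        ),
    }

print(score({"decision": "ineligible",
             "justification": ["annual_income below 20000"]}, benchmark_case))
```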
Keywords: LLM, GenerativeAI, Rules, Decision, Automation, Inference

Pierre Feillet is an Artificial Intelligence Architect for Digital Automation at IBM. He is also the Technical Director for the AIDA program with Université Paris-Saclay. Pierre specialises in decision automation and business rule systems, as well as in Big Data, Analytics, and Machine Learning. Pierre works closely with leading insurance and banking organizations worldwide, contributing to the evolution of their decision management platforms in the cloud and on premises, typically for eligibility, pricing, and fraud detection. Currently, Pierre is active in symbolic AI engine integration in Big Data, Chatbots, and rule induction.

Guilhem Molines is Decision Chief Architect at IBM. With a background in fundamental Computer Science and Artificial Intelligence, he has been involved with decision technology for more than two decades, in various roles in the field and in the development lab. Today, he is the Chief Architect of the IBM team building Decision Technology, with a special focus on knowledge modeling and the business user experience. Guilhem stays in close contact with users and practitioners and is always looking for innovative ways to make the authoring of decisions an easier task for the industry. Since 2020, Guilhem has led the architecture of the next generation of the decisioning platform and has also been involved in the navigation system of the unmanned Mayflower Autonomous Ship.

Mariia Berdnyk is an AI engineer and researcher specializing in large language models, reinforcement learning, and reward modeling. With a background in Computer Science and Artificial Intelligence, Mariia joined IBM France Lab, where she spent three years in an apprenticeship working on both industrial-scale software development and applied AI research. Her recent work, LLM-based SQL Generation with Reinforcement Learning, explores the use of LLMs as reward functions in code generation tasks and was published in early 2025 at a top-tier AI workshop. Mariia’s research builds on cutting-edge techniques like human-level reward design and generative feedback loops. In the near future, Mariia will begin a PhD in Artificial Intelligence, focusing on advancing generative AI and learning algorithms. Her expertise lies in bridging practical engineering with experimental AI for high-impact innovation and decision-making processes. https://mariiaberdnyk.vercel.app/
To Model (or Not) in 2025: Lessons from Three Enterprise Use Cases by Greger Ottosson (Cube5 AI)
Traditional decision intelligence systems rely largely on structured databases, declarative business rules, and predictive models. Yet modern AI’s ability to operate directly on unstructured documents challenges this paradigm, forcing us to ask when formal modeling is still appropriate. In this session we explore this question from the lens of three diverse real-world projects: corporate financial analysis, test engineering in high-tech, and adaptive learning in higher education. We identify scenarios where structured data extraction and modeling remains indispensable – and others where document-centric AI alone delivers superior outcomes.
Keywords: Generative AI, Large Language Models, Conversational AI, Intelligent Document Processing, Data Extraction and Modeling

Greger Ottosson is an international and entrepreneurial software professional with experience working in the US and Europe as an executive, product manager, consultant, technical seller, design thinker, and developer.
He is able to master complex technology to deliver simple user experiences and has a talent for building cross-functional teams with transparency and trust. He is focused on Generative AI, in particular on using Large Language Models (LLMs) for business automation and operational efficiency. His recent roles include Chief Product Officer at a private market analytics company and Lead AI Strategist for decision management at IBM Business Automation.
Building Reliable AI Systems: Automating “Slow Thinking” in AI by Gopal Gupta (The University of Texas at Dallas)
Over the last decade, AI technology has advanced significantly and has received worldwide attention, especially after the advent of Generative AI (GenAI) and Large Language Models (LLMs). Many of these advances have come about because of progress in machine learning and deep learning research. However, the holy grail of AGI (artificial general intelligence)—AI that is as good as humans—seems to still be elusive. Nobel laureate Daniel Kahneman characterized human intelligence as consisting of “fast” (intuitive) and “slow” (deliberative) thinking. While machine learning technology has resulted in rapid automation of “fast” thinking, automation of “slow” thinking has been lagging. In this talk, we will discuss how “slow” thinking can be automated. We will present our s(CASP) system that automates human (commonsense) reasoning. We will discuss how AI systems that can match human capabilities can be built by combining learning and reasoning, i.e., by combining the capabilities of both GenAI/LLMs and the s(CASP) system. We will discuss several practical applications of our approach as well—e.g., reliable interactive chatbots, autonomous driving.
Keywords: AI, Commonsense reasoning, s(CASP)

Dr. Gopal Gupta is Professor of Computer Science at the University of Texas at Dallas and Co-director of the Center for Applied AI and Machine Learning. From 2009 to 2020, he served as head of the CS department. Dr. Gupta has conducted research in AI since the 1980s. His areas of research interest are in automated reasoning, computational logic, and explainable machine learning. He has published extensively in these areas. His group has also authored many software systems, several of which are publicly available. Dr. Gupta’s current research is focused on automating human-style commonsense reasoning. To reach this goal, his lab has developed several advanced reasoning systems and applied them to solving practical problems in AI. His lab has also developed explainable/interpretable AI systems. Dr. Gupta’s research work has also resulted in commercial software systems that have formed the basis of two startup companies. His group has won several best-paper awards including a 10-year test-of-time award at the International Conference on Logic Programming. His research has been supported by the National Science Foundation, DARPA, and industry grants. Dr. Gupta obtained his MS & PhD degrees from UNC Chapel Hill and his B.Tech. in Computer Science from the Indian Institute of Technology, Kanpur.
From AI to Actionable Intelligence Through Real-life Use Cases by Carole-Ann Berlioz (Sparkling Logic)
As artificial intelligence continues to evolve, its true potential lies not just in prediction, but in delivering actionable intelligence—insights that directly inform and drive business decisions. This presentation spotlights real-life use cases from the credit and insurance industries, where AI applications are already generating measurable impact.
We will examine how machine learning models enhance consumer risk profiling, detect subtle behavioral signals for proactive credit line adjustments, and enable automated origination at scale—all while balancing regulatory compliance and ethical considerations. We will also explore how cognitive agents transform collections and recovery interactions by engaging customers through natural, adaptive conversations across channels like voice, SMS, and chat. These AI-powered agents leverage large language models and behavioral analytics to negotiate payment plans, detect sentiment, and personalize outreach—improving recovery rates while preserving customer relationships.
Through these examples, we’ll unpack how organizations bridge the gap between raw insights and real-world execution. Rather than prescribing a rigid framework, this session offers a practical roadmap—charting the key milestones for turning AI potential into enterprise performance. From building cross-functional alignment and integrating domain context to scaling responsibly and ensuring explainability, attendees will gain a clear progression for embedding AI into core business workflows. The result: a forward-looking path to sustainable impact and innovation.
Keywords: AI, Decision-Management, Use Cases, Financial Services

Carole-Ann Berlioz is Co-Founder and Chief Product Officer at Sparkling Logic, a leading Decision Management Platform vendor known for its innovation. Over the past few decades, she has led product management and strategy for generations of award-winning business rules, predictive analytics, and optimization products. In 2010, she teamed up with Chief Architect and CTO Carlos Serrano-Morales to create Sparkling Logic, a “Cool Vendor” that has gained momentum around the world, uniquely serving Business Analysts with an intuitive yet comprehensive, fully integrated decision manager, SMARTS™. In addition to her visionary role, she also takes pride in building client projects for financial services and insurance companies. Her hands-on expertise fuels her creativity in this industry, recognized with several patents in Decision Management and Adaptive Analytics. cberlioz@sparklinglogic.com
Beyond LLMs: INSA – Integrated Neuro-Symbolic Architecture by Peter Voss (Aigo.ai)
A consensus is emerging that Large Language Models are inadequate for achieving full human-level intelligence or AGI. Attempts to overcome LLMs’ limitations by adding external reasoning systems are destined to fail because (a) real intelligence must be adaptive, yet LLM models cannot be updated in real time, and (b) the two components would need to share a common knowledge representation, something that is not feasible.
We present INSA, a fully Integrated Neuro-Symbolic Architecture that does not suffer from these limitations, nor from other LLM limitations such as hallucinations and exorbitant data and compute requirements.
This session provides an overview of essential features of AGI as well as our team’s practical progress towards scaling up to adult-level artificial intelligence.
Keywords: Artificial General Intelligence, Neuro-Symbolic Architecture, Cognitive AI, AGI, Life-long Learning, Intelligence

Peter Voss is the Founder, CEO, and Chief Scientist at Aigo.ai and Founder and CEO at AGI Innovations Inc. He previously founded SmartAction, where he served as CEO and Chief Scientist. Peter’s life’s mission is to bring human-level AI to the world to optimize human flourishing. This, the holy grail of AI, will significantly lower the costs of many products and services, provide PhD-level AI researchers to help us solve problems like disease, energy, and pollution, and help individuals via expert personal assistants that enhance productivity, problem-solving ability, and overall well-being. He has dedicated more than 20 years to this mission and is leading a new major initiative to leverage the team’s deep experience and standard-setting commercial Aigo technology to rapidly close the gap to wide-ranging human-level capabilities. (peter@aigo.ai)
Unlocking A New Value Proposition for Optimization by Meinolf Sellmann (InsideOpt)
Are you still stuck with forecasts and insights but lack the ability to propose better decisions? Does your management lack trust in optimization and prescriptive analytics? Do operators not accept the plans your team provides in deployment? Do your models not survive long in production? Does it take too long to stand up and iterate on solutions? Do your models lack business impact?
If the answer to any of these questions is yes, then this talk is for you. We review hyper-reactive search, a primal method for solving optimization problems. We walk through several industrial applications that illustrate the incredible opportunities that this new technology offers for optimization experts to create buy-in and increase their business impact.
Keywords: Prescriptive Analytics, ML to OR pipelines, Practical Optimization

Meinolf Sellmann is CEO of InsideOpt, a company focusing on simulation-based optimization. He is a former professor of AI at Brown University and a former senior manager and director at IBM, GE, and Shopify, and he pioneered learning-based optimization technology as early as 2007. For his groundbreaking work on learning-based optimization, he won over 20 international research awards. https://en.wikipedia.org/wiki/Meinolf_Sellmann
Turning Decisions into Agents: DMN Meets GenAI by Alex Porcelli (Aletyx, Inc.)
As enterprises rush to adopt GenAI, one barrier stands out: when it comes to critical core business logic, hallucinations and ambiguity simply aren’t acceptable. To bring GenAI closer to the core of business operations, integration with Symbolic AI becomes essential.
This talk introduces a powerful architectural pattern for bridging these worlds using DMN (Decision Model and Notation) and MCP (Model Context Protocol). While LLMs excel at language understanding and content generation, they lack precision, traceability, and auditability. Symbolic models like DMN provide those exact qualities — and more importantly, are governed by well-defined execution semantics that make them ideal for enterprise use.
This presentation begins with the broader imperative to bridge GenAI and Symbolic AI, narrowing its focus to DMN as a formal KRR (Knowledge Representation and Reasoning) model. It then explores how Aletyx’s technologies natively enable DMNs as Agentic AI players using MCP.
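As a rough illustration of the pattern (not Aletyx's implementation), the sketch below exposes a DMN decision as a tool an LLM agent can call over MCP, so the language model delegates the actual decision to a deterministic engine. It assumes the FastMCP helper from the MCP Python SDK; `evaluate_dmn` is a hypothetical wrapper around whatever DMN runtime is in use.

```python
# Minimal sketch: exposing a DMN decision service as an MCP tool so an LLM agent
# can delegate the decision to a deterministic engine instead of guessing.
# Assumes the FastMCP helper from the MCP Python SDK; `evaluate_dmn` is a
# hypothetical wrapper around a DMN runtime (it is not an Aletyx API).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("loan-decisions")

def evaluate_dmn(model: str, decision: str, context: dict) -> dict:
    """Placeholder: call your DMN engine here and return its decision result."""
    raise NotImplementedError

@mcp.tool()
def approve_loan(applicant_age: int, annual_income: float, credit_score: int) -> dict:
    """Evaluate the 'Loan Approval' DMN decision with auditable, deterministic semantics."""
    return evaluate_dmn(
        model="loan_approval.dmn",
        decision="Loan Approval",
        context={"Age": applicant_age, "Income": annual_income, "Score": credit_score},
    )

if __name__ == "__main__":
    mcp.run()  # the agent discovers and invokes the tool over MCP
```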
Keywords: Aletyx Enterprise build of Drools, DMN (Decision Model and Notation), LLMs (Large Language Models), MCP (Model Context Protocol), Symbolic AI / Decision Engines, KRR (Knowledge Representation and Reasoning)

Alex Porcelli is Co-founder of Aletyx and a seasoned architect and engineering leader with nearly 30 years of professional development experience. A long-time open-source contributor, he has been actively involved for over 15 years in the Apache KIE ecosystem, where he serves on the Project Management Committee (PPMC) and contributes to core projects like Drools, jBPM, and Kogito. Before co-founding Aletyx, Alex led the establishment of IBM’s first open-source offering that follows the Red Hat model. He previously spent more than a decade at Red Hat, playing key roles as both an individual contributor and a leader within the Business Automation product line. Alex is also a frequent speaker at international events.
Rule Learner: Building Decision Models from Examples by Jacob Feldman (OpenRules)
In this session, I’ll introduce an advanced version of “Rule Learner”, a Machine Learning component of the OpenRules Decision Intelligence Platform, capable of learning executable decision models from examples. At the start of a decisioning project, subject matter experts may provide samples of their business problem with various input parameters and expected output parameters. Such samples can be provided in Excel, CSV, or JSON format. Using only these samples, Rule Learner will quickly generate a working Decision Model in OpenRules format, including:
- Business Glossary with decision variables, their technical attributes, types, and possible values
- Business Rules being automatically generated by different Machine Learning algorithms
- Test Cases in JSON and Excel formats.
The generated decision models are ready to be tested using the standard OpenRules rule engine. Subject matter experts can analyze the generated decision models using a graphical IDE with automatically built Decision Diagrams, execute them under the control of the graphical Debugger, and enhance them based on their understanding of the business logic.
Rule Learner includes training capabilities. It enables a subject matter expert to define business rules for filtering the provided samples to exclude outliers and/or generate different decision models that concentrate on selected issues within large sets of historical data. This provides users with greater control over the generated decision model, ensuring the accuracy and relevance of the business logic.
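As a rough analogy for the learn-from-examples idea (not the actual OpenRules Rule Learner or its output format), the sketch below induces a human-readable rule tree from a table of labeled samples; the CSV file and column names are hypothetical.

```python
# Rough illustration of learning decision logic from examples (not the OpenRules
# Rule Learner itself): induce a readable rule tree from labeled samples.
# 'applicants.csv' and its column names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

samples = pd.read_csv("applicants.csv")            # inputs plus an expected-output column
X = pd.get_dummies(samples.drop(columns=["Decision"]))
y = samples["Decision"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Printable if/then structure that a subject matter expert can review, correct,
# and re-test, analogous to the generated rules and test cases described above.
print(export_text(tree, feature_names=list(X.columns)))
```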
Keywords: Decision Modeling, Machine Learning, Learn by Examples

Dr. Jacob Feldman is the CTO of OpenRules, Inc., a US corporation that created and maintains the highly popular Decision Intelligence Platform commonly known as “OpenRules”. He has extensive experience in the development of decision-making engines using business rules, optimization, and machine learning technologies for real-world mission-critical applications. Jacob is the DecisionCAMP Chair, the manager of the Decision Management Community, and an active contributor to Decision Intelligence forums. He is also the Specification Lead for the standard JSR-331 “Java Constraint Programming API”. Dr. Feldman is the author of three books devoted to Business Decision Modeling. He has 5 patents and many publications in the decision intelligence domain. You may contact him at jacobfeldman@openrules.com
Decision-Centric Framework for Insurance Claim Handling – Don’t Trust the Process, Trust Your Decisions by Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE)
The duration of insurance claim handling can vary widely—from a matter of minutes to several years—depending on factors such as complexity, stakeholder consensus, and compensation mechanisms.
While simple claims may be fully automated through straight-through processing, more complex cases require a sequence of nuanced decisions: assessing eligibility, verifying coverage, determining and delivering compensation, and potentially pursuing recovery from third parties.
Claim handling is rarely linear. New events or information can emerge at any point in the lifecycle, prompting the need to revisit prior decisions or trigger additional evaluations. This event-driven nature makes it challenging to orchestrate the full range of activities and decisions involved.
Many insurers attempt to manage this complexity through business process management or case management systems. However, these approaches often fall short, failing to accommodate the dynamic and decision-rich reality of claims handling.
What’s missing is a central “conductor”—a mechanism that continuously guides the claim through its journey, adapting to new inputs and steering actions accordingly.
In my presentation, I will introduce a decision-centric approach to claim handling, where a DMN (Decision Model and Notation) framework plays the role of this conductor. I will demonstrate how DMN can orchestrate both fully automated and human- or AI-driven processes, all while maintaining a real-time overview of claim status and clearly identifying the next steps. This framework empowers organizations to move beyond rigid processes and toward agile, decision-led operations.
Keywords: Decision Modeling, DMN (Decision Model and Notation), Insurance Claims, Claims Handling, Decision-Centric Approach, Business Rules, Straight-Through Processing, Event-Driven Architecture, Case Management, Claims Orchestration, AI in Insurance, Process Automation, Dynamic Decision-Making, Operational Decision Management, Digital Claims Transformation

As a very experienced business architect and a business process guru, Stefaan brings companies to the next level of customer experience and operational excellence. With an educational background as interpreter and political scientist, Stefaan has always been a visionary outsider in the twilight zone between business and ICT. From the early 90s onwards, Stefaan understood the power of business processes in achieving customer-oriented operational excellence and in aligning business with IT. Specialties: Strategic Performance Management, Enterprise Architecture, Business Process Analysis & Redesign, Business Rule/Decision Management. stefaan@lambrecht.earth
Human AI in the Loop Rule Modeling: From Passive Rules to Active Assurance by Seth Meldon (Progress)
In high-stakes domains like healthcare eligibility and regulatory compliance, decision automation requires more than speed; it demands continuous validation and clarity. While traditional automation can encode rules, it often fails to provide active assurance against subtle conflicts, gaps, or misinterpretations. This session presents a novel methodology for using Generative AI to move beyond passive automation to a state of active assurance, where the AI acts as an intelligent partner to human experts.
We will demonstrate this AI-in-the-loop approach through a real-world use case from the regulatory compliance domain. You will see how human experts guide a Generative AI assistant to translate dense regulatory text into formal, testable logic. More critically, you will see how the AI continuously analyzes the entire body of decision logic to proactively identify risks, explain complex interactions in plain language, and ensure policies are implemented without ambiguity. The human expert remains in full control, using the AI to accelerate tedious tasks while focusing on strategic oversight. This presentation offers a practical blueprint for using Generative AI to build transparent, agile, and trustworthy automation for your organization’s most critical decisions.
Keywords: AI-in-the-loop, Decision Automation, Generative AI, Regulatory Compliance, Trustworthy AI

Seth Meldon is a Principal Solution Engineer & Architect at Progress and an expert in Business Rules Management (BRMS), Low-Code Platforms, and Complex System Integration (Public & Private Sector), focused on Corticon and Corticon.js.
GenAI and business process management: indefinite conversations with structured results by Tom Debevoise (Advanced Component Research) and Denis Gagné (Trisotech)
Our presentation will update developments from DecisionCAMP presentations over the past four years. The Knowledge Worker Copilot is designed to navigate evolving cases governed by conversations and documents arising from legal contracts and regulatory regimes. It maintains a comprehensive memory, or context datastore, of messages, events, documents, and stakeholder roles. Initially conceived in 2023 as an intelligent assistant for knowledge workers, it bridges workflow gaps involving emails, documents, and meetings. Leveraging symbolic AI and Natural Language Understanding, it extracts key events and entities from unstructured content. In 2024, the platform was upgraded with large language models (LLMs) to enhance its ability to manage workflows and extract contractual entities. By 2025, the KW Copilot will have become a platform that automates complex conversational workflows, interprets contracts, and ensures compliance, allowing knowledge workers to focus on higher-value tasks and improving organizational efficiency and performance.
Keywords: Symbolic AI, Natural Language Understanding (NLU), Large Language Models (LLMs), Intelligent assistant, Context datastore, Workflow automation, Contract interpretation, Entity extraction, Conversational workflow management

Tom Debevoise has extensive experience in process and decision modeling using BPMN, DMN, and FEEL. He focuses on next-generation IT solutions for business operations as a technology leader and cloud solutions architect, working on the next generation of intelligent, practical, cloud-based services. Tom is developing these with an “intelligent digital assistant”, using Natural Language Processing, APIs to massively integrated services, and a core set of responsive processes, decision making, and analytics. Tom has held various positions at companies such as Oracle, Bosch, and Signavio. tom@advanced-comps.com

Denis Gagné is CEO and CTO of Trisotech, a leading Standard Based Low-Code Intelligent Automation enterprise software vendor. For more than two decades, Denis has been a driving force behind most international BPM standards in use today. Denis is a member of the steering committee of the BPM+ Health Community of Practice, where he also leads the Ambassador program. For the Object Management Group (OMG), Denis is Chair of the BPMN Interchange Working Group (BPMN MIWG) and an active contributing member to the Business Process Model and Notation (BPMN), the Case Management Model and Notation (CMMN), and the Decision Model and Notation (DMN) work groups. dgagne@trisotech.com
Robots Need Rules: Enabling Humanoid Robots Through Decision Intelligence and Declarative AI by Nathaniel Palmer (Infocap AI Corp)
Humanoid robots are transitioning from research prototypes into real-world roles, yet their success depends on embedding interpretable decision logic alongside advanced AI models. This presentation argues that Decision Intelligence (DI)—the integration of data analytics, optimization, and human decision theory—and Declarative AI—the encoding of goals and constraints as explicit rules—form the bedrock of reliable humanoid behavior. By combining business-rule engines, optimization solvers, and machine learning, robots can make real-time decisions at the edge that are both adaptive and accountable.
DI frameworks are necessary for robots to evaluate options (such as choosing a safe path or prioritizing one task over another) in a structured way. Declarative AI defines what the robot should achieve or adhere to (through rules and logical constraints) rather than how to achieve it. In a declarative paradigm, developers encode knowledge and goals in forms like logic rules, ontologies, or constraints, allowing the system’s inference engine to reason about the best actions. This approach yields highly transparent and interpretable behavior, since the “thinking” of the robot can be traced through declarative rules.
Key Technical Foundations:
- Declarative AI: rules, ontologies, and decision tables articulate what behavior is permitted, enabling transparent inference. This structure ensures every action is traceable—vital for debugging, regulatory compliance, and user trust.
- Edge-based Optimization: On-board solvers execute planning and resource allocation within milliseconds, avoiding dangerous latency from cloud reliance. Frameworks like AWS IoT Greengrass facilitate deployment of rule engines and models directly on robotic hardware.
- Hybrid AI Architectures: Expert systems enforce normative constraints; optimization modules solve planning under physical and business constraints; machine learning models handle perception and anomaly detection; and generative AI (e.g., large language models) offers high-level task planning and natural interaction, all orchestrated within a unified decision pipeline.
Industry Use Cases:
- Healthcare & Eldercare: Humanoid assistants adhere to medical protocols—reminding patients about medications, monitoring vitals, and detecting emergencies—via rules that encode clinical guidelines. In rehabilitation, robots guide exercises, adjust therapy routines through optimization, and log outcomes for clinicians, thereby expanding care capacity while maintaining safety and consistency.
- Industrial Maintenance & Repair: Robots inspect and repair machinery in hazardous or remote locations, executing predictive maintenance rules to preempt failures and optimize repair schedules. Declarative safety rules (e.g., lockout/tagout procedures) ensure compliance, while real-time solvers minimize equipment downtime and maximize throughput.
- Logistics & Service Robotics: From warehouse picking to last-mile delivery and hospitality roles (concierge, customer guidance), robots leverage decision pipelines that continuously prioritize tasks, navigate dynamic environments, and handle human requests through rule-guided interaction flows. Open-source middleware like ROS, combined with cloud-based simulation in AWS RoboMaker and NVIDIA Isaac Sim, accelerates development and validation.
Implementation Ecosystem:
- Open-Source Tools: Rule engines (Drools, OpenRules), planning libraries (PDDL planners, SAT/CP solvers), and ROS/Gazebo for end-to-end simulation and middleware integration.
- Cloud & Edge Services: AWS SageMaker for training perception and planning models; AWS Greengrass for distributing and managing intelligence at the edge; AWS RoboMaker for scalable, automated simulation and testing pipelines.
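As a toy illustration of the declarative approach described above (plain Python rather than a production rule engine such as Drools or OpenRules), the sketch below encodes two safety rules, for example lockout/tagout and human clearance, and filters a robot's candidate actions against them; the rule names and world-state fields are invented for the example.

```python
# Toy sketch of the declarative idea: safety rules state what is permitted,
# and an inference step filters the robot's candidate actions against them.
# Rule names and world-state fields are hypothetical; a production system
# would use a rule engine (e.g., Drools or OpenRules) rather than plain Python.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    permits: Callable[[dict, str], bool]   # (world_state, action) -> allowed?

rules = [
    Rule("lockout_tagout",
         lambda s, a: not (a == "open_panel" and s["machine_energized"])),
    Rule("human_clearance",
         lambda s, a: not (a.startswith("move") and s["human_within_1m"])),
]

def permitted(state: dict, action: str) -> bool:
    # Every declarative rule must allow the action; each denial is traceable by rule name.
    return all(r.permits(state, action) for r in rules)

state = {"machine_energized": True, "human_within_1m": False}
for action in ["open_panel", "move_to_dock"]:
    print(action, "->", "allowed" if permitted(state, action) else "blocked")
```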

Previously rated as the #1 Most Influential Thought Leader in Business Process Management by independent research, as well as one of the Top 10 Leading Luminaries by Data Informed magazine, Nathaniel Palmer frequently tops the lists of the most recognized names in his field. He has been featured in media ranging from Fortune to The New York Times, and has been a guest expert on National Public Radio (NPR). Nathaniel is a pioneer in the arena of automation and digital transformation, having led the design for some of the industry’s largest-scale and most complex initiatives, involving investments of $500 million or more. He was the first individual named as “Laureate in Workflow” as well as a recipient of the Marvin L. Manheim Award for Significant Contributions in the Field of Workflow. He is a regular speaker at leading forums and industry user groups, and has co-authored over a dozen books on digital transformation, including “The X-Economy” (2001), “Intelligent Adaptability” (2017), “BPM Everywhere: Internet of Things and Process of Everything” (2015), “Passports to Success in BPM” (2014), “Intelligent BPM” (2013), “How Knowledge Workers Get Things Done” (2012), “Social BPM” (2011), and “Mastering the Unpredictable” (2008), which reached #2 on the Amazon.com Best Sellers list. His latest book, “Gigatrends,” to be published in early 2024, defines the leading global trends affecting populations of 1 billion or more, each with greater than $1 trillion in economic impact.
Operationalizing Compliance: The Possibility of Executable DMN Regulations by Brian Stucky (Decision-X)
This presentation explores using the Decision Model and Notation (DMN) to represent and distribute regulatory requirements. DMN has the potential to enhance regulatory clarity, transparency, consistency, auditability, and efficiency, facilitating communication among business, legal, and technical stakeholders. However, translating complex, nuanced regulations into DMN can introduce oversimplification, maintenance burdens, interpretation ambiguities, technical integration issues, and potential compliance risks. The presentation will examine practical examples illustrating both DMN’s effectiveness and potential obstacles with use cases typical in the mortgage domain as well as current examples focusing on AI.
Keywords: Compliance, Regulations, DMN, Transparency

A recognized thought leader in decision management, Brian Stucky has three decades of experience designing and implementing business rule and process management systems for both commercial and Federal clients. He has implemented and managed business rule development efforts in a variety of domains, including the secondary mortgage market, credit card marketing and processing, mutual fund portfolio analysis, and insurance underwriting and risk management, as well as for various Federal civilian agencies. Brian’s focus is now on ethical and responsible artificial intelligence for automated decision systems. In addition, Brian is now in his sixth year as co-chairman of the Mortgage Industry Standards and Maintenance Organization (MISMO) Decision Modeling Community of Practice. His efforts there have resulted in finalizing the Decision Model and Notation (DMN) standard as an official mortgage industry standard. He also participated in MISMO’s Future State initiative. In January 2021 Brian began serving a two-year term on MISMO’s Residential Governance Committee.
Adaptive and Continuous Decisions is the Future of Decision Intelligence by Arash Aghlara (FlexRule)
In the landscape of data-to-insight, insight-to-action, and action-to-outcome, several technologies play important roles. Business operations leaders and data and analytics leaders believe that, with advancements in data, AI, and related technologies, navigating across this landscape becomes somewhat feasible, if not easier.
For instance, analytics and BI always aim for insight-to-action by providing a wide range of insights based on prescriptive, predictive, and diagnostic analytics. However, as we all know, the last-mile gap cannot be crossed with insights and BI artifacts.
Decision Intelligence is very well positioned because of that. It bridges the last mile of analytics between insight and action by explicitly modeling decisions to improve outcomes. Decision Intelligence reduces reliance on obsolete dashboards and insights by at least 70% and consolidates siloed datasets into unified decision models. A single decision model can operationalize 10 to 30 different insight sources.
Decision Intelligence not only closes the insight-to-decision gap; a Decision Intelligence Platform with proper decision modeling, such as Decision Model and Notation (Conformance Level 3), and advanced orchestration can also close the decision-to-action gap.
The challenge is, though, that NOT all actions have the same outcomes. Even the same action at different times might not yield the same outcome. Therefore, just covering the gap of decision-to-action is not going to be enough.
The future of Decision Intelligence is about ensuring the quality of the outcomes. Covering the insight-to-outcome gap will become crucial, particularly with the rise of machines and agents as users. Insights are no longer just for visualization and storytelling; they are also the inputs for Continuous Decisions to apply feedback and optimization. The Continuous Decision plays a critical role as an input to the Decision Intelligence Platform to ensure the quality of the outcomes.
In this session, I’m going to show how Adaptive and Continuous Decisions can augment Decision Models such as DMN to ensure Decision Intelligence is not only about static and predefined rules and Machine Learning models but is dynamic, adaptive, and real-time, ensuring decisions are always aligned with evolving data, context, changing environments, and expected outcomes.
Keywords: Continuous Decisions, Decision Intelligence, Decision Intelligence Platform, Decision Model and Notation (Conformance Level 3), DMN CL3, Decision Modeling, Continuous Decisions Model (CDM), Adaptive Decisions Using CDM

Arash Aghlara is the CEO and founder of FlexRule, a leading global provider of an Open Decision Intelligence Platform that empowers leaders in organizations to improve the speed and quality of key business decisions in changing environments.
arash.aghlara@flexrule.com
Can SDMN solve the data-chaos in business process modelling? by Paul Kockmann and Daniel Schmitz-Hübsch (Materna Information & Communications SE)
The Shared Data Model and Notation (SDMN) is a less established OMG standard, currently in Beta 1.0, designed to close a gap in the BPM+ ecosystem: The explicit modeling of shared data structures across multiple models. By introducing a dedicated notation for centrally defined and semantically coherent data models, SDMN enables consistent data usage across BPMN, DMN and CMMN, enhancing interoperability and traceability.
In this session, we will provide an introduction to SDMN, outlining its core concepts, the notation elements and integration points with existing BPM+ standards. Using a practical example, we will demonstrate how SDMN can serve as a foundational layer for aligning data semantics across decisions and processes.
Based on our experience in public sector projects, we will critically examine the current limitations of the specification. This relates in particular to ambiguities when it comes to integrating SDMN into the known BPM+ standards. We will address areas where further refinement or guidance is needed for real-world applicability.
Finally, we give an outlook on the potential role of SDMN in business process modeling and invite discussion on the development and progress of SDMN.
Keywords: Decision Models, Machine Learning

Daniel holds a Master’s degree in Business Informatics. For nearly ten years, he has been involved in the modeling and technical implementation of business process and decision management systems. As a software developer for an independent IT company, he is responsible for the development of high-availability decision applications using rule engines like IBM Operational Decision Management.

Paul Kockmann is currently pursuing a Bachelor’s degree in Computer Science at the University of Applied Sciences and Arts Dortmund. As a software developer for an independent IT company, his current work focuses on the emerging SDMN standard, where he is developing implementation patterns to support future applications and integration scenarios. He is based in Dortmund, North Rhine-Westphalia, Germany.
Extending DMN with User FEEL Libraries by Octavian Patrascoiu (Goldman Sachs, UK)
Decision Model and Notation (DMN) has become a pivotal standard for modeling and executing business decisions, with FEEL serving as its core expression language. While FEEL provides a powerful set of built-in functions, complex domains and enterprise use cases often demand reusable, domain-specific logic that exceeds the standard library’s scope. This paper presents a comprehensive approach for supporting user-defined libraries and extensions in DMN, offering the same seamless integration and usability as the built-in FEEL library. We outline a model for defining, packaging, and invoking custom functions—ensuring type safety, namespace isolation, and tooling support. We also discuss the implications for interoperability and conformance with the DMN standard, and present use cases where this capability significantly enhances the expressiveness of decision logic.
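As a conceptual sketch of the define, package, and invoke model with namespace isolation and type checks (written in Python rather than FEEL, and not reflecting the paper's actual DMN mechanism), the snippet below registers a domain-specific function under a namespace and invokes it with lightweight type validation; all names are illustrative.

```python
# Conceptual sketch only (Python, not FEEL): a namespaced registry of user functions
# with lightweight type checks, illustrating the "define, package, invoke" model
# described above. It does not reflect the authors' actual DMN/FEEL mechanism.
from typing import Callable

_LIBRARIES: dict[str, dict[str, Callable]] = {}

def register(namespace: str, name: str, fn: Callable) -> None:
    """Package a user-defined function under an isolated namespace."""
    _LIBRARIES.setdefault(namespace, {})[name] = fn

def invoke(namespace: str, name: str, *args):
    """Invoke namespace.name(args), rejecting arguments of the wrong type."""
    fn = _LIBRARIES[namespace][name]
    for arg, (pname, ptype) in zip(args, fn.__annotations__.items()):
        if pname != "return" and not isinstance(arg, ptype):
            raise TypeError(f"{namespace}.{name}: {pname} expects {ptype.__name__}")
    return fn(*args)

def business_days_between(start: str, end: str) -> int:
    """A domain-specific 'library' function a modeler could call from decision logic."""
    from datetime import date, timedelta
    d0, d1 = date.fromisoformat(start), date.fromisoformat(end)
    return sum(1 for i in range((d1 - d0).days)
               if (d0 + timedelta(days=i)).weekday() < 5)

register("finance.dates", "business_days_between", business_days_between)
print(invoke("finance.dates", "business_days_between", "2025-09-22", "2025-09-26"))
```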
Keywords: DMN, FEEL, user library

Dr. Octavian Patrascoiu is a Vice President at Goldman Sachs, UK. He is currently working in the no-code / low-code solutions space. This includes the translation of well-known modelling languages (e.g., BPMN, DMN, CMMN, and UML) to existing platforms (e.g., AWS Lambda functions, JPA persistence) and translating decision data points to DMN models using ML techniques. He is the author of four books on programming languages, programming techniques, and programming language processors, and the author, co-author, or co-editor of more than fifty research papers on programming languages, natural language processing, machine learning, and model-driven software development. opatrascoiu@yahoo.com
Decisions Demystified: New Approaches to Traceability and Explainability by Vincent van Dijk (https://pharosius.nl/)
As legal demands on automated decision-making continue to grow—particularly in the Netherlands and across the EU—the need for transparent, explainable and legally traceable decisions has become even more essential. A simple code comment referencing a legal source no longer suffices. Organizations that rely on rule-based decisioning must now meet heightened requirements for traceability and explainability to ensure both compliance and public trust.
This shift is driving innovation in the fields of norm analysis and norm engineering. In the Netherlands, new methods and standards are being developed to help bridge the gap between regulations and decision-based implementations. Through these methods—combining norm analysis, norm engineering, and machine-readable legal texts—legal norms can be translated into explicit rule specifications, with the possibility of direct execution by rule engines. This creates a robust bridge between the legal domain and technical implementation, significantly enhancing both traceability and explainability.
This presentation introduces these emerging methods and standardization efforts aimed at enhancing the transparency and accountability of decision-making systems. Drawing on real-world case studies from the Dutch Ministry of Justice and the Ministry of Internal Affairs, we will demonstrate how these approaches improve the connection between legal frameworks and operational implementation—ultimately strengthening both traceability and explainability.
Keywords: regulation, legal analysis, norm engineering, traceability, explainability, standardization

Vincent brings over 25 years of experience in the triangle of rules, processes, and data, with a strong advocacy for rule-driven solutions. A background in law and extensive involvement in rule-based IT projects enable Vincent to excel in bridging the gap between the (legal) business domain and solutions for organizations dealing with the question of how to translate the legal domain into implementations. His expertise is focused on setting up a knowledge architecture and on analyzing and structuring rules, processes, and information based on that architecture and (international) standards. This isn’t just a job for Vincent; it’s his passion.
Automating Inbound Decisions Using Product, Stock and Forecast Data by Martin de Villiers
Modern warehouse operations increasingly rely on automation, yet many inbound processes still depend on manual effort and localised decision-making. This presentation shares a real-world implementation where inbound routing decisions were automated within a Warehouse Management System (WMS), using operational data and decision logic developed by a data science team to guide human execution in real time.
The problem: pallets of mixed goods arrive at the fulfilment centre and must be decanted—manually—into storage bins of various types within an automated system. Each product requires a decision on how many units to place, and into which bin type, optimised for later fulfilment efficiency.
The core decision logic, designed by the data science team, takes into account:
- Product volumetrics (dimensions and weight)
- Live bin state at the individual tote level, including compartment configurations
- Expected delivery quantities for each SKU
- Sales forecasts to anticipate picking patterns and batch sizes
- Product spread to balance stock across bins.
This logic is implemented in a WMS microservice that makes just-in-time decisions as products are identified during inbound. While decanting is still manual, warehouse operators are now guided by system instructions that tell them exactly how many units to place, and into which storage flow — ensuring decisions are consistent, data-driven, and optimised for storage and picking where the big payback lies.
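As a simplified sketch of the kind of just-in-time decision described above (the production logic runs in a Java/Spring Boot microservice; Python is shown here only because it matches the team's prototyping language), the snippet below picks a bin type and unit count from product volumetrics, live bin availability, and a weekly forecast; all field names and thresholds are hypothetical.

```python
# Simplified sketch of the inbound decision described above: pick a bin type and
# unit count for one SKU from product volumetrics, live bin availability, and a
# sales forecast. All field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class BinType:
    name: str
    volume_cm3: float        # usable compartment volume
    free_bins: int           # live availability from the automated store

def decide_decant(unit_volume_cm3: float, units_on_pallet: int,
                  weekly_forecast: int, bins: list[BinType]) -> tuple[str, int]:
    """Return (bin_type, units_to_place) for the next decant instruction."""
    # Prefer the smallest bin that still holds roughly a week of forecast demand,
    # so fast movers stay picker-friendly and slow movers do not waste space.
    target_units = max(1, min(units_on_pallet, weekly_forecast))
    for b in sorted(bins, key=lambda b: b.volume_cm3):
        capacity = int(b.volume_cm3 // unit_volume_cm3)
        if b.free_bins > 0 and capacity >= target_units:
            return b.name, target_units
    # Fallback: the largest available bin, filled to its capacity.
    biggest = max((b for b in bins if b.free_bins > 0), key=lambda b: b.volume_cm3)
    return biggest.name, int(biggest.volume_cm3 // unit_volume_cm3)

bins = [BinType("quarter", 20_000, 12), BinType("half", 40_000, 3), BinType("full", 80_000, 5)]
print(decide_decant(unit_volume_cm3=550, units_on_pallet=60, weekly_forecast=35, bins=bins))
```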
This session will walk through the data orchestration, decision modelling, and operational integration that made this possible. Attendees will learn how we practically embedded intelligent decision-making into physical workflows, even where full automation isn’t feasible, and how to improve throughput, storage utilisation, and accuracy by combining data science with operational systems and human execution.
Industry Sector: E-commerce Fulfilment / Logistics & Supply Chain
Key Technologies:
Python (algorithm prototyping), Java, Spring Boot (decision logic implementation), REST APIs, bespoke PIM (product data management), microservice architecture, AutoStore integration, Warehouse Management System (WMS)
Keywords: Warehouse, real-time, orchestration, high-volume fulfilment, automating decisions, human-in-the-loop

Martin is an experienced IT and engineering leader with over 20 years in enterprise systems, decision automation, and large-scale operations technology. He has led and delivered projects across logistics, retail, healthcare, and insurance, with a strong focus on operational decision-making and data-driven systems. In his recent roles, he has overseen the design and implementation of decision logic within high-volume fulfilment environments, working closely with engineering and data science teams.
