MLSS^R&S 2026

29 June - 3 July / Kraków, Poland


/ Registration

Regular applications for MLSS^R&S 2026 are now open. Apply before 19 April (AoE).

/ About

The Machine Learning Summer School on Reliability & Safety (MLSS^R&S 2026) is a five-day summer school dedicated to selected topics in machine learning, with a focus on AI safety, reliability, and robustness of machine learning systems.

The school is organised as an intensive academic programme built around invited lectures delivered by a carefully selected group of researchers from academia and industry, whose work has significantly contributed to the development of the field. Speakers are actively involved in current research and are invited based on their scientific contributions and expertise.

MLSS^R&S 2026 continues the Machine Learning Summer School series and follows the previous editions. As in previous years, the school will take place in Kraków, Poland.

The intended audience of MLSS^R&S 2026 includes:

  • PhD students working on machine learning or related fields
  • research-oriented Master's students
  • early-career researchers from academia and industry interested in reliability and safety of machine learning systems

Participants are expected to have prior knowledge of machine learning fundamentals, including supervised and unsupervised learning, optimisation methods and basic probability. Familiarity with mathematical tools commonly used in machine learning is recommended.

If you have any questions about the school, don't hesitate to contact us by email at mlss@mlinpl.org.

/ Speakers

MLSS^R&S 2026 will feature lectures delivered by leading researchers and practitioners working on the reliability and safety of machine learning systems. The list of speakers will be announced gradually.

Franziska Boenisch

CISPA Helmholtz Center

Franziska is a tenure-track faculty at the CISPA Helmholtz Center for Information Security, where she co-leads the SprintML lab. Her research focuses on private and trustworthy machine learning. During her Ph.D. at Freie Universität Berlin and Fraunhofer AISEC, she pioneered the notion of individualized privacy in ML. Before joining CISPA, she was a Postdoctoral Fellow at the University of Toronto and the Vector Institute. Franziska is the recipient of an ERC Starting Grant (2025) for research on privacy in foundation models, and her work has been recognised with the Fraunhofer ICT Dissertation Award (2023), GI Junior Fellowship (2024), and Werner-von-Siemens Fellowship (2025).

Dominik Janzing

Amazon

Dominik Janzing is a Principal Research Scientist at Amazon in Tübingen, Germany, where he works on new approaches to causal inference for cloud computing, for example in root cause analysis of anomalies. He is particularly interested in the foundations of causal inference and in defining formal concepts that are both practically useful and domain-independent. In addition to his research at Amazon, Dominik teaches seminars on causality at the Karlsruhe Institute of Technology (KIT).

Wojciech Samek

TU Berlin / Fraunhofer HHI

Wojciech Samek is a Professor in the EECS Department at the Technical University of Berlin and Head of the AI Department at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin, Germany. He earned a Dipl.-Inf. degree in Computer Science from Humboldt University of Berlin in 2010 and a Ph.D. (with honors) from TU Berlin in 2014. After his doctorate, he founded the Machine Learning Group at Fraunhofer HHI, which became an independent department in 2021. He is a Fellow at BIFOLD – the Berlin Institute for the Foundation of Learning and Data – and the ELLIS Unit Berlin. He serves as a member of Germany’s Platform for AI and sits on advisory and executive boards including the WUT Center for Credible AI, AGH University’s AI Center, IDEAS Research Institute, HEIBRiDS, and the DAAD Konrad Zuse School ELIZA. His research focuses on explainable AI (XAI), spanning methods, theory, and applications, with pioneering contributions including Layer-wise Relevance Propagation (LRP), concept-level explainability, evaluation of explanations, and XAI-driven model and data improvement. He has served as Senior Editor of IEEE TNNLS, Associate Editor for several journals, and Area Chair at NeurIPS, ICML, and NAACL. He has received multiple best paper awards (Pattern Recognition 2020, Digital Signal Processing 2022, IEEE SPS 2024, Information Fusion 2025), has co-authored over 250 peer-reviewed papers, and was recognized as a Highly Cited Researcher 2025 by Clarivate.

Adam Dziedzic

CISPA Helmholtz Center

Adam is a faculty member at the CISPA Helmholtz Center for Information Security, co-leading the SprintML group. His research focuses on secure and trustworthy machine learning. Adam was a Postdoctoral Fellow at the Vector Institute and the University of Toronto, and a member of the CleverHans Lab, advised by Prof. Nicolas Papernot. He earned his PhD at the University of Chicago, where he was advised by Prof. Sanjay Krishnan and worked on input and model compression for adaptive and robust neural networks. Adam obtained his Bachelor's and Master's degrees from the Warsaw University of Technology in Poland. He also studied at DTU (Technical University of Denmark) and carried out research at EPFL, Switzerland. Adam also worked at CERN (Geneva, Switzerland), Barclays Investment Bank in London (UK), Microsoft Research (Redmond, USA), and Google (Madison, USA).

Randall Balestriero

Brown University / Meta AI Research

Randall Balestriero is an Assistant Professor of Computer Science at Brown University in Providence, where he leads the Galilai Group. He has been doing research in learnable signal processing since 2013, in particular on learnable parametrized wavelets, which were later extended to deep wavelet transforms. The latter have found many applications, e.g., in NASA's Mars rover for marsquake detection. In 2016, when joining Rice University for a PhD with Prof. Richard Baraniuk, he broadened his scope to explore deep networks from a theoretical perspective by employing affine spline operators. This led him to revisit and improve state-of-the-art methods, e.g., batch normalization and generative networks. In 2021, when joining Meta AI Research (FAIR) for a postdoc with Prof. Yann LeCun, he further broadened his research interests to include, e.g., self-supervised learning and biases emerging from data augmentation and regularization, leading to many publications and conference tutorials.

Fazl Barez

University of Oxford

Fazl Barez is a Senior Research Fellow at the University of Oxford's Martin AI Governance Initiative, where he serves as Principal Investigator leading research on AI safety, interpretability and governance. His work focuses on mechanistic interpretability of large models, safety evaluations, deceptive behaviours, and techniques such as model editing and machine unlearning. A central theme of his research is moving from observing model behaviour to understanding and systematically improving it. Alongside his academic work, Fazl is involved in Martian, an independent research group focused on understanding machine intelligence from first principles. The team brings together researchers with experience across major AI labs, including Anthropic, Google DeepMind, and Meta, combining frontier-model research with long-term safety perspectives. His broader experience spans academic research, frontier AI labs, and international initiatives on AI risk and reliability – a perspective closely aligned with this year’s focus on reliability and safety.

Alexandra Gomez-Villa

Universitat Autònoma de Barcelona

Dr. Alexandra Gomez-Villa is an Assistant Professor at the Universitat Autònoma de Barcelona, Spain, where she is a member of the Computer Vision Center. Her research focuses on emergent properties in foundation models, continual learning, and generative image models. She completed her PhD at the Computer Vision Center, with previous research positions at Universitat de València and Universitat Pompeu Fabra. She has published in leading venues including CVPR, NeurIPS, ICLR, and ECCV, with over 1140 citations.

Anna Sztyber-Betley

Warsaw University of Technology / Truthful AI

Anna Sztyber-Betley is an assistant professor at the Institute of Automatic Control and Robotics, Faculty of Mechatronics, WUT. She is an enthusiast of AI and ML education and has recently been cooperating with Truthful AI (Berkeley) on AI safety projects.

Jan Betley

Truthful AI

Jan worked as a software developer for over a decade before shifting to AI safety in 2023. He is an ARENA and Astra Fellowship alumnus, interested in anything related to out-of-context reasoning in LLMs. He currently works as an independent researcher with Truthful AI, Berkeley.

Tomasz Michalak

IDEAS RI / Ellis Unit Warsaw

Tomasz Michalak is a lecturer at the Faculty of Mathematics, Computer Science and Mechanics of the University of Warsaw. During his academic career, he conducted research at the Department of Computer Science at the University of Oxford, at the School of Engineering and Computer Science at the University of Southampton, at the Department of Computer Science at the University of Liverpool, and at the Department of Applied Economics at the University of Antwerp. He is a graduate of the Faculty of Economic Sciences at the University of Warsaw and received his PhD in economics from the Faculty of Applied Economics of the University of Antwerp. He is a member of the ELLIS Society. His research interests include artificial intelligence, social networks, fintech and cybersecurity, computational social sciences, multi-agent systems, and game theory. Currently, he conducts research on applications of game theory in networks and issues related to security and machine learning. He is the leader of the AI Strategy Lab at the IDEAS Research Institute and a member of the ELLIS Unit Warsaw.

Wojciech Kusa

NASK

Wojciech is an Assistant Professor at NASK National Research Institute in Warsaw, Poland, where he leads the NLP Department. His work focuses on building safe and trustworthy language technologies, with a current emphasis on developing PLLuM – an open Polish large language model designed for reliability and critical applications, including public sector use. Wojciech earned his PhD from TU Wien and was a Marie Skłodowska-Curie Research Fellow in the EU Horizon 2020 DoSSIER project. He has also been a visiting researcher at University College London and the University of Queensland.

Kamil Mamak

Jagiellonian University

Kamil Mamak is an Associate Professor, philosopher, and a legal scholar at the Jagiellonian University. He is an ERC laureate (Starting Grant). He was a postdoctoral researcher at the RADAR: Robophilosophy, AI Ethics and Datafication research group at the University of Helsinki in 2021-2024. He is also a Member of the Board of the Cracow Institute of Criminal Law. He holds PhDs in law (2018) from Jagiellonian University and philosophy (2020) from the Pontifical University of John Paul II in Krakow. He has authored five book monographs, including "Robotics, AI and Criminal Law: Crimes against Robots" published by Routledge in 2023 and "Ethics in Human-like Robots" (Routledge) published in 2024. His sixth book, "AI Ethics: An Introduction" (in Polish), is under contract with Copernicus Center Press. He has published more than 60 peer-reviewed journal articles and contributed chapters. His works were published in international journals, including Philosophical Studies; Topoi; Episteme; Oxford Intersections: AI in Society; AI & Society; Ratio Juris; Ethics and Information Technology; Journal of Criminal Justice; European Journal of Criminology; International Journal of Social Robotics; Medicine, Healthcare, and Philosophy; Science and Engineering Ethics; European Journal of Crime, Criminal Law and Criminal Justice; Frontiers in Robotics and AI, and Criminal Justice Studies. He received a research grant from the National Science Center in Poland. He is a recipient of the Minister’s Scholarship for Outstanding Young Scientists and the Scientific Award of "Polityka" weekly 2023 (in the category: philosophy/law).


/ Program Format

Until the detailed program for MLSS^R&S 2026 is announced, the outline below describes the format of the previous edition of the Machine Learning Summer School.

/ Lectures

A lecture-based program with invited speakers from academia and industry. Several lectures are scheduled each day.

/ Poster session

A dedicated poster session, allowing participants to present their work and discuss it with other attendees and speakers.

/ Daily structure

Each day follows a structured schedule with lecture blocks, coffee breaks, and a lunch break, providing time for questions and discussion.

/ Social events

The program includes social events organized alongside the academic schedule, offering informal opportunities for interaction among participants.

Previous edition at a glance:

  • 14 invited lectures
  • >100 participants
  • Poster session
  • Social events

/ Timeline

9 February

Early Bird application opens

8 March (AoE)

Early Bird application closes

9 March

Regular application opens

22 March

Early Bird acceptance notifications

19 April (AoE)

Regular application closes

30 April

Regular acceptance notifications

16 May (AoE)

Deadline for paying the registration fee

29 June - 3 July

Summer school

/ Venue

The summer school is held at Jagiellonian University. On the first day, 29 June, it takes place at Collegium Novum, the main building of the university.

Collegium Novum
Gołębia 24, 31-007 Kraków

On the remaining days, 30 June - 3 July, the venue shifts to the Faculty of Mathematics and Computer Science, Jagiellonian University.

Profesora Stanisława Łojasiewicza 6, 30-348 Kraków

/ Call for Sponsors

Become a sponsor of our summer school and support the next generation of machine learning researchers. Please get in touch with us at mlss-sponsors@mlinpl.org to discuss sponsorship opportunities.

/ Gold sponsors

/ Honorary Patronages

/ Media partners

/ Organizers

Jagiellonian University in Krakow

Jagiellonian University is one of Europe's oldest research universities and a leading academic center in AI.

Group of Machine Learning Research

GMUM is an ML research group at JU, active in AI safety, computer vision, and modern ML methods.

ML in PL Association

ML in PL Association organizes major ML events in Poland and connects researchers with industry.

IDEAS Research Institute

IDEAS RI is Poland’s largest applied-AI research center, supporting innovation and bridging academic research with industry.

ELLIS Unit Warsaw

ELLIS Unit Warsaw is part of the European ELLIS network, advancing research in trustworthy, robust, and efficient AI.

Jagiellonian Center for Artificial Intelligence

JCAI brings together researchers across Jagiellonian University to unify and accelerate AI research, fostering interdisciplinary collaboration and real-world impact.

prof. Jacek Tabor

Scientific Committee Member

prof. Bartosz Zieliński

Scientific Committee Member

prof. Wojciech Samek

Scientific Committee Member

prof. Łukasz Struski

Advisory Board Chair

Łukasz Janisiów

Project Leader

Weronika Smolak-Dyżewska

Co-Project Leader

Honorata Zych

Co-Project Leader

Marcin Osial

Co-Project Leader

Onur Akman

Registration

Adam Goliński

Program

Marcin Przewięźlikowski

Program

Mateusz Pyla

Program

Dawid Rymarczyk

Program

Marcin Sendera

Program

Filip Szatkowski

Program

Anna Szymanek

Marketing

Maria Wyrzykowska

Marketing

Turhan Can Kargin

Marketing

Alicja Grochocka-Dorocińska

Finance

Agata Bader

Sponsors

Artur Kołodziejczyk-Skowron

Sponsors

Arkadiusz Paterak

Website

/ Contact

If you have any questions about the event, don't hesitate to contact us by email or via our social media.