ML in PL



Warsaw, Mazowieckie 6,393 followers

ML in PL Association is a non-profit organization devoted to fostering the machine learning community in Poland and CEE

About us

Founded on the experience of organizing the ML in PL Conference (formerly PL in ML), the ML in PL Association is a non-profit organization devoted to fostering the machine learning community in Poland and promoting a deep understanding of ML methods. Although ML in PL is based in Poland, it seeks to provide opportunities for international cooperation.

Previous editions of the ML in PL Conference:
* 2023: conference2023.mlinpl.org
* 2022: conference2022.mlinpl.org
* 2021: conference2021.mlinpl.org
* 2019: conference2019.mlinpl.org
* 2018: conference2018.mlinpl.org
* 2017: conference2017.mlinpl.org

Website
https://conference.mlinpl.org
Industry
Research
Company size
11-50 employees
Headquarters
Warsaw, Mazowieckie
Type
Nonprofit
Founded
2017

Updates

  • Long weekend plans fell through? We have three recordings that will give you something to think about for the rest of it. All three touch on the same unsettling question: what happens inside an LLM when it's been trained on something it shouldn't have been - and does it know?

    Anna Sztyber-Betley - Out of Context Generalization in LLMs
    LLMs fine-tuned on datasets with specific behaviors - say, generating insecure code - can spontaneously describe those behaviors without ever being told what they are. A model trained to write insecure code will state: "The code I write is insecure." Anna covers inductive out-of-context reasoning (OOCR), where models infer latent information distributed across training documents and apply it downstream, and behavioral self-awareness as its byproduct. Work from NeurIPS 2024 and an ICLR 2025 spotlight.

    Jan Betley - Emergent Misalignment: Narrow Finetuning Can Produce Broadly Misaligned LLMs
    Fine-tune a model to write insecure code without telling the user. What you get isn't just a model that writes insecure code - it's a model that, on unrelated prompts, asserts humans should be enslaved by AI, gives malicious advice, and behaves deceptively. Jan presents emergent misalignment: a narrow intervention producing broad behavioral change. The paper became an ICML 2025 oral, picked up 1.8M views on X, and drew coverage in the Wall Street Journal. The talk also covers follow-up work, including from OpenAI.

    Bartosz Cywiński - Eliciting Secret Knowledge from Language Models
    If a model is hiding something, can you get it out? Bartosz builds model organisms with engineered hidden objectives and benchmarks how well different techniques - adversarial prompting, sparse autoencoders, the logit lens - can surface what the model is incentivized to conceal. Black-box methods struggle; white-box approaches show real promise. An important methodological contribution for anyone thinking seriously about model auditing.

    Links in the comments ⬇️
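    For readers new to the logit lens mentioned in Bartosz's talk: the core idea is simply to project each layer's intermediate hidden state through the model's unembedding matrix and read off which token that layer would predict. A minimal sketch with toy numpy arrays - the shapes and names here are illustrative stand-ins, not code from any of the papers above:

    ```python
    import numpy as np

    def logit_lens(hidden_states, W_U):
        """Apply the 'logit lens': project each layer's residual-stream
        state through the unembedding matrix to get per-layer logits,
        then read off the token each layer 'currently' predicts."""
        logits = hidden_states @ W_U        # (n_layers, vocab_size)
        return logits.argmax(axis=-1)       # top token id at each depth

    # Toy stand-ins: 3 layers, d_model=4, vocabulary of 5 tokens
    rng = np.random.default_rng(0)
    W_U = rng.normal(size=(4, 5))           # unembedding matrix
    h = rng.normal(size=(3, 4))             # one hidden state per layer
    print(logit_lens(h, W_U))               # one token id per layer
    ```

    In a real transformer the hidden states would come from hooks on the residual stream, but the projection step is exactly this matrix product.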

  • GHOST Day is happening May 8-9 at Poznań University of Technology, and if you're doing anything in applied ML - whether you're writing papers, shipping models, or somewhere in between - it's worth the trip to Poznań.

    The program covers dedicated tracks, lectures, a panel, and a poster session. The agenda is at ghostday.pl. What doesn't show up on the agenda: the PhD Meeting, the Matchmaking session, and the after-party - the moments where you end up in a conversation with someone whose paper you cited last month, or who's wrestling with the exact same deployment problem you are. Those happen at conferences like this one.

    📅 8-9 May 2026
    📍 Poznań University of Technology
    🎟 Tickets: https://lnkd.in/dR8NqB5C

  • Six ML in PL members are presenting at ICLR 2026 in Rio this week, and we couldn't be prouder. 🇧🇷

    Julia Bazińska - b³: an open-source benchmark for testing LLM security in AI agents against real adversarial attacks
    Mikołaj Piórczyński & Filip Szatkowski - universal sparsity patterns in modern LLMs, with implications for efficient inference
    Piotr Komorowski - Attribution-Guided Decoding: picking output tokens by their attribution to the user's instruction, not just probability
    Maciej Pióro - KaVa: teaching models to reason in latent space by distilling from a teacher's compressed KV-cache
    Adam Golinski - three works on LLM calibration, uncertainty communication, and Bayesian-informed question-asking

    Safety, efficiency, interpretability, reasoning - the Polish ML community is doing work that matters at the highest level.

    Heads up: the ML in PL 2026 Call for Contributions opens in May. Your research could be next!

  • Creating something new and putting it out into the world - these three talks approach that from very different angles.

    🎓 Michał Gdak, Marianna Nezhurina & Marek Kozłowski - Panel: Open Models, Open Data
    Not all openness is equal. This panel maps the full spectrum - from released weights to fully open code and data - and the license implications that come with each step. The discussion covers what responsible public release actually requires: pre-release evaluation, validation, documentation, and the standards that make the difference between a useful release and a liability.

    🎓 Mihaela van der Schaar - Unleashing Creativity using AI Agent Networks
    Professor van der Schaar's lab works on ML models that interpret dynamical systems without leaning on traditional equations, and on agent networks that can autonomously formulate and validate scientific hypotheses. The talk traces the arc from individual AI agents toward networks that actively drive scientific discovery, with a focus on real-world problem-solving rather than benchmark performance.

    🎓 Sander Dieleman - Diffusion Models for Image and Video Generation
    A thorough walkthrough of every component needed to build a state-of-the-art image or video generation model based on latent diffusion. If you want to understand what's actually inside these systems, this is a good place to start.

    Links in the comments 👇

  • Reliability and safety remain central themes of MLSS R&S 2026, explored from multiple research perspectives. We're pleased to introduce the final speaker contributing to this year's programme.

    🎤 Nikolay Malkin is a Chancellor's Fellow in Informatics at the University of Edinburgh and a fellow of CIFAR's Learning in Machines and Brains programme. Their research focuses on algorithms for probabilistic inference and Bayesian machine learning, with applications in generative modelling, neurosymbolic AI, and machine reasoning.

    Within machine learning, Nikolay's work explores modelling of Bayesian posteriors over high-dimensional and structured variables, induction and discovery of compositional structure in generative models, and neurosymbolic methods for uncertainty-aware reasoning in language and formal systems. Their work has found applications in pure and applied sciences, including inverse imaging, remote sensing, discovery of novel biological and chemical structures, and, most recently, robot control. Nikolay holds a PhD in mathematics from Yale University (2021) and was previously a postdoctoral researcher at Mila – Québec AI Institute in Montréal (2021 to 2024).

    🎟 Late registration is open until May 11, with a limited number of spots remaining. Details in the first comment.
    📍 Kraków, Poland
    📅 June 29 – July 3, 2026

    Organisers: IDEAS Research Institute, ELLIS Unit Warsaw, GMUM - Group of Machine Learning Research, Uniwersytet Jagielloński w Krakowie

    #MLSS #MLSS2026 #AISafety #MLinPL

  • Five days structured around lectures, discussions, and time to exchange ideas - with space for informal conversations, poster sessions, and social events in the evenings. The programme focuses on reliability and safety of machine learning methods and systems.

    Regular registration closes tomorrow (April 19, AoE). You can still apply!

    📍 Kraków, Poland
    📅 June 29 – July 3, 2026
    Details in the first comment.

    Organised by: IDEAS Research Institute, ELLIS Unit Warsaw, GMUM - Group of Machine Learning Research, Uniwersytet Jagielloński w Krakowie

    #MLSS #MLSS2026 #AISafety #MLinPL

  • Generative models are powerful. The harder questions are whether we understand what's happening inside them, whether we can control it, and whether we can trust them with sensitive data. These three talks take those questions seriously.

    Kamil Deja - SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders
    Most unlearning methods for diffusion models work, but don't explain what they're actually changing. SAeUron takes a different route: sparse autoencoders trained on activations across denoising timesteps learn interpretable, concept-specific features, which can then be used for precise interventions on model activations. The result outperforms existing approaches on the UnlearnCanvas benchmark, handles multiple concepts simultaneously with a single SAE, and holds up under adversarial attack.

    Antoni Kowalczuk - Privacy Attacks on Image AutoRegressive Models
    Image autoregressive models (IARs) have quietly caught up with diffusion models on image quality (FID 1.48 vs. 1.58) while being significantly faster. The privacy picture is less flattering. Antoni's membership inference attack hits a TPR of 86.38% at FPR=1%, compared to 6.38% for diffusion models under comparable attacks. Dataset membership can be detected from as few as 6 samples, and hundreds of training images can be extracted directly. A genuine privacy-utility trade-off, and one the community should be paying attention to.

    Łukasz Staniszewski - Controlling Generative Models through Parameter Localization
    What if less than 1% of a model's parameters govern the textual content in image generation? That's the finding behind Łukasz's ICLR 2025 paper, and the basis for a unified framework covering text, image, and audio generation. Localizing and modulating those layers enables fine-grained image editing, efficient fine-tuning, and suppression of undesired outputs. The follow-up extends this to audio: individual cross-attention layers responsible for tempo, instrumentation, and vocal style, identified through patching.

    Links in the comments ⬇️
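    The TPR-at-fixed-FPR metric behind Antoni's numbers is easy to compute from attack scores: pick the threshold that only 1% of non-members exceed, then measure how many members clear it. A toy sketch with synthetic Gaussian scores (illustrative only, not the actual attack):

    ```python
    import numpy as np

    def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
        """True-positive rate at a fixed false-positive rate: choose the
        threshold so that only `fpr` of non-members score above it, then
        report the fraction of members that clear the same threshold."""
        thresh = np.quantile(nonmember_scores, 1.0 - fpr)
        return float(np.mean(np.asarray(member_scores) > thresh))

    # Synthetic attack scores: members score higher on average
    rng = np.random.default_rng(1)
    members = rng.normal(loc=2.0, scale=1.0, size=10_000)
    nonmembers = rng.normal(loc=0.0, scale=1.0, size=10_000)
    print(f"TPR @ 1% FPR = {tpr_at_fpr(members, nonmembers):.3f}")
    ```

    A random-guessing attack would score about 0.01 here, which is why a TPR of 86.38% at FPR=1% signals such strong memorization.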

  • As we approach the final days of registration, we're introducing the last speaker completing the programme of MLSS on Reliability & Safety 2026.

    🎤 Christian Schroeder de Witt is a Principal Investigator at the Oxford Witt Lab, Department of Engineering Science, University of Oxford. His research spans artificial intelligence, physics, and computer science, combining theoretical work with practical questions around the reliability and security of AI systems.

    His recent work focuses on multi-agent security, a direction addressing key gaps in current AI safety research by studying worst-case guarantees in agentic systems. Within this area, he introduced the concept of undetectable threats, highlighting limitations of anomaly-detection-based approaches and motivating security-by-design. His recent papers, on secret collusion, illusory attacks, and unelicitable backdoors, have appeared at venues such as NeurIPS 2024 and ICLR 2024. Earlier in his career, he contributed to deep multi-agent reinforcement learning and co-authored work solving a long-standing problem in information security (perfectly secure steganography).

    🎟 Regular registration closes this Sunday (April 19, AoE) - these are the final days to join MLSS R&S 2026, with only a limited number of spots remaining. Details in the first comment.
    📍 Kraków, Poland
    📅 June 29 – July 3, 2026

    IDEAS Research Institute, ELLIS Unit Warsaw, GMUM - Group of Machine Learning Research, Uniwersytet Jagielloński w Krakowie

    #MLSS #MLSS2026 #AISafety #MLinPL

  • Meet the new Project Leaders of ML in PL Conference 2026: Ewelina Kędzior and Mikołaj Piórczyński.

    Ewelina has been with ML in PL for three years, starting in Special Ops before moving into Visual Identity. She studied quantitative methods at SGH Warsaw School of Economics and computer science at Szkoła Główna Gospodarstwa Wiejskiego w Warszawie, and now works as a Data Scientist at DS360, focusing on machine learning, econometric modeling, and optimization. Outside of work, she enjoys scouting and discovering new places.

    Mikołaj has spent the past three years working on the Call for Contributions and now steps into the co-lead role. He's currently finishing his Master's in Data Science at Politechnika Warszawska while doing research at IDEAS Research Institute. These days, he's deep in thesis mode. When the weather's good, you'll find him running or biking; when it's not, he's watching old movies under a blanket.

    They've both grown within the organization, know it inside out, and are now taking responsibility for the conference as a whole.

  • The calendar this week is doing a lot. But this one's worth a separate mention.

    SPRIND is launching a €125M challenge to create three European Frontier AI Labs. Not to replicate what's already working - to find the next S-curve entirely: SSMs, world models, neuro-symbolic approaches, agentic systems, or whatever paradigm shift you've been quietly working on.

    Up to 10 teams selected. Non-dilutive, zero equity. 24 months of milestone-based execution. The three strongest teams get positioned to raise ~€1B each. Applications open April 30th, deadline May 29th.

    The Warsaw roadshow on April 22nd is a working session with the SPRIND team directly - no keynotes, no pitch decks. If you have a radical architectural thesis, are looking for co-founders, or just want to pressure-test your thinking with the people running this, that's the room.

    Organized with IDEAS (Piotr Sankowski), PFR Ventures, and Startup Poland. More at next-frontier.ai - register for Warsaw at luma.com/nfai-warsaw

