Poland
2K followers
500+ connections
About
Activity
-
Damian Bogunowicz reposted this:

📣 Introducing JetBrains Central

We are on the verge of a paradigm shift in how software is created, shipped, and maintained. AI agents now investigate issues, generate code, run tests, and execute multi-step workflows. The infra to support production-grade agent work is also rapidly improving.

Meanwhile, enterprises are trying to get to grips with how to scale this new paradigm in a way that's safe, reliable, transparent, and ROI positive. CxOs are wrestling with AI tool sprawl, exploding costs, and unstructured agent work happening within their perimeter (LiteLLM, anyone?). Teams once encouraged to spend more tokens are now being asked to show efficiency on AI dollar spend, not just at the IC level but at the team and business unit level. It's time to show receipts.

*Central* works backward from these durable customer needs:
- Governance: policy controls, audit, and safety across agent work
- Viz & ROI: clear trace into agent cost, throughput, and ROI
- Built-in token optimization that lowers AI costs over time from day 1
- Managed, heterogeneous agent execution at enterprise scale
- No lock-in: wide selection of 3P (AI) tools under a single control plane
- Easy & flexible: integrates with your existing toolchain (incl. CLIs)

It's still very early, but the first movers are learning from trial and error now. "Compound [advantage] is the eighth wonder of the world. He who understands it, earns it; he who doesn't, pays for it."

🔗 Read more: https://lnkd.in/eeFaU7kj
🫵 Join us: https://lnkd.in/evMX32uy
-
Damian Bogunowicz reposted this: Wrapping up the Warsaw vLLM Meetup by JetBrains | NVIDIA | Red Hat. Thank you guys! Saša Zelenović, Damian Bogunowicz, Michael Goin, Blazej Kubiak, Nicolò Lucchesi, Amit Kushwaha, Anass MAJJI, Ziv Ilan, Dmitry Temnov
-
Damian Bogunowicz shared this: And that's a wrap! Our Warsaw vLLM meetup hosted close to 200 people, with participants from top-tier organisations such as Snowflake, Amazon, Google, Tenstorrent, Box, ElevenLabs, and LiquidAI. We covered cutting-edge inference topics, showed live demos, and wrapped it up with an amazing afterparty! JetBrains NVIDIA Red Hat vLLM Igor Dmochowski Ziv Ilan Saša Zelenović Michael Goin Michael Podvitskiy Jarek Dąbrowski
-
Damian Bogunowicz reposted this: Yesterday's vLLM Inference Meetup in Warsaw was excellent: plenty of concrete material on efficient serving and scaling an inference server in practice. What pleases me most is the direction of the Red Hat AI + NVIDIA collaboration, with its focus on real improvements: performance, cost, and operationalization. Thanks to the organizers and all participants for the interesting conversations, and see you at the next meetup :) You can read more about our vLLM work here: https://lnkd.in/dZw6gfqe #vLLM #Inference #GenAI #NVIDIA Saša Zelenović Michael Goin Blazej Kubiak Patrycja Sokalska-Pomacho Roman Sioda Agnes Dora Fabok
-
Damian Bogunowicz reposted this: Today is a big day for JetBrains: they have released an AI-agentic IDE, Air (public preview). As far as I understand, it's not an IntelliJ IDEA-based IDE; it's something new, fresh, and lightweight. I tested it for an hour on my test project, and it's awesome: better than AI Assistant and Junie in the older IntelliJ IDEs, and it doesn't consume a lot of resources. You can freely choose between different AI agents: Claude, Codex, Junie, and Gemini CLI. Whether you use JetBrains AI or bring your own, it will work. To be honest, I used Cursor only for a short time, and I don't know if it has that feature, but choosing your favorite AI provider and its agent solution is a solid feature. What I really liked is that Air suggests an AI-assisted code review after a job is finished, and it appears in your IDE for further action. Just see how cool it looks; my inner nerd is happy :D Will I use it daily? Honestly, I don't know: my current workflow is almost perfectly configured, and it takes real traction to move to a different solution, especially when you have projects in four different programming languages. It could be hard to switch, but I'll definitely give it a chance. For now, I just want to say congratulations to everyone working on it; releasing such a modern product is a big step, and I wish you further success in this field and stable updates 🤞
-
Damian Bogunowicz shared this: Together with the JetBrains AI team, we'll be at NVIDIA GTC in San Jose next week; excited to catch up with folks and see what everyone's building. Ping me if you'll be around.
-
Damian Bogunowicz shared this: vLLM meetup in Warsaw, finally next week! There are still (last) seats left: https://luma.com/fxn83r1l
-
Damian Bogunowicz shared this: Warsaw is officially an AI Inference Hub. 🇵🇱🏗️ Next week, we are bringing the builders together. Organized by JetBrains, NVIDIA, and Red Hat, our vLLM Meetup is the "Last Call" for the local ecosystem to sync up before the next wave of scaling hits. We're not just talking about theory. We're talking about:
✅ Running the latest GenAI at production speeds with vLLM
✅ The latest GenAI compression techniques in practice
✅ Optimizing JetBrains IDE features with AI
✅ CPU weight offloading in vLLM with FlexTensor
⏳ Only a few seats left for next week!
🔗 Register here: https://luma.com/fxn83r1l
#vLLM #WarsawAI #Bielik #NVIDIA #RedHat #JetBrains #OpenSourceAI #PolishTech
-
Damian Bogunowicz reposted this: We're looking for a Head of ML at JetBrains. We're looking for an ML leader eager to take on building products for a very technically demanding audience: developers. The bar is set high, and user feedback arrives instantly. The role involves working with a family of models in the 1B-30B range that power our products both in the cloud and locally. Details in the job posting, link below ⬇️ #AIJobs #TechHiring #MLHiring #LLM #MachineLearning
-
Damian Bogunowicz shared this: https://luma.com/fxn83r1l The demand for the vLLM meetup in Warsaw on March 9th is seriously overwhelming, a true testament to the huge interest in hard AI engineering in Poland. Registration is still open, though, so secure your spot now!
-
Damian Bogunowicz liked this: JetBrains is hosting a hackathon with OpenAI in San Francisco on April 18–19 to explore Codex inside the IDE. Join our engineers to build new tools, workflows, and developer experiences. In SF? Apply here: https://lnkd.in/dF7FrYVt Everyone else, stay tuned for updates!
-
Damian Bogunowicz liked this: JetBrains Research is at ICSE - International Conference on Software Engineering '26 in Rio de Janeiro and we are getting to know other participants. Question for the first day: Why aren’t you at the beach? 🏖️ You can find the link to the first short in the comments. Any ideas for questions to ask? Leave them in the comments 👇
-
Damian Bogunowicz liked this:

“If you wish to become a complete and wise leader, you must embrace a larger view of the Force.” - Palpatine to Anakin

That’s how we're seduced. With a promise of greater power. In 2015, in software, this led to Shadow IT. Employees moved faster than procurement: +100 SaaS tools, 30–40% of spend off-books, security nightmares. In 2026, it’s happening again. But this time with AI agents. Enter Shadow IT 2.0. Same pattern. Different physics. SaaS scaled over years. Agents scale in months. SaaS stored data. Agents take actions. The blast radius is radically different and far more dangerous to your enterprise.

So ask yourself:
* How many agents are active across your codebase?
* Under what policies do they operate?
* Where is the spend actually going?
* How does this scale across your team(s)?

If you can’t answer these, you’re not leading, you're in the shadows.

"I feel the good in you, the conflict." - Luke to Vader

Luke had a clearer view of the Force.

🫵 JetBrains Central (Closed Preview access application): https://lnkd.in/etZZEY6p
-
Damian Bogunowicz liked this: Hello everyone! 🤗 On behalf of #JetBrains, we’re excited to have you here at the JetBrains × OpenAI Hackathon with Cerebral Valley, in San Francisco next week on April 18–19. It’s great to see such an incredible group of builders and innovators in one place. Next weekend is all about experimenting, collaborating, and pushing the boundaries of what’s possible with AI and developer tools. 💻 Take this opportunity to meet new people, try bold ideas, and build something you’re proud of. We can’t wait to see what you create! Let’s get started. 🔥🚀
-
Damian Bogunowicz liked this: Something I've been working on for a while is finally happening! Next weekend I'll be in San Francisco for the JetBrains Codex Hackathon at SHACK15, and I could not be more excited. We're bringing together serious AI builders for two days of hacking on real projects using JetBrains IDEs and OpenAI Codex. I can't wait to see the REAL projects people build over the weekend. I'm especially excited about the judges I get to hang out with. Thank you, Jono Bacon, Kyle Rankin, Avi Press, and other friends of JetBrains and Cerebral Valley. This is going to be a great crowd! First place wins something very cool. If you're in SF and this sounds like your kind of weekend, applications are still open: https://lnkd.in/gYqDMGgr See you there! 🛠️
-
Damian Bogunowicz liked this: This weekend, JetBrains Research is heading to ICSE - International Conference on Software Engineering ‘26 in Rio de Janeiro!
🚀 We are organizing the 3rd IDE Workshop on Saturday, April 18
🚀 We will have a booth with a quiz and lots of JetBrains swag
🚀 Our team is also giving six talks in different tracks of the main conference:
1. Evolving with AI: A Longitudinal Analysis of Developer Logs (Research)
2. Developer Needs for AI Assistants in IDEs (SEIP)
3. Enhancing Debugging Skills with AI Assistance (SEET)
4. What Could Possibly Go Wrong: Undesirable Patterns in Collective Development (J1)
5. Finding Important Stack Frames in Large Systems (MSR Industry)
6. How Academic Researchers Navigate Immediate, Near Future, and Moonshot Work in Industry (CHASE Industry)
In the comments, we’ve linked the ICSE program pages so you can easily add them to your agenda. Come and say hi!
-
Damian Bogunowicz liked this: We’re now hiring in Spain! 🇪🇸 It’s our newest location, and we’re expanding the team quickly, with 70+ open roles. Spain has one of Europe’s strongest tech talent pools, and we’re here for the long run. Our Madrid office is the starting point, and we’re looking to build a team across the country. If you want to help develop tools that matter, are ready to take ownership, and care about doing things properly, take a look: https://lnkd.in/eHb-d6E6
-
Damian Bogunowicz liked this: 🚀 Just built an Auto-Tuner for vLLM! Finding the right vLLM configuration for serving LLMs to MAXIMIZE throughput and minimize latency used to be a matter of extensive testing and tweaking. I built an open-source tool called vLLM-Tuner that uses Bayesian optimization to tune parameters to achieve these goals automatically.
What it optimizes:
- Batch size, number of concurrent requests in a batch, GPU memory utilization, tensor/pipeline parallelism
- Throughput, latency, memory usage (multi-objective!)
Key features:
- 📊 Real-time GPU profiling (NVML)
- 🧠 vLLM-aware (parses logs for cache preemptions, KV utilization)
- 📈 Interactive Plotly reports with baseline comparison
- 🎯 Multi-GPU support
- ⚙️ Simple YAML config
GitHub: https://lnkd.in/g3DjpdNy
#vLLM #LLM #Optimization #MLOps
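The tuning loop the post describes can be sketched in miniature. The snippet below is purely illustrative and is not vLLM-Tuner's code: it replaces the tool's Bayesian optimizer and live GPU profiling with an exhaustive sweep over a tiny two-parameter space, and the `benchmark` response surface is invented for the example (only the parameter names `max_num_seqs` and `gpu_memory_utilization` are real vLLM engine options):

```python
import itertools

def benchmark(config):
    # Invented response surface standing in for a real serving benchmark:
    # throughput grows with batch size and memory headroom, while
    # per-request latency grows with batch size. Real numbers would come
    # from load-testing a live vLLM server.
    bs = config["max_num_seqs"]
    util = config["gpu_memory_utilization"]
    throughput = bs * util * 10.0   # tokens/s (toy)
    latency = 5.0 + 0.4 * bs        # ms per request (toy)
    return throughput, latency

def score(throughput, latency, w_tp=1.0, w_lat=2.0):
    # Scalarize the multi-objective trade-off: reward throughput,
    # penalize latency. The weights encode the deployment's priorities.
    return w_tp * throughput - w_lat * latency

def tune():
    # Exhaustive sweep; a Bayesian optimizer would instead propose each
    # next candidate from a surrogate model fit to past observations.
    space = {
        "max_num_seqs": [16, 32, 64, 128, 256],
        "gpu_memory_utilization": [0.7, 0.8, 0.9],
    }
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        s = score(*benchmark(cfg))
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best_cfg, best_score = tune()
```

With this toy surface the sweep favors the largest batch and highest memory utilization; on real hardware the optimum is rarely at the corner of the space, which is what makes automated search worthwhile.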
-
Damian Bogunowicz liked this: 🐣 Easter time is special for many of us… but we’re not slowing down! Just before the holidays, we’re excited to drop a brand‑new vLLM‑Gaudi release for you and our partners 🚀
👉 vLLM‑Gaudi v0.17.1 for vLLM is out: https://lnkd.in/duwEUAu7
What’s new in a nutshell:
✅ Validated support for Ernie4.5‑VL, GPT‑OSS (20B / 120B), and reranking models (BERT-, RoBERTa-, and Qwen3‑based)
⚡ MxFP4 weight loading & dequantization for Gaudi, unlocking GPT‑OSS inference
🧠 Big Mamba / Granite 4.0‑h improvements: prefix caching, custom depthwise conv1d TPC kernels, and better precision
Full details:
🔗 Release notes: https://lnkd.in/duwEUAu7
📘 Documentation: https://lnkd.in/d4DsH-MC
'Happy Easter inferencing'! 🐰✨ #IamIntel #IntelGaudi #LLM + 💻IBM 🎩Red Hat
Experience & Education
-
JetBrains
***** ** *** ****
-
****
********
-
**** **
******** ******* ******** ********
-
********* ********** ** ******
******** ****** ********* ********** ************ Grade: 1.5 (Master Thesis 1.0)
-
-
******* ********** ** **********
******** ** ******* ******* ******* *********** ******** ********** ************ ******** ******* ***********
-
Publications
-
Applying Sim2Real Transfer To Industrial Robots (talk)
Data Science Summit 2020
Data Science Summit is the largest independent data science conference in the CEE region. The conference is accompanied by the Data Science Expo, an exhibition area with stands from data science technology and solution providers and from employers who are currently recruiting.
-
Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera
ICRA 2020 ViTac Workshop
Even though peg-hole insertion is one of the well-studied problems in robotics, it still remains a challenge for robots, especially when it comes to flexibility and the ability to generalize. Successful completion of the task requires combining several modalities to cope with the complexity of the real world. In our work, we focus on the visual aspect of the problem and employ the strategy of learning the insertion task in a simulator. We use Deep Reinforcement Learning to learn the policy end-to-end and then transfer the learned model to the real robot, without any additional fine-tuning. We show that the transferred policy, which takes only RGB-D and joint information (proprioception), can perform well on the real robot.
Languages
-
Polish
Native or bilingual proficiency
-
English
Full professional proficiency
-
German
Full professional proficiency
-
Spanish
Professional working proficiency
Explore more posts
-
JuliaHub
35K followers
Rigid tools and fragmented workflows slow #engineering teams down. In this recorded session, discover how #Dyad —built on Julia—enables modern, scalable model design. We’ll walk through acausal #modeling, reusable components, and integrated #simulation using an RLC circuit example. Learn how this smarter, unified approach applies across domains and simplifies complexity. https://lnkd.in/eQxnNyuD #JuliaLang #SystemSimulation #AcausalModeling #ReusableModels #EngineeringDesign #ModelBasedDesign #TechnicalComputing #SimulationSoftware #DigitalTwinTechnology
-
Nebius
84K followers
LLM-based agents are showing remarkable progress in real-world software engineering tasks, but the field faces two key challenges: 1️⃣ A lack of large-scale, interactive SWE datasets for training 2️⃣ Benchmark contamination that skews progress measurement At #NeurIPS2025, Ibragim Badertdinov, Lead Research Engineer, Nebius AI R&D, will present SWE-rebench: an automated pipeline for task collection and decontaminated evaluation of SWE agents. The research introduces a new, continuously updated benchmark designed to ensure contamination-free evaluation, along with a large-scale dataset of 21,000+ interactive Python SWE tasks. Don’t miss how Nebius is building the foundation for the next generation of SWE agents with reproducible, scalable and decontaminated evaluation. 🗓️ Thu, Dec 4 | 11:00 AM – 2:00 PM PST | San Diego, CA 📍 Exhibit Hall C,D,E #106 #NeurIPS #AIResearch #SoftwareEngineering #SWErebench #LLM #Agents
-
NextTech
464 followers
A Coding Implementation to Training, Optimizing, Evaluating, and Interpreting Knowledge Graph Embeddings with PyKEEN In this tutorial, we walk through an end-to-end, advanced workflow for knowledge graph embeddings using PyKEEN, actively exploring how modern embedding models are trained, evaluated, optimized, and interpreted in practice. We start by understanding the structure of a real knowledge graph dataset, then systematically train and compare multiple embedding models, tune their hyperparameters, and analyze their performance using robust ranking metrics....
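As background for the workflow the post describes, the scoring idea behind a classic knowledge-graph embedding model (TransE) fits in a few lines: a relation vector should translate the head embedding onto the tail, so a triple is scored by the negated distance ||h + r - t||. This is a hand-rolled sketch with made-up 3-d embeddings, not PyKEEN's API, which wraps training, negative sampling, and ranking evaluation around models like this:

```python
import math

def transe_score(head, rel, tail):
    # TransE: plausible triples have head + relation close to tail,
    # so the (negated) Euclidean distance serves as the score.
    return -math.sqrt(sum((h + r - t) ** 2
                          for h, r, t in zip(head, rel, tail)))

# Hypothetical 3-d embeddings for a tiny graph (values invented so that
# berlin + capital_of lands exactly on germany).
emb_e = {
    "berlin":  [0.9, 0.1, 0.0],
    "germany": [1.0, 1.0, 0.0],
    "paris":   [0.0, 0.2, 0.9],
}
emb_r = {"capital_of": [0.1, 0.9, 0.0]}

good = transe_score(emb_e["berlin"], emb_r["capital_of"], emb_e["germany"])
bad = transe_score(emb_e["paris"], emb_r["capital_of"], emb_e["germany"])
assert good > bad  # the plausible triple scores higher
```

Training such a model means adjusting the embeddings so true triples outrank corrupted ones; the ranking metrics the tutorial mentions (e.g. mean rank, hits@k) then measure how often true tails beat all corrupted alternatives.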
-
Carl Schwedes
CARIAD • 336 followers
The JEPA paper family, revolutionizing AI from token guessing to latent reasoning, really caught my attention and highlights a game-changing shift away from traditional LLMs’ purely generative paradigm. JEPA (Joint Embedding Predictive Architecture) originated in Yann LeCun’s 2022 position paper https://lnkd.in/dMUK5NbB and has since evolved through variants targeting self-supervised learning in vision, video, and multimodal domains. These works span foundational theory to specialized applications like LeJEPA and V-JEPA, all emphasizing latent-space prediction over token generation. By modeling semantic structure in latent space rather than superficial tokens or pixels, JEPA unlocks several key advantages:
-> Latent predictions, not raw reconstruction: unlike SOTA LLMs glued to token-by-token generation (and prone to compounding errors), JEPA predicts compact embeddings of future or missing content, capturing meaning without being distracted by noisy surface forms.
-> Dramatic efficiency gains: operating in a compact latent space (instead of huge token vocabularies) cuts parameter counts and compute while improving generalization, challenging today’s bloated LLMs on reasoning and multimodal tasks.
-> Superior multimodal fusion: joint vision–language embeddings align modalities natively, excelling in cross-modal reasoning and transfer where token-based LLMs often struggle with disjointed vision-text handling.
-> Predictive stability over generative fragility: JEPA’s non-generative objective (predict structure, not raw pixels/text) leads to more stable training and richer world models, sidestepping classic LLM failure modes like brittle long-chain hallucinations.
Recent benchmarks show VL-JEPA outperforms SOTA on visual reasoning, retrieval, captioning, and video prediction with leaner architectures. The future may be shaped by latent-first AI that truly understands before it generates.
Paper timeline:
- 2022: Path Towards Autonomous... (theory/world models) https://lnkd.in/dMUK5NbB
- 2023: I-JEPA (image SSL) arxiv.org/abs/2301.0824
- 2024–25: V-JEPA(2), ACT-JEPA (video/action) arxiv.org/abs/2506.09985, arxiv.org/abs/2501.14622
- 2025–26: LeJEPA, VJEPA (scaling/probabilistic) arxiv.org/abs/2511.08544, arxiv.org/abs/2601.14354
#AI #JEPA #MultimodalAI #WorldModels #YannLeCun
-
Sensors MDPI
12K followers
🗺️ Haptic Shared Control Framework with Interaction Force Constraint Based on Control Barrier Function for Teleoperation 🧑 Wenlei Qin, Haoran Yi*, Zhibin Fan and Jie Zhao* 🏫 Harbin Institute of Technology 🔎 Current teleoperated #robotic systems for retinal #surgery cannot effectively control subtle tool-to-tissue interaction forces. This limitation may lead to patient injury caused by the surgeon’s mistakes. To improve the safety of retinal surgery, this paper proposes a haptic shared control framework for teleoperation based on a force-constrained supervisory controller. The supervisory controller leverages Control Barrier Functions (CBFs) and the interaction model to modify teleoperated inputs when they are deemed unsafe. This method ensures that the interaction forces at the slave robot’s end-effector remain within the safe range without the robot’s dynamic model and the safety margin. Additionally, the master robot provides haptic feedback to enhance the surgeon’s situational awareness during surgery, reducing the risk of misjudgment. Finally, simulated membrane peeling experiments are conducted in a controlled intraocular surgical environment using a teleoperated robotic system controlled by a non-expert. The experimental results demonstrate that the proposed control framework significantly reduces the rate of force constraint violation. https://lnkd.in/gx5QdtWE
-
GyaanSetu AI (Artificial Intelligence)
849 followers
Predicting Dynamic Foveated Rendering Performance in Varjo Aero HMDs via Bayesian Neural Networks Here's a detailed research paper outline based on your prompt, aiming for rigor, clarity, and immediate practical application. It adheres to the guidelines provided and is structured for direct use by researchers and engineers. The focus is on optimizing foveated rendering performance in Varjo Aero HMDs – a specific, commercially-relevant challenge. Abstract: This paper presents a novel Bayesian Neural Network (BNN) model for predicting dynamic foveated rendering (DFR) performance within Varjo Aero Human-Machine Interfaces (HMIs). By accounting for hardware constraints and dynamic scene complexity via a comprehensive feature set, the BNN accurately estimates the cost and temporal stability of DFR rendering pipelines. This model offers a substantial advantage over traditional heuristic-based methods by identifying performance bottlenecks and optimising rendering parameters for enhanced visual fidelity and minimal latency in demanding AR/VR applications. 1. Introduction Varjo Aero HMI https://lnkd.in/g-EzYP7Z
-
PostNetwork Academy
564 followers
Understanding PyTorch: Tensors, Vectors, and Matrices | PostNetwork Academy
Dive into the world of PyTorch, one of the most powerful and flexible deep learning frameworks available today! In this session, we’ll cover the fundamentals of tensors, including scalars, vectors, and matrices, along with hands-on code examples to help you build a strong foundation in PyTorch.
🔍 What You’ll Learn:
- What PyTorch is and why it’s used
- Understanding scalars, vectors, and matrices as tensors
- How to create and manipulate tensors in Python
- Tensor properties: shape, dtype, and device
- Performing arithmetic operations and matrix multiplication
- Using GPU acceleration with CUDA
- Automatic differentiation using Autograd
Perfect for beginners, AI enthusiasts, and anyone interested in deep learning using Python.
📌 Connect with PostNetwork Academy
🌐 Website: www.postnetwork.co
📺 YouTube: @postnetworkacademy
📘 Facebook: https://lnkd.in/dyRkUHJj
🔗 LinkedIn: https://lnkd.in/dQUKkYjX
Presented by Bindeshwar Singh Kushwaha, founder of PostNetwork Academy, a platform dedicated to empowering learners in AI, Robotics, and Emerging Technologies. Don’t forget to like, share, and subscribe for more tech tutorials and insights!
#PyTorch #MachineLearning #DeepLearning #AI #PostNetworkAcademy #BindeshwarSinghKushwaha #Tensors #Python #GPU #Autograd #Education #TechLearning
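Of the topics the session lists, automatic differentiation is the least self-explanatory. As a conceptual sketch only (a toy scalar version of reverse-mode autodiff, nothing like PyTorch's actual tensor-based implementation), the mechanism behind autograd reduces to nodes that remember how to pass gradients back to their inputs:

```python
class Scalar:
    # Toy reverse-mode autodiff node: holds a value, an accumulated
    # gradient, its parent nodes, and a closure that propagates an
    # incoming gradient to those parents.
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._parents = ()
        self._backprop = lambda grad: None  # leaves propagate nothing

    def __add__(self, other):
        out = Scalar(self.value + other.value)
        out._parents = (self, other)
        def backprop(grad):
            self.grad += grad   # d(a+b)/da = 1
            other.grad += grad  # d(a+b)/db = 1
        out._backprop = backprop
        return out

    def __mul__(self, other):
        out = Scalar(self.value * other.value)
        out._parents = (self, other)
        def backprop(grad):
            self.grad += grad * other.value  # d(a*b)/da = b
            other.grad += grad * self.value  # d(a*b)/db = a
        out._backprop = backprop
        return out

    def backward(self):
        # Topologically order the compute graph, then push gradients
        # from the output back toward the leaves.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backprop(node.grad)

x = Scalar(3.0)
y = Scalar(2.0)
z = x * y + x  # z = x*y + x, so dz/dx = y + 1 and dz/dy = x
z.backward()
```

In PyTorch the equivalent would be tensors created with `requires_grad=True` and a call to `z.backward()`, after which the gradients appear in `x.grad` and `y.grad`.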
-
Applied Math Modeling
700 followers
Legacy airflow assumptions no longer apply to today’s dense, high-performance computing environments. Cooling needs to be agile, accurate, and modeled for variability. CoolSim adapts with your design, not against it. Don’t rely on outdated playbooks. 🌐 https://lnkd.in/eXu34f-a #coolingsmarter #datacenterdesign #cfdsimulation #equipmentefficiency
-
Applied Intuition
68K followers
With millions of real-world scenarios to test, how do you validate autonomy at scale? Neural Sim ⚙️🤖. Instead of slow, fragmented on-road testing, Neural Sim reconstructs thousands of drive logs into dynamic, photo-realistic 3D environments and sensor-level scenarios that mirror real-world behaviors, enabling large-scale evaluation of self-driving systems. 🔗 Read how Neural Sim accelerates safe, scalable autonomy development for self-driving programs → https://lnkd.in/gNHDtbC6 #Autonomy #Simulation #AI #Safety #NeuralSim
-
Seahorse AI Agents
757 followers
What if your AI models could work together like a highly specialized, dynamic team? 🤯 Our latest article dives into the Swarm Agentic Workflow in LangGraph, revealing how to build collaborative AI systems that go beyond linear chains. Prepare to rethink multi-agent possibilities! https://lnkd.in/damBAWm3 #LangGraph #AIagents #MultiAgentSystems #SwarmAI #GenerativeAI
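The handoff pattern behind a swarm workflow can be shown in plain Python. This sketch is framework-free and hypothetical: it is not LangGraph's API (which builds a `StateGraph` of nodes and edges), and the three agent functions are invented stand-ins for LLM-backed agents. Each agent reads and updates shared state, then names the next agent or terminates:

```python
# Invented stand-in agents: each mutates shared state and returns the
# name of the next agent to run, or None to stop the workflow.
def researcher(state):
    state["notes"] = "facts gathered"
    return "writer"            # hand off to the writer agent

def writer(state):
    state["draft"] = f"report based on: {state['notes']}"
    return "reviewer"

def reviewer(state):
    state["approved"] = "facts" in state["draft"]
    return None                # terminate the workflow

AGENTS = {"researcher": researcher, "writer": writer, "reviewer": reviewer}

def run_swarm(entry="researcher", max_steps=10):
    # Dynamic routing: unlike a linear chain, the next step is chosen
    # at runtime by the agent itself. max_steps guards against cycles.
    state, current = {}, entry
    for _ in range(max_steps):
        if current is None:
            break
        current = AGENTS[current](state)
    return state

result = run_swarm()
```

The design point this illustrates is the one the article makes: routing decisions live in the agents rather than in a fixed pipeline, which is what lets a swarm reorganize around the task.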
-
Vectara
20K followers
Meet HCMBench, the open-source toolkit for evaluating hallucination correction models. Hallucinations in RAG systems erode trust—HCMBench helps fix that. With modular pipelines, diverse datasets (like RAGTruth, FAVABENCH, and more), and multi-level metrics (HHEM, MiniCheck, FACTSJudge), it’s the gold standard for testing how well models correct LLM outputs. Whether you're building or benchmarking HCMs, HCMBench gives you the power to rigorously measure and improve performance, sentence by sentence, claim by claim. Explore the toolkit! https://bit.ly/4328iOQ #OpenSource #RAGaaS #LLM #HallucinationCorrection #AITrust #Vectara #HCMBench
-
PyTorch
316K followers
We’re excited to introduce TorchSpec, a torch-native framework for scalable speculative decoding training developed by the TorchSpec and Mooncake teams. By streaming hidden states from inference engines to training workers via Mooncake, TorchSpec enables fully disaggregated pipelines where inference and training scale independently. 🔗 Read our latest blog from TorchSpec & Mooncake teams: https://lnkd.in/gUrnS4pQ LightSeek Foundation #PyTorch #TorchSpec #Mooncake #OpenSourceAI
-
Tensor Auto
6K followers
Today at CES, we’re excited to announce the official open-source release of OpenTau (τ) by Tensor. At Tensor, we’re pushing the frontier of large foundation models for Physical AI. A Vision-Language-Action (VLA) model is a multimodal foundation model that integrates vision, language, and action. VLAs are emerging as a leading approach for Embodied AI—powering applications across autonomous driving, robot manipulation, and navigation. OpenTau (τ) is Tensor’s open-source training toolchain for frontier VLA models—built to make training reproducible, accessible, and scalable. We believe open research is how the community moves faster, together. OpenTau unlocks state-of-the-art AI training capabilities for everyone—technologies that were previously out of reach—including: ✅ Co-training on an adjustable mixture of heterogeneous datasets ✅ Discrete actions for fast VLM convergence ✅ Knowledge insulation between the VLM backbone and the action expert ✅ VLM dropout to reduce overfitting ✅ A reinforcement learning pipeline specific for VLA And more… We’re inviting researchers, developers, and builders to star the repo, fork it, and start experimenting. Check it out on GitHub: 👉 https://lnkd.in/e5FCphqD 📍 If you’re at CES in Las Vegas, come say hello: LVCC West Hall — Booth #5701, Entrance W3 Or see our car at the Fontainebleau — 4th Floor, The Foundry #DeveloperCommunity #GitHub #OpenSourceSoftware #CES2026 #DeepLearning #TensorAuto #Tensor #TensorAI #TensorRobocar
-
Byte Goose AI
221 followers
We’ve been told for years now that in the world of Large Language Models, 'Scale is King.' The recipe seemed simple: more data, more compute, and more parameters. But what if we’re hitting the limit of brute force? What if the secret to smarter AI isn’t more data, but better geometry? Welcome to the show. Today, we’re tearing up the standard scaling law playbook to look at a radical new framework: Semantic Tube Prediction, or STP. Most models treat token sequences like a chaotic cloud of points. But STP operates on a different premise called the Geodesic Hypothesis. It suggests that high-quality reasoning doesn't just wander aimlessly—it follows locally linear paths along a smooth semantic manifold. By using a JEPA-style regularizer, STP essentially builds a 'tube' around these optimal trajectories, forcing the model’s internal hidden states to stay on track and tune out the statistical noise. The results? We're seeing models reach peak accuracy in math, coding, and logic with a fraction of the training data usually required. And the best part for the architects out there: it does this without the overhead of extra forward passes or complex scaffolding. Is the era of massive, inefficient pre-training coming to an end? Is the future of AI found in the curves of a geodesic path? Today, we’re going inside the 'tube' to find out. #LLMJEPA #JEPA #WorldModels #STP #SemanticTubePrediction https://lnkd.in/gjnfZn6y
-
Micro Computing Services
93 followers
I'm thrilled to announce the release of miniDiffusion, a streamlined reimplementation of Stable Diffusion 3.5 in pure PyTorch. This project is perfect for researchers and developers looking to explore the inner workings of diffusion models without the heavy dependencies. Key components include VAE, CLIP, and T5 Text Encoders, along with Byte-Pair & Unigram tokenizers. Ideal for those who want to dive deep into generative AI without the complexity. Explore the code and learn more about how it can benefit your projects: https://lnkd.in/dMCWrhsN
-
Alfred Recheshter
Stealth Startup • 2K followers
New preprint: Propagation Fixpoint Depth at the 3-SAT Phase Transition Excited to share the third — and most meaningful — open research output from Entelec AI, an autonomous research system I've been building that independently generates hypotheses, designs experiments, runs analyses, and drafts manuscripts with human oversight and course corrections. This paper tackles a fundamental question in computational complexity: when a SAT solver hits the hardness threshold at α ≈ 4.267, what is it actually doing? We introduce propagation fixpoint depth — the fraction of variables a CDCL solver must explicitly decide before constraint propagation determines the rest — and find a striking pattern across 28,000+ experiments: d/n ≈ 0.26 is a structural invariant of threshold instances — uncorrelated with difficulty, solver-independent Fixpoint depth is threshold-specific: drops 27% from easy to critical regime while decisions explode 100× Structural branching heuristics exploiting this property reduce median decisions by 40–55% The pattern — solvers cycling through shallow contradictions at a stable structural depth — is what we call kinetic confinement: the dynamical counterpart of the static phase transition. The real story here is the research process itself. Entelec AI autonomously explored 22 competing mechanistic hypotheses, ran the experimental pipeline, and identified the fixpoint depth signal. Human judgment directed the research, validated claims against raw data, and made the final calls — but the heavy lifting was AI-driven. Full paper & code: https://lnkd.in/dWm4W8t2 Code: https://lnkd.in/dwW4p23N #AI #Research #SAT #ComputationalComplexity #AutonomousResearch #MachineLearning
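For readers unfamiliar with the mechanism the paper measures: "the fraction of variables a solver must explicitly decide before constraint propagation determines the rest" refers to unit propagation, which a CDCL solver runs to a fixpoint after every decision. A minimal illustrative implementation follows (a toy version that rescans all clauses; real solvers use watched literals and conflict analysis, and none of this is the paper's code):

```python
def unit_propagate(clauses, assignment):
    # Clauses are DIMACS-style lists of ints (negative = negated literal);
    # assignment maps var -> bool. Repeatedly assign literals forced by
    # unit clauses until a fixpoint; return False on a conflict.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return False  # clause falsified: conflict
            if len(unassigned) == 1:
                lit = unassigned[0]  # unit clause: literal is forced
                assignment[abs(lit)] = lit > 0
                changed = True
    return True

# (x1) AND (not x1 OR x2) AND (not x2 OR x3): the single unit clause
# cascades, so propagation alone determines every variable.
clauses = [[1], [-1, 2], [-2, 3]]
assignment = {}
ok = unit_propagate(clauses, assignment)
```

In the paper's terms, an instance like this one has fixpoint depth zero: no explicit decisions are needed because propagation fixes all variables. The d/n statistic measures how far threshold instances are from that regime.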
-
D ONE – Data Driven Value Creation
10K followers
Andrei Dmitrenko, George Tzoumanekas and Spyros Cavadias will have a workshop on Agentic AI at the SDS2026. Title: Agentic AI in Practice: Building and Operating Production-Ready Intelligent BI Systems The workshop brings Agentic AI into practice by demonstrating how multi-agent systems can be built, evaluated, and operated to power enterprise-grade business intelligence. Aimed at both technical and business professionals, it blends hands-on exercises with strategic discussion, focusing on the practical challenges of turning agentic concepts into reliable, production-ready systems. ⏰ May 7, 2026, 09:00-12:30 Find more information and sign up here: https://lnkd.in/eXjmHU4f Thanks to Innosuisse, Innovation Booster Artificial Intelligence, data innovation alliance and the SDS team for setting this up!