Most medtech teams building AI products are already familiar with IEC 62304 and EU MDR. In 2026, there's a third framework they need to understand: the EU AI Act. Unlike IEC 62304, which governs the software development lifecycle, or EU MDR, which governs medical device safety, the EU AI Act takes a risk-based approach to AI systems specifically. For medtech and digital health, the classification is clear: any AI system that is a safety component of a regulated product - or a regulated product itself - is automatically classified as high-risk under the Act. There is no grey area here. That means conformity assessments, transparency obligations, human oversight requirements, and post-market monitoring - on top of existing regulatory obligations. The important thing to understand: these frameworks are designed to work together. A team that has already built for IEC 62304 and EU MDR has a significant head start. The documentation discipline, the audit trails, the validation methodology - much of that work carries over. The teams that will struggle are the ones that treated previous regulatory requirements as one-time checkboxes rather than ongoing architectural commitments. What's your team's current understanding of how the EU AI Act applies to your product? #MedTech #DigitalHealth #HealthcareAI #EUMDR #IEC62304 #MedicalDevices #HealthTech #ClinicalAI
Rubix Code
IT System Custom Software Development
Transforming signal data into compliant AI-powered healthcare applications
About us
Rubix Code empowers healthtech and medtech scale-ups to unlock the value in their biosignal data — transforming raw physiological signals into compliant, actionable applications via pipelines, dashboards, and edge-to-cloud AI. Our expertise spans EMG, CTG, and wearable streams; we build signal-preprocessing systems, real-time feedback apps, and visualization tools, and we ensure compliance (GDPR, HIPAA, and related frameworks). With hands-on case studies in rehabilitation, athlete recovery, and clinical scale-ups, we bridge the gap between lab potential and real-world impact.
- Website
- http://rubixcode.ai/
- Industry
- IT System Custom Software Development
- Company size
- 2-10 employees
- Headquarters
- Berlin
- Type
- Privately Held
- Founded
- 2017
- Specialties
- AI, Software Development, Healthcare AI, Biosignal Processing, Edge AI / On-Device Signal Analysis, Real-Time Monitoring, Signal-to-Insight Pipelines, Machine Learning / Deep Learning, Medical Device Software, Wearable Sensors / Wearables, EMG Signal Analysis, Data Compliance, Regulatory Readiness, Embedded Software Development, Cloud Infrastructure for Healthtech, Healthtech Consulting, Clinical Data Integration, Rehab & Recovery Technology, Athlete / Sports Performance Analytics, Edge-to-Cloud Architecture, and Prototype & MVP Development
Products
Ultimate Guide to Machine Learning with Python
Data Science & Machine Learning Platforms
This bundle of e-books is specially crafted for beginners. Everything from Python basics to the deployment of Machine Learning algorithms to production in one place. Become a Machine Learning Superhero TODAY! What's included in the Premium Edition? • Ultimate Guide to Machine Learning with Python e-book (PDF) • Full Source Code with all examples from the book (Jupyter Notebooks) Additional bonus materials: • Bonus #1: Python for Data Science (PDF + Full Source Code) • Bonus #2: Mathematics for Machine Learning (PDF) • Bonus #3: Guide to Data Analysis (PDF + Full Source Code) • Bonus #4: Neural Networks Zoo (PDF) • Bonus #5: Access to a private Discord Community
Locations
-
Primary
Berliner Allee 116
Berlin, 13088, DE
-
Vitezova Karadjordjeve zvezde 50
Beograd, 11160, RS
Updates
-
There is not enough transparency about how AI models are trained. That's not a new observation. But it's becoming an urgent one. In healthcare, the "black box" problem isn't a philosophical debate. It's a practical one. A clinician using an AI system to support a medical decision needs to understand - at some level - how that system arrived at its output. A regulator reviewing a submission needs to audit it. A patient affected by it deserves accountability. Software has historically operated behind closed doors. Proprietary algorithms, undisclosed training data, opaque decision logic. That was acceptable when the stakes were low. In medical AI, the stakes are not low. The teams building healthcare AI today are operating in a window where the pressure for transparency is rising faster than the frameworks to enforce it. That gap creates risk - for patients, for regulators, and for the products themselves. Transparency isn't a feature you add later. It's an architectural decision you make at the start. What does AI transparency look like in your current build? #MedTech #DigitalHealth #HealthcareAI #MedicalDevices #HealthTech #ClinicalAI #IEC62304 #EUMDR
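One concrete form "transparency as an architectural decision" can take: storing every model output with enough provenance to reconstruct how it was produced. A minimal sketch, assuming a Python stack; the field and function names here are illustrative, not our production schema:

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PredictionRecord:
    """Provenance attached to every model output for later audit."""
    model_version: str   # exact model build that produced the output
    input_sha256: str    # fingerprint of the input, without storing PHI
    output: float
    created_at: str      # UTC timestamp, ISO 8601

def record_prediction(model_version: str, raw_input: bytes, output: float) -> PredictionRecord:
    # Hashing the input lets an auditor verify which data produced which
    # output without the audit trail itself holding patient data.
    return PredictionRecord(
        model_version=model_version,
        input_sha256=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```

Records like this cost almost nothing to write at inference time and are nearly impossible to retrofit once outputs have already shipped without them.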
-
Most medtech AI projects don't fail loudly. They fail quietly. A regulatory review that takes 18 months instead of 6. A clinical partner that walks away after seeing the validation methodology. A product that works in testing but can't be deployed because the data architecture wasn't built for the environment it needs to run in. The costs aren't always visible on a balance sheet. But they're real. Rebuilding a data layer after the product is shipped costs more than building it right the first time - in time, in money, and in the trust of everyone who was counting on the timeline. Missing a regulatory submission window in medtech can mean months of delay. In some categories, it means watching a competitor reach the market first. And the hardest cost to quantify: the clinical problem that didn't get solved because the product never made it to deployment. Most of these costs are avoidable. They're the result of decisions made early - about architecture, about compliance, about what gets built versus what gets retrofitted later. What's the most expensive mistake you've seen a medtech team make early in a build? #MedTech #DigitalHealth #HealthcareAI #MedicalDevices #HealthTech #IEC62304 #EUMDR #ClinicalAI
-
Wearable biosensor technology is moving fast. More signals. More data points. More physiological context captured per second than ever before. But a better sensor doesn't automatically mean a better medical product. The gap between capturing a physiological signal and turning it into something a clinician can act on is still enormous - and no hardware upgrade closes it. What closes it is what happens after capture. Signal validation. Artifact removal. Feature extraction that maps to actual medical decisions. An inference architecture that runs where it needs to run. Documentation that survives regulatory review. The teams building the next generation of medtech products understand this. They're not asking "how do we capture better data?" They're asking "how do we turn the data we already have into something clinically useful?" That's the harder question. And it's the right one. What part of the signal-to-decision journey is your team currently focused on? #MedTech #DigitalHealth #HealthcareAI #BiosignalProcessing #MedicalDevices #HealthTech #ClinicalAI
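The "what happens after capture" steps above can be sketched in a few lines. Assuming a surface-EMG stream sampled at 1 kHz, a typical first pass is a bandpass filter, rectification, and an RMS envelope; this is an illustrative sketch, not our production pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Illustrative EMG cleanup: bandpass 20-450 Hz, rectify, RMS envelope."""
    # 4th-order Butterworth bandpass keeps the typical surface-EMG band
    # and rejects motion artifacts below 20 Hz.
    b, a = butter(4, [20.0, 450.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, raw)  # zero-phase, so timing is preserved
    rectified = np.abs(filtered)
    # 100 ms moving RMS window as a simple muscle-activation envelope.
    win = int(0.1 * fs)
    padded = np.pad(rectified**2, (win // 2, win - win // 2 - 1), mode="edge")
    return np.sqrt(np.convolve(padded, np.ones(win) / win, mode="valid"))
```

The envelope is only the start of the journey: the harder work is mapping features of that envelope to decisions a clinician recognizes, and documenting why that mapping holds.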
-
Last week, our CTO Nikola M. Zivkovic moderated a roundtable at Life Sciences Tech Network - Berlin. The room agreed on something faster than expected. Healthcare AI adoption fails on context, not technology. A confidence score without clinical framing means nothing. An extra click in an overloaded workflow kills adoption faster than a bad model. The sharpest point of the evening: regulation should be a trust signal, not a compliance exercise. That's the standard we hold ourselves to on every build. If your team is at that wall - let's talk. #MedTech #DigitalHealth #HealthcareAI #ClinicalAI #HealthTech #MedicalDevices #LSTN #BerlinTech
Moderated a roundtable at Life Sciences Tech Network - Berlin last week. Topic: AI pipelines that clinicians actually trust. Here's what I didn't expect - a table of engineers and clinical professionals agreeing this fast: the adoption problem in healthcare AI is rarely technical. It's contextual. A confidence score means nothing without a clinical context. An extra click in an already overloaded workflow kills adoption faster than a bad model. And "explainability" means completely different things depending on who's looking at the output. The sharpest point of the evening: regulation should be a trust signal, not a compliance exercise. If your validation process doesn't build clinician confidence, you're doing it for the wrong audience. Good room, good energy, real conversation. That's what LSTN does well. Thanks Nicholas C. Fiorenza, Csaba Bujna, and the full team, as well as Merantix AI Campus, for a great space.
-
Honored to be THRYVE's Company of the Week. We build AI products for medtech and digital health teams - from raw physiological data to something a clinician can actually use. Thanks to the THRYVE team for the recognition, and to everyone who's been following the work. If you're building in medtech and want to see what we do - rubixcode.ai #CompanyOfTheWeek #MedTech #DigitalHealth #HealthcareAI #BerlinTech
Company of the Week: Rubix Code ✨ AI powered software is rapidly becoming the backbone of modern businesses, and Rubikscode is helping companies turn that potential into real world applications. From machine learning systems to scalable intelligent platforms, the team is building solutions that help organisations unlock the full value of their data. Special shout out to Nikola M. Zivkovic for building and leading Rubikscode and contributing so much valuable knowledge to the developer and AI community. #CompanyOfTheWeek
-
GDPR and HIPAA reviews have killed more launch timelines than bad code ever will. Yet most teams still treat data privacy as something the compliance team handles before launch. A set of boxes to tick after the product is built. That framing is expensive. Data privacy in medical AI isn't a legal layer you add at the end. It shapes how you store patient data from day one. How you structure model training. How you handle data residency across borders. How you log every interaction the system has with sensitive health information. If those decisions aren't made at the architecture stage, they get made under pressure - during a regulatory review, during a partnership negotiation, or after a data incident. None of those are good moments to be redesigning your data layer. The teams that get through this without pain treat GDPR and HIPAA the same way they treat everything else in a regulated product: as a design constraint, not a documentation task. What's the data privacy decision your team wishes it had made earlier? #MedTech #DigitalHealth #HealthcareAI #EUMDR #IEC62304 #MedicalDevices #HealthTech
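"Log every interaction" is easy to say and easy to skip. One way to make it a design constraint rather than a retrofit is to put an audit decorator at the data-layer boundary on day one. A minimal sketch; the names `audited` and `access_patient_record` are hypothetical, not a real library API:

```python
import functools
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")

def audited(action: str):
    """Record every call that touches sensitive health data."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user_id: str, record_id: str, *args, **kwargs):
            entry = {
                "ts": datetime.now(timezone.utc).isoformat(),
                "user": user_id,
                "record": record_id,
                "action": action,
            }
            # Structured JSON lines are easy to ship to an append-only store.
            audit_log.info(json.dumps(entry))
            return fn(user_id, record_id, *args, **kwargs)
        return wrapper
    return decorator

@audited("read")
def access_patient_record(user_id: str, record_id: str) -> dict:
    # Hypothetical data-layer call; real storage lives behind this boundary.
    return {"record_id": record_id}
```

Because the decorator sits at the boundary, no feature code can reach patient data without leaving a trail - which is exactly the property a regulator or partner will ask you to demonstrate.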
-
Most medtech teams ask us the same question before we start: what exactly happens in those 6-8 weeks? Week one is always about the data. We map what exists, identify where the architecture will break under regulatory scrutiny, and define what needs to be built versus what needs to be rebuilt. Most teams are surprised by what we find here. Weeks two through four: processing layer and AI model. This is where signal type matters most - the architecture has to reflect that from the start, not as an afterthought. Weeks five and six: clinical interface and documentation. The output has to fit a clinical workflow, not just a demo environment. This is where most off-the-shelf solutions fall apart. Weeks seven and eight: deployment and handoff. A production-ready product, built for regulatory review, with documentation that supports it. No open-ended retainers. No six-month discovery phases. A defined scope, a defined output, a defined timeline. If your team is ready to move from data to deployed - let's talk. #MedTech #DigitalHealth #HealthcareAI #ClinicalAI #MedicalDevices #HealthTech #BiosignalProcessing
-
At HIMSS26, Accenture made a point that's worth sitting with: the biggest challenge in healthcare AI right now isn't innovation. It's accountability. Every AI deployment should be tied to measurable clinical or operational outcomes. Not demos. Not accuracy benchmarks run in isolation. Outcomes that hold up when a clinician, a regulator, or a patient is on the other end. This is where most medtech AI projects quietly stall. The model works. The demo is clean. But when it comes to deploying something that produces verifiable, consistent results in a real clinical environment, the project stalls: the infrastructure wasn't built for clinical deployment, and the outputs aren't structured for clinical workflow. We see this pattern regularly. Teams come to us with strong signal data - EMG, CTG, PPG - and a product that performs well in testing but breaks down when clinical deployment begins. What they need isn't a better algorithm. They need a pipeline that was designed for accountability from the start. Accountability isn't an add-on in healthcare AI. It has to be designed in from day one. #HealthcareAI #MedTech #DigitalHealth #ClinicalAI #BiosignalProcessing #HealthTech #MedicalDevices
-
Most athletes don't lose their edge in competition. They lose it in recovery. Oro Muscles, Inc came to us with a clear problem: the feedback their athletes were getting on muscle recovery was too slow and not actionable. Coaches and medical teams couldn't rely on it. Rehabilitation timelines were longer than they needed to be. The core of what we built: an AI algorithm for EMG signal analysis. Precise muscle activation monitoring, embedded into an iOS application and a centralized data hub for athletes, coaches, and healthcare providers. The result: a 60% reduction in athlete recovery time. That number came from building something that could actually read what the signal was saying - and translate it into something a rehabilitation professional could act on in real time. If your team is sitting on physiological data and can't turn it into reliable clinical insight - that's exactly where we start. #DigitalHealth #MedTech #HealthcareAI #EMG #RehabTech #BiosignalProcessing #ClinicalAI