sci2sci

Data Governance as a Service for Life Sciences - in software

About us

Ungoverned data is the silent bottleneck in life sciences. Unfindable experiments, manual metadata curation, compliance risk, collaboration friction across CROs and departments - they're all symptoms of one root cause. sci2sci solves it with 𝗩𝗲𝗰𝘁𝗼𝗿𝗖𝗮𝘁: a 𝗗𝗮𝘁𝗮 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗮𝘀 𝗮 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 platform that connects to your existing infrastructure and deploys AI agents to protect, organize and enrich your data automatically. No migrations. First insights in under 24 hours. At its core is the 𝗦𝗔𝗙𝗘 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 (𝗦𝗲𝗰𝘂𝗿𝗲 & 𝗔𝘂𝗱𝗶𝘁𝗮𝗯𝗹𝗲 𝗙𝗔𝗜𝗥 𝗘𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁) - combining 𝘇𝗲𝗿𝗼-𝘁𝗿𝘂𝘀𝘁 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲, 𝟭𝟬𝟬% 𝗮𝘂𝗱𝗶𝘁 𝘁𝗿𝗮𝗰𝗲𝗮𝗯𝗶𝗹𝗶𝘁𝘆 and 𝗴𝗼𝘃𝗲𝗿𝗻𝗲𝗱 𝗙𝗔𝗜𝗥 𝗱𝗮𝘁𝗮 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 so your data is ready for R&D, analytics, AI agents and regulatory audits. Hundreds of millions of files processed. 10,000+ hours of manual work automated. Trusted by life sciences companies of all scales - worldwide.

Website
https://www.sci2sci.com/
Industry
IT Services and IT Consulting
Company size
2-10 employees
Headquarters
Berlin
Type
Privately Held
Founded
2023

Updates

  • sci2sci reposted this

    Coding agents need a proper workspace - so we made one! Announcing Xen - our agentic virtual monorepo orchestrator. So what's the problem? Any reasonably mature company has multiple projects - and everyone there knows that coordinating work across them is a time-burning mess. Big tech solves it with monorepo structures, which let you work with all of the company's projects as if they were a single one. The problem? Your company is not Google (unless you actually work at Google). We wanted to make it easy to complete work across multiple software projects in a single agentic coding session - after all, with our dozen or so projects, we face this issue every day. So our - quite brilliant - CTO, who has worked first-hand with Google-scale infrastructure, built a scaled-down take on monorepos: keeping all of their advantages for agentic coding, with none of the headaches of migrating to and maintaining them. If you're actively using tools like Claude Code, Codex, or our Parseltongue-CLI - and want to work across multiple software or research git repositories - this is an absolute must-have in your toolbox. I'll share the link to our GitHub in the comments, and if you want a more technical explanation, dive into Valerii's announcement below!

    View profile for Valerii Kremnev

    Enterprise AI and Data Integrity

    Meet Xen - a Virtual Monorepo orchestrator. (We're also preparing a big Parseltongue update - more on that in the coming weeks.) The idea is simple: sometimes we want to work across multiple git projects locally as if they were a single repository, while preserving the original git-based flow upstream. I bet that virtually every software-heavy company from 15 to 5,000 people has the problem of synchronizing work across multiple projects - and heated discussions about its repo structure. With the rise of agentic coding, this issue became even more interesting. I want to complete my work in a single session - and managing 5 tickets, switching IDEs, and giving an agent permissions on multiple roots is just unnecessary overhead. The problem is that monorepos come with a bunch of coupling and overhead too. They absolutely make sense if you're Google managing Android releases, and absolutely don't if you're a few-hundred-person company. There are git submodules, but honestly, after a few years I ended up with a bunch of scripts managing their synchronization, which looked ugly and were pretty inconvenient. So I came up with the idea of a Virtual Monorepo and implemented it in Xen. Why "virtual"? In a normal monorepo, your intent and your state are tightly coupled. E.g. 𝗴𝗶𝘁 𝗰𝗵𝗲𝗰𝗸𝗼𝘂𝘁 𝗺𝗮𝘀𝘁𝗲𝗿 is one action that both expresses the intent of going to master and synchronizes the state. What Xen does differently is store only the intent. You configure what main/dev/test means in each repository - Xen delegates the state work to each repo, and the state itself stays managed by git. That means you can do things like 𝘅𝗲𝗻 𝘀𝘆𝗻𝗰 𝗺𝗮𝗶𝗻, where main can be configured to point to different branches in different repositories - master, develop, a pinned major release - and keep them in sync, preserving the intent of what main means for your specific monorepo. The same applies to commands: 𝘅𝗲𝗻 𝗰𝗮𝘀𝗰𝗮𝗱𝗲 --𝗰𝗵𝗮𝗻𝗴𝗲𝗱-𝗼𝗻𝗹𝘆 '...' fans operations out across the multirepo layout, allowing centralized control of cross-repo changes. For example, cascading '𝗴𝗶𝘁 𝗮𝗱𝗱 -𝗔 && 𝗴𝗶𝘁 𝗰𝗼𝗺𝗺𝗶𝘁 -𝗺 "..." && 𝗴𝗶𝘁 𝗽𝘂𝘀𝗵' will apply the same commit message to - and push - every changed repo. You can always add a new repo via 𝘅𝗲𝗻 𝗽𝗼𝗿𝘁𝗮𝗹 𝗮𝗱𝗱 𝗴𝗶𝘁@𝗴𝗶𝘁𝗵𝘂𝗯.𝗰𝗼𝗺:𝗼𝗿𝗴/𝗿𝗲𝗽𝗼.𝗴𝗶𝘁 --𝗮𝘁 𝘀𝗲𝗿𝘃𝗶𝗰𝗲𝘀/𝗿𝗲𝗽𝗼 and initialize or bring your local drafts into existing repositories by executing 𝘅𝗲𝗻 𝗿𝗲𝘀𝗼𝗻𝗮𝘁𝗲 𝗿𝗲𝗽𝗼 --𝘀𝗼𝘂𝗿𝗰𝗲 ~/𝘀𝗰𝗿𝗮𝘁𝗰𝗵/𝗿𝗲𝗽𝗼. If you're ready to start, just: 𝗯𝗿𝗲𝘄 𝗶𝗻𝘀𝘁𝗮𝗹𝗹 𝘀𝗰𝗶𝟮𝘀𝗰𝗶-𝗼𝗽𝗲𝗻𝘀𝗼𝘂𝗿𝗰𝗲/𝘅𝗲𝗻/𝘅𝗲𝗻 If you're starting from a fresh repo - 𝗽𝗼𝗿𝘁𝗮𝗹 to your remote, 𝗿𝗲𝘀𝗼𝗻𝗮𝘁𝗲 your local changes, 𝗰𝗮𝘀𝗰𝗮𝗱𝗲 your commits - or point your agent at the docs and let it handle the initialization sequence. And if you're the kind of person who'll get the reference - prepare for 𝘜𝘯𝘧𝘰𝘳𝘦𝘴𝘦𝘦𝘯 𝘊𝘰𝘯𝘴𝘦𝘲𝘶𝘦𝘯𝘤𝘦𝘴.
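
    For readers who want the intent-vs-state idea in something more concrete than prose, here is a minimal Python sketch of the concept - illustrative only, not Xen's implementation; the workspace layout, repo paths and branch names are invented: a logical name like "main" maps to a concrete branch per repository, and the actual state change is delegated to plain git in each repo.

```python
# Minimal sketch of the virtual-monorepo "intent" idea (illustrative only,
# not Xen's implementation). A logical name like "main" maps to whatever
# branch expresses that intent in each underlying repository; the state
# itself stays managed by plain git inside each repo.
import subprocess
from pathlib import Path

# Hypothetical workspace and intent config: logical branch -> per-repo branch.
WORKSPACE = Path.home() / "virtual-monorepo"
INTENTS = {
    "main": {
        "services/api": "master",        # legacy default branch
        "services/pipeline": "develop",  # trunk-based repo
        "libs/core": "release/2.x",      # pinned major release
    }
}

def sync(intent: str) -> None:
    """Bring every repo to the branch that expresses `intent` for it."""
    for repo, branch in INTENTS[intent].items():
        repo_dir = WORKSPACE / repo
        subprocess.run(["git", "fetch", "origin"], cwd=repo_dir, check=True)
        subprocess.run(["git", "checkout", branch], cwd=repo_dir, check=True)
        subprocess.run(["git", "pull", "--ff-only"], cwd=repo_dir, check=True)

def cascade(intent: str, *cmd: str) -> None:
    """Fan a command out across every repo participating in `intent`."""
    for repo in INTENTS[intent]:
        subprocess.run(list(cmd), cwd=WORKSPACE / repo, check=True)

if __name__ == "__main__":
    sync("main")                                 # analogous to `xen sync main`
    cascade("main", "git", "status", "--short")  # a very simple cascade
```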

  • sci2sci reposted this

    A couple of weeks ago I came across the “AI and Data in Pharma & Healthcare Summit”, which takes place in Munich annually. It’s less of a conference and more of a get-together for people sharing what they’ve been working on over the last year. Let me give you a peek into what Musin Mawlood and his team organized this time, in case you want to join next year. Ofure Obazee, Ph.D. (Merck Healthcare) set the stage with a blunt truth: Trust is the biggest issue with AI in pharma. This became the scaffold for the whole week. Everyone is struggling with the same paradox - how do we build reliable systems on top of LLM architectures where hallucinations are an inherent feature? A few highlights from the floor: On scaling: my favorite presentations were real stories from peers in bio/pharma running large-scale data governance initiatives. Kai-Peer O. Diener from Straumann Group and Alexey Belichenko from Roche both landed on the same reality: decentralization is indispensable once you operate across multiple business verticals. Most-creative-talk award (unofficial): Volker Rothenbacher from Boehringer Ingelheim and Matthias Wittig from Eraneos. Came for the pharma, left with a Netflix series to watch and a music AI app to try. I love talks where you learn things you didn't know you needed :) On interaction: a compliment to Christian Kaas for the very engaging round table exercise on data quality - calling it a “round table discussion” would be selling it short, as it was truly interactive. Special thanks to Tony Clarke and Niamh Chaney from ICON plc. I was shocked to find our CTO Valerii Kremnev hanging out at the bar instead of staring at the monitor the whole evening - pulling him out of operational work and into conversation is the highest possible endorsement. ______________________________________ If you’re wondering how to handle trust and reliability at the scale of millions of data sources and documents, we at sci2sci have already solved this problem. We don't just "hope" the AI is right. We use: - Truth-grounding: every output has full provenance back to the specific statement in the original source. - Deterministic verification: we verify the accuracy of the evidence itself. - Formal cross-checking: a dedicated engine that uses redundancy to audit LLM extractions against the ground truth. It’s predictable. It’s verifiable. It’s boring - in the best possible way.
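
    Since people often ask what "deterministic verification" looks like in practice, here is a minimal Python sketch of the atomic check - schematic only, not our production code; the Claim structure and field names are invented for this example: a claim is accepted only if the verbatim quote it cites actually appears in the source it points back to.

```python
# Sketch of the truth-grounding check described above (schematic only).
# A claim carries a verbatim quote plus a pointer to its source; the check
# is deterministic string containment, so a fabricated quote cannot pass.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str        # the statement the LLM produced
    source_id: str   # which document it points back to
    quote: str       # the verbatim evidence it cites

def verify_claim(claim: Claim, sources: dict[str, str]) -> bool:
    """Return True only if the cited quote exists verbatim in the source."""
    return claim.quote in sources.get(claim.source_id, "")

sources = {"protocol_v2": "The primary endpoint is progression-free survival at 12 months."}
good = Claim("PFS at 12 months is the primary endpoint.", "protocol_v2",
             "The primary endpoint is progression-free survival at 12 months.")
bad = Claim("Overall survival is the primary endpoint.", "protocol_v2",
            "The primary endpoint is overall survival.")  # quote not in source
assert verify_claim(good, sources) and not verify_claim(bad, sources)
```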

  • sci2sci reposted this

    𝗪𝗲’𝘃𝗲 𝘀𝗼𝗹𝘃𝗲𝗱 𝗔𝗜 𝗺𝗲𝗺𝗼𝗿𝘆 𝗯𝘆 𝗿𝗲-𝘂𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗵𝘂𝗺𝗮𝗻 𝗯𝗿𝗮𝗶𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲. 𝗟𝗲𝘁 𝗺𝗲 𝗲𝘅𝗽𝗹𝗮𝗶𝗻. The entire industry adds vector/graph databases to LLMs and calls it "memory." It's a category error. I spent years in neuroscience studying the molecular mechanisms of learning and memory. The biological brain has no memory store in the traditional sense. Not even stable weights like in artificial neural nets. Every stimulus triggers recalculation and reconfiguration across brain networks. Comparing real memory to RAG or a knowledge graph is like comparing a living expert to their notebook. Notebooks are useful, but they don't have intelligence or degrees of freedom. Memory is the behavior of continuous, global computation. And every time evolution has tried to preserve behavior across time, the medium has been the same: code. In genes, neurotransmitters, language, integrated circuits. 𝗖𝗼𝗱𝗲 𝗶𝘀 𝗱𝗮𝘁𝗮 𝘁𝗵𝗮𝘁 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝘀 𝗲𝗻𝗼𝘂𝗴𝗵 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝘁𝗼 𝗶𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁 𝗮𝗻𝗱 𝗲𝘅𝗲𝗰𝘂𝘁𝗲 𝗶𝘁𝘀𝗲𝗹𝗳. That's the principle behind Integrity Cortex. 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗮𝘀 𝗖𝗼𝗱𝗲. 𝗜𝗻𝘁𝗲𝗴𝗿𝗶𝘁𝘆 𝗖𝗼𝗿𝘁𝗲𝘅 combines the stability of software with the flexibility of a human nervous system. It transcribes every fact from your documents and data, every AI output, and connects these atoms of knowledge into vast computational networks. Unlike RAG or knowledge graphs, it’s not passive - it actively processes information. While executing and updating the network, it validates that all claims are grounded in evidence, derivations hold, and information across multiple sources remains consistent. Which means it catches LLM hallucinations. It also catches the inconsistencies that existed in your ground truth before AI ever touched it. Your enterprise data stops being a corpus and becomes a single running program whose shape changes every time new information arrives. Deterministic. Interpretable. Auditable by design. Example: when a primary endpoint is redefined, the cortex traces every dependency - every downstream report that now breaks coherence and needs an update - automatically. A week of document archaeology becomes minutes. __________________________________ We publicly announced Integrity Cortex at Genmab Data Day in Utrecht and Copenhagen last week - and we couldn't ask for a better first audience, sharing the stage with Genmab’s internal teams and our colleagues from dbt Labs and Databricks. Big thanks to Kenneth Schwartz and the whole enterprise data team for the invitation and organization, and to all the participants for such engaged interest and great feedback! Ari K., Ashik Mirza, Mason Smith, Kevin Tsai, Brendan Kelly, Eko Baskoro Harimulyo, Elliott Bostelman, Frank Rebers, Kay Behnke MBA, Livia van der Pant, Lionel Blanchet, Maude Farrow, Miles Freeborn, Sanna Herrgård, Simon Tetens, Sumit Vishwambharan, Walter Muller
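
    To make the endpoint example above concrete, here is a tiny Python sketch of the general dependency-tracing idea - not the Integrity Cortex implementation; the atom names and the graph are made up: knowledge atoms form a directed graph, and when one atom changes, everything derived from it, directly or transitively, is flagged for review.

```python
# Sketch of dependency tracing over a knowledge graph (illustrative only).
# `dependents` maps an atom to the atoms derived from it; a change to one
# atom flags every downstream atom, however many hops away.
from collections import defaultdict

dependents = defaultdict(set, {
    "primary_endpoint": {"stats_analysis_plan", "interim_report"},
    "interim_report": {"regulatory_summary"},
})

def affected_by(changed_atom: str) -> set[str]:
    """Return every atom downstream of a change (iterative graph traversal)."""
    stale, frontier = set(), [changed_atom]
    while frontier:
        current = frontier.pop()
        for dep in dependents[current]:
            if dep not in stale:
                stale.add(dep)
                frontier.append(dep)
    return stale

print(affected_by("primary_endpoint"))
# -> {'stats_analysis_plan', 'interim_report', 'regulatory_summary'}
```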

  • sci2sci reposted this

    Data products aren't just a buzzword - they’re the engine driving the next generation of biopharma. 💡 It was a pleasure to join Genmab for their Data Days across both Utrecht and Copenhagen. Seeing every vertical of the organization - from R&D and clinical to manufacturing and operations - align on a unified data strategy was a masterclass in digital transformation. Shared the stage with the visionary teams at Genmab, Databricks, and dbt Labs to discuss how a robust data marketplace and specialized AI tooling are redefining what’s possible in our industry. It was also the perfect opportunity to present our latest product to this amazing group of peers. I’ll make a LinkedIn announcement soon - stay tuned!

    View profile for Dustin Greenhill

    Senior Director, Trial Insights, Digital Development at Genmab

    What does a ‘data product’ actually mean for our clinical trials? That was a big theme at #Genmab Data Day in Copenhagen, unlocking the potential of our data platform and marketplace capabilities. Great to see DD&AI, Digital Development, and our tech partners come together to push this forward!

  • 𝗪𝗵𝗮𝘁’𝘀 𝘁𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗶𝗻 𝘁𝗵𝗲 𝗹𝗮𝗯? That’s the question we were trying to unpack at the Paperless Lab Academy® (NL42) panel in Barcelona last week. Honestly, I don’t think we’ve unanimously agreed on anything other than finding Dr Raminderpal Singh's challenge of cutting 90% of lab staff with AI in 3 years unnecessary - and unrealistic. But the discussion was fun - thanks Daniel Hickmore, Ben Savage, Jorge Garcia Condado for joining! Kudos to John F. Conway, Erin Moran, Daniella Prado and the whole 20/15 Visioneers team for the invitation to the conference - and for bringing together so many people advancing digital transformation in life sciences. I particularly enjoyed presentations and conversations with Amrik Mahal BEM (AstraZeneca’s ambitious digital roadmap), Ben Savage (lab automation solutions), Burkhard Schaefer (standards for lab data), David Dorsett (he has 40 years of experience in the industry!), Javier Viaña (explainable AI), Tamara McKenna (such a great personal story motivating your work), Thibault GEOUI, PhD (I think FAIR is still as cool as “AI-ready data” in 2026!) It was a great sync-point - and I’m looking forward to seeing how many of these ideas make it from slides to lab benches by the time we reconvene. A genuine question to everyone on LinkedIn: what's your take on where AI in the lab will actually be in three years?

  • sci2sci reposted this

    𝗪𝗲 𝘀𝗼𝗹𝘃𝗲𝗱 𝗟𝗟𝗠 𝗵𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗼𝗻𝘀. Let me explain. Every data team in biopharma has had the same conversation: "We'd love to use LLMs on this, but we can't trust the output." We built the answer. Today we're open-sourcing #Parseltongue - a framework that doesn't ask LLMs to hallucinate less. It makes hallucination structurally impossible to hide. Everyone has been trying to make models smarter. Feed better context. But no amount of prompt engineering changes the underlying math - and the math says that LLMs will lie no matter what prompts you give them. Instead of asking an LLM to summarize documents, Parseltongue asks it to encode them as a formal logic system. Every fact must cite a verbatim quote. Every conclusion must derive from stated premises. Every inconsistency in derivations is surfaced - the language checks itself. Two capabilities that matter if you run a data function in biopharma: → 𝗖𝗿𝗼𝘀𝘀-𝗰𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻: every LLM output is verified against the existing knowledge graph. If the model inverts a fact, quote verification fails - and the system red-flags every conclusion downstream that was built on top of it. In fact, it surfaces inconsistencies that were already present in your ground truth before AI ever touched it. → 𝗗𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝘆-𝗰𝗵𝗮𝗶𝗻 𝘁𝗿𝗮𝗰𝗸𝗶𝗻𝗴: full lineage between metadata records, down to the sentence. When source data changes, we detect what's now obsolete - with sentence-level precision. Static analysis, but for unstructured data. The LLM still does what it's good at - reading, extracting, understanding context. The formal engine does what LLMs fundamentally cannot - track provenance, enforce consistency and make information loss visible instead of silent. (If you haven't seen Eamonn M.'s recent post on compounding information loss across agent chains - there’s a reason it’s resonating. It’s really good at explaining why this problem is structural and not solvable by better prompts.) Apache 2.0. Model-agnostic. 1500+ tests. A module for use in coding agents (like Claude Code) is coming with the next release. The link is in the comments. P.S. The most interesting comment thread challenging the approach: https://lnkd.in/dQBSbJJ9
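
    For a rough feel of what "structurally impossible to hide" means, here is a schematic Python sketch - not Parseltongue's actual API; the data shapes and identifiers are invented for illustration: a fact is trusted only if its verbatim quote verifies against the source, and any conclusion resting on an unverified fact is red-flagged rather than silently kept.

```python
# Schematic illustration of cross-consistency validation plus downstream
# red-flagging (invented data, not Parseltongue's API). An inverted fact
# fails quote verification, and every conclusion built on it gets flagged.
SOURCES = {"doc1": "Compound X showed no cytotoxicity at 10 uM."}

facts = {
    "f1": {"source": "doc1", "quote": "Compound X showed no cytotoxicity at 10 uM."},
    # The model inverted this fact - the cited quote is not in the source:
    "f2": {"source": "doc1", "quote": "Compound X showed cytotoxicity at 10 uM."},
}
conclusions = {
    "safe_to_advance": ["f1"],
    "halt_program": ["f2"],  # built on the unverifiable fact
}

verified = {fid for fid, f in facts.items() if f["quote"] in SOURCES[f["source"]]}
flagged = [cid for cid, premises in conclusions.items()
           if not set(premises) <= verified]
print(flagged)  # ['halt_program'] - downstream conclusion red-flagged
```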

  • sci2sci reposted this

    “Congratulations on what you’ve achieved - this is terrific!” A huge thank you to Dirk Seewald🎗️ (eCAPITAL ENTREPRENEURIAL PARTNERS) for those kind words about what we’ve built at sci2sci. It was a pleasure to engage with everyone in the VC panel at the Explainable AI Deep Tech Night. Thank you for the great questions and fantastic energy: Nelly Karsch (seed + speed Ventures) Tim Schwichtenberg (Bloomhaus Ventures AG) Luisa Müller (Future Energy Ventures) I was honestly humbled by the audience response after the pitch. Spending the rest of the evening diving deep into AI for data intelligence with fellow founders and industry experts was the highlight. I’ve never had so much engagement after a single presentation! Kudos to AI NATION, Fraunhofer Heinrich Hertz Institute HHI, and Silicon Allee for the great organization. Looking forward to reconnecting at the next event! Anika Schneider, Vitória Dias, Grace Williams, Viktoria Kushpelev, Yuzhen Li, Xiaoheng C., Faizan Khan, Britt Perlick, Benjamin Krala, Dr. Christian Limberg

  • sci2sci reposted this

    And this is why having ground truth matters. Today, JetBrains' office had a fire alarm. In the work chat, Glean's AI assistant wrote that it was a scheduled test and people didn’t need to leave the building. Except it wasn’t, as you can see from the photos of the fire brigade. We hear the question “Why do biotech customers prefer sci2sci over horizontal market leaders?” quite often. Here is one reason: every AI output in VectorCat is truth-grounded. In biotech/pharma and healthcare, hallucinations and unverified statements mean regulatory violations, millions in wasted R&D, or patient deaths. Pattern matching to the most common case works until it critically doesn’t, and we can’t afford to serve that to our customers. “Imagination” is great for poetry but terrible for clinical trials (and, as we found out, for fire alarms too). Thanks Tagir Valeev for the case (original post on X).

  • sci2sci reposted this

    𝗩𝗖𝘀 𝗰𝗮𝗹𝗹 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗴𝗿𝗮𝗽𝗵𝘀 𝗮 "𝘁𝗿𝗶𝗹𝗹𝗶𝗼𝗻-𝗱𝗼𝗹𝗹𝗮𝗿 𝗼𝗽𝗽𝗼𝗿𝘁𝘂𝗻𝗶𝘁𝘆". We’ve been building them for the last two years - and the theory is missing a few critical pieces. A great piece by Jaya Gupta and Ashu Garg at Foundation Capital on the "context problem" in AI was published a month ago - link in the comments. They argue that the next generation of software won't just capture what happened, but why (the "context graph"). At sci2sci, we couldn’t agree more. In biopharma, context is everything. Drug development cycles span 10-15 years; every decision made today - whether for a current pipeline or a new molecule - rests on a foundation of thousands of experiments conducted over a decade ago. Teams change, talent moves on; preserving institutional knowledge is daunting. But there’s a massive gap between VC theory and the reality of enterprise teams. The current thesis suggests we need "agents in the loop" recording every decision as it happens. Here is why it fails: 𝟭. 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗳𝗮𝘁𝗶𝗴𝘂𝗲: We’ve seen enough "dead" Confluence pages to know that asking users to document more is a losing battle. 𝟮. 𝗧𝗵𝗲 "𝗕𝗶𝗴 𝗕𝗿𝗼𝘁𝗵𝗲𝗿" 𝗲𝗳𝗳𝗲𝗰𝘁: Constant monitoring of every employee action creates more friction than it solves. 𝟯. 𝗧𝗵𝗲 𝗹𝗲𝗴𝗮𝗰𝘆 𝗴𝗮𝗽: Most importantly, the VC model is purely prospective. It ignores the legacy companies already own. Our approach is different. Our customers have decades of data worth billions that they cannot afford to lose. We don’t wait for people to document things. We don’t "eavesdrop." Our engine acts as a digital archeologist. It scans siloed systems - cloud storage, network drives, lab software, etc. - to automatically reconstruct the logical chain of a project - a context graph - even years after the fact. We recently helped a team return to a drug candidate that had been shelved for 3 years. Instead of months of "data forensics" to figure out why specific formulations were chosen or discarded, our engine automatically connected the raw assay data to the original slide decks, lab notes and more. We restored the full decision-making context in minutes, not months. Two things the theory misses: The cross-platform reality: the article focuses on agents sitting mostly in one workflow (e.g. CRM). Real enterprise context is buried across silos. True context requires an engine that can bridge these platforms. Auditability as a requirement: in a regulated environment, context isn't "nice-to-have"; it’s a compliance necessity. We approach it by truth-grounding every AI decision with a full provenance chain, so that the reconstructed context holds up to the same scrutiny as the original study. Unlocking this "trillion-dollar opportunity" is not about recording the future - it’s about making the context we already have usable, searchable and trustworthy. What do you think? #AI #Biopharma #DataGovernance #ContextGraphs #GenerativeAI #DigitalTransformation

  • Ok, since everyone is posting, I guess I also have to put in my two cents about #JPM26. My favourite thing about the week? It actually wasn’t an official JPM report, but I think it was announced during JPM for a reason: the FDA has finally approved Bayesian statistics for drug and biologics submissions. Why is this cool? It reduces regulatory risk for those who use it and enables smaller, faster and smarter trials. Bayesian methods are especially powerful when: - patient populations are small, - diseases are rare or heterogeneous, - or evidence is accumulating over time. Concrete benefits: - Smaller sample sizes by borrowing prior or external data - Earlier stopping for success or futility - Adaptive designs that learn and adjust mid-trial - Continuous evidence updating, rather than waiting for a single p-value => This can shave years and tens of millions of dollars off development. Put simply: - Traditional hypothesis testing asks “Is this result unlikely under the null hypothesis?” - Bayesian methods ask “What is the probability this treatment works - and by how much?” On a personal note… I just met an incredible number of interesting people - both new acquaintances and people I’d only seen on Zoom calls before :) It would be impossible to tag everyone here, but I have to mention two ladies who absolutely made my week: Gita Bassman Reinitz and Juliette Humer. Three extra takeaways: - The priority of biology: I met investors who are radically focused on the convergence of knowledge around human biology. As one put it: "It is more important to have 100 sensors over a human body than over a machine." So happy when people prioritize life sciences over traditional tech. Larry Li Amino Capital - Multi-scale modeling: Some people are bold enough to model biology at multiple levels simultaneously - check out Deep Origin to see how they’re bridging the gap from quantum effects to the cellular level and on to systemic data. Michael Antonov Garegin Papoian - I was reminded that people in biotech rarely fit into a single box. It was inspiring to see how creative energy fuels the industry - from scientists who are talented artists to entrepreneurs who are accomplished athletes. As a fun case in point: I learned that some VCs make surprisingly good DJs! Omri Amirav-Drory What was your biggest "under the radar" takeaway from JPM this year?
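
    For anyone who wants to see the difference in one toy calculation, here is a deliberately simplified single-arm sketch in Python - illustrative numbers only, not a real trial design: with a Beta prior on the response rate, the posterior directly answers "what is the probability this treatment beats a 30% response rate", and it can be recomputed every time new patients accrue.

```python
# Toy Bayesian single-arm example (made-up numbers, illustrative only).
# Instead of a p-value against a null, we compute the posterior probability
# that the response rate exceeds an assumed 30% historical benchmark.
from scipy.stats import beta

prior_a, prior_b = 1, 1           # flat Beta(1, 1) prior on the response rate
threshold = 0.30                  # assumed historical control response rate

responders, enrolled = 14, 30     # interim data (hypothetical)
post = beta(prior_a + responders, prior_b + enrolled - responders)

p_effective = post.sf(threshold)  # P(response rate > 30% | data so far)
print(f"P(treatment beats a 30% response rate) = {p_effective:.2f}")
# A pre-specified rule might stop early for success above 0.95, or for
# futility below 0.05 - and the number updates as each new patient accrues.
```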

