San Francisco, California, United States
Savin can introduce you to 10+ people at Outerbounds (acq. by Anaconda)
17K followers
500+ connections
About
Building the modern, human-centric AI infrastructure…
Experience & Education
Outerbounds
Explore more posts
Devon O'Rourke
Fluvio • 8K followers
On the latest episode of Embracing Erosion I sat down with Suyog Deshpande, Co-Founder & CEO of Webless (a Fluvio Ventures portfolio company). Before starting Webless, Suyog spent years at Amplitude, Salesforce, and Samsara - shaping products and GTM strategy at scale as a product marketing leader. That perspective is now fueling one of the boldest bets in tech: 🏗️ rebuilding the web for LLMs. Instead of optimizing for clicks and SEO, Webless imagines an agentic web -where sites are designed to interact directly with AI models and agents. We dug into: - What an LLM-native web could actually look like - How companies can prepare to be LLM-ready - What metrics will matter beyond pageviews and clicks - Why safety and trust are core to agent-driven experiences - Lessons Suyog brings from “big tech” into this transformation If you’re curious about how discovery, trust, and value exchange will evolve online, this one’s worth your time. Link to the full episode in the comments 👇
9
1 Comment
Navneet Kumar
BioBrain Insights • 5K followers
Consumers are speaking, questioning, and deciding in the wild. Every day. Everywhere. On Everything. When insight systems are designed to understand context, emotion, and trade-offs together, clarity emerges well before the market consensus does. At BioBrain, we process millions of unprompted UGC signals through our proprietary Web Intelligence Engine, which combines large-scale digital listening, AI-led narrative clustering, and rigorous signal filtration to isolate decision-grade meaning. Our models don’t just track sentiment. They map belief systems, credibility filters, ethical thresholds, and purchase trade-offs as they form in real time. Authentic, deep, consumer-grade insights. This is how we see the beauty category reorganising itself, away from claims and toward proof, coherence, and restraint, long before these shifts harden into demand curves. #BioBrain #Insights #research #beauty
3
Paul Baier
GAI Insights • 20K followers
Last night Ashish Bhatia gave a demo session to 250 (of 420 regs) at our monthly GAI Insights "Learning Lab" Zoom call on Claude Code and OpenClaw. For Claude Code, his 3 key takeaways are: - Anyone can Build: Technical or non-technical, anyone can build. If you have an idea, you can bring it to life. - Mins -> Hours -> Days: AI tools can perform long-horizon tasks. - Gulf of Specification: How do you convey your intent most effectively to AI models/agents? His 3 takeaways for OpenClaw are: - Orchestration, not the model: Models are fungible and intelligence is getting commoditized. - AI is becoming the operating system: Transition from answers to actions. - Agent Manager mindset: With no upper limit on resources, what tasks would you offload to your agents? Link to recording in the comments. Let's keep Learning and Building!
61
2 Comments
Sharad Kumar
FluidCloud • 18K followers
❤️ Loved this from Harshit Omar. Cloud outages aren’t anomalies; they’re warnings. Most enterprises are architected as if hyperscalers are “too big to fail,” but recent outages have proven the opposite. Resilience needs a new baseline: can you run somewhere else when your provider falters? That’s the promise of #CloudCloning and true #multicloud optionality. Big thanks to MR Rangaswami Sir for the conversation. Worth the read.
30
1 Comment
Amit Singh
Weekday (YC W21) • 25K followers
15 Seed to Series B companies (including 4 YC companies) that have collectively raised over $307 million in funding are hiring in India - both onsite and fully remote: 1/ Track3D – AI Powered Reality Intelligence Platform for Construction Full-Stack Engineer [Hyderabad] 2/ SigNoz (YC W21) – Open source DataDog alternative Forward Deployed Engineer [Remote] 3/ Polybee – Turning unpredictable farms into data-driven factories with physical AI agents Backend Developer [Remote] 4/ Knowlify (YC S25) – Removing the pain of creating quality explainer videos Full-Stack Engineer [Remote] 5/ Exaforce – Agentic SOC Platform Software Engineer [Bengaluru] 6/ Senpi – Trade like the 1% Lead & Sr. Engineers, Mobile and Backend [Remote] 7/ faff – Your personal concierge AI Engineer [Bengaluru] 8/ indē wild – Difference Is Our Superpower Data Engineer [Mumbai] 9/ Lemnisk – Supercharge Marketing and Personalization Product Manager, Sr. QA Engineer [Bengaluru] 10/ Arango – Contextual Data Layer for Enterprise AI AI Forward Deployed Engineer [New Delhi] 11/ Flywl – Make cloud marketplace procurement fast, simple, and secure SDE 2 [Bengaluru] 12/ Tijoree – Empowering large Indian corporates to optimise their treasury SDE 2 [Mumbai] 13/ Cozmo AI (YC W22) – AI Employees That See, Speak, Think and Do for Enterprises SDE 4 Backend [Bengaluru] 14/ Yenmo (YC W24) – Secured consumer lending in India Head of Engineering [Bengaluru] 15/ Penguin Ai – Healthcare Native Intelligence Security Engineer [Hyderabad] Follow for more such weekly roundups of exciting engineering, product, and design openings.
42
1 Comment
Keyur Shah
Krama AI • 2K followers
Over the last two weeks, three posts on X from Andrej Karpathy, Boris Cherny, and Jaya Gupta caught my attention. While each post had a different aim, together they pointed to the same underlying challenge: the industry lacks deep awareness of how work actually gets done day to day, not just how it’s documented. In their posts, Andrej and Boris reflect on that awareness directly. By paying close attention to their own day-to-day work, they can see where they’re falling behind and where they still slip into older ways of working. Although they’re writing from a software development lens, the dynamic applies to almost every kind of workflow and task. Jaya’s post reinforces this from an organizational angle. She shares examples that highlight a common reality inside companies: what’s documented as the “source of truth” for a process is rarely what happens day to day. For many reasons, a process that begins by following an SOP drifts over time until it is executed completely differently from one operator to the next. Operators make decisions and take actions that never make it into SOPs or formal documentation, which makes this a critical blocker to augmenting or automating work with AI. So if Andrej and Boris, who are extremely close to the tools and the frontier, feel like they’re playing catch-up, there are many teams and individuals who don’t even have a clear picture of their workflows in the first place. And without that clarity, improving those workflows is barely even on the table. Andrej's post: https://lnkd.in/gHdG5AFy Other two posts in comments 👇
12
1 Comment
Mukund Jha
Emergent Labs • 85K followers
Most billion-dollar ideas die because the barrier to build is too high. Emergent changed that and hit $50M run rate in 7 months. How did we get here? It was a pleasure talking to Hemant Mohapatra on the Lightspeed India podcast about the Emergent story so far. We covered everything from technical moats to what it takes to build successfully in AI. Hemant and I share a great rapport, and I think it really shows in this conversation. Here are some of the best takeaways: - Control the infra: Most AI startups are just a UI slapped on a 3rd party API. That's a feature, not a company. For production-grade software, we built our own container tech and coding agents from the ground up because if you don't control the infra, you don't control the feedback loop. - The 0.2s Rule: I tell my team AI is an "Olympic race". No one remembers the silver medalist who lost by 0.2 seconds. In a high-growth startup, every delay is a compounding loss, and we fight entropy every single day to save those fractions of a second. - Don't build moats around problems: Most founders build a "moat" around a hard problem just to avoid it. We do the opposite: identify the 1% of the problem that scares you most, whether it's deployment stability or eval quality, and pull the entire org toward it. Solve the hard thing first and the moat follows. - Launch before you're "ready": We sat on the world's best coding agent for 6 months. Big mistake. The moment we went live, I spent 24/7 on customer support. That raw, painful feedback loop is the only way to close the gap between a "cool demo" and a "must-have product". - AI is Bitcoin at $1: I've posted this before. The exponential curve is just beginning. Drop anything not related to AI. Start or join an AI-native company (we're hiring!). The cost of starting has collapsed; domain experts are the new architects.
This podcast also brought out my startup motivations, my dynamic with Madhav Jha, and the resilience required to go from a plateau back to the drawing board. We also talked about the "whiteboard graveyard," the millions of ideas that die because the barrier to building is too high. The reason why Emergent exists. This is one of my favourite chats. Watch the full episode here: https://lnkd.in/gjFzKbY4 #aistartup #buildinginai #vibecoding
258
14 Comments
Sajith Pai
Blume Ventures • 87K followers
This section on how dbt Labs transitioned from a purely PLG motion to layering on enterprise is a fascinating one. Two instructive passages that I have bolded (below from First Round Review's path to PMF series featuring dbt Labs) TLDR: GTM comprises your ICP, channel, and message. When you transition from a PLG / bottom-up to an enterprise / top-down motion, naturally your ICP and channel change, but your messaging / proposition needs to change too; e.g., the enterprise may involve multiple personas who are buying assurance as much as solution. --- "Handy found product-market fit organically for dbt as an open-source tool mostly used by data practitioners and developers. But a few years into running a commercial business, he realized he had to build a growth curve all over again with C-suite data leaders. “Even though we had an unbelievable amount of market pull, as we initially commercialized, it wasn’t easy for us to transform this open-source command line tool into a product that enterprises would pay a million dollars for,” says Handy. “When you have enough product-market fit, sometimes it allows you to get away with not being super tight on product marketing or sales motions. So around 2022, we went from this gigantic acceleration curve and overnight we realized, we have to sit at the adults’ table and figure things out real fast,” says Handy. After the PLG flame started to fizzle, Handy turned his attention to layering on a sales-led motion for the cloud platform. “We had to focus our efforts on telling cohesive stories to senior data leaders. We had to have a very clear, explainable answer to the question, ‘Why should I use the commercial product and not the open-source product?’ And it had to be digestible by someone with a C in their title,” he says. Handy’s answer: dbt Cloud can handle complex data for companies of every size. “The longer people used dbt, the more complex their code became,” he says.
“It was a problem for the most sophisticated dbt users, who were often at the largest companies. So there was a real opportunity for us to step in and solve that for them with dbt Cloud.” To tell that story to enterprise customers, Handy relied on data, naturally. “At a user conference we presented a chart that showed the number of dbt projects that had a certain number of models in them — over 100, over 1,000, et cetera,” he says. “We watched that number climb and we knew as users ourselves, ‘Oh my God, trying to work in a dbt project with 5,000 models in it is challenging.’ So we started with that quantitative data point and asked folks in our community about their experiences with these very large, complex dbt projects, and validated that this was a pain in the ass without a cloud platform.”
23
4 Comments
Neil Tewari
Conversion • 18K followers
The hottest role in AI startups right now isn’t Forward Deployed Engineers. It isn't GTM Engineers. It’s Deployment Strategists. Decagon calls it an “Agent Product Manager.” Harvey calls it a “Solutions Architect.” Palantir Technologies has had versions of this role for years. And the salaries are climbing fast: - Decagon: $200k–$285k - Palantir Technologies: $120k–$200k - Figma: $150k–$260k - Ramp: $100k–$180k - Harvey: $190k–$260k So who are these people? They are usually pseudo-technical -- CS or engineering majors, or folks with technical work experience. Many come from 2 years in consulting, IB, or PE, then jump into startups to get their hands dirty. They are young, hungry, polished, and comfortable being in front of customers. What do they actually do? They make sure enterprise AI deployments succeed. A $100k+ deal does not survive on a nice pitch or a self-serve onboarding flow. It survives if the customer sees value in the pilot. That means: - Embedding directly with the customer - Designing prompt logic for specific workflows - Working with engineering to align integrations and data flow - Helping exec teams define their AI roadmap - Running feedback loops into product and GTM Why does this role matter so much? Because enterprise AI is messy. Integrations, data transfer, and adoption make or break a deal. Most buyers are using AI for the first time, and each has unique workflows. Deployment Strategists bridge that gap. They own the outcome. They are accountable for making pilots successful, which often means millions in revenue down the line. At Conversion, Sam Bochner has been leading this work for us. We are now thinking about scaling it into a full team. Because a few successful pilots can fund an entire department, and the cost of failed deployments is too high to ignore. Is this just a rebrand of customer success? Not really. Success is about answering tickets and renewals. 
Deployment Strategy is about going deep with a few enterprise accounts, extracting maximum value, and ensuring the pilot closes into a multi-year contract. Call it Agent PM, Solutions Architect, or Deployment Strategist. Whatever the title, this is becoming one of the most important roles in AI SaaS.
656
91 Comments
Peter Carrescia
Courtyard AI • 4K followers
Every tech ecosystem in the US (and the world) is losing ground to SF. Network effects apply here too. Let’s stop beating ourselves up about everything we are doing wrong, as if any of it makes any difference at all; at best it’s a rounding error. E.g., if tax mattered, SF wouldn’t be on this list, as it’s literally the highest-tax place in the US to start a business. This is no different than trying to compete against the IBM of the '80s, the Microsoft of the '90s, or the Nvidia of today. How do you beat a leader with strong network effects? Focus on what you can do that is differentiated and unique, in an area the leader cannot follow or doesn’t currently care about.
34
5 Comments
Bob Evans
Cloud Wars • 66K followers
BHUSRI CALLS B.S. ON A.I. OMNIPOTENCE: Workday's former/current CEO #AneelBhusri brilliantly breaks down the apps & #AI interplay while also hilariously framing limits of vibe-coding. For more news on the Cloud Wars Top 10, visit https://lnkd.in/efy5xrGj
53
2 Comments
Abdul Ghani Manan
Stealth AI Infra Robotics… • 4K followers
Serving LLMs isn’t “just more FLOPs.” It’s four constraints you have to balance.
Capacity: weights + KV cache. At long context, KV dominates. Example: a 70B model (80 layers, d=8192, BF16) needs ~82 GB of KV at 32k context with standard attention, but ~10 GB with MQA (shared K/V) and <5 GB with 8-/4-bit KV.
Bandwidth: every new token reads the entire KV history per layer. If HBM/DRAM can’t stream it, you stall, no matter your FLOPs.
Synchronization: scaling across 64–128 accelerators adds collective latency and jitter. Sub-second collectives are table stakes if you want low TTFT.
Compute: at short contexts, matmuls dominate; at long contexts, attention’s seq_len term does. Use speculative decoding, GQA/MQA, KV-quant, and hybrid/linear attention to stay fast as sequences grow.
Operator playbook:
• Budget KV (MHA vs GQA/MQA; BF16 vs 8-/4-bit).
• Measure HBM GB/s and collective p95/p99 per token.
• Dashboards: TTFT, time-between-tokens, TPS/$, TPS/W on a production-like mix.
• A/B the levers above; keep a quality guardrail with long-context evals.
My take: the next step-function isn’t a single kernel; it’s balanced design: smarter caching, faster collectives, and algorithms that burn less memory per token. https://lnkd.in/e67pMVtK
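The capacity arithmetic in the post above can be sanity-checked in a few lines. This is a hypothetical sketch, not the author's code: the `kv_heads` parameter generalizes standard multi-head attention (kv_heads = n_heads), GQA (a few shared KV heads), and MQA (kv_heads = 1), and the 64-head count is an assumed configuration for a 70B-class model. Note the mid-range figure quoted in the post lines up more closely with GQA using 8 KV heads than with strict single-head MQA.

```python
def kv_cache_bytes(layers: int, d_model: int, n_heads: int,
                   kv_heads: int, seq_len: int, bytes_per_elem: int) -> int:
    """KV-cache size in bytes for one sequence.

    Each layer caches K and V: kv_heads * head_dim elements per token each.
    MHA: kv_heads == n_heads; GQA: 1 < kv_heads < n_heads; MQA: kv_heads == 1.
    """
    head_dim = d_model // n_heads
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K + V
    return per_token * seq_len

GB = 1e9
# Assumed 70B-class config from the post: 80 layers, d_model=8192, 32k context.
args = dict(layers=80, d_model=8192, n_heads=64, seq_len=32_768)
print(f"MHA  BF16: {kv_cache_bytes(**args, kv_heads=64, bytes_per_elem=2) / GB:5.1f} GB")
print(f"GQA8 BF16: {kv_cache_bytes(**args, kv_heads=8,  bytes_per_elem=2) / GB:5.1f} GB")
print(f"GQA8 int8: {kv_cache_bytes(**args, kv_heads=8,  bytes_per_elem=1) / GB:5.1f} GB")
print(f"MQA  BF16: {kv_cache_bytes(**args, kv_heads=1,  bytes_per_elem=2) / GB:5.1f} GB")
```

Under these assumptions the script reports roughly 85.9 GB (MHA), 10.7 GB (GQA-8), 5.4 GB (GQA-8 int8), and 1.3 GB (MQA), the same ballpark as the post's ~82 GB / ~10 GB / <5 GB figures; exact numbers depend on the head count and on GB-vs-GiB convention.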
Ashpreet B.
Agno • 20K followers
Introducing Agentic Culture — an open-source experiment in collective memory and in-context cultural accumulation for multi-agent systems. A hot topic from the recent Andrej Karpathy // Dwarkesh podcast is that LLMs don't yet have culture. So we built the scaffolding for them to develop one. Agno Agents can now tap into a shared cultural memory — knowledge that persists beyond individual sessions, users or memories. Culture is collective, evolving context that shapes agent behavior over time. Now they can create, read, explore and learn from their culture. You can use the `CultureManager` class to create and manage `CulturalKnowledge` objects. These are stored in your database of choice (SQLite for now) and automatically retrieved as in-context cultural grounding for your agents. To give your agents access to shared culture, just set `add_culture_to_context=True`. That's it. Agents now learn from collective experience. Explore the code and README: https://lnkd.in/e7E3AcJX Agno is open source — come build with us: https://agno.link/gh
98
4 Comments
Amber Illig
The Council • 5K followers
🎙️ From Square to Mercury: How Rohini Pandhi became one of fintech’s top product leaders—and how velocity of learning at high slope companies shaped everything. Rohini dropped so many gems in our first episode of First Builders: 💎 Generalist → Specialist: In her early career, she was a generalist and tried everything. This led her to critical discoveries. She specialized in product and fintech when the “time was right” 💎 Follow the Engineers and Designers: Square was a high slope environment for Rohini, packed with talent density and learnings. She finds high slope environments by studying where smart engineers and designers are going. 💎 Fake News about PMs: PMs don’t just “move fast and break things.” Her team runs 30-50 customer interviews per quarter, providing rigor behind every decision. 💎 It’s Usually Too Early to Hire a PM: A top question she gets from founders is whether they should hire a PM. She usually says: “it’s too early.” 💎 Hiring a World Class Team is Like Tennis: When finding a tennis partner, you want someone a little better than you who can challenge you to improve your game. Give us a listen and leave a review on your favorite podcast location (see comments for links)! #FirstBuilders #StartupPodcast #ProductLeadership #TechCareers #Leadership
26
4 Comments
Aravind Ratnam
Q-CTRL • 6K followers
Here is a framework for nations to work with revolutions such as #AI and #Quantum. First you have your people learn, then dabble and become early/late adopters (hopefully not laggards!), then become effective technical users and finally value creators. It is only then that the economy benefits from your investment, and you create sustainable job growth. While #India is a key market for #Quantum with millions thirsting to learn and actually do things with this technology, there are still large gaps between: 1. Where Quantum Computing is today Vs where it needs to be for mass adoption [Quantum Sensing on the other hand is ready for prime time and GPS compromising incidents are only increasing] 2. The specialist work being done Vs the ability to hire millions of generalists 3. What it costs to tap into the technology Vs perception of what it should cost as a commodity 4. The talent it takes to produce world class indigenous technology Vs the talent that actually exists on the ground 5... Each of these gaps will close over time. In the lead-up and as a first step, India first needs to learn enough about this technology, ideally from the very best, and spread awareness. The starting point of this journey should ideally not be from dull videos or low quality social media content, but from hands-on practical learning that is delivered through high quality polished content. Such practical training effectively complements all the theoretical content you learn in universities. Q-CTRL's Black Opal product is the world's leading interactive learning tool that gives you just enough fundamentals (think 101) required to program a quantum computer. It has over 28,000 users worldwide, with success across the UK and in India (Tamil Nadu adopted it last year as part of the naanmudhalvan program). And now, we have expanded our presence in India through a network of local resellers. We encourage Indian companies, governments and universities to work with us either directly or through our contacts.
https://lnkd.in/gjJWHYB3 We also look forward to aligning with AICTE standards and to make our content even more relevant to India. Waseem Shiraz, Shobhit Gupta, Nagendra Nagaraja, Reena Dayal, Anil Prabhakar, Kamakoti Veezhinathan, Ajai Chowdhry, Harshan Budke, Sunil Gupta, Arindam Ghosh, Ravi Puvvala Dr M Jayaprakasan, ISDS., National Skill Development Corporation, MindTech, Kquanta Research, RV College Of Engineering, Sanjay Chittore, Uttkrist Innovations Private Limited, Subu Gupta, AICTE QRDLab - Quantum Computing in India, IBM DRDO, Ministry of Defence, Govt. of India, Ashutosh Sharma, Urbasi Sinha, Bloq Quantum LTIMindtree, Tata Consultancy Services Abhay Karandikar, NPTEL, National Quantum Mission, Eltech, Andhra Pradesh Information Technology Academy ITE and C Department Government of AP, Jayesh Ranjan, CDAC Bangalore, Kaushalya The skill University
50
2 Comments
Arteen Arabshahi
Fika Ventures • 9K followers
SF AI-Native Operator Takeaway #2: In AI-native PLG, the hard part isn’t conversion... it’s discovery. Many AI-native teams are still talking about PLG using a classic SaaS mental model, but based on operator conversations in SF, that model is starting to break down in fairly obvious ways. The biggest bottleneck right now isn’t conversion. It’s discovery. In traditional PLG, users generally understood the category before they ever signed up. The problem was obvious, the product’s value was legible from the homepage, and the “aha” moment tended to show up quickly in first use. In that world, PLG meant optimizing onboarding, reducing friction, and improving free-to-paid conversion because user intent already existed. AI changes that assumption. In AI-native products, users are often curious but unclear. They don’t yet know what’s possible, value depends heavily on workflow, context, data, and role, and the product can feel abstract until it’s applied directly to their job. As a result, many users stall not because the product isn’t valuable, but because they haven’t discovered how it fits into their world and why they can’t live without it. This is the real distinction people kept coming back to. PLG conversion answers, “Is this worth paying for?” PLG discovery answers, “What problem does this solve for me, right now?” What’s working best in practice is less about funnel polish and more about clarity up front: role- or workflow-specific entry points, guided examples instead of blank states, and opinionated first actions that show users a concrete outcome before asking them to explore. This also explains a broader pattern across AI-native companies. Forward-deployed teams and services-heavy delivery aren’t just implementation tools; they’re discovery mechanisms. They translate abstract AI capability into concrete workflow value, observe real use cases users wouldn’t self-discover, and feed those learnings back into what eventually becomes productized.
PLG isn’t going away, but in AI-native companies it’s being redefined. Self-serve no longer means self-explanatory. Education becomes part of the product, and discovery has to come before optimization. The teams making progress aren’t obsessing over conversion rates yet. They’re focused on whether users see themselves in the product and how quickly they reach a meaningful outcome, without too much guesswork. Bottom line: in AI, PLG is less about removing conversion friction early and much more about creating understanding first. Once they understand, they may be hooked. Tomorrow is my last SF AI operator takeaway focusing on everyone's favorite topic du jour: 996 work schedules.
15
1 Comment
Ashish Mishra
lucidledger.co.in • 7K followers
Most tech founders assume that building a strong product is enough. It isn’t. What Abhilash shared here is a pattern I keep seeing - smart builders, solid tech, but unclear on validation, positioning, and actual market readiness. The gap is rarely effort. It’s direction. 0to1 isn’t about teaching theory. It’s about forcing clarity: – Is your idea worth building? – Are you solving a real problem? – Can you sell it? Because a “good product” that doesn’t convert is just wasted time. Glad to see this shift. #StartupLessons #FounderJourney #TechFounders #BuildInPublic #StartupReality #ProductMarketFit #StartupGrowth #Entrepreneurship #0to1 #FounderLife #StartupIndia #SalesForFounders #IdeaValidation #EarlyStageStartups #B2BStartups
2
1 Comment
Rod Beckstrom
BECKSTROM • 8K followers
We concluded an incredible week at the India AI Impact Summit 2026, highlighted by a compelling interview of Vinod Khosla by Nivruti Rai. Vinod delivered a candid fireside chat that covered topics such as VC investing, risk-taking, his enthusiasm for emergent complex systems, and his insights on AI. Following this discussion, we gathered to celebrate Professor Ramesh Raskar’s Project NANDA: Architecting the Internet of AI Agents. This initiative is focused on developing an open-sourced, decentralized agentic AI stack for developers. The framework and toolset aim to help realize India’s vision of creating an agentic AI stack, akin to the India Stack for payments, which includes UPI and identity management through Aadhaar. With 1.4 billion Indians enrolled in Aadhaar and UPI facilitating trillions of dollars in real-time payments across the country, this represents a remarkable success story in government technology. Project NANDA has the potential to achieve a similar breakthrough in agentic AI for India. We celebrated this initiative by wearing team caps, pictured from left to right: myself; Vinod Khosla, one of the world's foremost VC investors and a key supporter of the India AI Stack; MIT AI Professor Ramesh Raskar, who leads Project NANDA; and Tuan Ho, Venture Partner at X Fund. Those interested in joining this starfish-style decentralized agentic AI movement are invited to sign up at ProjectNanda.org.
122
2 Comments