San Francisco, California, United States
1K followers
500+ connections
Explore more posts
-
Tony Ndezwa
Relego Ai Solutions • 944 followers
Finding product-market fit isn’t a clean checkpoint — it’s a continuous, often chaotic process that can make or break a startup. At TechCrunch Disrupt 2025, leaders like Rajat Bhageria of Chef Robotics, Ann Bordetsky from NEA, and Murali Joshi of ICONIQ emphasized that it’s not about guessing what customers want, but about smart testing, real-time iteration, and deeply listening without getting overwhelmed by noise.

The most striking takeaway is that product-market fit should be viewed as an ongoing journey rather than a milestone to tick off. This perspective shifts how founders approach growth: instead of waiting for a magic moment, they build, learn, adapt, and refine constantly. It also matters for investors who back startups navigating this messy terrain — recognizing signals of traction early while understanding the underlying effort required.

Whether companies are prototyping or scaling, success hinges on solving real problems so customers can’t live without the product.

How do you keep your teams grounded and focused through the unpredictable, often nonlinear path to product-market fit? What tactics or mindsets help you avoid noise and zero in on meaningful signals?

#ProductMarketFit #StartupGrowth #TechCrunchDisrupt #FounderInsights
1
-
Fahad Awan
Northstar GTM • 145 followers
The more I work in the GTM engineering space, the more I notice a clear geographic split. A lot of founders and decision-makers sit in Western, developed markets. But much of the actual GTM execution is happening out of developing countries, especially India and Pakistan. Different time zones. Different cost structures. Same output when done right. If you’re a GTM engineer, where are you based? Are you closer to the founder side, the execution side, or somewhere in between?
6
-
Archit prasar
CoRover • 607 followers
Kanha AI is now live at https://kanhaji.ai

Built for families who want a mindful, culturally rooted, and truly screen-free AI companion for their children, KanhaJi brings together storytelling, values, curiosity, and safe intelligence into one seamless home experience through the KanhaJi Home Pod.

Proud to be a part of the core team behind this journey from idea to reality. Working across product direction, experience design, and systems thinking to shape what an AI-first childhood environment should feel like has been deeply meaningful.

Heartfelt thanks to our incredible core team members who made this possible: Sudhanshu Sharma, Ankush Sabharwal, Lavanya Sharma, Ankur Thakur, Md Ibrahim, Ankit Bhardwaj

We are excited to see KanhaJi become a trusted companion in homes that value both tradition and thoughtful technology. Onward.
117
7 Comments -
Sita Lakshmi Sangameswaran
Google • 4K followers
✨ Feeling refreshed and energized after a vacation, and I'm excited to share two in-depth technical sessions I had the pleasure of recording. If you're building with LLMs and AI agents, these deep dives are for you.

🤖 1) Building with ADK & Vector Search, with Kaz Sato
👉 Key takeaway: how an Agent Development Kit (ADK) works with Vector Search to create sophisticated, production-ready AI systems.
👉 Beyond basic RAG: we move past simple Q&A to discuss architectural patterns for building powerful semantic search and Retrieval-Augmented Generation (RAG) pipelines.
👉 Practical implementation: a look at the code and components needed to bring these advanced search agents to life.
🔗 Watch here: https://lnkd.in/gjKvq-MM

📈 2) Mastering AgentOps, with Dr. Sokratis Kartakis
👉 The "Day Two" problem: why traditional DevOps and observability tools fall short for monitoring complex, non-deterministic AI agents.
👉 Metrics that matter: the key metrics for tracking agent performance, cost, and reliability to ensure your agents are effective and efficient.
👉 A framework for reliability: Dr. Kartakis shares a practical framework for debugging, evaluating, and continuously improving your agents post-deployment.
🔗 Watch here: https://lnkd.in/gpdC_Jk9

Which topic is more critical for you right now: building new agent capabilities or managing them in production? Let me know in the comments!

#AI #GenerativeAI #LLM #VectorSearch #RAG #AgentOps #MLOps
49
3 Comments -
Pamela Mishkin
OpenAI • 4K followers
I’ve been loving the new work coming out of Stanford’s Digital Economy Lab and The Budget Lab at Yale -- not because it shows dramatic change, but because it’s finally asking the right questions. The most honest conclusion we have right now about AI and labor is that we simply need more (and better) measurement and a readiness to support workers even before we have definitive answers. Going to spend the next few weeks sharing more here on my read on the work.

Early shifts in AI-exposed jobs, especially at the entry level, are not proof that firms are “hiring AI instead of people.” Higher exposure predicts jobs where AI could plausibly reshape how firms think about junior work:
1/ jobs where most learning can happen off the job
2/ jobs where junior and senior work/tasks look pretty similar, with speed and polish as the main differences.

Those are precisely the jobs where training data already exists -- where models can be trained if labs have access to the right data. So if those jobs are changing -- in number, scope, or quality -- it could reflect many forces. It might be delayed hiring, re-sequencing of skill development, or a rethinking of what early career work even is. It does not automatically follow that senior workers have suddenly become vastly more productive thanks to AI.

This (correlation != causation) matters because it has real implications for how we design policy responses. If the symptom is that early career workers are struggling, then the right interventions should be tied to those workers -- new pathways, better training models, wage protections, smoother transitions -- rather than tied narrowly to AI -- subsidizing adoption, enforcing impact assessments, or chasing productivity metrics. If we assume the wrong mechanism, we will build the wrong response. The goal isn’t to respond to "AI." It’s to respond to what’s happening to workers.

Bharat Chandar, Erik Brynjolfsson, Ruyu Chen, Martha Gimbel, Molly Kinder, Joshua Kendall, Madeline Lee
243
8 Comments -
Pranav Guruprasad
Metarch • 1K followers
The next generation of multimodal models won't just perceive complex environments, they will take action in them. MultiNet v1.0 is a first step in measuring progress towards this future. Tune in for a deep dive on how we are architecting the benchmarks to drive this shift!
13
-
Bill Prin
Workato • 827 followers
YCombinator startups are famous for their vocal adoption of AI and LLMs, so it was a privilege to interview Sandeep Dinesh, CTO of Mercoa (YC W23), about how his team uses AI to deliver real business value. In this interview, we covered:
* Building state-machine AI agents for bill payments
* Prompt-engineering tactics that reduce hallucination
* Lessons from using Gemini, GPT-4, and BAML in production
* Practical advice for engineers and founders moving faster with AI

One big takeaway: while Sandeep is hesitant to adopt new tools for their own sake, he's gotten huge value from BAML for both reliable JSON generation and chain-of-thought prompting. Full post here: https://lnkd.in/gpkdhSkq
20
1 Comment -
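The interview doesn't show Mercoa's actual implementation, but the "state-machine AI agent" pattern it mentions can be sketched generically: an LLM proposes the next step, and deterministic code enforces which transitions are legal. A minimal Python sketch with hypothetical state names (not Mercoa's API):

```python
from enum import Enum, auto

class BillState(Enum):
    """Hypothetical lifecycle states for a bill payment."""
    RECEIVED = auto()
    EXTRACTED = auto()
    APPROVED = auto()
    PAID = auto()
    REJECTED = auto()

# Allowed transitions. The LLM can only *propose* a next state;
# this table decides whether the proposal is applied.
TRANSITIONS = {
    BillState.RECEIVED: {BillState.EXTRACTED},
    BillState.EXTRACTED: {BillState.APPROVED, BillState.REJECTED},
    BillState.APPROVED: {BillState.PAID},
}

def advance(state, proposed):
    """Apply a proposed transition only if the state machine allows it."""
    if proposed in TRANSITIONS.get(state, set()):
        return proposed
    raise ValueError(f"illegal transition {state.name} -> {proposed.name}")
```

The point of the pattern is that hallucinated steps (e.g. paying a bill that was never approved) fail loudly instead of silently moving money.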
Dylan Freedman
The New York Times • 1K followers
When Kashmir Hill and I started reporting this story, A.I.-induced delusions were just starting to be taken seriously. But people were skeptical that chatbots could lead ordinary folks without a history of mental illness astray. Then we encountered Allan Brooks's story.

A corporate recruiter in Canada, Allan is an otherwise sane man whom ChatGPT convinced he had breakthroughs that would change the world. It all started with an innocuous chat about mathematics. The conversation eventually spiraled into thousands of chats, with ChatGPT leading Allan to believe he had cracked industry-standard encryption. At ChatGPT's urging, he put his recruiting skills to work, contacting security professionals and government agencies including the NSA. Over three weeks, he and ChatGPT created plans for force field vests and levitation beams. Finally, he broke out of his spiral (with the help of another chatbot, Google Gemini).

We talked to experts, including Helen Toner, Terence Tao, and Jared Moore, to explain how this kind of situation could happen. They emphasized several chatbot behaviors that can contribute to delusional mythmaking:
- sycophancy: the tendency of the chatbot to flatter the user
- building a scene: like an "improv actor," chatbots will try to stay in a narrative scene once it develops over a long conversation
- narrative tropes: chatbots are trained on websites and books, including sci-fi stories where super-intelligent A.I. systems trick people

These issues are not unique to ChatGPT. Claude and Gemini models reacted similarly in experiments where we put them in situations like Allan's (Gemini likely helped Allan break out of his delusion because it was coming at it fresh, without the context of a longer scene).

We spent months thoroughly reporting this story. Here's a gift link 🎁: https://lnkd.in/dxeeeyfp
64
2 Comments -
Mohammed Amrath
HugoHub • 1K followers
My roommate and I spent the weekend benchmarking K3s on GCP to see how Istio architectures impact latency. We started with standard Sidecar mode, which clocked in at ~20ms, then switched to Ambient mesh, dropping it to ~15.2ms. The real breakthrough came when we enabled eBPF for traffic redirection, slashing latency down to just 3ms. It was a massive performance win, and we’re planning to test Istio’s in-pod redirection next to see if we can achieve similar results without a full eBPF dependency.
46
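The post above doesn't share its benchmark harness, but the comparison it describes boils down to summarizing per-mode request timings. A minimal sketch; the sample numbers are illustrative placeholders echoing the rough figures quoted (~20ms sidecar, ~15.2ms ambient, ~3ms eBPF), not real measurements:

```python
import statistics

def summarize_latency(samples_ms):
    """Return (mean, p95) for a list of per-request timings in milliseconds."""
    s = sorted(samples_ms)
    p95 = s[min(len(s) - 1, int(0.95 * len(s)))]
    return round(statistics.mean(s), 2), p95

# Placeholder samples per Istio data-plane mode (not real benchmark data).
modes = {
    "sidecar": [19.0, 20.0, 21.0, 20.5, 24.0],
    "ambient": [15.0, 15.2, 15.4, 15.1, 17.0],
    "ebpf":    [2.8, 3.0, 3.1, 2.9, 3.5],
}
means = {mode: summarize_latency(xs)[0] for mode, xs in modes.items()}
```

With enough samples per mode, reporting both mean and tail (p95/p99) latency guards against a configuration that improves the average while worsening the tail.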
-
Aakash Bhatnagar
Second Axis • 2K followers
We took PMs in AI to New York for an evening of quick demos on the AI tools and workflows that are making PMs more effective.

Nikita Kabra (PM at Walmart) showed how she uses Replit to prototype ideas and put something clickable in front of stakeholders instead of a PRD. Her takeaway: the conversation completely changes when people can see and interact with what you're proposing.

Hashim Syed (AI GTM Lead, Google) demoed Gemini Enterprise as a single workspace for everything a PM touches — presentations, research, strategy docs — all with AI built in.

The common thread: AI isn't replacing PM judgment. It's removing the friction between having a good idea and getting your org to act on it.

Thanks to everyone who came out. More events coming soon — drop a comment or DM if you want to stay in the loop.

read more: https://lnkd.in/eFBer2JJ
next event: https://luma.com/klwqbe4t

Second Axis Kabir J. Jinal Thakkar #productmanagement
16
1 Comment -
Subrat S Gupta
Flexprice • 1K followers
Pricing’s evolving — fast. What used to be a finance problem is now a product problem. Pricing isn’t just a number — it’s a reflection of how well your product understands its own value.

Great one by Rohit M. and Naveen Mohan — breaking down hybrid pricing and why monetization should live where the product lives. Builders need to think about pricing as architecture, not an afterthought. ☕

🎧 A must-listen for anyone building in SaaS. Link in the comments!
2
1 Comment -
Jishant Sharma
414 followers
I loved hearing Uma Ratnam Krishnan and Rohit Agarwal talk about innovation as something we all own. It’s so true. Whether you're in tech, ops, or product, your perspective is valued. That feeling that your idea could be the next big thing. That’s the magic of working here at Optum India.

The latest FYI episode is a great watch if you need a shot of inspiration today! What was your biggest takeaway? http://spr.ly/6044AYrir
-
Jayanth Kanugo
IBM • 781 followers
Why LLMs May Struggle in India

Large Language Models are built on a simple assumption: the user will ask a question. But in India, asking questions doesn’t always come naturally.

As children, we’re endlessly curious. Yet as we grow, we’re taught to stay quiet—don’t challenge authority, don’t upset the status quo, don’t risk offending someone. Over time, we stop asking even the most basic questions: Why are there potholes on the road? Where is my tax money going? In many settings, asking feels uncomfortable—or even unsafe.

This cultural habit creates a barrier. No matter how affordable AI tools become, adoption will be limited if the experience depends on people openly questioning. For LLMs to truly work here, they may need to evolve from being passive “answer machines” into proactive companions that guide, prompt, and anticipate needs—so people don’t have to break through cultural conditioning just to use them.

What do you think—can AI overcome cultural barriers, or will culture always shape how we use technology?

PS: Yeah, I did use an LLM to polish this up 😅
31
1 Comment -
Henry Shi
Anthropic • 79K followers
The SF Chronicle just quoted my hot take on the AI bubble. Turns out my wildest prediction was spot-on (down to the exact number).

I told them how I meet first-time founders (non-researchers) with no revenue, no product, no team, and no deck, and yet they are easily raising $5M+ on $25M+ caps for oversubscribed initial rounds. This was impossible 5 years ago. Back then, only proven founders with exits could raise large seed rounds. But now, engineers leaving OpenAI or hot AI startups get $100M+ for their first startup with zero proof of concept.

I’ll be the first to claim: “Yes, it's a bubble,” but it may NOT matter, because VCs are okay with having 1000 failures if one of them becomes a trillion-dollar company. A decade ago, one 10x exit could justify 10 failures. Then came the decacorns: one could justify 100 failures. Now, in the trillion-dollar era, one winner can justify 1000 failures.

And this is happening in real time. OpenAI is now preparing for an IPO targeting a $1 trillion valuation. They are doing $10B in annualized revenue but lost $12B just last quarter. This proves my point about why investors keep believing despite the exorbitant losses and sky-high valuations. In fact, Sam Altman himself warned that someone would lose a phenomenal amount of money. And the Bank of England and the IMF both warned recently of a market correction.

So yes, it’s a bubble. But in the meantime, we’re all living in it and benefiting from the trillion-dollar potential of AI companies and the thousands of overpriced startups that it can support. And my trillion-dollar analogy is playing out right now with OpenAI's IPO plans.

What do you think? Will OpenAI be that one massive winner that justifies the 1000 failures, or will this end like the dot-com crash?
150
29 Comments -
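The "one winner justifies 1000 failures" arithmetic in the post above can be made concrete with a toy expected-value model. Every number here is assumed for illustration, not sourced from any fund:

```python
def fund_ev(n_bets, check_usd, winner_multiple, p_win_per_bet):
    """Expected value of a spray-and-pray portfolio (toy model).

    n_bets identical checks; each bet independently returns
    winner_multiple * check_usd with probability p_win_per_bet, else 0.
    """
    invested = n_bets * check_usd
    expected_winners = n_bets * p_win_per_bet
    return expected_winners * check_usd * winner_multiple - invested

# 1000 checks of $5M ($5B deployed); a 1-in-1000 shot at a 2000x
# outcome ($10B back) still leaves the portfolio EV-positive.
ev = fund_ev(1000, 5e6, 2000, 1 / 1000)
```

The fragility the post describes falls out of the same formula: shave the winner's multiple or the hit rate slightly and the expected value swings negative, which is why the strategy only works in a regime where trillion-dollar outcomes are believed possible.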
Sandeep Swami
Tailnode Technology • 3K followers
🎵 Imagine this: You’re listening to “Kesariya” from Brahmastra. It’s romantic, mid-tempo, sung in Hindi, has strong emotional depth, and no explicit content. Now think: how does Spotify know all of this?

The answer lies in a smart mix of Generative AI + human expertise — an annotation system built at scale. Spotify had to label 100M+ tracks with rich metadata across mood, genre, language, instruments, explicitness, and more — including podcasts and video content. A massive challenge! So, how did they do it? Here’s their playbook:
🔹 GenAI handles predictable patterns — like detecting language or suggesting likely moods from audio signals
🔹 Human annotators verify nuanced cases — like distinguishing between “romantic” and “heartbreak”
🔹 Three-tier workforce: core annotators, quality analysts for edge cases, and project managers for workflow coordination
🔹 Tool-agnostic platforms: flexible, multimodal interfaces (audio/video/text) with real-time dashboards
🔹 Agreement scoring + escalation: low-confidence items are flagged for deeper human review

🚀 The result?
✅ 10× growth in annotation volume
✅ 3× improvement in annotator speed
✅ Faster ML model training and personalization for users like us

🎯 Lesson: Scaling AI isn’t just about bigger models — it’s about intelligent workflows that blend automation with human judgment. As someone exploring GenAI, I find this a masterclass in how to operationalize AI beyond experiments.

What other real-world use cases are ripe for human-in-the-loop GenAI systems? Let’s discuss!

#GenAI #Spotify #MLOps #AIProduct #HumanInTheLoop #Annotation #MusicTech #LLM #GenerativeAI #AIWorkflow #LinkedInLearning
13
1 Comment -
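The "agreement scoring + escalation" step described in the post above can be sketched in a few lines: score each item by how many annotators agree with the majority label, and route low-confidence items to deeper review. A minimal sketch; the 0.75 threshold is an assumed parameter, not Spotify's:

```python
from collections import Counter

def agreement_score(labels):
    """Fraction of annotators agreeing with the majority label."""
    if not labels:
        return 0.0
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

def route(labels, threshold=0.75):
    """Accept high-agreement items; escalate low-confidence ones for review."""
    majority = Counter(labels).most_common(1)[0][0]
    decision = "accept" if agreement_score(labels) >= threshold else "escalate"
    return decision, majority
```

For example, `route(["romantic", "romantic", "romantic", "heartbreak"])` accepts with the majority label, while a 50/50 split on the same track gets escalated to a quality analyst.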
Andy Wang
Google • 485 followers
Mass adoption of self-driving cars is coming, and in selected cities it is already here. Looking at the data, Alphabet's Waymo has high potential to dominate the market, as they have a big first-mover advantage, and no real competitors are picking up the pace other than Tesla's robotaxi service. However, there are still certain roadblocks, especially high costs and ethical concerns.

Making science fiction a reality isn't cheap. Waymo is currently not profitable at all; in fact, they are going in the other direction. They reported a $4.4 billion loss last year despite raising $5.6 billion in funding. This isn't that big of a deal while they are growing extremely fast, but things might change when strong competitors, namely Tesla, enter the market. Waymo's core strategy is to put safety first, relying not only on AI but also on "maps, sensors, and human feedback, which makes it so expensive." This is fantastic for the passengers, but the company's finance department is paying the price. Tesla, on the other hand, is looking at ways to cut costs at the expense of road safety.

With road safety comes a plethora of ethical concerns. Whose fault is it when the car crashes? This is the biggest question that surrounds this space, and nobody has a good answer. On one hand, there is a decent chance that the driver was just in the back seat, scrolling on their phone and not paying any attention. But on the other hand, can you blame AI and get away scot-free? Is it the passenger's responsibility to watch the road and actively try to avoid accidents if they can? Additionally, like the trolley problem, how would the AI decide a situation where the car must dodge one pedestrian but would crash into another? All these questions are up in the air and will most likely be decided case by case. And who knows, maybe the entire industry will collapse when one bad accident happens. Despite all the challenges, driverless cars are coming.
I am excited about Waymo and other competitors turning cities and even suburbs into a horde of driverless cars. Personally, I haven't gotten a chance to try them out yet, but I will definitely do so if I get the chance. #AI #SELFDRIVINGCARS #WAYMO #TESLA
12
3 Comments -
Jonathan Gordon
Elbit Systems • 233 followers
It's past midnight and I can't sleep, so here's my take on the LLM "AI" debate.

There are two groups of people using LLMs in their day-to-day work. The first recognize it as a semi-useful tool which happens to be really good at spitting out short bursts of code for patterns it is familiar with. It is nothing more than an elaborate code-completion tool. The second almost go so far as to cede their creativity, and certainly large parts of their intelligence, to these LLMs. They see the tool as being able to build whole projects and replace actual engineers. This is the group claiming AI is coming for our jobs, and in truth, they are probably correct - well, their jobs anyway. When the LLM is doing all the work, how are two engineers distinguishable? If neither knows how to debug a system or preemptively fix the security issues the LLM introduces, why would a company spend money on the expensive local one when the cheap one is just as good? Why hire them on a permanent basis at all if they are only needed when the LLM needs a tweak?

In my last role I was introduced to over a dozen technologies I had no previous experience with. I read API docs, built quick proof-of-concept applications to test applicability to my system, and so on. I take all that knowledge and experience into my next role. A "prompt engineer" doesn't understand why the LLM "made its choices," and retains nothing about the system once they move on.

An actual experience I had in the last 12 months: I was not familiar with how the various cloud services were hooked up (think ALBs, security groups, load balancers, etc.), so I went to ask the cloud team to show me how to click-ops something I needed. The dev could not do that and resorted to the CLI LLM agent to do it. This repeated with the lead dev of that team. If the agent is down, that team is completely useless.

So what happens when the team gets a new hire? How does an "AI first" team onboard new members? Where is the technical knowledge transfer? There is none, because it's all LLM agents doing the work. Eventually the last of the old guard is gone and you have nothing but overpaid LLM babysitters. Where is the business incentive to keep any particular person employed?

So what happens to the "AI skeptics"? They leave the industry, or they exploit the LLMs' output as black- (or white-) hat hackers? I don't know, but I hope the bubble bursts before we get that far. Would you trust a bridge entirely designed by an LLM? Of course not, but that's where the software industry is going.
17
1 Comment -
Shweta Bharti
Stony Brook University • 2K followers
Curious about which LLMs are actually leading the pack in 2025? With so much buzz around the launch of GPT-5, it’s easy to get lost in the headlines. But for those of us knee-deep in real-world applications, benchmark data is where the rubber meets the road.

I recently came across the Vellum LLM Leaderboard (https://lnkd.in/gwEE7jeD) — a site that compiles up-to-date benchmarks on leading models like GPT-5, Gemini 2.5 Pro, Claude Opus 4, Grok, Llama 4 Scout, and many others.

🔍 Why it’s interesting:
- Shows model performance across reasoning, coding, math, and agentic tasks — beyond just generic “accuracy” claims.
- Compares cost, latency, and context window size — extremely handy when planning production deployments.
- Includes both open-source and proprietary models in one place for a fair, side-by-side look.

What I like most is how it helps answer nuanced questions:
- Which model gives the best value for high-throughput coding tasks?
- Which ones can handle massive context windows without breaking latency goals?

Even if you’re not building LLM-powered products today, the insights are invaluable: you can rapidly compare models for everything from RAG pipelines to chatbots, tailor choices to your budget, and keep your team ahead of the curve.

📊 For anyone exploring, architecting, or scaling AI solutions: bookmark this leaderboard and check in before your next project. The landscape is changing fast, and smart decisions start with reliable data.

Disclaimer: I’m not affiliated with Vellum.ai in any way, nor do I have any incentive to share this — just thought it might be useful for anyone in the AI/ML community trying to stay on top of a rapidly evolving model landscape.

#AI #ML #GPT5 #Vellum #LLM #GenAI
3
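The "best value under constraints" question the post above raises reduces to a filter-and-rank over leaderboard rows: drop models that miss a hard requirement (say, context window), then sort by score per dollar. A toy sketch; the catalog entries, field names, and numbers are made up, not Vellum's schema or data:

```python
def value_rank(models, min_context=128_000):
    """Rank models by benchmark score per dollar, after filtering
    out any whose context window is below the hard requirement."""
    eligible = [m for m in models if m["context"] >= min_context]
    return sorted(eligible, key=lambda m: m["score"] / m["usd_per_mtok"], reverse=True)

# Hypothetical catalog: score is a benchmark aggregate, usd_per_mtok
# is price per million tokens, context is the window size in tokens.
catalog = [
    {"name": "model-a", "score": 88, "usd_per_mtok": 10.0, "context": 200_000},
    {"name": "model-b", "score": 80, "usd_per_mtok": 2.0,  "context": 128_000},
    {"name": "model-c", "score": 92, "usd_per_mtok": 15.0, "context": 32_000},
]
best = value_rank(catalog)[0]["name"]
```

Note how the highest-scoring model can lose on value: here the mid-tier model wins on score-per-dollar once the small-context model is filtered out, which is exactly the kind of trade-off a leaderboard with cost and context columns lets you see at a glance.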
-
Chamod Gamage
AngelList • 933 followers
Recently I gave a talk at SF Ruby about a library we built (and recently open-sourced!) at AngelList called Zaxcel. Why'd we build it? Excel is the lingua franca of finance, but most programmatic Excel generation libraries force you to think in coordinates and formula strings. We took a different approach: model the workbook as a directed graph where data flows through named objects. Let the library handle coordinate resolution and formula composition. The result is a DSL that actually makes sense: you write formulas as Ruby expressions, reference things by name, and the type checker catches mistakes at compile time. It's revolutionized how we build financial statements. What used to take weeks now takes hours (and with AI agents - only minutes!), empowering thousands of workbook deliveries a quarter. Read how we went from nightmare refactors to 100x productivity: https://lnkd.in/es2aTttR Open-sourced repo (you can use it today!): https://lnkd.in/ecKRwcaW
91
8 Comments
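Zaxcel itself is a Ruby DSL and its real API isn't shown in the post above; but the core idea it describes (name your cells, write formulas over names, and let the library resolve names to grid coordinates at render time) can be illustrated with a toy Python sketch. The naive one-cell-per-row layout and all identifiers here are hypothetical:

```python
class Sheet:
    """Toy model of name-based workbook generation: formulas reference
    named values, and A1-style coordinates are derived, never hand-written."""

    def __init__(self):
        self.names = {}   # name -> (row, col) in the grid
        self.values = {}  # name -> stored value

    def set(self, name, value):
        """Store a value and assign it the next cell (one cell per row)."""
        self.values[name] = value
        self.names[name] = (len(self.names), 0)

    @staticmethod
    def coord(rc):
        """(row, col) -> A1-style reference, e.g. (0, 0) -> 'A1'."""
        row, col = rc
        return f"{chr(ord('A') + col)}{row + 1}"

    def formula(self, expr):
        """Rewrite names in an expression into cell references.
        Longest names first, so 'revenue2' is not clobbered by 'revenue'."""
        for name in sorted(self.names, key=len, reverse=True):
            expr = expr.replace(name, self.coord(self.names[name]))
        return "=" + expr

s = Sheet()
s.set("revenue", 100)
s.set("costs", 60)
profit = s.formula("revenue - costs")  # -> "=A1 - A2"
```

If a later change moves `costs` to a different cell, every formula referencing it re-resolves automatically, which is the refactoring win the post attributes to modeling the workbook as a graph of named objects rather than coordinate strings.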
Others named Yash Mittal in United States
-
Yash Mittal
San Francisco Bay Area -
Yash Mittal
New York, NY -
Yash Mittal
Bellevue, WA -
Yash Mittal
Cupertino, CA
14 others named Yash Mittal in United States are on LinkedIn