San Francisco, California, United States
10K followers
500+ connections
Websites
- Stanford Page: http://nmbl.stanford.edu/people/soha-pouya/
- EPFL Page: http://biorob.epfl.ch/people/pouya
- Google Scholar: https://scholar.google.com/citations?user=qns86fUAAAAJ&hl=en
Activity
- Soha Pouya posted this: Grateful to share that I’ve been promoted to Senior Director, Robotics Software Engineering at NVIDIA. Last week also marked my 5th year here. I’m deeply thankful to work alongside world-class teams across NVIDIA and our ecosystem partners, and highly appreciative of the amazing work and support from the team members, collaborators, and mentors who’ve shaped this journey of building across the full robotics stack, from simulation and data to AI models, CUDA libraries, and real-world deployment. Looking forward to what we continue to build.
- Soha Pouya shared this: Another great GTC last week. At #NVIDIA #Isaac, we shared exciting updates with strong momentum across the full robotics stack, from simulation and synthetic data to foundation models, CUDA libraries, and deployment:
  - Isaac Lab Arena is live, enabling large-scale benchmarking with growing adoption across partners. See Lightwheel’s exciting updates on RoboFinal: https://lnkd.in/gdmrFagt
  - Neural Reconstruction workflows in Isaac Lab/Sim are extending to XGRIDS devices, alongside adoption by partners such as Serve Robotics. To learn more, see the post-training of our COMPASS navigation model: https://lnkd.in/gzbX-Bb4
  - SONIC, humanoid whole-body control built in collaboration with our research teams, including a demo at #GTC and the release of a massive training dataset enabling users to train on other embodiments. More here: https://lnkd.in/g7AgtkZV
  - cuVSLAM is now fully open source (something the community has long been asking for), with insightful updates from our partners Idealworks, RealSense, and Intermodalics. More here: https://lnkd.in/gRdqrcDT
  - We are also excited about the integration of Isaac Lab 3.0 and the Newton physics engine into our workflows. Learn more about Newton here: https://lnkd.in/gV7yTwxf
  For a comprehensive update on NVIDIA Robotics developer tools at #GTC2026, check out the article “From Simulation to Production: How to Build Robots With AI”: https://lnkd.in/gej5qDMz
  It’s an incredible time for robotics, and we’re grateful to be pushing the boundaries of physical AI together with our partners and a vibrant, growing ecosystem. #Robotics #PhysicalAI #GTC2026 #NVIDIA #Isaac
- Soha Pouya reposted this: At GTC, Jensen called CUDA-X libraries "the crown jewels" of NVIDIA. I'm proud to share a milestone for one of those jewels — cuVSLAM, NVIDIA's accelerated VSLAM library, is now open source! For robots to scale into dynamic, unstructured environments, solving localization failures is critical. With multiple APIs, modular architecture, debugging guide and more – Prototype cuVSLAM in Python, deploy with C++ or ROS2 with mono to multi-stereo cameras. 🔗GitHub: https://lnkd.in/g5_Ubijx Even more exciting — our partners championed cuVSLAM at GTC showcasing real-world deployments with Jetson: Idealworks — Validated drift-free localization for AMRs in industrial environments. Our joint whitepaper with comparison benchmarks is a must-read. 📰Announcement → https://lnkd.in/grSwpsRK 📄Whitepaper → https://lnkd.in/gHz7gHSa RealSense — Unveiled humanoid autonomous navigation with LimX Dynamics, powered by cuVSLAM VO. 📰 Announcement → https://lnkd.in/gb7nTVMc 🎬Video → https://lnkd.in/gw2NCJVY Intermodalics — Tested on AMRs with Stereolabs Zed camera → https://lnkd.in/gMC7nwMj This was made possible through years of relentless efforts from our Isaac cuVSLAM team, Hesam Rabeti, Soha Pouya and many others. Many thanks to our partners and the developer community whose feedback made this inevitable. We're just getting started! NVIDIA Robotics #VSLAM #GTC2026 #NVIDIAGTC #Robotics #OpenSource #Jetson #CUDAX #humanoids #autonomousnavigation #cuVSLAM
- Soha Pouya reposted this with the comment “Proud to see cuVSLAM being used in such amazing applications!”: 🎺 #GTC26 Special 🎺 NVIDIA Robotics open-sourced their Visual SLAM stack, cuVSLAM, this week. We partnered with Zeal Robotics to put it to good use with a Stereolabs ZED - eyes on the 🤖!
- Soha Pouya reposted this: Exciting moment for Physical AI at GTC 2026! Jensen Huang shared the next step toward generalist robotics with NVIDIA Isaac GR00T N, a foundation model designed to enable humanoid robots to learn and perform complex real-world tasks. Two key updates stood out: GR00T N1.7: The latest generation of NVIDIA’s robot foundation model, now entering early access, brings stronger dexterity and general manipulation capabilities. Developers will soon be able to download the model, experiment with fine-tuning, and integrate it with the NVIDIA Isaac robotics stack. GR00T N2 (preview): Built on DreamZero research (https://lnkd.in/gbDqEnsF) and a new world-action model architecture, the next generation aims to dramatically improve robots’ ability to generalize to new tasks and environments. We’re still at the beginning of the Physical AI era, but the progress across simulation, data pipelines, and robot foundation models is accelerating quickly. Excited to see what developers and partners will build on top of the NVIDIA Isaac robotics platform. #GTC2026 #PhysicalAI #Robotics #HumanoidRobots #NVIDIA #GR00T
- Soha Pouya reposted this with the comment “It's been a busy year for us! We've been working hard with amazing partners such as RealSense to enable robotics use-cases! You can try cuVSLAM for yourself here: https://lnkd.in/gstaMVN”: Seeing this come to life is easily the most impressive project I’ve ever been a part of. A year ago, we were all impressed just seeing humanoids walk or dance. But let’s be real, most of that was just following a script. The "holy grail" is getting them to actually reason through their environment in real-time. That’s exactly what we’ve been building with LimX Dynamics and NVIDIA Robotics. By pairing our RealSense D436 and VSLAM with NVIDIA cuVSLAM VIO, we’re giving these robots true visual perception. They aren’t just moving; they’re actually understanding the world, navigating with high precision, and detecting obstacles on the fly. This is the shift from cool mechanics to true Physical AI. It’s one thing to talk about autonomy, but seeing a humanoid navigate the real world with this kind of intelligence is a whole different level. We’re just getting started. Learn more about the demonstration: https://lnkd.in/gc2UpwzS #Humanoids #RealSense #NVIDIA #PhysicalAI #Robotics #VisualSLAM #limxDynamics
- Soha Pouya reposted this: 🚀 Isaac Lab-Arena Benchmark: GPU-Accelerated Parallel Evaluation at Scale We're excited to share comprehensive benchmark results from our collaboration with NVIDIA, demonstrating how Isaac Lab-Arena's GPU-accelerated parallelism transforms robot policy evaluation. 📊 Key findings from testing 10 complex manipulation tasks: - 13.5× speedup at 4,096 parallel environments - Evaluation time reduced from 10+ hours to under 1 hour - Performance scales with parallelism: 10.7× → 12.4× → 13.5× The critical insight: GPU parallelism enables each of 8 GPUs to run 512 environments concurrently (4,096 total) vs. sequential execution (8 streams). This isn't just faster, it's a fundamentally different scale. For VLA developers, this means multiple evaluation iterations per day instead of waiting overnight—accelerating the entire development cycle. NVIDIA Isaac Lab-Arena is co-developed by Lightwheel and NVIDIA as an open-source framework for scalable robot policy evaluation. This infrastructure directly powers RoboFinals, our industrial-grade evaluation platform used by frontier robotics teams like Qwen to iterate rapidly and measure real capability gains. Read the full benchmark study: https://lnkd.in/gRS7uXVt Review the documentation of Isaac Lab-Arena: https://lnkd.in/g3GNeVXJ Special thanks to Sangeeta Subramanian, Soha Pouya, Alexander Millane, Arnav Khanna, Kalyan V., Oyindamola Omotuyi and the broader NVIDIA Robotics team for their collaboration.
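As a back-of-the-envelope check, the parallelism and timing figures quoted in the benchmark post are mutually consistent; the short sketch below simply recomputes them. All numbers come from the post itself, and nothing here calls the actual Isaac Lab-Arena API.

```python
# Sanity check of the Isaac Lab-Arena benchmark figures quoted above.
# Illustrative arithmetic only, not a call into any NVIDIA library.

gpus = 8
envs_per_gpu = 512
total_envs = gpus * envs_per_gpu
print(total_envs)  # 4096 parallel environments, matching the post

reported_speedup = 13.5      # measured at 4,096 environments
sequential_hours = 10.0      # the "10+ hours" sequential baseline
parallel_hours = sequential_hours / reported_speedup
print(f"{parallel_hours:.2f} h")  # 0.74 h, i.e. "under 1 hour"
```

At roughly 45 minutes per full evaluation, several iterations per day fit comfortably where a sequential run previously consumed an overnight slot.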
- Soha Pouya reposted this: Master scalable robotic policy evaluation with NVIDIA Isaac Lab-Arena. 🤖 Join our Robotics Office Hours on Wednesday, Feb 4 @ 9 AM PT with Lightwheel to explore this open-source framework, and walk through our co-designed and validated workflows, followed by a live demo and Q&A.
- Soha Pouya shared this: Interested in joining the #NVIDIA #Isaac engineering team in shaping the future of Physical AI? We're looking for a Software Engineering Manager to lead our Robotics 3D Perception & Simulation efforts. In this role, you’ll help define the technical vision, mentor exceptional engineers, and deliver platforms that enable and accelerate the global robotics developer ecosystem, from applied research to real-world deployment. If you’re excited to turn cutting-edge ideas into platforms used by #robotics #developers worldwide, come build the future with us. Software Engineering Manager, Robotics 3D Perception and Simulation: https://lnkd.in/gVK49vgG
- Soha Pouya liked this: Newton 1.0 is live. GA release highlights:
  🧩 Stable, unified API across modeling, solving, control, and sensing
  ⚡ MuJoCo Warp solver: up to 252× (locomotion) and 475× (manipulation) speedups for MJX
  🤖 Kamino solver (beta): complex mechanisms including linkages and loop closures
  🧵 Deformable solvers: VBD + MPM, explicit two-way coupling with MuJoCo Warp
  🎯 SDF collisions + hydroelastic contact for high-fidelity interactions
  🔗 Isaac Lab and Sim early access + OpenUSD integration for end-to-end robot learning
  👁️ Tiled camera sensor fully written in Warp for high-throughput vision-based RL on the DGX platform
  Post-1.0 release and feature roadmap: monthly releases rolling out multiphysics features (automatic coupling, impulse exchange API, richer behaviors), standardized USD schemas, a multiphysics asset library, and advanced throughput-optimized solvers, while evaluating low-latency paths, faster CPU execution, deterministic simulation, and broader differentiability.
  👉 Full release + roadmap: https://lnkd.in/gntj-GqR
  Built in the open, shaped with the community. Looking forward to what you build with Newton. The example below shows Ethernet cable manipulation and RJ45 snap-fit insertion in the context of GB300 GPU assembly; the cable and rigid bodies are simulated with Newton SDF collisions/contacts and the VBD solver, visualized in the Newton viewer.
- Soha Pouya liked this: Our collaboration with Idealworks is taking robot autonomy further with GPU-accelerated Visual SLAM, powered by NVIDIA cuVSLAM, delivering reliable real-time localization for AMRs operating in complex industrial environments. cuVSLAM is now open-sourced on GitHub, enabling developers to build high-performance visual SLAM on NVIDIA Jetson. 🔗 https://lnkd.in/gsXuNpNz Check out the Idealworks post. 👇
  Idealworks: Localization failure is where automation stalls. We partnered with NVIDIA to fix that. Industrial environments are unforgiving – repetitive aisles, moving obstacles, limited structural variation. Conditions where traditional localization breaks down and fleets can't perform. Our joint whitepaper demonstrates how GPU-accelerated Visual SLAM, powered by NVIDIA cuVSLAM, delivers robust real-time localization for autonomous mobile robots without eating into the compute resources needed for safety-critical processes. Built for production. Built for scale. This is what it looks like when perception keeps up with the pace of the factory floor. Read the full news article to access the paper: https://lnkd.in/dVaywnsv #Idealworks #NVIDIA #InConcert #IdealworksOS #robotics #ai #whitepaper NVIDIA Robotics NVIDIA AI Jimmy Nassif Charbel Abi Hana Mihir Acharya Hesam Rabeti Kameel Amareen Dmitry Slepichev Zheng Wang Anthony Rizk
- Soha Pouya liked this: cuVSLAM is now open source! (The liked post is the same cuVSLAM open-source announcement quoted in the repost above.)
Experience & Education
- NVIDIA
- (Remaining role titles, employers, and education entries are masked in this signed-out page capture.)
Volunteer Experience
- Member
  IEEE Robotics and Automation Society & IEEE TC on Human Movement Understanding
  - Present · 10 years 2 months
  Science and Technology
- Member of Leadership Team
  AIMS - The Stanford Postdoc Link to Entrepreneurship and Industry
  - 1 year
  Science and Technology