Day 3 at #MODEX and we've talked to more ops, automation, and quality leaders than we expected for the whole show 🤖 The demos that keep getting requested: torn packaging detection on the conveyor, pallet load verification at the dock, VLM-driven process monitoring on the assembly floor → all running on real customer footage, not staged setups. One last day to catch us. The fastest way to get a focused 1:1 → Ping Brandon Neustadter or Max Wasilko to book a dedicated demo slot!
Datature
Software Development
San Francisco, California · 18,948 followers
Datature is the way to build, train, and deploy production-grade vision AI - all in one platform.
About us
Datature simplifies the way people build deep-learning capabilities. Using Nexus, our end-to-end #nocode MLOps platform, we enable everyone to create AI breakthroughs of their own.
- Website: https://datature.com
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco, California
- Type: Privately Held
- Founded: 2020
- Specialties: Computer Vision, Machine Learning, Deep Learning, Artificial Intelligence, and MLOps
Products
Datature
Machine Learning Software
Datature Nexus is an MLOps platform designed to streamline the workflow for computer vision applications. It enables users to effortlessly annotate unstructured data and build intricate machine learning pipelines directly within their web browser, using intuitive visual editing tools. This comprehensive, no-code solution facilitates the creation, training, and deployment of computer vision models, enhancing collaboration and efficiency for developers and teams. Datature's platform has allowed users from Agritech, MedTech, Manufacturing, Industrial Automation, and many more verticals to create cutting-edge ML models to revolutionize their own fields within weeks. Learn More 👉 datature.com
Locations
- Primary: 535 Mission St, San Francisco, California 94105, US
- 92 Amoy Street, #02-01, Singapore 069911, SG
Updates
-
Datature reposted this
One thing about Gemma 4 that hasn't gotten enough attention → you can pick how many tokens each image consumes at inference time. Gemma 4 gives you options of 70, 140, 280, 560, or 1,120 tokens per image, same weights for every budget. 70 is enough for scene classification. 1,120 handles OCR and small-object detection. Basically, it's a way for you to set the resolution of the image analysis. Drop from 1,120 to 280 tokens per image, cut per-image compute roughly 4x, and keep enough spatial information for defect classification on a quality line. Also, it actually runs on edge hardware. The E2B variant decodes at 7.6 tokens/second on a Raspberry Pi 5 in under 1.5 GB of memory with INT4 quantization. Not real-time, but plenty for periodic monitoring (quality checkpoints, agricultural surveillance, environmental sensing). Official runtime support spans Android, iOS, Windows, Linux, macOS, and WebGPU. We will be bringing this onto our Datature Vi platform for fine-tuning soon. Full Breakdown → https://lnkd.in/g-f4haEZ
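A quick back-of-envelope on that budget tradeoff, as a minimal sketch: the budget values come from the post above, but the linear-scaling assumption is ours, not a measured benchmark.

```python
# Relative per-image cost of each selectable image-token budget,
# assuming prefill compute scales roughly linearly with token count.
BUDGETS = [70, 140, 280, 560, 1120]  # tokens per image

for budget in BUDGETS:
    print(f"{budget:>5} tokens -> {budget / 1120:.2f}x the compute of the 1,120 budget")

# 280 tokens -> 0.25x, i.e. the ~4x per-image saving mentioned above.
```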
-
Datature is at #MODEX next week 🤖 We will be running live demos of Datature's Vision AI on real warehouse footage → from catching torn and crushed packaging on the conveyor, to verifying pallet load configurations before they leave the dock. Be sure to stop by and chat with our engineers about how teams can leverage Vision-Language Models to automate their systems, with surprisingly little data and setup. ▎Booth A7024, Hall A ▎April 13-16 ▎Georgia World Congress Center, Atlanta ▎Brandon Neustadter Max Wasilko #MODEX2026 #VisionAI #WarehouseAutomation #ManufacturingAI
-
Training a defect detection model usually means labeling hundreds of broken parts. But in manufacturing, defects are rare. Some failure modes haven't even happened yet. You can't label what doesn't exist. Anomaly detection flips the script. Instead of teaching a model what "broken" looks like, you teach it what "normal" looks like. Anything outside that boundary gets flagged. We tested three approaches in Anomalib on the MVTec AD benchmark ↘ PaDiM, PatchCore, EfficientAd. All three delivered strong image-level detection on the bottle category. The differences showed up in pixel-level precision + training time. The best part? No bounding boxes, no polygons, no annotation files. Your "labeling" is sorting /good and /bad folders. What's the drawback of using this library? Read our full walkthrough with code, results, and analysis on the blog to find out - Link in comments 👇
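To make the /good-vs-/bad workflow concrete, here's a minimal sketch assuming Anomalib's 1.x Folder/Engine API (argument names vary between versions, and the dataset path is a placeholder):

```python
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Patchcore

# "Labeling" is just two folders: normal images, plus defects for testing
datamodule = Folder(
    name="bottle_line",
    root="datasets/bottle_line",  # hypothetical dataset path
    normal_dir="good",
    abnormal_dir="bad",
)

model = Patchcore()  # swap in Padim or EfficientAd to compare approaches
engine = Engine()

engine.fit(model=model, datamodule=datamodule)   # learn what "normal" looks like
engine.test(model=model, datamodule=datamodule)  # flag what falls outside it
```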
-
New Tutorial - Fine-Tune Qwen3-VL on Your Own Dataset 🩻
Releasing our Qwen3-VL Finetuning Guide on Datature Vi for Vision Tasks 🎆 One thing that's becoming obvious building in Vision AI → The near future isn't traditional CV or VLMs. It's both, running together to solve different problems. If you need high-throughput, structured, repeatable outputs at scale - a YOLO or DETR model still wins. Bounding boxes, masks, and most importantly, confidence scores. That's what's still running in production for most use cases. However, when workflows get more human-shaped ("only find the scratches near the weld joint", "identify and generate a report on the fractures in this X-ray"), the old boxes-and-labels interface starts running out of road. That's where Vision-Language Models get interesting. They change the output format from rigid classes to usable language + reasoning + tool calling. The new bottleneck is how you annotate a dataset to fine-tune models for your specific use case, and how you monitor and deploy them in your application. In this article, we share more about how to fine-tune your own Qwen3-VL model on Datature Vi → https://lnkd.in/gXZ4Ai3v Datature Vi Platform (Beta) → https://vi.datature.com
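For a sense of what annotating such a dataset looks like, here's a hypothetical training sample in the common chat/messages format. The field names and path are illustrative, not Datature Vi's exact schema - the guide linked above covers that:

```python
# One supervised fine-tuning sample for a VLM: an image, a human-shaped
# instruction, and the grounded response the model should learn to produce.
sample = {
    "images": ["scans/xray_0042.png"],  # placeholder path
    "messages": [
        {
            "role": "user",
            "content": "Identify and report any fractures in this X-ray.",
        },
        {
            "role": "assistant",
            "content": (
                "Hairline fracture along the distal radius, "
                "approximately 2 cm from the wrist joint."
            ),
        },
    ],
}
```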
-
Most CV engineers stop at imread and imshow. But production computer vision runs on a different set of functions entirely. We broke down the 7 OpenCV functions that show up again and again in real-world pipelines: 🔹 dnn.readNet - run neural network inference without framework dependencies 🔹 warpPerspective - fix skewed documents and misaligned parts 🔹 calcOpticalFlowPyrLK - track motion between video frames 🔹 createBackgroundSubtractorMOG2 - separate moving objects from static scenes 🔹 findContours - detect shapes, count parts, measure defects 🔹 Canny - edge detection (1986 algorithm, still the standard) 🔹 inRange + morphologyEx - segment objects by color in varied lighting Each one includes runnable Python code, parameter guidance, and real examples - from document scanning to factory inspection to traffic analysis. The best part: these functions chain together. Canny feeds contours. Background subtraction enables counting. OpenCV handles the image plumbing so your ML models can focus on the hard problems. We will go deeper into how chaining these functions leads to transformative data pipelines in our next post. Full Guide with Code 👇 https://lnkd.in/gxixyz3E
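A minimal sketch of that chaining idea, using only OpenCV calls named above ("parts.jpg" and the area threshold are placeholders):

```python
import cv2

img = cv2.imread("parts.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # denoise before edge detection

edges = cv2.Canny(blurred, 50, 150)  # step 1: edge map

# step 2: Canny feeds findContours - shapes emerge from the edge map
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# filter out specks so only part-sized blobs get counted
parts = [c for c in contours if cv2.contourArea(c) > 100]
print(f"detected {len(parts)} parts")
```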
-
We’ve been talking to several drone and infrastructure inspection companies about how they’re deploying AI for solar panel inspection - and the takeaway is clear: speed matters, but workflow integration matters more - models that generate panel-level, GPS-tagged tickets are key. In this comprehensive article, we break down the end-to-end playbook: why manual inspection doesn’t scale, the full solar defect taxonomy (hotspots, PID, diode failures, cracks, soiling, delamination), and which imaging modalities catch what (RGB vs thermal IR vs EL/UVF/PL). We present the modern drone workflow from capture → orthomosaic → panel segmentation → defect detection → prioritized maintenance reports - ultimately showing how Datature supports this workflow from end to end. Read The Article → https://lnkd.in/grKwWD7t
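To illustrate the "panel-level, GPS-tagged ticket" output that workflow lands on, here's a sketch; the dataclass and its fields are our illustration, not Datature's schema:

```python
from dataclasses import dataclass

@dataclass
class PanelTicket:
    panel_id: str      # from panel segmentation on the orthomosaic
    lat: float         # panel centroid in WGS84, from the georeferenced map
    lon: float
    defect_type: str   # e.g. "hotspot", "diode_failure", "soiling"
    severity: float    # model confidence or estimated power loss
    image_ref: str     # detection crop for the maintenance crew

ticket = PanelTicket(
    panel_id="B12-R04-P17",
    lat=33.7490, lon=-84.3880,  # placeholder coordinates
    defect_type="hotspot",
    severity=0.92,
    image_ref="crops/B12-R04-P17.jpg",
)
# Tickets like this sort directly into a prioritized maintenance queue.
```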
-
Great updates by Vladimir Iglovikov and team over at Albumentations 🤩
Albumentations 2.0.20 is out! 🚀 Two major updates for computer vision pipelines by Mikhail Druzhinin ⚡ Performance • Perspective transform — up to 2.7× faster on video batches (grayscale depth maps, medical slices) • HueSaturationValue — up to 1.2× faster on large image batches Biggest gains for grayscale video where we now warp the whole batch in a single C++ call instead of frame-by-frame. 🆕 user_data target by Vladimir Iglovikov Pass arbitrary custom data through your augmentation pipeline — camera intrinsics, captions, point clouds, timestamps. By default it passes through unchanged; override apply_to_user_data to update it when transforms change the image. Use cases: - robot/autonomous driving (update K matrix on crop) - vision-language models (keep captions in sync with flips) - LiDAR/BEV, multi-sensor fusion. pip install -U albumentationsx #ComputerVision #MachineLearning #DeepLearning #DataAugmentation #Albumentations
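Based on the release notes above, a sketch of how user_data might ride along a pipeline; the exact call signature is our assumption, so check the Albumentations docs:

```python
import albumentations as A
import numpy as np

transform = A.Compose([A.HorizontalFlip(p=1.0)])

image = np.zeros((480, 640, 3), dtype=np.uint8)
out = transform(image=image, user_data={"caption": "a robot arm on the left"})

# By default user_data passes through unchanged; a custom transform can
# override apply_to_user_data to, say, rewrite "left" -> "right" on a flip.
print(out["user_data"])
```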
-
Pose estimation has quietly become production-grade. Object detection tells you “a person is here.” Pose estimation tells you “left arm up, right knee bent” - by predicting keypoints (wrists, elbows, knees, ankles, etc.) and connecting them into a skeleton. That structural signal is why pose is showing up everywhere from sports analytics and rehab to safety monitoring and gesture interfaces. In our latest guide, we cover the parts that matter when you actually ship. If you’re building anything that needs posture, motion, or intent - this is the quick primer. Read The Blog → https://lnkd.in/gc3f78Pt
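As a concrete example of that structural signal, turning three keypoints into a joint angle takes a few lines of numpy (the coordinates below are made up):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at b, in degrees, formed by keypoints a-b-c
    (e.g. shoulder-elbow-wrist gives the elbow angle)."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = (120, 80), (150, 140), (200, 120)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```

"Left arm up, right knee bent" is just thresholds on angles like this one.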
-
Vision AI and Agri-Robotics are way past pilots as of 2026, but production deployments of vision capabilities stall for one reason: generic pre-trained models don’t hold up in the field (domain shift, lighting/season drift, tiny targets). Teams often see ~20–40% accuracy drops moving from benchmarks to real farm imagery. Our users tell us accuracy has to average above ~88% to justify moving past pilots. The practical playbook for agri-robotics PMs/devs: fine-tune on 500+ labeled field images, then quantize and run on-edge (connectivity is unreliable and cloud latency misses actuation timing). Ship a tight loop: deploy → collect edge cases → retrain on fresh field data each season, per unique customer. Datature's platform compresses that pipeline (label → train → evaluate → export → deploy via MQTT) so you can get to field-ready models in weeks, not months. Read Our Findings → https://lnkd.in/gVbAzaFM
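For the deploy-via-MQTT step, a minimal sketch of what an edge device might publish, using paho-mqtt. The broker host, topic, and payload fields are placeholders, not Datature's protocol, and paho-mqtt 2.x additionally requires a CallbackAPIVersion argument to Client():

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                # paho-mqtt 1.x style constructor
client.connect("broker.local", 1883)  # hypothetical on-farm broker

detection = {
    "camera": "row-07-cam-02",
    "class": "weed",
    "confidence": 0.91,
    "bbox": [412, 233, 468, 301],     # x1, y1, x2, y2 in pixels
}

# Publish locally so the actuation controller reacts without a cloud round trip
client.publish("farm/detections", json.dumps(detection))
```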