AI-powered video analysis for neuroscience, pharmacology, and behavioral phenotyping
Markerless body-part tracking with real-time skeleton overlays: track snout, ears, body center, tail base, and limb positions with nothing attached to the animal.
Track multiple arenas in a single video simultaneously. Run parallel sessions and batch export data for high-throughput studies.
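As an illustration of the multi-arena idea (this is a sketch, not ConductVision's actual API), one camera frame can be split into per-arena regions that are then tracked independently; the 2x2 grid layout and the `split_arenas` helper below are assumptions:

```python
import numpy as np

def split_arenas(frame, rows=2, cols=2):
    """Split one video frame into a rows x cols grid of equally sized
    arena sub-images (assumes arenas are laid out on a regular grid)."""
    h, w = frame.shape[:2]
    return [frame[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

# A dummy 480x640 grayscale frame standing in for a real capture.
frame = np.zeros((480, 640), dtype=np.uint8)
arenas = split_arenas(frame)
print(len(arenas), arenas[0].shape)  # 4 (240, 320)
```

Each sub-image can then be fed through the same tracking pipeline, which is what makes batch export across parallel sessions straightforward.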
Draw zones directly on the video feed. Get live metrics per zone — center vs edge occupancy, arm entries, time in target quadrant.
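Under the hood, a zone-occupancy metric reduces to a point-in-region test over the tracked coordinates. A minimal sketch, assuming a rectangular zone and a hypothetical `time_in_zone` helper (not ConductVision's API):

```python
import numpy as np

def time_in_zone(xy, zone, fps):
    """Seconds the tracked point spends inside a rectangular zone
    given as (x_min, y_min, x_max, y_max)."""
    x, y = xy[:, 0], xy[:, 1]
    x0, y0, x1, y1 = zone
    inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    return inside.sum() / fps

# Synthetic trajectory: 150 frames in the centre of a 100x100 arena,
# then 150 frames near the edge, recorded at 30 fps.
xy = np.vstack([np.full((150, 2), 50.0), np.full((150, 2), 5.0)])
center_zone = (25, 25, 75, 75)
print(time_in_zone(xy, center_zone, fps=30))  # 5.0
```

The same test, run per frame, is what drives live center-vs-edge occupancy; arm entries fall out of tracking transitions between zones.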
Import pre-trained DeepLabCut and SLEAP models. Combine pose estimation outputs with ConductVision's built-in behavioral metrics.
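DeepLabCut, for example, exports per-video pose files as CSV/HDF5 with a three-row column header (scorer, bodyparts, coords). A minimal sketch of reading such a file with pandas and filtering by likelihood, using an in-memory CSV in place of a real export:

```python
import io
import pandas as pd

# A tiny DeepLabCut-style CSV: three header rows, then one row per frame.
csv = io.StringIO(
    "scorer,model,model,model,model,model,model\n"
    "bodyparts,snout,snout,snout,tailbase,tailbase,tailbase\n"
    "coords,x,y,likelihood,x,y,likelihood\n"
    "0,10.0,20.0,0.99,12.0,25.0,0.98\n"
    "1,11.0,21.0,0.97,13.0,26.0,0.95\n"
)
df = pd.read_csv(csv, header=[0, 1, 2], index_col=0)

# Pull one body part as an (n_frames, 2) array, keeping confident frames only.
snout = df["model"]["snout"]
good = snout["likelihood"] > 0.9
xy = snout.loc[good, ["x", "y"]].to_numpy()
print(xy.shape)  # (2, 2)
```

Arrays in this shape are the natural input to downstream metrics such as zone occupancy or freezing detection.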
Classify freezing, rearing, grooming, head dips, and stretch-attend postures automatically. No manual frame-by-frame coding.
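Freezing is the simplest of these classes to sketch: flag frames where frame-to-frame speed stays below a threshold for a minimum duration. The helper and thresholds below are illustrative assumptions, not the product's classifier:

```python
import numpy as np

def detect_freezing(xy, fps, speed_thresh=0.5, min_duration=1.0):
    """Mark frames as freezing when speed (units/s) stays below
    speed_thresh for at least min_duration seconds."""
    speed = np.linalg.norm(np.diff(xy, axis=0, prepend=xy[:1]), axis=1) * fps
    still = speed < speed_thresh
    freezing = np.zeros(len(xy), dtype=bool)
    run_start = None
    for i, s in enumerate(np.append(still, False)):
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_duration * fps:
                freezing[run_start:i] = True
            run_start = None
    return freezing

# 2 s stationary, then 2 s of steady movement, at 30 fps.
fps = 30
xy = np.vstack([np.full((60, 2), 10.0),
                10.0 + np.cumsum(np.ones((60, 2)), axis=0)])
bouts = detect_freezing(xy, fps)
print(bouts[:60].all(), bouts[60:].any())  # True False
```

Postures like rearing or stretch-attend need pose geometry rather than raw speed, which is where the skeleton tracking above comes in.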
TTL triggers for optogenetics, shock grids, light/dark protocols, and temperature control. Frame-accurate event alignment.
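Frame-accurate alignment comes down to mapping each TTL timestamp onto the frame whose exposure interval contains it. A sketch under the assumption of a constant frame rate (variable-rate cameras would need per-frame timestamps instead):

```python
import numpy as np

def ttl_to_frames(ttl_times_s, fps, n_frames):
    """Map TTL event timestamps (seconds) to video frame indices,
    assuming frame k covers the interval [k/fps, (k+1)/fps)."""
    frames = np.floor(np.asarray(ttl_times_s) * fps).astype(int)
    return np.clip(frames, 0, n_frames - 1)

# Laser-on TTLs at 2.0 s and 5.5 s in a 30 fps, 300-frame recording.
print(ttl_to_frames([2.0, 5.5], fps=30, n_frames=300).tolist())  # [60, 165]
```

With events expressed as frame indices, optogenetic stimulation, shock delivery, or light/dark transitions can be overlaid directly on the behavioral traces.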
16 validated paradigms for neuroscience, pharmacology, and behavioral phenotyping
Spatial learning and memory
Spatial learning
Working memory
Spatial working memory
Working and reference memory
Associative learning
High-throughput learning
Inhibitory learning
Spatial cognition
Anxiety-like behavior
Locomotion and anxiety
Anxiety assessment
Risk assessment behavior
Motor coordination
Fine motor control
Social behavior
Standard and advanced behavioral tests, each with validated tracking parameters.
High-speed capture for fast movements — rearing, grooming, escape responses.
Freezing, rearing, grooming, head dips, stretch-attend, and more — no manual coding.
Our Staff Are PhD Scientists
Our PhD scientists will help design your behavioral study.