
Annotation and Manual Scoring

Manual annotation tools integrated with automated tracking

Mark custom behavioral events with configurable keyboard shortcuts directly on the tracking timeline — automated and manual scores merged in a single time-aligned output.

  • Annotation precision: 33 ms (1 frame)
  • Custom event types: unlimited
  • Auto + manual merge: yes
  • Scoring input: configurable keyboard shortcuts
The problem

Separate annotation tools break the analysis workflow

When automated tracking cannot score a behavior, researchers switch to separate annotation tools (BORIS, Solomon Coder, Noldus Observer) that produce disconnected event logs. Merging manual annotations with automated tracking requires custom alignment scripts.

  • Switching between tracking software and annotation tools wastes time and loses context
  • Manual annotations and automated tracking use different time bases — alignment is error-prone
  • Separate tool outputs must be merged post-hoc with custom scripts
The solution

Integrated annotation on the tracking timeline

ConductVision includes a built-in annotation panel with configurable keyboard shortcuts. Press a key to mark event onset, release for offset. Manual events appear on the same timeline as automated tracking data — no alignment needed.

  • Configurable keyboard shortcuts: assign any key to any behavioral event type
  • Press-to-start, release-to-end marking for state events; single press for point events
  • Merged output: automated tracking data and manual annotations in a single time-aligned file
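The press-to-start, release-to-end scheme above can be sketched as follows. This is an illustrative sketch only: `events_from_keys`, the key-log tuples, and the event dictionaries are assumptions for demonstration, not ConductVision's actual API or file format.

```python
def events_from_keys(key_log, point_keys=frozenset()):
    """Turn raw key presses/releases into behavioral events.

    key_log: list of (timestamp_s, key, action) tuples, action in {"down", "up"}.
    Keys in point_keys mark point events (single press); all others mark
    state events (press = onset, release = offset).
    """
    open_events = {}  # key -> onset time of an in-progress state event
    events = []
    for t, key, action in key_log:
        if key in point_keys:
            if action == "down":  # point events fire on press; release is ignored
                events.append({"key": key, "onset": t, "offset": None})
        elif action == "down":
            open_events[key] = t  # state event onset
        elif action == "up" and key in open_events:
            events.append({"key": key, "onset": open_events.pop(key), "offset": t})
    return events

log = [(1.0, "g", "down"), (2.5, "g", "up"),   # 'g' held: one state event
       (3.0, "p", "down"), (3.0, "p", "up")]   # 'p' tapped: one point event
events = events_from_keys(log, point_keys={"p"})
```

Because every event carries the same timestamps as the tracking stream, merging reduces to concatenating rows on a shared time axis.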
Endpoints

Annotation outputs

Merged event timeline

A single file containing automated tracking results and manual annotations on the same time base, so no post-hoc alignment is required.

CSV, JSON
Annotation-only export

Manual annotations exported separately with event type, onset, offset, duration, and frame numbers.

CSV, JSON
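The duration and frame-number fields in the export follow directly from the onset/offset times and the frame rate (30 fps at the 33 ms precision stated above). A minimal sketch, with assumed field names rather than the real export schema:

```python
FPS = 30  # 1 frame ≈ 33 ms, matching the stated annotation precision

def annotation_row(event_type, onset_s, offset_s, fps=FPS):
    """Build one export row, deriving duration and frame numbers from times."""
    return {
        "event": event_type,
        "onset_s": onset_s,
        "offset_s": offset_s,
        "duration_s": offset_s - onset_s,
        "onset_frame": round(onset_s * fps),
        "offset_frame": round(offset_s * fps),
    }
```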
Inter-rater reliability

When multiple users annotate the same recording, compute Cohen's kappa and percent agreement for scoring validation.

CSV
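Cohen's kappa corrects raw percent agreement for the agreement expected by chance. A minimal sketch of both statistics for two raters' per-frame labels (not ConductVision's internal implementation):

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of frames on which the two raters assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    po = percent_agreement(a, b)                          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[lbl] * cb[lbl] for lbl in ca) / (n * n)   # chance agreement
    return (po - pe) / (1 - pe)                           # undefined if pe == 1
```

For example, two raters who agree on 3 of 4 frames but whose label frequencies imply 50% chance agreement get kappa = (0.75 − 0.5) / (1 − 0.5) = 0.5.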
Applications

Annotation use cases

Rare behaviors

Cataloging uncommon events

Some behaviors (mounting, aggressive postures, seizure subtypes) are too rare for automated classifier training. Manual annotation captures them with frame-level precision.

Measures
  • Event count
  • Event duration
  • Temporal distribution
Training data

Generating classifier training sets

Manual annotations serve as labeled training data for custom behavior classifiers — annotate examples, train the model, automate future scoring.

Measures
  • Labeled frame count
  • Class distribution
  • Annotation consistency
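Turning onset/offset annotations into classifier training data amounts to rasterizing each event onto a per-frame label track. A hedged sketch, assuming a 30 fps video and illustrative event dictionaries (not the actual export format):

```python
def frame_labels(events, n_frames, fps=30, background="none"):
    """Rasterize onset/offset annotations into one label per video frame."""
    labels = [background] * n_frames
    for ev in events:
        start = max(0, round(ev["onset"] * fps))
        stop = min(n_frames, round(ev["offset"] * fps))
        for f in range(start, stop):
            labels[f] = ev["type"]
    return labels
```

Class distribution (how many frames carry each label) then falls out of a simple count over the returned list, e.g. with `collections.Counter`.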
Validation

Automated scoring validation

Manually score a subset of recordings to validate automated classifier accuracy — compute agreement between human and machine.

Measures
  • Human-machine agreement
  • False positive rate
  • False negative rate
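The three measures above can be computed frame-wise against the human labels by treating one behavior as the positive class. A minimal sketch under that assumption (function and field names are illustrative):

```python
def validation_metrics(human, machine, target):
    """Agreement plus false positive/negative rates for one target behavior,
    computed frame-wise with the human labels as ground truth."""
    pairs = list(zip(human, machine))
    tp = sum(h == target and m == target for h, m in pairs)
    fp = sum(h != target and m == target for h, m in pairs)
    fn = sum(h == target and m != target for h, m in pairs)
    tn = sum(h != target and m != target for h, m in pairs)
    return {
        "agreement": (tp + tn) / len(pairs),
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }
```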
Complex social

Multi-behavior social scoring

Social behaviors (mounting, chasing, grooming another animal) require contextual judgment. Manual scoring with keyboard shortcuts enables rapid multi-behavior annotation.

Measures
  • Behavior frequency
  • Bout duration
  • Social event sequence
Compared to typical systems

How ConductVision differs

Feature | ConductVision | Typical systems
Integration with tracking | Same timeline, no alignment needed | Separate tool, manual alignment
Keyboard shortcuts | Fully configurable | Fixed or limited shortcuts (BORIS)
Merged output | Auto + manual in single file | Separate outputs, custom merge
Inter-rater reliability | Built-in kappa calculation | External calculation
Frame-level precision | 33 ms (video frame rate) | Varies by tool

Score what automation cannot — integrated in one tool

Set up keyboard shortcuts for your behavioral events and start annotating alongside automated tracking.