Auto-Annotation from Expert-Crafted Guidelines
The 1st workshop on AutoExpert
This website is under construction.
Location: tbd, Denver CO
Time: tbd
in conjunction with CVPR 2026, Denver CO, USA
Overview
Machine-learned visual systems are transforming fields such as autonomous driving, biodiversity assessment, and ecological monitoring, but they demand vast amounts of high-quality annotated data. Since asking domain experts to manually annotate large-scale data is unrealistic, the prevailing paradigm for scaling up annotation is to have experts craft guidelines, with visual examples and descriptions, for non-expert annotators to apply. This paradigm is commonly adopted by companies that provide data labeling services. Lacking domain knowledge, however, ordinary annotators often produce labels that are erroneous, subjective, biased, and inconsistent; moreover, the process is labor-intensive, tedious, and costly. This workshop aims to pioneer auto-annotation: developing AI agents that can interpret expert-crafted annotation guidelines and generate labels automatically. In essence, we seek to replace ordinary human annotators with AI.
Topics
This workshop aims to bring together computer vision researchers and practitioners from academia and industry who are interested in auto-annotation from expert-crafted guidelines (AutoExpert). Relevant research topics include:
- data: web-scale data, domain-specific data, multimodal data, synthetic data, etc.
- concepts: taxonomy, ontology, vocabulary, expert/human-in-the-loop, etc.
- models: foundation models, expert models, Large Multimodal Models (LMMs), Large Language Models (LLMs), Vision-Language Models (VLMs), Large Vision Models (LVMs), etc.
- learning: foundation model adaptation, few-shot learning, semi-supervised learning, domain adaptation, active learning, etc.
- social impact: interdisciplinary research, real-world applications, responsible AI, etc.
- misc: dataset curation, annotation guidelines, machine-expert interaction, etc.