Which Workflow Should I Use?¶
Decision guide for choosing the right PrimateFace workflow.
Quick Decision Tree¶
What do you need to do?

- Detect faces → Demos Workflow (demos/primateface_demo.py)
- Analyze features → DINOv2 Workflow (dinov2/dinov2_cli.py)
- Annotate data → GUI Workflow (gui/pseudolabel_gui_fm.py)
- Convert formats → Landmark Converter (landmark-converter/train.py)
- Train models → Which framework?
    - Best accuracy → MMPose/MMDetection
    - Behavioral analysis → DeepLabCut
    - Multi-animal → SLEAP
    - Real-time → YOLO/Ultralytics
Detailed Decision Guide¶
I have images/videos and want to...¶
Detect faces and landmarks → Demos Workflow¶
Best for:

- Quick inference on new data
- Batch processing
- Integration into pipelines
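For plain batch inference, a single script call is usually enough. The snippet below is only a sketch: the --input and --output flags are assumptions, not the script's documented interface, so check `python demos/primateface_demo.py --help` for the real options.

```python
# Hedged sketch: run the demo script over a folder of images via subprocess.
# The --input/--output flags are hypothetical, not the documented CLI.
import subprocess

subprocess.run(
    [
        "python", "demos/primateface_demo.py",
        "--input", "data/raw_images/",       # assumed flag: folder of images or a video
        "--output", "results/predictions/",  # assumed flag: where detections/landmarks go
    ],
    check=True,
)
```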
Select best images for annotation → DINOv2 Workflow¶

Best for:

- Large datasets (1000+ images)
- Limited annotation budget
- Ensuring diversity
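The project's dinov2/dinov2_cli.py is the intended tool here. As a rough illustration of the underlying idea (not that CLI's interface), the sketch below embeds images with a public DINOv2 backbone from torch.hub, clusters the embeddings, and keeps one image per cluster so a fixed annotation budget still covers the dataset's visual diversity. The paths and the budget of 100 are example values.

```python
# Generic diverse-subset selection sketch using a public DINOv2 backbone.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from sklearn.cluster import KMeans
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # 224 is a multiple of the ViT-S/14 patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

paths = sorted(Path("data/raw_images").glob("*.jpg"))  # example folder
with torch.no_grad():
    feats = torch.stack([
        model(preprocess(Image.open(p).convert("RGB")).unsqueeze(0).to(device)).squeeze(0).cpu()
        for p in paths
    ]).numpy()

# One cluster per image you can afford to annotate; keep the image nearest each centroid.
budget = min(100, len(paths))
kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(feats)
selected = [
    paths[int(np.argmin(np.linalg.norm(feats - c, axis=1)))]
    for c in kmeans.cluster_centers_
]
print("\n".join(str(p) for p in selected))
```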
Create training annotations → GUI Workflow¶

Best for:

- Creating ground truth
- Correcting model predictions
- Interactive annotation

I have annotations and want to...¶
Convert between formats → Landmark Converter¶
Best for:

- Using human face datasets
- Cross-dataset compatibility
- Framework interoperability
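Invocation is again a single script call. The flags below are placeholders, not the converter's documented options, so treat this purely as a sketch and check `python landmark-converter/train.py --help`.

```python
# Hypothetical sketch: the --source/--target flags are placeholders, not the real CLI.
import subprocess

subprocess.run(
    [
        "python", "landmark-converter/train.py",
        "--source", "annotations/human_68pt.json",    # assumed: source-format annotations
        "--target", "annotations/primate_48pt.json",  # assumed: target-format annotations
    ],
    check=True,
)
```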
Train detection/pose models → Choose Framework:¶

Production deployment → MMPose/MMDetection¶
- Highest accuracy
- Best documentation
- PrimateFace primary framework
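MMPose training is launched from a config file with its standard tools/train.py entry point. The config path below is a placeholder rather than an actual PrimateFace config name, so substitute whichever config you generate or download.

```python
# Sketch of launching MMPose training; the config path is a placeholder.
import subprocess

subprocess.run(
    ["python", "tools/train.py", "configs/my_primateface_pose_config.py"],  # placeholder config
    check=True,
    cwd="mmpose",  # assumes a local clone of the MMPose repository
)
```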
Behavioral studies → DeepLabCut¶
- Markerless tracking
- Temporal analysis
- Large community
Multi-animal scenarios → SLEAP¶
- Identity tracking
- Social interactions
- Optimized for multiple animals
Real-time/edge deployment → Ultralytics¶
- Fastest inference
- Mobile deployment
- Minimal dependencies
I want to evaluate models...¶
Compare performance → Evaluation Utilities¶
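As a reference point, independent of the project's own evaluation utilities, a common landmark metric is the normalized mean error (NME): the mean Euclidean distance between predicted and ground-truth keypoints divided by a normalizing length such as the face bounding-box diagonal. A minimal sketch:

```python
# Minimal NME sketch; the example coordinates are illustrative only.
import numpy as np

def nme(pred: np.ndarray, gt: np.ndarray, norm: float) -> float:
    """pred, gt: (num_keypoints, 2) arrays of (x, y); norm: normalizing length in pixels."""
    per_point = np.linalg.norm(pred - gt, axis=1)
    return float(per_point.mean() / norm)

# Normalize by the face bounding-box diagonal computed from ground truth.
gt = np.array([[120.0, 88.0], [161.0, 90.0], [140.0, 122.0]])
pred = gt + np.random.default_rng(0).normal(scale=2.0, size=gt.shape)
bbox_diag = np.linalg.norm(gt.max(axis=0) - gt.min(axis=0))
print(f"NME: {nme(pred, gt, bbox_diag):.4f}")
```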
Visualize results → Visualization Utilities¶
Common Workflows¶
Workflow 1: From Raw Images to Trained Model¶
1. Collect images → Organize in folders
2. Select subset → DINOv2 selection (optional)
3. Annotate → GUI pseudo-labeling
4. Train → Choose framework
5. Evaluate → Compare metrics
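Stitched together, the workflow can be driven by a small glue script. Treat the sketch below as an outline only: every flag is an assumption about the individual tools' CLIs, and the annotation step launches an interactive GUI.

```python
# Hypothetical glue script for this workflow; all flags are assumptions to adapt.
import subprocess

# 2. Select a diverse subset worth annotating (assumed flags).
subprocess.run(["python", "dinov2/dinov2_cli.py", "--images", "data/raw/", "--out", "data/selected/"], check=True)

# 3. Annotate or correct pseudo-labels interactively (manual step).
subprocess.run(["python", "gui/pseudolabel_gui_fm.py", "--images", "data/selected/"], check=True)

# 4. Train, here with MMPose as an example (placeholder config path).
subprocess.run(["python", "tools/train.py", "configs/my_primateface_pose_config.py"], check=True, cwd="mmpose")
```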
Workflow 2: Using Pretrained Models¶
1. Download models → demos/download_models.py
2. Run inference → Demos workflow
3. Post-process → Smoothing, filtering (see the sketch below)
4. Analyze → Extract metrics
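For the post-processing step, a standard choice is to smooth per-keypoint trajectories from video inference to suppress frame-to-frame jitter. The sketch below uses a Savitzky-Golay filter and assumes predictions shaped (num_frames, num_keypoints, 2), which may differ from the demos' actual output format.

```python
# Temporal smoothing of landmark trajectories with a Savitzky-Golay filter.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(300, 48, 2)), axis=0)  # stand-in for real video predictions

smoothed = savgol_filter(traj, window_length=11, polyorder=2, axis=0)  # smooth along time
print(traj.shape, smoothed.shape)
```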
Workflow 3: Cross-Dataset Training¶
1. Convert annotations → Landmark converter
2. Merge datasets → COCO utilities (sketch below)
3. Train models → Framework of choice
4. Cross-validate → Evaluation tools
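The merge itself is mechanical once both datasets share a keypoint schema: re-index image and annotation ids so they remain unique, then concatenate. The sketch below is a generic COCO-format merge, not the project's own utility, and assumes both files already use the same categories.

```python
# Generic merge of two COCO-format annotation files with id re-indexing.
import json

def merge_coco(path_a: str, path_b: str, out_path: str) -> None:
    with open(path_a) as fa, open(path_b) as fb:
        a, b = json.load(fa), json.load(fb)

    # Offsets chosen so ids from file B never collide with ids from file A.
    img_offset = max((img["id"] for img in a["images"]), default=0) + 1
    ann_offset = max((ann["id"] for ann in a["annotations"]), default=0) + 1

    for img in b["images"]:
        img["id"] += img_offset
    for ann in b["annotations"]:
        ann["id"] += ann_offset
        ann["image_id"] += img_offset

    merged = {
        "images": a["images"] + b["images"],
        "annotations": a["annotations"] + b["annotations"],
        "categories": a["categories"],  # assumes both files share one category/keypoint schema
    }
    with open(out_path, "w") as fo:
        json.dump(merged, fo)

merge_coco("dataset_a_coco.json", "dataset_b_coco.json", "merged_coco.json")
```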
Framework Selection Matrix¶
| Framework | Speed | Accuracy | Multi-Animal | Edge Deploy | Learning Curve |
|---|---|---|---|---|---|
| MMPose | ★★★☆☆ | ★★★★★ | ★★☆☆☆ | ★★★☆☆ | ★★★☆☆ |
| DeepLabCut | ★★★☆☆ | ★★★★☆ | ★★★☆☆ | ★★☆☆☆ | ★★☆☆☆ |
| SLEAP | ★★★☆☆ | ★★★★☆ | ★★★★★ | ★★☆☆☆ | ★★☆☆☆ |
| YOLO | ★★★★★ | ★★★☆☆ | ★★☆☆☆ | ★★★★★ | ★☆☆☆☆ |
Species-Specific Recommendations¶
Great Apes (Gorillas, Chimpanzees, Orangutans)¶
- Use standard 68-point landmarks
- MMPose for best accuracy
- Consider fur occlusion
Old World Monkeys (Macaques, Baboons)¶
- 48-point system works well
- Any framework suitable
- Well suited to multi-animal tracking with SLEAP
New World Monkeys (Capuchins, Howlers)¶
- May need custom landmarks
- DINOv2 for feature analysis
- Consider smaller face sizes
Prosimians (Lemurs, Lorises)¶
- Specialized models recommended
- Account for their large eyes
- Optimize for low-light conditions
Still Unsure?¶
- Start with Demos - Test pretrained models on your data
- Try the GUI - Explore annotation tools
- Check Tutorials - See similar use cases
- Ask Community - GitHub discussions