# Ultralytics (YOLO) Integration
Integration guide for using PrimateFace with Ultralytics YOLO for real-time detection.
## Overview
Ultralytics YOLO provides fast, real-time detection and pose estimation. PrimateFace models can be exported to YOLO format for edge deployment.
## Quick Start

```python
from ultralytics import YOLO

# Load a PrimateFace-trained YOLO model
model = YOLO("path/to/primateface_yolo.pt")

# Run inference
results = model("primate_image.jpg")
```
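Continuing the snippet above, each call returns a list of `Results` objects. Bounding boxes can be read directly from them; `keypoints` is populated only when the checkpoint is a pose model (it is `None` for detection-only weights):

```python
for result in results:
    boxes = result.boxes          # bounding boxes for detected faces
    keypoints = result.keypoints  # facial landmarks (pose checkpoints only; None otherwise)
    result.show()                 # display the annotated image
```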
## Integration Points
### Model Formats

PrimateFace provides:

- Pre-trained YOLO models for primate faces
- Conversion scripts from COCO format
- Export utilities for deployment
### Detection Pipeline

- Face Detection
- Pose Estimation
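The two stages can be chained by cropping each detected face and passing the crop to the pose model. A minimal sketch, assuming separate detection and pose checkpoints (the file names below are placeholders for your PrimateFace-trained weights):

```python
from ultralytics import YOLO

detector = YOLO("primateface_yolo_det.pt")     # stage 1: face detection (placeholder name)
pose_model = YOLO("primateface_yolo_pose.pt")  # stage 2: facial landmarks (placeholder name)

det_result = detector("primate_image.jpg")[0]
for x1, y1, x2, y2 in det_result.boxes.xyxy.round().int().tolist():
    face_crop = det_result.orig_img[y1:y2, x1:x2]  # crop the detected face (BGR numpy array)
    pose_result = pose_model(face_crop)[0]
    keypoints = pose_result.keypoints              # landmark coordinates for this face
```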
## Training Custom Models

Convert COCO annotations to YOLO format:

```bash
python scripts/coco_to_yolo.py \
    --coco-json annotations.json \
    --output-dir yolo_dataset
```
Train with Ultralytics:
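For example (a minimal sketch: it assumes the conversion step produced a dataset YAML at `yolo_dataset/data.yaml` and starts from a generic YOLOv8 pose checkpoint):

```python
from ultralytics import YOLO

# Start from a pretrained pose checkpoint and fine-tune on the converted dataset
model = YOLO("yolov8n-pose.pt")
model.train(
    data="yolo_dataset/data.yaml",  # assumed path to the dataset YAML from the conversion step
    epochs=100,
    imgsz=640,
)
```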
## Deployment

### Export Options

```python
# Export for different platforms
model.export(format="onnx")    # ONNX for cross-platform
model.export(format="tflite")  # TensorFlow Lite for mobile
model.export(format="coreml")  # CoreML for iOS
```
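Exported files can be loaded back through the same `YOLO` class for a quick sanity check (the file path below is a placeholder):

```python
from ultralytics import YOLO

# Load the exported ONNX file and verify it still produces detections
onnx_model = YOLO("path/to/primateface_yolo.onnx")
results = onnx_model("primate_image.jpg")
```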
### Edge Deployment
- Raspberry Pi: Use TFLite export
- NVIDIA Jetson: Use TensorRT export
- Mobile: Use CoreML (iOS) or TFLite (Android)
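As a sketch of the platform-specific exports above (the GPU index and INT8 quantization flag are assumptions; TensorRT engines should be built on the target device):

```python
from ultralytics import YOLO

model = YOLO("path/to/primateface_yolo.pt")

# NVIDIA Jetson: build a TensorRT engine on the device itself (GPU 0 assumed)
model.export(format="engine", device=0)

# Raspberry Pi / Android: TensorFlow Lite, optionally INT8-quantized for extra speed
model.export(format="tflite", int8=True)
```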
## Performance Optimization
- Input Size: 640x640 for best speed/accuracy trade-off
- Model Size: YOLOv8n for edge, YOLOv8x for accuracy
- Batch Processing: Increase batch size for throughput
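For example (the model names here are the generic Ultralytics checkpoints, not PrimateFace weights; swap in your own):

```python
from ultralytics import YOLO

edge_model = YOLO("yolov8n.pt")      # nano: fastest, suited to edge devices
accurate_model = YOLO("yolov8x.pt")  # extra-large: highest accuracy

# 640x640 is the usual speed/accuracy sweet spot; pass several images to improve throughput
results = edge_model(["image_1.jpg", "image_2.jpg"], imgsz=640)
```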
## Troubleshooting

### Common Issues
- **Low FPS**
    - Use a smaller model (nano/small variants)
    - Reduce the input resolution
- **Export errors**
    - Ensure a compatible PyTorch version
    - Check the export requirements for the target platform
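Two quick checks that often help (a sketch; the model path is a placeholder):

```python
from ultralytics import YOLO, checks

# Print Python/PyTorch/CUDA/Ultralytics version info to debug export failures
checks()

# If FPS is low, a smaller input size trades accuracy for speed
model = YOLO("path/to/primateface_yolo.pt")
results = model("primate_image.jpg", imgsz=320)
```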