Why Annotation Pricing Is So Hard to Benchmark
Before the numbers, the context: data annotation pricing varies across a wider range than almost any other technical service. A task priced at $0.02 per image from one vendor costs $0.25 per image from another. Both numbers can be correct — and both datasets can be worth exactly what you paid for them.
The variation is driven by five factors:
1. Task Complexity
A single bounding box label on an image of a car takes seconds. A full instance segmentation of a complex surgical scene with multiple instruments takes minutes. Complexity drives labour cost, and labour cost drives price.
2. QA Depth
Algorithmic QA (automated consistency checks) is cheap. Multi-stage human QA with peer review and senior sign-off costs more. The delta shows up in dataset quality. Investing in robust data QA and validation is what separates production-ready datasets from unusable ones.
3. Annotator Domain Expertise
General-purpose annotators handle commodity tasks. Medical imaging annotation, legal NLP, or RLHF preference ranking requires annotators with domain knowledge. That knowledge has a cost.
4. Volume
Annotation is a volume-discount industry. Per-unit pricing at 10,000 images is materially higher than at 500,000 images. Pilots are always more expensive per unit than production runs.
5. Geographic Model
Offshore crowdsourced annotation (sub-minimum-wage piece-rate workers) operates at the low end of the market. Australian or comparable-jurisdiction managed annotation, with proper employment and oversight, operates higher. The cost difference is real. So is the quality difference.
Data Annotation Pricing by Task Type
These ranges reflect enterprise-grade, human-QA'd annotation — not crowdsourced commodity pricing. All figures are approximate and project-specific; actual costs depend on dataset complexity, volume, and quality requirements.
Image Annotation
Bounding Box Annotation
$0.05 – $0.25 per box
The most commoditised annotation task in the market. Simple single-class bounding boxes on clear images sit at the lower end. Multi-class scenes with small or partially occluded objects require more annotator time and quality review — prices move accordingly.
Polygon Annotation
$0.10 – $0.50 per polygon
Polygon annotation is slower than bounding boxes because annotators trace object shapes rather than draw rectangles. Complex shapes (agricultural imagery, construction site analysis, irregular biological structures) are priced higher than simple geometries.
Semantic Segmentation
$0.50 – $3.00 per image
Full-image pixel labelling is one of the more labour-intensive image annotation types. Pricing varies significantly based on scene complexity (a single-object image vs. a dense urban street scene), number of classes, and boundary quality requirements.
Instance Segmentation
$1.00 – $5.00 per image
Instance segmentation is priced higher than semantic segmentation because it requires per-instance boundary annotation, occlusion handling, and unique instance ID assignment. Complex scenes with many small instances (histopathology, crowd imagery) command premium pricing.
Keypoint & Landmark Annotation
$0.10 – $0.60 per image (assuming 10–20 keypoints per instance)
Keypoint pricing depends on the number of keypoints per instance and the precision required. Pose estimation skeletons for sports analytics or rehabilitation AI sit at the higher end; simpler facial landmark tasks at the lower.
LiDAR / 3D Point Cloud Annotation
$2.00 – $15.00 per frame
3D annotation is among the most expensive in the market. Annotators work in three dimensions, placing cuboids with accurate size, position, and orientation for every object in the frame. AV datasets with complex urban scenes or high object density sit toward the upper end.
OCR & Document Annotation
$0.05 – $0.30 per page
OCR annotation pricing is typically per page or per document, based on the number of fields being extracted and the complexity of the document structure. Handwritten documents, low-quality scans, or complex multi-table layouts increase cost.
NLP & Text Annotation
Named Entity Recognition (NER)
$0.02 – $0.15 per sentence
NER pricing depends on entity type complexity, schema size, and domain. General-purpose NER (person, organisation, location) sits toward the lower end. Domain-specific NER (clinical entities, legal references, financial instruments) requires trained annotators — pricing reflects that.
Sentiment & Text Classification
$0.01 – $0.08 per text unit
The simplest NLP task type in pricing terms. Single-label classification (positive/negative/neutral) on clear social media text is low cost. Multi-label classification, aspect-based sentiment, or classification of domain-specific professional text increases price.
Intent & Entity Labeling (Conversational AI)
$0.05 – $0.20 per utterance
Conversational annotation requires annotators to understand both intent taxonomy and entity slot definitions. Complex multi-intent messages or domain-specific utterances (healthcare, legal, financial) command higher rates. Multilingual annotation for conversational AI adds further complexity.
RLHF Preference Annotation
$0.50 – $5.00 per comparison pair
RLHF pricing reflects annotator quality requirements. Meaningful preference judgments require domain understanding — an annotator without relevant expertise can't reliably distinguish a good response from a plausible-but-incorrect one. Domain-matched RLHF annotation is priced accordingly.
SFT Response Writing
$2.00 – $20.00 per response
Supervised fine-tuning data requires annotators to write high-quality, accurate, domain-appropriate responses from scratch. This is closer to expert content creation than traditional annotation — pricing reflects the skill requirement.
Speech Transcription
$0.50 – $3.00 per audio minute
Transcription pricing depends on audio quality, speaker count, accent diversity, and domain vocabulary. Medical transcription and legal transcription carry higher rates due to terminology requirements and accuracy standards.
Medical Annotation
Medical annotation pricing reflects domain expertise requirements and the QA standards appropriate for clinical AI training data.
Radiology Annotation (CT, MRI, X-ray)
$5.00 – $30.00 per scan
Medical imaging annotation requires annotators trained in anatomy and clinical imaging interpretation. A CT scan annotation with organ segmentation and lesion boundary delineation across multiple slices represents significant skilled labour. Volume and complexity drive the range.
Histopathology Annotation
$10.00 – $50.00 per slide
Whole-slide imaging (WSI) annotation is among the most technically demanding annotation tasks. Individual cell and nucleus segmentation at scale requires domain-trained annotators and rigorous QA. Pricing reflects this.
Surgical Video Annotation
$5.00 – $25.00 per minute
Surgical video annotation for instrument tracking, phase recognition, and skill assessment requires frame-level annotation with temporal consistency. Annotation rates per minute of video reflect the frame density and annotation type.
Clinical Document Annotation
$2.00 – $10.00 per document
Clinical NLP annotation requires healthcare domain expertise. Documents with dense clinical terminology, complex temporal structures, or multiple annotation layers (NER + negation + temporal relations) sit at the upper end.
How to Build an Annotation Budget
A practical approach to annotation budgeting involves three components:
Base Annotation Cost
Estimated per-unit rate × projected volume. Always build in a 15–20% volume buffer; datasets grow.
QA & Remediation Reserve
20–30% of base cost for QA overhead and potential remediation. If your annotation vendor runs robust QA, you won't spend all of this. If they don't, you'll spend more.
Specification & Onboarding
For complex projects, allocate budget for annotation specification development, a pilot batch, and annotator calibration. This is typically 5–15% of total project cost and the highest-leverage quality investment you can make.
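The three components above can be combined into a quick estimator. This is an illustrative sketch, not AI Taggers' actual pricing model: the percentage defaults are midpoints of the ranges given above, and onboarding is approximated as a share of base plus QA cost rather than of the final total.

```python
def annotation_budget(per_unit_rate, volume,
                      volume_buffer=0.20,      # 15-20% volume growth buffer
                      qa_reserve=0.25,         # 20-30% QA & remediation reserve
                      onboarding_share=0.10):  # 5-15% spec & calibration
    """Rough annotation budget estimate using illustrative percentages."""
    base = per_unit_rate * volume * (1 + volume_buffer)
    qa = base * qa_reserve
    onboarding = (base + qa) * onboarding_share
    return {
        "base": round(base, 2),
        "qa_reserve": round(qa, 2),
        "onboarding": round(onboarding, 2),
        "total": round(base + qa + onboarding, 2),
    }

# Example: 50,000 bounding boxes at a hypothetical $0.15 per box
print(annotation_budget(0.15, 50_000))
# → {'base': 9000.0, 'qa_reserve': 2250.0, 'onboarding': 1125.0, 'total': 12375.0}
```

Under these assumed inputs the total lands inside the $8,000–$20,000 benchmark range for a 50,000-image bounding box project.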
Budget Benchmarks
| Project Type | Volume | Typical Budget Range |
|---|---|---|
| Bounding box (with human QA) | 50,000 images | $8,000 – $20,000 |
| Instance segmentation | 100,000 images | $80,000 – $200,000+ |
| Medical imaging | Varies | Scoped per-project |
What to Ask Annotation Vendors About Pricing
When evaluating annotation vendors, these questions expose the true cost structure:
“What does your QA process look like, specifically?”
Ask for the human review rate — what percentage of annotations are checked by a human reviewer, not an automated consistency tool. A vendor running 100% human QA costs more and produces more reliable data.
“What's your remediation policy if error rates exceed agreed thresholds?”
Quality guarantees with defined remediation obligations are standard among enterprise annotation vendors. Absence of this policy is a risk signal.
“Where does our data go?”
Understand the complete data handling chain. Subcontractor layers add cost and compliance risk that don't appear in per-label pricing.
“What's included in the quoted rate?”
Format conversion, delivery in multiple formats, annotation specification support, IAA reporting, and volume management should all be accounted for. Hidden setup or delivery fees are common.
“Can we run a paid pilot before committing to full production?”
Any serious annotation vendor will offer a pilot. The pilot batch (typically 200–500 items) is your ground truth for quality and process before committing volume.
AI Taggers Pricing Philosophy
AI Taggers doesn't publish fixed rate cards because annotation pricing should reflect actual project requirements — not a generic per-image rate that ignores task complexity, QA depth, and domain expertise.
What we do offer is transparent scoping. We review your dataset, annotation requirements, quality standards, and timeline — and provide a detailed project quote that shows where cost is being allocated and why.
Our pricing sits in the mid-to-premium range for the Australian market. It reflects Australian-led QA oversight, domain-trained annotators, multi-stage human review, and data governance standards that enterprise AI teams require. It does not reflect crowdsourced annotation padded with minimal QA and offshore workforce arbitrage.
The organisations that choose AI Taggers typically arrive after a low-cost vendor experience — having spent more on remediation and retraining than they saved on initial annotation cost.
Get a Transparent Quote
A brief scoping conversation — dataset type, volume estimate, annotation type, and quality requirements — is enough to provide a ballpark estimate.
Free Pilot
Let us annotate 100–500 samples free so you can validate our quality yourself.
Results in 5–7 days | Zero risk
Request Free Pilot
Custom Quote
Share your requirements and get a detailed proposal with a transparent pricing breakdown.
Quote within 24 hours | Free consultation
Get Custom Quote
Expert Consultation
Talk through your annotation budget and get honest advice on pricing expectations.
Schedule today | Free 30-minute call
Book Consultation
Frequently Asked Questions
How much does data annotation cost per image?
Bounding box annotation ranges from $0.05–$0.25 per box. Semantic segmentation ranges from $0.50–$3.00 per image. Instance segmentation ranges from $1.00–$5.00 per image. These are enterprise-grade, human-QA'd figures; crowdsourced rates are lower and reflect lower quality.
What is the most expensive type of data annotation?
Medical imaging annotation (histopathology, CT/MRI organ segmentation), LiDAR 3D point cloud annotation, and RLHF/SFT data production are typically the highest-cost annotation categories due to domain expertise requirements and annotation complexity.
Why do annotation prices vary so much between vendors?
Price variation reflects differences in annotator expertise, QA depth, workforce model (crowdsourced vs. managed), geographic location, and included services. Per-label rates don't capture these differences — total cost of producing training-ready data is the correct comparison metric.
Is cheap data annotation worth it?
Rarely, for production AI projects. The total cost of cheap annotation — including remediation, retraining, and project delays — typically exceeds the savings on initial annotation cost. For throwaway experiments or preliminary pilots, low-cost annotation may be appropriate.
How do I budget for a data annotation project?
Estimate base annotation cost (per-unit rate × volume + 20% buffer), add 20–30% for QA and remediation reserve, and allocate 5–15% for specification and onboarding. Get a detailed vendor quote before finalising budget.
Does AI Taggers offer pilot projects before full production?
Yes. AI Taggers runs paid pilot batches (typically 200–500 items) at the start of every project. The pilot validates annotation quality, confirms specification clarity, and gives your team ground-truth data before committing production volume.
How does annotation quality affect total project cost?
Higher-quality annotation reduces downstream costs: less remediation, fewer retraining cycles, faster model deployment. The annotation budget is the smallest line item in most AI projects — optimising it at the expense of model performance is a poor trade.
What's the cost difference between Australian and offshore annotation?
Australian-managed annotation with local QA oversight typically runs 2–4× the per-unit rate of offshore crowdsourced annotation. For projects with data sovereignty requirements, compliance obligations, or medical/legal/financial domain requirements, the governance and quality benefits are not optional — and the all-in cost comparison narrows significantly when remediation is factored in.
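As a sketch of why the all-in comparison narrows, consider total cost including remediation of erroneous labels. Every number below is hypothetical, chosen only to illustrate the mechanism; real error rates and rework costs vary by project.

```python
def all_in_cost(per_unit, volume, error_rate, rework_per_unit):
    """Initial annotation cost plus the cost of reworking erroneous labels.
    All inputs here are hypothetical, for illustration only."""
    return round(per_unit * volume + error_rate * volume * rework_per_unit, 2)

# Hypothetical 100,000-image project:
# offshore crowdsourced at $0.05/label, 15% of labels needing $0.50 rework;
# managed annotation at $0.15/label, 2% needing rework.
offshore = all_in_cost(0.05, 100_000, 0.15, 0.50)  # 5,000 + 7,500 = 12,500
managed = all_in_cost(0.15, 100_000, 0.02, 0.50)   # 15,000 + 1,000 = 16,000
print(offshore, managed)
```

Under these assumed figures, a nominal 3× per-label gap narrows to roughly 1.3× all-in.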
Can AI Taggers provide a quote without a full RFP?
Yes. A brief scoping conversation — dataset type, volume estimate, annotation type, and quality requirements — is enough to provide a ballpark estimate. Contact us to discuss your project.
Does volume affect annotation pricing?
Significantly. Per-unit rates decrease with volume. Pilot batches of a few hundred items are priced higher per unit than production runs of 50,000+ items. Volume commitments typically unlock better pricing tiers.
Questions about annotation pricing for your project?
We're happy to discuss your project and provide honest pricing guidance — even if it means recommending a different approach or vendor. Our goal is helping you succeed, not just winning business.
Guide last updated: March 2025. Pricing ranges based on enterprise-grade annotation projects across industries.
Neel Bennett
AI Annotation Specialist at AI Taggers
Neel has over 8 years of experience in AI training data and machine learning operations. He specialises in helping enterprises build high-quality datasets for computer vision and NLP applications across healthcare, automotive, and retail industries.
Connect on LinkedIn