What we stand behind.

We publish our labor practices, compliance posture, and ethical commitments because we believe the AI industry's data standards need raising — not hiding. These are the promises we hold ourselves to, audited annually.

OUR PLEDGE

Six commitments, publicly held.

These aren't just words on a website. They're structural commitments we've built our operation around — and the reason we publish our annual audit.

01

Living wage, every region

Regional pay floors set above local living-wage benchmarks, verified by independent NGOs. No commission markups on contributor pay. Our margin is in the platform and management — not wage arbitrage.

02

No rush labor

Projects planned weeks in advance. We don't run all-night surge shifts to meet client deadlines. If a project can't land on a sustainable schedule, we won't take it.

03

Trauma-informed protocols

For work involving graphic content: mandatory rest breaks, capped daily exposure, mental-health resource access, voluntary opt-out without penalty, and specialized team leads trained in trauma support.

04

Career pathways

Every contributor has a documented progression plan. 18% of our full-time staff started as annotators. QA leads, project managers, and regional directors are promoted from within wherever possible.

05

Public annual audits

Third-party labor audits published each year — including findings where we fall short. Transparency is a process, not a PR exercise.

06

Ethics refusal clause

We decline projects involving surveillance of non-consenting subjects, biometric profiling without consent, or content that could enable targeted harm. Our refusal clause is in every MSA.

WHAT WE DON'T DO

Our refusal clause.

Every Deaimer master services agreement includes a refusal clause. We decline projects that, in our good-faith judgment, would enable targeted harm, violate consent, or undermine public trust in AI systems.

Categories we don't support: non-consensual biometric profiling, surveillance systems targeting protected groups, deepfake generation for deception, and content production designed to deceive voters or regulators.

Full refusal policy is available on request under NDA.

LET'S BUILD

Let's make your AI better together.

Tell us what you're training, aligning, or evaluating. We'll come back with a delivery plan, staffing model, and timeline within one working week.