The Billable Hours You Have Been Giving Away
⚠️ This article explores how everyday digital tasks became unpaid work for tech companies. Some readers may find the numbers uncomfortable. Our aim is not hype but clarity, with practical steps and clear ways to regain control.
Why This Matters for Creative SMEs
AI systems have been built on years of hidden contributions from ordinary users. For creative agencies and studios working on thin margins, the hours given away for free add up to real economic pressure. Understanding how this happened — and what you can do about it — is the first step to protecting your billable hours.
Here are the key points at a glance:
UK creative industries contributed £124bn GVA in 2023 and are still growing.
SMEs face an estimated 20 to 30 percent margin erosion from unpaid digital work and shifting freelance demand.
Confirmed systems include Google reCAPTCHA, Meta photo tagging, Tesla Autopilot, Apple Siri, and OpenAI’s RLHF pipeline.
Regulation is shifting in 2025: the EU AI Act now requires disclosure of training data, and the UK has introduced the Data (Use and Access) Act.
Participate in our Trust Pulse to help benchmark where time is gained, lost, and rebuilt with AI.
The Invisible Invoice
You log in for another workday. Before you can access a client file, you are asked to click all the squares with traffic lights. It takes 15 seconds. No big deal, right? But multiplied across millions of users, those micro tasks have added up to billions of unpaid billable hours, training trillion dollar AI systems owned by Big Tech.
For the UK creative industries, where margins often hover around 10 to 15 percent, this is more than uncomfortable. It reframes AI adoption as an economic question of ownership, value, and fairness.
These pathways are well documented and widely reported:
Google reCAPTCHA: human image selections labelled datasets for Google Vision and Waymo.
Meta photo tagging: Facebook tags trained DeepFace.
Tesla Autopilot: fleet corrections feed its neural nets.
Apple Siri: user corrections refined models; opt outs were only added after privacy reviews.
OpenAI: paid raters plus unpaid user prompts fuelled model refinement.
Which raises the bigger question: is it time for quid pro quo? Should Big Tech start contributing back, so our industry is not further decimated by the very work we have already given away?
Hidden Work, Hidden Value
Let’s take a deeper look at this in terms of the commercial impact on SMEs.
Imagine a ten person agency spending an average of five minutes per person per day on these training tasks. It doesn’t seem much, but it adds up to more than 200 hours annually, about £23,000 in lost billables at UK rates.
Again, it doesn’t seem much; Big Tech probably spend that on one team’s away day. But with profit margins in most agencies sitting around 10 to 15 percent, those hidden tasks can quietly wipe out as much as a fifth of annual profit.
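To make that arithmetic concrete, here is a minimal sketch of the calculation. The £110 blended hourly rate, 250 working days, £1m turnover, and 12 percent margin are illustrative assumptions, not figures from this article’s sources; swap in your own agency’s numbers.

```python
# Back-of-envelope cost of the hidden micro tasks described above.
# All constants are illustrative assumptions; substitute your own figures.

TEAM_SIZE = 10              # people in the agency
MINUTES_PER_PERSON_DAY = 5  # hidden micro tasks per person per day
WORKING_DAYS = 250          # working days per year (assumption)
HOURLY_RATE = 110           # blended billable rate in GBP (assumption)
PROFIT_MARGIN = 0.12        # within the 10 to 15 percent range cited above
ANNUAL_REVENUE = 1_000_000  # illustrative turnover in GBP (assumption)

hidden_hours = TEAM_SIZE * MINUTES_PER_PERSON_DAY * WORKING_DAYS / 60
lost_billables = hidden_hours * HOURLY_RATE
annual_profit = ANNUAL_REVENUE * PROFIT_MARGIN

print(f"Hidden hours per year: {hidden_hours:.0f}")                     # ~208
print(f"Lost billables: £{lost_billables:,.0f}")                        # ~£22,900
print(f"Share of annual profit: {lost_billables / annual_profit:.0%}")  # ~19%
```

On those assumptions, roughly 208 hidden hours become about £23,000 of lost billables, close to a fifth of a £120,000 annual profit, which is where the “fifth of annual profit” figure above comes from.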
Here’s how the hidden inputs translate into commercial advantage for Big Tech:
| System | Hidden Input | Commercial Impact |
|---|---|---|
| Google reCAPTCHA | Image labelling | Lower Google Vision training cost |
| Meta photo tags | Facial data | Face recognition dominance |
| Tesla Autopilot | Driver corrections | Proprietary road intelligence |
| OpenAI prompts | User inputs | Faster model refinement |
| Apple Siri | Voice corrections | Stronger speech ecosystem |
And you’re probably now paying licences for some of these tools, right? Let that sink in: you are now paying for something that you and your team unknowingly gave away free hours to help build.
Public Discourse and Regulation
The debate about unpaid digital work is not just technical; it is ethical. For creative SMEs the key question is how to define our role. Were we victims, tricked into unpaid labour? Participants in a value exchange, trading time for free tools? Or under-compensated contributors, whose efforts built systems worth far more than what we received?
Regulators are beginning to weigh in. The EU AI Act, effective 2 August 2025, introduces disclosure obligations for general purpose AI models. In the UK, the Data (Use and Access) Act received Royal Assent in June 2025 and requires reporting and opt outs, though there is still no dedicated AI Bill. For the creative industries, this matters directly. Clients and stakeholders will want reassurance that their partners understand and comply with these new rules.
And what about the counter argument: “We got free tools”? It felt fair when the tools were small. But in the generative AI era, that balance has broken. Free clicks now drive billion dollar models, while SMEs feel the squeeze on their margins and their teams fear the impact on their livelihoods.
This debate is not limited to SMEs. Globally, thousands of annotators in countries like Kenya and the Philippines are paid low wages to review traumatic content for AI systems. Meanwhile, AI “slop fixing” gigs, where workers edit or QA poor machine outputs, now make up a noticeable share of gig work.
Legal battles are also reshaping the landscape. Apple faces lawsuits over training data sourced from pirated books, while Anthropic has agreed to a $1.5 billion settlement over scraped content. These cases underline how unsettled the rules still are.
Strategic Relevance for SMEs
For our industry, there are four clear takeaways:
Pitching: Client facing transparency builds trust. Show how you handle AI responsibly and don’t wait to be asked.
Workflow: Audit where time is leaking and tighten up your processes; a sketch of a simple time-leak audit follows this list.
Margins: Scope and price human oversight as a billable service line.
Empowerment: AI is also creating new paid roles, from QA to editing outputs, that can be positioned as valuable services in their own right.
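For the workflow audit above, here is a minimal sketch of how the leak can be measured. It assumes a hypothetical CSV log (time_leaks.csv) with person, task_type, and minutes columns covering one representative day; the rate and working-day constants are the same illustrative assumptions as before.

```python
import csv
from collections import defaultdict

# Minimal time-leak audit: tally self-reported micro task minutes from a
# CSV log and scale to an annual billable cost per task type.
# The file name, column names, and constants are hypothetical assumptions.

HOURLY_RATE = 110   # blended billable rate in GBP (assumption)
WORKING_DAYS = 250  # assumes the log covers one representative day

def annual_leak_by_task(path: str) -> dict[str, float]:
    """Estimated annual lost billables (GBP) per task type."""
    minutes = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # rows: person,task_type,minutes
            minutes[row["task_type"]] += float(row["minutes"])
    return {task: mins / 60 * HOURLY_RATE * WORKING_DAYS
            for task, mins in minutes.items()}

if __name__ == "__main__":
    leaks = annual_leak_by_task("time_leaks.csv")
    for task, cost in sorted(leaks.items(), key=lambda kv: -kv[1]):
        print(f"{task}: £{cost:,.0f} per year")
```

Even a week of rough self-reporting in this format is usually enough to show which micro tasks are worth eliminating and which are worth scoping as a billable service line.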
The aim here is not to attack Big Tech, but to acknowledge that many of us contributed more than we realised. What concerns us most is how little this is being discussed among creative SMEs.
It’s why we designed Trust Pulse: our chance to change that and build a benchmark that puts our reality on record.
Further Reading
These sources were used to compile the information in this article. They provide further context and background if you would like to read more about the systems, impacts, and regulations discussed here.
DCMS. (2025). Creative Industries Economic Estimates: June 2025 update. https://www.gov.uk/government/statistics/dcms-economic-estimates-monthly-gva-to-june-2025
MIT Technology Review. (2025). A major AI training dataset contains millions of examples of personal data. https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/
Wired. (2021). Facebook drops facial recognition to tag people in photos. https://www.wired.com/story/facebook-drops-facial-recognition-tag-people-photos/
The Verge. (2019). Why CAPTCHAs have gotten so difficult. https://www.theverge.com/2019/2/1/18205610/google-captcha-ai-robot-human-difficult-artificial-intelligence
Tesla. AI & Robotics. https://www.tesla.com/AI
Apple. Privacy at Apple. https://www.apple.com/privacy/
OpenAI. (2022). Aligning language models to follow instructions. https://openai.com/index/instruction-following/
Schwanke, A. (2025). The EU AI Act: A catalyst for sustainable data and AI. Medium. https://medium.com/@axel.schwanke/the-eu-ai-act-a-catalyst-for-sustainable-data-and-ai-11eb2b1d8844
Bird & Bird. Ethics Guidelines for Trustworthy AI. https://www.twobirds.com/en/capabilities/practices/digital-rights-and-assets/european-digital-strategy-developments/ai-as-a-digital-asset/ethics-guidelines-for-trustworthy-ai