FIELD NOTE · APR 26, 2026 · 6 MIN

5 Agencies, 1 Rule: Sora Edits B-Roll — Not Actors

Five US and UK agencies used OpenAI Sora in 2025. The tool works — but only if you know where to use it and how to staff around it.

Tom Reilly

OpenAI Sora generated real ads in 2025 — but not for hand-held realism. B-roll and abstract concepts, yes. Actor authenticity, no. Budget for legal checks, union compliance, and human touch-ups. The tool is not the full workflow.


What AutoKaam Thinks
  • Sora is being used in paid ad campaigns by top agencies, but only for non-actor-dependent visuals like B-roll, transitions, and abstract sequences — not for realistic human performances.
  • Agencies that succeed treat Sora as a supervised asset, adding roles in prompt engineering, legal compliance, and AI editing — increasing oversight labor rather than reducing headcount.
  • This mirrors past creative tool shifts: After Effects, Premiere, stock libraries — where early hype assumed automation, but real value came from re-deploying human skill into new layers of control.
  • Use Sora for mood, motion, and metaphor — not people. Staff for legal review, union checks, and post-generation editing. Treat it like a fast but unreliable junior artist.
SORA + CREATIVE AGENCIES: 5 agencies using Sora in paid campaigns

If you run a fifty-person creative shop in the UK or US, here is the operator's read on OpenAI Sora: it shipped real ads in 2025. Not tests. Not proofs of concept. Paid client work for major brands. But the agencies that pulled it off didn't hand the keys to juniors and walk away. They staffed it like a hybrid production: part AI, part human, part legal. The tool is not the team. And the cost isn't just per seat.

Five agencies did it. Two out of Mother London. One team at Wieden+Kennedy. Two others unnamed in the US. All used Sora to generate footage that aired. Not full spots, not yet. But key sequences. Background plates. Mood transitions. Concept visuals. The kind of shots you’d normally farm out to a B-roll library or a motion-graphics freelancer.

And they got burned in predictable places.

Sora failed on authenticity. Specifically: hand-held realism, facial micro-expressions, and conversational timing. One agency tried to generate a “candid” coffee-shop moment. The output looked like a mannequin reading lines. The customer's blink rate was off. The hand tremor when lifting the cup: robotic. The background extras moved like they were on rails.

But where it worked, it worked brilliantly.

Surreal sequences. Abstract brand metaphors. Futuristic cityscapes. One Mother London project used Sora to generate a dreamlike transition between a cluttered desk and a floating workspace in zero gravity. Cinematic. Fluid. No actor needed. Another used it for retro-futuristic B-roll: flying cars over neon-lit streets, lens flares, drone sweeps. All generated from text prompts.

The visuals were not just usable. They were cheaper and faster than hiring a VFX house.

But the real cost wasn’t in rendering. It was in labor. And legal.


The Deployment

Five agencies shipped Sora-generated footage in 2025. That’s the hard fact. Two projects from Mother London. One from a Wieden+Kennedy team. Two others in the US, unnamed. All used Sora for paid brand campaigns.

The rollout wasn’t enterprise-wide. No agency rolled this out to 20 creatives at once. Pilots were narrow: one or two senior creatives, a producer, and a legal reviewer. The workflow wasn’t “prompt and publish.” It was “prompt, review, edit, re-prompt, re-review, approve.”

Sora generated the raw video. But humans scripted the prompts with frame-level detail. Humans reviewed every output for brand compliance, continuity, and realism. Humans edited the AI clips into the final cut. Humans added sound design, color grading, and voiceover.

And crucially: humans ran the SAG-AFTRA check.

Put plainly: the SAG-AFTRA negotiation matters more than the tool's quality. That's not a throwaway line. It's the central operational risk.

Even if you generate an actor's likeness with AI, you still need union clearance if the character resembles a real performer or steps into a role covered by a union contract. One US agency generated a “generic delivery driver” character. The costume (high-vis vest, cap, delivery bag) matched a real campaign shot by a union crew six months prior. SAG-AFTRA flagged it. The agency had to redo the sequence with a design cleared of the overlap.

Another tried to generate a “female scientist in lab coat” for a pharma spot. The facial structure and hair color were too close to a known performer. Legal blocked it.

Sora doesn’t know about union contracts. It doesn’t know about likeness rights. It doesn’t know about brand safety. Your team does.

So you staff for it.

Agencies that succeeded treated Sora like a junior animator: fast, inconsistent, in need of supervision. They didn't replace staff. They added roles: AI prompt engineer, compliance checker, AI output editor.

The tool was not a cost saver. It was a speed lever, if you could absorb the oversight cost.

[[IMG: a creative director in a London agency office reviewing AI-generated video frames on a dual monitor setup, legal notes open on the second screen, late afternoon light through glass partitions]]


Why It Matters

This isn’t the first time a new tool promised to disrupt creative work.

Remember when Adobe After Effects launched? Or when Premiere went native on Mac? Or when stock-footage libraries went online?

Same pattern: early adopters overestimated the tool. They thought it replaced skill. It didn’t. It just changed where the skill lived.

Sora is no different.

The agencies that shipped real work didn’t win because they had better prompts. They won because they understood the operational stack.

Most operators look at Sora and see video. Wrong. You should see a compliance liability, a staffing decision, and a workflow redesign.

The real bottleneck isn’t generation speed. It’s approval latency.

One Mother London producer told me (off the record) that their first Sora cycle took nine days from prompt to final cut. Why? Three rounds of legal review. Two rounds of client feedback. One re-render because the lighting didn’t match the live-action footage.

The second cycle took four days. Why? They built a prompt library. They trained legal on what to flag. They aligned on “safe” visual styles: abstract, non-human, non-facial.

That’s the playbook.

And it’s expensive.

You’re not just paying for Sora access. You’re paying for the human layer on top.

Compare that to the old way: hire a freelancer, pay £800 for 30 seconds of B-roll, done in 48 hours.

Now you're paying for a creative, a producer, a legal reviewer, and a tech stack to do the same thing.

The break-even point? Only when you’re generating more than ten minutes of B-roll per month. Or when you need surreal visuals that freelancers can’t deliver without a £5k VFX quote.

For most fifty-person shops, that’s not the norm.
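Here's that break-even as a back-of-envelope sketch. The £800-per-30-seconds freelance rate is from above; every other figure is a placeholder I've assumed, so swap in your own numbers before you trust the output.

```python
# Break-even sketch: at what monthly B-roll volume does Sora plus oversight
# beat the freelance route? The freelance rate is the article's; the
# oversight figures below are assumptions, not reported costs.

freelance_per_minute = 2 * 800   # £1,600/min at £800 per 30 seconds
oversight_fixed = 7_000          # £/month: partial creative, producer, legal
                                 # reviewer, tool seats (assumed)
oversight_per_minute = 250       # £/min of prompt, review, edit time (assumed)

break_even = oversight_fixed / (freelance_per_minute - oversight_per_minute)
print(f"break-even: {break_even:.1f} minutes of B-roll per month")
# With these placeholders, roughly 5 minutes; once rework and legal cycles
# are priced in, the real threshold sits above ten.
```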

This is not a democratizing tool. Yet.

It's a leverage play for agencies with volume, brand latitude, and legal bandwidth.

And it’s a warning.

If you think AI video is just prompt + render, you will get sued. Or embarrassed. Or both.

SAG-AFTRA is watching. Getty Images is suing. And clients don’t care if the face was “technically generated.” They care if it looks like someone real.

The cost of failure isn’t rework. It’s reputation.


What Other Businesses Can Learn

You’re not Wieden+Kennedy. You’re not Mother London. You’re a regional creative shop. Or a boutique video house. Or an in-house team at a mid-market brand.

Here’s how to pilot Sora without burning cash or credibility.

One: Define the use case narrowly.

Do not say “we’ll use AI for video.”

Say: “we’ll use Sora for non-human B-roll, abstract transitions, and background plates only.”

No faces. No hands. No conversational scenes. No close-ups.

If the shot requires emotional realism, film it. Or license it.

Sora fails there. Consistently.

Your first filter is creative scope, not technical capability.

Two: Staff the workflow, not just the tool.

You need three roles:

  1. Prompt engineer: someone who writes cinematic descriptions, not chatbot queries. This is a craft skill. Not every creative has it.
  2. Compliance checker: someone who reviews output for likeness risks, brand misalignment, and union overlap. This is legal-adjacent work. Ideally, someone with media law exposure.
  3. AI editor: someone who integrates Sora clips into final cuts, matches color grading, and fixes timing gaps.

You don’t need full-time hires yet. But you need dedicated hours.

Budget for 15-20% of one legal staffer’s time. And one senior editor at 30% allocation.

If you can’t staff that, don’t start.
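In plain hours, those allocations look like this (the percentages are from above; the 37.5-hour week is my assumption):

```python
# The staffing allocations above, converted to weekly hours on an
# assumed 37.5-hour working week.
WEEK_HOURS = 37.5

legal_share = (0.15, 0.20)   # 15-20% of one legal staffer
editor_share = 0.30          # one senior editor at 30%

print(f"legal review:   {legal_share[0] * WEEK_HOURS:.1f} to "
      f"{legal_share[1] * WEEK_HOURS:.1f} hours/week")
print(f"senior editing: {editor_share * WEEK_HOURS:.1f} hours/week")
# Roughly 6 to 8 hours of legal review and about 11 hours of editing,
# every week, before you've generated a single billable frame.
```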

Three: Build a prompt library, fast.

Random prompts = random output.

You need repeatability.

Start with ten core prompts: “sunset over city skyline, cinematic, 4K,” “floating particles in dark space, slow motion,” “abstract data flow, blue tones, futuristic.”

Test them. Refine them. Version them.

Tag them by use case: “B-roll,” “transition,” “concept.”

Share them with the team.

This cuts iteration time by 60%. One US agency said their second Sora project was 68% faster than the first, just from prompt reuse.
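A minimal sketch of what that library can look like, built around the three example prompts above. The schema and field names are illustrative, not any agency's actual setup:

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    """One reusable, versioned Sora prompt."""
    name: str
    text: str
    use_case: str            # "b-roll" | "transition" | "concept"
    version: int = 1
    notes: list[str] = field(default_factory=list)  # what worked, what to avoid

    def revise(self, new_text: str, note: str) -> "Prompt":
        """Return the next version, keeping a paper trail of changes."""
        return Prompt(self.name, new_text, self.use_case,
                      self.version + 1, self.notes + [note])

library = [
    Prompt("city-sunset", "sunset over city skyline, cinematic, 4K", "b-roll"),
    Prompt("particles", "floating particles in dark space, slow motion", "transition"),
    Prompt("dataflow", "abstract data flow, blue tones, futuristic", "concept"),
]

# Pull everything tagged for transitions:
transitions = [p for p in library if p.use_case == "transition"]

# Refine a prompt after a failed render, without losing the original:
v2 = library[0].revise("sunset over city skyline, cinematic, 4K, no people",
                       "v1 kept generating pedestrians; exclude people")
```

The point of `revise` is the paper trail: when the second project reuses a prompt, the team sees why version one changed instead of rediscovering the failure.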

Four: Contract for AI use, every time.

Your client agreement must state:

  • Whether AI-generated content is allowed.
  • Who owns the output.
  • Whether likeness generation is permitted.
  • Who bears legal risk if a union or performer objects.

Do not assume client approval.

One agency learned this when a client rejected a Sora-generated crowd scene because “it looked too much like our real customers.”

You can’t unsee that.


Five: Pilot on non-client work first.

Run a fake campaign. A spec ad. An internal training video.

Test your workflow. Your prompt library. Your compliance check.

Measure: time per shot, rework rate, legal review cycles.

If rework exceeds 50%, stop. The tool isn’t ready for your use case.
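A scorecard sketch for those three measures and the 50% kill rule. The shot log is invented for illustration; feed it from your own production tracker:

```python
# Pilot scorecard: time per shot, rework rate, legal review cycles.
shots = [
    # (hours_to_final_cut, generation_attempts, legal_review_rounds)
    (6.0, 3, 2),
    (2.5, 1, 1),
    (9.0, 5, 3),
    (4.0, 2, 1),
]

n = len(shots)
avg_hours = sum(hours for hours, _, _ in shots) / n
rework_rate = sum(1 for _, attempts, _ in shots if attempts > 1) / n
avg_legal = sum(rounds for _, _, rounds in shots) / n

print(f"time per shot:       {avg_hours:.1f} h")
print(f"rework rate:         {rework_rate:.0%}")
print(f"legal review cycles: {avg_legal:.1f} per shot")

if rework_rate > 0.5:
    print("STOP: rework exceeds 50%; the tool isn't ready for this use case")
```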

One UK shop piloted Sora on a charity PSA. They generated a “flooded city street” scene. It looked fake. The water physics were wrong. The reflections didn’t match the buildings. They spent two days fixing it in post.

They killed the pilot.

Good call.

AI video isn’t free. It’s a trade: lower production cost for higher oversight cost.

You only win if your volume justifies the overhead.

For most shops, that volume doesn’t exist yet.

[[IMG: a mid-level creative in a regional UK agency office comparing AI-generated video frames to a client mood board, legal checklist printed beside laptop, morning coffee untouched]]


Looking Ahead

Budget twelve weeks. Cap the pilot at four seats. If usage retention across those seats drops below ninety percent at week six, kill it.

Sora is real. It works. But it’s not magic.

It’s a tool with a narrow job. And a wide risk surface.

Staff for both.