AI Has Become Media's Operating Layer

Netflix's $600M Deal, Hollywood's Performative Resistance, and a Structural Shift That Cannot Be Undone

AI is no longer a toolbox. It is already the infrastructure of the media industry — its operating layer. Functions once scattered across the workflow — metadata tagging, QC automation, transcript generation, recommendation engines — are now integrated into a single intelligent system. That layer routes content, enforces compliance, and drives the day-to-day operations of every workflow from the newsroom to ad insertion.

AI Settles into the Media Industry's Operating Layer
AI is already the media industry's operating layer. Studios are putting the brakes on generative models while embedding AI deeply across their production, delivery, and personalization pipelines. Actors and creators, meanwhile, pursue a strategy of 'performative resistance', pairing open letters critical of AI with investments in digital doubles, voice models, and synthetic performers.

What pushed this transition was not technology itself, but scale. As global distribution exploded into multi-region, multi-platform, multi-accessibility demands, manual version management and monitoring hit a physical wall. No human team can monitor thousands of simultaneous feeds, process dubbing and subtitles in dozens of languages by hand, or predict and respond to millions of concurrent viewers surging around a major live event. Without automation, the economics of scale became impossible — and that automation has now evolved into agentic systems capable of understanding context, setting objectives, and executing multi-step processes independently.

Ironically, the entity standing at the center of this transformation is Hollywood. At the very moment hundreds of actors and directors were signing open letters opposing AI, Ben Affleck was quietly standing on the other side — through InterPositive, an AI film production company he had secretly founded in 2022. In March 2026, Netflix acquired it for up to $600 million, completing one of the most significant AI deals in the company's history.

Today, virtually every AI partnership between a major studio and a tech company focuses not on model licensing, but on embedding AI directly into the production pipeline. AI is already the backbone running inside the set, and despite more than half of consumers expressing discomfort with digital doubles of living actors, studios show no sign of slowing down. The tension between ethics, public resistance, and the structural inevitability of AI as an operating layer is what defines Hollywood's AI era — and this report is its prologue.

Source: Netflix official announcement 

Under the Label 'AI' — Three Entirely Different Wars

The reason Hollywood's response looks contradictory is that the word 'AI' has been used to lump together three fundamentally different things. Separating them makes it clear who is angry about what — and who is quietly investing in what.

The industry’s strongest backlash is directed at the first type: generative models (Sora, Kling, ByteDance Seedance) that train on copyrighted works at scale to produce entirely new footage. These are the explicit targets of the open letters and full-page newspaper ads.

The second type — editing and post-production assistance tools — and the third — digital doubles and voice cloning — are being quietly, and in some cases enthusiastically, adopted or funded by the very people who signed those letters. InterPositive belongs squarely in the second category. That is how Affleck could co-sign an anti-AI letter while founding an AI company.

Figure 1  |  Three Types of Hollywood AI and Industry Reactions

| Category | Type | Representative Cases | Industry Reaction |
|---|---|---|---|
| (1) Generative Models | Text-to-video generation | Sora, Kling, ByteDance Seedance | Strongest backlash. Open letters and full-page ads target unauthorized training on copyrighted works. |
| (2) Production Tools | Editing & post-production assistance | InterPositive, VFX automation, ADR synthesis | Quietly embraced on set — even by signatories of anti-AI open letters. |
| (3) Synthetic Performance | Digital doubles & voice cloning | Likeness licensing deals, digital star contracts | Moving into negotiation and monetization. Actors increasingly function as IP assets. |

Source: The Ankler, 'Why Did Ben Affleck Start an AI Company in Secret?' (Erik Barmack, Mar. 12, 2026). Reconstructed by K-EnterTech Hub.

InterPositive — The Technology Fincher and Pitt Are Already Using

So what exactly did Netflix pay up to $600 million to acquire? InterPositive is not generative AI. It is a post-production tool that trains on captured footage — daily rushes — and automates lighting adjustment, shot reframing, continuity error correction, and background cleanup in post. Its design is built on two principles: it does not use other films as training data without permission, and it does not generate new content without existing source footage. This is a direct answer to the AI skepticism embedded in Hollywood labor. Director David Fincher is reported to have already applied the technology to a new film starring Brad Pitt.

Affleck founded the company in secret in 2022 with backing from RedBird Capital Partners. On the Joe Rogan Experience, he flatly dismissed the idea that AI could write scripts or replace directors, framing AI as a creative-assistance tool analogous to VFX, not a creative agent. After the company began actively raising external capital in 2025, that process culminated in the Netflix deal that embeds it directly into studio infrastructure.

“Filmmaking has been one long process of technological progress from its very beginning. We have always been trying to make things more real, more truthful — and I hope InterPositive becomes yet another step in that long and storied history.”

— Ben Affleck, Netflix acquisition announcement

Netflix had historically avoided large-scale M&A — until its recently attempted acquisition of Warner Bros. Discovery. Bloomberg framed this acquisition not as 'buying something' but as an extension of 'building something.' Rather than licensing AI capability from outside, Netflix is embedding it as core infrastructure. Amazon has built an in-house team; Disney has partnered with OpenAI. The methods differ, but the direction is the same.

Public Declarations, Quiet Investments — Hollywood's 'Performative Resistance'

AI adoption on the creative side is not just Affleck's story. What is unfolding looks less like individual star defections and more like a structural behavioral pattern that Hollywood as an industry has chosen: public warnings against AI and quiet investments in AI proceeding simultaneously, with the tension between them generating a new order.

Joseph Gordon-Levitt, Natasha Lyonne, and Ty Sheridan have all co-founded AI startups. Gordon-Levitt and Lyonne are also founding members of the Creators Coalition on AI, which advocates for creator rights in the AI era.

That coalition and the Human Artistry Campaign (HAC) are pushing for norms requiring prior consent and fair compensation whenever AI systems use creative work or likenesses. Their seven core principles cover transparency, licensing of creative work, consent for voice and likeness use, opposition to legal exemptions for unauthorized AI, and guaranteed representation of creators in policy processes.

And yet, on set, these same individuals use AI. The technology is already internalized across production, post-production, VFX, sound design, and script development. No single star can halt a massive production pipeline by refusing. Even an A-lister would find it nearly impossible to forfeit their paycheck, role, and multi-picture deals as a form of protest. Open letters do not stop on-set participation — they function as a practical middle path: recording a moral position without sacrificing a project or a career.

The Ankler's analysis pushes this further. These campaigns are not simple protests — they simultaneously construct the negotiating terrain on which stars can assert control over their digital identities and claim their share of revenues. An open letter warns about AI risk and, in the same move, establishes the justification for demanding a seat at the AI negotiating table. The practice of quietly participating in what one publicly condemns is what The Ankler calls 'performative resistance.' Symbolic politics and contract politics operating in the same sentence.

AI is, ultimately, accelerating a transition in Hollywood that was already underway. For decades, film marketing and global branding have treated actors as brands and assets. Generative AI has opened a new revenue structure for licensing voice, appearance, and movement. The fact that synthetic performer Tilly Norwood has already released a debut single signals that this competition is no longer hypothetical — it is live business. Resisting publicly while simultaneously building your own digital IP is the structural reality Hollywood stars are navigating, and the precise stage on which performative resistance plays out.

The Reality of Studio AI Deals — 'Pipeline,' Not 'Licensing'

While rhetoric shifts on one side, data quietly establishes direction on the other. Luminate Intelligence's March 2026 report makes reasonably clear where studios are actually spending their money and time.

The center of gravity in current partnerships between major studios and AI companies is, almost without exception, weighted toward the production pipeline — not licensing. Rather than buying off-the-shelf model licenses, studios are embedding purpose-built tools and models tailored to their own catalogs and workflows. Deals involving handing over entire training datasets remain uncommon across the industry, and the clearest exception is the Disney–OpenAI agreement.

That deal is often mischaracterized as a content-training license. Its actual structure is closer to a license permitting IP replication. Sora and ChatGPT Images gain access to more than 200 characters and fictional universes from Disney, Marvel, Pixar, and Star Wars, allowing fans to generate short videos and images via prompts. Individual actor likenesses and voices are explicitly excluded. Disney simultaneously enters as a major OpenAI customer and a $1 billion strategic investor, integrating AI infrastructure into its own services including Disney+.

The Lionsgate–Runway collaboration also resembles a licensing deal on the surface but is in practice pipeline-centric. Runway built a custom video generation model trained on part of Lionsgate's portfolio, designed for internal production teams across pre-production, storyboarding, and post-production. The result is less a commercial product than an internal AI tool for making in-progress projects faster and more flexible.

This trajectory extends to other Runway partnerships — Primordial Soup (co-founded by Darren Aronofsky), Chile-based Fabula, and EDGLRD, the Miami studio led by Harmony Korine. The common thread: studios remain cautious about handing content over to AI at scale, but are considerably more aggressive about building proprietary pipelines and workflow automation for their exclusive use.

The keyword running through studio AI dealmaking in 2026 is not 'licensing' — selling off data caches — but 'pipeline': embedding AI directly across production, distribution, and personalization.

Figure 2  |  AI Strategy Comparison: Major Studios and Platforms

| Company | Direction | Key Moves | Positioning |
|---|---|---|---|
| Netflix | AI Infrastructure Internalization | Acquires InterPositive for up to $600M. Following failed WBD bid, builds AI capabilities directly. Affleck joins as senior advisor. | 'Building, not buying.' AI as core operating layer. |
| Amazon | In-House AI Team | Operates a dedicated internal team to deploy AI across film and TV production. | Self-built infrastructure model. |
| Disney | IP Licensing | Partners with OpenAI. Licenses Disney, Marvel, Pixar, Star Wars characters to Sora and ChatGPT Images. The only major IP replication deal. | IP ecosystem expansion + AI platform integration. |
| Sky Italia | Distribution Optimization | AI-driven dynamic traffic routing. Delivers buffer-free 4K streaming to millions. Self-healing broadcast network. | Cost reduction + QoE uplift. |
| NBCUniversal | Hyper-Personalization | AI orchestration generates millions of individualized Olympic highlights on Peacock. | Orchestration + hyper-personalization. |
| France TV / ARD | Content Authenticity | Routine C2PA protocol deployment across VOD workflows to protect content provenance. | First-mover on trust stack standards. |

Source: Bloomberg, TV Technology, The Ankler, Luminate Intelligence (compiled)

How Far Will Consumers Accept It? — The Boundary Between Comfort and Refusal

The depth of AI penetration into the pipeline and the willingness of audiences to accept it are two entirely different questions. Luminate Intelligence’s March 2026 survey puts numbers on that gap.

Respondents expressed relative comfort with AI in areas that smooth the experience: sound effects and high-quality VFX. 38% said they were comfortable with AI-generated sound effects versus 27% uncomfortable. High-quality VFX tracked similarly: 39% comfortable to 29% uncomfortable. AI voice dubbing landed near parity: 34% comfortable, 32% uncomfortable.

Move into the creative core — especially anything involving human faces — and sentiment shifts sharply. AI involvement in scriptwriting drew 43% discomfort against 25% comfort. Digital doubles of deceased actors: 24% comfortable versus 47% uncomfortable. Living actor doubles: 22% comfortable versus 51% uncomfortable — a majority explicitly objecting. Fully synthetic virtual performers drew 48% discomfort.

Source: Luminate Intelligence, March 2026. U.S. film and TV audiences. 'Very comfortable' + 'Somewhat comfortable' = comfort total.

There is a structural irony here. The technologies studios are pushing most aggressively — synthetic performance and digital doubles — are precisely the ones generating the highest consumer resistance. The grievances articulated in Hollywood talent’s open letters align almost exactly with the areas where audiences instinctively draw the line. The intersection of these two forces is where regulatory design and marketing strategy will face their sharpest test in the years ahead.

How the Operating Layer Actually Works — Three Case Studies

While Hollywood’s debates continue, AI is descending into broadcast and streaming infrastructure at its own pace. Writing in TV Technology, Viaccess-Orca’s Einat Kahana identifies three areas where this transition is most visible: real-time monitoring, localization, and sports.

►  Real-Time Monitoring — The Self-Healing Broadcast Network

Broadcast operations teams must monitor thousands of simultaneous feeds. AI now catches what manual oversight misses — misfired graphics, muted audio, compliance violations, subtle sync drift — in real time. In one documented global sports broadcast, AI detected a graphics rendering error occurring only on mobile devices and automatically switched to a backup encoder before viewers noticed any issue.

AI-driven prediction calculates viewership surges and pre-expands resources proactively — reducing over-provisioning while improving peak-time stability. Sky Italia has evolved toward a 'self-healing' model through an AI-powered delivery platform that dynamically routes traffic and delivers buffer-free 4K streaming to millions. Agents detect congestion signals and reroute automatically; human teams intervene only at judgment-dependent points like regulatory decisions and scheduling.
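The detect-and-heal loop described above can be sketched as a minimal agent: it scans per-feed health metrics, fails over to a backup encoder when a clear fault appears, and escalates judgment-dependent cases to a human queue. This is an illustrative sketch only; the `FeedHealth` fields, thresholds, and callback names are assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class FeedHealth:
    feed_id: str
    error_rate: float      # fraction of frames with render/audio faults
    sync_drift_ms: float   # audio/video drift in milliseconds

def monitor_step(feeds, failover, escalate,
                 max_error=0.02, max_drift_ms=40.0):
    """One monitoring pass: auto-heal unambiguous faults, escalate the rest.

    `failover` and `escalate` are callbacks supplied by the operator, e.g.
    'switch this feed to the backup encoder' and 'page the on-call team'."""
    actions = []
    for f in feeds:
        if f.error_rate > max_error:
            failover(f.feed_id)              # clear fault: heal without a human
            actions.append((f.feed_id, "failover"))
        elif f.sync_drift_ms > max_drift_ms:
            escalate(f.feed_id)              # ambiguous: route to human judgment
            actions.append((f.feed_id, "escalate"))
    return actions
```

Run in a tight loop over thousands of feeds, this is the shape of the 'self-healing' pattern: machines handle the unambiguous cases, humans keep the judgment calls.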

►  Localization — Dubbing That Now Preserves Emotion

Localization remains the most labor-intensive area of media operations. AI is accelerating the full chain — translation, subtitling, compliance editing, metadata generation, platform-specific packaging — to the point where processing dozens of language versions in parallel is now routine. As Viaccess-Orca has demonstrated, systems automatically identify non-speech audio cues — '[sound of rain],' '[door creaking]' — to generate SDH subtitles at scale in commercial deployments.

Dubbing quality has shifted noticeably. The latest AI-based dubbing models are designed not only to transpose dialogue into another language but to preserve the original actor's tone, pacing, and emotional nuance as closely as possible. Netflix has indicated that titles using emotion-aligned dubbing show meaningful increases in completion rates. Internal analysis shared for one Spanish-language hit series found that applying this approach to English and French dubs reduced drop-off rates in non-English-speaking markets by double-digit percentage points compared to subtitled-only versions.

In the Luminate survey, AI voice dubbing split near-evenly: 34% comfortable, 32% uncomfortable — suggesting technical quality has largely reached an acceptable threshold, while public sentiment on the ethical and emotional dimensions is still being formed. That gap — technology working, audience emotion only half along for the ride — is the most important variable in localization AI strategy going forward.

►  Sports — Millions of Personalized Olympic Highlights

Sports broadcasting is emerging as the most concentrated demonstration of AI operating as a production layer. At the 2026 Milan-Cortina Winter Olympics, European media rights are jointly held by the EBU and Warner Bros. Discovery, with Italy's RAI handling domestic coverage via sub-license. The IOC and WBD announced that 2026–2032 will deliver 'the largest digital and streaming experience in European Olympic history,' with AI-driven personalized highlights and multi-screen experiences as core pillars.

Viaccess-Orca, in 'Behind the Scenes of Broadcasting the 2026 Winter Olympics,' describes Milan-Cortina as 'the maturation of cloud and virtualized production, and the first Olympics in which AI is deeply embedded across the full production chain.' AI systems detect key events — scoring, critical errors, medal-clinching moments — and auto-generate highlight clips ready for broadcast, digital, and social distribution within minutes. AI tracking cameras simultaneously produce separate athlete-centric feeds for individual broadcasters, with 360-degree replays and slow-motion analysis assembled automatically.
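The detect-then-assemble flow can be sketched in a few lines: score each detected event, keep those above a threshold, cut a clip with pre/post padding, and filter per athlete for the personalization layer. The event types, weights, and padding values are illustrative assumptions, not broadcast parameters.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float          # timestamp in the live feed (seconds)
    kind: str         # e.g. "goal", "medal_clinch", "crash"
    athlete: str

# Illustrative importance weights per event type (an assumption, not IOC data).
WEIGHTS = {"goal": 1.0, "medal_clinch": 1.0, "crash": 0.7, "lead_change": 0.5}

def highlight_clips(events, threshold=0.6, pre=8.0, post=12.0):
    """Return (start, end, label) clip boundaries for highlight-worthy events."""
    clips = []
    for e in events:
        if WEIGHTS.get(e.kind, 0.0) >= threshold:
            clips.append((max(0.0, e.t - pre), e.t + post,
                          f"{e.athlete}: {e.kind}"))
    return clips

def personal_feed(clips, athlete):
    """Personalization layer: keep only clips featuring one athlete."""
    return [c for c in clips if c[2].startswith(athlete + ":")]
```

The same clip list feeds every downstream package: a broadcaster's rundown takes all clips, a viewer's personalized reel takes `personal_feed(clips, "Rossi")`-style slices.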

Within Italy, RAI is delivering extensive live coverage across RAI 2, Rai Sport, and OTT service RaiPlay while building a library of clips, highlights, and on-demand content — repackaging AI-generated highlights and data feeds from the shared EBU/WBD source into Italy-specific packages centered on Italian athletes and events.

AI operates at three layers simultaneously at Milan-Cortina: a production layer for second-by-second event detection and auto-highlight generation; a personalization layer assembling athlete-, nation-, and sport-specific feeds for individual markets and platforms; and an infrastructure layer predicting per-event concurrent load and auto-scaling CDN and cloud resources. RAI, WBD, and the EBU consortium are building the first European reference model for running all three layers at once.

On the U.S. side, NBCUniversal's Paris 2024 Peacock experiment — 'Your Daily Olympic Recap' — extracted hundreds of clips daily from 5,000 hours of coverage and generated roughly ten-minute personalized highlight reels based on each viewer's interests, narrated by an AI recreation of Al Michaels' voice. Designed to produce millions of distinct highlight versions from a single event, the American model of the AI operating layer places heavier emphasis on branded, individualized experience.

The More AI Grows, the More Sophisticated the Fakes — The Trust Stack

As the operating layer strengthens, so do the risks built on top of it. Synthetic anchors, manipulated promo clips, context-altered footage, deepfake messages impersonating public figures — AI-native operations expand the potential for forgery at the same speed as content production. This converges on precisely the same question that Hollywood actors are negotiating over digital likeness rights: who can use my face and voice, and under what conditions?

The more fundamental answer is not to layer 'trust' onto assets after the fact, but to embed it from the start: injecting trust signals at the point of ingest and ensuring they persist through localization, editing, transcoding, and multi-partner distribution. The mechanism that makes this possible is what the industry calls the trust stack.

Figure 3  |  The Trust Stack for AI Content Authenticity

| Layer | Description | Status & Challenges |
|---|---|---|
| Digital Watermarking | Invisible identifiers that survive editing, compression, and screen capture. | Mature technology. Risk of removal during social media redistribution. |
| Provenance Framework (C2PA) | Cryptographically signed manifests recording asset origin and every transformation step. | Adopted by France Televisions and ARD. Uneven industry-wide uptake remains a challenge. |
| Hardware-Level Authentication | Hardware-based capture attestation that proves trustworthy origin at the moment of recording. | Sony ships C2PA-enabled cameras. Key management complexity is a barrier at scale. |

Source: TV Technology, Viaccess-Orca (Einat Kahana, Mar. 19, 2026)

Digital watermarking is already mature. Invisible watermarks embedded in news, sports, and drama masters have been used for years to trace the source of leaked copies and unauthorized streams. The limitation is that watermarks can be degraded or stripped during social media re-uploads, user clipping, and format conversion — making them useful for legal evidence and retrospective tracing, but limited as a real-time trust signal for consumers.

The complementary mechanism is the C2PA (Coalition for Content Provenance and Authenticity) framework — a kind of 'digital passport' cryptographically recording who shot the content, when, with what equipment, and what editing steps followed. France Televisions has deployed C2PA across its newsroom and VOD workflows, allowing viewers to click an icon within the player and see the provenance and edit history of any piece of content. ARD in Germany, working with AWS, is running a serverless C2PA signing pipeline for VOD that verifies the authenticity of each HLS segment in real time. The challenge: public broadcasters and some global platforms are moving first, but until commercial streaming services and social platforms adopt it uniformly, trusted-labeled and unlabeled content will coexist in the same feed.
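The 'digital passport' idea can be made concrete with a simplified manifest: hash the asset, record each edit step, and sign the pair so tampering with either is detectable. This is a conceptual stand-in only; real C2PA manifests use X.509 certificates, COSE signatures, and a JUMBF container, not the bare stdlib HMAC used here for illustration.

```python
import hashlib
import hmac
import json

def make_manifest(asset_bytes, steps, signing_key):
    """Build a simplified provenance manifest: asset hash + edit history,
    signed so later tampering with either is detectable.
    (Conceptual sketch of the C2PA idea, not the C2PA wire format.)"""
    body = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "history": steps,  # e.g. ["captured:camera-A", "color-grade", "transcode:HLS"]
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return body

def verify_manifest(asset_bytes, manifest, signing_key):
    """Re-derive the signature and asset hash; return False on any mismatch."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())
```

The design point survives the simplification: because every transformation step is inside the signed payload, a re-encode or re-edit that is not re-signed breaks verification, which is exactly what lets a player show a trustworthy provenance icon.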

The final layer is hardware-level authentication. Sony introduced C2PA-based digital signing on select flagship cameras from 2024, recording at the moment of capture that 'this image was taken by a specific camera, at a specific time, from a real three-dimensional subject.' The company has since released a newsroom-targeted video authenticity solution. The challenge is key management — coordinating authentication keys across thousands of cameras in global newsrooms and production houses, and integrating that information consistently with editing tools, DAMs, and CDNs, carries significant organizational and technical overhead.

Challenges remain sharp: metadata stripping, the social media 'black hole,' uneven adoption, and key management burdens. Without these layers, AI-native operations carry existential risks for brand trust and legal liability. Critically, the trust stack conversation is not separate from Hollywood actors' likeness demands. One is the language of rights — 'don't use my face and voice without permission' — and the other is the language of technology — 'prove this content is real.' Both are attempting to solve the same underlying question from different directions: who can use a given digital identity, and under what conditions? These two conversations will inevitably converge at the level of regulation, standards, and contracts.

Building While Resisting — What the Media Industry Looks Like in Three to Five Years

The debate over AI governance is not resolved, but the broad direction of the industry is already set. The media companies that survive will move beyond attaching AI to legacy workflows. They will operate inside agentic, policy-driven systems that learn from outcomes, route work flexibly between humans and machines, and treat trust as a default — not an afterthought. As those systems mature, each output becomes training data for the next, KPIs become the decision-making standard, and the same operational model replicates across regions and services.

For organizations beginning that journey, the starting point is not a grand AI strategy — it is one or two high-value problems. Choose something tangible: a manual QC bottleneck, dubbing and subtitle throughput, drop-off rates in a specific genre. Connect them to clear KPIs: reduced broadcast preparation time, higher version throughput, improved completion rates. Define risk ownership through lightweight governance, document when and how humans intervene in agentic systems, and add C2PA or provenance support to vendor evaluation checklists from the outset.

The actor's role is also changing. The 51% consumer discomfort rate with digital doubles of living actors marks a ceiling for that market segment. But the fact that AI synthetic performer Tilly Norwood has already launched a debut single and the music video 'Take the Lead' shows the competition is live. A world in which actors increasingly function as IP is arriving, and in that world, the current pattern of performative resistance — expressing public caution about AI while quietly building digital doubles and voice models — will likely be read in retrospect as strategic positioning in a dispute over revenue share and control.

Full-page ads and solidarity letters will continue to make headlines. Behind them, AI hybrid production is already running quietly as an operating layer that does not switch off. Hollywood stars warn that AI will take their jobs while simultaneously building and managing the digital versions of themselves that will generate future revenue. When this period of simultaneous resistance and construction ends, the media industry will no longer divide itself by the question 'will we use AI or not' — but by 'on what operating model, and within what framework of trust and IP, will we work alongside it?' That is the question that will separate the survivors from the rest.

Sources

Bloomberg  'Netflix to Pay as Much as $600 Million for Ben Affleck's AI Firm'  — Lucas Shaw (Mar. 12, 2026)

The Ankler  'Why Did Ben Affleck Start an AI Company in Secret?'  — Erik Barmack (Mar. 12, 2026)  theankler.com

TV Technology  'AI Is Becoming the Operating Layer for Media and Entertainment'  — Einat Kahana, Viaccess-Orca (Mar. 19, 2026)  tvtechnology.com

Luminate Intelligence  'Film/TV Studio AI Partnerships' / 'Consumer Comfort with AI in Film & TV Production'  (March 2026)