Generative AI in Film & TV Production: Where Hollywood Really Stands

Since the 2023 Hollywood strikes, studios have adopted a dual posture on AI: cautious in public, aggressive in private. AI's penetration into production has been faster than expected but shallower: studios are quietly standardizing the least legally risky workflows first, while the boundaries of what's permissible remain actively contested.

Adoption Landscape: Uneven and Strategically Selective

In the wake of the 2023 Hollywood strikes, major studios began quietly assessing how generative AI could be integrated into production workflows — primarily as a cost-reduction lever. The pace and scope of adoption, however, have been anything but uniform.

AI studios and independent production companies have been the most open and aggressive adopters. Among the major players, technology-forward studios like Netflix and Amazon MGM are leading the charge. Lionsgate has made its intentions public, announcing a partnership with Runway to leverage AI video tools across development and production. Across the board, however, the major studios have maintained a cautious and deliberately limited posture when it comes to embedding AI in core production pipelines.

In pre-production and select post-production workflows, AI use is already becoming standard practice. The leading applications — AI-generated visualization assets, LLM-based script breakdown, and AI voice synthesis for dubbing — share a common characteristic: low legal exposure. In many cases, these outputs do not require copyright protection, which makes them significantly lower-risk to deploy.

High-Risk Territory: Final Pixel and Synthetic Performers

Using AI-generated footage as final pixel content — material that appears directly in the finished broadcast — is treated as high-risk and remains highly exceptional. Amazon's House of David and select shots in Netflix's The Eternaut stand as rare examples. Tellingly, both projects framed their use of AI footage to investors as a VFX cost-saving measure — a signal that cost logic remains the single most powerful driver of AI adoption decisions.

AI modification of existing footage occupies a more permissible middle ground. De-aging and aging actors, AI-driven lip-sync for content localization, and AI-assisted reshoot work (with actor consent) have met comparatively little resistance. The common denominator across these use cases is that they augment existing performances rather than replace them.

AI Use Cases in Production: Adoption Level & Risk Profile

| Workflow | Description | Adoption | Risk Level |
|---|---|---|---|
| Pre-production Visualization (Previsualization) | AI-generated pitch decks, concept art, storyboards, reference images, costume and set design ideas. Most broadly adopted workflow. | Widespread | Low (no copyright required) |
| LLM Script Breakdown | LLM-based automated script analysis and breakdown: character, location, and scene segmentation. | Widespread | Low |
| AI Dubbing (Voice Synthesis) | AI voice synthesis for foreign-language dubbing and localization lip-sync. Requires actor consent. | Widespread | Low (consent required) |
| Background Replacement (BG Replacement) | Replacing green-screen backgrounds with AI-generated environments. | Selective | Medium |
| De-aging / Aging (Visual Age Adjustment) | Visual age modification of existing footage. Also covers AI lip-sync for localization and AI-assisted reshoots. Actor consent required. | Selective | Medium (consent required) |
| Final Pixel Content (AI-generated footage) | AI-generated footage used directly in broadcast content. Examples: House of David (Amazon), select shots in The Eternaut (Netflix); both framed to investors as VFX cost savings. | Highly Limited | High (majors avoiding) |
| Synthetic Performer (Fully AI actor) | Fully AI-generated actors with no real-person basis. Example: AI actress Tilly Norwood (produced by Particle6). | Early Experiment | Very High (union opposition) |

Analysis: A Risk-Minimization Strategy — And What Breaks It

The pattern across the industry is consistent: AI adoption in film and TV production is proceeding as a risk-minimization strategy. Workflows with the lowest legal and labor exposure are being standardized first, creating a clear sequencing from pre-production visualization down toward final pixel content.

The extreme rarity of AI-generated final pixel content is not a function of technological limitation — the tools exist. It reflects a deliberate studio judgment about legal and labor risk. That judgment is subject to change. When union negotiations reach settled outcomes and key litigation concludes, the constraints currently holding back AI's deepest penetration into production will ease. At that point, adoption is likely to accelerate sharply.

The industry is not waiting for AI to become capable enough to use in production. It is waiting for the legal and contractual environment to become clear enough to do so at scale.