AI Didn't Take the Jobs: The Real Cause Behind California's 14% Creative Economy Workforce Decline
Otis College's 2026 report finds California's 14% creative job losses stem from Peak TV collapse and streaming restructuring, not AI — while AI is reshaping how work gets done, not who does it.
Jung Han · 8 min read
Otis College Report: AI Is Rewriting the Grammar of Creative Work — Not Eliminating the Workers Who Do It
When generative AI burst onto the scene, California's creative economy began shedding jobs at an alarming rate — 114,000 positions, or 14 percent of the workforce, vanished between 2022 and 2025. The world quickly settled on a narrative: AI was replacing creative workers.
A new report from Los Angeles-based Otis College of Art and Design, released April 7, demolishes that narrative with data. The occupations most exposed to AI — writers, software developers, artists — actually grew during this period. The sectors that contracted most were film, television, and traditional media, gutted not by algorithms but by the collapse of Peak TV and the pivot to streaming profitability.
The verdict: the real culprits are structural cost pressures and industry restructuring — not artificial intelligence.
What the Numbers Actually Show
California's creative economy lost 114,000 jobs between 2022 and 2025 — a 14 percent decline. The film, TV, and sound sector contracted by nearly 30 percent; traditional media fell by approximately 34 percent. On the surface, these losses coincide with the ChatGPT release in November 2022. But the report's authors argue the pattern of losses does not match an AI displacement scenario.
Co-author Patrick Adler, founding partner of Westwood Economics and Planning Consultants, is direct: "The pattern of job loss in terms of the types of jobs that are being lost and when they're being lost does not support the fact that there's been this displacement of workers by AI." The losses instead reflect the unwinding of Peak TV, streaming platforms' hard pivot to profitability, and the migration of lower-wage roles out of California due to soaring living costs. Meanwhile, new media employment surged 47 percent, and streaming held near the national average.
[Figure 1] California Creative Economy Employment Change by Sector (indexed, 2017 Q1 = 0). Source: Otis College / BLS QCEW
The Counterintuitive Finding: AI-Exposed Jobs Are Growing
The report's most striking finding is that the occupations most exposed to AI tools are actually adding jobs. Writers, software developers, and visual artists — the roles most frequently cited as vulnerable to generative AI — saw increases in both employment and job postings between 2022 and 2025.
This suggests that, at least at this stage of AI's development, a complementarity effect is outweighing displacement. AI tools appear to be amplifying demand for high-skill creative workers who can leverage them, rather than replacing those workers outright. The caveat: this is a snapshot, and the authors acknowledge the calculus could shift as AI capabilities mature.
[Figure 2] AI-Exposed Occupations Grow While CA Creative Economy Contracts (Index 2022=100). Source: Otis College / CPS·BLS
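A quick note on the indexing convention these charts use: each series is divided by its value in the base period so different-sized sectors can be compared on one axis. A minimal sketch, with invented employment counts (not BLS/Otis data) chosen only so that 2025 lands 14 percent below 2022, echoing the article's headline figure:

```python
# Re-express raw employment counts as an index with 2022 = 100.
# Counts are hypothetical placeholders, not BLS/Otis data; they are picked
# so that 2025 sits exactly 14% below the 2022 base.
employment = {2022: 410_000, 2023: 389_000, 2024: 362_000, 2025: 352_600}

base = employment[2022]
index = {year: round(100 * count / base, 1) for year, count in employment.items()}
print(index)  # {2022: 100.0, 2023: 94.9, 2024: 88.3, 2025: 86.0}
```

Subtracting 100 from each value gives the "change since base period" form (2017 Q1 = 0) used in Figure 1.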
How AI Is Actually Used on Set and in Studios — Task-Level, Not Role-Level Replacement
The qualitative interviews paint a consistent picture. Not a single respondent described AI as having replaced their entire role or workflow. Instead, AI is being deployed for specific, bounded tasks — "where the output is verifiable, time savings are clear, and the quality of output meets expectations." In short, AI is replacing specific tasks, not staffers.
In post-production, AI substantially reduces repetitive work like rotoscoping and wire removal, but human review and correction remain necessary — limiting the net cost savings. Auto-generated masks and cleanup results frequently contain errors on hair, translucent objects, and complex backgrounds, requiring senior and mid-level artists to review and correct frame by frame.
This creates a "QC workforce" that can offset, and in some feature or series contexts exceed, the cost savings from reduced rendering and outsourcing. One VFX company owner put it plainly: "They have 15 artists that are sitting at workstations fixing the AI. When you multiply the rate of the artists by 15 and put that against the cost of the work you're doing, it negates any savings that AI is giving you."
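The arithmetic behind that quote is worth making explicit. The sketch below uses invented round numbers (the headcount of 15 comes from the quote; the rate, hours, and savings figure are hypothetical, not from the report) to show how QC labor can erase the savings AI delivers:

```python
# Back-of-envelope check: do AI rotoscoping/cleanup savings survive
# the cost of the "QC workforce" that fixes the AI's output?
# All dollar figures and hours are hypothetical, illustrative only.
AI_TOOL_SAVINGS = 120_000   # assumed savings vs. fully manual work, USD
QC_ARTISTS = 15             # artists fixing AI output (per the quote)
ARTIST_RATE = 65            # assumed USD per hour, per artist
QC_HOURS = 160              # assumed cleanup hours over the project

qc_cost = QC_ARTISTS * ARTIST_RATE * QC_HOURS
net_savings = AI_TOOL_SAVINGS - qc_cost

print(f"QC cost: ${qc_cost:,}")          # QC cost: $156,000
print(f"Net savings: ${net_savings:,}")  # negative: QC wipes out the savings
```

With these placeholder numbers, the correction work alone costs more than the AI saves — exactly the negation the VFX owner describes.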
A subtler finding: some workers are hiding their AI use, fearing it marks them as replaceable. One motion-design respondent recounted the deeper cultural risk: "The creative director said, 'At a certain point, you just have to say it's good enough.' That's the biggest danger of AI. We lower our standards."
California's Underperformance Is Not Explained by AI Exposure
Figure 3 maps California's industry-level employment growth relative to national peers against each industry's AI exposure score. The R-squared value of 0.08 tells the story: there is virtually no statistical relationship between AI exposure and California's relative underperformance.
Software, games, and publishing — high-AI-exposure sectors — underperformed, but so did fashion and performing arts, which have low AI exposure. The common thread is California-specific structural factors: high operating costs, competitive pressure, and the structural collapse of Hollywood's Peak TV model. AI is not the explanatory variable.
[Figure 3] CA Underperformance Is Not Explained by AI Exposure (2022–2025, R²=0.08). Source: Otis College / BLS QCEW
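For readers curious how a figure like that R² is produced, the sketch below fits an ordinary least-squares line to invented placeholder points (not the report's data) and computes R² the standard way:

```python
# Hypothetical (AI exposure score, CA growth relative to national peers, %)
# pairs -- invented placeholders, not the Otis report's underlying data.
points = [(0.9, -8.0), (0.8, -3.0), (0.7, -12.0), (0.3, -9.0),
          (0.2, -2.0), (0.5, -6.0), (0.6, 1.0), (0.1, -7.0)]

n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n

# Ordinary least-squares fit: growth ~ slope * exposure + intercept
s_xy = sum((x - mean_x) * (y - mean_y) for x, y in points)
s_xx = sum((x - mean_x) ** 2 for x, _ in points)
slope = s_xy / s_xx
intercept = mean_y - slope * mean_x

# R^2 = 1 - SS_res / SS_tot: the share of variance in relative growth
# that AI exposure explains.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in points)
ss_tot = sum((y - mean_y) ** 2 for _, y in points)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")  # near zero: exposure explains almost nothing
```

An R² of 0.08 means AI exposure accounts for only 8 percent of the variance in relative performance — the other 92 percent lies elsewhere.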
AI Is Rewriting the Grammar of Creative Work
The report distills to three core conclusions. First, AI is not currently displacing creative workers — in high-skill roles, it is augmenting demand. Second, AI is rapidly changing the nature of creative work: raising productivity expectations, automating discrete tasks, and creating pressure to accept "good enough" outputs. Third, the pace and depth of AI adoption are determined less by technology than by trust.
The authors argue that organizations which reassure workers their jobs are safe — through policies like no-layoff pledges paired with AI investment — will see faster and deeper AI adoption. Adler summarizes: "There's pretty good evidence that AI adoption would be a lot faster, a lot deeper if creative workers had more trust in it." This is an organizational management challenge, not a technology problem.
📰 Related Case | NYT Guild Calls AI Standards 'Woefully Inadequate' — The Trust Crisis in Action (Axios, April 7, 2026)
Just as the Otis College report identifies hidden AI use and organizational trust as central issues, the New York Times is living that tension in real time — as a labor dispute.
On April 7, 2026, the Times Guild's AI subcommittee sent a letter calling the paper's AI standards "woefully inadequate." The trigger: a freelance book reviewer used AI to write a review that drew phrases from a similar Guardian piece. The guild's three demands: (1) contractual protections against AI use in performance reviews; (2) mandatory disclosure of AI use in published journalism; (3) stronger name, image, and likeness (NIL) protections. Management accepted only the third point in modified form.
▶ Implication: A single AI-plagiarism incident cascaded into subscriber trust damage, newsroom morale problems, and a contract breakdown. Korean broadcasters and production companies designing AI transitions must establish governance frameworks, disclosure standards, and labor protections before rollout — not after.
Note: In 2025, unionized journalists at Politico and E&E News won arbitration against management over AI tool introduction. Newsroom AI labor disputes are a structural trend.
Hollywood's Studio-AI Deals: 'Production Pipeline Integration,' Not Licensing
Virtually every Hollywood studio-AI partnership struck to date focuses not on training data licensing but on deploying AI image and video tools directly into pre- and post-production workflows. The only deal to include a training data license is Disney-OpenAI — and even that is closer to character licensing than a data sale, involving exposure of Disney IP to Sora and ChatGPT.
The Lionsgate-Runway deal, frequently mischaracterized as a training data license, is actually a fine-tuning arrangement: Lionsgate's catalog is used to build a custom video generation model for exclusive internal use by its directors and creatives. The data doesn't leave — the capability gets stronger.
The table below covers the six major studio-AI partnerships excluding Disney-OpenAI. In all six cases, the core value exchange is 'AI technology embedded in production' — not data sales, not IP replication rights.
| AI/Tech Co. | Studio | Date | Training Data Lic. | IP Repro. Lic. | Purpose / Notes |
| --- | --- | --- | --- | --- | --- |
| Runway | AMC Networks | Jun 2025 | None | None | Integration of Runway generative AI tools into marketing and TV development (campaign ideation, previs, VFX). Production pipeline internalization. |
| Google DeepMind | Primordial Soup | May 2025 | None | None | Access to the Veo video generation model and other tools for three directors; short film production paired with feedback collection. R&D / creative pilot. |
| Runway | Fabula | Apr 2025 | None | None | Runway adopted across global production pipeline: pitch materials, concept development, storyboarding, VFX. |
| Runway | EDGLRD | Apr 2025 | None | None | First-look development deal for new formats and projects using Runway tools. Development-stage focus. |
| Meta | Blumhouse | Oct 2024 | None | None | Creative pilot program testing Meta's AI image/video model suite 'Meta Movie Gen'. Exploratory / R&D. |
| Runway | Lionsgate | Sep 2024 | Yes (fine-tuning) | None | Custom AI model fine-tuned on Lionsgate film/TV catalog for exclusive use by directors and creatives in pre/post-production. Not a data sale — internal capability enhancement. |
[Table] Major Hollywood Studio-AI Partnerships (2024–2025, excl. Disney-OpenAI). Source: K-EnterTech Hub
▶ Key Observation: Runway is rapidly consolidating its position as Hollywood's production AI vendor of choice, with deals spanning AMC Networks, Fabula, EDGLRD, and Lionsgate. Google DeepMind and Meta are conducting market exploration through targeted pilots with Primordial Soup and Blumhouse respectively. The consistent pattern: AI is moving into studios, not out of them.
IMPLICATIONS FOR KOREA'S MEDIA & CONTENT INDUSTRY
The California findings offer a critical reference framework as Korea navigates its own AI-driven media transformation. The parallels — and the divergences — are instructive.
1. Stop Blaming AI — Diagnose the Real Structural Issues

Korea's broadcasting and media industry faces its own structural pressures: advertising market contraction, intensifying OTT competition, and soaring production costs. Attributing job losses or revenue decline to AI is a misdiagnosis. As in California, the primary drivers are likely industry restructuring and platform economics. Accurate diagnosis must precede policy prescription.

2. K-Content's Core Strength — High-Skill Creativity — Remains Durable in the AI Era

The finding that AI-exposed, high-skill occupations grew is directly relevant to Korea's competitive advantage. K-drama and K-film are built on storytelling, directing, and visual craft — precisely the high-skill creative roles that appear resilient to AI displacement. The framework of 'co-evolution (공진화)' — humans and AI growing together — is the right lens for Korean content strategy.

3. FAST and Streaming Expansion Opens New AI Application Zones

The expansion of FAST channels, YouTube, and OTT platforms creates targeted opportunities for AI deployment in subtitling, localization, VFX cleanup, and metadata optimization. Korean media companies should pursue granular, task-level AI integration rather than wholesale automation — mirroring the bounded AI use the Otis report documents in Hollywood.

4. The 'No-Layoff' Principle — A Lesson for Korea's Legacy Broadcasters

The report's most actionable policy recommendation applies directly to KBS, MBC, SBS, and major production studios undertaking AI transitions. Workforce anxiety suppresses experimentation. Organizations that guarantee job security while piloting AI tools will achieve faster, deeper adoption — especially relevant as Korean public broadcasters negotiate AI governance frameworks with unions.

5. Evidence-Based AI Policy — A Call for Korean Regulators and the National AI Committee

As Korea's National AI Committee, MSIT, and the Korea Media Communications Commission (방송미디어통신위원회) design AI policy for the media sector, the Otis report is a caution against over-relying on an 'AI displacement' frame that may not reflect empirical reality. The Commission can play a key role by conducting sector-specific surveys — VFX, animation, post-production — on AI adoption impacts (costs, labor intensity, new roles, freelance structure changes), and using the findings to calibrate standard contracts, fair trade guidelines, and copyright/neighboring rights protections.
The co-evolution framework advanced by Distinguished Professor Samseog Ko (고삼석) — designing ecosystems where AI and human creators advance together — should anchor Korea's regulatory approach. The policy goal must shift from "how much can we restrict AI" to "how can AI raise the capability, compensation, and security of human creators" — enabling an integrated long-term ecosystem design that combines regulation and promotion.
The temptation to tighten restrictions under the intuitive slogan 'stop AI to protect jobs' risks crowding out the retraining, transition, and innovation investment the industry actually needs. Two priorities should guide evidence-based policy: first, a 'facts-first' principle — quantitative analysis of displacement vs. complementarity effects, occupational skill shifts, and income distribution impacts before designing any regulatory instrument; second, multi-layered co-evolution support — education and upskilling programs, tripartite labor-industry-platform governance structures, and strengthened rights and compensation protections for human creators.
Source: The Hollywood Reporter, Katie Kilkenny, "California's Creative Job Losses Aren't AI Casualties, Key Report Finds" (April 6, 2026)
Data: Otis College of Art and Design / Westwood Economics & Planning Consultants, '2026 Creative Economy Report'