Lionsgate Hires Hollywood’s First Chief AI Officer. An Oscar-Winning Director Says the Industry Isn’t Ready.

HOLLYWOOD & TECHNOLOGY

Kathleen Grace, an AI rights-management specialist, will oversee the studio’s enterprise-wide artificial-intelligence strategy—the same week Daniel Kwan warned Sundance that creators are ‘collateral damage’

Lionsgate announced the appointment of Kathleen Grace as its first Chief AI Officer on LinkedIn, Feb. 5, 2026.

In the span of a single week, Hollywood sent two starkly divergent signals about the future of artificial intelligence in entertainment.

On Feb. 5, Lionsgate became the first major Hollywood studio to create a dedicated C-suite position for AI, naming Kathleen Grace as its inaugural Chief AI Officer. Grace, a former executive at AI rights-management startup Vermillio, will report directly to Chief Executive Jon Feltheimer and oversee the integration of artificial intelligence across every dimension of the studio’s operations—from film production and visual effects to content distribution and intellectual-property protection.

The following day brought the counterpoint, as The Hollywood Reporter published remarks in which Daniel Kwan, the Oscar-winning co-director of “Everything Everywhere All at Once,” delivered a blunt warning to the entertainment industry from a Sundance Film Festival stage: “We are not ready for this and we are the collateral damage.”

The juxtaposition was not coincidental. It crystallized a tension that now sits at the center of Hollywood’s AI reckoning: studios are racing to institutionalize AI as a strategic asset, while creators are mobilizing to ensure the technology doesn’t erode the rights and livelihoods it was ostensibly designed to support.

The Appointment

Lionsgate’s decision to elevate AI governance to the C-suite carries significance beyond organizational reshuffling. By placing Grace in a role that reports directly to the CEO—rather than housing AI under a chief technology officer or chief information officer—the studio signaled that artificial intelligence is no longer a back-office function. It is, in the company’s framing, a strategic imperative that touches “every facet” of the business.

“Kathleen understands the AI ecosystem from the perspective of both the creator and the IP owner,” Feltheimer said in a statement. “She is the right person to lead our team in this exciting, complex, and nuanced environment.”

Grace’s mandate spans three pillars: developing AI-powered tools that support filmmakers’ creative vision; identifying efficiency opportunities across production, marketing, distribution, and corporate operations; and spearheading initiatives to protect the studio’s intellectual property and the rights of its talent partners. In the wake of the 2023 SAG-AFTRA and WGA strikes—which placed AI and talent rights at the center of industry-wide negotiations—the integrated approach represents a potentially industry-defining model.

An AI Rights Specialist in the Corner Office

Grace’s professional biography explains why Lionsgate chose an IP-protection specialist rather than a pure technologist for the role. Before joining Lionsgate, she served as Chief Strategy Officer at Vermillio, an AI startup that built what it describes as “the first AI rights management platform.” Vermillio’s flagship product, TraceID, enables content owners and talent to track, authenticate, and receive compensation when their work is used in AI model training—a capability that has become increasingly critical as generative AI systems ingest vast libraries of copyrighted material.

Vermillio’s TraceID platform, which Grace helped build, tracks unauthorized AI use of creative content, assigns AI risk scores, and generates threat reports for rights holders.

Lionsgate’s leadership specifically cited Vermillio as “a company focused on protecting creators and talent as it relates to AI adoption.” Feltheimer called talent protection “a real priority.”

Dan Neely, Vermillio’s CEO, said he was “proud to see Kathleen step into this pioneering role as the first Chief AI Officer at a Hollywood studio.”

Before Vermillio, Grace founded New Form, a digital studio backed by Ron Howard, Brian Grazer, and Discovery Communications. Under her leadership, New Form developed 43 pilots and sold 23 series to networks and platforms including TBS, Freeform, Quibi, Refinery29, and go90. She also ran YouTube’s global Spaces program. The combination of creative-industry experience and AI-governance expertise made her, in Feltheimer’s assessment, uniquely suited to navigate what he described as the “exciting, complex, and nuanced” landscape of AI adoption in entertainment.

AI Already at Work Inside Lionsgate

Grace is not starting from a blank slate. Lionsgate has been quietly deploying AI across multiple business functions, as Vice Chairman Michael Burns detailed during the company’s fiscal third-quarter earnings call. The studio has partnered with generative-AI startup Runway to train a custom AI model on its content library—an industry-first deal that positions Lionsgate’s vast catalog of films and television series as a proprietary training dataset.

| Application Area | Details | Strategic Significance |
|---|---|---|
| FAST Scheduling | AI-driven automated channel programming | Scalable global content distribution |
| VFX / Post-Production | AI-enhanced visual effects (e.g., “Spartacus”) | Cost reduction with quality gains |
| Script Development | Writer-collaborated AI revision tools | Creative augmentation positioning |
| Technical Operations | Enterprise-wide AI integration | Back-office efficiency |
| Model Training | Runway partnership; library-based custom model | Proprietary AI asset development |


Burns was notably circumspect about AI’s role in original creative work. “Maybe we are, but I’m not going to talk about it,” he said when asked whether AI was being used in content creation—a strategically ambiguous response that reflects the post-strike sensitivity around the subject. Feltheimer was more explicit about the guardrails: “I am only in favor of AI if appropriate guardrails are established.” He added that Grace would become the studio’s “point person” for partnership discussions with every major AI company.

The Financial Context

Lionsgate’s CAIO appointment arrives during a period of mixed financial performance that underscores both the opportunities and pressures driving the studio’s AI strategy.

| Metric (FY2026 Q3) | Result | YoY Change |
|---|---|---|
| Total Revenue | $724.3M | |
| Motion Picture Revenue | $421M | +35% |
| TV Production Revenue | $303.1M | Declined |
| TV Segment Profit | $55.7M | Declined |
| Trailing 12-Month Library Revenue | $1.05B | +10% (record) |

The motion-picture division surged 35% year-over-year, powered by the box-office performance of “The Housemaid” and “Now You See Me: Now You Don’t.” Wolfe Research analyst Peter Supino attributed the success of “The Housemaid” to “precise execution of a playbook built on existing IP, risk diversification, mid-range budgets, and franchise expandability,” and maintained a bullish outlook on Lionsgate’s FY2027/28 slate.

Perhaps the most strategically significant number: trailing 12-month library revenue crossed $1.05 billion, a record. In an era when content libraries serve a dual purpose—as both traditional licensing assets and as training data for AI models—this milestone takes on new meaning. The Runway partnership to train custom AI models on Lionsgate’s library transforms a catalog of legacy content into a proprietary AI asset, a potential model for the industry.

‘We Are the Collateral Damage’

The day after Lionsgate’s announcement, The Hollywood Reporter published a detailed account of Kwan’s remarks at a THR x Autodesk panel titled “AI and Independent Filmmaking,” held Jan. 25 at Pendry Park City during Sundance. The panel, produced in partnership with the Berggruen Institute, also featured actor-entrepreneur Joseph Gordon-Levitt, director Noah Segan, producer Janet Yang, and Autodesk’s Matthew Sivertson.

Kwan’s remarks, delivered over approximately 30 minutes, struck a consistently cautionary tone. His central thesis: the entertainment industry must establish collective guardrails before AI companies set the rules unilaterally.

“We are not ready for this and we are the collateral damage,” Kwan said, arguing that if AI companies are allowed to dictate terms, the consequences would extend far beyond Hollywood to affect “multiple industries and the general public.”

“This is an all-hands-on-deck situation,” he continued, calling on the industry to organize collectively rather than rely on individual studio-level responses.

“This technology is incompatible with our current systems, our current institutions, our current labor laws,” Kwan warned—a statement that framed AI not as an incremental tool upgrade but as a structural disruption that cuts through a century of legal and institutional frameworks.

Yet Kwan’s position was more nuanced than simple opposition. He acknowledged that AI is both “amazing” and “terrible” for filmmakers, and that it could be “a tool that could transform our industry in a much better way.” He added, with characteristic directness: “Our industry is not perfect.”

The Data-Harvesting Warning and a New Documentary

Kwan also issued a pointed warning about social-media data harvesting, specifically calling out a trending “2016 throwback photo” challenge on Instagram, TikTok, and Threads. “Don’t do it,” he urged the audience. “They’re using it to train their machines to show how people age. Stop.” The comment underscored how personal data shared on consumer platforms is being repurposed—often without explicit consent—to train AI models, a concern that extends well beyond the entertainment industry.

Kwan was at Sundance to support the world premiere of “The AI Doc: Or How I Became an Apocaloptimist,” a documentary he produced with his filmmaking partner Daniel Scheinert and producer Jonathan Wang. Directed by Daniel Roher and Charlie Tyrell, the film is set for release by Focus Features on March 27. Kwan described the project as covering “every major AI issue” and featuring “almost every major figure in the industry.”

“I am sick of talking about AI,” Kwan conceded. “The problem is human nature and entropy. Building something is always harder than tearing it down, and right now tearing things down is so much easier.” The documentary, he said, was conceived over three-plus years with the goal of showing audiences “how to regain agency” beyond the “bullshit and hype” of technology-company marketing.

A Creators’ Coalition and a Transition Framework

Kwan also referenced the recently launched Creators Coalition on AI, a collective that represents the shift from individual creators negotiating with studios to organized, systematic industry response. His framing of the current moment as a “critical juncture” carried both urgency and philosophical weight.

“Things are ending, but something else is coming,” Kwan said. “Mourn what’s ending. Protect what truly matters. And plant seeds for what’s coming next.”

The statement positioned Kwan not as a Luddite opponent of technology but as an advocate for managed transition—one that acknowledges inevitable change while insisting on the preservation of creative rights and human agency.

Where the Two Positions Converge—and Diverge


| | Lionsgate | Daniel Kwan |
|---|---|---|
| AI Stance | Active adoption with guardrails | Caution; industry-wide collective response |
| Talent Protection | Stated as ‘a real priority’ | Warned creators are ‘collateral damage’ |
| Key Action | Hired an AI IP specialist as CAIO | Called for Creators Coalition formation |
| Core Difference | Corporate-led internal governance | Creator-led collective bargaining |

Common ground: both acknowledge AI’s duality, and both emphasize creator protection.

The striking feature of these two developments is not their opposition but their overlap. Feltheimer and Kwan both invoke the language of guardrails and creator protection. Both acknowledge that AI carries genuine creative and operational benefits alongside serious risks. The divergence lies in a single, fundamental question: Who controls those guardrails?

Lionsgate’s answer is corporate-internal governance—a C-suite officer empowered to set and enforce policy from within the studio. Kwan’s answer is collective, external accountability—a coalition of creators who negotiate terms from outside corporate structures. The tension between these two models will likely define the political economy of AI adoption in entertainment for years to come.

What It Means for Global Content Industries

The dual signals emanating from Hollywood in the first week of February 2026 carry implications that extend well beyond the domestic entertainment market.

AI IP protection is now a C-suite concern. Lionsgate’s choice to recruit from Vermillio—a company whose entire business model centers on protecting creative IP from unauthorized AI use—sends a clear message. As K-drama, K-film, and other globally distributed content libraries grow in value, the risk of unauthorized AI training on these assets increases proportionally. Studios and broadcasters in markets like South Korea, Japan, and India will face mounting pressure to establish comparable governance structures.

Creator coalitions are going global. Kwan’s call for organized, collective creator response reflects a pattern that will inevitably cross borders. In South Korea, where legal and institutional frameworks for AI-related talent protection remain in early stages, the absence of SAG-AFTRA-style collective bargaining mechanisms creates a vacuum. Industry-level voluntary guidelines may need to precede regulatory action.

Content libraries are the new oil. Lionsgate’s record library revenue—and its decision to train proprietary AI models on that library—reframes the value proposition of legacy content catalogs. The decades-deep archives held by broadcasters like KBS, MBC, and SBS represent not just traditional licensing assets but potential “digital crude” in an AI-driven economy. Valuation models will need to account for this dual utility.

The guardrails principle is becoming universal. Feltheimer’s insistence on “appropriate guardrails” and Kwan’s demand for an “all-hands-on-deck” collective response differ in mechanism but converge on principle: AI adoption without clear rules is untenable. For content industries worldwide, this means developing integrated AI governance that combines corporate-internal oversight with creator-community external accountability.

The Road Ahead

The events of early February 2026 mark a threshold. Hollywood’s AI transformation has moved beyond experimentation into institutionalization. Studios are elevating AI to the highest levels of corporate strategy. Creators are organizing collective, systematic responses. And between these two forces, a new equilibrium is struggling to take shape.

As Kwan put it, with a filmmaker’s instinct for the essential tension: “Building something is always harder than tearing it down, and right now tearing things down is so much easier.”

For content industries globally—including South Korea’s increasingly influential entertainment sector—the question is whether they can position themselves on the building side of that equation. The answer will shape the competitive landscape for years to come.

This analysis is based on reporting by Variety (Feb. 5, 2026, Rebecca Rubin), StreamTV Insider (Feb. 6, 2026, Bevin Fletcher), and The Hollywood Reporter (Feb. 6, 2026, Chris Gardner). Analysis and commentary by K-EnterTech Hub. This document does not constitute investment advice.