This interview analysis is sponsored by CCC and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
AI is advancing across enterprise workflows faster than governance frameworks and licensing structures can keep pace. Gaps in copyright compliance, governance, data provenance, and licensing are creating real exposure and slowing the responsible, scalable development of AI.
AI is now treated as a material enterprise risk, with AI‑related disclosures in S&P 500 filings rising from 12% in 2023 to 72% in 2025, according to an analysis by The Conference Board and ESGAUGE. The report identifies unresolved copyright, provenance, and licensing issues as sources of legal, regulatory, and reputational exposure as AI adoption accelerates.
Independent research shows governance gaps are systemic. A joint review by MIT CSAIL and MIT FutureTech finds that even the most comprehensive AI risk frameworks miss roughly 30% of known risk categories. MIT Sloan experts conclude that AI adoption has “exceeded the operational capabilities of most organizations,” leaving risk programs underdeveloped.
Recent high‑profile litigation underscores the exposure created by ungoverned IP and copyright practices in AI workflows. A Harvard Law Review analysis of The New York Times v. OpenAI details the Times’s allegation that OpenAI and Microsoft used a “mass of Times copyrighted content” to train GPT models without permission — a direct example of unlicensed training data and unverifiable provenance. The case illustrates how unclear rights and licensing gaps can quickly escalate into legal, operational, and reputational risk for organizations deploying or relying on AI systems.
Emerj’s ‘AI in Business’ podcast recently hosted conversations with Roanie Levy of CCC, Lauren Tulloch of CCC, and Nina Edwards of Prudential Insurance. Their discussions highlight how traditional AI practices expose organizations to copyright and data‑use risks, and why defensible strategies — from implementing proactive licensing solutions to red‑light/green‑light governance — are becoming essential in regulated sectors.
This interview analysis outlines how disciplined governance, clear policies, and integrated compliance workflows can support safe, scalable AI adoption in regulated sectors, with emphasis on:
- Modernizing copyright compliance for AI adoption: Updating legacy IP and licensing frameworks into AI‑aware governance models that reduce legal exposure and ensure compliant use of internal and external content across regulated workflows.
- Operationalizing responsible AI across daily execution: Embedding intuitive guardrails, clear escalation paths, and rigorous vendor oversight so employees can manage copyright and data‑use risks confidently without slowing decision cycles.
- Building scalable licensing and provenance foundations: Establishing enterprise‑wide visibility, structured risk tiers, and strategic licensing partnerships to support defensible AI development and protect regulatory, operational, and brand integrity.
Listen to the full episodes from the series below:
Episode 1: Why Regulated Industries Must Rethink Copyright and AI – with Roanie Levy of CCC
Guest: Roanie Levy, Licensing and Legal Advisor, CCC
Expertise: AI Governance & Risk Strategy; Intellectual Property; Generative AI Adoption; Corporate Digital Transformation
Brief Recognition: Roanie Levy, Licensing and Legal Advisor at CCC, combines over 20 years of intellectual property and copyright law expertise with a strong entrepreneurial and technological background. As Access Copyright’s former President and CEO, Levy successfully navigated complex legal landscapes while driving innovation and growth. Her deep understanding of technology’s impact on the creative industries informs her current focus on the ethical and responsible use of AI. At CCC, she supports initiatives to develop licensing frameworks that balance technological advancement with protecting creators’ rights, ensuring that AI technologies are deployed transparently and fairly.
Episode 2: Copyright Risk in Financial Services and the Rise of Responsible AI – with Lauren Tulloch of CCC
Guest: Lauren Tulloch, Vice President & Managing Director, CCC
Expertise: Corporate Compliance & Licensing Solutions; Enterprise Product Strategy; Acquisition Integration
Brief Recognition: Lauren Tulloch is Vice President & Managing Director at CCC. In that role, she is responsible for the Corporate Business Unit, which includes copyright licenses, the RightFind product suite, and managed knowledge services. Prior, Tulloch held several product management leadership roles in the organization. Before joining CCC, she served as a group publisher at a healthcare education & training company. Tulloch began her career as a newspaper reporter and editor. She holds a Bachelor’s degree in journalism and political science from Boston University.
Episode 3: Copyright & Compliance for Enterprise AI: From Demos to Defensible – with Nina Edwards of Prudential Insurance
Guest: Nina Edwards, Vice President of Emerging Technology and Innovation at Prudential Insurance
Expertise: Enterprise AI Strategy, ROI Metrics & Scaling, Emerging Technology Leadership, Applied Intelligence, Financial Services Innovation
Brief Recognition: Nina Edwards is Vice President of Emerging Technology & Applied Innovation at Prudential Financial, where she drives AI strategy and scaling initiatives across the enterprise. She previously served as Global Chief of Staff for Accenture’s Applied Intelligence practice, supporting growth and strategy in data, AI, machine learning, and more. Her career spans strategy development, partnerships, financial planning, performance metrics, executive reporting, and operations across financial services.
Modernizing Copyright Governance for AI Adoption
As enterprises adopt generative AI, they are discovering that modern AI workflows collide with copyright and licensing structures that were not originally designed with machine learning, model training, or automated content generation in mind.
In the first episode of the series, Roanie Levy, Licensing and Legal Advisor at CCC, underscores that this is a structural, not superficial, mismatch. Generative AI collapses traditional human-centered copyright use categories — quoting, referencing, summarizing, transforming — making legacy corporate policy frameworks hard to apply consistently in model-driven contexts, where machines ingest and generate content in ways those policies were not designed to address.
She notes that enterprises need governance models that explicitly recognize machine use and continuous ingestion, not just human reading or one‑time reuse. To modernize at scale, Levy points to two practical mechanisms:
- Collective licensing streamlines rights across large content catalogs and reduces the need for one‑off negotiations.
- Integrated rights-checking tools and workflows make it easy for employees to verify whether content is licensed for AI use before entering it into AI workflows, reducing inconsistent ad-hoc decisions across teams.
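The in-workflow rights check Levy describes can be pictured as a simple gate that runs before content enters an AI pipeline. The sketch below is purely illustrative: the registry contents, use-category names, and function names (`RIGHTS_REGISTRY`, `check_ai_rights`) are assumptions, not any real CCC or RightFind API.

```python
# Hypothetical sketch of an integrated rights check: before content enters an
# AI workflow, look up whether its license covers the intended machine use.
# All identifiers and registry entries here are illustrative assumptions.

RIGHTS_REGISTRY = {
    # content source id -> set of AI uses the license explicitly permits
    "vendor-report-123": {"internal_summarization", "retrieval_embedding"},
    "news-wire-feed": {"human_reading"},  # paid, but no AI uses licensed
}

def check_ai_rights(source_id: str, intended_use: str) -> bool:
    """Return True only if the source's license explicitly covers the use."""
    licensed_uses = RIGHTS_REGISTRY.get(source_id, set())
    return intended_use in licensed_uses

# A workflow would call this gate before ingestion; a False result
# routes the request to escalation rather than silently proceeding.
print(check_ai_rights("vendor-report-123", "retrieval_embedding"))  # True
print(check_ai_rights("news-wire-feed", "fine_tuning"))             # False
```

The design point is that the check is cheap and in-flow, so verifying rights becomes the default path rather than an extra step employees must remember.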
Lauren Tulloch, Vice President & Managing Director at CCC, connects these conceptual gaps to everyday enterprise reality — particularly in financial services. Institutions frequently fine‑tune models on a mix of internal materials and subscription‑based external sources, but many enterprise agreements were written for human consumption, not computational use.
Tulloch emphasizes the crucial need to determine whether a content license includes AI uses at all, and, if so, whether the covered uses are sufficient for your purposes — for training, fine‑tuning, prompting/embedding, automated summarization, and derivative outputs. Even paid sources can remain off‑limits for AI workflows if the underlying licenses aren’t broad enough.
She adds that much of this exposure originates in procurement, where historical contracts rarely contemplate machine use. Modernization therefore requires standardized AI clauses and renewal playbooks, so that rights are negotiated proactively rather than discovered reactively by technical teams.
Tulloch stresses that this isn’t about employees ignoring rules; it’s about policies built for a different technological era. Setting the stage for responsible adoption, she explains:
“Copyright cannot be an afterthought; it needs to be a foundational element of a responsible AI program. Most employees already understand that large language models require content to work, so introducing the idea that someone owns that content is actually not difficult. From there, you can build education and processes that help employees understand what’s appropriate, what’s licensed, and what needs escalation.”
— Lauren Tulloch, Vice President & Managing Director at CCC
Drawing on Levy’s governance framework and Tulloch’s licensing guidance, the following controls operationalize modernization in practice, aligning rights with real AI use and minimizing downstream fixes:
- Rewrite legacy corporate policies to explicitly cover training, fine‑tuning, prompting/embedding, automated summarization, derivative outputs, and internal vs. external use.
- Ensure that vendor agreements and standardized procurement templates grant the AI rights your use cases actually require.
- Leverage direct and collective licensing to secure comprehensive rights.
- Design compliance processes to be frictionless for employees, so verifying content rights becomes a routine habit rather than a burden.
- Embed copyright checks into responsible AI frameworks so rights validation happens before content enters any AI workflow.
Operationalizing Responsible AI Across Daily Execution
Enterprises often underestimate the degree to which responsible AI depends on daily behavior and workflow design, not just policy language. The goal is to make the right thing the easy thing, so that compliance becomes a habit rather than a hurdle.
Roanie Levy emphasizes that most exposure doesn’t come from intentional misuse; it comes from teams moving fast without clear, accessible guidance. She warns of an “illusion of compliance” in internal‑only environments: content that seems contained during experimentation can later flow into customer‑facing outputs, creating risk after the fact.
Levy also notes that inconsistent interpretations across teams — not just individuals — are a major operational failure mode, which is why guidance must be embedded where people work, not buried in documentation.
“You want to have tools that make it easy for your staff to be able to check the rights regarding a given piece of content before they use it in an AI workflow. And you want to make it easy, because if you don’t have those three elements in place — policy, licensing, and tools — compliance will become a burden rather than a habit. You want to turn it into a habit.”
— Roanie Levy, Licensing & Legal Advisor at CCC
Lauren Tulloch observes that confusion typically arises in gray zones that employees can’t resolve on their own, for example, whether summarizing a licensed report through an AI tool is permitted, or when a “quick check” evolves into workflow drift (from a sentence to a paragraph to an entire document). She argues that employees don’t need more rules; they need predictable processes: simple rights checks, clear escalation paths with SLAs, and visibility into who approves what.
Tulloch also highlights vendor oversight as a daily discipline: maintaining an approved vendor list, surfacing permitted uses per vendor, and ensuring external tools are configured to respect organizational constraints.
In the final episode of the series, Nina Edwards broadens the lens to focus on enabling fast, safe experimentation. She emphasizes the importance of centralized intake and cross-functional review structures that consistently capture AI requests, data needs, and risk factors, bringing legal, compliance, and technology teams together early in the process.
Edwards distinguishes demo-quality from production-grade AI and warns against allowing experimentation to move into operations without appropriate governance gates. She advocates for review processes scaled to the level of risk — matching controls to impact and data sensitivity — along with separate test environments and clear approval timelines so governance accelerates execution instead of slowing it down.
Drawing on the combined operational insights from Levy, Tulloch, and Edwards, the following workflow controls help translate policy into consistent daily behavior without adding friction:
- Integrate rights‑checking directly into daily tools so validation happens in‑flow across browsers, productivity apps, and code assistants.
- Publish clear escalation paths with response SLAs and named contacts, supported by a simple intake that captures purpose, data, model, and intended audience.
- Create centralized intake pathways for AI requests and route them through cross‑functional triage spanning legal, compliance, and technology to prevent shadow AI and duplicate efforts.
- Define risk tiers with right‑sized controls, using pre‑approved patterns for low‑risk work and stricter review for higher-impact or sensitive data use cases.
- Provide isolated sandboxes with guardrails, including data egress limits and restricted connectors, to enable safe experimentation.
- Embed guardrails in tools by default, such as pre‑approved models and connectors, privacy‑preserving defaults, and in‑product guidance that nudges correct choices.
- Maintain an approved vendor list with permitted uses surfaced at the point of choice, and regularly review vendor behavior for alignment with policy.
- Offer scenario‑based training and office hours, and appoint team‑level champions who reinforce good habits and resolve gray zones quickly.
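The risk-tiering and intake controls above can be sketched as a small triage function. The tier names, thresholds, and request fields below are assumptions made for illustration; a real program would define these with legal and compliance.

```python
# Illustrative risk-tier routing for AI requests: low-risk work follows a
# pre-approved pattern, while higher-impact or sensitive-data use cases get
# stricter cross-functional review. Field names and tiers are hypothetical.

def triage(request: dict) -> str:
    """Map an AI request to a review path based on audience and data sensitivity."""
    if request["data_sensitivity"] == "high" or request["audience"] == "external":
        return "full_review"      # legal + compliance + technology sign-off
    if request["uses_approved_pattern"]:
        return "fast_track"       # pre-approved model, connector, and use case
    return "standard_review"      # routine cross-functional triage with an SLA

# Example: an internal, low-sensitivity request on an approved pattern
# takes the fast track; anything touching sensitive data does not.
print(triage({"data_sensitivity": "low", "audience": "internal",
              "uses_approved_pattern": True}))   # fast_track
print(triage({"data_sensitivity": "high", "audience": "internal",
              "uses_approved_pattern": True}))   # full_review
```

Matching review depth to risk in this way is what lets governance accelerate execution rather than slow it: most requests clear quickly, and reviewer attention concentrates where exposure is real.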
Building Scalable Licensing and Provenance Foundations
Provenance is emerging as a backbone of defensible AI at scale — an idea Nina Edwards highlights in her episode as enterprises move toward provenance-by-default: approved models, licensed inputs, and origin signals embedded into workflows, enabling lineage to be validated quickly.
When inputs are licensed, models are approved, and outputs are traceable, risk teams can evaluate impact more quickly and consistently:
“There is something building around the defensible AI ecosystem where every output, whether it’s code, a customer letter, or a workflow, has verifiable lineage. That means provenance by default — approved models, licensed inputs, watermarked inputs. The point is to make lineage verifiable in minutes, not weeks.”
— Nina Edwards, Vice President of Emerging Technology & Innovation at Prudential Insurance
Lauren Tulloch maps the licensing foundation required to support that vision at scale. Financial institutions depend on external expert content for forecasting, research, and risk modeling, yet many agreements do not extend to AI use. She notes that rights often differ across several dimensions, including:
- Model stage — pretraining, fine‑tuning, retrieval/embedding, and automated summarization.
- Audience — internal analysis versus external distribution.
- Downstream handling — embedding retention, vector storage, and output caching, which frequently require explicit negotiation.
Without portfolio‑level visibility — a rights matrix mapping each content source to permitted AI uses and constraints — teams risk building systems on unclear or insufficient rights.
“There are two failure points to watch: inputs you don’t have rights to, and outputs that look too much like someone else’s work,” says Roanie Levy in her episode.
That dual exposure means enterprises must manage input risk (unlicensed training or prompting) and output risk (look‑alike generation or insufficient human authorship). To remain defensible at scale, organizations should enforce human review for externally facing materials and apply tiered safeguards for higher‑risk content categories, such as premium research and proprietary market data.
Together, the guests point to a clear set of actions that help align licensing with real AI use and keep outputs defensible:
- License the full AI lifecycle: ensure agreements cover pretraining, fine‑tuning, retrieval/embedding, automated summarization, derivative outputs, and both internal and external use.
- Build a rights matrix: maintain an enterprise catalog showing each content source, the AI uses it allows, and any downstream constraints such as embedding retention or output caching.
- Work with licensing partners: collaborate with aggregators and collective licensing organizations to secure standardized secondary rights and reduce one‑off negotiations.
- Implement provenance telemetry: log model versions, approved connectors, prompt artifacts, content‑source identifiers, and approval tickets so lineage can be validated in minutes.
- Strengthen output safeguards: use human‑in‑the‑loop review for external‑facing outputs; apply stricter safeguards to premium research, news wires, and proprietary datasets.
- Maintain an approved catalog with attestations: keep a registry of approved models, datasets, and connectors, including who approved them and the constraints under which they were approved; ensure vendor terms align with enterprise rights.
- Apply origin signals consistently: establish when watermarks or other provenance markers are required for customer‑facing outputs and published materials.
- Review and re‑validate regularly: refresh licensing, telemetry, and provenance controls as regulations evolve, and re‑check artifacts before promoting anything from demo to production.
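The provenance-telemetry action above amounts to attaching a structured lineage record to every AI output. The sketch below is a minimal illustration; the field names and example values are assumptions, not a standard schema.

```python
# Hypothetical provenance record logged alongside an AI output so lineage
# (model, connector, content sources, approval) can be validated in minutes.
# Field names and values are illustrative assumptions.
import datetime
import json

def provenance_record(model_version, connector, sources, approval_ticket):
    """Build a lineage record capturing the telemetry fields named above."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,          # approved model used
        "approved_connector": connector,         # approved data connector
        "content_sources": sources,              # licensed-source identifiers
        "approval_ticket": approval_ticket,      # link back to the sign-off
    }

record = provenance_record("model-v3.2", "research-db-connector",
                           ["vendor-report-123"], "TICKET-4521")
print(json.dumps(record, indent=2))  # stored with the output for audit
```

Because each record carries source identifiers and an approval ticket, a risk team can trace any output back to licensed inputs and an approved configuration without reconstructing the workflow after the fact.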