Keeping Up with AI Regulations in a New Age of Data Privacy – with Leaders from OneTrust, Microsoft, and TELUS

Riya Pahuja

Riya covers B2B applications of machine learning for Emerj across North America and the EU. She previously worked with the Times of India Group and as a journalist covering data analytics and AI. She resides in Toronto.


This interview analysis is sponsored by OneTrust and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

The rapid evolution of privacy laws, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), presents significant legal risks for businesses handling customer data, especially as AI technologies become more integrated into operations. Organizations must continuously adapt their data management practices to comply with these changes.

Analysis from the University of California, Berkeley Center for Long-Term Cybersecurity highlights that the GDPR and CCPA impose fines and penalties to punish infringing companies and deter noncompliance by introducing new types of risk. Violating the GDPR can result in a fine of up to 20 million euros or 4% of worldwide annual revenue, whichever is greater.

In contrast, CCPA violations carry civil penalties of up to $7,500 per intentional violation. Companies must take these consequences seriously, as noncompliance can undermine consumer trust and lead to costly legal battles.

The International Association of Privacy Professionals (IAPP) emphasizes the importance of consumer trust in data privacy. Their research indicates that many consumers are concerned about privacy online, affecting behaviors like phone use, web browsing, and purchasing decisions. 

Mishandled data not only invites penalties but also causes long-term brand damage, adding lost business to the cost of noncompliance.

Emerj Artificial Intelligence Research recently sat down with Stephanie McReynolds, Vice President of Product Marketing at OneTrust; Dean Carignan, Partner Program Manager, Office of the Chief Scientist at Microsoft; and Moutie Wali, Director of Digital Transformation and Integrated Planning at TELUS, to talk about the challenge of adapting privacy laws to AI’s rapid evolution.

Together, each executive emphasized the importance of safeguarding sensitive customer data from exposure to the internet and ensuring privacy while adopting AI technologies.

This article examines the following key insights from these conversations:

  • Conducting an AI systems audit to ensure compliance
  • Implementing responsible AI frameworks with clear principles and empowered teams
  • Fostering collaboration among industry leaders, AI experts, and policymakers for balanced AI regulation

Conducting an AI Systems Audit to Ensure Compliance

Episode: Keeping Up with AI Regulations in a New Age of Data Privacy – with Stephanie McReynolds of OneTrust

Guest: Stephanie McReynolds, Vice President, Product Marketing, OneTrust

Expertise: Product Marketing, Category Creation, Demand Generation

Brief Recognition: Stephanie McReynolds is the Vice President of Product Marketing at OneTrust. She earned her master’s degree in International Policy Studies from Stanford University. With over two decades of experience, she has built and scaled marketing, sales development, product marketing, and product management teams from the ground up to as many as 40 employees.

Stephanie discusses the significance of the EU AI Act as the first comprehensive legislation governing AI, drawing parallels to the GDPR and suggesting that it could create a “Brussels effect,” influencing global standards and practices. 

She emphasizes that even organizations not operating in the EU or serving EU residents should pay attention to how companies respond to both the AI Act and any new legislation, as regulations may have broader implications:

“What’s particularly pressing right now is that the act’s ban on unacceptable risk takes effect on February 2nd, 2025, and that’s less than six months away. This means that organizations need to document their AI systems in use because they may not know that they have an unacceptable risk application.

So even to determine that, you should start an inventory of all of your AI applications and AI projects – what the intention is, what data is being used with those applications – and start some of your assessments so that you have a handle on your inventory and can get a heads-up on whether you might have any unacceptable-risk applications in your organization.”

– Stephanie McReynolds, Vice President, Product Marketing, OneTrust
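
To make that first step concrete, the sketch below shows what a single record in such an AI inventory might capture: the system, its owner, its intended purpose, the data it touches, and where its risk assessment stands. The field names and risk tiers are illustrative assumptions for this article, not OneTrust’s product schema or the AI Act’s formal legal categories.

from dataclasses import dataclass, field
from enum import Enum

# Illustrative risk tiers, loosely echoing the EU AI Act's tiered structure;
# the Act's actual categories and criteria are defined in the legislation itself.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNASSESSED = "unassessed"

@dataclass
class AIInventoryEntry:
    """One record in an AI systems inventory: the application, its intent,
    the data it uses, and the current status of its risk assessment."""
    system_name: str
    business_owner: str
    intended_purpose: str
    data_sources: list = field(default_factory=list)
    contains_personal_data: bool = False
    risk_tier: RiskTier = RiskTier.UNASSESSED

# Hypothetical entry for a customer-support chatbot.
entry = AIInventoryEntry(
    system_name="support-chatbot",
    business_owner="Customer Care",
    intended_purpose="Answer routine billing questions",
    data_sources=["CRM tickets", "billing records"],
    contains_personal_data=True,
)

# Flag anything unassessed or in the banned tier for immediate review.
needs_review = entry.risk_tier in (RiskTier.UNASSESSED, RiskTier.UNACCEPTABLE)
print(f"{entry.system_name}: needs review = {needs_review}")

Even a lightweight record like this gives governance teams a starting point for the assessments McReynolds describes, before any dedicated tooling is in place.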

She also suggests organizations start training on responsible AI use, ensuring that everyone with access to generative AI chatbots understands the concept of responsibility. 

Such a foundational step, McReynolds says, is crucial, as governance teams have traditionally relied on training, attestation, and manual intervention to enforce data governance policies. However, she warns that reliance on training may create a compliance gap between policy setting and execution, a challenge that has persisted with regulations like GDPR.

Stephanie also notes that AI’s rapid growth is changing the landscape of data use. AI applications require more data than traditional analytics and reporting tools, incorporating both structured and unstructured data such as documents, videos, and audio files. This shift requires business and technology leaders to rethink their data governance frameworks and infrastructures to manage the increased demand effectively.

Implementing Responsible AI Frameworks with Clear Principles and Empowered Teams

Episode: Lessons from Microsoft’s Responsible AI Journey – with Dean Carignan of Microsoft

Guest: Dean Carignan, Partner Program Manager, Office of the Chief Scientist, Microsoft

Expertise: Innovation management, Responsible AI, Research 

Brief Recognition: Dean Carignan is the Partner Program Manager at Microsoft’s Office of the Chief Scientist, where he leads all program management efforts for the newly established organization focused on advancing scientific thinking within Microsoft and society. Dean has been with Microsoft since 2004 and holds an MBA in Strategic Management from INSEAD.

Dean shares with Emerj’s audience an initiative called Aether (AI, Ethics, and Effects in Engineering and Research), created by bringing together researchers, engineers, and policy experts to anticipate challenges and develop solutions. The initiative became the foundation for Microsoft’s comprehensive responsible AI framework, now supported by over 350 dedicated professionals, including more than 100 focused exclusively on ethical AI practices.

He believes the challenge of keeping pace with AI’s rapid evolution is not just technical — it also involves navigating the complex landscape of data privacy and ethical regulations. 

Traditional privacy laws are not designed to accommodate the swift advancements in AI, and as Dean points out, “Traditional risk management systems are not designed to change every two to three months or to incorporate new risk categories every two to three months, and that is needed in AI because of the pace of change.”

Dean then emphasizes the importance of guiding principles as a “North Star” in the rapidly evolving AI landscape. He compares these principles to a constitution, clarifying what a company stands for, the boundaries it will not cross, and how it navigates day-to-day decision-making. 

With AI models advancing at an unprecedented pace, often monthly, such principles are essential for maintaining consistency and ensuring decisions align with the organization’s core values.

In the end, he outlines key steps for advancing responsible AI within an organization:

  1. Define core principles based on company values and ensure clear organizational communication. 
  2. Assess practices using tools like Microsoft’s Responsible AI Maturity Model to identify areas for growth.
  3. Leverage both external and internal resources, recognizing the importance of ongoing research and support from various sectors. 
  4. Build a passionate team by identifying employees enthusiastic about AI and empowering them to lead awareness and implementation efforts.

“I’m a former consultant, so I’ve always appreciated maturity models as a way to understand where you’re starting. This tool is designed to do just that—it asks a series of questions about your current efforts in responsible AI, if any, and uses your responses to place you within one of five maturity phases. From there, it provides clear steps and actions to help you advance to higher levels.

This framework is based on three years of internal research, drawing from insights across Microsoft and the journeys different teams have taken. It’s a valuable way to assess your starting point and develop a roadmap for strengthening your AI capabilities.”

– Dean Carignan, Partner Program Manager, Office of the Chief Scientist at Microsoft

Fostering Collaboration Among Industry Leaders, AI Experts, and Policymakers for Balanced AI Regulation

Episode: Overcoming Barriers to AI Adoption in Telecom and Beyond – with Moutie Wali of TELUS

Guest: Moutie Wali, Director of Digital Transformation and Integrated Planning, TELUS

Expertise: Innovation, Digital Transformation, AI

Brief Recognition: Moutie Wali is the Director of Digital Transformation and Integrated Planning at TELUS, where he played a key role in streamlining complex planning processes, saving 7,200 employee hours annually. With an MBA in Organizational Leadership from the University of Victoria, Moutie has been with TELUS for 13 years, driving innovation and efficiency across the organization.

In his episode, Moutie highlights two key challenges in AI adoption for telecom: 

  • Safeguarding sensitive customer data from internet exposure
  • Overcoming data fragmentation across various systems

To leverage AI effectively, he urges businesses to ensure data privacy while digitizing and centralizing their data for seamless access and analytics.

He also talks about balancing the need for AI innovation with ensuring data privacy and security. Along the way, Moutie notes that another challenge in regulating AI is distinguishing between genuine risks and unfounded fears or myths. Separating fact from stigma in the enterprise requires experts with deep AI knowledge and experience to work closely with regulators to guide decision-making.

He emphasizes that regulation should not be a unilateral effort by regulatory bodies. Instead, it should grow out of collaborative communities that include industry leaders, AI experts, and policymakers. Without such collaboration, there is a risk of overregulation, which could hinder AI’s ability to drive economic growth and improve organizational capabilities. 

“Regulators do not want to lag behind. However, they also need to engage industry leaders to analyze and strike the right balance between protecting citizens and leveraging AI as a driver of economic growth.

Achieving this balance requires building communities around AI and fostering collaboration between regulators and industry leaders. It must be a collective effort to ensure responsible AI development while unlocking its full potential.”

– Moutie Wali, Director of Digital Transformation and Integrated Planning at TELUS

Moutie acknowledges that regulators face pressure to act quickly because of the rapid pace of AI development. However, they must also carefully balance two key objectives:

  • Protecting citizens from potential risks posed by AI
  • Fostering economic growth by leveraging AI’s transformative potential

Achieving this balance requires input from industry leaders who can help regulators understand AI’s capabilities and limitations while advocating for policies that enable innovation.
