Navigating Global AI Compliance


3 minute read | October 3, 2025

Christian Schröder, Shannon Yavorsky and Matthew Coleman joined Dr. Avishay Klein of Barnea for an in-depth discussion of AI compliance across jurisdictions.

The compliance challenge: Companies operating globally face a complex maze of emerging AI regulations. Our panel provided practical guidance for those navigating these fast-moving and divergent legal regimes.

United States: Operating in a Regulatory Patchwork

Shannon presented the U.S. perspective.

Current landscape: No comprehensive federal AI law exists, leaving companies to navigate a patchwork of sectoral and state laws.

State-level activity is extensive:

  • 160+ AI-specific laws addressing transparency and automated decision-making
  • Employment-focused laws in Illinois and New York City
  • Comprehensive AI frameworks in Colorado and Texas
  • Mental health bot regulations and chatbot disclosure requirements

Federal developments:

Litigation trends: Growing disputes over training data and fair use, online and biometric privacy suits, consumer protection claims, and employment discrimination cases.

Open questions: Whether the use of training data qualifies as fair use, and how to allocate liability when AI systems cause harm.

Israel: Innovation-Forward Policy Approach

Avishay presented the Israeli perspective.

Regulatory philosophy: No comprehensive AI act, with policy explicitly favoring innovation over restrictive regulation.

Current framework:

  • Non-binding national policy aligned with OECD principles.
  • Sectoral regulation beginning in financial services.
  • Existing laws already impact AI development and deployment.

2025 Privacy Protection Authority guidance:

  • Draft rules require DPOs, DPIAs and internal procedures for AI systems processing personal data.
  • Most controversial requirement: explicit consent for online data scraping.
  • Industry has strongly criticized this requirement as impractical.

Regulatory outlook: Israeli officials view separate AI rules as unnecessary since companies must already comply with EU and U.S. frameworks. Revised guidance is expected following the September 2025 consultation period.

European Union: Comprehensive Risk-Based Framework

Christian presented the European perspective.

The EU AI Act overview: A single framework designed to prevent member-state fragmentation and build trust in AI across the bloc.

Implementation timeline:

  • February 2025: Prohibitions and AI literacy obligations active.
  • August 2025: General-purpose AI model requirements effective.
  • August 2027: High-risk system rules fully applicable.

Key transparency requirements:

  • Disclosure when content is AI-generated.
  • Notice when individuals interact with AI systems.
  • A Code of Practice is currently being discussed.

Critical compliance note: The AI Act doesn’t replace the GDPR or the Data Act.

Getting Started with AI Governance

Comprehensive AI mapping:

  • Document all AI systems, their uses, data sources and purposes (a minimal register sketch follows this list).
  • Classify systems by regulatory risk level.
  • Update internal policies to reflect AI-specific requirements.
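For teams starting this mapping exercise, the inventory can be as simple as a structured internal register. The sketch below is a hypothetical illustration in Python: the record fields, risk tiers and example entry are assumptions chosen for illustration, not terminology drawn from any statute or from the panel.

```python
# Hypothetical sketch of an internal AI system register entry.
# Field names and risk tiers are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Illustrative tiers loosely modeled on a risk-based framework."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One row in an internal AI system inventory."""
    name: str
    business_use: str                      # what the system is used for
    data_sources: list[str] = field(default_factory=list)
    purposes: list[str] = field(default_factory=list)
    processes_personal_data: bool = False  # flags whether a DPIA may be needed
    risk_level: RiskLevel = RiskLevel.MINIMAL
    owner: str = ""                        # accountable team or role


# Example entry: an employment-screening tool would typically sit in a higher tier.
resume_screener = AISystemRecord(
    name="resume-screening-model",
    business_use="Ranks inbound job applications for recruiters",
    data_sources=["applicant CVs", "historical hiring decisions"],
    purposes=["candidate shortlisting"],
    processes_personal_data=True,
    risk_level=RiskLevel.HIGH,
    owner="HR technology team",
)

print(resume_screener.name, resume_screener.risk_level.value)
```

Keeping the register in a machine-readable form makes it easier to re-classify systems as rules change and to produce the written documentation regulators expect.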

Leadership structure:

  • Technology teams should lead governance efforts.
  • Legal and compliance provide support, not leadership.
  • Cross-functional approach is essential for success.

Formal accountability framework:

  • European regulators expect a documented organizational structure.
  • Recommended: a steering committee spanning legal, IT, product, HR and procurement.
  • Clear documentation of roles and responsibilities.

Accountability and Risk Management Strategies

Assigning responsibility:

  • Unlike privacy law, AI regulations do not define specific officer roles. Companies must internally assign roles and document accountability.
  • European regulators will expect written evidence of these allocations.

Risk mitigation priorities:

  • Assess legal compliance and perform data protection impact assessments where required.
  • Implement contracts, insurance and technical safeguards.
  • Establish regular monitoring and assessment procedures.

Immediate Action Items for Legal and Compliance Teams

Essential next steps from our panel:

  1. Define roles under the EU AI Act and other applicable frameworks.
  2. Maintain comprehensive documentation of AI systems and governance decisions.
  3. Update contracts to address AI-related risks and responsibilities.
  4. Ensure AI literacy training across the organization.

Bottom line: Developing or using AI now carries compliance obligations. Companies that establish robust governance frameworks today will be better positioned as regulations continue to evolve across jurisdictions.