Insights
AI Compliance and the General Counsel’s Playbook
By Ritvik Lukose
Published on: 12 November 2024
AI is here and progressing rapidly. Organisations that build and deploy AI systems face a significant regulatory shift and must act now to align with emerging standards.
Globally, governments are introducing AI legislation focused on safety and the risk-based, ethical use of technology. The EU's AI Act and U.S. sector-specific guidelines exemplify legislative efforts pushing for technical standards, robust governance frameworks, and data quality and privacy protocols.
Current AI regulations adopt a risk-based approach. The EU's AI Act, for example, categorises AI applications by their use and potential impact. Businesses therefore need both risk-based and role-based strategies for managing the risks posed by an AI system and its provider. High-impact applications, particularly in areas like healthcare and finance, must comply with safety and transparency standards, which includes managing AI training data and maintaining technical records. Similarly, the U.S. AI Bill of Rights and the UK's AI Rulebook set guidelines for transparency, accountability, and human oversight.
Issues like copyright, data privacy, and algorithmic bias are emerging in AI litigation, highlighting the legal complexities of AI. Companies therefore need to embed AI safeguards into project governance from the outset, across ideation, scoping, contracting, and launch. A checklist-driven approach throughout the AI project lifecycle can help delineate regulatory obligations and ensure compliance at each step.
Register for our upcoming webinar ‘Navigating AI Legislations: Anticipating Impact and Preparing for Integration’ on November 14th.
We are also seeing the emergence of AI governance committees and boards.
Recently, IBM established an AI Ethics Board, co-chaired by Christina Montgomery (VP, Government & Regulatory Affairs, former Assistant General Counsel and Chief Privacy & Trust Officer).
Wipro has created a Responsible AI taskforce, led by Ivana Bartoletti, its Chief Privacy and AI Governance Officer. The Wipro Policy on Responsible Use and Development of AI clearly sets out the responsibilities of different functions, including Legal. The document's revision history points to the key role played by the General Counsel in finalising and approving the AI governance policy.
In-house legal departments uniquely bring both a 'horizontal view' across the organisation and an 'outside-in' perspective on decisions. Beyond staying on top of a rapidly evolving legal and compliance framework for AI use, the General Counsel must play a key role in shaping AI policies for the corporation, and potentially for the sector in which it operates. AI-related risk management will soon be one of the key strategic areas where a General Counsel can help foster a culture of accountability and ethical standards in AI deployment across the organisation.
Saibal Mukherjee, a Switzerland-based in-house lawyer, will address current developments in artificial intelligence and data governance in our upcoming webinar 'Navigating AI Legislations: Anticipating Impact and Preparing for Integration' on November 14th. The talk will cover a comparative overview of the AI regulatory and legislative landscape, its impact on legal departments, and actionable steps for translating legislation into simple governance frameworks.