
Summary Bullets:
• With the passage of the EU AI Act, staying on top of AI ethics issues will become increasingly important for multinational organizations.
• The need for individuals who can help organizations adapt business processes to meet evolving ethical AI requirements will become increasingly urgent.
The EU AI Act is groundbreaking legislation that strives to hold organizations more accountable for their use of artificial intelligence. It categorizes use cases by risk, stipulates greater oversight of riskier AI use cases, bans certain use cases outright, and requires increased transparency over the use of the technology, among many other requirements. While these new obligations provide much-needed consumer protections, they create additional complexity for enterprises already struggling to scale their use of AI. To meet the requirements outlined by the EU AI Act, organizations operating in Europe must start devising a strategy to enhance documentation and oversight of AI technology.
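The Act's tiered approach can be illustrated with a minimal sketch. The tier names below follow the Act's risk categories, but the example use cases and their mappings are illustrative assumptions only, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # banned outright, e.g., social scoring
    HIGH = "high-risk"            # subject to strict oversight
    LIMITED = "limited-risk"      # transparency obligations apply
    MINIMAL = "minimal-risk"      # largely unregulated

# Illustrative mapping only -- real classification requires legal review.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an illustrative tier; treat unknown cases as HIGH pending review."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier reflects a conservative compliance posture: an application stays under stricter oversight until it has been reviewed and explicitly classified.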
A top priority should be to invest in teams that can navigate the varied international regulatory landscape of the markets in which they operate. For the foreseeable future, enterprises will be dealing with a patchwork of requirements. There are no global standards for the use of AI, and even if there were, enforcement would be difficult. In the absence of broad international agreement, various countries and multinational groups have attempted to implement some form of ethical standards. The US issued an Executive Order for AI, Canada passed the Artificial Intelligence and Data Act, China passed the Ethical Norms for GenAI, Korea issued an Implementation Strategy for Ethical AI, and Singapore launched a Model Framework, to name a few. Staying on top of these local and regional laws will be highly resource-intensive, but essential for businesses with operations that cross borders.
Additionally, organizations need to ensure they have a strong understanding of where and how they are using AI. They need to establish a centralized team responsible for AI compliance and ethics, including management of AI assets, review and classification of applications, identification and management of data sources, and maintenance of audit trails for models in use. All of these steps are essential to good governance and responsible AI usage; however, the task is not as straightforward as it may sound. Over time, some lines of business may have adopted AI-driven software on their own, without the knowledge of a centralized IT team. Data used for analysis may reside in silos, or it may not be adequately labeled. To help with the challenge, tools that work across platforms to manage AI models are coming to market. However, since enterprises already face a skills gap when it comes to AI expertise, they will need to move quickly to upskill internal teams while also identifying external experts to help them.
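The inventory-and-audit-trail process described above can be sketched as a minimal model registry. The schema, field names, and events here are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a centralized AI asset inventory (illustrative schema)."""
    name: str
    owner_team: str
    risk_classification: str       # e.g., "high-risk" per internal review
    data_sources: list[str]        # where the model's training/input data lives
    audit_trail: list = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a timestamped event to this model's audit trail."""
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), event)
        )

# A central registry lets the compliance team see every model in use,
# including shadow-IT adoptions surfaced by individual lines of business.
registry: dict[str, ModelRecord] = {}

def register_model(record: ModelRecord) -> None:
    registry[record.name] = record
    record.log("registered with central AI compliance team")

# Usage: record a model a line of business adopted on its own.
model = ModelRecord(
    name="invoice_classifier",
    owner_team="finance",
    risk_classification="high-risk",
    data_sources=["erp_exports"],
)
register_model(model)
model.log("data sources identified and reviewed")
```

Even a lightweight registry like this gives the centralized team the raw material the Act's documentation obligations call for: what is running, who owns it, what data it touches, and when each review happened.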
And finally, although not specifically addressed in the EU AI Act, organizations should evaluate their stance on other issues related to AI and ethics. For example, the issue of copyright infringement in training data remains unresolved and could have a profound impact on select industries. Additionally, AI processing is resource-intensive and can affect an organization's carbon footprint and sustainability objectives.
Staying on top of issues related to AI and ethics will become increasingly important. Furthermore, the goalposts will likely shift across borders and with the passage of time. And while the industry has long talked about the shortage of AI experts in data science, there will now be a need for individuals who can help organizations adapt business processes to meet evolving ethical AI requirements.
