Responsible AI: Managing Risk in an Evolving Regulatory and Legal Landscape


Artificial Intelligence (“AI”) is driving innovation across industries and is playing an increasing role in everyday life. AI (and, more broadly, algorithms) is being used in diagnostics, AI-enabled medical devices, device manufacturing, drug discovery and manufacturing, industrial manufacturing, smart home and wearable devices, and vehicles. AI has the potential to benefit society in a number of ways, including by boosting efficiency, providing invaluable insights, and informing decisions. But to borrow from the likes of Winston Churchill (or Spider-Man), with great AI capabilities comes great responsibility.

Many of the risks and benefits of AI systems are unique in comparison to other technologies. Regulators, governmental agencies, and consumer advocates are keenly focused on addressing unintended consequences that may result from the development and use of algorithms and AI. These issues, like the technology, are complex, and policymakers, stakeholders, and regulators are grappling with how best to address them.

For example, the National Institute of Standards and Technology (“NIST”) is developing an AI Risk Management Framework that addresses risks in the design, development, use, and evaluation of AI systems. The FDA has released guidance on AI- and machine learning-enabled medical devices and has long supported the use of AI in drug development. The Department of Health and Human Services recently issued a notice of proposed rulemaking that would prohibit the use of discriminatory clinical algorithms under the Affordable Care Act. And last month, the FTC issued an advance notice of proposed rulemaking seeking public comment on automated decision-making and algorithmic discrimination, as well as other data-related issues.

Though the regulatory, public policy, and legal landscapes are far from settled, companies can be proactive in managing the risks associated with the development and use of AI systems through a thoughtful, responsible AI program. A well-designed responsible AI program (or risk management framework) will incorporate well-accepted principles concerning the development and use of AI, many of which are at the core of ongoing regulatory and public policy discussions. There is no one-size-fits-all responsible AI program; how and to what extent the principles apply will depend on the AI system and how a company uses it. Below, we identify some of those guiding principles and provide practical considerations for incorporating them into a company’s risk management framework.

Principles Supporting Responsible AI

Stakeholders, industry experts, and policymakers tend to agree that certain principles should guide the development and use of AI systems. These principles include reliability, fairness, explainability, transparency, accountability, privacy, and security. Organizations may call these principles by different names or categorize them differently, but the substance of the principles is generally similar.

Reliability: AI is expected to perform reliably and safely when deployed. This requires, among other things, an understanding of the operational factors and environment within which an AI system is expected to perform, evaluation of training and test data sets, consideration of generalizability where applicable, and determination of acceptable error rates for the AI system.

Fairness: Various stakeholders are concerned that the use of AI and algorithms can lead to unexpected outcomes, including biased decision-making. In a recent draft of its AI Risk Management Framework, NIST suggests there are three primary categories of AI bias – systemic, computational, and human. Companies can be proactive in addressing fairness concerns by reviewing data collection practices, identifying factors that could lead to unexpected outcomes, and employing diverse teams to design, develop, and evaluate AI systems.

Explainability: AI systems are complex, highly technical, and can be difficult to understand. Explainability means providing user-appropriate descriptions of how a model works and why an AI system reached a particular prediction or result.

Transparency: Transparency means enabling a user to understand when and how an AI system is being used. In the regulatory space, for example, this may include describing how decisions related to the AI system were made, including those pertaining to the design, development, intended use case, modeling, and deployment, among other things.

Accountability: Companies developing and deploying AI should understand the impact of their AI systems and how to assess their proper functioning throughout the systems’ lifecycle.

Privacy: Data is at the core of every AI system. Unsurprisingly, keeping data private and secure is of the utmost importance when developing and supporting an AI system. Companies are therefore expected to design and maintain AI systems in a way that protects user data and complies with existing privacy laws.

Security: Cybersecurity is top of mind these days, and it should remain top of mind when developing and deploying AI systems. Among other things, AI systems should be protected from adversarial attacks, environmental instability, and misuse.

Considerations for Developing Your Responsible AI Program

Responsible AI programs can often be built into an organization’s existing risk management framework. Many of these principles, including privacy and security, are likely already well-established in an existing framework. The overarching goal of a responsible AI program is to mitigate risk by making thoughtful, well-reasoned decisions related to AI that are consistent with the above principles, and to document the responsible AI journey. Below are some things to consider as you develop your responsible AI program.

  1. Identify goals and acceptable levels of risk at the outset.
  • Is there a clearly articulated goal or desired outcome for the AI system?
  • Have you identified the risks associated with the AI system, including those pertaining to fairness and safety?
  • Have you identified an acceptable level of risk? If so, how?
  • Is there a process in place for documenting the risks and impacts of the technology throughout the AI system’s lifecycle?
  • Are there diverse stakeholders and leadership involved in risk assessment?
  • Have you considered risks related to third-party technology and data?
  2. Validate and test.
  • How are you measuring the qualitative and quantitative risks of the AI system?
  • How are you documenting testing, evaluation, validation, and verification of the AI system?
  • Are you considering explainability when documenting testing, evaluation, validation, and verification?
  • Are you soliciting and incorporating feedback from end users and stakeholders?
  3. Evaluate and audit.
  • Is there a process in place for evaluating and auditing the AI system throughout its lifecycle? What cadence is appropriate for the AI system?
  • Is there an accountability process in place for when things go wrong?
  • Is there a system in place that empowers team members to think critically and voice concerns?
  • Are there any regulatory developments that impact the development, deployment, or use of the AI system?
  • How are you documenting evaluations and audits?
  • How often are you reevaluating your responsible AI program?
  4. Engage diverse stakeholders.
  • Is your team multi-disciplinary?
  • Do your stakeholders bring diversity of experience, expertise, and backgrounds?
  • Do you maintain a diverse team throughout the AI system’s lifecycle?

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.

About the Author: Hannah R. Anderson

Hannah Anderson brings enthusiasm and a critical eye to Faegre Drinker’s litigation team. She represents domestic and international manufacturing companies, medical device manufacturers, and other corporate clients in both product liability/mass tort and environmental litigation.

©2024 Faegre Drinker Biddle & Reath LLP. All Rights Reserved. Attorney Advertising.