What to Know About Implementing Artificial Intelligence Compliance Controls
In the last year, artificial intelligence has gone from vaporware to business-ready reality. From inboxes to manufacturing integration, machine learning is changing the way we at Dayspring think and work.
In fact, over 80% of this post was written and then re-vetted for accuracy by Bard, the Google chat-based AI tool.
As we have considered the use of AI in our marketing, our approach to FDA compliance, our financial tools, and our recommendations for security and privacy logging and monitoring, we’ve also discussed the need for a comprehensive compliance standard covering the varied uses of AI in critical business functions. We are not alone!
On October 30, 2023, President Biden issued an executive order on AI titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order establishes a number of new initiatives aimed at promoting the responsible development and use of AI, including:
- Establishing new standards for AI safety and security.
- Protecting Americans’ interests with respect to AI, including guarding privacy, policing unfair or deceptive AI practices, helping workers transition to new jobs in the age of AI, and working with international partners to develop and implement standards for the responsible development and use of AI.
- Promoting innovation by supporting research and development in AI.
- Standing up a White House AI Council to coordinate the implementation of the order across the federal government.
The order is a significant step forward in the federal government’s efforts to promote the responsible development and use of AI. It is important to note, however, that the order is still in its early stages, and it will take time for agencies to implement its provisions.
Why Artificial Intelligence Compliance Controls Are Needed
There are a number of reasons why regulating AI is important.
Make Impactful Decisions Without Bias
First, AI systems can make decisions that have a significant impact on people’s lives. For example, AI systems are used to make decisions about who gets a loan, who is eligible for healthcare, what drug studies are viable, and what information is actionable for data privacy concerns or reportable security breaches. It is important to ensure that these decisions are made fairly and without bias.
Avoid Actions With Unintended Consequences
Second, AI systems can be used to create new products and services with unintended consequences. For example, fully self-driving cars have so far been a bit of a dud, but driver-assist AI is now part of most luxury automotive lines.
It has been suggested that driver-assist AI leads to more distracted driving and more accidents than either fully autonomous or fully manual vehicles. It is important to assess the potential risks and benefits of new AI products and services before they are widely deployed.
Keep Control Over a Complex System
Third, AI systems are becoming increasingly complex and opaque. This makes it difficult for people to understand how AI systems work and to identify potential problems or fraudulent and criminal activity. Artificial intelligence compliance controls can help to ensure that AI systems are transparent and accountable.
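One practical building block for that transparency is an audit trail of automated decisions. The sketch below, a minimal Python example with illustrative field names, logs each model decision together with its inputs, model version, and score so that a reviewer can later reconstruct how an outcome was reached:

```python
# Minimal sketch: structured audit logging for AI decisions, so outcomes
# can be reconstructed and reviewed later. Field names are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def log_decision(model_version: str, inputs: dict, score: float, outcome: str) -> None:
    """Emit one structured, append-only audit record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "score": round(score, 4),
        "outcome": outcome,
    }
    audit_log.info(json.dumps(record))

# Usage: every automated decision leaves a reviewable trace.
log_decision("credit-model-1.3", {"income": 54000, "tenure_months": 18},
             score=0.8312, outcome="approve")
```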
Related Article: 2023 ISO Regulatory Compliance and Quality Management Standards Updates
What Should Be Regulated?
There are a number of distinct aspects of AI that could be regulated. Some potential areas of regulation include:
- Data collection and use: AI systems are trained on large datasets. It is important to ensure that this data is collected and used ethically and in compliance with privacy laws.
- Algorithm design and validation: AI systems are powered by algorithms. It is important to ensure that these algorithms are designed to be fair and unbiased, and that they are validated in real-world, performance-qualified situations (a minimal sketch of one such check follows this list).
- Transparency and accountability: AI systems can be complex and opaque. It is important to ensure that people can understand how AI systems work and that they are held accountable for their decisions.
- Safety and security: AI systems can be used to create new products and services that could pose safety and security risks. It is important to assess these risks before AI products and services are widely deployed.
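To make the validation point concrete, here is a minimal sketch of one fairness check, assuming pandas and illustrative column names ("group" for a protected attribute, "approved" for the model's decision). It computes a disparate-impact ratio; the 0.8 threshold echoes the "four-fifths rule" from US employment-discrimination analysis and is a rule of thumb, not a requirement from any AI standard:

```python
# Minimal sketch: disparate-impact check on model decisions.
# Column names and the example data are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
ratio = disparate_impact_ratio(decisions)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Potential bias: flag for human review and revalidation.")
```

A real validation suite would track many more metrics (calibration, false-positive parity, and so on), but even a single tracked ratio gives auditors a concrete, reproducible artifact.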
How Should AI Be Regulated?
There are a number of ways to regulate AI. One approach is to develop new laws and regulations specifically for artificial intelligence compliance controls. Another is to adapt existing laws and regulations to cover AI. It is also possible to adopt voluntary standards and guidelines for AI developers and users, such as NIST/ANSI or ISO/IEC controls.
Here are some specific examples of how AI could be regulated:
- Life Sciences: AI systems are increasingly being used to make decisions about patient care. Regulators could require that AI systems used in healthcare be tested for accuracy and bias and validated with real-world data. They could also require that AI systems be used in conjunction with human judgment.
- Transportation: AI systems are being developed for self-driving cars and other autonomous vehicles. Regulators could require that self-driving cars be equipped with safety features such as kill switches and emergency braking systems. They could also require that self-driving cars be tested extensively before they are deployed on public roads.
- Finance: AI systems are being used to make decisions about lending and investment. Regulators could require that AI systems used in finance be tested for bias and accuracy. They could also require that AI systems be used in conjunction with human judgment (a minimal sketch of such a human-review gate follows this list).
- Security: AI systems are being used to develop new identity management, data and log analysis techniques, surveillance technologies and digital behavior analysis. Regulators could restrict the use of these technologies to protect people’s privacy and civil liberties.
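Both the life-sciences and finance items above call for pairing AI output with human judgment. The sketch below shows one common pattern under assumed parameters; the 0.90 cutoff and the Decision type are illustrative, not prescribed by any regulator:

```python
# Minimal sketch: confidence-threshold gate for human-in-the-loop review.
# The 0.90 threshold and the Decision type are illustrative assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90

@dataclass
class Decision:
    outcome: str        # e.g., "approve" / "deny"
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    needs_review: bool  # True when a human must confirm the outcome

def gate(outcome: str, confidence: float) -> Decision:
    """Route low-confidence model outputs to a human reviewer."""
    return Decision(outcome, confidence, needs_review=confidence < REVIEW_THRESHOLD)

# Usage: only high-confidence decisions proceed automatically.
for outcome, conf in [("approve", 0.97), ("deny", 0.72)]:
    d = gate(outcome, conf)
    route = "human review queue" if d.needs_review else "auto-processed"
    print(f"{d.outcome} (confidence {d.confidence:.2f}) -> {route}")
```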
Related: Discover More About Regulatory Compliance Controls
Current Artificial Intelligence Compliance Controls
Both NIST and ISO/IEC have released a number of voluntary controls that organizations committed to AI development should consider very carefully. Many domestic and international laws and regulations are based on controls established by these organizations, and adopting them early may prove to be a leg up on future regulatory action.
- ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations, is an international standard that provides guidance on the governance of AI within organizations. The standard addresses the following topics:
- System scope: The standard covers the nature and mechanisms of AI, assets and risk, and roles and responsibilities.
- System lifecycle: The standard describes the lifecycle of an AI system, from the initial requirements gathering phase to the final retirement phase.
- System quality: The standard identifies the key quality attributes of an AI system, such as accuracy, reliability, robustness, and fairness.
- System security: The standard provides guidance on how to secure AI systems from cyberattacks and other threats.
- ISO/IEC 23053:2022, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML), applies to organizations of all sizes and industries that develop or deploy AI systems using machine learning, as well as to suppliers of AI systems and ML components. The standard helps organizations:
- Develop and deploy AI systems that are fair, unbiased, transparent, accountable, secure, and reliable.
- Build trust with customers, employees, stakeholders, auditors and regulators.
- Comply with current and future laws and regulations.
- The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a formal framework for better managing the risks that artificial intelligence poses to individuals, organizations, and society. The AI RMF is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems; a sketch of how its four core functions might shape a risk register follows below.
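AI RMF 1.0 organizes its guidance around four core functions: Govern, Map, Measure, and Manage. As a minimal sketch, with illustrative field names and an invented example entry, an organization could start by structuring its AI risk register around those functions:

```python
# Minimal sketch: an AI risk-register entry organized around the four
# NIST AI RMF 1.0 functions (Govern, Map, Measure, Manage). Field names
# and the example entry are illustrative, not RMF requirements.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    system: str
    govern: str   # accountability: who owns and reviews this system
    map: str      # context: where the system is used and who it affects
    measure: str  # metrics: how trustworthiness is tracked over time
    manage: str   # response: how identified risks are prioritized and treated

register = [
    AIRiskEntry(
        system="Loan pre-screening model",
        govern="Model risk committee owns approval; quarterly review",
        map="First-pass filter on applications used by underwriters",
        measure="Disparate-impact ratio and accuracy tracked per release",
        manage="Low-confidence or flagged cases routed to human review",
    ),
]

for entry in register:
    print(f"{entry.system}: measured via '{entry.measure}'")
```

Even this lightweight structure forces the questions the framework raises: who owns the system, where it operates, what is measured, and how risk is handled.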
Contact an Experienced Dayspring Technology Consultant
Regardless of your organization’s current AI posture, it is likely that you’ve already begun to utilize and interact with AI-related services and technology, whether internally or via third-party relationships. These services should be carefully reviewed and brought under artificial intelligence compliance controls and recognized management framework standards that benefit your employees, stakeholders, and clients long before federal or state regulations are imposed! Take the first step toward implementing artificial intelligence compliance controls: speak with one of our experienced Dayspring consultants today.