In the rapidly evolving landscape of Artificial Intelligence adoption, regulations are emerging to ensure responsible and ethical deployment. ISO/IEC 42001, the world's first AI management system standard, is a landmark in AI governance.
ISO/IEC 42001 is an international standard for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS), against which organizations can be certified.
The first AI management standard of its kind, ISO/IEC 42001 is designed to establish responsible use of AI and to guide organizations through the rapidly changing field of AI technology.
“Some controls will be more applicable than others, but essentially, if you are selling a software product that is using AI or providing an AI product or service, it is likely going to apply to you.”
- Rob Carson, CEO @ Semper Sec
The standard touches every role in the AI ecosystem:
AI Platform Providers: Offer platforms for AI development and deployment.
AI Product or Service Providers: Supply AI-based products and services to end users.
AI Consumers: Companies that use AI internally for various applications.
AI Producers: Organizations that design, develop, test, and deploy AI systems.
AI Partners: Provide supporting AI services such as system integration, data sourcing, or auditing.
AI Subjects: Individuals or organizations whose data or attributes are processed by AI systems.
Relevant Authorities: Regulators and policymakers who oversee the use of AI.
“A lot of the questions revolve around how AI models interact with customer data. How does my data get into the model, what is your agreement with the model provider, and really, are you treating the data you use to train a model differently than how you treat any other customer data?”
- Cody Wright, HyperComply CTO
“The ‘OWASP Top 10 for Large Language Models’ is a guide to the top 10 things you should think about when building these systems. Any developer who is touching anything to do with AI should read this. We get a ton of questions relating to it.”
- Cody Wright, HyperComply CTO
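For a flavor of what that guide covers, here is a minimal, hypothetical sketch of one of its risks, prompt injection: screening untrusted text before it is spliced into a prompt. The patterns and function names are illustrative assumptions, not an official OWASP checklist, and real defenses are layered rather than regex-based.

```python
import re

# Illustrative injection markers only; an assumed, non-exhaustive list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"you are now",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Flag text that resembles a prompt-injection attempt."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(system: str, user_input: str) -> str:
    """Refuse to build a prompt around suspicious input, and delimit
    user text so it is harder to pass off as instructions."""
    if looks_like_injection(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    return f"{system}\n<user_input>\n{user_input}\n</user_input>"
```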
“Making sure that there aren’t any shortcuts being taken to expedite AI deployment is important.”
- Cody Wright, HyperComply CTO
Whether you are trying to get certified or just trying to answer customer questions, start by creating your internal AI manual. First, identify the internal and external stakeholders who can be impacted by your use of AI. Once you have identified them, your internal manual should outline your AI governance policy, RACI matrix, impact assessment, and system design document. If you put these processes and rules in place early, it is easier to stay compliant.
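For illustration only, a RACI matrix for the manual can be as simple as a table mapping each governance artifact to who is Responsible, Accountable, Consulted, and Informed; the roles below are hypothetical placeholders, not prescribed assignments.

```python
# Hypothetical RACI matrix for the AI manual's core artifacts.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "AI governance policy":   {"R": "Security lead", "A": "CTO", "C": "Legal",    "I": "All staff"},
    "Impact assessment":      {"R": "ML lead",       "A": "CTO", "C": "Legal",    "I": "Exec team"},
    "System design document": {"R": "Eng manager",   "A": "CTO", "C": "Security", "I": "Auditors"},
}

for artifact, roles in RACI.items():
    print(f"{artifact}: {roles}")
```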
“ISO/IEC 42001 does not stack. ISO 27001 is info security that has a bit of privacy, 27701 can stack on top of that and be part of that scope, whereas ISO/IEC 42001 is a separate cert. So if you’re a SOC2 company you can, and should, go do this independently of an ISO 27001 cert. If you have ISO 27001 you won’t be able to integrate everything into what you already have, but you will feel comfortable going into 42001 because they have mirror documents and similar processes for certification.”
- Rob Carson, CEO @ Semper Sec
“Once you go through with one of these ISO/IEC 42001 audits, you will have much better answers for any AI-related questions.”
- Cody Wright, HyperComply CTO
Short answer - it depends. If you are, for example, already ISO 27001 certified, it can be quick - around a week - but it depends on how you're using AI. If you are developing AI and building an AI platform, your audit will be far longer and more detailed than for a company that simply uses an external algorithm as part of its services.
If you want to get ahead, there are companies that offer pre-certifications. The official AI audit guidance hasn't been published yet, so you can pursue a preemptive certification and be ready for when it comes out officially.
There are many to choose from, but beyond deciding who you want to perform your ISO/IEC 42001 audit, consider firms that handle other kinds of certifications as well. Picking an auditor that covers multiple frameworks future-proofs you so you don't have to do extra work later.
If you haven't already been doing these things, you will have a lot of cleanup to do (training, etc.). Companies have gotten quite good at obtaining these certifications early in their life, but since AI is so new, almost nobody has done this right from the start, and lots of companies will be remolding their organizations under these new rules.
Secondly, this is an ever-changing and evolving landscape, so plan on keeping up with changes and educating your team accordingly.
It's unknown what specific frameworks were used, but ethics was taken heavily into consideration. Generation and interpretation are the key areas when it comes to AI: as soon as you're making decisions based on these models that can affect human beings, ethics comes into play.
Customers are concerned about the ownership of their data. Are you training on their data? If so, what happens if they terminate the contract?
They want to make sure that their private data can't leak into other accounts. Generative AI/LLM workflows are new, and developers are less familiar with how to secure these systems, so it's easy to create a lot of problems.
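One common way to keep one account's private data out of another's LLM context is to scope every retrieval to the requesting tenant. The sketch below is a minimal illustration under assumed names (TENANT_DOCS, retrieve_context), not a description of any vendor's implementation.

```python
# Hypothetical per-tenant document store.
TENANT_DOCS: dict[str, list[str]] = {
    "acme":   ["Acme contract terms ...", "Acme SOC 2 summary ..."],
    "globex": ["Globex pricing sheet ..."],
}

def retrieve_context(tenant_id: str, query: str) -> list[str]:
    """Search only the requesting tenant's documents, so a prompt built
    for one account can never include another account's data."""
    docs = TENANT_DOCS.get(tenant_id, [])
    return [d for d in docs if query.lower() in d.lower()]

def build_llm_prompt(tenant_id: str, question: str) -> str:
    """Assemble a prompt whose context is hard-scoped to one tenant."""
    context = "\n".join(retrieve_context(tenant_id, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```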
Action models need to be carefully constrained, as they open up a large amount of attack surface. If, for instance, you were thinking about opening up your AWS account to a user-controlled AI agent (I wouldn't!), you would need to make sure the role for that account is very tightly controlled. In general, best practice is to minimize the number of actions available and to put guardrails around those actions so that account boundaries are maintained.
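A minimal sketch of that "minimize actions plus guardrails" idea, with assumed names (ALLOWED_ACTIONS, run_action): the agent can only invoke an explicit allowlist of read-only tools, and a boundary check blocks anything that reaches outside its own account.

```python
# Explicit allowlist: the agent can call nothing outside this table,
# and every entry here is deliberately read-only.
ALLOWED_ACTIONS = {
    "list_buckets": lambda account, arg: f"[read-only] buckets in {account}",
    "get_object_metadata": lambda account, arg: f"[read-only] metadata for {arg}",
}

def run_action(agent_account: str, resource_account: str, action: str, arg: str) -> str:
    """Execute an agent-requested action only if it is allowlisted and
    stays inside the agent's own account boundary."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is not allowlisted")
    if resource_account != agent_account:
        raise PermissionError("Guardrail: cross-account access blocked")
    return ALLOWED_ACTIONS[action](agent_account, arg)
```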
Generally, companies and employees should not upload private data to public LLMs. Any system that's used needs to be well understood in terms of data ownership. Decision-making based on LLMs is a nasty subject, prone to significant error and bias.
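Where text does have to pass through an external LLM API, a basic safeguard is to redact obvious private identifiers first. This is an illustrative sketch under assumptions (the two regex patterns are far from complete); real deployments pair redaction with DLP tooling, policy, and contractual data-ownership terms.

```python
import re

# Illustrative patterns only; production data-loss prevention needs
# far broader coverage than emails and US SSNs.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely private identifiers before text leaves the company."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789"))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED]
```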
We are probably a year out from seeing this certification become mandatory, but you’re going to need to attest to it in some shape or form very soon. Within the next 6 months, you will start to see questions relating to it come up in things like security addendums in contracts.
Company dependent - reach out to Semper Sec for a quote!
Reach out to Semper Sec for a free strategy session.
Want to spend less time on security questionnaires? Book a demo with HyperComply to learn how we help companies automate the process.
View the full webinar here.