AI Risk Management – Part 4: Measuring
By Annette Poulsen
Measuring in the AI Risk Management Framework is about learning to assess, analyse and track the risks related to the use of AI in your organisation.
Understand and address the risks, impacts and harms that AI can have on your organisation. During this series we will look at the AI Risk Management Framework (AI RMF) developed by NIST (National Institute of Standards and Technology) and at different AI implementation plans that can give you a structured way to work with AI compliance and documentation.
This webinar will give you an overview of different areas of measurement:
- Identifying and applying appropriate methods and metrics to measure AI Risks
- Properly documenting risks or trustworthiness characteristics that cannot be measured
- Continuously assessing the effectiveness of controls, error reports and their impact
- Measuring metrics during the test, evaluation, validation and verification of AI risks
- Documenting qualitative and quantitative measures of AI system performance
- Monitoring the functionality and behaviour of the AI system and its components in production
- Documenting limitations of the AI system, its security and resiliency, privacy risks, fairness and bias
- Regularly tracking and identifying existing, unanticipated and emergent AI risks.
Target audience:
You don't need deep knowledge of AI systems and AI models, but you are curious about how to implement AI in your organisation in a safe and effective way.
If you work in a large and complex organisation (first or second line of defence), this webinar series will give you a deep dive into how to understand AI compliance and implement suitable controls.
If you work in an audit function, this webinar will be good inspiration for how to structure your audits.
If you work in a small organisation, this webinar can introduce you to the complexities and intricacies of working with high-risk AI and the requirements you will have to meet if you are a supplier to a large organisation and want to use AI in your services. That will better prepare you for the compliance burdens you will face and help you decide whether AI solutions are right for your business.
Outcomes:
- Skills to perform AI Risk Assessment
- Skills to set up AI risk management in your organisation
- Specific controls that are relevant for your organisation
- Overview of the documentation you need to create to manage AI risks effectively
- Links to relevant resources for further reading
Speakers:
Julia Sommer chairs IDA DataCompliance and works as an Internal Audit Manager at Nordea. She has 9 years of experience in compliance, data protection, cybersecurity and AI from both the public and private sectors, across government, defence, finance and pharmaceutical production. Among other topics, she has worked with ICT incident management, the development and execution of audits, and management and change management of compliance programs.
Valdemar Østergaard is Head of AI and Products at Capacit, where he leads the development, implementation, and operation of AI solutions for some of Denmark’s largest companies. His primary focus is on delivering high-impact AI in highly regulated sectors, where quality, governance, and compliance are not just requirements but essential design principles. Valdemar combines technical expertise with a strong understanding of operational risk, helping organizations translate advanced AI into real business value while ensuring responsible and trustworthy deployment.