AI Risk Management - Part 2: Governance
By Annette Poulsen
Governance is the first step in getting started with the AI Risk Management Framework. We will take you through the areas, topics and questions that are relevant to address when creating risk governance for the use of AI in your organisation.
Governance in the AI Risk Management Framework is about learning to cultivate and practise a healthy culture of risk appetite and risk management.
Understand and address the risks, impacts and harms that AI can have on your organisation. During this series we will look at the AI Risk Management Framework (AI RMF) developed by NIST (National Institute of Standards and Technology) and at different AI implementation plans that can give you a structured way to work with AI compliance and documentation.
This webinar will give you an overview of:
- Alignment of AI risks with applicable AI legislation
- Organisational structures, roles and responsibilities involved in appropriate governance of AI risks
- Which policies, processes, procedures and practices you need to prepare across your organisation to ensure appropriate mapping, measurement and management of your AI risks
- How to ensure transparency, public trust and confidence in your use of AI
- The appropriate level of documentation, performance metrics and risk appetite
Target audience:
You don't need deep knowledge of AI systems and AI models, but you are curious about how to implement AI in your organisation in a safe and effective way.
If you work in a large and complex organisation (1st or 2nd line of defence), this webinar series will give you a deep dive into how to understand AI compliance and implement suitable controls.
If you work in an audit function, this webinar will be a good source of inspiration for how to structure your audits.
If you work in a small organisation, this webinar can introduce you to the complexities and intricacies of working with high-risk AI, and to the requirements you will have to meet if you are a supplier to a large organisation and want to use AI in your services. That will better prepare you for the compliance burdens involved and help you decide whether AI solutions are right for your business.
Outcomes:
- Skills to perform AI Risk Assessment
- Skills to create AI Risk management for your organisation
- Specific controls that are relevant for your organisation
- Overview of the documentation you need to create to manage AI risks effectively
- Links to relevant resources for further reading
Speakers:
Julia Sommer is chair of IDA DataCompliance and works as an Internal Audit Manager at Nordea. Julia has 9 years of experience in compliance, data protection, cybersecurity and AI across the public and private sectors, including government, defence, finance and pharmaceutical production. Among other topics, she has worked with ICT incident management, development and execution of audits, and management and change management in compliance programmes.
Valdemar Østergaard is Head of AI and Products at Capacit, where he leads the development, implementation, and operation of AI solutions for some of Denmark’s largest companies. His primary focus is on delivering high-impact AI in highly regulated sectors, where quality, governance, and compliance are not just requirements - but essential design principles. Valdemar combines technical expertise with a strong understanding of operational risk, helping organizations translate advanced AI into real business value while ensuring responsible and trustworthy deployment.