AI Risk Management – Overview, Part 1
From Annette Poulsen
All organisations use AI, but very few have a clear picture of the risks and opportunities AI brings to daily operations and organisational strategy. You can use the AI Risk Management Framework to stay on top of AI-related risks.
Understand and address the risks, impacts and harms that AI can have on your organisation. During this series we will look at the AI Risk Management Framework (AI RMF) developed by NIST (the US National Institute of Standards and Technology), and at AI implementation plans that give you a structured way to work with AI compliance and documentation.
This webinar series will give you an overview of:
- The AI lifecycle and the key AI actors across it
- Key roles and responsibilities
- Characteristics of trustworthy AI
- The 4 key control areas of the AI RMF:
  - AI Governance
  - AI Mapping
  - AI Measurement
  - AI Management
Target audience:
You don't need deep knowledge of AI systems and AI models, but you are curious about how to implement AI in your organisation in a safe and effective way.
If you work in a large and complex organisation (1st or 2nd line of defence), this webinar series will give you a deep dive into AI compliance and how to implement suitable controls.
If you work in an audit function, this webinar series will offer good inspiration on how to structure your audits.
If you work in a small organisation, this webinar series can introduce you to the complexities of working with high-risk AI and the requirements you will have to meet if you are a supplier to a large organisation and want to use AI in your services. That will better prepare you for the compliance burdens involved and help you decide whether AI solutions are right for your business.
Outcomes:
- Skills to perform an AI risk assessment
- Skills to build AI risk management for your organisation
- Specific controls that are relevant for your organisation
- An overview of the documentation you need to create to manage AI risks effectively
- Links to relevant resources for further reading
Speakers:
Julia Sommer is chair of IDA DataCompliance and works as an Internal Audit Manager at Nordea. Julia has 9 years of experience in compliance, data protection, cybersecurity and AI from both the public and private sectors, spanning government, defence, finance and pharmaceutical production. Among other topics, she has worked with ICT incident management, the development and execution of audits, and the management of change in compliance programmes.
Valdemar Østergaard is Head of AI and Products at Capacit, where he leads the development, implementation, and operation of AI solutions for some of Denmark’s largest companies. His primary focus is on delivering high-impact AI in highly regulated sectors, where quality, governance, and compliance are not just requirements but essential design principles. Valdemar combines technical expertise with a strong understanding of operational risk, helping organisations translate advanced AI into real business value while ensuring responsible and trustworthy deployment.