
Working Group

AI Controls

This committee aligns with the NIST Cybersecurity Framework to establish a robust, flexible, and multi-layered framework.
View Current Projects
CSA Large Language Model (LLM) Threats Taxonomy

Download

The CSA AI Control Framework Working Group's goal is to define a framework of control objectives that supports organizations in the secure and responsible development, management, and use of AI technologies. The framework will assist in evaluating risks and defining controls related to Generative AI (GenAI). The control objectives cover cybersecurity, as well as safety, privacy, transparency, accountability, and explainability insofar as they relate to cybersecurity.

Working Group Leadership

Marina Bregkou

Principal Research Analyst, Associate VP

Daniele Catteddu

Chief Technology Officer, CSA

Daniele Catteddu is an information security and risk management practitioner, technology expert, and privacy evangelist with over 15 years of experience. He has worked in several senior roles in both the private and public sectors. He is a member of various national and international security expert groups and committees on cybersecurity and privacy, a keynote speaker at several conferences, and the author of numerous studies and papers on risk management, ...


Working Group Co-Chairs

Marco Capotondi

Agency for National Cybersecurity, Italy

Marco Capotondi is an engineer specialized in applied AI, with a focus on AI Security and AI applied to Autonomous Systems. He holds a bachelor's degree in Physics and a master's degree in AI Engineering, and earned his doctorate with research on Bayesian learning techniques applied to autonomous systems, on which he has published many papers. His current focus is helping the community define and manage risks associated with Artificial Intelligen...


Siah Burke

Ken Huang

CEO & Chief AI Officer, DistributedApps.ai

Ken Huang is an acclaimed author of 8 books on AI and Web3. He is the Co-Chair of the AI Organizational Responsibility and AI Control Framework Working Groups at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI Security.

Huang has also contributed extensively to key initiatives in the space. He is a core contribut...


Alessandro Greco

Publications in Review

Navigating the Human Factor: Understanding and Addressing Resistance to AI Adoption (open until Jun 09, 2025)
AICM mapping to NIST 600-1 (open until Jun 16, 2025)
Analyzing Log Data with AI Models (open until Jun 20, 2025)
Agentic AI Identity and Access Management: A New Approach (open until Jul 03, 2025)
View all
Who can join?

Anyone can join a working group, whether you have years of experience or just want to participate as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

Upcoming sessions of the CSA AI Control Framework recurring meeting:

  • Wed, June 11, 6:00pm - 7:00pm
  • Wed, June 25, 6:00pm - 7:00pm
  • Wed, July 9, 6:00pm - 7:00pm
  • Wed, July 23, 6:00pm - 7:00pm
  • Wed, August 6, 6:00pm - 7:00pm

Agenda (the same for each session):

Report progress on ongoing tasks:

  • Task 1 (Sam and Faisal leading) - Implementation guidelines

    • Meeting every Tuesday at 08:00 a.m. P.T. / 11:00 a.m. E.T. / 15:00 UTC

  • Task 3 (Betina and Jochen leading) - Mapping AICM to BSI AI C4, ISO 42001, EU AI Act, NIST 600-1

    • Meeting every Friday at 08:00 a.m. P.T. / 11:00 a.m. E.T. / 15:00 UTC

  • Task 4 (Ken Huang leading) - Auditing guidelines

    • Meeting every Thursday at 09:00 a.m. P.T. / 12:00 p.m. E.T. / 16:00 UTC

To connect to the call:

URL: https://cloudsecurityalliance.zoom.us/j/86970351963?pwd=DuGoxcZSiItdv5pLqtuI0OaWtaCYT7.1

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

AICM mapping to NIST 600-1

Open Until: 06/16/2025

The Cloud Security Alliance (CSA) invites public peer review of its draft mapping between the AI Controls Matrix (AICM) and NIST 600-1. This initiative suppo...

Analyzing Log Data with AI Models

Open Until: 06/20/2025

In a Zero Trust environment, logs play a critical role in the visibility and analytics cross-cutting capability. Architectu...

Agentic AI Identity and Access Management: A New Approach

Open Until: 07/03/2025

Traditional Identity and Access Management (IAM) systems, primarily designed for human users or static machine identities v...

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organizations, advocating for responsible AI practices and championing pragmatic solutions to manage AI risks. Contact [email protected] to learn how your organization can participate and take a seat at the forefront of AI safety best practices.