
Outsmarting the smart guys - The ultimate cheat sheet for AI security frameworks

Michael Hannecke

Introduction

Keep Your Bots in Check: The Who's Who of AI Security Frameworks!


AI development continues at an impressive pace, bringing innovations almost daily. Against this backdrop, security must be a paramount concern for developers, policymakers, and end users. As AI expands into every sector of modern (business) life, the benefit of being guided throughout the complete application lifecycle by comprehensive, holistic security frameworks is indisputable.

The following compilation aims to provide an initial overview of existing and emerging AI security frameworks established by government agencies, international organizations, private companies, and non-profit organizations. Although sometimes extensive and complex to implement, these frameworks can be of great help in providing appropriate security measures throughout the entire lifecycle of an AI system and in supporting operation that is as safe as possible.


  • This overview does not claim to be complete and will be expanded over time. Also, not all of the links below point to comprehensive AI security frameworks, but each provides at least some assistance in operating or using AI systems securely!


 

Governmental and International Bodies

 

BSI (Germany)

The German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) is the national authority for computer and communication security of the German government, dedicated to promoting IT security across all sectors to protect Germany's infrastructure and interests. It provides a range of services, including risk analysis, security recommendations, and the development of security standards, to safeguard against threats in the digital environment.

 

BSI General,

Information and Recommendations from BSI



NIST (U.S.)

The United States National Institute of Standards and Technology (NIST) is a federal agency that develops technology, metrics, and standards to drive innovation and economic competitiveness in various industries, including cybersecurity, manufacturing, and commerce. NIST's work includes creating the standards and guidelines that help ensure the reliability and security of technologies and information systems, such as those used in artificial intelligence.


AI Risk Management Framework (RMF),

General NIST AI Research Center


EU Parliament

The EU AI Act is a landmark proposal by the European Commission aimed at creating the first comprehensive legal framework for artificial intelligence. The act aims to address the risks associated with AI and to create an ecosystem of excellence and trust. Even though it's not yet final, the proposal outlines several provisions for high-risk and non-high-risk AI applications, providing a clear legal framework for AI development and deployment within the EU.


EU AI Act,

AI Rules by the European Parliament,

Information on AI in the EU



US AI Executive Order

The US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023) directs federal agencies to develop standards, tools, and tests for AI safety and security, to protect privacy and civil rights, and to strengthen American leadership in the international AI landscape. It establishes guidelines and strategic plans to coordinate and accelerate AI initiatives across federal agencies.


Fact Sheet on AI


Institutional Bodies


ETSI

The European Telecommunications Standards Institute (ETSI) is an independent, not-for-profit organization that develops globally applicable standards for information and communications technologies, including fixed, mobile, radio, converged, broadcast, and internet technologies. Its work is central to the creation of interoperable solutions that enable services to be offered across the diverse spectrum of communication technologies used worldwide.


Securing Artificial Intelligence (SAI)



ISO/IEC

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are independent, non-governmental international organizations that develop and publish international standards for a wide range of industries, encompassing technology and manufacturing processes. The ISO/IEC collaboration is particularly focused on the standardization of information technology, including the field of artificial intelligence.


Workshop


IEEE

The Institute of Electrical and Electronics Engineers (IEEE) is a professional association dedicated to advancing technology for the benefit of humanity. It develops and promotes standards, publishes scientific research, hosts conferences, and provides educational services within the fields of electrical, electronics, and computing sciences and engineering.


IEEE Trustworthy Artificial Intelligence (TAI)



MITRE

MITRE is a not-for-profit organization that manages federally funded research and development centers (FFRDCs), supporting various U.S. government agencies with scientific research and analysis, development and acquisition, and systems engineering and integration. They are known for their work in cybersecurity through the creation of widely adopted resources like the MITRE ATT&CK framework, which is a comprehensive matrix of tactics and techniques used by threat hunters, red teamers, and defenders to better classify attacks and assess an organization's risk.


MITRE ATLAS™



OWASP

The Open Web Application Security Project (OWASP) is a non-profit foundation that works to improve the security of software through community-led open-source software projects, hundreds of local chapters worldwide, tens of thousands of members, and by hosting educational events and training sessions. OWASP provides impartial, practical information about application security and is well-known for its comprehensive guidelines, particularly the OWASP Top 10, which lists the most critical security risks to web applications.


Top 10 for Large Language Model Applications,

AI Top Ten, Machine Learning Security Top 10,

AI Security and Privacy Guide


Private Sector Companies and Industry Consortia


AWS

Amazon Web Services (AWS) offers a broad array of AI services and machine learning platforms, which enable developers and data scientists to build, train, and deploy AI models quickly and at scale. AWS provides tools and environments tailored for various AI applications, from language processing to image recognition.


AWS Security Framework

Securing Generative AI



Google

Google Cloud Platform (GCP) offers a suite of AI and machine learning services that allow developers and enterprises to build and deploy AI-enhanced applications more efficiently. These services include powerful data analytics, natural language processing, and machine learning tools like AutoML, AI Platform, and pre-trained AI APIs.


Google’s Secure AI Framework



IBM

IBM AI encompasses a broad array of AI-powered products and solutions, including Watson, which provides enterprises with robust AI tools for data analysis, natural language processing, and automated decision-making.


Adversarial Robustness Toolbox (ART),

AI Fairness 360 (AIF360)



Microsoft

Microsoft Azure offers a suite of cloud-based AI services and cognitive APIs designed to enable developers to build intelligent applications using capabilities such as machine learning, knowledge mining, and cognitive services including vision, speech, language, and decision-making.


AI Principles by Microsoft

AI Security Risk Assessment



Research Institutes and Non-profit Organizations


Partnership on AI

Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.


Safety Critical AI



Concluding Words

As AI continues to integrate into the fabric of global infrastructure, the task of securing it becomes both more challenging and more crucial. The list above, while extensive, is not exhaustive. It is curated to provide stakeholders with a foundation for understanding the current landscape and to serve as a springboard for further investigation. Given the pace at which AI and related threats evolve, continuous vigilance and updates to security frameworks are necessary.

We encourage readers to maintain an active engagement with emerging standards and to contribute to the ongoing discourse on AI security. This compilation, timely today, may require adaptation tomorrow, echoing the dynamic nature of AI itself. With this understanding, we march forward into the AI era, equipped with knowledge and an eye towards a secure and prosperous future.

