Cybersecurity for AI-Augmented Systems

The main goal of Sec4AI4Sec is to develop security techniques for analyzing vulnerable components. Our focus goes beyond traditional software and hardware: our aim is to test and secure AI-enabled components. These components are a challenging new frontier of vulnerability research and a strategic asset that must be protected as part of Europe’s resilience and digital sovereignty strategy. As outlined in ENISA’s AI Cybersecurity Challenges document, the assets to be secured include data, software components containing AI models, the execution platforms on which those components are deployed, the pipelines and tools used for development, and people such as developers and data scientists. Hence the guiding principle: “AI makes for better security, and security makes for better AI.”

Sec4AI4Sec recognizes the new reality that AI components play two distinct roles:

SEC4AI. AI is deployed as an intelligent component within a larger system. These components, the data used to train and update them, and their deployment platforms expand the attack surface and create new vulnerabilities that traditional SAST and DAST (static and dynamic application security testing) tools cannot cover.
AI4SEC. AI is used within DevOps to help developers and testers write secure code and fix vulnerabilities.

SEC4AI. Testing the extended attack surface of AI-based systems.

As AI models become widely adopted, their reliability and security become increasingly important, because the level of autonomy and trust placed in AI will inevitably grow. A further challenge is that AI components must continually evolve in response to feedback, new training data, code updates, hyperparameter changes, and so on.
The core problem is that the attack surface of these systems is much wider than that of conventional software, and the security threats to such dynamic and complex AI systems are not yet fully understood; the research community has only recently begun to study threats to AI components.
There is therefore a significant gap in knowledge about how to identify and mitigate vulnerabilities in AI-based systems, and closing that gap is one of the key objectives of this project. Sec4AI4Sec pursues it through two complementary approaches:
• Testing at development time
• Monitoring and re-configuring at run-time
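To make the expanded attack surface concrete, the following sketch shows one well-known class of threat that conventional SAST/DAST tools do not detect: an adversarial perturbation that flips the decision of a deployed model. This is an illustrative toy, not a Sec4AI4Sec deliverable; the linear model, its weights, and the input are all made up for illustration, and the attack shown is a fast-gradient-sign-style step, which for a linear model simply follows the sign of the weights.

```python
import numpy as np

def predict(w, b, x):
    """Class (0 or 1) of a toy linear classifier: score = w.x + b."""
    return int(w @ x + b > 0)

# Hypothetical "trained" model and a benign input classified as class 0.
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x = np.array([0.2, 0.3, 0.4])

# FGSM-style perturbation: step each feature in the direction that
# increases the score. For a linear model the gradient w.r.t. x is w,
# so the perturbation is eps * sign(w).
eps = 0.4
x_adv = x + eps * np.sign(w)

print(predict(w, b, x))      # prints 0: benign input
print(predict(w, b, x_adv))  # prints 1: small perturbation flips the class
```

No line of this "system" would trigger a static or dynamic code analyzer, because the flaw lives in the learned model and its data, not in the code. That is the kind of vulnerability the testing and run-time monitoring approaches above are meant to address.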

AI4SEC. Automated intelligence to find and fix software vulnerabilities.

Resolving software vulnerabilities before deployment remains one of the software industry's biggest challenges and a central scenario for Sec4AI4Sec.
Fixing security vulnerabilities is essential for building reliable software, but vulnerable code is hard to recognize, which makes finding and applying patches problematic. The development of secure software components is therefore moving from a purely knowledge-based approach toward greater automation and the use of multiple artificial intelligence components within the toolchain.
The overall goal of Sec4AI4Sec is to improve the practical effectiveness of AI-based security testing toolchains and provide the building blocks for intelligent toolchains that can automatically resolve software vulnerabilities.
This project will achieve this overall objective through an integrated approach that includes:
• Reliable vulnerability localization without false positives
• Reliable automated creation of vulnerability fixes
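The two building blocks above can be illustrated with a textbook flaw and its repair. The snippet below is a minimal, hypothetical example (not project code): a SQL injection (CWE-89) of the kind a localization tool must pinpoint, next to the parameterized-query patch an automated-repair tool would be expected to produce.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # CWE-89: untrusted input is concatenated into the SQL string,
    # so a crafted username can rewrite the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn, username):
    # Patch: a parameterized query keeps the input out of the SQL grammar.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"                    # classic injection payload
print(find_user_vulnerable(conn, payload))  # leaks every row
print(find_user_fixed(conn, payload))       # returns no rows
```

"Localization" means identifying the query-building line as the flaw without flagging the surrounding safe code; "fix creation" means synthesizing the parameterized variant automatically. Doing both reliably at scale, beyond toy cases like this one, is the hard part this project targets.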

Objectives

• Objective 01: Supporting cybersecurity certification via AI components
• Objective 02: Develop security benchmark data
• Objective 03: Testing at development time
• Objective 04: Monitoring and re-configuring at run-time
• Objective 05: Reliable vulnerability localization without false positives
• Objective 06: Reliable automated creation of vulnerability fixes
• Objective 07: Test our research approach with concrete use cases
