Cybersecurity for AI-Augmented Systems
The main goal of Sec4AI4Sec is to develop security techniques for analyzing vulnerable components. Our focus goes beyond traditional software and hardware: we aim to test and secure AI-enabled components. These components are a challenging new frontier of vulnerability and a strategic asset that must be protected as part of Europe’s resilience and digital sovereignty strategy. As outlined in ENISA’s AI Cybersecurity Challenges report, the assets to be secured include the data, the software components containing AI models, the execution platforms on which those components are deployed, the development pipelines and tools, and the people involved, such as developers and data scientists. As ENISA puts it, “AI makes for better security, and security makes for better AI.”
Sec4AI4Sec recognizes a new reality: AI components appear in two places.
SEC4AI. AI is deployed as an intelligent component within a larger system. These components, the data used to train and update them, and their deployment platforms expand the attack surface and create new vulnerabilities that traditional static and dynamic application security testing (SAST and DAST) tools cannot cover.
AI4SEC. AI is used in DevOps to help developers and testers write secure code and fix vulnerabilities.
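To illustrate the kind of pattern-based check that traditional SAST tools perform (and which AI-enabled components largely escape, since their behavior lives in learned weights rather than in source text), here is a minimal, hypothetical sketch of a lexical scanner for one classic injection pattern. The rule and function names are illustrative, not part of any Sec4AI4Sec tooling.

```python
import re

# Toy SAST-style rule: flag SQL queries built by string concatenation,
# a classic injection pattern that purely lexical tools can catch.
SQL_CONCAT = re.compile(r'execute\(\s*["\'].*["\']\s*\+')

def scan(source: str) -> list[int]:
    """Return 1-based line numbers matching the unsafe pattern."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if SQL_CONCAT.search(line)]

snippet = '''
def lookup(db, user):
    cur = db.cursor()
    cur.execute("SELECT * FROM users WHERE name = '" + user + "'")
    return cur.fetchall()
'''
print(scan(snippet))  # prints [4]: the execute(...) line
```

An ML model embedded in the same codebase offers no such syntactic signal: its vulnerabilities (poisoned training data, adversarial inputs) are invisible to this style of analysis, which is why new testing techniques are needed.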
Objectives
Objective 01: Supporting cybersecurity certification via AI components
Objective 02: Develop security benchmark data
Objective 03: Testing at development time
Objective 04: Monitoring and re-configuring at run-time
Objective 05: Reliable vulnerability localization without false positives
Objective 06: Reliable automated creation of vulnerability fixes
Objective 07: Test our research approach with concrete use cases
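To give a flavor of what automated creation of vulnerability fixes means in its simplest form, the hypothetical sketch below rewrites a string-concatenated SQL call into a parameterized query. Real automated program repair, as targeted by the objectives above, is far more involved (it must reason about program semantics, not just text); this template-based rewrite only illustrates the idea, and all names in it are assumptions.

```python
import re

# Toy fix template: match execute("..." + var + "...") and rewrite it
# as a parameterized query, the standard defense against SQL injection.
PATTERN = re.compile(
    r'execute\(\s*(["\'])(?P<prefix>.*?)\1'      # opening string literal
    r'\s*\+\s*(?P<var>\w+)\s*\+\s*'              # concatenated variable
    r'(["\'])(?P<suffix>.*?)\4\s*\)'             # closing string literal
)

def suggest_fix(line: str) -> str:
    """Return the line with the unsafe call rewritten, or unchanged."""
    m = PATTERN.search(line)
    if not m:
        return line
    # Drop the SQL-level quotes around the injected value and use a
    # '?' placeholder with a bound parameter instead.
    query = m.group('prefix').rstrip("'") + '?' + m.group('suffix').lstrip("'")
    fixed = f'execute("{query}", ({m.group("var")},))'
    return line[:m.start()] + fixed + line[m.end():]

unsafe = 'cur.execute("SELECT * FROM users WHERE name = \'" + user + "\'")'
print(suggest_fix(unsafe))
# prints: cur.execute("SELECT * FROM users WHERE name = ?", (user,))
```

Template-based rewrites like this fail on anything outside the pattern; closing that gap without introducing false positives is exactly what Objectives 05 and 06 aim at.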