How the project and its name "Sec4AI4Sec" were born


Sec4AI4Sec is a project funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101120393 that aims to develop security-by-design testing and assurance technology for AI-enhanced systems, software, and assets.

This project aims to create a range of cutting-edge technologies, open-source tools, and new methodologies for designing and certifying secure AI-enhanced systems and AI-enhanced systems for security. Additionally, it will provide reference benchmarks that can be utilised to standardise the evaluation of research outcomes within the secure software research community.

The project is divided into two main phases, each with its own name.

AI4Sec – stands for using artificial intelligence in security. This part of the project democratises security expertise through AI-enhanced systems that reduce development costs and improve software quality, using AI to improve secure coding and testing.

Sec4AI – stands for security for AI-enhanced systems. These systems carry their own risks and are exposed to new security threats unique to AI-based software, especially when fairness and explainability are essential.

3 WHYS

SECURITY

Improving security is one of the project’s central objectives. With Sec4AI4Sec, it will be possible to strengthen the technologies that defend an organisation’s perimeter through AI-enhanced systems.

OPPORTUNITY

The use of new technologies, such as artificial intelligence, makes it possible to deliver more functionality, lower costs, and improve the overall software ecosystem - particularly in terms of software quality.

FAIRNESS

AI systems must be fair and transparent in their decision-making: we should be able to understand how a system generates its output and ensure that it is not biased. These two qualities are essential in dealing with the new threats and vulnerabilities of AI systems.

Latest news and events

Read our latest news posts to stay informed.

Paper: Domain-aware graph neural networks for source code vulnerability detection

As software systems become more complex, detecting vulnerabilities early remains a major challenge. Traditional tools often generate too many false positives, while AI […]

If You Can Understand It, You Can Trust It

The @SEC4AI4SEC initiative promotes trustworthy, transparent, and secure AI systems. As illustrated in our comic, AI decisions can sometimes appear restrictive or overly […]
