Cyber Risk GmbH | Training for the Board of Directors



As part of their governance responsibilities, the members of the Board of Directors must not only review the information presented to them, but also actively engage by asking the right questions to ensure that strategies, policies, and plans are adequate, well-implemented, and compliant with applicable legal and regulatory requirements.

Around the world, laws and regulations increasingly hold Boards accountable for cybersecurity and risk management. Governments, regulatory bodies, and industry standards demand that Boards actively engage in cybersecurity oversight to protect stakeholders, customers, and critical infrastructure.

As an example, according to the SEC’s cybersecurity disclosure rules, directors who fail to properly oversee risk management could face regulatory scrutiny and shareholder lawsuits. The NIS 2 Directive of the EU introduces more board-level accountability for cybersecurity compliance, meaning directors of essential and important entities could face legal consequences if their organization fails to meet cybersecurity requirements.

We provide concise, yet comprehensive briefings on critical issues that Boards must understand to exercise sound judgment and effective risk oversight.


Cyber Risk GmbH | Some of our clients



Our Briefings for the Board:

We offer customized briefings designed to address specific needs. Whether you require a focused session on a particular topic or a broader discussion on emerging risks, we can tailor the content to align with your priorities. Please feel free to discuss your needs with us, and we will develop a briefing that best supports the Board’s oversight responsibilities.

Alternatively, you can select from our existing briefing topics, designed to provide strategic insights and practical guidance on key governance and risk management challenges and opportunities:


1. No, it is not cyber risk. It is hybrid risk.

Overview

We will keep it simple and clear: Cyber risk must be seen as part of hybrid risk.

Many companies and organizations still treat cyber risk as a purely technical risk. But even the most advanced organizations must adapt and build their risk management framework on the foundation that we now operate in a fundamentally different world, one where cyber risk is a core component of hybrid risk. The old mindset is dangerously outdated. Today, cyber operations are embedded in economic warfare, political conflict, supply chain disruption, and military strategy. Cyber risk is no longer just about protecting networks; it is about protecting societies from hybrid threats.

A hybrid risk management framework should identify primary cyber threats, map their cascading effects on financial, legal, and business operations, and develop cross-functional response strategies.

For centuries, Newtonian mechanics was considered a complete and stand-alone framework for understanding motion and forces. It worked well for most practical applications but failed to explain phenomena at very small (quantum) or very large (cosmological) scales. Eventually, the theory of relativity and quantum mechanics showed that Newtonian physics was just a subset of a much broader and more complex reality.

Similarly, cyber risk has traditionally been seen as a stand-alone issue, much like Newtonian mechanics. However, just as physics evolved to integrate quantum and relativistic perspectives, cyber risk must now be understood as part of the larger hybrid risk environment, where cyber operations interact with economic, political, military, and psychological dimensions.

Instead of thinking “cyber risk”, decision-makers should think “hybrid risk with a cyber component”, to develop a more realistic and effective response strategy. Security strategies must address the full spectrum of hybrid threats, not just cybersecurity in isolation.


Target Audience

This presentation will be delivered exclusively in person during a quarterly Board meeting, featuring tailored case studies specific to an organization’s needs. It will not be available online or via Zoom or similar applications.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

George Lekatis. For information about his background and experience, you may visit: https://www.cyber-risk-gmbh.com/About.html


George Lekatis

2. Counter-Elicitation for Professionals With Privileged Access.

Overview

The trusting executive. The accomplished professional who speaks at conferences. The passionate scientist sharing ideas in online communities. Individuals with access to sensitive information will network at conventions, participate in discussions, or engage in interviews, unknowingly exposing themselves to risk.

Could they effectively protect proprietary information from a skilled individual who befriends them to gain access to what they know?

Elicitation is a technique in which a seemingly ordinary conversation is contrived to extract sensitive information, without raising suspicion that specific facts are being sought. Elicitation techniques are subtle, non-threatening, deniable, and effective. Elicitors manipulate individuals into sharing valuable information without realizing its significance. It is also one of the oldest forms of espionage.

Like other social engineering tactics, elicitation exploits a person’s psychological and social weaknesses, including:
• The tendency to be polite and helpful, especially with new acquaintances.
• The desire to appear knowledgeable and credible in professional discussions.
• A failure to recognize the true value of information being shared during an “interesting” conversation.

Executives and other high-value targets are often unaware that some of their encounters are, in reality, carefully orchestrated attempts to gather sensitive information.

This briefing raises awareness about elicitation risks and equips participants to understand:

• How elicitation techniques are used to manipulate conversations and extract valuable information.
• Why trusting or accomplished individuals are often prime targets.
• Practical strategies to recognize and deflect elicitation attempts while maintaining professionalism.

By providing the key people in an organization with these critical skills, we empower them to safeguard sensitive information, protect their organizations, and navigate professional interactions with greater confidence and security.


Target Audience

The program is highly beneficial for the Board of Directors, C-suite executives, and professionals with privileged access to sensitive corporate information.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Christina Lekati, psychologist, security training expert. To learn about her you may visit: https://www.cyber-risk-gmbh.com/About_Christina_Lekati.html


Christina Lekati, Social Engineering Training Expert

3. AI-Powered Social Engineering and The Psychology Of Our Weaknesses.

Overview

With social engineering ranking among the top five attack vectors in most major threat reports year after year, it is time to finally give some credit to Niccolò Machiavelli, who once said:

“The one who deceives will always find those who allow themselves to be deceived.”

In today’s digital age, that remark takes on new urgency as artificial intelligence tools have elevated the art of deception to unprecedented levels.

With artificial intelligence now widely available, cybercriminals have added powerful tools to their arsenal and enhanced their capabilities in psychological exploitation. Large language models (LLMs), voice cloning, and deepfake technology are making social engineering harder to detect and easier to scale.

In today’s world, seeing is no longer believing.

This session will delve into:
• The latest AI-enhanced social engineering tactics.
• The behavioral science that social engineering is based on and the psychology of our weaknesses.
• Practical defense strategies to recognize, counter, and mitigate AI-driven social engineering attacks.

Through real-world case studies, this session provides actionable insights to help individuals and organizations strengthen their defenses and maintain vigilance in an era where trust is easily exploited, and deception knows no bounds.


Target Audience

The program is highly beneficial for the Board of Directors, C-suite executives, and professionals with privileged access to sensitive corporate information.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Christina Lekati, psychologist, security training expert. To learn about her you may visit: https://www.cyber-risk-gmbh.com/About_Christina_Lekati.html


Christina Lekati, Social Engineering Training Expert


4. An effective cybersecurity culture and the Board of Directors.

Overview

The Board of Directors, as the culture owner, must ensure that the beliefs, the perceptions, the attitudes, the assumptions, and the norms regarding cybersecurity are in line with the mission and the vision of their organization. They must also ensure that information security considerations are an integral part of every employee’s and manager’s job, habits, and conduct.

The majority of data breaches within organizations are the result of human actions. Cybersecurity is not only a technical challenge. As long as managers and employees can provide access to systems and data, cybersecurity depends on them too.

Employees who have access to an organization's critical assets become targets. Those who have access to technology and organizational assets are also responsible for protecting those assets. Are they fit and proper to handle this responsibility? Do they have the awareness and skills necessary to protect themselves and their organization?

The economic costs of cyberattacks and breaches are greater than many directors and managers believe. There are direct and indirect costs, including downtime of services, compromise of confidential information, fines, decreased profits through reputational damage, and increased supervisory scrutiny.

We tailor the program to include the organization's cybersecurity compliance obligations and their implications across all relevant jurisdictions, the specific threat actors the organization faces, and how the organization is most likely to be breached.


Target Audience

The program is highly beneficial for the Board of Directors and C-suite executives.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Our instructors are professionals with extensive, real-world experience in their respective fields. They are equipped to deliver full-time, part-time, or short-form programs, all customized to suit your specific requirements. Beyond teaching, our instructors provide hands-on guidance, offering real-world insights that help bridge the gap between theory and practice. You will always be informed ahead of time about the instructor leading your program.



5. Social engineering and the Board of Directors

Board members must better understand the social engineering modus operandi. We will cover:


The Social Engineering Kill-chain.

1. Reconnaissance: The research phase used to identify and select targets.

2. Targeting: Who is the most vulnerable person to attack? What is the biggest vulnerability of the target?

3. Pretexting: The attacker’s cover story.

4. Establishing trust with the target.

5. Manipulating, exploiting, and victimizing.

6. Case studies.


Typical Social Engineering Attacks from a Distance.

1. Phishing Emails.

2. Spear Phishing.

3. Vishing.

4. Smishing.

5. Watering Holes.

6. Spoofing.

7. Baiting.

8. Whaling phishing.

9. Emotional triggers that will make you want to respond - but you shouldn’t.

10. Case studies.

11. Defence.


Is your social media content making you a target?

1. Social media is a primary source of information for attackers.

2. How your social media content can be used against you.

3. Cybersecurity hygiene advice for social media.

4. Attacks through social media.

5. Examples.

6. Defense.


In-person attacks and manipulation techniques.

1. USB traps.

2. Emotional elicitation & exploitation.

3. Time pressure.

4. Authority.

5. Likeability.

6. Intimidation.

7. Reciprocity.

8. Impersonation.

9. Pity & Helpfulness.

10. Commitment & Consistency.

11. Reverse Social Engineering.

12. Examples & Case Studies.

13. Defence.


Physical security.

1. Why social engineers will try to enter your establishment.

2. What assets can be stolen or compromised?

3. Gaining unauthorized access to physical spaces.

4. Tailgating and bypassing physical security measures.

5. Locked does NOT mean secure - lockpicking capabilities.

6. Defence.


Identifying a social engineering attack.

1. Identifying manipulation and deceit.

2. Emotional triggers, emotional exploitation & what to do about it.

3. Verifying intentions - subtly.

4. Case studies.

5. Responding to and deterring a social engineering attack.


Target Audience

The program is highly beneficial for the Board of Directors and C-suite executives.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Christina Lekati, psychologist, security training expert. To learn about her you may visit: https://www.cyber-risk-gmbh.com/About_Christina_Lekati.html


Christina Lekati, Social Engineering Training Expert

6. Social engineering: the targeting and victimization of key people through weaponized psychology

Overview

Threat actors are not interested in attacking just anyone in an organization. High-value individuals are those with elevated access to information, assets, and systems. Board members and the C-suite therefore become high-risk targets by default.

The most effective and frequent method of attacking high-value individuals is weaponized psychology. Board members and C-level executives must learn the answers to the following questions:

- What is the advanced psychological game that threat actors use to compromise their targets?

- How do they find their targets’ vulnerabilities?

- What can we do to avoid being exploited by a determined adversary with a carefully planned attack?

High-value individuals must understand the threat in order to protect themselves and their organization from cyberattacks, industrial espionage, competitors, and other threat actors lurking online and offline.


Target Audience

The program is highly beneficial for the Board of Directors, C-suite executives, and professionals with privileged access to sensitive corporate information.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Christina Lekati, psychologist, security training expert. To learn about her you may visit: https://www.cyber-risk-gmbh.com/About_Christina_Lekati.html


Christina Lekati, Social Engineering Training Expert

7. State-sponsored but independent hacking groups. The long arm of countries that exploit legal pluralism and make the law a strategic instrument

Overview

According to Article 51 of the U.N. Charter: “Nothing in the present Charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security.”

But is a cyber-attack comparable to an armed attack?

There is no international consensus on a precise definition of a use of force, in or out of cyberspace. Nations assert different definitions and apply different thresholds for what constitutes a use of force.

For example, if cyber operations cause effects that, had they been caused by traditional physical means, would be regarded as a use of force under jus ad bellum, then such cyber operations would likely also be regarded as a use of force.

Important weaknesses of international law include the assumption that military and civilian targets can be isolated with sufficient clarity, and that a tangible military objective to be attained by an attack can be clearly identified.

More than 20 countries have announced their intent to use offensive cyber capabilities, in line with Article 2(4) and Article 51 of the United Nations (UN) Charter.

Unfortunately, these capabilities will not help when the attackers are state-sponsored groups and the states supporting them claim not only that they are not involved, but also that their adversaries (the victims) have fabricated the evidence. This is a very effective disinformation operation.

Adversaries have already successfully exploited the weaknesses of non-authoritarian societies, especially the differing political and legal interpretations of facts by different political parties. It is difficult to use offensive cyber capabilities in line with democratic principles and international law, as it is almost impossible to distinguish with absolute certainty between attacks from states and attacks from state-sponsored independent groups.

Even when intelligence services know that an attack comes from a state using a state-sponsored independent group, they cannot disclose the information and evidence that support their assessment, as disclosures about technical and physical intelligence capabilities and initiatives can undermine current and future operations. This is the "second attribution problem": they know, but they cannot disclose what they know.

As an example, we will discuss the data breach at the U.S. Office of Personnel Management (OPM). OPM systems held information related to the background investigations of current, former, and prospective federal government employees, U.S. military personnel, and others for whom a federal background investigation was conducted. The attackers now have access to information about federal employees, federal retirees, and former federal employees: military records, veterans' status information, addresses, dates of birth, job and pay history, health insurance and life insurance information, pension information, data on age, gender, and race, even fingerprints.

But why?

Aldrich Ames, a former intelligence officer turned mole, has said: “Espionage, for the most part, involves finding a person who knows something or has something that you can induce them secretly to give to you. That almost always involves a betrayal of trust.”

Finding this person is much easier if you have data that can easily be converted to intelligence, like the data stolen from the U.S. Office of Personnel Management (OPM). This leak poses a direct risk to critical infrastructure.

There are questions to be answered, and decisions to be made, not only about tactics and strategy, but also about political and legal interpretation.

We tailor the program to meet specific requirements. You may contact us to discuss your needs.


Target Audience

The program is highly beneficial for the Board of Directors, C-suite executives, and professionals with privileged access to sensitive corporate information.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

George Lekatis. For information about his background and experience, you may visit: https://www.cyber-risk-gmbh.com/About.html


George Lekatis

8. Deception, disinformation, misinformation, propaganda, and the role of the Board.

Overview

Misinformation is incorrect or misleading information.

Disinformation is false information, deliberately and often covertly spread, in order to influence public opinion, or obscure the truth.

Propaganda is a broader and older term. Propaganda uses disinformation as a method. While the French philosopher Jacques Driencourt asserted that everything is propaganda, the term is most often associated with political persuasion and psychological warfare.

Psychological warfare is the use of propaganda against an enemy (or even a friend who could become an enemy in the future), with the intent to break his will to fight or resist, or to render him favorably disposed to one's position.

In deception (according to Bell and Whaley), someone is showing the false and hiding the real. Hiding the real is divided into masking, repackaging, and dazzling, while showing the false is divided into mimicking, inventing, and decoying.

People are remarkably bad at detecting deception and disinformation.

They often trust what others say, and usually they are right to do so. This is called the "truth bias". People also tend to believe something when it is repeated, and to believe what they learn first; subsequent rebuttals may reinforce the original information rather than dispel it.

Humans have an unconscious preference for things they associate with themselves, and they are more likely to believe messages from users they perceive as similar to themselves. They believe that sources are credible if other people consider them credible. They trust fake user profiles with images and background information they like.

Citizens must understand that millions of fake accounts follow thousands of real and fake users, creating the perception of a large following. This large following enhances perceived credibility, and attracts more human followers, creating a positive feedback cycle.

People are more likely to believe others who are in positions of power. Fake accounts use false credentials, such as false affiliations with government agencies, corporations, activists, and political parties, to boost credibility.

Freedom of information and expression are of paramount importance in many cultures. The more freedom of information we have, the better. But the more information we have, the more difficult it becomes to understand what is true and what is false. The right to expression and the freedom of information can be used against citizens. We often see the weaponization of information.

The Internet and social media are key game-changers in exploiting rights and freedoms. In the past, a secret service had to work hard to get disinformation into the press. Today, the Internet and social media offer the opportunity to spread limitless fake photos, reports, and "opinions". Many secret services wage online wars using Twitter, Facebook, LinkedIn, Instagram, Pinterest, Viber, etc. The only limit is imagination.

Social media platforms, autonomous agents, and big data are directed towards the manipulation of public opinion. Social media bots (computer programs that mimic human behaviour and conversations using artificial intelligence) allow for the massive amplification of political views; they manufacture trends, game hashtags, add content, spam the opposition, and attack journalists and people who tell the truth.

In the hands of state-sponsored groups, these automated tools can be used both to boost and to silence communication and organization among citizens.

Over 10 percent of content across social media websites, and 62 percent of all web traffic, is generated by bots, not humans. Over 45 million Twitter accounts are bots, according to researchers at the University of Southern California.

Machine-driven communications tools (MADCOMs) use cognitive psychology and artificial-intelligence-based persuasive techniques. These tools spread information, messages, and ideas online for influence, propaganda, counter-messaging, disinformation, espionage, and intimidation. They use human-like speech to dominate the information space and capture the attention of citizens.

Artificial intelligence (AI) technologies enable computers to simulate cognitive processes, such as elements of human thinking. Machines can make decisions, perceive data or the environment, and act to satisfy objectives.

The rule of the people, by the people, and for the people, requires citizens who can make decisions in areas they do not always understand. When citizens understand the online environment, they are far better prepared to protect their families, their working environment, and their country.


Target Audience

The program is highly beneficial for the Board of Directors, C-suite executives, and professionals with privileged access to sensitive corporate information.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Our instructors are professionals with extensive, real-world experience in their respective fields. They are equipped to deliver full-time, part-time, or short-form programs, all customized to suit your specific requirements. Beyond teaching, our instructors provide hands-on guidance, offering real-world insights that help bridge the gap between theory and practice. You will always be informed ahead of time about the instructor leading your program.



9. Cyber espionage, intellectual property theft, and the role of the Board.

Overview

Intelligence is the collection of information that has military, political, or economic value.

Intelligence refers to both:

- information that is collected by clandestine means,

- information available through conventional means.

Espionage is a set of intelligence gathering methods.

The Oxford English Dictionary defines espionage as “the practice of spying or of using spies, typically by governments, to obtain political and military information.”

Merriam-Webster's Dictionary offers a slightly different definition: espionage is “the practice of spying or using spies, to obtain information about the plans and activities especially of a foreign government or a competing company.”

The U.S. Federal Bureau of Investigation (FBI) defines economic espionage as "the act of knowingly targeting or acquiring trade secrets to benefit any foreign government, foreign instrumentality, or foreign agent."

According to the 2019 Situation Report of the Swiss Federal Intelligence Service (FIS): "Espionage is driven by a variety of different motives and has more than one aim. For example, states strive, using information obtained by their intelligence services, to gain a fuller picture of the situation in order to improve the effectiveness of their actions.

It can furthermore be observed that information is increasingly being procured with the aim of influencing (in so-called influence operations) or damaging the actions of rivals. Both can be achieved through the selective publication of information. The aim of such activities is often to weaken the cohesion of international groups or institutions and thereby to restrict their ability to act."

Cyber is a prefix used to describe new things that are possible as a result of the spread of interconnected computers, systems, and devices. It relates to data processing, data transfer, or information stored in systems.

With the word cyber we also refer to anything relating to computers, systems, and devices, especially the internet.

The prefix cyber has been added to a wide range of words, to describe new flavors of existing concepts, or new approaches to existing procedures.

Intelligence gathering involves human intelligence (HUMINT - information collected and provided by human sources), signals intelligence (SIGINT - information collected by interception of signals), imagery intelligence (IMINT), measurement and signature intelligence (MASINT), geospatial intelligence (GEOINT), open-source intelligence (OSINT), financial intelligence (FININT), etc.

HUMINT is the oldest form of intelligence gathering. Cyber-HUMINT refers to the strategies and practices used in cyberspace, in order to collect intelligence while attacking the human factor.

Cyber-HUMINT starts with traditional human intelligence processes (recruitment, training, intelligence gathering, deception etc.), combined with social engineering strategies and practices.

Cyber espionage includes:

- unauthorized access to systems or devices to obtain information,

- social engineering against the persons who have authorized access to systems or devices, to obtain information.

Cyber espionage involves cyberattacks to obtain political, commercial, and military information.

Cyber espionage and traditional espionage have similar or identical end goals. Cyber espionage exploits the anonymity, global reach, scattered nature, and interconnectedness of information networks, and the deception opportunities that offer plausible deniability.

Economic and industrial espionage, including cyber espionage, represents a significant threat to a country’s prosperity, security, and competitive advantage. Cyberspace is a preferred operational domain for many threat actors, including states, state-sponsored groups, organized crime, and individuals. Artificial intelligence (AI) and the Internet of Things (IoT) introduce new vulnerabilities.

Cyber economic espionage is the targeting and theft of trade secrets and intellectual property. It is usually much larger in scale and scope, and it is a major drain on competitive advantage and market share.

According to Burton (2015), cyber threats can be classified into four main categories: Cybercrime, cyber espionage, cyberterrorism, and cyber warfare.

Cybercrime is crime enabled by, or targeting, computers. Criminal activities can be carried out by individuals or groups with diverse goals such as financial gain, identity theft, and damaging property. Cybercrime is usually financially motivated.

Cyber espionage activities are conducted by state-sponsored cyber attackers "for the purpose of providing knowledge to the states to obtain political, commercial, and military gain" (Burton, 2015).

According to Denning, cyberterrorism is “the convergence of cyberspace and terrorism” that covers politically motivated hacking and operations intended to cause grave harm such as loss of life or severe economic damage.

Cyber warfare involves the use of computers and systems to target an enemy’s information systems. The use of cyber power in military operations is an important force multiplier. Since the armed forces are highly dependent on information technologies and computer networks, disruption of these systems provides great advantages.

Cyberspace is regarded as the fifth domain of warfare after land, sea, air, and space. NATO Secretary General Jens Stoltenberg announced in June 2016 that “the 28-member alliance has agreed to declare cyber an operational domain, much as the sea, air and land are”.

According to the 2019 Situation Report of the Swiss Federal Intelligence Service (FIS): "Espionage operations which have come to light reveal that cyber tools and other communications reconnaissance instruments are being used in parallel and in interaction with human sources.

Depending on the objective, information is also being procured exclusively via cyberspace. The latter has gained in importance insofar as the use of cyber-based information-gathering tools has proven successful for many actors.

Cyber espionage is difficult to detect, the perpetrators can hardly be successfully prosecuted, as the purported country of origin does of course not help to elucidate the affair and determination by the means of intelligence of the origins of the cyber-attack ('attribution') can simply be denied based on the lack of provability."

A major challenge today is the lack of awareness and training. Many organizations and companies still believe that cyber security is a technical discipline, not a strategic one. They assume that cyber security involves protecting systems from threats like unauthorized access, not the awareness and training of persons who have authorized access to systems and information.


Target Audience

This presentation will be delivered exclusively in person during a quarterly Board meeting, featuring tailored case studies specific to an organization’s needs. It will not be available online or via Zoom or similar applications.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

George Lekatis. For information about his background and experience, you may visit: https://www.cyber-risk-gmbh.com/About.html


George Lekatis

10. Steganography in business intelligence and intellectual property theft, and the role of the Board.

Overview

At first glance, the connection between steganography and the Board might seem unconventional, but it effectively highlights an important and often overlooked risk in critical sectors.

Steganography is the art and science of concealing a message, image, or file within another message, image, or file, and of communicating in a way that hides the very existence of the message and the communication. For example, a message can be hidden inside a graphic image file, an audio file, or another file format, in a way that makes it difficult for steganography experts, and nearly impossible for everyone else, to find it.

The word steganography comes from the Greek words στεγανός (covered or concealed) and γράφω (to write). The payload is the data that has been hidden, and the carrier is the medium (such as a file) that conceals the payload.
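To make the payload and carrier idea concrete, here is a minimal, illustrative sketch of least-significant-bit (LSB) embedding, the simplest steganographic technique: each byte of a carrier (for example, raw image pixel data) gives up its lowest bit to store one bit of the payload. The function names and the byte-array carrier are our own illustration, not any specific tool.

```python
def embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide the payload in the least significant bit of each carrier byte."""
    # Payload bits, most significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    stego = bytearray(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego


def extract(carrier: bytes, length: int) -> bytes:
    """Recover `length` payload bytes from the carrier's lowest bits."""
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )
```

Because only the lowest bit of each carrier byte changes, a pixel value shifts by at most one, which is invisible to the eye. This is exactly why detection requires statistical analysis rather than visual inspection.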

Steganography is different from cryptography. Cryptography is the art of secret writing: it makes a message unreadable by a third party, but it does not hide the existence of the message. Steganography conceals the message itself.

It is relatively easy to identify an encrypted file, but usually not so easy to decrypt it. Analysts may be able to identify the encryption method by examining the file header, identifying encryption programs installed on the system, or finding encryption keys (which are often stored on other media).

With steganography, everything is more complex and difficult. Analysts must first find the file that hides another, encrypted file (by looking for multiple versions of the same image, identifying the presence of grayscale images, searching metadata and registries, using histograms, and using hash sets to search for known steganography software). Only then can they extract the embedded data, and they still have to find the encryption key, as the hidden file is usually encrypted as well.

Steganography can be very useful. Using digital watermarking, an author can embed a hidden message in a file so that ownership of the intellectual property can be proved. Artists can post artwork on a website, and if others claim ownership of the work, the artists can prove it by recovering the watermark. Steganography also has a number of nefarious applications: criminals can more easily hide records of illegal activity and financial crimes, and terrorists can more easily exchange messages.

Steganalysis is the analysis of steganography. It involves the detection of hidden data, the extraction of the hidden message, and sometimes the alteration of the hidden message so that the recipient cannot extract it, or receives a different message.

Many steganalysis tools are signature-based (similar to antivirus and intrusion detection systems). There are also anomaly-based steganalysis systems, which are more flexible and better suited to detecting new steganography techniques.
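As a toy illustration of the anomaly-based idea (our own sketch, not a production steganalysis tool): encrypted payloads are statistically random, so data whose lowest bits carry embedded ciphertext tends toward an even split of 0s and 1s, while many kinds of untouched data show a bias. Real tools use far stronger statistics (chi-square tests, histogram analysis), but the principle is the same.

```python
def lsb_bias(data: bytes) -> float:
    """Fraction of bytes whose least significant bit is 1."""
    return sum(b & 1 for b in data) / len(data)


def looks_embedded(data: bytes, tolerance: float = 0.02) -> bool:
    # A near-perfect 50/50 LSB split is one (weak) hint that random
    # ciphertext has been embedded; a strong bias suggests untouched data.
    return abs(lsb_bias(data) - 0.5) < tolerance
```

A single statistic like this produces many false positives and negatives on real files; practical anomaly-based systems combine many such features before flagging a file for deeper analysis.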

New, complex steganography methods continue to emerge. Spread-spectrum steganography methods are similar to spread-spectrum radio transmissions (where the signal is spread across a wide frequency spectrum rather than focused on a single frequency, in an effort to make detection and jamming more difficult). In spread-spectrum steganography, small distortions to images are less detectable in bright colors, so the hidden message is stored in bright colors only, rather than in every color. You can also check the Biosteganography link at the top of the webpage.


Case study, steganography used in espionage, organized crime, and terrorism.

Consider the following scenario. Every Friday afternoon (in the target's time zone), a member of a foreign state-sponsored group puts an item for sale on eBay and posts a photograph of it. The item for sale is real, and it will be sold according to the rules of eBay: bids are accepted, money is collected, and items are delivered. The photograph of the item hides a message, but it is just one of the many millions of photos on eBay. Anybody in the world can download the photo, but only members of the same foreign state-sponsored group know how to extract the encrypted message and how to decrypt it.


What can we do?

Corporate security and acceptable use policies, which detail what employees are authorized to do within the corporate environment, always help and must form the first line of defense. Awareness training for all employees, which explains why they must respect policies and covers the modus operandi and risks of steganography attacks, is of paramount importance.

User policies explain what is prohibited, and they provide an organization with the legal means to punish or prosecute violators.

We must clearly state in our policies that every line of code or piece of software that is not approved is strictly prohibited. In this way, we will avoid most of the following:

- anti-forensics tools (used to thwart digital forensic investigations, such as drive-wiping tools, cache and history erasers, file property and timestamp alteration tools, VPNs, and e-mail and chat log erasers),

- encryption or steganography tools (there are over 1,000 free steganography tools online, most of them very dangerous for anyone who downloads the "free" tool, or even visits these websites. On some websites we read: "This application does not require installation. You can copy the program files to an external data device, so as to run it on any computer you can get your hands on, with just a click of the button. It is not adding new items to the Windows registry or hard drive without your approval, as installers usually do, and it will not leave any traces behind"),

- exploit kits (programs designed to exploit a known vulnerability in a piece of software or online resource. They are often distributed as a package, enabling attackers with limited knowledge to launch sophisticated attacks),

- toolkits (that enable unsophisticated users to construct new malware applications, sometimes not detectable by standard signature-based virus scanning engines),

- keyloggers (designed to covertly monitor keystrokes on a device. Once a device has been compromised, all keystrokes, including passwords, can be monitored and recorded),

- password cracking tools (designed to break password-protected files and accounts),

- sniffers (which capture and analyze network traffic. Many protocols, including FTP and chat, are not encrypted. These programs obtain cleartext information and also collect packets that can be used to crack network passwords and find protected files, servers, and user accounts),

- spyware tools (for industrial espionage, unauthorized monitoring, and collection of proprietary data),

- piracy tools (which allow users to bypass copyright protection in various forms of media, making illegal copies and saving them to a storage medium).

There are unlimited methods of steganography; imagination is the only limit. We usually learn about encrypted messages hidden in large files (images, sound files, videos, etc.), and nothing more. Although steganography is usually considered a technical problem, it is not only that. It is also a business intelligence (or simply intelligence) problem. If we do not know where to look for hidden messages, we are very unlikely to find them. Only cooperation between the public and the private sector can protect against these security threats.


Target Audience

This presentation will be delivered exclusively in person during a quarterly Board meeting, featuring tailored case studies specific to an organization’s needs. It will not be available online or via Zoom or similar applications.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

George Lekatis. For information about his background and experience, you may visit: https://www.cyber-risk-gmbh.com/About.html


George Lekatis

11. Cyber Proxies and the role of the Board.

Overview

The word proxy is interesting. In Latin, procuro means to manage or administer, from pro (“on behalf of”) and curo (“I care for”).

Today a proxy is a person or entity who is authorized to act on behalf of another person or entity.

Countries expand their global intelligence footprint to better support their growing political, economic, and security interests around the world, increasingly challenging existing alliances and partnerships. They employ an array of tools, especially influence campaigns, to advance their interests or undermine the interests of other countries. They turn a power vacuum into an opportunity.

Countries use proxies (state-sponsored groups, organizations, organized crime, etc.) as a way to accomplish national objectives while limiting cost, reducing the risk of direct conflict, and maintaining plausible deniability.

With plausible deniability, even if the target country is able to attribute an attack to an actor, it is unable to provide evidence that a link exists between the actor and the country that sponsors the attack.

According to Tim Maurer, a proxy is an intermediary that conducts or directly contributes to an offensive cyber operation that is enabled knowingly, actively or passively, by a beneficiary who gains advantage from its effect.

Cyber proxies are valuable actors in political warfare. This is the employment of military, intelligence, diplomatic, financial, and other means, short of conventional war, to achieve national objectives. It encompasses the exploitation of computer networks and platforms, electronic warfare, psychological operations, and information operations.

For some countries, the main battlespace is the mind. With information and psychological warfare, these countries can morally and psychologically depress the enemy’s armed forces personnel and civil population.

In 2019, the United States spent $732 billion on defense, compared to Russia’s $65.1 billion. It is obvious that Russia and other countries in similar position will try to find less expensive means to counter big, expensive U.S. weapons and systems. Cyber espionage is especially economical when countries conduct activities through proxies.

Countries actively create fertile ground for malicious activities. Cyber actors (including cyber criminals, hacktivists, and political, economic, and religious groups) continually operate from within the sphere of influence of the sponsoring country, with the understanding that their illegal activities will be tolerated as long as they also support the objectives of that country.

As John Carlin, former U.S. Assistant Attorney General for National Security, has stated, what we are seeing are the world’s most sophisticated intelligence operations when it comes to cyber espionage, using criminal groups for their intelligence ends and protecting them from law enforcement.

Cyber threats posed by cyber proxies must be managed, and the laws must be changed in this area. Publicly attributing malicious cyber activity to a country in a timely manner and holding that country accountable is difficult, but necessary. If international law is unable to solve these problems, national policies will ignore international law and confront cyber adversaries through rapid attribution and offensive countermeasures, to deter future aggression.


Target Audience

This presentation will be delivered exclusively in person during a quarterly Board meeting, featuring tailored case studies specific to an organization’s needs. It will not be available online or via Zoom or similar applications.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

Our instructors are professionals with extensive, real-world experience in their respective fields. They are equipped to deliver full-time, part-time, or short-form programs, all customized to suit your specific requirements. Beyond teaching, our instructors provide hands-on guidance, offering real-world insights that help bridge the gap between theory and practice. You will always be informed ahead of time about the instructor leading your program.



12. This is an exotic risk, but this is not an excuse for inaction.

Examples

Exotic risk, example 1 - The external HR departments.

How many HR departments exist in a critical entity? Officially one, but the threat actors' shadow HR teams are always hiring and promoting. This is not a joke. Threat actors are not just profiling; they are running a shadow leadership program, working to promote targets who can be blackmailed, bribed, and manipulated into roles with more access and responsibility.

Sophisticated threat actors commonly develop detailed profiles of individuals working in critical entities. They increasingly deploy personalized cyber attacks that exploit psychological vulnerabilities. By analyzing an individual’s behaviors and patterns, these attackers can design highly targeted and effective attacks.

Their favorite targets often include individuals with psychological disorders. An example is Obsessive-Compulsive Disorder (OCD), a mental health condition characterized by persistent, intrusive thoughts (obsessions) and repetitive behaviors (compulsions) performed to reduce anxiety or distress. This disorder affects approximately 1–2% of the global population and can significantly affect daily functioning, relationships, and quality of life.

Obsessions are intrusive and unwanted thoughts or urges that cause significant distress or anxiety. Examples include contamination fears (excessive worry about germs, dirt, or illness), symmetry and order, and intrusive aggressive thoughts. Compulsions are repetitive behaviors or mental acts performed to neutralize obsessions or prevent perceived harm. Examples include cleaning (excessive handwashing or cleaning of objects and spaces), excessive checking (repeatedly ensuring doors are locked, appliances are off, or mistakes have not been made), performing actions a specific number of times, and reassurance-seeking (asking others for validation to alleviate anxiety).

According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), OCD is diagnosed if obsessions/compulsions are time-consuming (e.g., take more than 1 hour per day), cause significant distress or impairment, and are not better explained by another mental health issue, such as generalized anxiety disorder. But what if the disorder takes 50 minutes per day, or causes significant distress or impairment that is covered, hidden, or tolerated by polite colleagues?

Threat actors have a window of opportunity due to the challenges in addressing OCD. Individuals with OCD symptoms have a high rate of treatment resistance, and there is a delay in diagnosis and treatment, often spanning several years.

Where is the opportunity for threat actors? Individuals with OCD symptoms have an overwhelming desire to correct errors or achieve flawless outcomes. They are often very sensitive to threats related to viruses (physical and digital).

The most sophisticated cyber attacks often begin with a simple and common first step, one that opens the door for a highly complex operation.

Example 1 – Exploiting the need for perfection: A phishing email claims there’s an error in the victim’s online profile. The victim clicks on a link to “correct” the issue. This attack plays on the victim’s obsession with avoiding mistakes or achieving perfection.

Example 2 – Exploiting intrusive thoughts: Threat actors send threatening messages claiming knowledge of the victim’s “secrets”. The victim is coerced into cooperating and providing sensitive information.

Persons with OCD symptoms are not inherently less intelligent or capable than anyone else. In fact, many individuals with OCD possess high levels of intelligence, focus, and problem-solving abilities. They can learn to recognize that they are targeted. This is supported by research on cognitive-behavioral approaches to addressing vulnerabilities.

We can use OCD against threat actors, by explaining to possible victims the troubles they will get in by responding to phishing emails, fake job offers, or urgent requests exploiting psychological vulnerabilities. Once trained, they will follow security protocols rigorously. Their natural inclination to prevent mistakes can be channeled into careful adherence to security and cybersecurity best practices.

We are not doctors, and our opinion about disorders does not constitute medical advice or medical assistance of any kind.


Exotic risk, example 2 - Cyber Espionage-as-a-Service (CEaaS).

Cyber Espionage-as-a-Service (CEaaS) refers to a professionalized, commodified approach to cyber espionage where actors provide espionage tools, techniques, and operational capabilities to clients for a fee. These services are often marketed through dark web forums, making them accessible to a range of actors, from nation-states to corporations to organized crime groups.

What could CEaaS providers offer? We can start with custom malware development (tailored spyware, ransomware, or backdoors designed to exploit specific targets). They also offer phishing kits (ready-to-deploy phishing campaigns with custom research, templates, domains, and hosting). They offer harvesting services (data extraction from targeted organizations, including passwords, proprietary information, and trade secrets). They can offer one-stop shop solutions, adding exfiltration infrastructure (secure, anonymized channels for transmitting stolen data).

We will not be surprised if these providers offer loyalty programs too (every 10 hacks, you get one for free). Does it look like a joke? Have a look at their business model: They do offer subscription-based services (monthly fees for continuous access to tools and updates), one-time payments (single transactions for specific attacks or tools), profit-sharing agreements (a results-oriented model, taking a share of the profits derived from stolen data), and service level agreements (SLAs – they offer guarantees for results, data delivery, or attack success rates).

Cyber Espionage-as-a-Service (CEaaS) can also be used as a tool to overwhelm targets, creating distractions or diversions that make attacks by the real threat actor more effective and harder to understand and attribute. This tactic leverages the noise and chaos created by multiple simultaneous or consecutive cyber events to obscure the true origin or intent of the primary attack.

By launching attacks on various aspects of the target's infrastructure, CEaaS creates a multi-front challenge. This confuses defenders and forces them to spread their resources thin, reducing their ability to identify and counter the real threat. CEaaS providers may plant fake indicators of compromise (IOCs), or use tools associated with known cybercriminal groups. It shifts suspicion away from the true perpetrators and complicates attribution efforts.

The use of CEaaS actors introduces layers of plausible deniability. Even if the tools or methods point to specific groups, it becomes challenging to establish direct links to state actors.

With repeated, visible cyberattacks, they create an environment where stakeholders focus on immediate damage control, overlooking covert activities.

Albert Einstein once observed, “Confusion of goals and perfection of means seems to characterize our age.” His insight resonates deeply in the realm of cyber espionage. The rise of CEaaS exemplifies the perfection of means (sophisticated tools, professionalized services, and efficient execution), all available for hire. The confusion of goals is accomplished with attribution masking (employing proxies or third-party groups to carry out the attacks), false flags (leaving behind evidence to implicate another actor), and global infrastructure (leveraging servers and systems worldwide to confuse attribution efforts).


Exotic risk, example 3 - The connection between country risk and environmental compliance.

Environmental compliance and country risk may appear as separate concerns, but they can be deeply interconnected. Environmental factors influence country risk, and can be weaponized, as they increasingly impact political stability, economic performance, and legal frameworks.

Country risk refers to the potential risks and uncertainties associated with investing in or conducting business in a particular country. These risks stem from a variety of factors, including political, economic, legal, and social conditions in the country. Country risk can impact businesses, investors, and governments, and it is often analyzed to determine the feasibility of entering a market or engaging in financial activities in a given region.

For example, country risk is a critical consideration under Basel III, influencing capital adequacy, liquidity management, and overall risk governance. By integrating country risk into their frameworks, banks can better withstand the challenges of cross-border operations and contribute to global financial stability.

Political risk refers to the potential for losses or adverse effects on business operations, investments, or assets due to political changes or instability in a country. These risks arise from decisions or events within a country's political or legal framework that can impact businesses, investors, or other stakeholders. Political risk is a key component of country risk and is critical for businesses operating internationally or investing in foreign markets.

Hybrid warfare strategies increasingly incorporate environmental risk as a tool to shape geopolitical landscapes, destabilize nations, and influence political risk. Adversaries recognize that environmental conditions—both natural and artificially induced—can serve as force multipliers in conflict, economic coercion, and disinformation campaigns.

Artificially induced environmental disasters can trigger ecological crises, contamination, and accidents. We must recognize that environmental risk is no longer just a passive factor but an active domain in geopolitical conflict.


Exotic risk, example 4 - The “Harvest Now, Decrypt Later” risk.

Adversaries already follow the “Harvest Now, Decrypt Later” strategy. It refers to a security threat where adversaries collect (“harvest”) encrypted data today, with the intention of decrypting it in the future when they have access to quantum computers powerful enough to break the encryption methods currently in use.

The assumption in cryptography that “they will see it, but they will not understand it” has historically hinged on the strength of encryption: adversaries might intercept encrypted data but cannot decipher it without the appropriate key. This belief is rooted in the computational infeasibility of breaking encryption with classical methods. However, the emergence of quantum computing fundamentally changes this dynamic, challenging long-standing assumptions about cryptographic security.

For years, no explicit legal framework addressed the post-quantum threat. Laws were technology-agnostic and assumed the continued robustness of cryptography. For example, Article 32 of the General Data Protection Regulation (GDPR) in the EU requires organizations to implement appropriate technical and organizational measures, including encryption, to ensure data security, implying that existing encryption could ensure security. Under GDPR Article 25, encryption must be incorporated into the design of systems handling personal data. But what constitutes “appropriate measures” and “data protection by design and by default” in the quantum era?

Governments and international bodies are beginning to draft regulations that address the anticipated challenges of quantum decryption. Bodies (like the US National Institute of Standards and Technology) are creating post-quantum cryptography standards, which may become mandatory under future laws.

Should we worry about retroactive exposure? In a legal context, this is the situation where an organization becomes liable for events, actions, or omissions in the past that were considered secure at the time but later proved to be problematic. In the realm of cybersecurity and data protection, particularly in light of quantum computing, retroactive exposure takes on new dimensions. Breaches that occur years later due to quantum decryption could still trigger liability if the organizations failed to adopt protective measures when the threat was foreseeable.

Personally Identifiable Information (PII) such as names, social security numbers, and birthdates can be exploited decades after the collection. Medical histories and genetic data remain sensitive. Bank account records, credit histories, and tax records can be exploited over extended periods. Biometric identifiers like fingerprints, retina scans, or facial recognition data can be misused indefinitely.

The timeline for adversaries to decrypt data using quantum computers depends on several factors, including the pace of advancements in quantum computing. We often read that it could take 10 years. In my opinion, it could take way less for well-funded entities, like a secret service or a government.

It is very unlikely that secret services would openly disclose that they possess quantum computing capabilities capable of breaking encryption. The primary advantage of having quantum decryption capabilities is information asymmetry. An entity with such capabilities can intercept and decrypt communications without the target knowing, giving them a strategic edge in intelligence and counterintelligence operations.

Quantum computing disrupts the foundation of classical cryptography, creating an environment where today’s secrets may become tomorrow’s vulnerabilities. The transition to post-quantum cryptography is not just a technical upgrade but a strategic necessity to ensure the longevity of data security in an evolving threat landscape. The game has changed, and organizations must act to adapt.

What can we do? What is the first step? We must identify sensitive data with a long security lifespan, and we must start understanding and preparing for quantum-resistant algorithms.
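A common way to frame this first step is Mosca's inequality from post-quantum risk planning: if the number of years the data must stay secret (x) plus the years a migration to quantum-resistant algorithms will take (y) exceeds the estimated years until a cryptographically relevant quantum computer exists (z), the data is already exposed to "harvest now, decrypt later". The sketch below is illustrative; the year figures are assumptions, not forecasts.

```python
def at_risk(shelf_life_years: float,
            migration_years: float,
            years_until_quantum: float) -> bool:
    """Mosca's inequality: data is exposed when x + y > z."""
    return shelf_life_years + migration_years > years_until_quantum


# Illustrative figures only: records that must stay confidential for
# 25 years, a 5-year migration project, and a 10-year quantum horizon.
# Ciphertext harvested today would outlive its protection by 20 years.
exposed = at_risk(25, 5, 10)
```

The exercise is less about the arithmetic than about forcing an inventory: for each data class, the Board should know its required secrecy lifespan and the realistic migration timeline.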


Exotic risk, example 5 - Space risk.

Space risk presents a dynamic legal landscape where innovation often outpaces regulation. Space risk refers to the potential for financial, operational, legal, or reputational harm arising from activities conducted in outer space.

The Outer Space Treaty (OST, 1967) forms the cornerstone of space law, establishing the principle that outer space, including the moon and other celestial bodies, is the “province of all mankind.” It prohibits the placement of nuclear weapons in space and emphasizes that space activities must be conducted for the benefit of all countries. Unfortunately, the OST has no explicit provisions against anti-satellite (ASAT) weapons.

Today, dual-use technologies blur the line between peaceful and military purposes. In addition, private companies are exploring the mining of asteroids and the moon. The OST prohibits national appropriation of celestial bodies, raising questions about private ownership; yet some national laws permit private entities to claim resources, creating potential conflicts with international law.

The growing privatization of space activities introduces unique legal challenges, like contractual disputes between operators, manufacturers, and insurers.

- The weaponization of space refers to the placement and use of weapons in outer space, which can include kinetic, non-kinetic, directed energy, and cyber-based systems.

- The militarization of space refers to the use of space-based assets (e.g., satellites) for military purposes, such as communication, reconnaissance, and navigation.

Kinetic weapons rely on physical force, impact, or explosion to damage or destroy a target. Examples include anti-satellite missiles (designed to destroy satellites or space infrastructure) and kinetic kill vehicles (devices that collide with targets in space, neutralizing them through impact). Kinetic weapons create space debris, which can endanger other space assets for decades.

Non-kinetic weapons disable, disrupt, or degrade a target’s capabilities without physical impact or destruction. They operate through non-physical means, such as electromagnetic interference or manipulation. Examples include jamming devices (which interfere with communication signals, rendering satellites or ground systems inoperable) and electromagnetic pulse weapons (which emit bursts of electromagnetic energy to disable electronics). Detection and attribution are very challenging.

Space is no longer just an enabler of hybrid warfare; it is becoming a direct arena for strategic conflict. The consequences extend far beyond governments and militaries. Private companies, financial markets, and everyday citizens are highly dependent on space-based infrastructure, making them vulnerable to disruption, manipulation, and coercion.


Exotic risk, example 6 - Geoengineering risk.

Geoengineering risk refers to the potential unintended consequences, legal challenges, geopolitical conflicts, and ethical dilemmas associated with the deliberate, large-scale manipulation of Earth's climate systems. Geoengineering itself is typically an effort to mitigate or reverse climate change, involving technologies designed to alter atmospheric, oceanic, or terrestrial processes.

Solar Radiation Management (SRM) technologies reflect a portion of the sun’s energy back into space to cool the earth. A good example is the stratospheric aerosol injection, where reflective particles like sulfur dioxide are injected into the stratosphere to mimic volcanic eruptions.

Carbon Dioxide Removal (CDR) technologies remove CO₂ from the atmosphere and store it safely. A good example is ocean fertilization, where nutrients are added to oceans to stimulate algae growth, which absorbs CO₂.

Unfortunately, geoengineering interventions could disrupt ecosystems, weather patterns, and biodiversity. They may cause irreversible damage, such as altering ocean chemistry through fertilization.

Geoengineering also creates geopolitical risks. A single nation or entity acting independently could trigger international disputes, especially if adverse effects are felt by others. Weaponization of geoengineering refers to the use of geoengineering technologies as tools of geopolitical leverage, conflict, or coercion: climate manipulation strategies, originally designed to address climate change, could be intentionally exploited to harm adversaries, disrupt ecosystems, or achieve strategic dominance. Given the transformative potential of these technologies, their misuse poses serious risks to global security and stability.

Cyber weaponization of geoengineering systems includes cyberattacks targeting geoengineering deployment mechanisms to redirect or misuse technologies. For example, hacking such systems can cause targeted disruptions in a rival nation’s climate and infrastructure.

Identifying the actors responsible for weaponized geoengineering could be difficult. Existing international laws do not specifically address the weaponization of geoengineering, leaving a regulatory vacuum.

Climate is a new battlefield. Geoengineering could expand the concept of warfare from land, sea, air, space, and cyber to the climate itself. Nations could disrupt economies without firing a shot, and turn climate into a geopolitical bargaining chip.


Exotic risk, example 7 - AI-driven attacks, AI-driven defences.

Emerging trends like AI-driven attacks (and AI-driven defences), the quantum cryptography arms race, deepfake-as-a-service (DFaaS), and cyber-physical espionage are reshaping the landscape. Cyber-physical espionage covers covert activities that exploit vulnerabilities in interconnected cyber-physical systems (CPS) to gather intelligence, sabotage operations, or influence critical infrastructure.

We are entering an era where AI-powered attackers and defenders compete in a dynamic and escalating arms race. This “AI vs. AI” development transforms traditional strategies and introduces complex challenges and opportunities for all stakeholders.

Attackers use AI to enhance reconnaissance, as AI automates intelligence gathering, identifying potential targets and weak points in real time. Natural language processing (NLP) can analyze communications, social media, or documents to reveal exploitable information. AI identifies vulnerabilities faster than human attackers, including zero-day vulnerabilities, and generates exploits dynamically. AI can also tailor phishing emails or social engineering attempts by analyzing target behavior, language patterns, and preferences.

AI-powered malware can learn and evolve during an operation, adapting to bypass detection mechanisms and defenses. AI can modify attack payloads based on the real-time responses of a target system.

Defenders are leveraging AI to counteract advanced attacks. In threat detection and response, AI analyzes vast amounts of data to detect anomalies and potential threats that traditional methods might miss. Behavioral analysis can identify unusual patterns in network traffic, user behavior, or system activity.

AI systems can quarantine infected devices, block suspicious activity, and deploy patches autonomously. Predictive AI models help security teams anticipate the attacker’s next move. AI can also create dynamic honeypots that lure attackers into fake environments, gathering intelligence on their tactics without risking real assets. Machine learning helps ensure these traps remain convincing and adaptive.
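To make the idea of behavioral anomaly detection concrete, here is a purely illustrative sketch (not part of any vendor's product or of our briefing material): flag activity that deviates sharply from a historical baseline using a simple z-score test. The traffic figures and the three-sigma threshold are hypothetical; production systems use far richer models.

```python
# Minimal illustrative sketch of behavioral anomaly detection:
# flag observations that deviate sharply from a historical baseline.
# All numbers and the 3-sigma threshold are hypothetical.
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Return observations more than `threshold` standard deviations
    from the baseline mean (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical daily outbound traffic volumes (GB) for one host.
baseline = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1]
observed = [1.0, 0.9, 14.5, 1.1]  # 14.5 GB is an exfiltration-like spike

print(find_anomalies(baseline, observed))  # prints [14.5]
```

Real deployments replace this static threshold with models that learn seasonality and context, which is what allows them to detect the subtler patterns described above.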

The real challenge emerges when both attackers and defenders deploy AI systems simultaneously, leading to complex, real-time interactions. Attackers may probe defender AI systems to understand their algorithms and find ways to exploit blind spots. For example, AI-powered attacks may generate data patterns that confuse or overwhelm detection systems. Defenders, in turn, may apply adversarial machine learning against attacker AI systems to predict and neutralize their strategies.

Where both attacker and defender AIs evolve simultaneously, a feedback loop may arise, with each AI learning from the other's responses. This could lead to rapid escalation in attack sophistication and response. AI vs. AI interactions can result in unpredictable behaviors that experts will struggle to understand.

The rise of AI vs. AI in cyberespionage represents a shift from human-centric operations to an automated, high-speed conflict. Automated systems can respond aggressively to perceived threats. Feedback loops between adaptive systems may escalate conflict beyond initial intentions. AI has blurred the lines between offense and defense, as it can perform pre-emptive actions that resemble offensive measures.

Autonomous decision-making complicates liability, as actions may not directly align with human intentions. The idea that the most aggressive AI could dominate an AI conflict is both compelling and deeply concerning. An aggressive AI could strike first, overwhelming the defender before its countermeasures can adapt. It prioritizes success over caution, exploiting opportunities others might avoid due to ethical concerns. This suggests a future where AI systems prioritize offensive actions over restraint, potentially leading to significant damage, legal ambiguities, and ethical dilemmas.


Target Audience

The program is highly beneficial for the Board of Directors and C-suite executives.


Duration

Our briefings can be as short as 30 minutes while remaining comprehensive, or longer, depending on the needs, the program content, and the case studies. We always tailor the program to the needs of each client.


Instructor

George Lekatis. For information about his background and experience, you may visit: https://www.cyber-risk-gmbh.com/About.html




Cyber Risk GmbH | Cyber Security Training

Cyber security is often boring for employees. We can make it exciting.


Online Cybersecurity Training

Recorded on-demand training and live webinars.


In-house Cybersecurity Training

Engaging training classes and workshops.


Social Engineering Training

Developing the human perimeter to deal with cyber threats.


Cybersecurity Training for the Board

Short and comprehensive briefings for the board of directors.


Cybersecurity Assessment

Open source intelligence (OSINT) reports and recommendations.


High Value Targets Cybersecurity Training

They have the most skilled adversaries. We can help.