Generative AI is not making cyberattacks fully autonomous. What it is already changing is something far more concrete: the speed, scale and accessibility of several stages of offensive operations. It is also changing how organisations expose data, open connectors, distribute permissions and add new layers to their information systems. For technical teams, the question is no longer just “should we use generative AI?”, but “where should we use it, on which perimeter, and with which architectural and operational safeguards?”.
The wrong diagnosis would be to overestimate the magic
The summary published by ANSSI in early February 2026 is useful because it avoids two excesses.
The first would be to treat generative AI as a gadget with no real impact on the threat landscape.
The second would be to believe that it already allows end-to-end cyberattacks to be carried out without human expertise.
The current situation is simpler, and more interesting.
Generative AI has not removed the need for expertise on the attacker side. What it already brings is a clear productivity gain across several steps: target profiling, social engineering content, code generation or adaptation, industrialisation of repetitive tasks, and faster learning for less experienced profiles.
In other words, it does not replace offensive capability. It improves the productivity of those who already have it, and partially lowers the entry barrier for others.
What really changes beyond the cyber angle
For technical teams, it is better to think in terms of “augmented attacks” rather than “autonomous attacks”.
Today, generative AI is mainly used to:
- generate more phishing and social engineering variants, faster;
- adapt content to a sector, a role or a specific context;
- help write, rewrite or transform offensive code and scripts;
- accelerate the sorting, summarisation and exploitation of collected information;
- save time for already organised threat actors.
This matters because it changes the technical response.
If you assume the main threat is a fully autonomous AI attacker, you prepare the organisation poorly.
If you understand that the real issue is acceleration, personalisation and scale, you also start asking better architecture and governance questions:
- what allows an attacker to personalise their approach quickly in our environment?
- which data are we exposing too easily?
- where is our traceability still too weak?
- which internal AI uses could create a new path for exfiltration or compromise?
AI also becomes a new layer of the information system to control
This is the other value of the ANSSI summary.
Generative AI is not only a tool that attackers can abuse. As a system, it also becomes a new layer to govern inside the information system.
As soon as an organisation deploys:
- an internal assistant,
- a coding copilot,
- a search layer backed by an LLM,
- an agent connected to documentation bases or to business applications,
it adds a new layer to its information system. And that layer has to be designed like the rest: permissions, flows, logging, responsibilities and long-term maintenance.
In practice, the risks do not stop at “the model gave a bad answer”.
They also include:
- data exfiltration through prompts, outputs or connectors;
- poisoning of data or knowledge injected into the system;
- compromise of the model supply chain, its dependencies or orchestration components;
- bypassing of safeguards;
- excessive access granted to an assistant that should never see certain resources;
- side effects caused by automating too quickly on top of poorly mapped systems.
The key point is simple: a generative AI system is not outside the information system. It is inside it. It inherits existing weaknesses, and it can create new ones. This is not only a protection issue. It is also a visibility and control issue.
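To make the exfiltration-through-outputs risk concrete, here is a minimal sketch of an egress check applied to assistant responses before they leave the perimeter. The patterns and the `check_output` helper are hypothetical; a real deployment would rely on proper DLP classification rather than a handful of regexes.

```python
import re

# Hypothetical markers for content that should never leave the assistant's
# perimeter; illustrative only, not a substitute for real DLP tooling.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b"),
    re.compile(r"\b\d{16}\b"),  # card-number-like digit runs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def check_output(text: str) -> bool:
    """Return True if the assistant output looks safe to release."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)
```

The same check belongs on connector traffic, not only on chat responses: outputs and outbound flows are two sides of the same exfiltration path.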
Where many teams get it wrong
In practice, the most common mistake is not technical at first. It is architectural.
Many organisations introduce generative AI through use cases:
- internal assistant;
- writing aid;
- coding aid;
- retrieval-augmented knowledge base;
- support chatbot;
- cross-system search.
Only later do they realise that these use cases require deeper decisions:
- which data can the tool access?
- with which permissions?
- in which environments?
- with which logging?
- with which human validation?
- with which separation between public, internal, sensitive or regulated data?
If these questions are addressed too late, the project looks fast at the beginning, then slows down sharply as security, compliance, operations and architecture take control again.
The logic is the same as with technical debt: what looks fluid at first becomes expensive when the architecture was never really thought through.
The real issue: not adding one more opaque layer
For many organisations, the main risk is not “missing AI”. It is adding one more layer to an information system that is already hard to read.
An assistant connected to too many sources, a copilot plugged into a poorly tested codebase, a cross-system search layer without clear permission boundaries: all of this can create a strong sense of quick wins while increasing dependency, confusion and future cost.
From that perspective, generative AI does not only raise a security issue. It reveals a maturity level:
- quality of the system map;
- cleanliness of permissions;
- separation of perimeters;
- quality of logs;
- ability to bound uses;
- ability to take control back when the system gets it wrong.
What technical teams should do now
The right reflex is not to block generative AI everywhere.
The right reflex is to classify uses and apply safeguards at the right level.
1. Distinguish use cases by sensitivity level
Not all use cases are equal.
A writing assistant for public content does not carry the same level of risk as a copilot connected to proprietary code, a retrieval layer linked to internal documents, or an assistant connected to customer data.
The first decision is not “which model should we choose?”
It is “which use case should be allowed on which perimeter?”
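That decision can be made explicit and reviewable as data rather than left implicit in each project. A minimal sketch, assuming hypothetical tier names and use cases; the point is the deny-by-default shape, not the specific values:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    SENSITIVE = 2
    REGULATED = 3

# Hypothetical policy: the highest data tier each use case may touch.
MAX_TIER = {
    "public_writing_assistant": Sensitivity.PUBLIC,
    "coding_copilot": Sensitivity.INTERNAL,
    "internal_retrieval": Sensitivity.SENSITIVE,
}

def is_allowed(use_case: str, data_tier: Sensitivity) -> bool:
    """A use case absent from the policy is denied by default."""
    max_tier = MAX_TIER.get(use_case)
    if max_tier is None:
        return False
    return data_tier <= max_tier
```

A table like this is something security, compliance and architecture can actually review together, long before any model is chosen.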
2. Map connectors and exposed data
In many projects, the risk does not come from the model itself. It comes from the bridges opened around it:
- document repositories;
- internal wiki;
- source code repositories;
- ticketing tools;
- CRM systems;
- knowledge bases;
- business APIs.
A clear map of consulted sources, associated permissions and outbound flows quickly becomes essential.
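Such a map does not need heavy tooling to exist. A minimal sketch of a connector inventory, with hypothetical names; what matters is that each bridge records the identity it acts as and whether data can flow out through it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Connector:
    """One bridge between the AI layer and an existing system."""
    name: str
    source: str       # e.g. internal wiki, CRM, code repository
    identity: str     # the account the connector acts as
    outbound: bool    # can data leave the perimeter this way?

# Hypothetical inventory; the point is that it exists and is reviewable.
CONNECTORS = [
    Connector("docs-search", "internal wiki", "svc-ai-readonly", False),
    Connector("crm-lookup", "CRM", "svc-ai-crm", True),
]

def outbound_paths(connectors):
    """The connectors that can move data out are the ones to audit first."""
    return [c.name for c in connectors if c.outbound]
```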
3. Strictly frame code and administration use cases
Coding assistants and operational assistants create a strong sense of productivity.
But on top of poorly tested or poorly documented systems, they can also accelerate the spread of bad patterns, fragile configurations and decisions taken without enough distance.
The right level of use is neither “let it run” nor “ban it completely”. It is:
- assistance on clearly bounded perimeters;
- systematic human review;
- logging;
- clear separation between suggestion, validation and execution.
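The separation between suggestion, validation and execution can be enforced structurally rather than by convention. A minimal sketch, assuming a hypothetical `Change` object; the key property is that execution is impossible without an explicit, logged human validation step:

```python
from enum import Enum, auto

class Stage(Enum):
    SUGGESTED = auto()
    VALIDATED = auto()
    EXECUTED = auto()

class Change:
    """A proposed change moves through the stages in order; skipping is impossible."""
    def __init__(self, diff: str):
        self.diff = diff
        self.stage = Stage.SUGGESTED
        self.log = ["SUGGESTED"]

    def validate(self, reviewer: str):
        if self.stage is not Stage.SUGGESTED:
            raise RuntimeError("only a suggestion can be validated")
        self.stage = Stage.VALIDATED
        self.log.append(f"VALIDATED by {reviewer}")

    def execute(self):
        if self.stage is not Stage.VALIDATED:
            raise RuntimeError("refusing to execute an unvalidated change")
        self.stage = Stage.EXECUTED
        self.log.append("EXECUTED")
```

The audit log comes for free: every change carries the trace of who validated it and when it ran.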
4. Treat the AI stack as a real supply chain
Models, libraries, orchestration frameworks, plugins, connectors, inference layers, third-party services: all of this forms a software chain.
It should be governed as such:
- inventory;
- dependency tracking;
- provenance;
- updates;
- vulnerability monitoring;
- replacement plan.
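The starting point of that governance is a machine-readable inventory. A minimal sketch, with hypothetical component names; the useful query is “which links of the chain have no established provenance”:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIComponent:
    """One link of the AI software chain: model, library, connector, service."""
    name: str
    kind: str         # model, library, plugin, third-party service
    version: str
    provenance: str   # where it comes from and who vouches for it

# Hypothetical inventory entries, illustrative only.
INVENTORY = [
    AIComponent("embedding-model", "model", "1.2", "vendor-signed release"),
    AIComponent("orchestrator", "library", "0.9", "unknown"),
]

def unvetted(inventory):
    """Components with no established provenance: first to review or replace."""
    return [c.name for c in inventory if c.provenance == "unknown"]
```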
5. Test abuse scenarios, not only response quality
Many teams test:
- relevance;
- hallucination rate;
- search quality;
- user satisfaction.
That is necessary, but not sufficient.
They also need to test:
- retrieval of unauthorised information;
- bypassing of safeguards;
- effects of poisoned corpora;
- malicious inputs injected into documents or retrieved sources;
- unexpected behaviours when the system acts through a connector or a tool.
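Abuse scenarios can live next to quality tests in the same suite. A minimal sketch of the first item, unauthorised retrieval, against a deliberately toy retrieval layer; the document names, tiers and `retrieve` function are all hypothetical:

```python
# Toy retrieval layer: documents carry a tier, callers a clearance level.
DOCS = {
    "holiday-policy": 1,   # internal
    "salary-grid": 2,      # sensitive
}

def retrieve(query: str, clearance: int):
    """Return only documents at or below the caller's clearance."""
    return [name for name, tier in DOCS.items()
            if query in name and tier <= clearance]

def test_no_unauthorised_retrieval():
    # A low-clearance caller probing for sensitive material must come
    # back empty, however the query is phrased.
    for probe in ["salary", "salary-grid", "grid"]:
        assert retrieve(probe, clearance=1) == []
```

The same pattern extends to the other scenarios: each abuse case becomes a regression test that runs on every change to the corpus, the prompts or the connectors.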
6. Define forbidden zones
Some data, actions or functions simply should not be exposed to a generative AI system, even with safeguards.
In many organisations, the hardest part is not defining what they want AI to do.
It is defining clearly what they refuse to delegate to it.
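Once those refusals are written down, they can be enforced as a hard denylist that no allowlist can override. A minimal sketch, with hypothetical action names:

```python
# Hypothetical hard denylist: actions the organisation refuses to delegate
# to any AI component, whatever the safeguards around it.
FORBIDDEN_ACTIONS = frozenset({
    "delete_production_data",
    "modify_access_rights",
    "send_external_payment",
})

def request_action(action: str, allowed: set):
    """The denylist wins over any allowlist: forbidden means forbidden."""
    if action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"{action} is never delegated to AI")
    if action not in allowed:
        raise PermissionError(f"{action} is not in the allowed set")
    return f"{action}: accepted"
```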
What this changes for technical decision-makers
For a CTO, CISO, architecture lead or platform lead, the question is no longer just “do we have an AI strategy?”
A better question is:
- is our use of generative AI classified, bounded and governed?
- do we know which systems, data and connectors are actually exposed?
- have we defined forbidden use cases?
- have we planned security reviews specific to AI components?
In 2026, maturity will not be measured by how many copilots were deployed.
It will be measured by the ability to use generative AI where it creates real value, without introducing it as yet another opaque layer inside an already hard-to-master information system.
Conclusion
Generative AI does not make the fundamentals obsolete. It makes them more visible. Architecture, mapping, access rights, traceability, segmentation: everything that was already necessary becomes critical when a new autonomous component enters the information system.
Teams that already control their existing systems will integrate AI faster and more safely than those stacking one more layer on top of a system they can no longer read.
Deploying generative AI internally and looking to build in the right safeguards from the start? We support technical teams on these topics: digital trust and software development. Let’s talk.