NIS2 in 2026: why the stakes just rose
The NIS2 Directive ((EU) 2022/2555) has entered the phase that every cybersecurity directive eventually reaches — the phase where preparation ends and enforcement begins. Member States were obliged to transpose it into national law by 17 October 2024. Most missed that deadline. By March 2026, however, 21 of the 27 EU Member States had completed transposition, and the European Commission has spent the past twelve months systematically applying pressure to the remaining holdouts.
In May 2025, the Commission issued reasoned opinions to nineteen Member States that had not yet notified full transposition — including Germany, France, Spain, Poland, Ireland and the Netherlands. A reasoned opinion is the last formal step before the Commission refers a case to the Court of Justice of the European Union, where financial penalties become possible. The message to capital cities was unambiguous: get this done, or pay for it.
Throughout late 2025 and the first months of 2026, that pressure produced results. Germany completed its national NIS2 act in December 2025. Sweden's Cyber Security Act took effect in January 2026. The Czech Republic's Zákon o kybernetické bezpečnosti began applying from 1 November 2025. Austria, Italy, Portugal, the Netherlands and others have either entered into force or are in the final stages of doing so during 2026. The grace period — the implicit understanding that nobody will be fined while transposition is still incomplete — is gone.
On 20 January 2026, the Commission also proposed targeted amendments to NIS2 itself, aiming to simplify compliance for an estimated 28,700 entities. These amendments do not relax the core obligations; they clarify language and reduce paperwork. They are an acknowledgement that NIS2 will now be enforced for the long term, and that the framework needs to be workable at scale for the roughly 160,000 entities in scope across the Union.
For organisations subject to NIS2, this means one thing: the operational test is here. Cybersecurity policies that look good on paper now have to perform under the 24-hour early warning clock. Risk management frameworks have to produce evidence a national CSIRT will accept. Boards have to demonstrate active oversight under threat of personal liability. And among the practical bottlenecks that catch even well-prepared organisations off guard, one stands out: the moment an incident starts, evidence begins to disappear, and the entity must capture it before it is gone. This guide focuses on that bottleneck — what to preserve, why it matters legally, and how to do it in a way that holds up to supervisory scrutiny across the EU.
Who NIS2 covers: essential and important entities
NIS2 classifies in-scope organisations into two categories — essential entities and important entities — based on the criticality of their sector and the size of the organisation. Both categories must implement the same ten cybersecurity risk-management measures under Article 21 and must comply with the same incident-reporting obligations under Article 23. The difference lies in the supervisory regime applied to each category and the maximum penalties that can be imposed.
Essential entities (Annex I, large enterprises)
Essential entities are organisations operating in sectors of high criticality (Annex I) that qualify as large enterprises — generally meaning at least 250 employees, or annual turnover exceeding €50 million and a balance-sheet total exceeding €43 million. The list covers energy (electricity, gas, oil, district heating, hydrogen), transport (air, rail, water, road), banking, financial market infrastructures, healthcare, drinking water, wastewater, digital infrastructure (DNS, TLD, IXPs, data centres, cloud, CDN, trust service providers), ICT service management for B2B, public administration at central and regional level, and space.
Important entities (Annex II, medium enterprises and below the essential threshold)
Important entities are organisations in other critical sectors (Annex II) or in Annex I sectors that fall below the size threshold for essential classification. Annex II covers postal and courier services, waste management, manufacture and distribution of chemicals, food production, processing and distribution, manufacturing (medical devices, computer and electronic products, electrical equipment, machinery, motor vehicles, other transport equipment), digital providers (online marketplaces, online search engines, social networking platforms), and research organisations.
Supervision regime: Article 32 versus Article 33
The two categories face different supervisory regimes. Essential entities are subject to Article 32 — proactive ex-ante supervision, which includes regular on-site inspections, off-site audits, requests for documentation, and supervisory-authority-mandated security audits and penetration testing. Important entities fall under Article 33 — ex-post supervision triggered only when authorities have evidence of non-compliance. The practical consequence is that essential entities live with continuous regulatory presence; important entities deal with the regulator primarily after something has gone wrong.
Small and micro exemptions, and Member State discretion
Organisations with fewer than 50 employees and annual turnover or balance-sheet total of €10 million or less are generally exempt from mandatory scope. However, Member States retain discretion to bring smaller entities into scope where the entity is the sole provider of a service in a region, where its failure would create systemic risk, or where it operates in particular sub-sectors. Italy and Slovenia have notably extended their national scope beyond the directive's annexes; Belgium has added enhanced governance obligations. The takeaway: even if your organisation appears to fall below the threshold, the national transposition is the document that determines whether NIS2 applies to you.
The 18 sectors and the size-cap rule
NIS2 dramatically expands the sectoral scope compared with its predecessor. Where NIS1 applied to seven sectors, NIS2 applies to eighteen, organised across two annexes. The expansion reflects how dependent modern society has become on digital services and supply chains — and how a cybersecurity incident in any one of these sectors can cascade across borders within hours.
Annex I: sectors of high criticality
Annex I sectors are those whose disruption produces immediate, society-wide consequences. They include energy in its various forms, transport, banking and financial market infrastructure, health, drinking water and wastewater, digital infrastructure (which covers DNS providers, TLD registries, internet exchange points, cloud computing providers, data-centre service providers, content-delivery networks, trust service providers and electronic communications providers), B2B ICT service management, public administration at central and regional level, and space. Each of these has its own sectoral specifics in Member State transposition, but the cybersecurity baseline is harmonised.
Annex II: other critical sectors
Annex II covers sectors that are critical but where disruption is generally less immediately catastrophic — though it can still produce significant harm. It includes postal and courier services, waste management, chemicals (manufacture, distribution), food production, manufacturing of medical devices and key industrial goods, digital service providers including online marketplaces, search engines and social networking services, and research organisations. The Commission Implementing Regulation (EU) 2024/2690 provides specific technical and incident-classification rules for several Annex II sub-sectors, including cloud, DNS, marketplaces, search and social networks.
The size-cap rule
Within these sectors, NIS2 applies a size-cap rule: organisations are in scope if they have at least 50 employees, or an annual turnover or balance-sheet total exceeding €10 million. Large enterprises (250+ employees, or €50M+ turnover and €43M+ balance sheet) in Annex I sectors are classified as essential entities. Medium-sized enterprises (50–249 employees, €10M–€50M turnover) in Annex I, and any medium or large enterprise in Annex II, are classified as important entities. The cap is intentionally generous — NIS2 does not want to crush smaller businesses with disproportionate obligations — but it captures essentially every meaningful actor in the regulated sectors.
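The classification logic above can be sketched as a short decision procedure. The Python sketch below is a first-order approximation: the sector sets, field names and function names are illustrative, and it ignores the special cases (sole regional providers, DNS and TLD entities in scope regardless of size, Member State extensions) that only the national transposition resolves.

```python
from dataclasses import dataclass

# Illustrative sector tags only -- the annexes enumerate sub-sectors in detail.
ANNEX_I = {"energy", "transport", "banking", "health", "digital-infrastructure", "space"}
ANNEX_II = {"postal", "waste", "chemicals", "food", "manufacturing", "digital-providers", "research"}

@dataclass
class Entity:
    sector: str
    employees: int
    turnover_eur: float       # annual turnover
    balance_sheet_eur: float  # balance-sheet total

def is_large(e: Entity) -> bool:
    # Large enterprise: 250+ employees, or turnover > EUR 50M and balance sheet > EUR 43M.
    return e.employees >= 250 or (e.turnover_eur > 50e6 and e.balance_sheet_eur > 43e6)

def is_at_least_medium(e: Entity) -> bool:
    # Size-cap floor: 50+ employees, or turnover/balance sheet above EUR 10M.
    return e.employees >= 50 or e.turnover_eur > 10e6 or e.balance_sheet_eur > 10e6

def classify(e: Entity) -> str:
    if not is_at_least_medium(e):
        return "out-of-scope"          # unless a Member State extends scope
    if e.sector in ANNEX_I and is_large(e):
        return "essential"
    if e.sector in ANNEX_I or e.sector in ANNEX_II:
        return "important"
    return "out-of-scope"

print(classify(Entity("energy", 300, 80e6, 60e6)))   # essential
print(classify(Entity("postal", 120, 20e6, 15e6)))   # important
```

The essential/important split drives everything downstream: the supervisory regime (Article 32 versus 33) and the penalty ceiling both key off this one classification.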
Exceptions, supply-chain reach, and the de-facto regulated buyer effect
Beyond the formal scope, NIS2 has a practical reach that extends much further. Article 21 requires essential and important entities to manage supply-chain cybersecurity — meaning the entity must assess and impose appropriate security measures on its direct suppliers and service providers. The result is that organisations not directly in scope of NIS2 are increasingly required to demonstrate NIS2-aligned security to their regulated customers, simply to stay in the procurement pipeline. If your customer is in scope, your contract is in scope. This indirect effect doubles or triples the de-facto population of NIS2-relevant entities across the EU economy.
Article 23 in detail: the three-stage reporting timeline
Article 23 is the operational heart of NIS2 for incident response. It establishes a three-stage reporting cascade with strict timelines, and adds an obligation to notify service recipients in some circumstances. The clock starts ticking when the entity becomes aware that a significant incident has occurred — a definition that is itself contested in practice — and it does not stop until the final report is submitted up to a month later. Missing any stage is a compliance failure, and supervisory authorities have stated clearly that they will treat it as such.
Stage 1 — 24-hour early warning
Within 24 hours of becoming aware of a significant incident, the entity must submit an early warning to its national CSIRT or competent authority. The early warning must indicate, where information is available, whether the incident is suspected of being caused by unlawful or malicious acts, and whether it could have cross-border impact. This is not a complete report — its purpose is to alert authorities so they can prepare for coordinated response and inform other affected Member States. Common failures at this stage include not having pre-established CSIRT communication channels, no pre-prepared template, and uncertainty about whether the event yet meets the significant threshold.
Stage 2 — 72-hour incident notification
Within 72 hours of becoming aware, the entity must submit a more substantial incident notification. This update must include an initial assessment of the incident — severity, scope, impact — and crucially, where available, indicators of compromise (IoCs). The 72-hour notification is where evidence preservation becomes binding: you cannot describe IoCs you have allowed to be overwritten. Phishing pages, fake login interfaces, malicious URLs, defaced web properties, and other web-facing attack artefacts must be captured before they vanish during incident-response remediation.
Stage 3 — one-month final report
No later than one month after the incident notification, the entity must submit a final report. This contains a detailed description of the incident, including its severity and impact, the type of threat or root cause that likely triggered it, mitigation measures applied and ongoing, and where appropriate, the cross-border impact. The final report is the document that supervisory authorities will return to during any subsequent inspection or audit. Its quality determines whether your organisation will be viewed as having handled the incident professionally — or as having missed the basics.
Interim reports on CSIRT request, and progress reports for ongoing incidents
The CSIRT or competent authority may request interim updates at any time during the response phase. If the incident is still ongoing at the one-month mark, the entity submits a progress report instead of a final report, with a further final report due within one month after the incident has actually ended. The interim mechanism creates a continuous dialogue between the entity and the authority — and means that the entity's ability to produce credible, evidence-backed updates on demand is a permanent capability, not a one-shot effort.
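The three deadlines can be computed mechanically from the moment of awareness. A minimal sketch, assuming the one-month final-report period can be approximated as 30 days running from the 72-hour notification (national transpositions may count calendar months differently, so treat this as a planning aid, not legal advice):

```python
from datetime import datetime, timedelta, timezone

def reporting_deadlines(aware_at: datetime) -> dict:
    """Article 23 deadlines measured from the moment of awareness.
    The final report is due one month after the incident notification;
    approximated here as 30 days."""
    notification = aware_at + timedelta(hours=72)
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": notification,
        "final_report": notification + timedelta(days=30),
    }

aware = datetime(2026, 3, 2, 9, 30, tzinfo=timezone.utc)
for stage, due in reporting_deadlines(aware).items():
    print(f"{stage}: {due.isoformat()}")
```

Wiring a calculation like this into the incident-response runbook removes one source of error under pressure: nobody has to do date arithmetic at 3 a.m. on day one of a breach.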
Service recipient notification (Article 23(2))
Where a significant incident is likely to adversely affect the provision of services to recipients, the entity must inform those recipients without undue delay. This includes describing what happened, which services are affected, and what mitigation measures recipients should take. For B2B providers this means notifying their customer base; for consumer-facing providers it can mean public communication. The notification itself becomes part of the evidence record — and once it has been published, it is what the supervisory authority will hold the entity to.
What counts as a significant incident
Article 23 only triggers when an incident is significant. The directive's definition is broad: an incident is significant if it has caused or is capable of causing severe operational disruption of the entity's services or financial loss to the entity, or if it has affected or is capable of affecting other natural or legal persons by causing considerable material or non-material damage. The capable-of-causing language is critical — entities cannot wait until harm materialises to start reporting. Reasonable foreseeability is enough.
Commission Implementing Regulation 2024/2690 thresholds
For specified digital-sector entities (DNS providers, TLD registries, cloud providers, data centres, CDNs, MSPs, MSSPs, online marketplaces, search engines, social networks, and trust service providers), Commission Implementing Regulation (EU) 2024/2690 of 17 October 2024 lays down concrete thresholds. An incident is significant where, among others, it causes or is capable of causing financial loss exceeding €100,000 or 5% of annual turnover (whichever is lower), or where it involves successful unauthorised access to network and information systems that is suspected to be malicious, or where it causes considerable reputational damage. The regulation applies directly to its named sectors; for other entities, it serves as a useful interpretive benchmark.
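The financial-loss threshold is easy to get backwards: "whichever is lower" means smaller entities cross it sooner, not later. A minimal check, with the function name and the reduction to a single criterion being illustrative simplifications:

```python
def financial_loss_significant(loss_eur: float, annual_turnover_eur: float) -> bool:
    # CIR 2024/2690: loss exceeding EUR 100,000 or 5% of annual turnover,
    # whichever is LOWER -- so a EUR 1M-turnover entity trips at EUR 50,000.
    threshold = min(100_000, 0.05 * annual_turnover_eur)
    return loss_eur > threshold

print(financial_loss_significant(80_000, 1_000_000))   # True:  threshold is EUR 50k
print(financial_loss_significant(80_000, 50_000_000))  # False: threshold is EUR 100k
```

Financial loss is only one limb of the test; unauthorised access, service disruption and reputational damage each trigger significance independently.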
Reputational damage and media attention as factors
The considerable-reputational-damage criterion is not subjective. Regulators look at concrete indicators: whether the incident has been reported in mainstream media, whether the entity is likely to lose customers in numbers material to its business, whether it will be unable to meet regulatory requirements as a downstream consequence, and whether the entity's standing with partners and supervisors has been measurably damaged. Brand-targeting attacks — phishing pages impersonating your services, fake login interfaces hosted on lookalike domains — sit squarely inside this category, and they generate web-facing artefacts that must be preserved as evidence.
Successful unauthorised access as automatic trigger
One of the most operationally consequential provisions of CIR 2024/2690 is that any successful unauthorised access to network and information systems that is suspected to be malicious is per se significant for entities in scope of the regulation. This removes the debate about thresholds: if an attacker got in and the access is suspected of being malicious, it is significant, and the 24-hour clock starts. The directive's wider scope of significance — financial loss, reputational damage, service disruption — applies on top of this baseline.
Recurring incidents and the aggregate threshold
A series of smaller incidents can collectively meet the significant threshold even if each individual event would not. Recital wording across implementing texts makes clear that authorities expect entities to assess recurring incidents in aggregate, not in isolation. This is particularly relevant for sectors facing persistent campaign-style attacks: a sequence of credential-stuffing attempts, a series of low-volume DDoS bursts, or coordinated phishing waves can together cross the threshold. Documenting the link between incidents is itself an evidence challenge — one that depends on consistent log retention and forensic capture across the campaign timeline.
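A sketch of the aggregate test: it groups incidents by apparent root cause and flags any cause with two or more occurrences inside a rolling window whose combined loss crosses the threshold. The six-month window, the €100,000 combined-loss criterion, and the tuple layout are illustrative simplifications of the implementing regulation's recurring-incident rule.

```python
from datetime import datetime

def recurring_significant(incidents, window_days=180, loss_threshold_eur=100_000):
    """Flag root causes whose incidents recur (2+ within the window) and whose
    combined loss crosses the threshold. Incidents are
    (timestamp, apparent_root_cause, loss_eur) tuples."""
    by_cause = {}
    for ts, cause, loss in incidents:
        by_cause.setdefault(cause, []).append((ts, loss))
    flagged = []
    for cause, events in by_cause.items():
        events.sort()
        for start, _ in events:
            # All events of this cause inside the rolling window from `start`.
            window = [(ts, loss) for ts, loss in events
                      if 0 <= (ts - start).days <= window_days]
            if len(window) >= 2 and sum(loss for _, loss in window) > loss_threshold_eur:
                flagged.append(cause)
                break
    return flagged

campaign = [
    (datetime(2026, 1, 5), "credential-stuffing", 40_000),
    (datetime(2026, 2, 10), "credential-stuffing", 70_000),
]
print(recurring_significant(campaign))  # ['credential-stuffing']
```

The hard part in practice is not the arithmetic but the grouping: linking incidents to a shared root cause requires logs and captures that survive the whole campaign timeline.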
Why evidence preservation equals NIS2 compliance
There is a tendency among newly NIS2-regulated organisations to treat incident reporting as a paperwork problem — fill in the template, hit submit, move on. This view fundamentally misunderstands what the 72-hour notification actually requires and what supervisory authorities will check during inspections. The 72-hour incident notification mandates the inclusion of indicators of compromise where available. The one-month final report mandates a description of root cause and threat type. Neither is achievable without preserved forensic evidence — and that evidence has to be captured at the moment of detection, not reconstructed weeks later.
ENISA's technical guidance on NIS2 risk management measures, published in November 2024 alongside Commission Implementing Regulation 2024/2690, emphasises evidence preservation as an integral part of incident handling. The guidance places the obligation alongside detection, containment, and recovery — not as an optional add-on. Logs must be retained with sufficient integrity and duration to support post-incident reconstruction. Twelve months of tamper-evident logging is a widely accepted baseline; some sector-specific transpositions require more.
But the trickier evidence category is not log data — it is the web-facing artefacts of the attack itself. When the attack vector involves a phishing page impersonating your services, a fake login interface that captured your customers' credentials, a defaced corporate website, or a ransomware leak site publishing your stolen data, the evidence is hosted on infrastructure you do not control. The hosting provider takes it down within hours or days. The attacker abandons it. The DNS record gets sinkholed. By the time you write the 72-hour notification, the evidence is gone unless you captured it during initial response.
This is precisely the gap that forensic web evidence platforms such as GetProofAnchor are designed to fill. A capture made at the moment of incident detection produces a tamper-evident package — full HTML, rendered screenshot, extracted content, network metadata, TLS chain — bound together by a qualified electronic timestamp under eIDAS Article 42, anchored independently into the Bitcoin blockchain via OpenTimestamps, and linked through an append-only cryptographic chain. When the supervisory authority asks two months later for proof that the phishing page existed and looked the way you described it, the Evidence ZIP supplies the answer with mathematical certainty.
The point is not that every NIS2 entity must adopt one specific tool. The point is that evidence preservation during the incident response phase is a first-class compliance obligation under NIS2 — and that organisations that have not built this capability before an incident happens will discover, too late, that the absence of preserved evidence becomes its own reportable failure.
Capturing web-based attack evidence forensically
Many of the most common NIS2-significant incidents leave their primary forensic trail on the public internet. Understanding which artefacts to capture, and how to capture them in a way that satisfies the supervisory authority's expectations, is one of the highest-leverage operational capabilities a NIS2-regulated entity can develop. The remainder of this section walks through five categories of web-based incident evidence and what forensic capture looks like for each.
Phishing pages impersonating your services
When an attacker stands up a phishing page on a lookalike domain to harvest credentials from your customers, the page itself is the smoking gun. It demonstrates the attacker's targeting of your brand, the technical sophistication of the campaign, and the scope of customer exposure. Forensic capture must include the full HTML, the visual rendering (multiple viewports if mobile-specific variants exist), the network captures (HAR, DNS, TLS certificate chain showing the lookalike domain), and the timestamp of capture bound cryptographically to all artefacts. A standard screenshot is not enough — the supervisory authority and any subsequent law-enforcement referral will want the underlying DOM and the network identity of the malicious infrastructure.
Defaced corporate websites and unauthorised content modifications
Defacement incidents — where attackers modify the visible content of your public website to display propaganda, ransom demands, or simply mockery — typically last hours before remediation. The remediation process itself destroys evidence: the modified pages are overwritten with the restored content, server logs may be rotated, and forensic-relevant artefacts disappear into the backup cycle. Before remediation begins, the defaced state should be captured forensically. The same applies to subtle unauthorised modifications — a single injected JavaScript snippet exfiltrating form data, a hidden iframe, a modified payment endpoint — which require capture not only of the visible page but of the underlying scripts and network calls.
Ransomware leak sites and dark-web data dumps
When an attacker exfiltrates data and publishes it on a leak site to extort payment, the leak site itself is evidence. It demonstrates the materialisation of the exfiltration risk, the scope of the data the attacker claims to hold, and the negotiation timeline. Most ransomware groups operate on Tor hidden services, and these services come and go quickly. A forensic capture of the listing page, the sample data file structure, the timer, and the ransom-payment instructions becomes critical evidence for the final report — and for any subsequent law-enforcement engagement, insurance claim, or downstream civil litigation.
Brand-impersonating social media accounts and fake support pages
Adjacent to phishing in the strict sense are brand-impersonation campaigns on social media and via spoofed support pages. These are particularly damaging because they often run in parallel with a primary intrusion — the social media impersonator drives traffic to the phishing page hosted elsewhere — and because they expand the scope of the customer base affected. Capture should include the impersonator profile, all posts, follower counts, the linked phishing infrastructure, and the timeline of takedown attempts and platform responses.
Supply-chain compromise indicators on third-party properties
Many NIS2 incidents originate not on infrastructure the entity owns but on a third-party service the entity consumes. A compromised SaaS provider, a hijacked CDN endpoint, a corrupted software update on a vendor's website — each of these requires the entity to capture evidence of the third-party state at the moment of detection, before the third party remediates and the trail is lost. This is one of the genuinely novel evidentiary problems NIS2 introduces, because the entity has to demonstrate something it did not control. The only solution is to capture the third-party state as a forensic snapshot at the moment of awareness — with a credible timestamp that proves when the capture was made — and to preserve that capture independently of the third party.
Chain of custody: ISO 27037 meets NIS2
NIS2 itself does not prescribe a specific chain-of-custody methodology. Article 21 requires appropriate and proportionate measures, and the directive's recitals point toward European and international standards as the practical reference for what appropriate means. In the digital evidence space, that reference is ISO/IEC 27037:2012 — the international standard for identification, collection, acquisition, and preservation of digital evidence. Entities that align their incident-handling procedures with ISO 27037 will find themselves naturally satisfying NIS2's evidence-related expectations.
ISO 27037 defines four phases for handling digital evidence: identification (recognising what data items are evidence), collection (physically gathering them), acquisition (creating forensically sound copies), and preservation (maintaining integrity over time). For web-based evidence, the four phases map cleanly onto a single forensic capture operation — provided the capture is engineered to produce, in one atomic step, an artefact that is identifiable, collected, acquired, and preservation-ready. This is the design goal of any serious forensic web evidence platform: collapse the workflow into one defensible operation, document every parameter, and produce a package whose integrity can be independently verified.
The integrity primitive is the cryptographic hash. Every artefact in the evidence package — the screenshot, the HTML, the extracted text, the network captures, the metadata — is hashed with SHA-256 and the hashes are recorded in a manifest. The manifest is itself hashed, and that final hash is the input to an eIDAS qualified electronic timestamp under Article 42 of Regulation (EU) 910/2014. The qualified timestamp binds the entire package to a specific moment in time and carries, in all 27 EU Member States, the legal presumption that the indicated date and time are accurate. Any subsequent modification to any byte of any artefact breaks the chain and is detectable in seconds during verification.
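The manifest construction described above is a few lines of standard-library code. This sketch is illustrative, not any particular product's implementation: it hashes each artefact, serialises the manifest deterministically, and derives the single root hash that would be submitted to the qualified timestamping service.

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 in chunks (artefacts can be large)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(artefact_dir: Path) -> tuple[dict, str]:
    """Hash every artefact in the package directory, then hash the manifest
    itself. The root hash is what gets timestamped; changing any byte of any
    artefact changes the root hash."""
    manifest = {p.name: sha256_file(p)
                for p in sorted(artefact_dir.iterdir()) if p.is_file()}
    manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
    root_hash = hashlib.sha256(manifest_bytes).hexdigest()
    return manifest, root_hash
```

Deterministic serialisation (`sort_keys=True`) matters: the verifier must be able to rebuild byte-identical manifest JSON from the artefacts alone, years later, to reproduce the root hash.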
Anchoring into the Bitcoin blockchain via OpenTimestamps provides a second, independent layer that does not depend on the entity, the trust service provider, or any centralised infrastructure. This matters for cross-border incidents where the entity may need to demonstrate evidence integrity to authorities in a Member State whose national EU Trusted List the entity has limited familiarity with. The blockchain anchor can be verified by anyone with access to Bitcoin block headers — globally, without coordination.
The third element, an append-only cryptographic chain across all captures, prevents an entity from quietly editing history. Even with full administrative access to its own systems, the entity cannot insert a backdated entry — every link in the chain depends on the previous one's hash, and any modification cascades through every subsequent record. For supervisory authorities, this transforms incident-evidence integrity from a matter of trusting the regulated entity to a matter of mathematics. That shift is what defensible actually means.
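The append-only chain is the classic hash-chain construction. A minimal illustration (not any particular product's record format): each entry commits to the previous entry's hash, so editing any historical record breaks verification of everything after it.

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> dict:
    """Append a record; its hash covers the record AND the previous hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    entry = {"record": record, "prev_hash": prev_hash,
             "entry_hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edit or insertion anywhere is detected."""
    prev = "0" * 64
    for e in chain:
        body = json.dumps({"record": e["record"], "prev_hash": e["prev_hash"]},
                          sort_keys=True)
        if e["prev_hash"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True

chain = []
append_entry(chain, {"capture": "phishing-page", "ts": "2026-03-02T09:41Z"})
append_entry(chain, {"capture": "leak-site", "ts": "2026-03-03T14:02Z"})
print(verify_chain(chain))                       # True
chain[0]["record"]["ts"] = "2026-01-01T00:00Z"   # attempt to backdate
print(verify_chain(chain))                       # False
```

Combined with an external anchor (qualified timestamp or blockchain), even the chain operator cannot rewrite history: the anchored head hash pins every earlier entry.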
Cross-border incidents and the EU CSIRTs network
Few significant cyber incidents respect national borders. NIS2 anticipates this by establishing a coordination architecture spanning the Union: each Member State designates one or more CSIRTs (Computer Security Incident Response Teams) and a single point of contact for cross-border matters, all of which participate in the EU CSIRTs network alongside ENISA. The European Cyber Crisis Liaison Organisation Network — EU-CyCLONe — coordinates strategic-level response to large-scale incidents. For regulated entities, this means an incident notification submitted to the national CSIRT may quickly produce reach into multiple Member States and into ENISA itself.
Article 23(6) provides that where a significant incident concerns two or more Member States, the receiving CSIRT or competent authority must inform the other affected Member States and ENISA without undue delay. Article 23(9) requires single points of contact to submit anonymised aggregated reports to ENISA every three months, and ENISA reports back to the Cooperation Group and the CSIRTs network every six months. The practical effect is that an entity operating in multiple Member States cannot expect to keep an incident contained to one jurisdiction's regulator — the architecture is designed to share information across the Union.
For the regulated entity, two operational consequences follow. First, the evidence supporting the original notification must be portable across jurisdictions. A Member State CSIRT may request additional information, additional Member State CSIRTs may make parallel requests, and ENISA may ask for input to its summary reports. An evidence package that satisfies one regulator must satisfy all of them, which means it must be self-contained, independently verifiable, and not dependent on the entity's continued availability or willingness to produce supplementary information. Second, the entity must understand which national CSIRT is the appropriate first point of contact — generally the CSIRT of the Member State of the entity's main establishment in the Union, but with sector-specific variations.
Cross-border coordination also extends beyond NIS2's own architecture. CSIRTs are required to share information with competent authorities under the CER Directive ((EU) 2022/2557) for entities identified as critical entities, and the relationship between NIS2 and the GDPR (Regulation (EU) 2016/679) means that personal data breach notifications under GDPR Article 33 may be running on a parallel track. Evidence preserved with verifiable integrity satisfies all of these simultaneously; evidence reconstructed from memory satisfies none of them.
Transposition status across 27 Member States (May 2026)
Eighteen months after the formal transposition deadline of 17 October 2024, the picture across the Union has clarified substantially. According to the European Cyber Security Organisation (ECSO) transposition tracker, as of March 2026, twenty-one of the twenty-seven Member States have completed transposition. The remainder — France, Ireland, Luxembourg, Poland, Spain and Bulgaria — are at various advanced stages of legislative process, with adoption expected during 2026.
Among the recent completions, several deserve particular attention for organisations operating across multiple jurisdictions. Germany's NIS2 implementation law was completed in December 2025 and is applied with the BSI as competent authority. Sweden's Cyber Security Act and accompanying ordinance took effect on 1 January 2026. The Czech Republic's Zákon o kybernetické bezpečnosti began applying from 1 November 2025, supervised by NÚKIB. Austria's NISG 2026 enters into force on 1 October 2026, although transposition is already formally complete. Portugal's law, now in final draft, is expected to enter into force in April 2026.
On 7 May 2025, the European Commission issued reasoned opinions to nineteen Member States, formally warning them that referral to the Court of Justice was the next step. By May 2026, most of those nineteen states have either transposed or are close to doing so, but for the remaining holdouts the legal pressure has intensified. The Commission's January 2026 amendments to NIS2 itself — proposing simplification of compliance language for around 28,700 entities including 6,200 micro and small enterprises — accompany rather than displace the enforcement effort. The amendments make compliance more achievable; they do not delay it.
For multi-jurisdictional entities, the practical implication is that transposed Member States are operating fully on their national NIS2 acts, while not-yet-transposed Member States are subject to the directive's direct effect in certain respects and to the Commission's pending Article 258 TFEU action. An entity operating in both groups simultaneously needs an incident-response posture that satisfies the strictest applicable transposition while being adaptable to whichever transposition emerges in the remaining jurisdictions. The base assumption should be that all 27 Member States will be fully operational by the end of 2026 at the latest.
It is also worth noting that several Member States have used national transposition to extend NIS2 beyond its minimum scope. Italy and Slovenia have added sectors. Belgium has added enhanced governance and oversight obligations. France's pending bill has included sector-specific event triggers for energy and banking with shorter notification timelines than the 24-hour baseline. Multi-jurisdictional entities should not assume the directive's text alone is sufficient guidance — the national transposition is the operative document, and the deltas between Member States can be material.
Sanctions and personal liability for management
The financial penalties under NIS2 are calibrated to be consequential at boardroom level, not absorbable as a cost of doing business. For essential entities, Article 34 sets the maximum administrative fine at €10 million or 2% of the entity's total worldwide annual turnover in the preceding financial year, whichever is higher. For important entities, the ceiling is €7 million or 1.4% of worldwide turnover, whichever is higher. Member States may add national penalties on top of these EU minimums, and several have done so.
These numbers sit alongside other regulatory regimes that may apply to the same incident. The GDPR's tier-2 penalties reach €20 million or 4% of worldwide turnover. Under DORA, applicable to the financial sector, the Lead Overseer can impose periodic penalty payments of up to 1% of average daily worldwide turnover on critical ICT third-party service providers, while penalties for financial entities themselves are set at Member State level. A single ransomware incident affecting an essential entity that processes personal data could plausibly trigger penalties under NIS2, GDPR and, for financial-sector entities, DORA simultaneously, with each regulator assessing its own component independently.
More striking than the financial penalties is the personal liability regime under Article 20. Members of management bodies of essential and important entities are required to approve the cybersecurity risk-management measures taken to comply with Article 21, to oversee their implementation, and to follow specific training on cybersecurity. Where management bodies fail in these duties — particularly through gross negligence — Member States are required to provide for the possibility of holding individual management members liable. Several national transpositions have implemented this as administrative fines on individual executives; others have implemented temporary bans on management functions.
Reputational consequences extend beyond formal penalties. The reasoned opinion mechanism, the EU's quarterly aggregated reporting to ENISA, and the public-interest notification provisions of Article 23(7) mean that significant incidents — particularly those poorly handled — become matters of public record. Cyber insurance markets have responded by tightening underwriting conditions on NIS2-regulated entities: insurers increasingly require documented incident-response capability, including forensic evidence preservation, as a precondition for coverage.
The combined effect is that NIS2 has moved cybersecurity from the technical function to the governance function. The management body is now accountable for cybersecurity for the same reason it is accountable for financial reporting: regulators have made it so. And as with financial reporting, the standard of proof is documentary. Verbal assurances from the CISO do not meet it; documented, dated, integrity-preserved evidence does.
NIS2, DORA, CER and GDPR: overlapping obligations
NIS2 does not operate in isolation. It sits within a regulatory cluster — DORA, the CER Directive, GDPR, the upcoming Cyber Resilience Act — each of which addresses overlapping subject matter from a different angle. For a single significant incident, an entity may face simultaneous obligations under two, three, or even four of these frameworks. Understanding how they interlock is essential to avoiding both gaps and duplicated effort.
DORA — financial services, lex specialis
The Digital Operational Resilience Act (Regulation (EU) 2022/2554) entered into application on 17 January 2025 and applies to financial entities including banks, investment firms, insurers, crypto-asset service providers, and a wide range of ICT third-party service providers serving the financial sector. DORA functions as lex specialis to NIS2 for in-scope financial entities, meaning where DORA imposes more specific obligations, it takes precedence. DORA's ICT incident reporting under Article 19 has its own timelines and threshold definitions that are similar to NIS2 but not identical.
CER Directive — critical entities physical resilience
The Critical Entities Resilience Directive (Directive (EU) 2022/2557) is NIS2's physical counterpart — addressing resilience of critical entities against all hazards, not just cyber. Member States identify critical entities in eleven sectors largely overlapping with NIS2's Annex I. An entity may be subject to both NIS2 (cybersecurity) and CER (physical and operational resilience), and the two regulatory regimes share information at the competent-authority level. NIS2 Article 23(10) provides that CSIRTs must inform CER competent authorities about significant incidents affecting CER-identified critical entities.
GDPR Article 33 — personal data breach
If a NIS2-significant incident involves a personal data breach, the GDPR Article 33 notification — to the Data Protection Authority within 72 hours of becoming aware — runs in parallel. The clocks are similar but the triggers are different: NIS2 triggers on significant operational impact; GDPR triggers on risk to data subjects' rights and freedoms. An entity may need to file both, and the evidence underpinning both must be preservable in a form that satisfies the strictest applicable requirements. Some Member States have established single entry points for both reports; this does not change the substantive obligations.
Cyber Resilience Act — products with digital elements
The Cyber Resilience Act (Regulation (EU) 2024/2847) entered into force on 10 December 2024 and applies in full from 11 December 2027. The CRA addresses cybersecurity of products with digital elements — hardware and software placed on the EU market — and includes manufacturer obligations to report actively exploited vulnerabilities and severe incidents involving the security of their products. Manufacturers of in-scope products who are also NIS2-regulated entities will need to manage CRA and NIS2 incident reporting on parallel tracks once the CRA becomes fully applicable.
Building audit trails your supervisory authority will accept
An incident notification is one document in a much larger evidentiary corpus that a supervisory authority may examine during NIS2 inspections. Essential entities under Article 32 face proactive inspections — on-site and off-site — and authorities are explicitly empowered to require security audits, vulnerability scans, and access to relevant data and documentation. Important entities under Article 33 face inspections triggered by indications of non-compliance, but once triggered, the scope is similarly broad. The quality of an entity's audit trail determines how these inspections conclude.
Authorities typically ask three categories of questions. First, governance: did the management body approve the Article 21 measures, did it oversee implementation, did it complete required training, are the meeting minutes documented and dated? Second, technical: were the ten Article 21 risk-management measures implemented appropriately and proportionately, are there documented controls, were they tested, do logs exist? Third, incident-specific: for each significant incident, was the 24-hour early warning sent, the 72-hour notification submitted, the one-month final report filed, and can the entity produce the evidence underpinning each?
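To make the reporting cadence concrete, the three Article 23 clocks can be computed from the moment of awareness. The sketch below is illustrative, not normative: it assumes UTC timestamps and approximates the directive's "one month" for the final report as 30 days, counted from the latest permissible submission time of the 72-hour notification.

```python
from datetime import datetime, timedelta, timezone

def article23_deadlines(awareness: datetime) -> dict:
    """Compute illustrative Article 23 deadlines from the moment of awareness.

    Assumptions: the final-report clock runs from submission of the 72-hour
    incident notification; here it is approximated from the latest permissible
    submission time, and "one month" is approximated as 30 days.
    """
    early_warning = awareness + timedelta(hours=24)       # Art. 23(4)(a)
    notification = awareness + timedelta(hours=72)        # Art. 23(4)(b)
    final_report = notification + timedelta(days=30)      # Art. 23(4)(d), approx.
    return {
        "early_warning": early_warning,
        "incident_notification": notification,
        "final_report": final_report,
    }

# A Saturday-evening detection still starts the clock immediately.
aware = datetime(2026, 3, 14, 22, 30, tzinfo=timezone.utc)
deadlines = article23_deadlines(aware)
print(deadlines["early_warning"].isoformat())  # 2026-03-15T22:30:00+00:00
```

The point of the exercise is the first line of output: the early warning for a weekend detection falls on a Sunday, which is why the checklist below treats 24/7 detection and a designated reporting officer as prerequisites rather than refinements.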
The recurring failure mode at NIS2 inspections is not the absence of documentation — most regulated entities produce voluminous policies and procedures. The failure is the absence of integrity-preserving evidence for actions taken. Policy documents dated arbitrarily, log files retrievable but trivially modifiable, incident reports filed but the underlying capture artefacts long since overwritten — these are the patterns that draw enforcement attention. The remedy is not more documentation, but documentation whose integrity is independently verifiable.
Independent verifiability is the highest standard. It means that any third party — a supervisory authority, a court-appointed expert, a cyber insurer, opposing counsel in litigation — can verify the integrity of the entity's evidence using only the evidence itself and publicly available verification tools. This is the design principle behind GetProofAnchor's Evidence ZIP format and the MIT-licensed gpa-verify command-line tool published on PyPI: the regulated entity hands over the ZIP, and the recipient can verify it offline, in their own isolated Python environment, without any dependency on the entity or on GetProofAnchor. Whether or not an entity adopts this specific tool, the underlying principle should be: design your evidence pipeline so that the verifier does not have to trust the producer.
Long-term retention is the corollary. NIS2 does not prescribe a specific retention period, but ENISA's technical guidance and several national transpositions indicate that twelve months is a sensible baseline for routine logs, with sector-specific extensions for higher-risk evidence categories. For forensic captures of incident-related web evidence, retention should extend to the statute of limitations for any reasonably anticipated downstream proceedings — administrative, civil, or criminal — which can run to seven years or more.
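The "independently verifiable" standard described above can be demonstrated with nothing beyond a standard library. The sketch below is a minimal illustration, not GetProofAnchor's actual Evidence ZIP format: it hashes every file in an evidence directory into a manifest, derives a single digest over canonical JSON (the value one would submit for a qualified timestamp or independent anchoring), and lets any third party re-verify offline.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(evidence_dir: Path) -> dict:
    """SHA-256 every file in the evidence package and record the digests."""
    entries = {}
    for f in sorted(evidence_dir.rglob("*")):
        if f.is_file():
            rel = str(f.relative_to(evidence_dir))
            entries[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
    return entries

def manifest_digest(manifest: dict) -> str:
    """Single digest over canonical JSON, so every verifier reproduces it.

    In a real pipeline, this is the value submitted to a qualified trust
    service provider for an eIDAS timestamp or anchored independently."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify(evidence_dir: Path, manifest: dict) -> bool:
    """A third party needs only the package and the Python standard library."""
    return build_manifest(evidence_dir) == manifest
```

Because the verifier recomputes everything from the files themselves, the producing entity drops out of the trust chain entirely: modifying a single byte of any artefact changes its digest, invalidates the manifest, and therefore invalidates the timestamped manifest digest.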
NIS2 incident evidence checklist (15 points)
The following checklist consolidates the practical takeaways from this guide. It is not a substitute for legal advice or for the specific guidance of your national competent authority, but it captures the operational priorities that separate well-prepared NIS2 entities from those whose preparedness exists only on paper.
- Determine your NIS2 classification (essential, important, or out of scope) under your Member State's transposition, and document the determination with supporting rationale.
- Identify your national CSIRT and competent authority, establish pre-incident communication channels, and obtain templates for the 24-hour early warning, 72-hour notification, and one-month final report.
- Define internal classification criteria for a "significant incident" aligned with Article 23(3) and, where applicable, the thresholds of Commission Implementing Regulation (EU) 2024/2690, and review them at least annually.
- Ensure 24/7 detection capability so that the 24-hour clock can start at the moment of awareness, regardless of business hours or weekends.
- Designate a primary incident-reporting officer and at least one deputy with full authority to file notifications, to eliminate single points of failure in the reporting chain.
- Implement tamper-evident logging with documented integrity primitives (hash chains, qualified timestamps, or equivalent) and retention of at least 12 months as a baseline.
- For any web-facing incident artefact — phishing pages, defaced sites, leak sites, fake login interfaces — establish a forensic capture procedure that produces eIDAS-qualified-timestamped evidence at the moment of detection.
- Bind capture artefacts together through cryptographic manifests (SHA-256 over all files) and submit the manifest hash to an eIDAS qualified trust service provider for a qualified electronic timestamp under Article 42 of Regulation (EU) 910/2014.
- Anchor evidence packages independently — for example, through Bitcoin blockchain via OpenTimestamps — so that integrity can be verified without reliance on the entity or any single trust service provider.
- Document chain of custody in line with ISO/IEC 27037:2012 phases (identification, collection, acquisition, preservation), with roles, timestamps, and actions recorded for each step.
- Train the management body on NIS2 Article 20 obligations, including approval of Article 21 measures and oversight of implementation; document the training with dated attendance records.
- Map the overlap between NIS2 and other applicable regulations (GDPR, DORA, CER Directive, sector-specific frameworks) to avoid both gaps and duplicated notifications, and document the mapping.
- Establish supply-chain evidence procedures so that incidents involving third-party services can be captured against the third party's state at the moment of awareness, before the third party remediates.
- Conduct at least one tabletop exercise per year simulating the full Article 23 timeline, including evidence capture, internal escalation, CSIRT notification, and service recipient communication.
- Make all evidence packages independently verifiable by third parties — supervisory authorities, courts, insurers, counsel — using open-source tools and without dependency on the producing entity's infrastructure.
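The tamper-evident logging item above (hash chains with documented integrity primitives) can be sketched in a few lines. This is an illustrative minimal hash chain, not a production logging system: each entry commits to its predecessor's SHA-256 digest, so any retroactive edit or deletion breaks every hash that follows it.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # sentinel predecessor for the first entry

def append_entry(log: list, message: str) -> dict:
    """Append a log entry that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "msg": message,
        "prev": prev,
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    entry = {**body, "hash": hashlib.sha256(canonical).hexdigest()}
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash and predecessor link; any tampering fails here."""
    prev = GENESIS
    for e in log:
        if e["prev"] != prev:
            return False
        body = {k: e[k] for k in ("ts", "msg", "prev")}
        canonical = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(canonical).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

A hash chain alone proves internal consistency, not the time of writing; that is why the checklist pairs it with qualified timestamps or equivalent external anchoring of the chain head at regular intervals.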
Frequently asked questions and conclusion
The following answers address the questions that arise most often when organisations begin operationalising their NIS2 evidence-handling obligations. They are intended as practical orientation, not as legal advice for any specific situation.
When does the NIS2 24-hour clock actually start?
Is the NIS2 72-hour deadline the same as GDPR's 72 hours?
What counts as a significant incident for entities not covered by Commission Implementing Regulation 2024/2690?
Do small and micro-enterprises ever fall within NIS2 scope?
Can my organisation outsource NIS2 incident reporting to an MSSP?
What evidence must be preserved alongside the 72-hour notification?
How long must NIS2-related evidence be retained?
What does NIS2 require regarding management body responsibility?
Can a single cybersecurity incident trigger NIS2, GDPR, and DORA simultaneously?
Does my organisation need ISO 27001 certification for NIS2 compliance?
What is the relationship between NIS2 and the eIDAS Regulation?
How is forensic web evidence different from a screenshot?
What happens if an entity fails to submit the 24-hour early warning on time?
How should multinational entities handle cross-border NIS2 reporting?
How does GetProofAnchor specifically support NIS2 evidence-handling?
NIS2 enforcement in 2026 is not a hypothetical. The grace period is over, transposition is largely complete, and supervisory authorities across the Union are moving from the building-the-framework phase into the applying-the-framework phase. For regulated entities, the practical test is no longer whether policy documents exist — almost everyone has those — but whether evidence can be produced under the Article 23 clock and verified by independent third parties.
The capabilities that distinguish well-prepared NIS2 entities are not exotic. They are detection, classification, forensic preservation of incident artefacts, documented chain of custody, integrity-preserving log retention, and a board-level governance posture that can be demonstrated to a supervisor. Each of these is achievable with existing tools and existing standards. The decisive factor is whether they are in place before the incident — because once the 24-hour clock starts, the time for capability building is over.
GetProofAnchor exists to make the forensic preservation component of this stack — specifically the web-facing artefacts that disappear fastest in the moments after detection — straightforward, defensible, and independently verifiable. If your organisation is building its NIS2 evidence-handling capability and would like to see how qualified-timestamped forensic web capture integrates with your incident-response procedures, the most direct way is to create a sample proof from any public URL and inspect the Evidence ZIP. Verification is open-source and runs on any laptop. The standard is reproducible because it has to be.
Build NIS2-ready evidence before the next incident
Create a tamper-evident proof of any public URL with qualified eIDAS timestamps and independent Bitcoin blockchain anchoring. Verification is open-source and works offline. No specialist forensic skills required.
GetProofAnchor is designed for forensic web evidence capture across the EU. Captures are sealed with qualified electronic timestamps under eIDAS Article 42 by an accredited trust service provider.