Incident Response Plan: Complete Framework, Templates & Regulatory Timelines

When a security incident occurs, every minute counts. The difference between a contained incident and a full-blown crisis often comes down to whether the organization has a tested, documented incident response plan that the team can execute under pressure. Without one, even minor incidents escalate into chaos, regulatory deadlines are missed, and the organization learns nothing from the experience.
This guide provides a proven incident response framework covering the 6 phases of incident handling, team composition, regulatory reporting timelines under DORA, NIS2, and GDPR, communication templates, and how to build an IR capability that improves with every incident.
What Is an Incident Response Plan?
An incident response plan (IRP) is the documented playbook that guides your organization through a cybersecurity incident. It defines: who does what (roles and responsibilities), how incidents are detected and classified (severity levels), what steps to take for containment, eradication, and recovery, who to notify and when (regulatory reporting), and how to learn from incidents (post-incident review). The IRP is not a static document — it is a living framework that is tested regularly and updated based on lessons learned.
The 6 Phases of Incident Response
Phase 1: Preparation
Build the foundation before incidents occur:
- Establish the Computer Security Incident Response Team (CSIRT) with defined roles and escalation paths.
- Procure and configure IR tools (EDR, SIEM, forensic toolkits, communication platforms).
- Create communication templates for internal stakeholders, regulators, customers, and media.
- Establish relationships with external resources (forensics firms, legal counsel, cyber insurance carrier).
- Conduct tabletop exercises quarterly to test the plan and build team readiness.
Phase 2: Identification
Detect and confirm security incidents through: SIEM/XDR alerts and correlation rules, user reports (phishing attempts, suspicious activity), threat intelligence feeds, and automated detection tools (EDR, network monitoring). Classify incidents by severity: P1 (critical) triggers the full IR team and executive notification; P2 (high) activates the core IR team; P3/P4 may be handled by SOC analysts within standard procedures. The key metric is mean time to detect (MTTD) — the faster you identify an incident, the smaller the impact.
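The severity-to-escalation mapping described above is easier to apply consistently when it is encoded rather than left to judgment under pressure. A minimal sketch, assuming hypothetical classification criteria (your own classification matrix will differ):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    P1 = "critical"
    P2 = "high"
    P3 = "medium"
    P4 = "low"

@dataclass
class Incident:
    data_breach_suspected: bool
    critical_system_affected: bool
    actively_spreading: bool

def classify(incident: Incident) -> Severity:
    # Hypothetical criteria for illustration; replace with the
    # classification matrix defined in your own IRP.
    if incident.data_breach_suspected and incident.critical_system_affected:
        return Severity.P1
    if incident.critical_system_affected or incident.actively_spreading:
        return Severity.P2
    if incident.data_breach_suspected:
        return Severity.P3
    return Severity.P4

# Escalation paths per the severity scheme in the text.
ESCALATION = {
    Severity.P1: ["full IR team", "executive notification"],
    Severity.P2: ["core IR team"],
    Severity.P3: ["SOC analysts"],
    Severity.P4: ["SOC analysts"],
}
```

Codifying the criteria this way also makes them testable in tabletop exercises: feed in past incidents and check that the assigned severity matches what the team would expect.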
Phase 3: Containment
Stop the incident from spreading, in two stages.
Short-term containment (immediate):
- Isolate affected systems from the network.
- Block malicious IP addresses and domains.
- Disable compromised user accounts.
- Activate network segmentation to prevent lateral movement.
Long-term containment:
- Apply emergency patches.
- Reconfigure firewalls and access controls.
- Rebuild affected systems from clean images while preserving evidence.
Critical: preserve forensic evidence before making changes, and document every action with timestamps.
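Because every containment action must be documented with a timestamp, teams often keep an append-only action log from the first minute of response. A minimal sketch of such a log (field names and actor identifiers are illustrative):

```python
import json
from datetime import datetime, timezone

class ActionLog:
    """Append-only log of IR actions with UTC timestamps."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, target: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "target": target,
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        # JSON Lines export, convenient for rebuilding the incident
        # timeline during the post-incident review.
        return "\n".join(json.dumps(e) for e in self.entries)

log = ActionLog()
log.record("analyst.1", "isolate-host", "ws-0231")
log.record("analyst.1", "disable-account", "j.doe")
```

Using UTC consistently avoids timeline confusion when responders, systems, and regulators sit in different time zones.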
Phase 4: Eradication
Remove the root cause:
- Eliminate malware from all affected systems.
- Close the vulnerability that was exploited.
- Revoke and rotate all potentially compromised credentials.
- Verify that the threat actor no longer has access (check for backdoors and persistence mechanisms).
- Update detection rules to prevent reinfection.
This phase often overlaps with containment and may require multiple iterations.
Phase 5: Recovery
Restore systems to normal operation:
- Restore from verified clean backups.
- Gradually reconnect isolated systems to the network.
- Monitor intensively for any signs of recurrence.
- Validate system integrity (file integrity checks, configuration verification).
- Confirm business processes are functioning correctly.
- Communicate recovery status to stakeholders.
Recovery timelines should align with the BCM recovery time objectives established in the BIA.
Phase 6: Lessons Learned
Within 2 weeks of incident closure, conduct a post-incident review:
- What happened? (timeline, attack vector, systems affected)
- What went well? (effective detections, fast response actions)
- What went wrong? (delayed detection, communication gaps, tool failures)
- What improvements should we make? (process changes, tool upgrades, training needs)
Update the IRP based on findings and share relevant lessons across the organization. This phase is the one most often skipped and most often regretted: organizations that do not learn from incidents are condemned to repeat them.
Regulatory Reporting Timelines
Multiple regulations impose incident notification deadlines:
- DORA (financial institutions): Initial notification to competent authority within 4 hours of incident classification, intermediate report within 72 hours, final report within 1 month.
- NIS2 (essential and important entities): Early warning within 24 hours, incident notification within 72 hours, final report within 1 month.
- GDPR (personal data breaches): Notification to supervisory authority within 72 hours of becoming aware. Notification to affected individuals without undue delay if high risk.
Compliance with multiple overlapping reporting timelines requires pre-prepared notification templates and clear processes for determining which regulations apply to a given incident.
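When several regimes apply at once, the full reporting schedule can be computed the moment the clock starts, so no deadline depends on someone remembering it mid-crisis. A minimal sketch using the timelines listed above (determining which regulations apply is assumed to happen elsewhere; "1 month" is approximated as 30 days here):

```python
from datetime import datetime, timedelta, timezone

# Obligations and deadlines from the moment the clock starts
# (classification for DORA, awareness for NIS2/GDPR).
DEADLINES = {
    "DORA": [("initial notification", timedelta(hours=4)),
             ("intermediate report", timedelta(hours=72)),
             ("final report", timedelta(days=30))],
    "NIS2": [("early warning", timedelta(hours=24)),
             ("incident notification", timedelta(hours=72)),
             ("final report", timedelta(days=30))],
    "GDPR": [("supervisory authority notification", timedelta(hours=72))],
}

def reporting_schedule(clock_start: datetime, regulations: list[str]):
    """Return (due_at, regulation, obligation) tuples, earliest first."""
    schedule = [
        (clock_start + delta, reg, obligation)
        for reg in regulations
        for obligation, delta in DEADLINES[reg]
    ]
    return sorted(schedule)

classified_at = datetime(2025, 3, 1, 14, 0, tzinfo=timezone.utc)
for due, reg, obligation in reporting_schedule(classified_at, ["DORA", "GDPR"]):
    print(f"{due:%Y-%m-%d %H:%M} UTC  {reg}: {obligation}")
```

A schedule like this can feed the incident ticket directly, turning each regulatory deadline into a tracked task with an owner.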
Building the Incident Response Team
Core team (activated for all incidents):
- CISO or security lead (incident commander)
- SOC analysts (detection and initial response)
- IT operations (system administration and recovery)
- Legal counsel (regulatory obligations and liability)
Extended team (activated for major incidents):
- Communications/PR (external messaging)
- HR (employee-related incidents)
- Executive management (strategic decisions and authority)
- External forensics (specialized investigation)
- Cyber insurance carrier (coverage and support services)
Frequently Asked Questions
How often should we test the incident response plan?
Tabletop exercises quarterly (low cost, high learning value), functional exercises semi-annually (partial activation of IR processes), and full-scale exercises annually (complete activation simulating a real incident). Additionally: test after every significant infrastructure change and after every real incident.
Who should be on the incident response team?
Core team: CISO/security lead, SOC analysts, IT operations, and legal counsel. Extended team (for major incidents): communications/PR, HR, executive management, external forensics, and cyber insurance carrier. Define roles, contact information, and escalation paths before an incident occurs.
Should we pay ransomware demands?
This is a business decision, not a technical one. Consider: do you have viable backups? What is the business impact of extended downtime? Are there legal restrictions on payment in your jurisdiction? Will payment actually result in data recovery? Industry surveys repeatedly find that a substantial share of victims who pay never recover all of their data. Involve legal counsel and your cyber insurance carrier before making this decision.
What tools do we need for incident response?
Minimum toolkit: EDR platform (CrowdStrike, SentinelOne, Microsoft Defender for Endpoint), SIEM or XDR for detection and correlation, forensic imaging tools (FTK Imager, AXIOM), secure communication channel (not dependent on potentially compromised email), and a ticketing/documentation system for timeline tracking. Advanced: threat intelligence platform, automated response (SOAR), and network forensics capability.
How do we handle the 4-hour DORA notification requirement?
The 4-hour clock starts when the incident is classified (not when it is detected). Preparation is key: pre-drafted notification templates with fill-in-the-blank fields, clear classification criteria so incidents can be categorized quickly, defined on-call schedule for the incident commander, direct communication channel to the competent authority, and practiced notification process through tabletop exercises.
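Pre-drafted templates with fill-in-the-blank fields are what make a 4-hour deadline realistic. A minimal sketch of such a template (the field names are illustrative, not the official DORA reporting schema, and the entity details are fictional):

```python
from datetime import datetime, timezone
from string import Template

# Hypothetical pre-drafted initial-notification template.
DORA_INITIAL = Template(
    "Initial major-incident notification\n"
    "Entity: $entity\n"
    "Incident reference: $reference\n"
    "Classified at: $classified_at\n"
    "Summary: $summary\n"
    "Services affected: $services\n"
)

def draft_initial_notification(entity, reference, classified_at, summary, services):
    # substitute() raises KeyError if any blank is left unfilled,
    # which is exactly what you want under a 4-hour deadline.
    return DORA_INITIAL.substitute(
        entity=entity,
        reference=reference,
        classified_at=classified_at.isoformat(),
        summary=summary,
        services=", ".join(services),
    )

draft = draft_initial_notification(
    "Example Bank AG", "IR-2025-007",
    datetime(2025, 3, 1, 14, 0, tzinfo=timezone.utc),
    "Ransomware detected on payment batch servers; containment in progress.",
    ["SEPA batch processing"],
)
print(draft)
```

Keeping templates as code or versioned files also means tabletop exercises can rehearse the exact artifact that would reach the competent authority.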