Cyber.SoHo Educational Hub · Incident Management

Mid-Term Review
Days 01–10 · Weeks 1–5

A complete, three-hour study companion: condensed lecture review, ten realistic case studies with worked answers, and seventy-five exam-style questions across all six chapters of the course.

  • Coverage — 6 chapters · 30 lessons
  • Case studies — 10 worked scenarios
  • Questions — 75 MCQs & MSQs
  • Time budget — ≈ 3 hours

How to use this document

This is a study companion, not a substitute for the lecture notes. It walks you through everything that has been covered in class, then puts you in the hot seat with case studies and a 75-question practice exam. The recommended order is the order it is written:

  • Spend roughly 75 minutes reading Part 1 — the chapter-by-chapter review. Mark anything that does not feel solid and revisit the original lecture for that lesson before moving on.
  • Spend roughly 40 minutes on Part 2 — the ten case studies. Read each scenario, write your own answer in a notebook, and then compare with the suggested answer. The point is not to be right; the point is to find the gap between your reasoning and a defensible reasoning.
  • Spend roughly 65 minutes on Part 3 — the 75 questions. Cover the answer with your hand. Check the hint only if you are stuck. Look at the answer only after you have committed to a choice. The chapter reference next to each question tells you exactly which lecture to revisit if you got it wrong.

A small note on style. Abbreviations are spelled out the first time they appear, and again whenever enough material has gone past that you might have lost track. Every external organisation, regulator, framework, or named tool is hyperlinked to its official site so you can verify the source for yourself rather than taking my word for it. Always scan unfamiliar URLs through VirusTotal before clicking — a habit that costs three seconds and has saved careers.


PART 1 — Comprehensive Chapter Review

Lesson-by-lesson walk-through of all six chapters.

Chapter 1

Introduction to Incident Handling & Security Concepts

Note Why we begin here

The opening lecture sets the vocabulary that the entire course is built on. If you only remember one sentence from Day 01, make it this: incident handling is the disciplined, documented process of dealing with the inevitable. Every organisation eventually has an incident; the well-prepared ones survive it.

Lesson 1 What incident handling actually is

Three nouns that students confuse, and that you cannot afford to confuse in a write-up:

  • An event is anything observable on a system — a login, a packet, an antivirus scan. Server logs are full of them, easily millions per day on a medium-sized network.
  • An alert is an event that some detection tool has decided is worth a human's attention. Alerts are rarer than events, but plenty of them are still false positives.
  • An incident is an alert that has been confirmed — by a human or a well-trained automation — to represent an actual violation of security policy or an imminent threat of one. The moment the word "incident" gets used in writing, the stopwatch starts: lawyers care, regulators care, and the bill begins to climb.

Incident handling, sometimes called incident response (IR), is the disciplined process of detecting, analysing, containing, eradicating, recovering from, and learning from those incidents. The authoritative reference is NIST Special Publication 800-61 (NIST = National Institute of Standards and Technology) — the Computer Security Incident Handling Guide. NIST itself groups the work into four phases; the classic six-phase model taught in this course expands the same lifecycle into Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. NIST is a US federal agency that publishes technical standards used worldwide.

Why does anybody pay for this work? Three reasons, in roughly descending order of weight: money (the IBM Cost of a Data Breach Report consistently shows organisations with mature IR programmes save one to two million US dollars per incident compared to those without), regulation (under GDPR (General Data Protection Regulation) the breach-notification clock is 72 hours from awareness — not from confirmation), and trust (a transparent, fast response often preserves a company's reputation; a hidden one ends in a courtroom).

Lesson 2 Core security concepts

The CIA triad — Confidentiality, Integrity, Availability — is the trio of properties every security control ultimately defends:

  • Confidentiality — information is disclosed only to those authorised to see it. Mechanisms: encryption (in transit and at rest), access control lists, multifactor authentication (MFA), the boring physical lock on the server-room door.
  • Integrity — information is accurate, complete, and unaltered by unauthorised parties. Mechanisms: cryptographic hashes, digital signatures, transaction logs, version control. The Stuxnet worm famously attacked integrity by reporting correct values to operators while centrifuges destroyed themselves.
  • Availability — authorised users can access information and systems when needed. Mechanisms: redundant hardware, load balancers, backup power, anti-DDoS (Distributed Denial-of-Service) services. Every DDoS attack is a direct assault on this pillar.

Three vocabulary terms revolve around the triangle:

  • A threat is anything that could cause harm — an organised crime group, a disgruntled employee, a tropical storm. Threats are potential, not actual.
  • A vulnerability is a weakness a threat can exploit — an unpatched server, a shared admin password, a missing background check.
  • An exploit is the specific method or tool that uses the vulnerability — a Metasploit module, a phishing email with a poisoned spreadsheet, a brick through the window.

Risk = Likelihood × Impact. That is the qualitative formula every CISO (Chief Information Security Officer) you ever meet will recite. Risk is never purely technical; the same vulnerability in a public-facing production database is a much bigger risk than in a test system holding fake data, because impact differs by orders of magnitude. The companion methodology is NIST SP 800-30 — Guide for Conducting Risk Assessments.
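
A minimal sketch of the formula in code, assuming illustrative 1-to-5 scales and band thresholds (neither is prescribed by NIST SP 800-30):

  def risk_score(likelihood: int, impact: int) -> str:
      # Risk = Likelihood x Impact, both on an assumed 1-5 qualitative scale
      score = likelihood * impact
      if score >= 15:
          return f"CRITICAL ({score})"
      if score >= 8:
          return f"HIGH ({score})"
      if score >= 4:
          return f"MEDIUM ({score})"
      return f"LOW ({score})"

  # Same vulnerability, different impact: production vs. test system
  print(risk_score(likelihood=3, impact=5))  # CRITICAL (15) - public-facing production database
  print(risk_score(likelihood=3, impact=1))  # LOW (3) - test system holding fake data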

A control is a safeguard that reduces risk. Three families: preventive (stop the thing happening), detective (notice it when it does), corrective (clean up afterwards). A mature programme uses all three. An asset is anything the organisation values enough to protect — data, systems, people, reputation, intellectual property.

Lesson 3 Incident families and classification axes

The seven incident families you will see in the wild, in roughly decreasing order of frequency:

  1. Malicious code (malware) — viruses, worms, trojans, rootkits, ransomware. The 2017 WannaCry outbreak hit ~200,000 systems across 150 countries in roughly a day.
  2. Social engineering — phishing (email), vishing (voice), smishing (SMS — Short Message Service), pretexting, Business Email Compromise (BEC). The FBI Internet Crime Complaint Center reports BEC alone causes billions of dollars of losses annually.
  3. Denial of Service (DoS) and DDoS — flooding a target to take it offline. The 2016 Dyn cyberattack took a chunk of the US internet down using compromised home routers and cameras.
  4. Unauthorised access — stolen passwords, credential stuffing, exploitation of exposed services. The 2021 Colonial Pipeline ransomware case began with a single leaked VPN (Virtual Private Network) credential reused across systems.
  5. Data exposure and leakage — deliberate exfiltration, accidental public buckets, mis-addressed spreadsheets. The 2017 Equifax breach exposed records of 147 million people.
  6. Insider threats — malicious (a departing engineer with source code on a USB stick) and accidental (an executive uploading sensitive data to personal cloud storage by mistake).
  7. Supply-chain compromise — an attacker compromises a trusted vendor and the malicious update ships to thousands of customers. The 2020 SolarWinds Sunburst campaign is the canonical case.

Every incident gets classified along four axes: severity (how loud should the alarm ring — typically Low/Medium/High/Critical), scope (how far does it reach — narrow or broad), category (which of the seven families above, since category drives runbook selection), and sensitivity (was regulated data — personal data, protected health information, cardholder data — touched).
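
As a sketch, the four axes map naturally onto a record that every ticket carries from triage onwards (field names and values are illustrative, not a standard):

  from dataclasses import dataclass
  from enum import Enum

  class Severity(Enum):
      LOW = 1
      MEDIUM = 2
      HIGH = 3
      CRITICAL = 4

  @dataclass
  class Classification:
      category: str         # one of the seven families - drives runbook selection
      severity: Severity    # how loud the alarm rings
      scope: str            # "narrow" (one host) or "broad" (site- or company-wide)
      sensitive_data: bool  # regulated data touched? this starts the legal clock

  ticket = Classification(category="malicious code", severity=Severity.HIGH,
                          scope="narrow", sensitive_data=True)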

Lesson 4 Teams that respond to incidents

The SOC (Security Operations Center) is the front line. SOC analysts monitor the SIEM (Security Information and Event Management) platform, triage alerts, and resolve or escalate. SOCs run in tiers: Tier 1 does first-pass triage, Tier 2 does deeper analysis and minor incident response, Tier 3 does forensic investigation and malware reverse engineering. Most enterprise SOCs run 24×7 with follow-the-sun shifts because attackers do not respect office hours.

The CSIRT (Computer Security Incident Response Team) owns the incident from confirmation through lessons learned. In small organisations, the CSIRT is a subset of the SOC; in large ones, a separate unit reporting to the CISO. Two cousin acronyms: CERT (Computer Emergency Response Team) is older and still used for many national teams like CERT-EU; the CERT trademark belongs to Carnegie Mellon University, which is why most corporate teams now prefer CSIRT. PSIRT (Product Security Incident Response Team) is the team inside a software vendor that handles vulnerabilities in their own products.

Around the SOC and CSIRT sit the supporting cast: the CISO owns the executive-level security programme, Legal counsel decides what can be said publicly and asserts privilege over forensic reports, Communications/PR (Public Relations) writes the breach-notification letter and the press release, Human Resources handles insider cases and out-of-hours staff matters, IT operations know the infrastructure and must be befriended long before the breach, and External partners (a digital-forensics retainer with firms like Mandiant, CrowdStrike, or Unit 42, a breach-notification law firm, a PR agency) get called when the incident exceeds in-house capacity. Law enforcement (the FBI and its Internet Crime Complaint Center (IC3), the U.S. Cybersecurity and Infrastructure Security Agency (CISA), and Canada's Royal Canadian Mounted Police National Cybercrime Coordination Unit (RCMP NC3)) gets engaged when the incident is serious enough that the intelligence outweighs the headache.

If you remember one non-technical word from Lesson 4, make it tabletop exercise — a two-hour meeting where the IR team walks through a simulated incident with no keyboards and no real systems. Tabletops surface the playbook gaps that no documentation review will reveal. Every organisation that runs them quarterly handles real incidents dramatically better than those that do not.

Lesson 5 Legal and regulatory frameworks

Modern incident handling has a stopwatch attached. The four frameworks you must know on day one of the job:

  • GDPR (General Data Protection Regulation) — the EU regulation in force since May 2018. Article 33 requires notification of the lead supervisory authority within 72 hours of awareness of a personal-data breach, unless the breach is unlikely to result in a risk to individuals. Article 34 requires notification of the affected individuals themselves when the risk is high. Penalties reach EUR 20 million or 4% of global annual turnover, whichever is higher. The Irish Data Protection Commission has been the lead regulator for many of the largest fines.
  • HIPAA (Health Insurance Portability and Accountability Act) — the US law governing protected health information (PHI). The Breach Notification Rule, enforced by the US Department of Health and Human Services Office for Civil Rights, requires individuals to be notified within 60 days, the Secretary of HHS within 60 days for breaches of 500 or more individuals, and prominent media outlets in the affected state if 500 or more residents are involved.
  • NIST guidance — not a law, but the global de-facto baseline. The two publications you must know are NIST SP 800-61 Rev. 2 (the incident-handling lifecycle) and the NIST Cybersecurity Framework version 2.0 (organised into six functions: Govern, Identify, Protect, Detect, Respond, Recover).
  • Others worth knowing — PCI DSS (Payment Card Industry Data Security Standard, contractual rather than legal but card-brands can revoke processing rights for non-compliance), NIS2 (the EU Network and Information Security Directive 2 with a 24-hour early-warning clock for essential and important entities), the SEC cyber-incident disclosure rule (US-listed companies must file Form 8-K Item 1.05 within four business days of materiality determination), CCPA/CPRA (California's privacy laws), and ISO/IEC 27035 (the international standard for information-security incident management).

The practical consequence: every incident has a clock. The instant an incident is confirmed, two questions must be asked — "is personal or regulated data implicated?" and "when does the clock start?". Write those two questions on a sticky note on your monitor.

Recap Chapter 1 — five things to remember
  • An incident is an event that has been confirmed as a security-policy violation. Document everything from the moment of confirmation.
  • Risk = Likelihood × Impact, with Impact taken as the worst-case across C, I, and A.
  • Classify every incident by category, severity, scope, and sensitivity — all four.
  • Know who to page, in what order, and rehearse the chain through tabletops before you need it.
  • Every serious incident has at least one legal clock. Put the relevant ones on a sticky note.

Chapter 2

The Incident Handling & Response (IH&R) Process

Lesson 1 The full IH&R lifecycle

The classic six-phase IH&R lifecycle is the model you will see on every certification exam — CEH (Certified Ethical Hacker), GCIH (GIAC Certified Incident Handler), ECIH (EC-Council Certified Incident Handler) — and on most job descriptions:

  • Preparation — before the fire starts, you check the extinguishers. Build the IRP (Incident Response Plan), stand up the CSIRT, stock the jump bag, run tabletops, harden logging on the SIEM, EDR (Endpoint Detection and Response), and NDR (Network Detection and Response) — and get legal, HR, PR (Public Relations), and the insurance carrier on the same page before the 3 a.m. call.
  • Identification — somebody yells "fire!" and you confirm it is actually fire and not burnt toast. Sources: SIEM correlation rules, EDR telemetry from products like CrowdStrike Falcon or Microsoft Defender for Endpoint, help-desk calls, external notification (your bank, a customer, CISA — the Cybersecurity and Infrastructure Security Agency — or, awkwardly, a journalist).
  • Containment — close the door so the fire does not spread. Short-term: stop the bleeding (network isolation, account disable, DNS blackholing of attacker infrastructure). Long-term: keep the attacker busy while you plan a clean eradication. Snapshot affected VMs (Virtual Machines) before reboot — never the other way around, or you lose volatile evidence in RAM (Random Access Memory).
  • Eradication — remove the malware, delete attacker accounts, patch the exploited CVE (Common Vulnerabilities and Exposures), kill every persistence mechanism (scheduled tasks, registry run keys, cron jobs, WMI subscriptions, systemd timers). If you eradicate before you fully understand root cause, the attacker comes back through the same door.
  • Recovery — bring systems back online under careful monitoring. Restore from known-good backups, re-image rather than "clean" where possible, stage the bring-up so you catch surprises before they hit the crown jewels, and validate that business functions actually work — finance does not care that the domain controller is clean if they cannot run payroll.
  • Lessons Learned — within two weeks of recovery, run a blameless post-incident review (PIR), produce metrics, update the IRP and playbooks, and feed the lessons back into Preparation.

The classic three nouns that students confuse: Event (anything observable), Incident (event(s) that violate or threaten security policy), Breach (an incident where confidentiality of data is proven lost). Mixing these up in a boardroom is how careers end.

Lesson 2 Building, maintaining, and activating the IRP

A real IRP (Incident Response Plan) is a living document, not a PDF gathering dust on a SharePoint. Seven sections at minimum: purpose/scope/authority, roles and responsibilities (in the form of a RACI matrix — Responsible / Accountable / Consulted / Informed — by title, not by name), incident classification and severity matrix (typically SEV-1 through SEV-4 with crystal-clear definitions and time-to-notify SLAs — Service Level Agreements), six-phase playbook references, communication and escalation protocol, legal and regulatory reporting obligations (Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) reported to the Office of the Privacy Commissioner of Canada (OPC), Quebec's Law 25 where Quebec residents are affected, U.S. state breach-notification laws reported to the relevant state Attorney General — for example California, New York, Illinois — plus the U.S. Federal Trade Commission (FTC) and sector-specific bodies), and plan maintenance with an owner and review schedule.

When you build an IRP from scratch, do it in this order: map crown-jewel assets, define the severity matrix in terms of business impact (a SEV-1 means "this is costing money or reputation right now", not "this is a cool zero-day"), draft roles with named deputies, write one playbook per top-five incident type (the SANS Reading Room and the CISA playbooks are excellent starting templates), get legal and HR to review the relevant sections in writing, then tabletop-test, break, and rewrite annually.

The IRP decays. Every new SaaS (Software as a Service) tool, every reorganisation, every new regulation (the EU's NIS2 and DORA — Digital Operational Resilience Act — are two recent examples) rots a piece of it. Schedule a minor review every quarter, a major review annually, and an unscheduled review after any SEV-1 or SEV-2.

Activation is a formal act, not a mood. Triggers: any confirmed SEV-1 or SEV-2, credible external notification, detection of high-impact TTPs (Tactics, Techniques, and Procedures) such as domain-wide Kerberoasting or a ransomware note on any host, or executive decision by the CISO. First moves: declare, assemble, stabilise, communicate.

Lesson 3 Communication and escalation

Most of the damage in a major breach comes from communication failures, not technical failures. The attacker exfiltrated in six minutes; the legal team learned about it from Twitter; the PR team contradicted the CEO's TV statement; customer service was telling people "everything is fine" while the website was publicly defaced. Look at the post-mortems of Equifax or Target and you will see exactly this pattern.

Three communication axes run in parallel during a live incident, each with different rules:

  • Internal-technical — the IRT (Incident Response Team), SOC, IT operations, network team, application owners. High bandwidth, high detail, minimal filter. Use secure, out-of-band channels because your main channels may be compromised.
  • Internal-executive/legal/HR/PR — leadership, legal counsel, HR, PR, insurance broker. Lower bandwidth, structured, often via SITREPs (Situation Reports) every 60 or 120 minutes during an active SEV-1.
  • External — customers, regulators, law enforcement, media, vendors, partners. Lowest bandwidth, maximum filtering, often through a single spokesperson. Every word is potentially a legal exhibit.

Escalation — moving the incident up the severity ladder or up the seniority ladder — has two rules every junior responder should memorise. First: when in doubt, escalate. The cost of escalating a false positive is a slightly grumpy manager; the cost of not escalating a real one ends careers. Second: escalation is not a blame transfer — you still own the technical response after escalating. You are asking for authority, resources, and visibility, not for someone to take the pain away.

A sane escalation matrix maps severity to "time to notify", "who must be told", and "who decides". Each row maps to a pre-written, pre-approved message template so the on-call analyst at 02:14 on a Saturday is not composing prose with shaking hands.
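
A sketch of such a matrix as data, with illustrative times and roles (your IRP defines the real ones):

  # severity -> (time to notify, who must be told, who decides)
  ESCALATION_MATRIX = {
      "SEV-1": ("15 minutes", ["CISO", "IR lead", "Legal"], "CISO"),
      "SEV-2": ("1 hour", ["IR lead", "SOC manager"], "IR lead"),
      "SEV-3": ("next business day", ["SOC manager"], "SOC manager"),
      "SEV-4": ("weekly report", [], "on-shift analyst"),
  }

  def escalation_line(severity: str) -> str:
      deadline, notify, decider = ESCALATION_MATRIX[severity]
      who = ", ".join(notify) or "nobody (log only)"
      return f"{severity}: notify {who} within {deadline}; {decider} decides."

  print(escalation_line("SEV-1"))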

The single rule that has bitten me once and that I will never break again: never assume your primary communication channel is safe during an active incident. If an attacker has email access, they are reading your incident email thread. Always have an out-of-band fallback — a private Signal or WhatsApp group set up in peacetime, a conference-bridge number printed on a laminated card in the jump bag, an alternate ticketing system. This is not paranoia. The Lapsus$, Colonial Pipeline, and Uber 2022 breach playbooks all confirm attackers eavesdrop on the response when they can.

Lesson 4 Documentation and reporting

Lawyers have a phrase incident responders should tattoo on their forearms: if it is not documented, it did not happen. Documentation has four distinct purposes — operational (telling the next analyst what has been done), forensic (building an admissible chain-of-custody trail), regulatory (satisfying GDPR Article 33, the SEC cyber-incident rule, HIPAA, NIS2, PCI-DSS), and strategic (feeding lessons-learned and metrics back into the programme). Mixing them — for example, putting forensic hypotheses in the regulator notification — creates legal exposure.

A minimum documentation set, phase by phase:

  • Preparation: the current version of the IRP and every playbook with version number and approval date, asset inventory, contact lists, tabletop after-action reports, training records.
  • Identification: the alert that triggered detection with raw log excerpts and timestamps in UTC (Coordinated Universal Time), initial analyst triage notes and the reasoning for the severity call, the exact list of IoCs (Indicators of Compromise) observed, time of declaration, identity of declaring authority.
  • Containment: every action taken, the system it was taken on, the authorised person, the timestamp, any chain-of-custody forms (NIST SP 800-86 — Guide to Integrating Forensic Techniques into Incident Response — is the reference), hash values of all evidence collected (typically SHA-256).
  • Eradication: root-cause hypothesis → validation → confirmed root cause, with evidence for each step; list of patches, configuration changes, account deletions, and rule additions made, with change-management ticket numbers.
  • Recovery: restore sources (which backup, what point-in-time, validated how), service-by-service bring-up order, sign-off from business owners.
  • Lessons Learned: reconstructed UTC minute-granular timeline, root-cause analysis report, cost of the incident (direct, indirect, regulatory, reputational), specific action items with owners and deadlines.

The reporting clock you must internalise: GDPR Article 33 — 72 hours from awareness to the lead supervisory authority; SEC 8-K Item 1.05 — four business days from materiality determination; PCI-DSS — "immediately" to your acquiring bank; HIPAA — 60 days for breaches affecting 500 or more individuals; NIS2 — 24-hour early warning to the national CSIRT. These deadlines start from the moment you become aware, not from the moment it is convenient.
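
A sketch that turns the moment of awareness into concrete deadlines — the table simply mirrors the clocks above, and is a study aid, not legal advice:

  from datetime import datetime, timedelta, timezone

  # regulation -> notification window, simplified from the paragraph above
  CLOCKS = {
      "NIS2 early warning": timedelta(hours=24),
      "GDPR Article 33": timedelta(hours=72),
      "SEC 8-K Item 1.05": timedelta(days=4),   # four *business* days in reality
      "HIPAA (500+ individuals)": timedelta(days=60),
  }

  def print_deadlines(awareness: datetime) -> None:
      for rule, window in sorted(CLOCKS.items(), key=lambda kv: kv[1]):
          print(f"{rule}: notify by {(awareness + window).isoformat()}")

  print_deadlines(datetime(2025, 3, 1, 2, 14, tzinfo=timezone.utc))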

Lesson 5 Post-incident analysis and continuous improvement

Most organisations do the first five phases. Many quietly skip the sixth. The lessons-learned meeting gets scheduled, then cancelled, then forgotten. Skipping it is the single most expensive mistake in security: every incident is a free, fully-funded research project into how your defences fail, and throwing that research in the bin is indefensible.

A proper PIR (Post-Incident Review) is blameless (about systems, not individuals), timed (within two weeks of recovery, before memory decays), multi-disciplinary (IR, IT Ops, SOC, application owners, business owners, legal, HR — the gaps live at the seams between disciplines), data-driven (real metrics from the SIEM and ticketing system, not war stories), and action-oriented (every PIR produces a written list of items with owners and deadlines).

Two root-cause-analysis techniques worth knowing. The 5 Whys — ask "why did this happen?" five times in a row, and the fifth answer is what you actually fix. Why did the attacker get in? Because a contractor's VPN account had MFA disabled. Why? Because IT turned it off six months ago "temporarily". Why was the temporary change not reverted? Because there was no expiry on the exception ticket. Why? Because the exception-ticket template has no expiry field. Why? Because nobody has reviewed the template in three years. The fix is the template, not the contractor. The second technique is the Fishbone (Ishikawa) diagram, which splits causes into categories — People, Process, Technology, Environment — and forces the team to populate each branch (ASQ has a good primer).

Metrics that actually matter (and a couple that do not):

  • MTTD (Mean Time to Detect) — first attacker action to first confirmed alert.
  • MTTA (Mean Time to Acknowledge) — alert firing to analyst pickup.
  • MTTR (Mean Time to Respond / Recover) — declaration to full recovery.
  • Incidents per 1,000 employees — normalised so the CFO does not panic at headcount growth.
  • % of alerts that were false positives — anything above 40% means your detection content needs tuning, not your staffing.
  • % of PIR action items closed within 90 days — the single best signal of whether your lessons-learned process actually learns.

Vanity metrics that look impressive but are worthless: number of alerts generated (volume is not quality) and number of blocked connections at the firewall (that is internet noise, not defensive value).
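
A sketch of how the response-time metrics above might be computed from closed-incident records (the tuple layout is an assumption; real data would come from the ticketing system):

  from datetime import datetime
  from statistics import mean

  # (first attacker action, first confirmed alert, analyst pickup, full recovery)
  incidents = [
      (datetime(2025, 5, 1, 3, 0), datetime(2025, 5, 1, 9, 0),
       datetime(2025, 5, 1, 9, 20), datetime(2025, 5, 3, 18, 0)),
      # ... one tuple per closed incident
  ]

  def hours(a: datetime, b: datetime) -> float:
      return (b - a).total_seconds() / 3600

  mttd = mean(hours(attack, alert) for attack, alert, _, _ in incidents)
  mtta = mean(hours(alert, pickup) for _, alert, pickup, _ in incidents)
  # declaration time approximated here by analyst pickup
  mttr = mean(hours(pickup, recovery) for _, _, pickup, recovery in incidents)
  print(f"MTTD {mttd:.1f} h, MTTA {mtta:.1f} h, MTTR {mttr:.1f} h")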

Every PIR action item belongs to one of four buckets — Detection (a new SIEM rule, a new EDR signature), Prevention (a patch, a config change, a new control), Process (an updated playbook, a new training module, a new checklist), and People (a hire, a re-org, a new external retainer). Tag every action item; at the next quarterly review, count the closures per bucket. If your programme is leaning entirely on Detection (the sexy bucket), you have a maturity problem.

And finally — publish a sanitised internal PIR report to your whole IT organisation. Engineers reading "here is how we got beaten last month and here is what we are doing about it" build security instincts faster than any classroom training. Secrecy breeds repetition.


Chapter 3

First Response & Evidence Handling

Recap Why the first ninety minutes are decisive

The dirty secret of incident response is that, by the time the press writes about a breach, the company has already won or lost. Not in the boardroom, not in the courtroom, but in the panicky first ninety minutes when an over-eager admin reboots the box "to clear it up", or a helpdesk technician scans the suspicious file on the live system, or somebody screenshots the alert into a Slack channel where the attacker — still inside — is reading every word.

Lesson 1 Immediate triage steps

A useful rule veteran responders pass to juniors: the alert is not the incident; the alert is a hypothesis. Modern SIEM platforms throw off thousands of correlations a day — most are noise, a few are real. Your job in the first ninety seconds is to decide which.

Five steps in a specific order, drilled into every reputable playbook from NIST SP 800-61, ENISA, and SANS:

  • Verify the signal. Cross-reference. If a detection rule fired because a host queried a domain on a threat-intelligence feed, do not just trust the rule — open the VirusTotal entry, check the AlienVault OTX (Open Threat eXchange) reputation, look at the WHOIS via ICANN Lookup. Two confirming sources turn an alert into a finding. Zero turn it into a tuned-rule false positive.
  • Scope and severity. Three sub-questions. Which hosts? (one workstation or a whole VLAN — Virtual Local Area Network). What data? (developer laptop or a database with regulated financial records — GDPR and HIPAA timers attach). What business impact? (a tax-closing run that ships tomorrow morning is a different beast from a marketing landing page). Severity is typically a P1-to-P4 scale: P1 means active loss or material business impact, P4 means informational.
  • Contain (without erasing). This is where many first responders went wrong during the 2017 WannaCry and NotPetya outbreaks. The default first-response containment is network isolation, not power-off. On a managed endpoint, the EDR agent's "contain device" button is one click — the host is cut off from everything except its EDR controller while memory, processes, and disk stay intact for forensic capture. Pulling the plug destroys evidence in RAM that the eradication phase may need.
  • Notify. Page the on-call analyst, log the incident in the ticketing system with a unique ID, alert the IR lead per the IRP, start the legal clock if regulated data may be in scope.
  • Capture. Run the volatile-evidence collection script before anything else changes. Memory image, network connections, process tree, logged-in users — in that order, because that is the order they decay.
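
A sketch of that capture order as a script driver. The Windows commands are illustrative, the memory-imager invocation is hypothetical (check your own tool's documentation), and a real jump-bag script runs forensically validated binaries from read-only media:

  import subprocess
  from datetime import datetime, timezone

  # Ordered by decay rate, fastest-decaying first (RFC 3227)
  CAPTURE_STEPS = [
      ("memory image", ["winpmem.exe", "mem.raw"]),  # hypothetical invocation
      ("network connections", ["netstat", "-ano"]),
      ("process tree", ["tasklist", "/v"]),
      ("logged-in users", ["query", "user"]),
  ]

  with open("capture_log.txt", "a", encoding="utf-8") as log:
      for label, cmd in CAPTURE_STEPS:
          ts = datetime.now(timezone.utc).isoformat()   # UTC, always
          result = subprocess.run(cmd, capture_output=True, text=True)
          log.write(f"{ts} {label}: {' '.join(cmd)}\n{result.stdout}\n")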

Lesson 2 Chain of custody

A 1995 California criminal trial — famously — turned on a forensic detective who could not, in the end, account for who had been holding a piece of physical evidence during a few hours between collection and lab analysis. The defence pounced. The judge sustained an objection. The evidence was excluded.

Now imagine your turn on the stand. The case is corporate; your company is suing a former employee for stealing source code on his last day. The crucial evidence is a forensic image of his laptop drive, made by you during off-boarding eighteen months earlier. Opposing counsel leans forward and asks if you can account for who had physical custody, when, and what they did with the drive. If you can — every transfer, every signature, every hash — your evidence stays in. If you cannot — even one unsigned form, one missing hash — the judge may exclude it, and with it the case.

This is not theoretical. The bodies that write cybercrime evidence rules — the International Organization on Computer Evidence (IOCE), the Council of Europe Budapest Convention on Cybercrime, the US Department of Justice CCIPS (Computer Crime and Intellectual Property Section), ENISA — converge on the same principle: evidence is admissible if and only if its integrity, authenticity, and continuity can be demonstrated.

The five non-negotiable principles:

  • Integrity — the evidence must not be altered by collection or analysis. Operationalised through write blockers on physical media, working only on copies, hashing before and after every operation, and using forensically-validated tools (the NIST CFTT (Computer Forensics Tool Testing) program maintains a database).
  • Authenticity — the evidence is provably what it claims to be. Hashing is the technical part; documenting origin (which host, which user, which storage) is the procedural part.
  • Continuity — every transfer of custody is recorded with no gaps. If evidence sits in a locked cabinet for three weeks, the access log of the cabinet must show that nothing happened.
  • Reproducibility — analysis must be reproducible by an independent party using the same tools and inputs. This is why we record tool versions to the patch level (Volatility 3.4.0 and 3.4.1 may produce different output on the same memory image).
  • Minimality — collect what you need; do not over-collect. Sweeping up an entire department's email when you needed three messages creates privacy exposure and may conflict with GDPR's data minimisation principle.

A defensible chain-of-custody form contains, at minimum: a unique case identifier, a precise description ("SRV-FIN-03 RAM dump, 32 GiB, file SRV-FIN-03_mem.raw" — not "a memory dump"), a hash record with algorithm and digest, a custody log with one row per transfer including UTC timestamps and signatures from both parties, a storage record describing the bag, seal number, safe location, and access control, and a disposition record.

Two practical points students under-appreciate. First: timestamps must be in UTC. Mixed CET and PST timestamps in a single case file cause, at 03:00, exactly the kind of confusion attackers exploit. Second: handwriting must be legible and in permanent ink. Pencil and erasable-ink (FriXion) entries are inadmissible.

The senior responder's heuristic: at the moment of seizure, make at least two complete forensic images, hash both, lock the original copy in the safe with no further work done — it becomes the master. Keep one working copy in the analyst's hands; keep a second working copy as a recovery copy. The master is touched once, twelve months later in court, and at that point the only operation is a hash check that proves the working copy is identical.
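
The hash check that proves the working copy is still identical to the master is a few lines of code — a sketch, with illustrative filenames:

  import hashlib

  def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
      digest = hashlib.sha256()
      with open(path, "rb") as f:              # stream: images run to tens of GiB
          while chunk := f.read(chunk_size):
              digest.update(chunk)
      return digest.hexdigest()

  master = sha256_of("SRV-FIN-03_disk_master.dd")
  working = sha256_of("SRV-FIN-03_disk_working.dd")
  assert master == working, "working copy diverged - integrity broken"
  print(f"verified: SHA-256 {master}")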

Lesson 3 Volatile vs non-volatile evidence: the order of volatility

A working day's worth of evidence is layered like a pyramid by half-life. CPU registers and cache decay in nanoseconds. ARP (Address Resolution Protocol) tables persist for seconds. Running-process and network-connection state persists for minutes to hours on a live system. RAM contents disappear at power-off (and begin decaying within seconds of losing power). Disk contents persist for days to years. Archival media persists for as long as the medium lasts.

The reference document is RFC 3227, the Internet Engineering Task Force guide on evidence collection priority. Its prescription, simplified: collect from the top of the pyramid down. Memory before disk, live network state before logs, kernel structures before user-space artefacts.

The lesson NotPetya 2017 hammered home: many administrators watching their fleets get destroyed shut machines down to "save" them. This was wrong. The malware had already encrypted the boot sector by the time the ransom note appeared, so shutdown saved nothing — but it destroyed the most valuable evidence: the contents of memory, which held the malware's running threads, decrypted strings, the hash-dump it had performed on lsass.exe (Local Security Authority Subsystem Service), the credentials it had stolen, and the list of remote hosts it was about to attack via PsExec and Windows Management Instrumentation (WMI). Almost none of that was recoverable from a powered-off disk.

Memory forensics with Volatility and similar tools lets you, hours or days later, list every process running at the moment of capture, find parent-child relationships that look wrong (cmd.exe spawned by winword.exe is rarely good news), pull command-line arguments, extract injected shellcode, recover loaded DLLs (Dynamic Link Libraries), and sometimes reconstruct browser tabs and chat windows that the user had open. None of that is on the disk.
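
A sketch of the parent-child sanity check a triage script might run over a process listing extracted from memory (the suspicious-pair list is a tiny illustrative sample):

  # (child, parent) pairs that are rarely legitimate on a workstation
  SUSPICIOUS_PAIRS = {
      ("cmd.exe", "winword.exe"),       # Office spawning a shell
      ("powershell.exe", "excel.exe"),
      ("cmd.exe", "outlook.exe"),
  }

  def flag_anomalies(processes):
      """processes: iterable of (pid, ppid, name) rows from a memory image."""
      name_by_pid = {pid: name.lower() for pid, _, name in processes}
      for pid, ppid, name in processes:
          parent = name_by_pid.get(ppid, "<unknown>")
          if (name.lower(), parent) in SUSPICIOUS_PAIRS:
              yield f"PID {pid}: {name} spawned by {parent} (PID {ppid})"

  rows = [(412, 388, "winword.exe"), (5120, 412, "cmd.exe")]
  print(list(flag_anomalies(rows)))   # flags cmd.exe under winword.exe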

Lesson 4 Documentation during first response

The notebook is the responder's alibi. When a regulator asks the data controller, under GDPR Article 33, to demonstrate that the 72-hour breach-notification clock was respected, the proof is the notebook. When a HIPAA auditor asks why the affected database was kept online for ninety extra minutes, the answer is in the notebook. When the chief executive asks, six months later, why the e-commerce site was not pulled during Black Friday, the answer is in the notebook.

Five practical disciplines beginners must internalise:

  • The 5W+H rule. Every entry answers, at minimum, Who (which person or tool acted), What (the action or observation), Where (which host, network segment, file path), When (full UTC timestamp to the second), Why (the reason — even one phrase is better than nothing), and How (the command, query, or procedure used).
  • Time in UTC, always. A Toronto analyst sits in Eastern Time and will instinctively log "13:42 ET"; a Vancouver colleague at the same wall-clock instant writes "10:42 PT"; on the same incident, an attacker connecting from Singapore appears in the firewall log at "17:42 UTC" and nothing will line up. Convert to local time only for human-facing communication; everything in the timeline notebook is UTC.
  • Quote, do not paraphrase. When a tool produces output, paste the literal output. Do not write "the scan found three suspicious files." Write the three filenames with full paths and hashes. Paraphrase looks like a tidy reconstruction afterwards; literal output stands up in court.
  • Mark assumptions visibly. "Assuming the user did not knowingly execute the macro." "Assuming the workstation clock was correct at the time of capture." Visible assumptions allow the future reader to know which load-bearing facts were verified and which were taken on faith.
  • Never delete; correct by appending. A responder writes the wrong IP, notices five minutes later, and reaches for the eraser. Don't. Leave the wrong line and add a new line: 13:47:12Z — CORRECTION: the IP recorded at 13:42:08Z was 10.0.0.42, not 10.0.0.24. Source: arp -a re-checked. The corrected notebook is unforgeable in a way the silently-fixed one is not.

The mechanical implementation can be a paper composition book with consecutive page numbers, or a private Git repository with signed commits, or an append-only file with hash-chained entries (each entry includes the SHA-256 of the previous one, blockchain-style). All are valid. What matters is that entries cannot be plausibly altered after the fact.
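
A sketch of the hash-chained variant described above: an append-only file where every entry carries the SHA-256 of the previous line, so silent alteration breaks the chain.

  import hashlib, json
  from datetime import datetime, timezone

  LOG = "incident_notebook.jsonl"

  def append_entry(text: str) -> None:
      try:
          with open(LOG, "rb") as f:
              prev_hash = hashlib.sha256(f.readlines()[-1]).hexdigest()
      except (FileNotFoundError, IndexError):
          prev_hash = "0" * 64              # genesis entry
      entry = {"ts": datetime.now(timezone.utc).isoformat(),
               "prev": prev_hash, "text": text}
      with open(LOG, "a", encoding="utf-8") as f:
          f.write(json.dumps(entry) + "\n")

  append_entry("13:47:12Z CORRECTION: IP at 13:42:08Z was 10.0.0.42, not 10.0.0.24")

Verification is the same walk in reverse: recompute each line's hash and compare it with the next entry's prev field.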

The 2019 Capital One breach is the published case study where documentation quality won the day. The bank produced for the US Office of the Comptroller of the Currency a precise UTC timeline of when the suspicious access pattern was first noticed (an external researcher's email, archived with full headers), when the affected bucket was first restricted, when each cloud key was rotated, when each affected customer cohort was identified, and when each external notification was sent. The regulator still fined the bank — eighty million US dollars — but the documentation itself was treated as evidence of a mature response, and that mattered for the size of the fine.

Lesson 5 Roles and responsibilities to avoid evidence contamination

The single most common cause of evidence contamination during the first hour is not malice and not incompetence — it is enthusiasm. Three people who all want to help reach for the same machine. The desktop technician opens the user's mailbox to "see what came in" and overwrites the access timestamp on the malicious attachment. The system administrator restarts the suspect server and destroys running malware in memory. The security analyst browses to the attacker's command-and-control domain from the corporate network and tips off the adversary. None did anything wrong by their own job description. They simply did not coordinate.

The classic six-role structure derived from NIST SP 800-61 Rev. 2 and the SANS Incident Handler's Handbook:

  • Incident Commander (IC). The single decision-maker. Authorises actions, manages timelines, protects the team's focus. The IC decides when to pull the network cable and when to escalate to law enforcement. Without an IC, the response degenerates into a committee — the worst kind of response.
  • First Responder / Forensic Analyst. The hands at the keyboard. Runs the volatile capture script, takes the memory image, seals the evidence bag. Their discipline is to follow the playbook exactly and resist the urge to investigate before they have collected.
  • Communications Lead. The single voice to the rest of the organisation, customers, and (if applicable) press. Stops technicians from giving inadvertently misleading updates while the investigation is incomplete.
  • Legal & Compliance Liaison. The bridge to the lawyers and external regulators. Watches the GDPR Article 33 / HIPAA Breach Notification Rule / PCI-DSS requirement 12.10 clocks so the technical team does not have to.
  • Subject-Matter Expert (SME). A floating role filled by whoever knows the affected system best — DBA (Database Administrator) for a database breach, network engineer for a routing anomaly, developer for an application-layer compromise. SMEs advise; they do not act independently.
  • Scribe / Documentation Officer. The keeper of the contemporaneous notebook from Lesson 4. In small teams this merges with the first responder; in larger ones a dedicated scribe is invaluable.

The tool that codifies the boundaries is the RACI matrix — every action down one axis, every role across the other, each cell filled with one of four letters (Responsible, Accountable, Consulted, Informed). Print it on A3, laminate it, and stick it to the wall of the war room. When the question "who restarts the server?" is asked at 03:14, the answer is not a debate; it is in the matrix.

The 2017 Equifax breach is the textbook role-confusion case. The published US House of Representatives report reads, in places, like a checklist of role failures: no clearly-named Incident Commander when the breach was first detected internally, decisions taken by committee, the website set up for affected consumers was hosted on a domain not owned by Equifax (making it indistinguishable from a phishing site — a Communications Lead with authority would have rejected that), and the Scribe role was effectively absent so the Congressional report had to assemble the timeline from email archives instead of a contemporaneous notebook. Equifax ultimately settled with the US Federal Trade Commission and a coalition of state attorneys general for approximately seven hundred million US dollars.


Chapter 4

Malware Detection, Analysis & Containment

Lesson 1 The six malware families

Malware research has many sub-categories, but for incident handling six core families cover almost everything. Memorise them — they appear on every certification exam and every job interview:

  • Virus. Needs a host file to live; attaches to an executable, document, or script and runs when the user opens the host. A virus on a USB stick that nobody plugs in is harmless.
  • Worm. Self-propagating; finds vulnerable services on the network and copies itself across with no user interaction. WannaCry crossed continents in hours because it was a worm.
  • Ransomware. Encrypts files (or whole drives) and demands payment for the decryption key. Modern ransomware is usually a double-extortion game — pay or we publish your data. LockBit, BlackCat (ALPHV), Royal — same technical mechanism (encryption), same business model (extortion).
  • Trojan. Pretends to be useful software. The user installs it on purpose because it looks like a free Photoshop, a "system optimiser", or a game crack. Once running, the trojan opens a backdoor.
  • Rootkit. Designed to hide. Modifies the operating system — sometimes the kernel itself — so infected processes, files, and network connections become invisible to standard tools like ps, Task Manager, or netstat. Rarely comes alone; usually brings spyware or backdoors with it.
  • Spyware. Silently collects information — keystrokes, screenshots, browser history, saved passwords — and ships it to the attacker. The most famous modern example is Pegasus, the NSO Group commercial surveillance product that targeted journalists and activists.

The lines blur in real malware: WannaCry was a worm that delivered ransomware. Emotet started as a banking trojan and finished its life as a malware-loader-as-a-service. NotPetya looked like ransomware but was a wiper — destruction was the goal and the ransom note was theatre. When you classify, classify the primary behaviour right now.

Lesson 2 Recognising Indicators of Compromise (IoCs)

An IoC (Indicator of Compromise) is any forensic artefact observed on a network or in an operating system that, with high confidence, indicates an intrusion has occurred. The phrase was popularised in the early 2010s by Mandiant (now part of Google Cloud), most visibly through their 2013 report on the APT1 (Advanced Persistent Threat 1) group. Before that, intrusion detection was mostly signature-based and event-by-event; IoC thinking turned it into pattern-of-life thinking.

Five categories of IoC:

  • File-based. Cryptographic hashes (MD5 — Message Digest 5 — has had known collisions since 2004; SHA-1 — Secure Hash Algorithm 1 — was broken in 2017 by the SHAttered attack from Google and CWI Amsterdam; SHA-256 is the modern standard), file names and paths (svchost.exe in C:\Users\Public\ is suspicious because the real one lives in C:\Windows\System32\), file size and timestamps.
  • Network-based. IP addresses (run them through AbuseIPDB and VirusTotal), domain names (random-looking domains like xkrjvbsiq.duckdns.org are often DGA — Domain Generation Algorithm — C2/Command-and-Control), URLs, TLS (Transport Layer Security) fingerprints (JA3 and JA3S hash the way a client/server negotiates TLS).
  • Host-based. Registry keys (HKCU\Software\Microsoft\Windows\CurrentVersion\Run is the classic Windows persistence spot), scheduled tasks, services, mutexes (the mutex check WannaCry performed before encrypting is a famous example).
  • Behavioural (TTPs) — Tactics, Techniques, and Procedures, described in the MITRE ATT&CK framework. Examples: T1059.001 (PowerShell execution), T1486 (data encrypted for impact, i.e. ransomware), T1543.003 (Windows service persistence).
  • IoA (Indicator of Attack). A more modern term popularised by CrowdStrike. IoCs say what attacked you (static fingerprint); IoAs say how (intent and behaviour). You want both.

The single most quoted concept in IoC work is David Bianco's Pyramid of Pain (2013), which ranks IoCs by how much pain it costs the attacker to change them. From easiest (and most worthless from a defender's standpoint) to hardest:

  • Hash values — trivial to change (recompile).
  • IP addresses — trivial (VPN, Tor).
  • Domain names — easy (register a new one).
  • Network/host artefacts — annoying.
  • Tools — hard (development cost).
  • TTPs — very hard.

Implication for your career: signature-based blocking on hashes and IPs is necessary but cheap to defeat. Behavioural detection lasts months instead of hours. The senior SOC analyst is the one who writes the behavioural rule.

In practical sweep work, three independent IoCs from different categories pointing at the same target is confirmation enough — escalate. One IoC in isolation might be a false positive. Three is not. A code sketch of this heuristic follows.
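
A sketch of that heuristic applied to sweep output (hosts, categories, and indicators are illustrative):

  from collections import defaultdict

  # sweep findings: (host, ioc_category, indicator)
  findings = [
      ("WS-0042", "file", "sha256 e3b0c442..."),
      ("WS-0042", "network", "xkrjvbsiq.duckdns.org"),
      ("WS-0042", "host", r"HKCU\...\Run\updater"),
      ("WS-0107", "network", "xkrjvbsiq.duckdns.org"),
  ]

  categories_per_host = defaultdict(set)
  for host, category, _ in findings:
      categories_per_host[host].add(category)

  for host, cats in sorted(categories_per_host.items()):
      verdict = "ESCALATE" if len(cats) >= 3 else "monitor"
      print(f"{host}: {len(cats)} independent IoC categories -> {verdict}")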

Lesson 3 Containment strategies

The golden rule, written in capitals because it is the rule everything else builds on:

CONTAIN, DO NOT POWER OFF.

Pulling the plug on an infected host feels right, is fast, is decisive — and is one of the worst things you can do. You destroy RAM (which holds running processes, network connections, decryption keys, injected code, cached SSO — Single Sign-On — credentials). You break the timeline. You teach the malware (modern malware detects shutdown signals and triggers anti-analysis or wiper code paths). You cannot easily un-power-off — once it is off, the only way back is a reboot, and reboot is exactly when persistence mechanisms re-engage.

Instead, isolate the host while keeping it running. Network segmentation, EDR "contain device" buttons, firewall rules on pfSense or OPNsense — all of them keep memory and processes alive while cutting off the attacker.

NIST splits containment into three time horizons:

  • Short-term containment (minutes). The fire is burning. Stop the spread NOW. Disconnect the host from the network at the NIC (Network Interface Controller) level rather than cutting power, kill the malicious process, block the C2 domain at the DNS resolver, isolate at the EDR.
  • Long-term containment (hours to days). Keep the attacker busy while planning a clean eradication. Allow-list legitimate processes only, redirect attacker traffic to a sinkhole, deploy honeytokens to detect lateral movement.
  • Sustained containment (days to weeks). Maintain a quarantine zone for forensic analysis while business operations continue elsewhere.

Crown-jewel protection matters: when a malware outbreak begins, you protect the high-value assets first — domain controllers, finance systems, anything with regulated data — even if the active fire is somewhere else. Attackers do not stay where they landed; they move toward value.

Lesson 4 Communication and escalation during a malware outbreak

Containment buys time. Communication wins or loses the rest of the game. The most technically perfect response can still cost the company its executives, its market cap, and heavy regulatory penalties if the comms are clumsy.

The picture: 02:14, ransomware on a payments server. Wrong answer — post in the corporate Slack #incidents channel, because if the attacker has been inside for a week (and average dwell time is roughly two to three weeks depending on the report), they may be reading that channel and will accelerate before you finish typing. Right answer — pick up your work phone, call the IR lead's mobile (a number saved before this happened), and read the situation aloud in plain language. Out-of-band communication.

Communication runs along five axes — WHO (audience: internal-technical, internal-leadership, internal-support, external), WHAT (message structure), WHEN (cadence by severity), HOW (channel ranked by integrity), and WHY (regulatory clock).

The SBAR pattern, borrowed from emergency medicine, works perfectly for an incident message:

  • S — Situation (one sentence): "At 02:14 we detected ransomware on a payments server."
  • B — Background (two sentences): "The server processes 8% of transaction volume. Initial vector appears to be a compromised VPN credential."
  • A — Assessment (two sentences): "We have isolated the host. We do not yet know whether other hosts are affected. Forensic image in progress."
  • R — Recommendation/Request (one sentence): "I need you on a call in five minutes; I am calling Legal next."

That format works for a pager, an email, a phone call, and a board update.
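
A sketch of SBAR as a reusable template — the kind of pre-written message template Chapter 2 recommends preparing in peacetime (wording illustrative):

  def sbar(situation: str, background: str, assessment: str, request: str) -> str:
      return "\n".join([
          f"S - Situation: {situation}",
          f"B - Background: {background}",
          f"A - Assessment: {assessment}",
          f"R - Request: {request}",
      ])

  print(sbar(
      situation="02:14 UTC - ransomware detected on a payments server.",
      background="Server carries 8% of transaction volume; initial vector "
                 "appears to be a compromised VPN credential.",
      assessment="Host isolated; other hosts unconfirmed; forensic image in progress.",
      request="Join the bridge in five minutes; Legal is my next call.",
  ))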

Channels ranked by integrity (highest first): face-to-face → personal mobile voice call → encrypted out-of-band messenger (a dedicated Signal group set up in peacetime) → personal email → corporate phone (VoIP) → corporate Slack/Teams → corporate email. Once an incident is confirmed, assume corporate channels are compromised and discuss IoCs only in a separate private war-room channel with restricted membership.

The regulatory clock summary, by region: GDPR — 72 hours from awareness; UK GDPR + Data Protection Act 2018 (enforced by ICO — Information Commissioner's Office) — 72 hours; HIPAA Breach Notification Rule — 60 days to individuals, "without unreasonable delay"; PCI-DSS — typically 24 hours to your acquirer; PIPEDA (Canada — Personal Information Protection and Electronic Documents Act) — "as soon as feasible" after determining real risk of significant harm; NIS2 — 24-hour early warning, 72-hour full notification. The key skill is not memorising the laws; it is knowing that legal counsel is one of your first calls, not your last.

Lesson 5 Real-world case studies (the four landmark malware events)
  • WannaCry (May 2017). A worm + ransomware combination linked by the US Treasury and UK NCSC (National Cyber Security Centre) to North Korean actor Lazarus Group. Vector: SMBv1 (Server Message Block v1) RCE (Remote Code Execution) via the EternalBlue exploit, originally NSA (US National Security Agency) code leaked by the Shadow Brokers in April 2017. Microsoft had patched it (MS17-010) two months before the outbreak. Within 24 hours WannaCry infected over 200,000 systems in 150 countries; the UK NHS (National Health Service) was hit hardest in the public eye. The famous kill switch — researcher Marcus Hutchins registered the long random domain the malware checked — stopped the spread within hours. Lesson: patch management cadence is not optional; segment networks; offline backups beat ransom payments.
  • NotPetya (June 2017). A wiper disguised as ransomware, attributed by US and UK statements to Russian military intelligence (GRU). Vector: a compromised software update for M.E.Doc, a Ukrainian tax-accounting application; from there the malware spread via EternalBlue and stolen credentials. Total damage estimated at roughly $10 billion — the most costly cyberattack in history. The "ransom note" demanded $300 in Bitcoin to a single address, but the encryption was non-recoverable — paying changed nothing. Maersk's domain controllers were all wiped; the company was saved by a single domain controller in Ghana that had been offline due to a power outage during the attack, flown back to the UK to rebuild identity infrastructure (Andy Greenberg's WIRED article tells the story). Lesson: validate vendor update integrity not just origin; treat ransomware alerts as possibly destructive; geographic and offline backup diversity matters; read your cyber-insurance war-exclusion clause.
  • Colonial Pipeline (May 2021). Ransomware (DarkSide). Vector: a compromised VPN account — single password, no MFA, password had appeared in a credential dump. The pipeline operator shut down a 5,500-mile fuel pipeline as a precaution even though OT (Operational Technology) was not directly affected; six days of outage and fuel panic across the US East Coast. Colonial paid roughly $4.4 million in ransom; the FBI later recovered approximately $2.3 million by tracing the cryptocurrency wallet. Lesson: MFA on every remote-access point, no exceptions; decommission unused VPN accounts immediately; OT/IT segmentation; engage law enforcement early.
  • SolarWinds Sunburst (December 2020). Trojanised software supply chain attributed to Russian SVR (foreign intelligence) per a US joint statement. Vector: malicious code inserted into the SolarWinds Orion build pipeline; approximately 18,000 organisations downloaded the trojanised version, with a much smaller targeted subset including FireEye (now Trellix/Mandiant) and parts of the US federal government. FireEye discovered the campaign while investigating their own intrusion. Lesson: detect anomalous behaviour, not just bad signatures (the trojanised binary was perfectly signed); secure the build pipeline like production; egress monitoring catches the C2 traffic eventually; frameworks like SLSA (Supply-chain Levels for Software Artefacts) address the structural issue.

Chapter 5

Email Security Incidents (Phishing & Spam)

Recap Why this matters

The Verizon Data Breach Investigations Report has been telling us for over a decade that the same root cause recurs in real-world breaches year after year: someone clicked something in an email. So we stop treating phishing as "the awareness team's problem" and start treating it as the technical, multi-stage attack chain that it is.

Lesson 1 Phishing, spear-phishing, whaling, and BEC

In casual conversation people use "phishing" for everything. In an incident report you cannot — the four terms describe genuinely different attacks with different victim profiles, detection signatures, and financial impacts.

  • Phishing (broad-net). Industrial-scale; the same email or small set of templates sent to thousands or millions of recipients hoping for a 0.1% click-through. Generic greeting ("Dear customer"), urgency language, mismatched display name and underlying address. The Anti-Phishing Working Group (APWG) tracks volumes quarterly. Detection is usually loud — bad grammar, suspicious sender domain — and modern email gateways catch the obvious ones.
  • Spear-phishing (personalised). Phishing with reconnaissance. The attacker has spent time on you specifically — your LinkedIn profile, your company's website, maybe a conference where you spoke. The email is addressed to you by name, references a project you actually work on, and might come from what looks like a colleague's or supplier's address. Volume is low (sometimes a handful in one company), success rate is an order of magnitude higher than untargeted phishing. CISA publishes alerts on advanced campaigns; many state-aligned actors run these almost exclusively.
  • Whaling (executive-targeted). Spear-phishing aimed at "big fish" — CEO, CFO, board members, Domain Administrators, cloud-account owners. Lures look like board communications, legal notices, or merger paperwork. The attacker has often profiled the target for weeks. The payoff: large wire transfer, crown-jewel data, or a foothold for trivial lateral movement. Whaling is a subset of spear-phishing distinguished by victim seniority.
  • Business Email Compromise (BEC). The category that confuses students most. Often no malicious attachment and no malicious link at all. The attacker either compromises a legitimate corporate mailbox (via earlier phishing or credential-stuffing) or carefully spoofs one, then sends a perfectly worded, perfectly timed email asking finance to change a supplier's bank account, a junior employee to buy gift cards, or IT to reset a password. The FBI Internet Crime Complaint Center (IC3) consistently ranks BEC as the single most financially damaging cybercrime category — typically billions of dollars in reported losses each year, far more than ransomware in raw money terms. There is no malware for an EDR tool to catch.

Lesson 2 The three-stage model: lure, hook, payload

Every phishing attack — from the laziest mass-mail to the most elegant BEC — decomposes into three sequential stages. Memorising them pays off in incident triage because they map cleanly onto which artefact you collect, which control failed, and which mitigation you apply.

  • Lure construction (the artistry phase). The content the victim sees: subject line, sender name, body text, visual design, call-to-action. The attacker is doing creative writing. Good lures hit one or more of the six classic levers of social engineering, adapted from Robert Cialdini's influence research (cialdini.com): authority ("from the IT department"), urgency ("expires in 24 hours"), scarcity ("only three slots remain"), familiarity/liking (copying a colleague's tone, name-dropping a real project), social proof ("your colleague Juan already approved this"), reciprocity ("we helped you last quarter"). Sophisticated lures stack multiple levers — almost no legitimate corporate email applies three at once.
  • Hook delivery (the logistics phase). How the lure reaches the victim's screen. Channels: standard email over SMTP (Simple Mail Transfer Protocol), spoofed sender domain (which SPF/DKIM/DMARC are designed to stop — Lesson 3), look-alike domain (typosquatting like paypa1.com with a numeric "1", homograph attacks using Cyrillic characters that look Latin, sub-domain trickery like paypal.com.attacker.io), compromised legitimate mailbox, and adjacent channels (smishing — SMS phishing; vishing — voice; quishing — QR-code; chat-platform phishing on Slack, Teams, WhatsApp). The defender's job during this phase is to make hook delivery fail before it reaches the user.
  • Payload execution (the conversion phase). Whatever the attacker actually wanted. Most common payloads: credential harvesting (user clicks a link, lands on a fake login page, types username and password into the attacker's form); malware execution (user opens an attachment — maliciously crafted Office document, ISO file, password-protected ZIP, executable disguised as a PDF); drive-by browser exploit (rare today but still seen); direct human action (the BEC payload — the user just does what the email asked: wires money, changes a password, forwards a confidential document); token/session theft (increasingly important — the attacker harvests not just the password but the post-authentication session cookie or an OAuth token, bypassing MFA).

The three-stage model maps roughly onto the Lockheed Martin Cyber Kill Chain (Reconnaissance → Delivery → Exploitation/Installation/Actions on Objectives) and the MITRE ATT&CK framework. When you write up a phishing incident, you fill in details for those three stages — a clean report structure.

Lesson 3 Email authentication: SPF, DKIM, DMARC

When email was designed in the 1970s, nobody verified the sender. SMTP (defined in RFC 5321) trusts the sender to tell the truth about who they are. The "From:" line is structurally just text the sender wrote. The internet has been retroactively bolting on authentication ever since.

  • SPF (Sender Policy Framework, RFC 7208). Answers: is the server that just connected allowed to send mail for the domain it claims to be from? The owner of cyber.soho.example publishes a DNS (Domain Name System) TXT record listing every IP authorised to send for that domain — for example v=spf1 ip4:203.0.113.0/24 include:_spf.google.com -all (the -all is a "hard fail" — anything not on the list is asserted not to be us). What SPF does not protect against: an attacker who has compromised your real mail server, or one using a look-alike domain (a different domain entirely, so SPF does not even apply). Subtle gotcha: SPF breaks on email forwarding — the forwarding server appears as the sender to the final destination.
  • DKIM (DomainKeys Identified Mail, RFC 6376). Answers: has this message been tampered with in transit, and was it signed by a server holding the domain's private key? The domain owner publishes a public cryptographic key in DNS under a "selector" name like default._domainkey.cyber.soho.example. The corresponding private key sits on the sending mail server. When the sending server emits an email, it signs selected headers and the body and attaches a DKIM-Signature: header. The receiver fetches the public key and verifies. DKIM survives forwarding (the signature stays attached) but breaks if a mailing list modifies the email by adding a footer or rewriting the subject.
  • DMARC (Domain-based Message Authentication, Reporting & Conformance, RFC 7489; consortium home at dmarc.org). The policy layer on top. Answers: what should the receiver do when SPF and DKIM both fail, and what should they tell us about it after the fact? The owner publishes _dmarc.cyber.soho.example TXT v=DMARC1; p=reject; rua=mailto:dmarc@cyber.soho.example; pct=100. The p= tag is the policy — none (monitor only), quarantine (deliver to spam), or reject (refuse outright). The rua= tag points to where aggregate reports go — daily summaries of every IP claiming to be your domain. DMARC also enforces alignment — the SPF or DKIM domain must match the visible "From:" domain, closing the loophole where an attacker passes SPF/DKIM for some other domain they control while showing your domain in the From line.

A small consultancy with no SPF, no DKIM, no DMARC is structurally spoofable — an attacker on a $5-a-month VPS (Virtual Private Server) can send a perfect-looking invoice email purporting to be from accounts@mapleleafcode.example and the receiver has nothing to evaluate against. After publishing the three records with p=reject, the same attack returns 550 5.7.1 DMARC policy violation at the SMTP layer and never reaches the inbox. Cost to the company: roughly an hour of DNS configuration. Value: every spoof attempt against the domain now bounces at the receiver's edge.
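
A quick way to audit a domain's posture is to query the three records directly. A minimal sketch, assuming the third-party dnspython package and reusing the documentation domain cyber.soho.example from above (substitute a real domain to get live answers):

import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list[str]:
    # Return every TXT string published at `name`, or [] if none exists.
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "cyber.soho.example"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
dkim = txt_records(f"default._domainkey.{domain}")  # DKIM needs a known selector

print("SPF:  ", spf or "missing: domain is trivially spoofable")
print("DMARC:", dmarc or "missing: receivers have no policy to enforce")
print("DKIM: ", dkim or "no key at selector 'default' (selector may differ)")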

Lesson 4 User-awareness training

A magic email gateway that blocked 99.9% of phishing emails sounds amazing, until you do the maths: a 5,000-employee company processes roughly 91 million emails a year, and if even one message in a hundred is phishing, the 0.1% that slips past the filter is still around 900 phishing emails reaching inboxes annually — and it only takes one click. The only way to bring residual risk down further is to train the humans. The cybersecurity industry's saying — "the user is the last line of defence — and also the first" — captures it: the user is first because they receive the lure, last because if every technical control has failed, only the user clicking or not clicking remains.

A bad awareness program is the one most companies still run: a 25-minute mandatory video once a year, multiple-choice quiz, certificate emailed to HR (Human Resources), checkbox ticked. Compliance achieved, behaviour unchanged.

A good awareness program (aligned with CISA's awareness materials and NIST SP 800-50 on building information-technology security awareness and training programmes) has these characteristics:

  • Continuous, not annual. Drip-feed — short 2-4 minute micro-lessons monthly, not 25-minute marathons.
  • Role-tailored. Finance lessons focus on BEC and invoice fraud. Engineer lessons focus on credential phishing and source-code-repository tokens. Generic training trains nobody well.
  • Includes simulation. Periodic simulated phishing campaigns; clickers receive an immediate "this was a simulation" landing page; reporters get a quick thank-you. The point is never to punish the click — it is to find the awareness gap.
  • Measurable. Track the report rate (how many people report a real or simulated phishing email) at least as carefully as the click rate. Many programs obsess over click rate and miss that report rate is more actionable — clicks happen anyway, but a high report rate gives early warning of campaigns in flight.
  • Blameless. Public shaming for clicks produces a culture where users hide mistakes, which is catastrophic. Foster the opposite culture: "if you click and tell us within five minutes, you're a hero."
  • Updated to current threats. A 2026 program that still uses 2019 lures (Nigerian princes, lottery winnings) is a museum exhibit.

What good training is not: not punishment, not a substitute for technical controls (SPF/DKIM/DMARC, MFA, email gateways, EDR all stay in place — awareness is a layer, not a replacement), not a one-time fix, and not just for non-technical staff (engineers and IT staff fall for phishing too, and their access tends to be more dangerous).

Lesson 5 Email-specific incident-response procedure

Generic IH&R applies, but email incidents have specific quirks: the artefact lives in cloud mailboxes (Microsoft 365, Google Workspace, Exchange Online) and is collected via API; the blast radius is everyone (a campaign rarely targets just one person — by the time one user reports, the same email is in 50 other inboxes); the attacker may already have the credentials and may notice when you start pulling logs; MFA bypass is now common (Adversary-in-the-Middle / AITM kits harvest the post-authentication session cookie, so password reset alone is insufficient — you must revoke active sessions); and the supply-chain effect is real (a compromise of a finance mailbox can pivot into BEC against the company's customers within hours).

The email-specific 6-phase playbook:

  • Phase 1 — Preparation. Provision audit access (a "break-glass" role with read access to the email tenant's audit logs — Microsoft 365 Unified Audit Log, Google Workspace Audit Logs — message-trace tools, in-place eDiscovery). Know your retention windows (M365 default unified audit log is 180 days, longer with E5 licensing; Google Workspace audit logs typically 6 months). Pre-write one-page playbooks for "user reports phishing", "credentials confirmed phished", "BEC suspected". Set up the report channel (a "Report Phishing" button in the email client). Run quarterly tabletops.
  • Phase 2 — Detection & Analysis. Trigger is usually one of three: a user reports through the dedicated channel (60-80% of email incidents in mature programmes — which is why Lesson 4 awareness directly enables Lesson 5), the email gateway flags something post-delivery (e.g. a URL that became malicious after delivery), or a downstream alert (anomalous sign-in, EDR detection, suspicious finance request). The analyst's first job is scoping — original sender, subject, content; how many internal recipients; how many opened, clicked, or executed; were any credentials submitted, tokens issued, or wires processed.
  • Phase 3 — Containment. Purge the email from every recipient mailbox using the provider's purge tools (M365 Content Search + Purge, or Google Workspace Investigation Tool). Block the sender at the gateway including originating IPs and look-alike domains. Block malicious URLs at the web proxy and DNS filter. Reset credentials and revoke all active sessions — an attacker with a valid session token does not need the password. Disable any new mailbox rules the attacker created (a common attacker move is an inbox rule that auto-forwards or auto-deletes specific keywords like "invoice"). For BEC, contact the impacted external party on a known channel to halt in-flight wire transfers.
  • Phase 4 — Eradication. Re-image any endpoint where malware execution was confirmed. Audit and remove all OAuth (Open Authorization) consents the user (or the attacker) granted to third-party applications — AITM kits often plant a malicious OAuth app for persistence even after passwords change. Force MFA re-enrollment; if a factor was compromised (e.g. SIM-swap), enrol a stronger factor. Hunt for lateral movement — sign-in logs from the compromised account during the attacker window, what other resources did it touch, were emails sent, did the account access SharePoint, Drive, or source-code repositories.
  • Phase 5 — Recovery. Restore mailbox state after purging — verify legitimate emails were not accidentally caught. Re-enable accounts with fresh MFA. Verify monitoring is in place for the IoCs identified. Re-issue communication if external parties were affected.
  • Phase 6 — Lessons Learned. Within two weeks, blameless retrospective. What was the gap (a missing DMARC policy, a user who had not taken recent training, a finance process that allowed IBAN — International Bank Account Number — changes without verification)? Which detection fired first? What playbook step took longest? What from this incident should be added to the next round of awareness training?

The first 30 minutes cheat-sheet when a user clicks "Report" on something real:

Minute Action
0–2 Acknowledge user's report; tell them not to delete the email yet (you need it as evidence).
2–5 Read the email; capture the original .eml file or message-id and sender headers.
5–10 Run a message trace to find every internal recipient of the same message.
10–15 If credentials submitted: disable account, then revoke active sessions, then reset password (in that order — scripted in the sketch after this table).
15–20 Purge the email from all recipient mailboxes; block sender domain and malicious URLs.
20–25 Check sign-in logs for the affected accounts during the suspected window — note unusual locations or User-Agent strings.
25–30 Open the formal incident ticket; notify the on-call manager; loop in legal/communications if data exposure is suspected.
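
The disable → revoke → reset order in the 10–15 minute row is worth scripting before you need it. A minimal sketch against the Microsoft Graph REST API, assuming an app registration (or break-glass role) whose token carries sufficient directory privileges; the token and user values are placeholders:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "eyJ..."  # placeholder: obtained out-of-band from the break-glass app registration
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
user = "victim@cyber.soho.example"  # hypothetical compromised account

# Step 1: disable the account so no new sign-ins succeed.
requests.patch(f"{GRAPH}/users/{user}", headers=HEADERS,
               json={"accountEnabled": False}).raise_for_status()

# Step 2: revoke every active session and refresh token the attacker may hold.
requests.post(f"{GRAPH}/users/{user}/revokeSignInSessions",
              headers=HEADERS).raise_for_status()

# Step 3: only now reset the password, forcing a change at next sign-in.
requests.patch(f"{GRAPH}/users/{user}", headers=HEADERS,
               json={"passwordProfile": {"password": "long-random-temporary-value",
                                         "forceChangePasswordNextSignIn": True}}
               ).raise_for_status()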

Chapter 6

Web Application Security Incidents

Recap Why this matters

In the 2024 Verizon DBIR, web applications were the single most-targeted asset class for the third year running, accounting for roughly four out of every ten breaches involving an external attacker. Web-app incidents are not exotic zero-day artistry; the same five or six bug classes get re-exploited year after year because they are in the design, in the framework defaults, and in the developer's first job.

Lesson 1 The OWASP Top 10

OWASP (the Open Worldwide Application Security Project) is a non-profit foundation, federated through hundreds of local chapters worldwide including OWASP Toronto and OWASP Boston. Its mission is the boring, important kind: gather what application-security practitioners actually see, distil it into checklists, training, free tools and standards, and give all of it away. The most-cited piece of work they publish is the OWASP Top 10, a ranked list of the ten most critical web-application security risks. The current edition is the OWASP Top 10 — 2021, and the next refresh is in active drafting.

The Top 10 is ten categories of risk, not "the ten worst vulnerabilities" — buckets, not bugs. Each bucket holds many specific CWE (Common Weakness Enumeration) entries maintained by MITRE. It is a floor, not a ceiling — necessary but not sufficient. Treating it as a compliance checklist ("we ticked all ten, we are safe") is a common, expensive mistake.

The 2021 list:

Rank ID Name Plain-language description
1 A01 Broken Access Control A user can reach data or actions they should not be allowed to reach.
2 A02 Cryptographic Failures Sensitive data sent or stored without proper encryption.
3 A03 Injection Untrusted text treated as code by an interpreter — SQL, OS shell, LDAP, NoSQL.
4 A04 Insecure Design The architecture itself is the bug — no patch can fix it after the fact.
5 A05 Security Misconfiguration Default credentials, verbose errors, debug mode in production.
6 A06 Vulnerable & Outdated Components A library or framework with a public CVE entry.
7 A07 Identification & Authentication Failures Weak login, predictable session tokens, no MFA.
8 A08 Software & Data Integrity Failures Updates or dependencies trusted without verifying their origin.
9 A09 Security Logging & Monitoring Failures Either no logs, or logs nobody reads.
10 A10 Server-Side Request Forgery (SSRF) The server fetches a URL the attacker controls and reaches places they cannot.

Three buckets dominate breach reports year after year: A01 Broken Access Control (the boring, devastating one — change ?invoice_id=42 to ?invoice_id=43 and read someone else's invoice; OWASP's own statistics across half a million applications found access-control failures in roughly 55% of tested apps); A03 Injection (SQL injection is genuinely declining as frameworks default to parameterised queries, but XSS — Cross-Site Scripting — is everywhere and command injection is having a renaissance through insecure server-side template engines); A06 Vulnerable & Outdated Components (the Equifax category — running a web framework with a public CVE for which a patch exists).
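
The invoice_id failure mode in the A01 bucket is easiest to see in code. A minimal sketch, assuming Flask and a hypothetical in-memory invoice store; the session lookup stands in for whatever authentication the real application uses:

from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only"  # hypothetical; real apps load this from secure config
INVOICES = {42: {"owner": "alice", "total": 120.0},
            43: {"owner": "bob", "total": 980.0}}  # hypothetical data

@app.route("/api/v1/invoice/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id) or abort(404)
    # The A01 fix is this single server-side ownership check. Without it,
    # changing 42 to 43 in the request reads someone else's invoice.
    if invoice["owner"] != session.get("user"):
        abort(403)
    return invoice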

A nuance to internalise: most common ≠ most damaging. SSRF (A10) is rare in absolute terms — maybe one in ten apps — but in a cloud environment it can pivot into stolen cloud credentials in minutes, as it did in the Capital One breach in 2019.

Lesson 2 Detect and respond to SQLi, XSS, CSRF

  • SQL injection (SQLi). Untrusted text is concatenated into a database query and the database treats part of it as code. The textbook payload ' OR '1'='1' -- turns SELECT * FROM users WHERE name='[input]' into a query that matches every user. Detection signatures include unusual single-quote density in URL parameters, UNION SELECT in request bodies, error responses leaking SQL syntax, and User-Agent strings like sqlmap/1.7. Response: identify whether any requests returned a 200 OK with payload content (those are confirmed reads), block the source IP at the WAF (Web Application Firewall), check the database query log for the rows the attacker actually saw, and escalate. Long-term remediation: parameterised queries (the default API of every database driver — the vulnerable form exists only because programmers reach for string concatenation out of habit). A runnable comparison of the two forms follows this list.
  • XSS (Cross-Site Scripting). Untrusted text is reflected into a page and the browser executes it as JavaScript. Three flavours: reflected (the malicious script is in the URL and runs only when a victim clicks the crafted link), stored (the script is saved in the application's data store — a comment field, a user profile — and runs for every visitor who views it), and DOM-based (the script runs through client-side JavaScript without ever touching the server). Detection: <script> tags in URL parameters, javascript: schemes, encoded variants like %3Cscript%3E. Response: invalidate active sessions, force password resets if cookies were exfiltrated, and remediate by output-encoding all user-supplied content.
  • CSRF (Cross-Site Request Forgery). A victim who is already authenticated to your site visits an attacker-controlled page that triggers a state-changing request (transfer money, change email) using the victim's cookies. Defences: CSRF tokens (a unique unpredictable value tied to the session and submitted with every state-changing request), the SameSite cookie attribute, and re-authentication for sensitive actions.
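
The comparison promised above: a self-contained sketch using Python's built-in sqlite3 module with a hypothetical users table, showing why the concatenated form executes the payload and the parameterised form does not:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1' --"  # the textbook payload from the bullet above

# Vulnerable form: untrusted text concatenated into the query becomes code.
rows = db.execute(f"SELECT * FROM users WHERE name='{user_input}'").fetchall()
print(len(rows))  # 1: the payload matched every user despite a nonsense name

# Safe form: a parameterised query treats the input as data, never as code.
rows = db.execute("SELECT * FROM users WHERE name=?", (user_input,)).fetchall()
print(len(rows))  # 0: no user is literally named "' OR '1'='1' --"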

A WAF (Web Application Firewall) like ModSecurity sits in front of the application and applies rule sets such as the OWASP Core Rule Set (CRS), scoring every request and blocking or alerting when the score crosses a threshold. The five-step detection-and-response loop on a WAF alert: triage (is fourteen hits in five minutes a typo or a campaign?), pivot to the source (reverse DNS, threat-intel reputation), pivot to the target (which endpoint, what payload), confirm whether anything got through (a status=200 response with payload content is the smoking gun), escalate.

Lesson 3 Web log analysis

A line worth writing on the inside of your forehead: "the logs are not the truth; the logs are evidence; the analyst builds the truth from them." Logs are written by the very systems an attacker is trying to fool. A skilled attacker tries to delete, rotate, or poison the log. A novice leaves a confession. Most attackers fall in the middle.

Six layers of logs an incident responder pivots between:

  • WAF / CDN log — every request that hit the perimeter, including blocks. (CDN = Content Delivery Network, e.g. Cloudflare, Akamai, Fastly.)
  • Web server log — Apache access.log, Nginx access.log, IIS (Internet Information Services) *.log. HTTP method, URL, status code, response size, User-Agent.
  • Application log — stack traces, authentication events, business actions like "user X exported the customer list".
  • Database query log — slow queries, query errors, full-query auditing on sensitive tables.
  • Operating-system log — new files, sudo invocations, new cron entries, new users.
  • SIEM correlation layer — pulls from all of the above and lets you pivot across them in seconds.

The Combined Log Format (the NCSA Common Log Format extended with referer and User-Agent fields) is the default for most web servers:

203.0.113.42 - - [04/May/2026:23:11:02 +0200] "GET /api/v1/login?u=admin'+OR+1=1-- HTTP/1.1" 200 412 "-" "sqlmap/1.7"

Read left to right: source IP, identity (almost always -), authenticated user (almost always -), timestamp with timezone offset, request method, request line including the suspicious payload, HTTP status code (200 here means the SQLi worked), response size in bytes, referer, User-Agent (the attacker did not even bother to hide they were using sqlmap).

Status-code patterns worth watching: a sudden burst of 404s from one IP is reconnaissance (the attacker is probing for paths). A string of 500s is the application crashing on malformed input — possibly an exploit attempt. A 200 after a long string of 401s and 403s is a successful brute-force or authentication bypass.
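
Both the field-by-field read and the status-code patterns mechanise well. A minimal parser sketch for the combined format; the regular expression is simplified and real logs have edge cases (IPv6, embedded quotes) it does not handle:

import re
from collections import Counter

LINE = ('203.0.113.42 - - [04/May/2026:23:11:02 +0200] '
        '"GET /api/v1/login?u=admin\'+OR+1=1-- HTTP/1.1" 200 412 "-" "sqlmap/1.7"')

COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

hit = COMBINED.match(LINE).groupdict()
print(hit["ip"], hit["status"], hit["agent"])  # 203.0.113.42 200 sqlmap/1.7

# Pattern hunting: a burst of 404s from a single IP is reconnaissance.
counts = Counter()
for line in [LINE]:  # in practice: for line in open("access.log")
    m = COMBINED.match(line)
    if m:
        counts[(m["ip"], m["status"])] += 1
for (ip, status), n in counts.items():
    if status == "404" and n > 50:
        print(f"recon suspect: {ip} generated {n} 404s")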

Lesson 4 Containment and remediation after a confirmed compromise

Containment is not a single action; it is a sequence. The seven-rung containment ladder, aligned with NIST SP 800-61 Rev. 2:

  • Confirm. Validate the alert is real. False positives waste resources and erode trust.
  • Isolate. Pull the host from the network or move it to a quarantine VLAN. Cut the network before you cut the malware — if the attacker cannot see your cleanup, they cannot adapt.
  • Preserve. Snapshot disk and memory before changing anything else. Evidence first. The ten minutes a snapshot takes saves you weeks later: forensics needs the disk image to identify exactly what was stolen, the legal team needs the memory image because the running process held an unencrypted key, and communications needs to tell customers exactly what data was accessed (without evidence, all you can say is "we cannot rule out").
  • Eradicate. Identify and remove all webshells (a webshell is a small server-side script the attacker uploaded to give themselves remote command execution — common forms in PHP are c99.php, r57.php, <?php system($_GET['c']); ?>); inventory and remove attacker-created accounts (operating-system and application); remove persistence (cron, systemd timers, scheduled tasks, .bashrc injections, modified startup scripts); rotate every credential the host had access to (database passwords, API keys, SSH keys, OAuth tokens). A webshell-hunting sketch follows this list.
  • Patch & Harden. Fix the underlying vulnerability (parameterise the SQL query for A03; update the library for A06; add the missing authorisation check for A01); harden the surrounding configuration (remove verbose error pages, disable directory listing, drop unused services); add the monitoring you wish you had had (if the breach was invisible for a week, you have an A09 gap — close it).
  • Restore. Rebuild from a known-good image. Redeploy clean code. Bring traffic back gradually — 10%, 50%, 100% — watching the WAF logs at each step.
  • Lessons Learned. A blameless post-mortem 2-5 days later. Three concrete runbook updates. Owners and deadlines.
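
The webshell hunt flagged in the Eradicate rung is scriptable. A minimal sketch that walks a web root for recently modified scripts containing classic execution primitives; the path, extensions, and patterns are illustrative rather than exhaustive, and a hit is a lead to review, not a verdict:

import os, re, time

WEB_ROOT = "/var/www"          # adjust to the deployment under investigation
WINDOW = 14 * 24 * 3600        # "modified within the last 14 days"
SUSPECT = re.compile(rb"system\s*\(|eval\s*\(|passthru\s*\(|base64_decode\s*\(")

now = time.time()
for dirpath, _dirs, files in os.walk(WEB_ROOT):
    for name in files:
        if not name.endswith((".php", ".phtml", ".jsp", ".aspx")):
            continue
        path = os.path.join(dirpath, name)
        if now - os.path.getmtime(path) > WINDOW:
            continue  # untouched since before the incident window
        with open(path, "rb") as f:
            if SUSPECT.search(f.read()):
                print(f"review: {path}")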

Under Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), Quebec's Law 25, U.S. state breach-notification regimes (Colorado's 30-day rule, Illinois' PIPA, Massachusetts' 201 CMR 17.00, California's CCPA), and the federal HIPAA Breach Notification Rule for healthcare data, failure to demonstrate due diligence in the response phase can multiply the regulatory and civil exposure. Preservation is not optional bureaucracy; it is your defence. Whichever clock applies to your jurisdiction, it starts at "becoming aware" — or, in the Canadian phrasing, at "determining real risk of significant harm" — not at "fully understanding". As an incident responder you do not own the legal notification, but you do own the clock — and your manager will need an honest, documented "we became aware at this timestamp" entry in your timeline.

Lesson 5 Three landmark web incident case studies
  • Equifax, 2017 — 147 million records. A clean A06 — Vulnerable & Outdated Components case. On 7 March 2017 the Apache Software Foundation published CVE-2017-5638, an RCE flaw in Apache Struts; a patch shipped the same day. Equifax's internal email instructed teams to patch within 48 hours. One critical web application was missed. On 13 May, attackers exploited the unpatched Struts on Equifax's online-disputes portal. They moved laterally for 76 days before an internal SOC analyst noticed unusual traffic on 29 July. Public disclosure on 7 September. The single most important number: 67 days from public CVE to attacker exploit. The attacker's reconnaissance window — from "patch is publicly known" to "this victim has not patched yet" — is now measured in weeks. The fix was a one-line dependency update; the failure was process, not code. Recommended further reading: the US Government Accountability Office's 2018 report, 38 pages, reads like a thriller.
  • British Airways, 2018 — 380,000 cards. An A08 — Software & Data Integrity Failures case, specifically a client-side supply-chain attack known as Magecart. Attackers compromised a third-party JavaScript file that BA's website loaded on the payment page. The injected script intercepted form submissions and sent card data to an attacker-controlled domain that mimicked a legitimate BA subdomain. No SQLi, no traditional vulnerability, no zero-day — there was a checkbox in the supply chain that was not ticked. The UK ICO initially proposed a £183 million fine, later reduced to £20 million in the final penalty notice — one of the first major GDPR enforcement actions in Europe. Modern web pages are not just your code; they are your code plus 40 third-party scripts. Each is a vulnerability you do not control. Subresource Integrity (SRI) — a browser feature that binds a script to a hash so a tampered version refuses to run — is the simplest defence (Mozilla Developer Network's SRI documentation is a 10-minute read; a hash-generation sketch follows this list).
  • Capital One, 2019 — 106 million records. An A10 — Server-Side Request Forgery case, with cloud privilege as the amplifier. A web application accepted a user-supplied URL parameter and fetched it server-side. The attacker (a former AWS — Amazon Web Services — employee) pointed it at the cloud-instance metadata endpoint http://169.254.169.254/, the special address from which a virtual machine reads its own IAM (Identity and Access Management) credentials. The application politely fetched and returned the credentials. The attacker used them to read S3 (Simple Storage Service) buckets full of personal data. SSRF in 2010 was a curiosity; SSRF in a cloud environment in 2026 is a credential-theft tool. Block outbound requests to link-local and private ranges from your web tier, full stop, by default, before you even think about CVEs.
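
Generating the SRI integrity value mentioned in the British Airways entry takes a few lines: the attribute value is "sha384-" followed by the Base64 of the script's SHA-384 digest. A minimal sketch; the file name is hypothetical:

import base64, hashlib

with open("vendor.js", "rb") as f:  # the third-party script you intend to pin
    digest = hashlib.sha384(f.read()).digest()

# Paste the printed value into the <script> tag's integrity="..." attribute;
# the browser then refuses to execute the file if its hash no longer matches.
print("sha384-" + base64.b64encode(digest).decode())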

The pattern across all three: none were prevented by perimeter security (all got through the firewall the legitimate way); all were detected by humans, not tools (tooling buys you the data; humans buy you the conclusion); all could have been contained faster; all triggered regulatory action.

The post-mortem on a small web-app incident has a specific shape — the IR lead opens with three sentences that are not decoration but the technical specification of a blameless review: "we are not here to assign blame; we are here to find what made this possible", "every name in this room is here because they have something to teach us, not because they did something wrong", "the output of this meeting is three runbook changes, not three resignations". Without those sentences, the meeting becomes a witch hunt and the organisation learns nothing.

The numbers you extract from the timeline, every time: TTD (Time to Detect), TTC (Time to Contain), total attacker dwell time. The IBM Cost of a Data Breach Report puts the industry mean time to identify a web-app breach above 200 days for several years running. Single-digit hours is world class; months is normal — and a clear improvement target.



Part 2

Ten Case Studies with Suggested Answers

Each case is fictional but built from real attack patterns. Read the scenario, attempt your own answer in a notebook, then unfold the worked answer to compare.

Case Study 1

Maple Logistics — the midnight ransomware call

Scenario. Maple Logistics Inc. is a 420-employee logistics company headquartered in Toronto, Ontario. On a Tuesday at 02:14, the help-desk analyst on call receives three phone calls in four minutes: warehouse PCs are showing a red skull screen demanding 4 BTC (Bitcoin). The analyst connects via Remote Desktop Protocol (RDP) to one of the PCs and confirms that files have all been renamed with a .locked extension. Roughly 90 minutes earlier, the SIEM had logged an rclone.exe process running on a warehouse PC that copied data to mega.nz. The initial access vector turns out to have been a contractor's VPN account whose MFA had been "temporarily" disabled six months earlier and never re-enabled.

Question. Walk through the IH&R lifecycle as it should have been applied — phase by phase, decision by decision — and identify the regulatory clocks that are now ticking.

Suggested answer.

The first observation is that this is a double-extortion ransomware case, not pure encryption — the rclone.exe to mega.nz activity 90 minutes before the encryption note is data exfiltration. That changes the regulatory profile from "availability incident" to "personal-data breach" the moment you confirm any customer or employee personal data was in the exfiltrated set. Two clocks start: under PIPEDA's Breach of Security Safeguards Regulations the organisation must notify the Office of the Privacy Commissioner of Canada (OPC) and affected individuals "as soon as feasible" after determining a real risk of significant harm; if any customers are Quebec residents, Quebec's Law 25 imposes a parallel obligation to the Commission d'accès à l'information (CAI). And depending on whether any cardholder data was on those warehouse PCs, PCI-DSS notification to the acquiring bank may also apply.

Identification. The analyst verifies the alert is real by checking one host (correct — verification is two-source confirmation, not "I trust the alert"). The severity is escalated to SEV-1 because production is impacted and personal data may be in scope. An incident ID is opened and the IR lead is paged via the out-of-band channel (mobile phone, not corporate Slack).

Containment — short-term. The SOC pushes an EDR isolation policy to every host in the Toronto warehouse VLAN; affected hosts can talk to the EDR console only. Shared drives are dismounted at the file server. Critically, the analyst does not power off any host — RAM may contain decryption-key material and live attacker connections that forensics will need.

Containment — long-term. Firewall egress to mega.nz, anonfiles.com, and a small set of other cloud-storage providers is blocked at the perimeter while the team plans the eradication. The compromised contractor VPN account is disabled but not deleted (preserve the artefact). All other VPN accounts are audited for MFA status, with anything missing MFA disabled within the hour.

Eradication. Forensics confirms the initial vector was the contractor account. The account is deleted; the VPN concentrator is patched to the latest firmware. Persistence on the warehouse PCs is mapped (any scheduled tasks, registry Run keys, and services the malware created) and listed for the wipe-and-rebuild step.

Recovery. Warehouse PCs are re-imaged from the golden image. Data is restored from the immutable backup tier. Services come back in this order: domain controller health check → email → ERP (Enterprise Resource Planning) → warehouse scanning software → finally, the public-facing tracking portal. Each step is monitored for re-infection.

Lessons Learned. Within two weeks: a quarterly MFA audit becomes a process control with a named owner; mass-copy tools like rclone and megacmd get blocked at the application layer; nightly backups are upgraded to hourly snapshots for the file server. Action items are tagged Detection / Prevention / Process / People and tracked to closure.

Documentation discipline. Every action above is logged in UTC with the responder's name in an append-only notebook. Every evidence artefact (the rclone.exe binary, the memory dump from the first warehouse PC, the EDR audit log of containment actions) is hashed with SHA-256 and entered onto a chain-of-custody form. The analyst's own notebook is the alibi when the OPC asks how the "as soon as feasible" obligation was respected.

The lesson under the lesson: this incident is, technically, very ordinary. The discipline that turns it into a four-million-dollar regulatory question instead of a forty-million-dollar one is process — naming the IC, paging the right people, isolating before erasing, documenting every action — none of which can be invented at 02:14 on a Tuesday.

Case Study 2

Boulder EdTech — the Saturday BEC

Scenario. Boulder EdTech is an online-learning startup based in Boulder, Colorado. On Saturday at 11:17, a finance staff member clicks a link in a fake "Microsoft 365 password expiration" email. Twelve minutes later, the attacker is sending invoice-rerouting emails from the finance mailbox cfo-assist@boulder-edtech.com to three of the company's biggest customers. PagerDuty fires for "Suspicious OAuth consent grant + outbound mail volume anomaly". The on-call SOC analyst, just back from a morning run, picks up the page.

Question. Lay out the first 30 minutes of the response, mapped to the email-incident playbook, and explain the order of the three containment actions on the compromised mailbox.

Suggested answer.

The first practical move is to not delete the original phishing email — it is evidence. The analyst captures the .eml and the message-id, then runs a message trace to confirm scope: was this campaign delivered to other Boulder EdTech mailboxes? In a typical phishing campaign, the answer is "yes, several, most of whom did not report".

The severity matrix entry for unauthorised access to a finance mailbox with outbound impersonation is SEV-1. SEV-1 has a 15-minute notification SLA to the on-call IR lead and the CISO; the analyst sends a single pre-written message into the out-of-band IR war-room channel: "SEV-1 declared, BEC, mailbox cfo-assist, IRP activated, joining bridge in 60 seconds."

The order of the three containment actions on the compromised mailbox matters and is non-obvious. The correct order is: (1) disable the account to prevent any further sign-ins, (2) revoke all active sessions to invalidate any tokens the attacker is currently holding (this is the critical step — modern attackers use Adversary-in-the-Middle / AITM kits that harvest the post-authentication session cookie, so resetting the password alone is not sufficient), and only then (3) reset the password and force MFA re-enrollment. Reversing this order — resetting the password while the attacker still holds a valid session token — does nothing useful, because the token was issued before the reset and remains valid until explicitly revoked.

Additional containment steps in the same window: pull the mailbox audit log for the last 72 hours (an attacker often creates an inbox rule that auto-forwards or auto-deletes specific keywords like "invoice" so the legitimate user never sees the suspicious replies — that rule must be deleted). The Finance Director is paged through the Section 5 communications tree because invoice-rerouting emails did go out; the Finance Director immediately calls the three customers on a known phone number (out-of-band, because the email channel is now suspect) and tells them not to pay any invoice received in the last hour.

When the IR lead joins the bridge, the handover is two sentences: "SEV-1 BEC on cfo-assist, sessions killed, rule removed, three customers already notified by Finance." That is the IRP's handover protocol — one sentence for what, one sentence for what's next.

The deeper lesson: BEC is the FBI IC3's most financially damaging cybercrime category. There is often no malware to detect and no signature to block — only a contextual oddness that a trained human and a well-rehearsed playbook catch. Awareness training (Lesson 4 of Chapter 5) is what made the finance assistant report the click rather than hide it.

Case Study 3

Denver Retail — the Friday SQL injection

Scenario. Denver Retail runs a small U.S. e-commerce site out of Denver, Colorado. On Friday at 22:47, the SOC analyst's phone buzzes: ModSecurity has fired with anomaly score ≥ 8 on /api/v1/login, source IP 203.0.113.42, payload username=admin' UNION SELECT username,password FROM users-- &password=x, with 14 hits in the last 5 minutes. The analyst's access log query for that IP and status=200 returns three rows.

Question. Walk through the five-step detection-and-response loop, identify the OWASP category, and describe the seven-rung containment ladder you would now climb.

Suggested answer.

The five steps:

  • Triage. Fourteen hits in five minutes from one IP is not a typo — it is a campaign. Open the ticket and start tracking.
  • Pivot to the source. Reverse DNS on 203.0.113.42 points to an IP block in a country Denver Retail has never sold to. The IP appeared in a DigitalOcean abuse list ten days ago. Score going up.
  • Pivot to the target. /api/v1/login is the customer-facing login endpoint. The payload is a textbook UNION-based SQL injection attempting to read the users table. The User-Agent string in the access log probably reads sqlmap/1.7 because the attacker did not bother to hide that they were using the standard automated SQLi tool.
  • Confirm whether anything got through. Three of fourteen requests returned 200 OK with payload content. Three rows of credentials walked out. Score goes critical.
  • Escalate. Wake the IR lead. Climb the containment ladder.

This is a clean OWASP A03 — Injection case. The technical fix is parameterised queries (the default API of every database driver); the existence of the bug means the developer reached for string concatenation. The remediation will land in the Patch & Harden rung, but it is not the first action.

The seven-rung ladder, with time budgets:

Rung Action Budget
1. Confirm Replicate the payload against the host; confirm the SQLi is exploitable. 10 min
2. Isolate Remove the affected web node from the load-balancer pool. 5 min
3. Preserve Snapshot the disk. Capture memory with LiME (Linux Memory Extractor). Copy /var/log to a forensics share. 25 min
4. Eradicate Identify any uploaded webshells (find /var/www -mtime -14). Rotate every credential the host had access to (DB passwords, API keys, OAuth tokens). 30 min
5. Patch & Harden Replace the vulnerable concatenated query with a parameterised one. Add a server-side allow-list. Disable verbose error pages. 60 min
6. Restore Redeploy from the build pipeline. Bring traffic back at 10%, 50%, 100% while watching the WAF logs. 30 min
7. Lessons Learned Blameless post-mortem 5 days later with the dev team. Three concrete runbook updates. 2 h

Three independent regulatory clocks may now be ticking: under the Colorado Privacy Act and Colorado's breach-notification statute (C.R.S. § 6-1-716), notification of the Colorado Attorney General is required within 30 days for breaches affecting 500+ Colorado residents; PCI-DSS notification to the acquiring bank if any cardholder data could have been touched; and contractual notification to any business customer whose accounts were in the exfiltrated set. The 30-day clock starts at awareness (when the analyst confirmed three 200-OK rows), not at full understanding. The IR lead's notebook records that timestamp.

The deeper lesson: the technical fix here is a one-line code change. The reason this incident exists is a process failure — somewhere in the development lifecycle, code review and automated security testing did not catch a string-concatenation pattern that any half-decent linter flags. The post-mortem action item is not "fix the bug"; it is "add the linter rule and a regression test so the bug cannot return."

Case Study 4

Mapleleaf Code Studio — the GitHub credential phish

Scenario. Jessica is a senior developer at Mapleleaf Code Studio, a 12-person Ottawa, Ontario software consultancy. On Friday at 09:42 she receives an email apparently from GitHub: "Suspicious sign-in detected — verify your account or it will be locked in 24 hours." She clicks. The page looks exactly like the GitHub login. She types her credentials and only afterwards notices the URL is github-secure-login.support, not github.com. She immediately clicks the "Report Phishing" button in Outlook and phones the help desk. A quick check of GitHub sign-in logs shows a successful login from an IP in Bulgaria at 09:38, four minutes after Jessica submitted the form.

Question. Explain why MFA still bought the company time even though the credentials were leaked, and outline the eradication steps you would take on Jessica's GitHub account.

Suggested answer.

The credentials were compromised — the Bulgarian sign-in confirms that. What MFA did was prevent the attacker from completing the post-authentication step that GitHub requires for any sensitive action: pushing code, creating personal access tokens, modifying repository settings. The attacker has the password but does not have Jessica's MFA factor (a TOTP — Time-based One-Time Password — code or a hardware security key), so the session never reaches the privileged state. MFA bought the team a window of hours instead of a window of seconds. That is the entire reason MFA is non-negotiable for any account with code-base access.

Critically, however, MFA does not buy unlimited time. Modern AITM (Adversary-in-the-Middle) phishing kits can harvest the post-authentication session cookie alongside the password, in which case the attacker walks through MFA the same way Jessica does. The fact that Jessica's case was a static fake login page — not an AITM proxy — is what limited the damage. If the attacker had been running an AITM toolkit, the response would need to include immediate session-token revocation across all of Jessica's connected services, not just GitHub.

Eradication on Jessica's GitHub account, in order:

  1. Force-sign-out via the GitHub admin console (this revokes active web sessions immediately).
  2. Reset the password.
  3. Revoke every active session token, every personal access token (PAT), and every SSH key registered against her account. PATs in particular are dangerous because they often have wide repository scopes and do not require MFA when used for API calls.
  4. Audit OAuth applications connected to her account — any unfamiliar third-party app must be revoked. AITM kits frequently plant a malicious OAuth app for persistence even after passwords change.
  5. Audit deploy keys and webhook subscriptions on every repository she has access to.
  6. Re-enroll MFA, ideally upgrading from TOTP to a phishing-resistant FIDO2 (Fast IDentity Online 2) hardware security key. The incident is the perfect moment to drive the upgrade, because the user is motivated and the budget conversation just got much easier.
  7. Scan recent commits across all Mapleleaf Code repositories during the attacker's possible window for unauthorised pushes. None expected here, because MFA blocked the privileged actions, but the audit needs to be performed and documented to prove it.
  8. Jessica's local laptop is checked for malware. In this case the attack was credential phishing, not malware delivery, so this is precautionary; if any second-stage payload is found, the laptop is re-imaged.

Containment beyond Jessica's account: the phishing email is purged from the mailboxes of the three other developers who received it (a message-trace surfaces them). The sender domain github-secure-login.support and the second-stage URL are added to the email-gateway block list and the DNS filter. Multi-factor authentication enforcement on Mapleleaf Code's GitHub organisation is verified — if the org policy did not require MFA, that policy is now updated.

Lessons Learned action item: roll out FIDO2 hardware keys to all developers within 60 days; owner is the CTO; the deadline goes on the calendar. Add a developer-targeted micro-lesson on GitHub-style phishing to the awareness program. Three of four recipients did not report — even though click rate was low (only Jessica clicked), report rate is the metric that matters here, and it needs work.

Case Study 5

Cambridge BioTech — the two-headed press release

Scenario. Cambridge BioTech, a biotechnology firm in Cambridge, Massachusetts, suffers a confirmed data breach at 09:00 Monday: roughly 80,000 customer records exfiltrated. Over the next 75 minutes the following sequence happens. 09:12 — a SOC analyst posts in the company-wide #general Slack channel: "FYI — we're investigating a possible data incident, more to follow." Hundreds of employees see it; some screenshot it. 09:27 — the Head of IT emails the entire IT department: "breach confirmed, please be vigilant." The attacker, still inside the company's Exchange server, reads the email. 09:35 — a junior marketing employee, alarmed, tweets from the corporate Twitter/X account: "We take security seriously and are looking into reports of an issue." No review, no approval. 09:48 — the CEO, who first hears about the incident from his wife after she saw the tweet, calls a journalist he trusts and gives an informal quote: "It's a minor issue, we've contained it." 10:15 — the IR Lead joins the war room.

Question. Identify the four communication mistakes by axis (technical / executive / external) and propose the single protocol rule that would have prevented all four.

Suggested answer.

The four mistakes, mapped to the three communication axes:

  1. Mixing internal-technical with internal-general. The SOC analyst's #general post leaked technical-axis content into general-employee channels. Hundreds of people who did not need to know now know, and some of them have screenshots that may end up on social media within the hour.
  2. Using the compromised email system to discuss the compromise. The Head of IT's email tipped off the attacker that the response was beginning. Modern attackers monitor admin email when they have the access; they accelerate or burn the operation when they see the words "breach", "incident", or "investigation" in their target's inbox.
  3. An untrained employee spoke on the external axis. The marketing employee's tweet is a public statement made without legal review, without communications-lead approval, and without verified facts. It is now the company's first official statement on the breach, and it will be quoted in the regulatory submission.
  4. The CEO spoke publicly without legal review and without verified facts. "Minor" and "contained" are both potentially false at 09:48 — the IR team has not even joined the war room yet. The journalist will publish the quote. The regulator and any future plaintiffs' lawyers will have written evidence that the company described the breach inaccurately while it was still unfolding.

The single protocol rule that would have prevented all four:

No communication about this incident occurs on any channel other than the designated out-of-band IR war-room until the Communications Lead releases pre-approved templates.

That sentence belongs in the IRP at the start of every incident playbook. It enforces three things at once: (a) all incident traffic moves to a private war-room channel where membership is restricted to people with a need to know, (b) the channel is out-of-band, so the corporate Exchange or Slack tenant is not the medium, and (c) the only person authorised to speak externally is the Communications Lead, working from pre-approved message templates. Pre-approved is the operative word — the templates are written on a calm day, vetted by Legal, and stored ready to go. Monday at 09:00, mid-incident, is not the moment to compose a press statement.

The training implication: every employee who has access to the corporate Twitter/X account, every executive who might be tempted to call a journalist, every IT manager who instinctively reaches for "reply-all" — all of them need a 30-second internal protocol drilled into them through the awareness programme: "If something looks like an incident, do not post, do not tweet, do not email — call the security hotline."

Case Study 6

Denver Retail — the insider data theft

Scenario. Months after a containment incident, Denver Retail's leadership decides to pursue a former employee, Emma Torres, under U.S. civil and criminal statutes (including the Computer Fraud and Abuse Act (CFAA), the Defend Trade Secrets Act (DTSA) and Colorado's trade-secret statutes) for the alleged exfiltration of the company's customer database during her last week of employment. The responding analyst, R. Vega, kept a Markdown lab notebook during the original incident. One key entry reads:

2026-03-14 21:47:12Z  [CONTAINMENT]  Author: R.Vega
Action: Isolated host WS-0412 via CrowdStrike Falcon console ("Contain Host").
Authorized by: M.Ruiz (IR Lead, ID badge #IR-4) per ticket INC-2026-0077.
Pre-state: Host online, last-seen 21:46:58Z, logged-in user SAMACC\e.torres (employee ID E-1029).
Post-state: Host isolated at 21:47:10Z per Falcon audit log, entry ID fac-9a4e1b.
Evidence captured before isolation: memory image WS-0412-mem-20260314T214701Z.lime
                                   SHA-256: 5e6d...a91c (verified by Volatility 3.2.0 plugin hash check).
Chain-of-custody form signed at 21:52Z, stored in evidence safe E-04, form #COC-2026-017.
Why: EDR telemetry showed suspect process rclone.exe at 21:43Z targeting share \\FS-01\hr$.

The defence challenges the admissibility of the memory image.

Question. List the elements in this single notebook entry that defend against admissibility challenges, identify which of the five chain-of-custody principles each one addresses, and explain why a "Slack-message" record of the same actions would not survive the same legal scrutiny.

Suggested answer.

Element-by-element mapping to the five principles (Integrity, Authenticity, Continuity, Reproducibility, Minimality):

  • Timestamp in UTC to the second (2026-03-14 21:47:12Z). Addresses Continuity and Authenticity. UTC eliminates daylight-saving and time-zone ambiguity; second-level precision lets the evidence be cross-referenced against external systems (the Falcon audit log, the EDR telemetry).
  • Named author (R.Vega) and named authoriser (M.Ruiz, ID badge #IR-4, ticket INC-2026-0077). Addresses Continuity. The chain has identifiable, accountable custodians at every step; the badge ID and ticket number are the auditable trail.
  • Action described in tool-specific terms (Isolated host WS-0412 via CrowdStrike Falcon console "Contain Host"). Addresses Reproducibility. An independent forensic expert can reproduce the action in their own environment because the tool, the host, and the operation are unambiguously identified.
  • Pre-state and post-state recorded with cross-references (last-seen 21:46:58Z, Falcon audit log entry fac-9a4e1b). Addresses Authenticity. The notebook is not the only source of the action; an external system independently confirms it.
  • Evidence file named with timestamp and host (WS-0412-mem-20260314T214701Z.lime) and SHA-256 hash recorded at the moment of capture (5e6d...a91c). Addresses Integrity. Any future hash check will either match the recorded value (proving the bytes are unchanged) or not match (proving tampering). Tool version is named (Volatility 3.2.0) — Reproducibility. A minimal hashing sketch follows this list.
  • Chain-of-custody form number (COC-2026-017), evidence safe location (E-04), and signing time (21:52Z). Addresses Continuity. The custody hand-off from the responder to the safe is a documented event with a paper trail.
  • Reason for the action (EDR telemetry showed suspect process rclone.exe at 21:43Z targeting share \\FS-01\hr$). Addresses Minimality. The action was not a fishing expedition; it was a targeted response to an observed attacker behaviour, which justifies the scope of the evidence collected.
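
The hash-at-capture step referenced in the fifth bullet costs almost nothing to implement. A minimal sketch that hashes a large evidence file in chunks; the file name is the one from the notebook entry:

import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    # Hash a possibly multi-gigabyte evidence file without loading it whole.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Record the value in the notebook at the moment of capture; any later
# verification either reproduces it exactly or proves the bytes changed.
print(sha256_of("WS-0412-mem-20260314T214701Z.lime"))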

A Slack-message record of the same actions would fail several of these tests:

  • Slack messages can be edited after the fact, and the edit history is not always preserved indefinitely (workspace owners can purge edit history). That breaks Integrity.
  • The Slack server itself is a custodian that the IR team does not control. Whether the messages were preserved unaltered between event time and discovery is a question only Slack's admins can answer, and the chain-of-custody for the Slack thread itself was almost certainly never established. That breaks Continuity.
  • The "evidence" in a Slack message is text the responder typed, not a hash-anchored binary artefact. There is no cryptographic proof that the memory image referenced in the message is the same memory image now produced in court. That breaks Authenticity.
  • The structure is conversational, not action-by-action. Pre-state and post-state are usually missing; tool versions are not noted; the reason for each action is rarely written out. That breaks Reproducibility and Minimality.

An industry comparison referenced in the lecture: a comparable company that kept only Slack messages saw the defence successfully argue that chain of custody of the messages themselves had been broken; the case was dismissed. Same incident category, opposite outcome, different documentation discipline.

The takeaway: this notebook entry is not bureaucracy; it is a defensible artefact. Multiplied by 40 such entries across the whole incident, it produces a record a judge will accept without a second glance — which is exactly what happened here. The defendant settled before trial.

Case Study 7

MediCare Boston — phishing into a healthcare provider

Scenario. MediCare Boston is a private healthcare provider in Massachusetts with around 3,500 patients. On a Wednesday at 14:30, a clinic receptionist clicks a link in an email that purports to be from the company's HR (Human Resources) department, asking her to "review the updated 2026 holiday calendar". She enters her Microsoft 365 credentials on a page that looks identical to the corporate login. By 14:45 the SOC sees an unusual sign-in to her account from an IP geolocated in Eastern Europe. By 15:10, fifty patient records — including diagnoses and prescription histories — have been viewed and downloaded as a CSV (Comma-Separated Values) file from the practice-management portal that her account had access to.

Question. Identify the regulatory regime(s) at play, the notification clocks that have started, and the additional considerations because the breached data is health-related.

Suggested answer.

This breach implicates two regulatory regimes simultaneously:

  • HIPAA Breach Notification Rule. Patient diagnoses and prescription histories are Protected Health Information (PHI). The HIPAA Breach Notification Rule requires notification of affected individuals "without unreasonable delay" and in no case later than 60 calendar days after discovery, notification of the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) within the same 60 days for breaches affecting 500+ individuals (and within an annual log otherwise), and prominent media notice in the affected state for breaches affecting 500+ residents of that state. The 60-day clock starts at 14:45 — the moment the SOC confirmed the unusual sign-in is real — not at 14:30 when the receptionist clicked, and not at 15:30 when the analyst finishes the timeline. The breach involves special-category health information, which raises both the risk threshold for affected-individual notification and the regulator's appetite for enforcement.
  • Massachusetts state law layers on top. Massachusetts has one of the strictest U.S. state data-breach regimes: 201 CMR 17.00 (security standards for personal information of Commonwealth residents) and M.G.L. c. 93H (the breach-notification statute) require notification of the Massachusetts Attorney General and the Office of Consumer Affairs and Business Regulation, as well as to affected residents, "as soon as practicable and without unreasonable delay". If any patients are Canadian residents (cross-border medical referrals are routine in the Boston-area teaching-hospital orbit), PIPEDA requires separate notification to the Office of the Privacy Commissioner of Canada (OPC) "as soon as feasible" after determining a real risk of significant harm.

Additional considerations because the data is health-related:

  • The notification to individuals must contain specific content. Under the HIPAA rule that means a brief description of the breach, the types of information involved, steps individuals should take to protect themselves, what the provider is doing to investigate and mitigate, and contact procedures (for any EU-resident patients, GDPR Article 34 prescribes an equivalent content list). For health data, the "likely consequences" and "steps to take" sections honestly have to address the risk of medical-identity theft, social-engineering follow-on attacks ("we know your diagnosis; pay or we tell your employer"), and even physical safety in some cases.
  • The penalty exposure is high because PHI is involved. HHS Office for Civil Rights' enforcement track record on healthcare breaches is not lenient — civil monetary penalties can reach into the millions of dollars per incident, and OCR-imposed corrective-action plans typically run for several years. The Massachusetts Attorney General has separate authority and has publicly fined healthcare providers under c. 93H for inadequate safeguards.
  • Patients who are minors require additional handling — the parent or legal guardian is the recipient of the notification letter. The IR team has to filter the affected list by date of birth before generating the letters.
  • The clinical record systems are often the secondary victim. If the attacker pivoted from the receptionist's account into shared drives or the practice-management API, the breach scope may extend well beyond the 50 visible records. The incident scope must be re-validated as new evidence arrives.

The IR plan from this point, workstream by workstream:

  • Containment — disable the account; revoke active sessions; reset the password; force MFA re-enrolment with FIDO2 if not already in place; purge the phishing email from all other recipients' mailboxes; block the sender domain at the gateway and the DNS filter.
  • Evidence preservation — export the receptionist's mailbox as it stood at the time of compromise; capture the Microsoft 365 unified audit log entries for the access and download events; image the receptionist's workstation.
  • Legal liaison — the on-call Legal Liaison joins the bridge within 30 minutes; the HIPAA 60-day clock and the Massachusetts notification clock are now their problem too.
  • Communication — the Communications Lead drafts the HHS OCR notification, the Massachusetts Attorney General notification, and the patient-letter template from the pre-approved Section 5 IRP templates; no employee speaks externally until those drafts are released.
  • External partners — the breach-notification law firm on retainer is engaged; if the cyber-insurance policy requires notification within a window, that call goes out today.
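
For the "revoke active sessions" step, Microsoft Graph exposes a revokeSignInSessions action. A minimal sketch, assuming the requests library and an already-acquired bearer token with the appropriate Graph permission (token acquisition and error handling omitted; the user principal name is hypothetical):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_sessions(user_principal_name: str, token: str) -> None:
    """Invalidate the user's refresh tokens and session cookies.

    Note: already-issued access tokens stay valid until they expire
    (typically up to an hour), so this complements disabling the
    account -- it does not replace it.
    """
    resp = requests.post(
        f"{GRAPH}/users/{user_principal_name}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()

# revoke_sessions("receptionist@medicareboston.example", token)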

The deeper lesson: every incident has at least one legal clock. Healthcare incidents have several at once, and the clocks tick from the moment of awareness, not the moment of certainty. The incident handler's job is to handle the technical part fast enough that the legal dimension can be handled on time.

Case Study 8

Atlanta Cloud Services — the Black-Friday DDoS

Scenario. Atlanta Cloud Services hosts the e-commerce back-end for 60 small U.S. retailers based out of Atlanta, Georgia. On Black Friday at 13:02, a volumetric DDoS (Distributed Denial-of-Service) attack takes the platform offline for 47 minutes. Lost revenue for customers is estimated at USD 380,000. Eight days later, the IR lead runs a post-incident review and pulls the metrics: MTTD (Mean Time to Detect) was 2 minutes against a target of <2 minutes, MTTA (Mean Time to Acknowledge) was 6 minutes against a target of <5 minutes, and MTTR (Mean Time to Recover) was 47 minutes against a target of <20 minutes.

Question. Apply the 5 Whys to the MTTR overshoot, identify the root cause (and not just the proximate cause), and propose action items tagged by Detection / Prevention / Process / People bucket.

Suggested answer.

The 5 Whys, condensed:

  • Why did recovery take 47 minutes against a 20-minute target? Because rerouting traffic to the Cloudflare scrubbing centre required manual approval from the Network Director, who was off-shift on a Friday afternoon.
  • Why did it need manual approval? Because the playbook said so.
  • Why does the playbook require manual approval for a procedure that takes seconds when automated? Because a year ago an automation ran the reroute accidentally during business hours and caused a partial outage, so leadership demanded a human in the loop.
  • Why did the automation misfire that time? Because the trigger threshold was set lower than production traffic peaks; the system mistook a normal business-hours traffic surge for an attack and pre-emptively rerouted.
  • Why was the threshold never re-tuned? Because there was no named owner for the runbook's threshold reviews; the original on-call engineer had moved teams, and no one inherited the calendar item.

A shallow analysis stops at the proximate cause — "the manual-approval step delayed the reroute" — or at the adjacent metric miss ("MTTA overshot by 1 minute"). Both are surface symptoms. The root cause is no named owner for runbook threshold reviews, which manifested through an over-cautious manual-approval step that turned a 3-minute technical fix into a 47-minute organisational one.

This is critical to internalise. A blameless post-mortem that stopped at "the Network Director was slow" would have produced an action item like "be faster", which is not actionable. The 5 Whys forces the team past the proximate cause into the structural cause, which has a real fix: appoint an owner for every runbook with a quarterly threshold-review process.

Action items, tagged by bucket:

  • (Process) Appoint an owner for every runbook in the IRP repository. Owner: Head of Site Reliability Engineering. Quarterly threshold-review process documented in a calendar event, due in 30 days.
  • (Detection) Add a Cloudflare-side anomaly detector for volumetric attacks so reroute proposals are pre-validated against an external second-source signal before they hit the manual-approval queue. Owner: Network Lead. 60 days.
  • (People) Extend the network team's on-call rotation to cover weekends and public holidays (previously business days only — Black Friday is a company holiday, which is why the Network Director was off-shift that afternoon). Owner: CISO. 45 days.
  • (Process) Update the runbook so automation can act on SEV-1 DDoS events without manual approval, with a 10-minute auto-revert if a human does not confirm within that window. The auto-revert addresses the original concern that triggered the manual-approval step in the first place. Owner: Head of SRE. 30 days.
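
The guarded-automation pattern in that last action item — act immediately, auto-revert unless a human confirms — is small enough to sketch. A minimal illustration in Python; apply_reroute and revert_reroute are hypothetical stand-ins for whatever API the scrubbing provider actually exposes:

import threading

CONFIRM_WINDOW_SECONDS = 600  # the 10-minute human-confirmation window

def reroute_with_auto_revert(apply_reroute, revert_reroute,
                             confirmed: threading.Event) -> threading.Timer:
    """Apply the mitigation now; undo it automatically if no human confirms.

    `confirmed` is set by the on-call responder (chat command, console
    button). The auto-revert addresses the original misfire concern
    without putting a human approval in front of a SEV-1 mitigation.
    """
    apply_reroute()

    def revert_if_unconfirmed():
        if not confirmed.is_set():
            revert_reroute()

    timer = threading.Timer(CONFIRM_WINDOW_SECONDS, revert_if_unconfirmed)
    timer.daemon = True
    timer.start()
    return timer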

Three months later, the team handles a second volumetric DDoS during a Tuesday lunchtime. MTTR: 8 minutes. One human in the loop, not five. The PIR did that, not the tooling.

The lesson worth carrying out of this case: incident-response metrics are the cheapest, highest-leverage instrument an IR team has. They turn vague feelings ("the response felt slow") into numbers ("MTTR was 47 minutes against a 20-minute target"), and the difference between a proximate cause and a root cause is the difference between an action item that fails to close and one that fundamentally improves the programme.
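
Computing those numbers is deliberately unglamorous — a one-screen script over incident timestamps is enough. A minimal sketch with a hypothetical record matching this case (timestamps in UTC; field names are illustrative):

from datetime import datetime
from statistics import mean

# One record per incident: when it started, when monitoring detected it,
# when a human acknowledged the page, and when service was restored.
incidents = [
    {"started": "2026-11-27T13:02:00", "detected": "2026-11-27T13:04:00",
     "acknowledged": "2026-11-27T13:10:00", "recovered": "2026-11-27T13:49:00"},
    # ... one dict per incident in the review window
]

def minutes(a: str, b: str) -> float:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 60

mttd = mean(minutes(i["started"], i["detected"]) for i in incidents)
mtta = mean(minutes(i["detected"], i["acknowledged"]) for i in incidents)
mttr = mean(minutes(i["started"], i["recovered"]) for i in incidents)

print(f"MTTD {mttd:.0f} min | MTTA {mtta:.0f} min | MTTR {mttr:.0f} min")
# -> MTTD 2 min | MTTA 6 min | MTTR 47 min for the Black-Friday record above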

Case Study 9

Mississauga Logistics — the Canada Post parcel phish

Scenario. Mississauga Logistics is a 200-person logistics firm in a Mississauga, Ontario industrial park. On a Tuesday morning, Mike from accounting clicks the "Report Phishing" button in his Outlook on an email that claims to be from Canada Post. The display name reads "Canada Post — Postes Canada", but the sending domain is notice@canadapost-delivery-notice.com — close enough to fool a tired eye on a busy morning. The subject line reads "[Important] Your parcel is on hold — pay $1.99 CAD to release". The body asks the recipient to click "Pay now" linked to https://canadapost-pay-client.top/. Headers show SPF=fail DKIM=none DMARC=fail. There is no attachment.

Question. Decompose the email into the three-stage phishing model, identify the social-engineering levers in use, and explain how the email reached Mike's inbox at all given that all three authentication checks failed.

Suggested answer.

The three-stage decomposition.

  • Lure construction. The attackers chose Canada Post because almost everyone in Canada has received a Canada Post delivery in the past month, so the email is contextually plausible. They paired a deliberately small payment amount ($1.99 CAD) with the urgency lever ("Important") — small enough to slip under the recipient's risk-thermostat. The visual design probably mimicked Canada Post branding fairly well; the body text was in correct English with a couple of words of French sprinkled in to mirror Canada Post's bilingual communications style (modern translation tools have eliminated the bad-grammar tells of older campaigns). Two social-engineering levers stack here: urgency (the package is being held; act now) and a minor deception lever sometimes called commitment-and-consistency (a tiny payment feels like a low-risk commitment, even though entering card details on the next page is the real ask).
  • Hook delivery. The email arrived via a look-alike domain, canadapost-delivery-notice.com, probably registered only days before the campaign. Note the .top top-level domain (TLD) on the second-stage URL — .top is one of the cheapest TLDs and is widely abused. The real Canada Post uses canadapost.ca and canadapost-postescanada.ca.
  • Payload execution. This was credential harvesting plus card-data theft. The link led to a near-perfect clone of the Canada Post payment page. Had Mike typed his card details, the attackers would have captured them — and possibly an OTP (One-Time Password) too, by relaying the request in real time to the real Canada Post.

Social-engineering levers. Two are obvious in the visible content: urgency ("Important") and the small-amount commitment lever above. There is also an implicit authority lever — the lure relies on the recipient's mental model of Canada Post as an official Crown corporation; the attackers borrow that authority by mimicking the visual identity. The lure quality overall is competent but not premium — a 6 out of 10. A premium spear-phish would have been personalised to Mike by name, would have referenced a real shipment Mississauga Logistics was expecting, and would have used a domain that survives more than a cursory glance.

How did this reach Mike's inbox if SPF, DKIM, and DMARC all failed?

The answer is configuration nuance, and it is the most common reason phishing emails reach inboxes despite authentication failure. Three sub-reasons:

  1. DMARC enforcement requires the domain owner to publish a DMARC policy of p=quarantine or p=reject. If the attacker's look-alike domain canadapost-delivery-notice.com has no DMARC record at all (or has p=none), then "DMARC=fail" reported in the headers may be informational rather than enforced — the receiving server has no instruction from the attacker's domain owner to reject. This is subtle and worth re-reading: DMARC enforcement depends on the sending domain's published policy.
  2. The receiving organisation's email gateway may be configured in quarantine mode rather than reject mode. Many gateways soft-fail DMARC failures into the user's "External" or "Junk" folder rather than blocking outright, and Mike's gateway may even have delivered this to the inbox with only an "external sender" warning banner that he ignored.
  3. Real Canada Post's DMARC policy applies to canadapost.ca, not to canadapost-delivery-notice.com. The look-alike domain is a different domain entirely, so the real Canada Post's authentication setup is irrelevant. SPF, DKIM, and DMARC do not protect against typosquatting domains. The defence against typosquatting is at the email gateway (look-alike-domain detection rules) and at the user (awareness training that teaches users to check domains, not display names).
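
Both checks that do work here — inspecting the sending domain's published DMARC policy, and look-alike-domain detection — are easy to sketch. A minimal illustration, assuming the third-party dnspython package for the DNS lookup; the brand token, similarity threshold, and allow-list are illustrative, not a production rule set:

import difflib
import dns.resolver  # third-party: pip install dnspython

LEGIT = {"canadapost.ca", "canadapost-postescanada.ca"}
BRAND_TOKEN = "canadapost"

def dmarc_policy(domain: str) -> str:
    """Fetch a domain's published DMARC record; absence (or p=none) means
    receivers have no instruction to reject mail that fails authentication."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "no DMARC record published"
    return " ".join(r.to_text().strip('"') for r in answers)

def looks_suspicious(domain: str) -> bool:
    """Crude gateway heuristic for look-alike domains: brand-token
    containment (canadapost-delivery-notice.com) plus close edit-distance
    typos (canadap0st.ca). SPF/DKIM/DMARC will never flag these, because
    the attacker legitimately owns the domain."""
    if domain in LEGIT:
        return False
    if BRAND_TOKEN in domain:
        return True
    return any(difflib.SequenceMatcher(None, domain, legit).ratio() > 0.85
               for legit in LEGIT)

print(looks_suspicious("canadapost-delivery-notice.com"))  # True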

Containment and follow-up. The analyst runs a message trace to count the recipients of the same email — likely many other Mississauga Logistics staff received it. The email is purged from those mailboxes before any of them clicks. The look-alike domain and the payment URL are added to the gateway block list and the DNS filter. The case is logged. If Mike had submitted card details rather than clicking Report, the response would also include calling his bank to flag the card and watching for fraudulent transactions over the next few days.

The deeper lesson: SPF, DKIM, and DMARC defend against spoofing of your own domain. They do not defend against typosquatting, look-alike domains, or any other attack where the malicious mail genuinely originates from an attacker-controlled domain that the attacker has registered legitimately. The defence in those cases is the gateway's heuristics, the user's eye, and the awareness programme.

Case Study 10

Chicago Retail — the silent webshell

Scenario. Chicago Retail is a small Chicago, Illinois retailer running a single web server www01. On Saturday at 04:13, an outsourced SOC calls the on-call IR-trained employee: "we think you have a webshell on www01". The on-call drives to the office, logs into the jump host, and begins working through the seven-rung containment ladder. They find one webshell cmd.php in /var/www/uploads, then a second pix.php they had not been told about, and a new cron entry pointing to a callback script in /tmp. Examining the access log shows the original upload of cmd.php happened the previous Tuesday — eleven days before the SOC's Saturday alert.

Question. Walk through the seven-rung ladder with explicit time budgets, explain why "isolate before you eradicate" matters here specifically, and identify the dwell-time metric that goes on the post-mortem slide.

Suggested answer.

The seven rungs, with budgets and what actually happens at each:

  • Rung 1 — Confirm (10 min). Pull the suspicious URL pattern from the SOC; replicate against the host; confirm the webshell file exists in the web root and matches a known signature.
  • Rung 2 — Isolate (5 min). SSH to the load balancer; remove www01 from the pool. The host stays up — memory and processes intact — but no traffic reaches it. The attacker, if they are watching, sees their webshell stop responding to new requests but does not yet see cleanup activity.
  • Rung 3 — Preserve (25 min). Trigger the platform's "snapshot disk" on the cloud console. Capture a memory dump with LiME. Copy /var/log and /etc to a forensics share.
  • Rung 4 — Eradicate (30 min). Identify all webshells, not just the one the SOC reported. Run find /var/www -mtime -14 -type f to list every file under the web root modified in the last 14 days. Discover pix.php, the second webshell. Find the new cron entry pointing to /tmp/<script> and remove it. Rotate every credential the host had access to: database password, API keys for any cloud service, SSH keys, application tokens.
  • Rung 5 — Patch & Harden (60 min). Find the upload endpoint that allowed cmd.php to be uploaded — it accepted any file extension. Add a server-side allow-list that rejects anything other than the expected image types. Disable PHP execution in the /uploads directory at the web-server level (remove or override the PHP handler for that directory in .htaccess, or an equivalent location rule in the nginx config).
  • Rung 6 — Restore (30 min). Redeploy the patched application from the build pipeline. Restore traffic gradually: 10%, 50%, 100%. Watch the WAF logs at each step for any sign of the attacker probing the patched endpoint.
  • Rung 7 — Lessons Learned (5 days later, 2 h). Blameless post-mortem with the dev team, the ops team, the SOC, and the IT director. Three concrete runbook updates.

Total wall-clock from rung 1 to rung 6: approximately 2.5 hours.

Why "isolate before you eradicate" matters here specifically. A surprising number of well-meaning teams skip rung 2 and rush to rung 4 — "I see a webshell, I delete it, I feel good." Two hours later the attacker drops a new webshell from a foothold in another machine they had quietly already moved to. In this specific case, the team also had no idea about the second webshell pix.php until they got to rung 4 — meaning if they had skipped rung 2, the attacker would have watched the deletion of cmd.php, immediately uploaded cmd2.php from pix.php, and the team would still be where they started. Isolate first so the attacker cannot adapt to the cleanup. Then preserve evidence. Then, and only then, eradicate.

Why preservation matters before eradication. The disk snapshot will let forensics later identify exactly what data the attacker accessed during the eleven-day window. Without it, the team will have to report worst-case to the Illinois Attorney General under the Illinois Personal Information Protection Act (PIPA), which means the legal team must assume that anything reachable from www01's database account was potentially exfiltrated. With the snapshot, forensics can analyse the access log, the database query log, and the webshell's own command history (often recoverable from memory) and produce a defensible "the following 1,200 customer records were accessed" finding. The narrower the finding, the smaller the regulatory exposure and the customer notification scope.
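
The acquisition hash is what ties rung 3 to a courtroom months later. A minimal sketch of the hash-and-record step, echoing Chapter 3's 5W + H rule — the case reference and paths are hypothetical:

import hashlib
from datetime import datetime, timezone

def record_evidence(path: str, who: str, why: str, how: str) -> dict:
    """Hash an acquired artefact and emit a 5W+H chain-of-custody entry."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return {
        "what": path,
        "who": who,
        "when_utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "where": "www01",
        "why": why,
        "how": how,
        "sha256": digest.hexdigest(),
    }

# record_evidence("/forensics/www01-mem.lime", "on-call responder",
#                 "webshell incident (hypothetical case ref)", "LiME memory capture")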

The dwell-time metric. The webshell was uploaded the previous Tuesday and detected on Saturday — eleven days of attacker dwell time. That is the number that goes on the first slide of the post-mortem. The IBM Cost of a Data Breach Report has put the industry mean time to identify a breach above 200 days for several years running. Eleven days is dramatically better than the industry average — but eleven days is still eleven days during which the attacker had remote command execution on a host with database access. The post-mortem action items focus on closing the detection gap: the SOC alert that fired on Saturday should have fired on Tuesday afternoon, and the question to answer is "what would have made that possible?" The likely answers sit squarely in OWASP A09 (Security Logging & Monitoring Failures) territory: file-integrity monitoring on the web root, alerting on new files appearing in /uploads, alerting on new cron entries, alerting on outbound connections from web nodes to non-CDN destinations.
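
The first of those — file-integrity monitoring on the web root — reduces, at its core, to a baseline of hashes plus a diff. A minimal sketch of the idea (a production deployment would use inotify/auditd or the EDR agent rather than polling; the path is illustrative):

import hashlib
from pathlib import Path

WEB_ROOT = Path("/var/www")

def snapshot(root: Path) -> dict:
    """Map every file under the web root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def changed(baseline: dict, current: dict) -> list:
    """New or modified files -- a fresh cmd.php in /uploads lands here."""
    return sorted(path for path, digest in current.items()
                  if baseline.get(path) != digest)

# baseline = snapshot(WEB_ROOT)           # taken at deploy time, stored off-host
# alerts = changed(baseline, snapshot(WEB_ROOT))  # run on a schedule; page on non-empty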

The deeper lesson: a clean response to a webshell incident takes about 2.5 hours of wall-clock work and produces a defensible record. Skipping the isolation rung or the preservation rung saves twenty minutes today and costs weeks of regulatory exposure later. The senior responder does the work in order; the junior responder learns to want to do the work in order.


Part 3

75 Mid-Term Review Questions

Filter by chapter or by question type, search the question bank by keyword, and reveal hints or answers individually as you go.


Chapter 1 — Introduction to Incident Handling & Security Concepts (12 questions)

Q01 MCQ Chapter 1 · Lesson 1
Three of the following are events. Which one is, by the strict definition used in the chapter, an incident?
  • A) A user logs in successfully at 09:00.
  • B) A backup job completes overnight and writes a status line to syslog.
  • C) An attacker uses stolen credentials to read the CEO's mailbox for forty minutes before being noticed.
  • D) A scheduled vulnerability scanner kicks off at 02:00 and finishes at 03:15.
Hint — An incident is an event (or chain of events) that negatively affects Confidentiality, Integrity, or Availability — read the options through that lens.
Answer — C — Reading the CEO's mailbox is a confirmed compromise of confidentiality, which is the threshold that turns an event into an incident. The other three are routine events that produce log lines but do not breach Confidentiality, Integrity, or Availability.
Q02 MSQ — Select all that apply Chapter 1 · Lesson 1
Which of the following correctly describe the relationship between events, alerts, incidents, and breaches? (Select all that apply)
  • A) Every alert is an incident.
  • B) Every incident is an event, but not every event is an incident.
  • C) Every breach is an incident, but not every incident is a breach.
  • D) An alert is an event (or pattern of events) that a detection rule has flagged as worth a human's attention.
Hint — Three of these are correct definitions and one is the classic newbie mistake.
Answer — B, C, D — A is the classic newbie error: most alerts turn out to be benign noise. The other three reproduce the chapter's hierarchy: events ⊃ alerts; incidents ⊂ events; breaches ⊂ incidents (a breach is an incident with confirmed unauthorised disclosure of regulated data).
Q03 MCQ Chapter 1 · Lesson 2
An attacker encrypts the file server during a ransomware incident. Customers cannot place orders for six hours. Which leg(s) of the CIA (Confidentiality, Integrity, Availability) triad are most clearly affected?
  • A) Confidentiality only.
  • B) Availability only.
  • C) Integrity and Availability.
  • D) Confidentiality, Integrity, and Availability all three.
Hint — Encryption flips the file from readable to unreadable. Customers cannot reach the system. Whether the attacker also read the data before encrypting it is a separate question.
Answer — C — Availability is obviously broken (the service is down). Integrity is broken because the files are not in the form the owner wrote them. Pure ransomware that does not exfiltrate first does not breach Confidentiality. Modern double-extortion ransomware does, but the question describes basic encryption.
Q04 MCQ Chapter 1 · Lesson 2
Pick the best definition of risk in incident-management terms.
  • A) Any weakness in a system.
  • B) A piece of code that takes advantage of a weakness.
  • C) The combination of likelihood that a threat will exploit a vulnerability and the impact if it does.
  • D) A circumstance that has the potential to cause harm.
Hint — Vulnerability, exploit, threat, and risk are four different things. The question asks for the one that combines likelihood and impact.
Answer — C — Risk = likelihood × impact, where likelihood is driven by the existence of a vulnerability and an actor willing to exploit it. A is a vulnerability, B is an exploit, D is a threat.
Q05 MSQ — Select all that apply Chapter 1 · Lesson 2
Which of the following are correctly classified as vulnerabilities? (Select all that apply)
  • A) An unpatched Apache HTTP Server with a known Common Vulnerabilities and Exposures (CVE) number.
  • B) A nation-state actor with a budget for zero-days.
  • C) A weak password reuse policy in the human-resources system.
  • D) Working proof-of-concept code that demonstrates remote code execution against the unpatched Apache.
Hint — A weakness is the flaw itself. The actor is a threat. The proof-of-concept is an exploit.
Answer — A, C — The unpatched server and the weak policy are weaknesses in the system. B is a threat (an actor); D is an exploit (the working code that takes advantage of the weakness).
Q06 MCQ Chapter 1 · Lesson 3
An attacker plants a webshell that lets them run arbitrary commands on a public web server. Which of the seven incident families fits best?
  • A) Unauthorised Access.
  • B) Denial-of-Service.
  • C) Malicious Code.
  • D) Improper Usage.
Hint — There is genuine overlap between two families here, but the chapter picks the one that names the artefact most precisely.
Answer — C — A webshell is malicious code persistently installed on a system. Unauthorised Access is a defensible second pick because the attacker is also using the host without permission, but the chapter places webshells, ransomware, and rootkits in the Malicious Code family.
Q07 MSQ — Select all that apply Chapter 1 · Lesson 3
Which of the following are the four classification axes the chapter uses to label an incident? (Select all that apply)
  • A) Severity.
  • B) Scope.
  • C) Cost.
  • D) Category.
  • E) Sensitivity.
Hint — Four axes, not five. Cost is a consequence of severity and scope, not an axis on its own.
Answer — A, B, D, E — The four axes are Severity (how bad), Scope (how wide), Category (what family of incident), and Sensitivity (what kind of data is involved). Cost is computed downstream from these four.
Q08 MCQ Chapter 1 · Lesson 4
Which acronym describes a coordination team that handles incidents for a vendor's products, often with public advisories and a Common Vulnerabilities and Exposures (CVE) workflow, rather than for a single company's internal estate?
  • A) Security Operations Centre (SOC).
  • B) Computer Security Incident Response Team (CSIRT).
  • C) Computer Emergency Response Team (CERT).
  • D) Product Security Incident Response Team (PSIRT).
Hint — Vendor + product + advisories points clearly at one of these.
Answer — D — A PSIRT runs a vendor's product-security advisory programme, coordinates Common Vulnerabilities and Exposures (CVE) numbering for the vendor's own software, and works with researchers under coordinated disclosure. SOC and CSIRT defend an organisation's own estate; CERT is historically the national-team flavour.
Q09 MCQ Chapter 1 · Lesson 4
An organisation has a 24×7 monitoring desk that watches alerts on Endpoint Detection and Response (EDR), the Security Information and Event Management (SIEM) platform, and the Web Application Firewall (WAF). It triages those alerts and escalates. What is this team most accurately called?
  • A) CSIRT.
  • B) SOC.
  • C) PSIRT.
  • D) Threat-Intelligence Team.
Hint — Triage and escalation, not deep forensic ownership of the full incident, are the giveaway.
Answer — B — A SOC is the eyes-and-ears function: triage, escalate, hand-off. The CSIRT typically takes the heavy end of an incident once it has been escalated.
Q10 MCQ Chapter 1 · Lesson 5
Under the EU's General Data Protection Regulation (GDPR) Article 33, what is the maximum window in which a data controller must notify the supervisory authority of a personal-data breach?
  • A) 24 hours.
  • B) 48 hours.
  • C) 72 hours.
  • D) 30 days.
Hint — The number is famous, three digits, and easy to mix up with the NIS2 early-warning clock.
Answer — C — 72 hours from the moment the controller becomes aware of the breach. Article 33 also says "without undue delay". NIS2 requires a 24-hour early warning; HIPAA gives 60 days; SEC Form 8-K gives 4 business days.
Q11 MSQ — Select all that apply Chapter 1 · Lesson 5
Which of the following pairs of (regulator/framework → reporting clock) are correct? (Select all that apply)
  • A) GDPR → 72 hours to the supervisory authority.
  • B) Health Insurance Portability and Accountability Act (HIPAA) → individuals notified within 60 days of discovery; HHS within the same 60 days for breaches affecting 500+ individuals, or via an annual log for smaller breaches.
  • C) U.S. Securities and Exchange Commission (SEC) Form 8-K → 4 business days from determination of materiality.
  • D) Network and Information Security Directive 2 (NIS2) → 30 days early warning.
  • E) Payment Card Industry Data Security Standard (PCI DSS) → no defined breach-notification clock; the contractual obligation is to notify the acquirer/card brand promptly.
Hint — Four of these five are right and one is the classic mix-up of NIS2's 24-hour early-warning rule with another framework's window.
Answer — A, B, C, E — D is wrong: NIS2 requires a 24-hour early warning, then a 72-hour incident notification, then a final report at 1 month. The other four are correctly stated.
Q12 MCQ Chapter 1 · Lesson 5
International Organisation for Standardisation (ISO) standard 27035 covers what specifically?
  • A) An information-security management-system (ISMS) generic baseline.
  • B) A risk-management framework.
  • C) Information security incident management — principles, process, and lessons-learned guidance.
  • D) A privacy-information management system.
Hint — The number 27035 sits inside the 27000 family but has a very specific scope.
Answer — C — ISO/IEC 27035 is the incident-management standard. ISO 27001 is the ISMS baseline (A); ISO 27005 covers risk (B); ISO 27701 covers privacy (D).

Chapter 2 — The Incident Handling & Response (IH&R) Process (13 questions)

Q13 MCQ Chapter 2 · Lesson 1
List the six phases of the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-61 incident-response lifecycle in the correct order.
  • A) Detection → Preparation → Containment → Eradication → Recovery → Lessons Learned.
  • B) Preparation → Detection & Analysis → Containment → Eradication → Recovery → Lessons Learned.
  • C) Preparation → Containment → Detection → Eradication → Lessons Learned → Recovery.
  • D) Preparation → Detection & Analysis → Eradication → Containment → Recovery → Lessons Learned.
Hint — Containment always comes before eradication. Preparation is always first; lessons learned is always last.
Answer — B — The chapter's mnemonic is "PDA-CER-LL": Prepare, Detect & Analyse, Contain, Eradicate, Recover, Lessons-Learned. Eradication before containment (option D) is the rookie mistake.
Q14 MCQ Chapter 2 · Lesson 1
Which of the six NIST phases is most heavily front-loaded before any incident occurs?
  • A) Detection & Analysis.
  • B) Preparation.
  • C) Recovery.
  • D) Lessons Learned.
Hint — Front-loaded means "work that has already been done by the time the alert fires".
Answer — B — Preparation covers the runbooks, the on-call rota, the contact lists, the legal pre-engagements, the tabletop exercises, and the tooling. By definition it happens before the incident — it is the only phase that does.
Q15 MSQ — Select all that apply Chapter 2 · Lesson 2
An Incident Response Plan (IRP) typically contains which of the following sections? (Select all that apply)
  • A) A scope statement (which systems and which incident types are in scope).
  • B) Roles and responsibilities, often expressed as a Responsible-Accountable-Consulted-Informed (RACI) matrix.
  • C) A severity matrix.
  • D) A communications plan with named external parties.
  • E) A copy of every employee's home address.
Hint — Four of these are core IRP sections. One is a privacy nightmare.
Answer — A, B, C, D — The IRP typically also covers the lifecycle phases, escalation triggers, evidence-handling requirements, and an exercise/maintenance schedule. Personal home addresses do not belong in the IRP; on-call contact details (mobile number) do.
Q16 MCQ Chapter 2 · Lesson 2
In a RACI matrix, who can there be exactly one of for any given activity?
  • A) Responsible.
  • B) Accountable.
  • C) Consulted.
  • D) Informed.
Hint — RACI's central design rule is about single-throat-to-choke on accountability.
Answer — B — There can be many people Responsible (doing the work), Consulted (input requested), or Informed (kept in the loop). There must be exactly one person Accountable for each activity, otherwise no one truly is.
Q17 MCQ Chapter 2 · Lesson 3
Which best describes the three axes of communication discussed in the IH&R process chapter?
  • A) Internal (your own staff), External (regulators, customers, press), and Adversarial (the attacker).
  • B) Spoken, Written, and Recorded.
  • C) Pre-incident, In-incident, Post-incident.
  • D) Confidential, Internal, Public.
Hint — Adversarial is the give-away — most students forget that talking to (or being read by) the attacker is itself a third axis.
Answer — A — Internal, External, and Adversarial. The Adversarial axis matters because in a live ransomware case the attacker may be reading your inbox; out-of-band channels exist precisely so they cannot.
Q18 MCQ Chapter 2 · Lesson 3
Why are out-of-band communication channels (Signal, dedicated phones, an offline bridge) used during a serious incident?
  • A) They are usually free.
  • B) They reduce the attacker's ability to read or interfere with response coordination if the primary email or chat platform is itself compromised.
  • C) They are required by GDPR Article 33.
  • D) They are faster than the corporate Wi-Fi.
Hint — The point is to assume the primary channel may already be in the hands of the attacker.
Answer — B — The principle is to plan as if the corporate email server is owned by the attacker. Switching to Signal or a phone bridge denies the attacker visibility into the response.
Q19 MCQ Chapter 2 · Lesson 4
What is the primary purpose of an escalation matrix?
  • A) To allocate budget for security tooling.
  • B) To map an incident's severity to the seniority of the person who must be informed and to the speed at which they must be informed.
  • C) To rank attackers by their threat level.
  • D) To produce annual KPI reports.
Hint — The matrix sits beside the severity scale and tells you, at each severity level, who gets the call and how fast.
Answer — B — The matrix turns severity into pages and phone calls. A Severity 1 might mean "page the on-call IR Manager within 15 minutes and the CISO within 30". A Severity 3 might mean "an email on the next business day".
Q20 MSQ — Select all that apply Chapter 2 · Lesson 4
Which of the following are required during the incident (not just after) to support the post-mortem and the regulator? (Select all that apply)
  • A) Timestamped notes of every decision and the reasoning behind it.
  • B) Hashes of any preserved evidence files.
  • C) A polished marketing slide deck.
  • D) A clear chain-of-custody record for any disk image, memory dump, or log export.
  • E) The names of everyone who joined a bridge call.
Hint — The point of "contemporaneous notes" is that they cannot be reconstructed accurately a week later.
Answer — A, B, D, E — Marketing decks come later (or never). Timestamped notes, evidence hashes, chain-of-custody records, and the bridge attendance roster all need to be captured as the incident unfolds; reconstructing them three days later under regulatory pressure is brutal and unreliable.
Q21 MCQ Chapter 2 · Lesson 5
When does the chapter say a Post-Incident Review (PIR) should happen?
  • A) Within 24 hours, while emotions are highest.
  • B) Within 5 to 10 working days — late enough that the dust has settled, early enough that memory is fresh.
  • C) Once a year, at the annual security review.
  • D) Only when a regulator asks for one.
Hint — The window is a balance between letting people calm down and not letting them forget the details.
Answer — B — The chapter recommends 5 to 10 working days. Sooner risks emotional finger-pointing; later risks selective memory.
Q22 MCQ Chapter 2 · Lesson 5
Which two root-cause-analysis techniques are emphasised in the PIR section?
  • A) 5 Whys and Fishbone (Ishikawa) diagram.
  • B) SWOT and PEST.
  • C) STRIDE and PASTA.
  • D) OWASP Top 10 and CVSS scoring.
Hint — One is a chain-of-questions, the other is a category-of-causes diagram.
Answer — A — 5 Whys drills down a chain of causation. The Ishikawa fishbone organises causes by category (People, Process, Tools, Environment, etc.). STRIDE/PASTA are threat-modelling methods, not RCA. SWOT is strategic planning.
Q23 MCQ Chapter 2 · Lesson 5
Mean Time to Detect (MTTD), Mean Time to Acknowledge (MTTA), and Mean Time to Recover (MTTR) are tracked across incidents to do what?
  • A) Decide which employee to fire.
  • B) Quantify whether the incident-response programme is improving or degrading over time.
  • C) Set the cybersecurity-insurance premium.
  • D) Choose the SIEM vendor.
Hint — Metrics across many incidents tell you about the programme, not about any individual.
Answer — B — MTTD/MTTA/MTTR are programme-level metrics. They reveal whether monitoring is getting faster, whether the on-call rotation is paging quickly enough, and whether eradication-and-recovery is improving as runbooks mature.
Q24 MSQ — Select all that apply Chapter 2 · Lesson 5
Action items emerging from a PIR are typically grouped into which buckets? (Select all that apply)
  • A) Detection.
  • B) Prevention.
  • C) Process.
  • D) People.
  • E) Marketing.
Hint — Four buckets, not five, and they map to what kind of intervention is needed.
Answer — A, B, C, D — Detection (better logging/alerting), Prevention (patches, hardening), Process (runbook updates, training cadence), People (training, hiring, role clarity). Marketing is not a PIR bucket.
Q25 MCQ Chapter 2 · Lesson 5
What is the single most important characteristic of a blameless post-mortem?
  • A) It is short.
  • B) It is conducted by an external consultant.
  • C) It separates the question "who made the call?" from the question "why was that the most reasonable call given the information available at the time?"
  • D) It produces fewer than three action items.
Hint — Blameless does not mean nameless — it means avoiding the easy fall-back of pinning the failure on one person.
Answer — C — A blameless culture surfaces honest information. The moment people fear personal blame, they hide what they know, the RCA degrades, and the same incident recurs.

Chapter 3 — First Response & Evidence Handling (12 questions)

Q26 MCQ Chapter 3 · Lesson 1
What is the correct order of the five triage steps as taught in the First Response chapter?
  • A) Capture → Verify → Scope → Notify → Contain.
  • B) Verify → Scope → Contain → Notify → Capture.
  • C) Contain → Verify → Notify → Capture → Scope.
  • D) Notify → Verify → Contain → Capture → Scope.
Hint — Verification always comes first. Capture is the last step in triage — you have to know what to capture before you capture it.
Answer — B — The chapter's order is V-S-C-N-C: Verify the alert is real, Scope which assets are involved, Contain the obvious blast radius, Notify the right people, then Capture volatile evidence. Doing capture first is a common newbie reflex; doing it without scope is wasteful.
Q27 MSQ — Select all that apply Chapter 3 · Lesson 2
Which of the following are among the five core principles of chain of custody? (Select all that apply)
  • A) Integrity.
  • B) Authenticity.
  • C) Continuity.
  • D) Reproducibility.
  • E) Profitability.
Hint — Five principles. One of these options is from the wrong dictionary entirely.
Answer — A, B, C, D — The fifth principle is Minimality (collect only what is needed). Profitability is a business term and is not a forensic principle.
Q28 MCQ Chapter 3 · Lesson 2
What does Internet Engineering Task Force (IETF) Request for Comments (RFC) 3227 — Order of Volatility tell a first responder?
  • A) The order in which to apply security patches.
  • B) The order in which to capture forensic evidence, from most volatile (CPU registers, RAM) to least volatile (disk, removable media).
  • C) The order in which to inform regulators.
  • D) The order in which to escalate severity.
Hint — Volatility = how fast the evidence disappears if you do nothing.
Answer — B — RFC 3227 captures memory and ephemeral state first because rebooting or pulling the plug destroys it. Disk and removable media survive longer and are captured later.
Q29 MCQ Chapter 3 · Lesson 2
The chapter cites the NotPetya outbreak as a cautionary tale about which evidence-handling habit?
  • A) Powering off infected hosts immediately.
  • B) Restoring from backups before forensics is complete.
  • C) Ignoring memory capture and going straight to disk imaging.
  • D) Skipping the chain-of-custody form to save time.
Hint — Reflexive power-off destroys exactly the artefact that NotPetya hid in.
Answer — A — NotPetya's encryption key material lived in memory. Pulling the plug destroyed any chance of recovering it post-hoc. The lesson: contain, do not power off — the malware chapter's golden rule, restated here in evidence-handling terms.
Q30 MCQ Chapter 3 · Lesson 2
What does the "5W + H" rule require to be recorded for every evidence item?
  • A) Who, What, When, Where, Why, and How (it was acquired).
  • B) Win, Wait, Write, Watch, Wipe, and Hash.
  • C) Who, Whom, Where, When, Whose, and How-much.
  • D) Width, Weight, Wavelength, Wattage, Whistle, and Heat.
Hint — Standard journalism mnemonic, transplanted into forensics.
Answer — A — Every evidence record answers: who acquired it, what it is, when it was acquired (in Coordinated Universal Time, UTC), where it came from, why it was acquired, and how it was captured (tool, command, hash).
Q31 MCQ Chapter 3 · Lesson 2
Why does the chapter insist on Coordinated Universal Time (UTC) for all incident timestamps?
  • A) UTC is mandated by GDPR.
  • B) Local time-zone arithmetic during cross-border incidents introduces errors and disputes; UTC is unambiguous.
  • C) UTC is the only format the SIEM can ingest.
  • D) UTC is the most secure time format.
Hint — The reason is operational, not regulatory or technical.
Answer — B — When an attacker pivots from a server in Frankfurt to a workstation in Toronto to a cloud account in Virginia, every clock disagrees. UTC removes the arithmetic and removes the dispute.
Q32 MCQ Chapter 3 · Lesson 2
Why is the principle "append, don't delete" central to evidence handling?
  • A) Storage is cheap.
  • B) Deleting any file from a forensic image, even an obviously irrelevant one, destroys the integrity of the image and the chain of custody.
  • C) GDPR requires it.
  • D) It is faster than deleting.
Hint — The forensic image must remain bit-for-bit what was acquired.
Answer — B — Forensic images are hashed at acquisition. Any modification, including "helpful" deletion, changes the hash, and the image is no longer admissible as the same artefact that was acquired.
Q33 MSQ — Select all that apply Chapter 3 · Lesson 3
Which of the following describe correct behaviour during the Capital One Web Application Firewall (WAF) Server-Side Request Forgery (SSRF) incident's evidence-handling response? (Select all that apply)
  • A) The team preserved Amazon Simple Storage Service (S3) access logs and CloudTrail trails before remediation.
  • B) The team published the attacker's home address publicly.
  • C) The team produced a defensible record of which 100 million records were accessed.
  • D) The team patched the SSRF vulnerability before snapshotting evidence.
  • E) The team coordinated with the U.S. Federal Bureau of Investigation (FBI).
Hint — Three of these are correct evidence-handling moves and two are inventions or rookie errors.
Answer — A, C, E — Capital One preserved the logs that allowed accurate disclosure scoping; coordinated with law enforcement; and produced a defensible scope of impact. Patching before evidence preservation (D) would have destroyed the audit trail; publishing personal data of a suspect (B) is illegal and is not how this case was handled.
Q34 MSQ — Select all that apply Chapter 3 · Lesson 4
Which roles are typically named in an incident-response RACI? (Select all that apply)
  • A) Incident Commander (IC).
  • B) First Responder.
  • C) Communications Lead.
  • D) Legal Liaison.
  • E) Subject-Matter Expert (SME).
  • F) Scribe.
  • G) Marketing Intern.
Hint — Six roles, not seven. The unwanted option is the obvious giveaway.
Answer — A, B, C, D, E, F — These are the six recurring roles. The Marketing Intern is not part of an IR rota; the Communications Lead handles external messaging, often working with marketing but only at the appropriate phase.
Q35 MCQ Chapter 3 · Lesson 4
What does the Incident Commander (IC) role do, and not do, in a serious incident?
  • A) The IC personally writes the SQL queries to find the malicious rows.
  • B) The IC decides direction and tempo, makes the calls that others cannot make alone, and absorbs the political pressure so the responders can keep working.
  • C) The IC produces the press release.
  • D) The IC is always the most senior technical person on the bridge.
Hint — Commander = decider, not the most experienced query-writer.
Answer — B — The IC's job is decisions, tempo, and shielding the responders from interruption. They are not necessarily the deepest technical specialist on the bridge — that person is usually a Subject-Matter Expert.
Q36 MCQ Chapter 3 · Lesson 5
The Equifax case is cited in Chapter 3 as an example of failure in which area?
  • A) Patching cadence.
  • B) Role clarity, communication, and overall response coordination — not only the failure to patch.
  • C) Evidence preservation.
  • D) Customer-notification timing only.
Hint — The chapter uses Equifax for its broader role-failure story, not solely the missed patch.
Answer — B — Equifax did fail to patch a known Apache Struts Common Vulnerabilities and Exposures (CVE), but the chapter focuses on the cascade of role-clarity and communication failures that followed: who knew what, when, and who should have escalated.
Q37 MCQ Chapter 3 · Lesson 5
Which of the following best summarises the Minimality principle of chain of custody?
  • A) Take a forensic image of every disk in the company, just to be safe.
  • B) Collect only the evidence you need to answer the questions of the investigation, and no more.
  • C) Compress evidence files as small as possible to save storage.
  • D) Always anonymise evidence before reviewing it.
Hint — Minimality is about scope of collection, not file size.
Answer — B — Over-collection raises privacy issues, lengthens forensic review, and can put irrelevant private data into court records. Collect what you need, defensibly justify what you collected, and stop there.

Chapter 4 — Malware Detection, Analysis & Containment (13 questions)

Q38 MSQ — Select all that apply Chapter 4 · Lesson 1
Which of the following are families of malware identified in the chapter? (Select all that apply)
  • A) Virus.
  • B) Worm.
  • C) Ransomware.
  • D) Trojan.
  • E) Rootkit.
  • F) Spyware.
  • G) Antivirus.
Hint — Six families, not seven. The seventh option is what defends against malware.
Answer — A, B, C, D, E, F — The six malware families are virus, worm, ransomware, trojan, rootkit, and spyware. Antivirus is a control, not a family.
Q39 MCQ Chapter 4 · Lesson 1
What is the defining characteristic that distinguishes a worm from a virus?
  • A) Worms are written in C; viruses are written in Python.
  • B) Worms self-propagate across a network without user interaction; viruses require a host file or user action to spread.
  • C) Worms only target Windows; viruses only target Linux.
  • D) Worms are always destructive; viruses never are.
Hint — The difference is in how the malware moves.
Answer — B — A worm is autonomous: it scans, exploits, copies, repeats. A virus piggybacks on a host file or a user action. WannaCry was a worm because it exploited EternalBlue to propagate without help.
Q40 MSQ — Select all that apply Chapter 4 · Lesson 2
Which of the following are categories of Indicator of Compromise (IoC) listed in the chapter? (Select all that apply)
  • A) File-based (hashes, paths, names).
  • B) Network-based (Internet Protocol (IP) addresses, domains, JA3 fingerprints).
  • C) Host-based (registry keys, services, scheduled tasks).
  • D) Behavioural (PowerShell with encoded command, parent-child process anomalies).
  • E) Astrological (the attacker's star sign).
Hint — Four real categories plus one obvious distractor.
Answer — A, B, C, D — A fifth category is sometimes called Indicators of Attack (IoA), which captures the technique itself rather than the artefact. Astrological IoCs do not exist.
Q41 MCQ Chapter 4 · Lesson 2
On the Pyramid of Pain, which IoC type is at the top — i.e., the most painful for the attacker to change?
  • A) File hashes.
  • B) Internet Protocol (IP) addresses.
  • C) Domain names.
  • D) Tactics, Techniques, and Procedures (TTPs).
Hint — The higher the rung, the harder it is for the attacker to swap the indicator out.
Answer — D — TTPs sit at the top because changing them requires the attacker to retrain, retool, and rewrite playbooks. Hashes flip in seconds (recompile); IPs and domains in minutes; TTPs in months. Detecting at the TTP level forces real attacker investment to evade you.
Q42 MCQ Chapter 4 · Lesson 2
When investigating an unknown file hash, which of these is the appropriate first step?
  • A) Run the file in production to see what it does.
  • B) Submit the hash (not the file) to a public reputation service such as VirusTotal to see if it is already known.
  • C) Email the file to the vendor for analysis.
  • D) Delete the file immediately so it cannot run.
Hint — Submitting the file itself can leak sensitive content; submitting the hash leaks only a fingerprint.
Answer — B — A hash leaks no content. If VirusTotal has seen the file, you get an immediate answer. If not, you can decide whether the file is safe to upload (sometimes yes, sometimes no — internal-only documents may not be), or whether to detonate locally in a sandbox. Running it in production is never the answer.
Q43 MCQ Chapter 4 · Lesson 3
Why is the chapter's golden rule "CONTAIN, DO NOT POWER OFF"?
  • A) Power-off destroys volatile evidence (memory, encryption keys, ephemeral processes).
  • B) Power-off voids the device's warranty.
  • C) Power-off causes file-system corruption.
  • D) Power-off triggers an audible alarm.
Hint — The reason is forensic, not mechanical.
Answer — A — Containment (network isolation, account disable, process kill) preserves the host's state for forensics. Power-off destroys memory, in-flight processes, and any decryption keys the malware may have left in RAM. NotPetya is the canonical cautionary tale.
Q44 MSQ — Select all that apply Chapter 4 · Lesson 3
NIST SP 800-61 distinguishes containment by time horizon. Which of the following are correct? (Select all that apply)
  • A) Short-term containment is what stops the bleeding right now (network isolation, account disable).
  • B) System-backup containment refers to capturing forensic images before further action.
  • C) Long-term containment is the stable state in which the host can keep operating safely until full eradication and rebuild.
  • D) Containment is a one-step activity completed within five minutes.
Hint — Three correct horizons; one obviously wrong overstatement.
Answer — A, B, C — The three horizons are short-term, system-backup, and long-term. Containment is rarely a single step; it is a sequence of bounded actions buying time for eradication.
Q45 MCQ Chapter 4 · Lesson 3
Why is the SBAR (Situation–Background–Assessment–Recommendation) communication pattern useful in malware response?
  • A) It is a memory-dumping tool.
  • B) It is a structured way to brief executives or other responders quickly without losing critical context, originally borrowed from medicine.
  • C) It is a containment technique.
  • D) It is a malware family.
Hint — SBAR is a communication tool, originally from clinical handover.
Answer — B — SBAR was popularised in healthcare for shift handovers and crisis calls. It transplants well to incident response: a 90-second structured brief loses far less context than an unstructured update.
Q46 MCQ Chapter 4 · Lesson 4
Which malware was the centrepiece case study for worm-style propagation via the EternalBlue Server Message Block (SMB) exploit?
  • A) Stuxnet.
  • B) WannaCry.
  • C) Mirai.
  • D) Conficker.
Hint — Worm + SMB + ransomware-payload + 2017 = a single famous name.
Answer — B — WannaCry, May 2017. EternalBlue exploited the SMBv1 protocol; the kill-switch domain was registered by Marcus Hutchins, slowing the worm. NotPetya followed a month later with a similar SMB-propagation pattern but a destructive payload.
Q47 MCQ Chapter 4 · Lesson 4
What was the distinctive feature of NotPetya compared to a normal ransomware family?
  • A) It demanded a one-thousand-bitcoin ransom.
  • B) It was destructive: even when the ransom was paid, the chosen encryption design did not allow recovery, suggesting a wiper masquerading as ransomware.
  • C) It only infected printers.
  • D) It had no kill switch.
Hint — Looks like ransomware, behaves like a wiper.
Answer — B — NotPetya's encryption was designed in a way that recovery was infeasible regardless of payment, indicating the goal was destruction (Ukraine sabotage), not revenue. Maersk and Merck were collateral damage running into the billions of dollars.
Q48 MCQ Chapter 4 · Lesson 4
The Colonial Pipeline incident in May 2021 illustrates which lesson most strongly?
  • A) Operational disruption (fuel availability) can outweigh the direct ransom cost.
  • B) Antivirus is sufficient defence against modern ransomware.
  • C) Backups are unimportant if you have insurance.
  • D) The U.S. Federal Bureau of Investigation (FBI) cannot recover any cryptocurrency.
Hint — The pipeline being shut down disrupted fuel supply across the U.S. east coast for several days.
Answer — A — The ransom paid was approximately $4.4 million; the operational cost was orders of magnitude higher. The FBI ultimately recovered a substantial portion of the bitcoin (the opposite of D), and AV alone is not a defence against modern ransomware (B).
Q49 MCQ Chapter 4 · Lesson 4
What was the central technique behind the SolarWinds Orion (SUNBURST) supply-chain compromise?
  • A) A phishing email to the SolarWinds CEO.
  • B) Compromised build pipeline — malicious code injected into the legitimate, signed Orion product update before it was distributed to customers.
  • C) An unsecured Amazon Simple Storage Service (S3) bucket.
  • D) A USB drop in the SolarWinds car park.
Hint — The signature was valid because the malicious code was added during the legitimate build.
Answer — B — The malicious DLL was inserted into the build process, then signed with SolarWinds' legitimate signing key as part of the normal release. Customers received the malicious update through ordinary patching channels, which is why supply-chain attacks are so dangerous and so hard to detect.
Q50 MSQ — Select all that apply Chapter 4 · Lesson 5
Which of the following are sound containment steps in a malware case? (Select all that apply)
  • A) Network-isolating the host (block at switch, virtual local-area network (VLAN) move, host firewall rule).
  • B) Disabling the affected user accounts and rotating their tokens.
  • C) Killing known malicious processes and removing the autorun keys (only after evidence preservation).
  • D) Pulling the power cable as the first action.
  • E) Posting the malware sample to a public file-sharing site.
Hint — Three of these are correct, two are reflex errors with serious consequences.
Answer — A, B, C — Power-off destroys volatile evidence (D); public posting may tip off the attacker that they have been spotted, or expose customer data inside the sample (E). Network isolation, account disable, and process termination after evidence preservation are textbook containment.

Chapter 5 — Email Security Incidents (Phishing & Spam) (12 questions)

Q51 MSQ — Select all that apply Chapter 5 · Lesson 1
Which of the following are correctly defined? (Select all that apply)
  • A) Phishing — wide untargeted email lure to many recipients.
  • B) Spear-phishing — tailored email to a specific recipient or small group, using personal context.
  • C) Whaling — phishing targeted at senior executives.
  • D) Business Email Compromise (BEC) — typically credential or fraud-payment scheme using a compromised or look-alike sender, often without any malware payload.
  • E) Snail-phishing — the slow, hand-written postal version.
Hint — Four real definitions and one obvious invention.
Answer — A, B, C, D — The four flavours scale from indiscriminate to highly targeted. BEC is distinct because it usually involves no attachment or link — it is pure social engineering for wire-fraud or for credential change.
Q52 MCQ Chapter 5 · Lesson 1
What are the three stages of a phishing email's lifecycle as taught in the chapter?
  • A) Lure → Hook → Payload.
  • B) Open → Click → Pay.
  • C) Send → Wait → Profit.
  • D) Plan → Build → Deploy.
Hint — A three-word fishing metaphor, fittingly.
Answer — A — Lure (the attention-grabbing email content), Hook (the vector that the user is induced to interact with — link, attachment, reply), Payload (the actual harmful action — credential capture, malware drop, fraudulent wire).
Q53 MSQ — Select all that apply Chapter 5 · Lesson 2
Robert Cialdini's six social-engineering levers used in phishing include which of the following? (Select all that apply)
  • A) Reciprocity.
  • B) Commitment & Consistency.
  • C) Social Proof.
  • D) Authority.
  • E) Liking.
  • F) Scarcity.
  • G) Quantum Entanglement.
Hint — Six levers. The seventh option is from a physics textbook.
Answer — A, B, C, D, E, F — All six classic Cialdini levers appear in modern phishing. Scarcity and Authority are particularly common in BEC and parcel-courier lures. Quantum entanglement is not a social-engineering lever.
Q54 MCQ Chapter 5 · Lesson 3
What does the Sender Policy Framework (SPF), defined in Internet Engineering Task Force (IETF) Request for Comments (RFC) 7208, actually authenticate?
  • A) The display name in the From: header.
  • B) The Internet Protocol (IP) address (or address range) authorised to send email on behalf of a domain, by checking against the domain's published SPF Domain Name System (DNS) record.
  • C) The cryptographic signature on the message body.
  • D) The recipient's identity.
Hint — SPF is about who is allowed to send from this domain, expressed as IP ranges in DNS.
Answer — B — SPF is a published list of authorised sending IPs in a TXT record in DNS. The receiver checks the connecting IP against that list. SPF says nothing about the message body or the display name. DKIM (the next question) handles the body signature.
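To make that concrete, here is a minimal illustrative SPF record for a hypothetical domain example.com (the address range and the include target are placeholders, not real infrastructure):

    example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.mailhost.example -all"

The receiving server compares the connecting IP against the ip4: range and whatever the include: mechanism expands to; the trailing -all declares that mail from any other source should hard-fail SPF.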
Q55 MCQ Chapter 5 · Lesson 3
What does DomainKeys Identified Mail (DKIM), defined in RFC 6376, do?
  • A) It encrypts the message in transit.
  • B) It cryptographically signs selected message headers and the body using a private key whose public counterpart is published in the sending domain's Domain Name System (DNS).
  • C) It scans attachments for malware.
  • D) It checks the recipient's reputation.
Hint — Public-key signature in DNS plus a signed message — the receiver verifies the signature to verify the message has not been tampered with.
Answer — B — DKIM proves the message was signed by a server holding the domain's private key and has not been altered in transit. It does not encrypt; it does not scan; it does not check the recipient. Encryption in transit is the job of Transport Layer Security (TLS); attachment scanning is the gateway's job. The sketch below shows how the header and the DNS record fit together.
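Here is that sketch, with a hypothetical selector and domain, and with the hash and signature values left as placeholders. The signing server adds a header like this:

    DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=sel2024;
        h=from:to:subject:date; bh=<body hash>; b=<signature>

The s= selector tells the receiver which DNS record holds the public key, at <selector>._domainkey.<domain>:

    sel2024._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"

If the signature verifies against that key, the headers listed in h= and the body (hashed into bh=) arrived unaltered.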
Q56 MCQ Chapter 5 · Lesson 3
What does Domain-based Message Authentication, Reporting and Conformance (DMARC), defined in RFC 7489, add to SPF and DKIM?
  • A) It encrypts the message body.
  • B) It tells the receiver what to do when SPF or DKIM fails (none / quarantine / reject) and provides a reporting feedback loop to the domain owner.
  • C) It blocks all attachments.
  • D) It signs the recipient's address.
Hint — DMARC is the policy and reporting layer on top of SPF and DKIM.
Answer — B — DMARC's two contributions are (1) a published policy on what to do when authentication fails, and (2) aggregate and forensic reports back to the domain owner so they can see who is sending mail purporting to be them.
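A minimal illustrative DMARC record for the same hypothetical example.com (the reporting mailbox is a placeholder):

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

p= publishes the failure policy (none, quarantine, or reject) and rua= names the mailbox to which receivers send aggregate reports, which is the feedback loop the answer describes.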
Q57 MCQ Chapter 5 · Lesson 3
An attacker registers canadapost-delivery-notice.com to send phishing for a parcel-delivery scam. The legitimate domain is canadapost.ca. Which of the following is true about SPF, DKIM, and DMARC defences in this scenario?
  • A) Real Canada Post's DMARC policy will automatically block the attacker's email.
  • B) SPF, DKIM, and DMARC defend against spoofing of the protected domain, not against typosquatting domains owned and operated by the attacker.
  • C) The attacker's mail must fail DMARC because it does not match Canada Post's SPF record.
  • D) DKIM signing on the attacker's domain is impossible.
Hint — The attacker is using their own domain, not pretending to be Canada Post at the protocol level.
Answer — B — Authentication frameworks protect domains from being spoofed at the protocol level. They do not stop look-alike domains. The defence against typosquatting is gateway heuristics (look-alike-domain detection), DNS filtering, and user awareness.
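Protocol checks cannot flag canadapost-delivery-notice.com, but a gateway heuristic can. The following is a minimal Python sketch of look-alike-domain detection using only the standard library; the watch-list and threshold are illustrative, and production tools add richer signals (homoglyph maps, domain-registration age):

    from difflib import SequenceMatcher

    # Hypothetical watch-list of brands we protect; the threshold is illustrative.
    PROTECTED = ["canadapost.ca"]

    def looks_like(domain: str, threshold: float = 0.8) -> bool:
        """Flag domains that embed a protected brand name or sit
        suspiciously close to one in edit similarity."""
        name = domain.lower().rstrip(".")
        for brand in PROTECTED:
            if name == brand:
                return False                 # the genuine domain itself
            stem = brand.split(".")[0]       # "canadapost"
            if stem in name:
                return True                  # brand embedded in a longer domain
            if SequenceMatcher(None, name, brand).ratio() >= threshold:
                return True                  # near-miss spelling
        return False

    print(looks_like("canadapost-delivery-notice.com"))  # True  (embedded brand)
    print(looks_like("canadap0st.ca"))                   # True  (near-miss)
    print(looks_like("canadapost.ca"))                   # False (genuine domain)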
Q58 MSQ — Select all that apply Chapter 5 · Lesson 4
Which of the following are core principles of effective phishing-awareness training? (Select all that apply)
  • A) Frequent, short, varied lessons rather than one long annual session.
  • B) Realistic, regularly-rotated phishing simulations.
  • C) A blameless reporting culture so users feel safe to click "Report" without fear.
  • D) Public shaming of users who fall for simulations.
  • E) Measurement of report rate, not only of click rate.
Hint — Four are accepted best practice. One is the worst possible thing you can do.
Answer — A, B, C, E — Public shaming destroys reporting culture; once users hide their clicks the programme blinds itself. Click rate alone is incomplete; the report rate is the leading indicator of a healthy human firewall.
Q59 MCQ Chapter 5 · Lesson 5
When a confirmed phishing email is reported by an end user, which of the following is the correct first analyst action?
  • A) Reply-all to the entire company warning of the phishing.
  • B) Verify the report (check headers, sender, link) and then run a message trace to identify everyone else who received the same email.
  • C) Forward the email to the police.
  • D) Delete the email from the reporter's mailbox immediately and close the case.
Hint — First step in any IR is verify. Second is scope.
Answer — B — Verify, then scope. Without scoping, you will purge one mailbox while ten other employees click the same link. The chapter's first-30-minute cheatsheet starts with verify-and-trace.
Q60 MSQ — Select all that apply Chapter 5 · Lesson 5
Which of the following are appropriate steps in the first 30 minutes of an email-phishing incident? (Select all that apply)
  • A) Verify the report by examining headers and the link.
  • B) Identify all recipients via a message trace.
  • C) Purge the email from all affected mailboxes.
  • D) Block the sender domain and the malicious URLs at the gateway and the Domain Name System (DNS) filter.
  • E) Tell the press.
  • F) If credentials may have been entered, force password reset and revoke session tokens for affected users.
Hint — Five are correct. One is way too early in the timeline.
Answer — A, B, C, D, F — Press communications, if needed at all, come much later, are coordinated by the Communications Lead, and almost never happen for an ordinary credential-phishing case.
Q61 MCQ Chapter 5 · Lesson 5
What is Adversary-in-the-Middle (AITM) phishing, and why is it dangerous?
  • A) It is the same as a normal phishing page, just slower.
  • B) It proxies the victim's session through an attacker server in real time, capturing not only the password but also the post-multi-factor-authentication (MFA) session cookie, effectively bypassing many MFA implementations.
  • C) It is a hardware attack that requires physical access.
  • D) It only works on Windows.
Hint — The danger is the session token capture, which neutralises MFA in many configurations.
Answer — B — AITM kits relay credentials and the MFA challenge in real time, then capture the resulting session cookie. The attacker now has an authenticated session without ever needing the password again. Phishing-resistant MFA (FIDO2 hardware keys) defends against this; SMS one-time passwords do not.
Q62 MCQ Chapter 5 · Lesson 1
A finance assistant receives an email that appears to be from the chief executive officer (CEO), asking urgently for a wire transfer to a new vendor before close of business. The body is plausible. There is no attachment and no link. What is this most likely?
  • A) A malware drop.
  • B) A Business Email Compromise (BEC) attempt.
  • C) A whaling attack against the CEO.
  • D) An ordinary phishing message.
Hint — No payload, no link, urgency, finance angle — the pattern is classic.
Answer — B — BEC typically uses no malware. Either the CEO's account is compromised, or a look-alike domain is used. Whaling targets the CEO; this targets the assistant. Defence is process: out-of-band verification of any new wire instruction over a known-good channel.

Chapter 6 — Web Application Security Incidents (13 questions)

Q63 MCQ Chapter 6 · Lesson 1
The Open Web Application Security Project (OWASP) Top 10 (2021) places which category at A01?
  • A) Injection.
  • B) Broken Access Control.
  • C) Cryptographic Failures.
  • D) Server-Side Request Forgery (SSRF).
Hint — In the 2021 revision, the category that sat in fifth place in 2017 climbed all the way to A01.
Answer — B — Broken Access Control is A01 in the 2021 OWASP Top 10. Cryptographic Failures (formerly Sensitive Data Exposure) is A02; Injection is A03; SSRF is A10.
Q64 MSQ — Select all that apply Chapter 6 · Lesson 1
Which of the following are categories present in the OWASP Top 10 (2021)? (Select all that apply)
  • A) A01: Broken Access Control.
  • B) A03: Injection.
  • C) A06: Vulnerable and Outdated Components.
  • D) A10: Server-Side Request Forgery (SSRF).
  • E) A11: Email Phishing.
Hint — Four are real Top 10 entries; one is a category that does not exist in OWASP Top 10.
Answer — A, B, C, D — There is no A11; the OWASP Top 10 has exactly ten categories. Phishing is not a web-application risk category in the OWASP Top 10.
Q65 MCQ Chapter 6 · Lesson 2
Which of the following is the canonical example of a SQL injection (SQLi) payload?
  • A) <script>alert(1)</script>.
  • B) ' OR '1'='1.
  • C) ../../etc/passwd.
  • D) http://internal/admin?cmd=ls.
Hint — SQLi payloads escape from a string context into the query's logic.
Answer — B — Classic tautology injection. A is XSS; C is path traversal; D is reminiscent of SSRF or command injection. SQLi belongs to A03 Injection.
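A minimal sketch with Python's standard-library sqlite3 module, showing why the tautology works and why the parameterised query (the fix Q75 returns to) closes it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "' OR '1'='1"

    # VULNERABLE: concatenation lets the quote escape into the query's logic,
    # producing ... WHERE password = '' OR '1'='1', which is always true.
    query = "SELECT * FROM users WHERE password = '" + attacker_input + "'"
    print(conn.execute(query).fetchall())            # every row comes back

    # SAFE: a parameterised query treats the input as data, never as SQL.
    print(conn.execute("SELECT * FROM users WHERE password = ?",
                       (attacker_input,)).fetchall())  # []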
Q66 MCQ Chapter 6 · Lesson 2
What does Cross-Site Scripting (XSS) let an attacker do?
  • A) Read the contents of the server's database directly.
  • B) Execute arbitrary JavaScript in the victim's browser in the context of the vulnerable site, allowing session-cookie theft, key-logging, or page-defacement.
  • C) Send arbitrary email from the server.
  • D) Power off the server.
Hint — The exploit runs in the user's browser, not on the server.
Answer — B — XSS exploits the browser's trust in the page. Reflected, stored, and DOM-based variants all share the principle that attacker-controlled JavaScript executes inside the same origin as the legitimate site, with the user's session.
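As a small illustration of the core server-side defence, contextual output encoding, using Python's standard library (template engines in mainstream frameworks apply this automatically when used correctly):

    import html

    user_input = "<script>alert(1)</script>"

    # Output encoding: the browser renders the payload as visible text
    # instead of executing it as script.
    print(html.escape(user_input))
    # -> &lt;script&gt;alert(1)&lt;/script&gt;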
Q67 MCQ Chapter 6 · Lesson 2
Cross-Site Request Forgery (CSRF) exploits which weakness?
  • A) The browser's automatic inclusion of session cookies in any request to the target site, including requests forged by a different site that the user happens to be visiting.
  • B) Weak password storage.
  • C) Out-of-date Java runtimes.
  • D) Open Wi-Fi networks.
Hint — The exploit relies on the user being already authenticated and the browser obediently sending the cookie.
Answer — A — CSRF abuses the browser's default behaviour of attaching cookies to every request to a site. Without anti-CSRF tokens (or modern cookie attributes such as SameSite), a malicious page on evil.com can trigger an authenticated request to bank.com using the victim's existing session.
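A minimal Python sketch of the synchroniser-token pattern (the function names are hypothetical); the complementary browser-side control is the cookie attribute, e.g. Set-Cookie: session=...; Secure; HttpOnly; SameSite=Lax:

    import secrets

    def issue_csrf_token(session: dict) -> str:
        """Store a random token in the server-side session; the application
        embeds it as a hidden field in every form it renders."""
        session["csrf_token"] = secrets.token_urlsafe(32)
        return session["csrf_token"]

    def is_valid_request(session: dict, submitted: str) -> bool:
        """Reject any state-changing request whose submitted token does not
        match the session's. A forging page on evil.com cannot read the
        token, so it cannot supply it."""
        expected = session.get("csrf_token", "")
        return bool(expected) and secrets.compare_digest(expected, submitted)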
Q68 MCQ Chapter 6 · Lesson 3
What is the NCSA Combined Log Format used for, and which fields does it record?
  • A) It is a Linux file system.
  • B) It is a web-server access-log format that records, per request, fields such as remote host, identity, user, timestamp, request line, status code, response size, referer, and user-agent.
  • C) It is a malware family.
  • D) It is a cryptographic hash function.
Hint — Look at the field names — they describe HTTP requests.
Answer — B — NCSA Combined is the de-facto standard web-server access-log format: nginx's stock access log uses the predefined combined format, and Apache ships it as the built-in "combined" LogFormat. The fields are exactly the ones a forensic analyst needs to retrace the path of a SQL injection or webshell access; the sketch below pulls a sample line apart.
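A minimal Python sketch that parses one Combined-format line with a regular expression; the sample line is fabricated (an RFC 5737 documentation address) and carries a URL-encoded SQL-injection probe of the kind Lesson 2 covers:

    import re

    # One named group per Combined Log Format field:
    # host ident authuser [time] "request" status size "referer" "user-agent"
    COMBINED = re.compile(
        r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
        r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    line = ('203.0.113.7 - - [12/Mar/2024:10:15:32 +0000] '
            '"GET /search?q=%27%20OR%20%271%27=%271 HTTP/1.1" 200 5123 '
            '"-" "Mozilla/5.0"')

    m = COMBINED.match(line)
    if m:
        print(m.group("host"), m.group("status"), m.group("request"))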
Q69 MSQ — Select all that apply Chapter 6 · Lesson 3
Which of the following are layers of web log or web telemetry the chapter lists as relevant during web-application incident response? (Select all that apply)
  • A) Web-server access logs.
  • B) Application-level logs (the framework's own log of requests, exceptions, queries).
  • C) Web Application Firewall (WAF) logs.
  • D) Reverse-proxy or Content Delivery Network (CDN) logs.
  • E) Database query logs.
  • F) Operating-system kernel scheduler logs.
Hint — Five log layers are relevant. The sixth is too low-level to be helpful here.
Answer — A, B, C, D, E — Kernel scheduler logs are about CPU dispatch; they do not help with web-app forensics. The five web layers above are the ones that matter, and a good web IR investigation cross-references at least three of them.
Q70 MCQ Chapter 6 · Lesson 4
What is the correct order of the seven-rung containment ladder for a web-application compromise?
  • A) Eradicate → Confirm → Isolate → Restore → Patch → Lessons-Learned → Preserve.
  • B) Confirm → Isolate → Preserve → Eradicate → Patch & Harden → Restore → Lessons Learned.
  • C) Isolate → Patch → Restore → Confirm → Eradicate → Preserve → Lessons-Learned.
  • D) Confirm → Eradicate → Isolate → Patch → Restore → Preserve → Lessons-Learned.
Hint — Confirm first; isolate before you eradicate; preserve evidence before you destroy it; lessons learned at the end.
Answer — B — The seven rungs in order: Confirm, Isolate, Preserve, Eradicate, Patch & Harden, Restore, Lessons Learned. The order matters: every rung depends on the one before it.
Q71 MCQ Chapter 6 · Lesson 4
Why is "isolate before you eradicate" particularly important when a webshell has been found?
  • A) It is a regulatory requirement.
  • B) If the attacker is currently active and the team eradicates without first isolating, the attacker can simply re-deploy a new webshell from a foothold the team has not yet found.
  • C) It is faster.
  • D) It saves bandwidth.
Hint — If the attacker is still connected, eradication without isolation is a game of whack-a-mole.
Answer — B — Isolation severs the attacker's ability to interact with the host before the team starts visibly cleaning up. Otherwise the attacker observes the cleanup, deploys an additional webshell from another foothold, and the team is back where they started.
Q72 MCQ Chapter 6 · Lesson 5
The Equifax (2017) breach, frequently used as a Chapter 6 case study, is most directly attributable to which OWASP Top 10 category?
  • A) A01: Broken Access Control.
  • B) A03: Injection.
  • C) A06: Vulnerable and Outdated Components — specifically a known, unpatched Apache Struts vulnerability with a published Common Vulnerabilities and Exposures (CVE) identifier.
  • D) A10: Server-Side Request Forgery (SSRF).
Hint — Apache Struts + missed patch + a public CVE = a single category.
Answer — C — Equifax failed to patch a known Struts vulnerability (CVE-2017-5638) for which a fix had been available for months. It is the canonical A06 case study. The Capital One case (Q74) is the canonical A10/SSRF case.
Q73 MCQ Chapter 6 · Lesson 5
The British Airways (2018) Magecart incident is most directly attributable to which root cause?
  • A) An unpatched web server.
  • B) A compromised third-party JavaScript loaded by the payment page, harvesting cardholder data client-side as users typed it.
  • C) A SQL injection in the booking search.
  • D) A misconfigured Amazon Simple Storage Service (S3) bucket.
Hint — Magecart-style attacks live in the browser, not on the server.
Answer — B — Attacker-controlled JavaScript ran inside the victim's browser as part of the legitimate page, capturing card data before it ever reached the back-end. Defences include Content Security Policy (CSP), Subresource Integrity (SRI), and minimising third-party scripts on payment pages.
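A minimal sketch of those two controls; the payment-provider domain and the integrity digest are placeholders. The CSP response header restricts where scripts may load from, and the SRI attribute pins the expected content of each third-party script:

    Content-Security-Policy: script-src 'self' https://pay.example-psp.com

    <script src="https://pay.example-psp.com/checkout.js"
            integrity="sha384-<base64 digest of the expected file>"
            crossorigin="anonymous"></script>

If the provider's file is ever tampered with, the digest no longer matches and the browser refuses to run it.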
Q74 MCQ Chapter 6 · Lesson 5
The Capital One (2019) breach is most directly attributable to which OWASP Top 10 category?
  • A) A01: Broken Access Control.
  • B) A02: Cryptographic Failures.
  • C) A07: Identification and Authentication Failures.
  • D) A10: Server-Side Request Forgery (SSRF).
Hint — Misconfigured Web Application Firewall + cloud metadata service + 100 million records = a single category.
Answer — D — SSRF was the root vector: a misconfigured WAF allowed an attacker to make the WAF itself fetch from the cloud instance metadata service, retrieving the credentials assigned to the WAF's role and using them to read S3 buckets. SSRF entered the OWASP Top 10 in 2021 partly because of this case.
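A minimal Python sketch of the server-side guard (the allow-list entry is hypothetical): permit outbound fetches only to named hosts, and refuse anything that resolves to a private, loopback, or link-local address, which is where the cloud metadata service (169.254.169.254) lives:

    import ipaddress
    import socket
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"api.partner.example"}   # hypothetical outbound allow-list

    def is_safe_fetch(url: str) -> bool:
        """Return True only for http(s) URLs to allow-listed hosts that
        resolve exclusively to public addresses."""
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.hostname:
            return False
        if parsed.hostname not in ALLOWED_HOSTS:
            return False
        try:
            infos = socket.getaddrinfo(parsed.hostname, None)
        except socket.gaierror:
            return False                      # unresolvable: refuse
        for info in infos:
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                return False                  # 169.254.169.254 lands here
        return True

    print(is_safe_fetch("http://169.254.169.254/latest/meta-data/"))  # False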
Q75 MSQ — Select all that apply Chapter 6 · Lesson 4
Which of the following are valid defences that the chapter recommends at the Patch & Harden rung after a web-application compromise? (Select all that apply)
  • A) Add a server-side allow-list for file-upload endpoints (rejecting anything not on the list).
  • B) Disable script execution in any directory designed to hold user-uploaded content.
  • C) Enforce parameterised queries across the application to close out SQL-injection vectors.
  • D) Rotate every credential the compromised host had access to.
  • E) Trust client-side validation as the primary defence.
  • F) Add a Content Security Policy (CSP) to mitigate XSS exposure.
Hint — Five are correct hardening steps; one is the rookie error that opens half of the OWASP Top 10.
Answer — A, B, C, D, F — Client-side validation is a usability feature, not a defence; an attacker bypasses it by sending the request directly. The other five are mainstream hardening steps the chapter walks through.
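A minimal Python sketch of the server-side allow-list from option A (the extension list is illustrative). It belongs alongside option B, because an extension check alone does not stop a renamed or double-extension webshell:

    import os

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}   # illustrative list

    def is_allowed_upload(filename: str) -> bool:
        """Server-side check for an upload endpoint: reject anything whose
        final extension is not on the allow-list. Client-side checks are a
        convenience only; an attacker posts straight to the endpoint."""
        ext = os.path.splitext(filename)[1].lower()
        return ext in ALLOWED_EXTENSIONS

    print(is_allowed_upload("report.pdf"))   # True
    print(is_allowed_upload("shell.php"))    # False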

Disclaimer

Intellectual property. This mid-term review document is the intellectual property of Cyber.SoHo Educational Hub and was authored as original course material by the instructor. It is intended exclusively for the registered students of the Incident Management course as a study aid for the mid-term examination. Redistribution outside the registered student cohort, reposting on third-party platforms, or use as training data for machine-learning systems requires the prior written consent of Cyber.SoHo Educational Hub.

No affiliation. Cyber.SoHo Educational Hub and the author are not affiliated with, endorsed by, sponsored by, or otherwise officially connected to any of the organisations, frameworks, regulators, vendors, products, or platforms named or hyperlinked in this document. References — including but not limited to the National Institute of Standards and Technology (NIST), the European Union, the Office of the Privacy Commissioner of Canada (OPC), the U.S. Federal Trade Commission (FTC), the Canadian Centre for Cyber Security (CCCS), the International Organization for Standardization (ISO), the Payment Card Industry Security Standards Council (PCI SSC), the Open Web Application Security Project (OWASP), MITRE Corporation, the Internet Engineering Task Force (IETF), the Information Commissioner's Office (ICO), the U.S. Department of Health and Human Services (HHS), the U.S. Securities and Exchange Commission (SEC), the U.S. Federal Bureau of Investigation (FBI), Europol, the Royal Canadian Mounted Police (RCMP), the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the FBI Internet Crime Complaint Center (IC3), the Carnegie Mellon Software Engineering Institute (SEI), International Business Machines Corporation (IBM), Microsoft Corporation, Mandiant (Google Cloud), CrowdStrike, Palo Alto Networks Unit 42, VirusTotal, SANS Institute, and YouTube, LLC — are made strictly for educational, reference, and pedagogical purposes. All trademarks, service marks, logos, and brand names are the property of their respective owners.

External-link safety. Every external link in this document points to what the author believes, at time of writing, to be the legitimate official site of the named organisation or to a reputable, freely available specification document (such as an IETF RFC or a NIST Special Publication). Students are nevertheless reminded that the wider Internet is a dynamic environment in which domains may expire, change ownership, or be subverted. Before clicking any link from any document — including this one — students are encouraged to scan the URL through VirusTotal or an equivalent service. Healthy paranoia is a job skill, not a personality flaw.

Liability and ethics. This document is provided for educational purposes only and is not a substitute for professional legal, compliance, regulatory, forensic, or operational advice in any specific incident. Reporting clocks, regulatory thresholds, and procedural requirements cited herein are summary descriptions and may change; always consult the current official text of the relevant law or framework before acting on it in a real incident. The deadlines and time budgets stated for triage, containment, eradication, and recovery activities are planning estimates drawn from industry experience; they are not contractual or regulatory commitments. The tools, products, and platforms named in this document are illustrative examples drawn from a wide and constantly changing market — every one of them has multiple alternatives, both commercial and open-source, and students are actively encouraged to research, compare, and experiment with substitutes that fit their own context and budget. Students performing any technical exercise — packet capture, log analysis, port scanning, vulnerability scanning, malware sandboxing, or any related activity — must do so only on systems and networks for which they have explicit written authorisation. Unauthorised access, scanning, or interference with computer systems is a criminal offence in most jurisdictions, including under the Criminal Code of Canada — Section 342.1 (Unauthorized Use of a Computer) and Section 430(1.1) (Mischief in Relation to Computer Data), the U.S. Computer Fraud and Abuse Act (CFAA), and the Council of Europe's Budapest Convention on Cybercrime — to which both Canada and the United States are signatories. Cyber.SoHo Educational Hub, the author, and the affiliated educational institution accept no liability for any loss, damage, regulatory exposure, or legal consequence arising from the use, misuse, or misinterpretation of this material. Think before you click. Think harder before you exploit.


End of mid-term review.

From the Industry to the Classroom — Building IT's Future, Today and Together!