Malware Detection,
Analysis & Containment.
A five-hour laboratory exercise in safely isolating, dissecting, and writing detection signatures for malicious software — the foundational craft of the modern incident responder.
Read before proceeding.
Three non-negotiable disclaimers govern every aspect of this laboratory. Acknowledge each before you continue.
This laboratory exercise is designed exclusively for controlled educational environments. All techniques, tools, and methodologies introduced herein are presented strictly for defensive, research, and academic purposes within an isolated, authorised lab network. Attempting to apply any of these techniques against live systems, production environments, or third-party infrastructure without explicit written authorisation is illegal in most jurisdictions and may result in criminal prosecution under applicable cybercrime legislation (e.g., the Computer Fraud and Abuse Act — CFAA in the United States, or equivalent national laws). The instructor and Cyber.SoHo bear no responsibility for any misuse of the materials presented in this document.
All sample malware specimens used in this exercise must be sourced from reputable, legally accessible malware repositories (e.g., MalwareBazaar, theZoo (GitHub)) or generated as synthetic, benign test files such as EICAR test strings. Never introduce real, weaponised malware outside of a fully isolated, air-gapped environment.
All tools and techniques referenced throughout this lab (FlareVM, REMnux, VirusTotal, Cuckoo Sandbox, YARA, Ghidra, PEiD, strings, etc.) are open-source, freely available, or publicly accessible at the time of writing. These are suggestions only — not mandatory requirements. Students are actively encouraged to research, substitute, combine, or even create their own equivalent tools. The objective is to develop analytical thinking and methodological rigour, not tool dependency. Creative problem-solving and thorough personal research are rewarded.
All educational materials, lab exercises, assessments, and documentation contained within this package are the exclusive intellectual property of Cyber.SoHo. Reproduction, redistribution, adaptation, or resale of these materials — in whole or in part — without the express written consent of Cyber.SoHo is strictly prohibited. Cyber.SoHo is an independent educational content creator and is not affiliated with any academic institution. When referencing these materials, students must cite: "Cyber.SoHo Incident Management Lab Series, 2026."
The full project, weeks 2 to 10.
This laboratory exercise is part of Project 02 — itself one pillar of the overarching Incident Management Course Project. Below is the complete curriculum roadmap. Week 1 (Introduction to Incident Handling) is excluded from the assessed project. Your work today contributes directly to the cumulative final submission.
| Week | Dates | Class 1 — Topic | Class 2 — Topic |
|---|---|---|---|
| Week 2 | Apr 25 | First Response & Evidence Handling | Securing Crime Scenes & Collecting Digital Evidence |
| Week 3 | Apr 27 – 29 | Handling Malware Incidents | ★ Malware Detection, Analysis & Containment (you are here) |
| Week 4 | May 2 | Email Security Incidents (Phishing & Spam) | Analyzing Email Headers & Tracing Attacks |
| Week 5 | May 4 – 8 | Web Application Security Incidents | Mid-Term Review |
| Week 6 | May 9 | — | Mid-Term Examination |
| Week 7 | May 15 – 16 | Network Security Incidents (DoS, Unauthorized Access) | Network Traffic Analysis & Incident Validation |
| Week 8 | May 21 – 22 | Insider Threats | Detecting & Eradicating Insider Activity |
| Week 9 | May 23 – 25 | Endpoint Security (Mobile, IoT, OT) | Mobile & IoT Forensics and Analysis |
| Week 10 | May 28 – 29 | Final Review | Final Examination |
Each topic above represents a distinct deliverable. The skills, tools, and methodologies practised in each session build upon each other. By the final submission, students will have assembled a comprehensive Incident Management Portfolio that demonstrates competency across all major incident types.
By the end of this five-hour session, you will…
Set up and configure a safe, isolated sandbox environment suitable for malware analysis without endangering host systems or production networks.
Perform static analysis by inspecting malware artefacts — including file headers, embedded strings, metadata, and cryptographic hashes — without executing the file.
Conduct dynamic (behavioural) analysis by running suspected malware in a controlled environment and systematically documenting process creation, network connections, registry modifications, and file system changes.
Leverage industry-standard detection tools such as VirusTotal (VT), Cuckoo Sandbox, and YARA (Yet Another Ridiculous Acronym) rules to automate and scale detection efforts.
Produce a professional malware analysis report following a structured, reproducible format that could be handed off to a security operations team, management, or legal counsel.
Arrive ready. Time on the bench is precious.
Before your scheduled lab session begins, make sure the following items are ready. Failure to complete the pre-lab preparation will significantly reduce the time available for the actual exercise. Tap to check items off — your progress saves locally.
- A virtualisation platform is installed on your workstation (e.g., VMware Workstation Player or VirtualBox). Both are free for personal/educational use.
- You have downloaded at minimum one of the following analysis environments: FlareVM (Windows-based), REMnux (Linux-based), or an equivalent platform of your own choosing. Links are provided in Part 1.
- Your VM (Virtual Machine) is configured with network adapters set to `Host-Only` or `Internal Network` mode to prevent accidental internet access.
- A clean VM snapshot has been taken BEFORE the introduction of any potentially malicious file.
- You have a text editor or IDE (Integrated Development Environment) ready for note-taking and YARA rule creation.
- You have access to your student lab notebook or report template (provided separately by your instructor).
- You have reviewed the key concepts from Day 05 — Handling Malware Incidents as a theoretical foundation.
Weighting: 2 + 3 + 3 points.
This laboratory is graded out of 8 points, distributed across three parts. The weight assigned to each part reflects the cognitive complexity and the time investment expected.
| Part | Assessed skills | Points |
|---|---|---|
| Part 1 — Sandbox Setup & Environment Configuration | Environment design, isolation, snapshotting | 2 |
| Part 2 — Static & Dynamic Malware Analysis | Hands-on analysis, tool proficiency, methodology | 3 |
| Part 3 — Detection, YARA Rules & Final Report | Detection logic, professional communication, reporting | 3 |
| TOTAL | Sum of all three parts | 8 |
Students are encouraged to go beyond the minimum requirements. Bonus credit may be awarded — at the instructor's discretion — for exceptional creativity, the use of self-developed tools or scripts, or particularly insightful analysis commentary within the report.
Sandbox setup & environment configuration.
Estimated time: 50 – 70 minutes. This part focuses on the safe preparation of your analysis workspace. An improperly configured environment is not just a grading issue — it is a real-world security risk. Take your time here.
Task 1.1
Choose and Install Your Analysis Environment
A malware sandbox is an isolated, controlled system where potentially harmful software can be executed and observed without risking damage to production infrastructure. Two predominant, freely available environments are described below. You are, however, free to use any equivalent platform you discover through your own research — be creative.
Option A — FlareVM
FlareVM is a freely available, open-source Windows-based security distribution maintained by Mandiant (now part of Google). It transforms a standard Windows virtual machine into a complete reverse engineering and malware analysis workstation by automating the installation of dozens of specialised tools.
Official repository → github.com/mandiant/flare-vm
Recommended base: Windows 10 or Windows 11 VM. Minimum: 60 GB virtual disk and 4 GB RAM (8 GB recommended).
Key pre-installed tools (a non-exhaustive selection, all used later in this lab): FLOSS, PEiD / Detect It Easy, PE Studio, Process Monitor (Sysinternals), Regshot, FakeNet-NG.
Option B — REMnux
REMnux is a Linux-based toolkit purpose-built for reverse-engineering and analysing malware. It is maintained by Lenny Zeltser and is used extensively by malware researchers and incident responders worldwide.
Official site → remnux.org
Key pre-installed tools (a non-exhaustive selection, all used later in this lab): `file`, `strings`, the `md5sum`/`sha1sum`/`sha256sum` hashing utilities, YARA, INetSim, tcpdump/Wireshark.
Step-by-Step: Creating Your Sandbox VM
- Download and install a hypervisor (VMware Workstation Player or VirtualBox) if not already present.
- Obtain the base operating system ISO or OVA image required by your chosen platform. For FlareVM, a licensed Windows 10/11 ISO is required. For REMnux, a free Ubuntu-based OVA is available directly from the official REMnux website.
- Create a new virtual machine. Allocate at minimum: 2 CPU cores, 4 GB RAM, 60 GB dynamically allocated virtual disk.
- Follow the installation guide for your chosen platform (FlareVM or REMnux). The FlareVM installation script can take 30 – 90 minutes depending on internet speed. Consider pre-downloading before the lab session.
- Critical: After the environment is fully installed and configured, take a clean VM snapshot. Name it clearly, for example: `CLEAN_BASELINE_DO_NOT_DELETE`. This snapshot will be your restore point between analyses and between lab sessions.
Task 1.2
Network Isolation & Environment Hardening
One of the most critical — and most commonly overlooked — aspects of malware analysis is network isolation. A malware specimen that establishes a live network connection can beacon to its Command & Control (C2) server, potentially exposing your real-world IP address or enabling further payload delivery. The following steps are non-negotiable:
- Set the VM network adapter to `Host-Only` or `Internal Network` mode. This prevents the VM from routing traffic to the physical network or the internet.
- Optionally, deploy FakeNet-NG (pre-installed on FlareVM) or INetSim to simulate network services (DNS, HTTP, SMTP) within the sandbox. This allows malware to "believe" it has network connectivity while actually communicating only with controlled local services.
- Disable any shared folders between your host operating system and the guest VM. Malware capable of detecting virtualisation environments may attempt to traverse shared directories.
- Disable clipboard sharing between host and guest.
- Ensure Windows Defender or equivalent antivirus is disabled within the sandbox (FlareVM handles this automatically). Antivirus interference will disrupt both static and dynamic analysis.
Task 1.3
Verify, Document & Screenshot Your Setup
Before proceeding to Part 2, you must document your environment configuration in your report. Verification demonstrates that your findings are reproducible and your methodology is sound.
- Screenshot of your VM summary screen (showing CPU, RAM, disk allocation, and network adapter type).
- Screenshot confirming the network adapter is set to Host-Only or Internal Network mode.
- Screenshot of your clean baseline snapshot in the VM snapshot manager.
- Brief written description (3 – 5 sentences in your report) explaining your choice of platform and the rationale behind it. If you chose a non-standard platform, explain why.
- List the 5 most important tools available in your environment, with a one-sentence description of each.
Static & dynamic analysis.
Estimated time: 2.5 – 3 hours. This is the core analytical section. You will apply two complementary methodologies — static analysis and dynamic analysis — to a sample malware artefact. Together, these approaches provide a full picture of the malware's identity, capabilities, and intent.
Task 2.1
Obtaining a Safe Malware Sample
Before any analysis can begin, you need a specimen. You must use only legitimate, authorised sources. The following repositories are widely used by professional malware analysts and security researchers:
- MalwareBazaar (Abuse.ch) — Free, publicly accessible repository of malware samples, each with associated metadata and community tags. Search by file type or malware family.
- VirusShare — Requires a free account. Hosts a very large collection of hashed malware samples for research use.
- theZoo (GitHub) — A curated collection of live malware samples intentionally made available for educational and research purposes. Read the repository's disclaimer carefully.
- EICAR Anti-Malware Test File — A completely safe, non-malicious test string that is detected by all major antivirus engines; ideal for initial environment verification without any actual risk.
Transfer the selected sample to your sandbox VM only via a password-protected zip archive (password: `infected` — the standard convention in the malware research community). Never open or execute a sample outside of your fully isolated VM environment. Restore your snapshot immediately once analysis is complete, or at once if unexpected behaviour is detected on the host.
Task 2.2
Static Analysis — Examining Without Executing
Static analysis is the examination of a malware file without running it. This approach is low-risk and often reveals significant information about the file's origin, functionality, and intent. Work through the following sub-tasks sequentially, documenting your findings at each step.
2.2.1 — File Identification & Hashing
Your first step is to positively identify what kind of file you are dealing with and generate cryptographic hashes that uniquely fingerprint it. These hashes allow you to cross-reference the sample against threat intelligence databases.
- Compute the file's cryptographic hash values: MD5, SHA-1, and SHA-256. On REMnux, use the `md5sum`, `sha1sum`, and `sha256sum` command-line utilities. On Windows (FlareVM), use PowerShell's `Get-FileHash` cmdlet or a tool like HashMyFiles. Record all three hash values in your report.
- Use the `file` command on Linux (REMnux) to determine the true file type, regardless of extension. On Windows, use the PEiD tool or Detect It Easy (DIE) to identify the executable packer, compiler, or obfuscation method used.
- Note the file size, creation date, and modification date from the file system metadata. Discrepancies between claimed dates and internal timestamps are a common indicator of tampering.
2.2.2 — String Extraction & Analysis
Embedded strings are one of the richest sources of intelligence in a malware binary. Strings can reveal hardcoded IP addresses, domain names, registry keys, file paths, error messages, encryption keys, or even fragments of code comments left by the author. Use the FLOSS (FLARE Obfuscated String Solver) tool from Mandiant — pre-installed on FlareVM — or the standard `strings` utility on REMnux:
- Run FLOSS or `strings -n 6` (minimum 6-character strings) against your sample and redirect the output to a text file for review.
- Search the output for: IP addresses and domain names (potential C2 infrastructure), file paths and registry keys (potential persistence mechanisms), URLs (potential download locations), and any readable error or status messages (which often reveal the malware's internal logic).
- Highlight and document a minimum of 10 significant strings in your report, with a brief explanation of what each string might indicate about the malware's behaviour or purpose.
Illustrative output (your sample's strings will differ):
- HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
- http://malicious-c2.example/beacon
- %APPDATA%\Microsoft\svchost.exe
- cmd.exe /c whoami /priv
- kernel32.dll : VirtualAlloc
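For intuition, the core of what `strings -n 6` does — find runs of printable ASCII bytes — can be sketched in a few lines of Python (FLOSS goes further by also decoding obfuscated strings, which this sketch does not attempt):

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Return printable-ASCII runs of at least min_len bytes,
    roughly what `strings -n 6` prints for a binary blob."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII range
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Running it over raw file bytes surfaces the same kind of artefacts shown in the illustrative output above.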
2.2.3 — PE (Portable Executable) Header Analysis
Most Windows malware is packaged as a PE (Portable Executable) file. The PE file format contains a wealth of structural metadata that can be read without executing the file. Use PE Studio (Windows), pefile (Python library, cross-platform), or Cutter to examine:
- Import Address Table (IAT): Which Windows API (Application Programming Interface) functions does the malware import? Functions like `CreateRemoteThread`, `WriteProcessMemory`, `VirtualAlloc`, `RegSetValueEx`, or `InternetOpen` are strong behavioural indicators.
- Section headers: Standard PE sections include `.text` (code), `.data` (initialised data), `.rdata` (read-only data), and `.rsrc` (resources). Unusual sections, very high entropy values (above 7.0, suggesting compression or encryption), or sections with mismatched names and permissions are red flags.
- Compilation timestamp: Record the linker timestamp from the PE header. Note whether this date appears plausible given the malware's known first-seen date (if available).
- Rich Header: If present, this header reveals information about the development environment — compiler version, build tool identifiers. This can help attribute the malware to a specific development kit or threat actor toolchain.
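The entropy red flag mentioned above is easy to compute yourself. This sketch calculates Shannon entropy in bits per byte; apply it to raw section bytes you have already extracted with a PE tool (the extraction step is not shown here):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (constant data) up to
    8.0 (uniformly random). Packed or encrypted PE sections typically
    score above ~7.0, while plain x86 code usually sits lower."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A section whose entropy approaches 8.0 is a strong hint that its real content is compressed or encrypted and will only appear at runtime.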
Task 2.3
Dynamic (Behavioural) Analysis — Watch It Run
Restore your clean baseline snapshot before starting this task. Every dynamic analysis session should begin from a known-clean state so that behavioural artefacts from prior sessions do not contaminate your results.
Dynamic analysis involves executing the malware in a controlled environment and systematically monitoring all changes it makes to the system. Set up your monitoring tools BEFORE launching the sample.
2.3.1 — System Monitoring Setup
The goal is to capture a complete "before and after" picture of the system state. Set up the following monitoring layers before executing the malware:
- Process monitoring: Launch Process Monitor (ProcMon) from Sysinternals — pre-installed on FlareVM — BEFORE launching the sample. Set filters to capture only events related to the target process name. On REMnux, use `strace` for system call tracing.
- Registry monitoring: Use Regshot to take a "first shot" of the registry BEFORE execution. After the malware runs, take a "second shot" and compare to reveal all registry changes made.
- Network monitoring: Start Wireshark or tcpdump and begin packet capture on all interfaces BEFORE execution. If using FakeNet-NG, start it first to intercept and log simulated network connections.
- File system monitoring: Note the current directory listing of key system locations (`\Temp`, `\AppData`, `\Windows\System32`, `\Startup`) before execution.
2.3.2 — Execute & Observe
- Execute the malware sample within the sandboxed VM. Allow it to run for a minimum of 3 to 5 minutes. Some malware has sleep timers or logic to delay its primary payload.
- Observe the process tree. Did the malware spawn child processes? Did it inject code into a legitimate system process (a technique known as process injection or process hollowing)? Screenshot and annotate the process tree.
- After the observation window, stop execution (do NOT restart the VM yet). Take the Regshot "second shot" and generate the comparison report.
- Review ProcMon for file system reads/writes, registry access, and network connections. Filter by process name to focus on the sample's activity.
- Examine the Wireshark/FakeNet-NG capture for DNS queries, HTTP/HTTPS connections, and any unusual protocol usage.
2.3.3 — Document Behavioural Indicators (IOCs)
From your dynamic analysis session, compile a comprehensive list of Indicators of Compromise (IOCs). These are forensic artefacts that, once identified, can be used to detect the same malware across an entire organisation's infrastructure.
Minimum required IOCs for your report:
- File Artefacts: Full paths of any files created, modified, or deleted by the malware.
- Registry Artefacts: All registry keys created or modified — include full path and value data.
- Process Artefacts: Names, PIDs, and parent-child relationships of all spawned processes.
- Network Artefacts: All IP addresses, domain names, ports, and protocols observed during capture.
- Persistence: Did the malware attempt persistence across reboots? (scheduled tasks, startup folder, run keys, services)
- Defence Evasion: VM detection, security tool tampering, process token modification.
Immediately after completing your dynamic analysis and collecting all evidence, revert your VM to the clean baseline snapshot. Never allow the analysed environment to persist into future sessions without a revert. Label your network packet capture file and ProcMon log clearly with the sample's SHA-256 hash and the analysis date before saving them as evidence.
Detection, YARA rules & final report.
Estimated time: 1.5 – 2 hours. In this final part, you will pivot from analysis to detection — translating your analytical findings into reusable, shareable threat intelligence artefacts — and produce your formal report.
Task 3.1
VirusTotal & Multi-Engine Scanning
VirusTotal (VT) is a free online service operated by Google that aggregates scan results from over 70 different antivirus engines, URL scanners, and sandboxes. Submitting a file hash (NOT the actual sample for sensitive cases) allows you to quickly determine whether the malware is already known to the security community and by what names different vendors classify it.
If your sample was sourced from a legitimate research repository and is already publicly known, you may submit the file itself for a more complete analysis including VirusTotal's own sandbox execution results. For any sample of uncertain provenance, submit only the SHA-256 hash to avoid inadvertently sharing a novel malware specimen.
- Submit your sample's SHA-256 hash to VirusTotal. Screenshot the detection summary (number of engines detecting vs. total engines).
- Note the detection names assigned by at least three different antivirus engines. Do the names suggest a common malware family? Is there significant disagreement between vendors?
- Examine the "Details" tab for additional metadata — first submission date, file size confirmation, additional hashes, and any SSDEEP (fuzzy hash) value.
- Review the "Behaviour" tab if a dynamic analysis report is available. Cross-reference the network indicators and file system artefacts against your own dynamic analysis findings from Task 2.3. Note any discrepancies and investigate their cause.
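A hash lookup can also be done against VirusTotal's public REST API (v3), which authenticates via an `x-apikey` header. This sketch only builds the request — it never sends it, and it never uploads the sample; you would supply your own free API key:

```python
import urllib.request

VT_FILE_ENDPOINT = "https://www.virustotal.com/api/v3/files/{}"

def build_vt_lookup(sha256: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a VirusTotal API v3 hash-lookup
    request. Looking up a hash never shares the sample itself."""
    req = urllib.request.Request(VT_FILE_ENDPOINT.format(sha256.lower()))
    req.add_header("x-apikey", api_key)  # free VT account provides a key
    return req
```

Sending the built request with `urllib.request.urlopen(req)` (from inside a network-permitted analysis host, never the sandbox) returns JSON that includes the per-engine verdicts you are asked to screenshot above.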
Task 3.2
Cuckoo Sandbox Automated Analysis
Cuckoo Sandbox is an open-source automated malware analysis system that can execute samples in an isolated environment and produce detailed JSON and HTML reports covering network traffic, API calls, file system modifications, memory dumps, and screenshots of the malware's execution. Publicly hosted Cuckoo-based submission services come and go, so verify current availability before the session. Alternatively, REMnux includes components for a local Cuckoo deployment.
If deploying Cuckoo locally, this is an advanced configuration exercise. The instructor will provide guidance if time permits. If using a public submission service:
- Submit your sample (or hash) to a Cuckoo-based submission service. Review the generated analysis report.
- Identify and document the following from the Cuckoo report: process tree summary, list of accessed registry keys, network connections established, API (Application Programming Interface) calls of interest, and any MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) technique identifiers referenced in the report.
- Map at least 3 observed behaviours to specific MITRE ATT&CK technique IDs. Include these mappings in your report.
Common technique IDs you may encounter include, for example: T1547.001 (Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder), T1055 (Process Injection), T1071 (Application Layer Protocol, i.e. C2 over HTTP/DNS), T1027 (Obfuscated Files or Information), and T1497 (Virtualization/Sandbox Evasion).
Task 3.3
Writing YARA Detection Rules
YARA (expanded by its author, Victor Alvarez of VirusTotal, as either "Yet Another Recursive Acronym" or "Yet Another Ridiculous Acronym") is the de facto standard language for malware identification and classification. A YARA rule describes a pattern — byte sequences, strings, structural characteristics — that uniquely identifies a malware specimen or family. Documentation: virustotal.github.io/yara.
Using the strings, PE imports, and file characteristics you identified in Part 2, write at least one custom YARA rule that would detect the malware sample you analysed. Your rule must include at minimum:
- A `meta` section with author (use a handle or your student ID — do not use your full name in the rule), description, date, and sample SHA-256 hash.
- A `strings` section defining at least 3 detection strings or byte patterns. Use a combination of full text strings, hex byte patterns, and optionally regular expressions to demonstrate varied rule types.
- A `condition` section specifying when the rule should fire (e.g., `any of them`, or `2 of ($str*) and uint16(0) == 0x5A4D` to require at least 2 string matches and a valid PE header).
Example YARA rule structure:
rule Detect_SampleMalware_001 {
    meta:
        author = "StudentHandle_2026"
        description = "Detects Sample XYZ — C2 beacon stub"
        date = "2026-04-29"
        sha256 = "<insert_hash_here>"
    strings:
        $s1 = "MaliciousString" nocase
        $h1 = { 4D 5A 90 00 03 00 00 00 } // MZ header
        $r1 = /[a-z]{8}\.(ru|cn|tk)/ wide ascii
    condition:
        uint16(0) == 0x5A4D and 2 of them
}
Test your YARA rule against the sample using the YARA command-line tool: `yara -r your_rule.yar /path/to/sample`. Screenshot the result showing whether the rule fires on (detects) the sample.
If it does not fire, debug your rule — this is a normal and valuable part of the process. Explain your debugging steps in the report.
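To internalise how the example condition evaluates, here is a rough plain-Python emulation of it. This is a teaching sketch only — it checks the ASCII form of the regex, whereas real YARA also matches the `wide` (UTF-16) variant:

```python
import re
import struct

def rule_fires(data: bytes) -> bool:
    """Rough emulation of the example rule's condition:
    uint16(0) == 0x5A4D ('MZ', little-endian) and 2 of the 3 strings."""
    if len(data) < 2 or struct.unpack_from("<H", data, 0)[0] != 0x5A4D:
        return False
    matches = [
        re.search(rb"MaliciousString", data, re.IGNORECASE) is not None,  # $s1 nocase
        data.find(bytes.fromhex("4D5A900003000000")) != -1,               # $h1 hex bytes
        re.search(rb"[a-z]{8}\.(ru|cn|tk)", data) is not None,            # $r1 (ascii only)
    ]
    return sum(matches) >= 2
```

Walking through this logic by hand is a useful debugging habit: if your real rule does not fire, check each string individually before questioning the condition.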
Maximum 5 pages, A4 format.
Your Malware Analysis Report is a formal, structured document that a security operations centre (SOC) analyst, incident responder, or manager could use to understand your findings and take action. It must not exceed 5 pages of A4, excluding any appendix. Quality over quantity — concise, precise writing is a professional skill. Adhere strictly to the structure below.
Required Report Structure
Page allocation guidance — your composite report at a glance.
- Chapter I — Cover Page & Executive Summary (max 0.5 p): Report title, date of analysis, analyst handle (student ID), course reference. A 3 – 5 sentence executive summary written for a non-technical audience: what was analysed, key risks, immediate action.
- Chapter II — Malware Identification (max 0.5 p): File name, size, type · MD5, SHA-1, SHA-256 hashes (formatted in fixed-width monospace) · Source of the sample · VirusTotal detection ratio & most common vendor classification.
- Chapter III — Static Analysis Findings (max 1.0 p): Significant strings & their interpretation · PE header analysis (key imports, section anomalies, entropy, compilation timestamp) · Evidence of packing, encryption, or obfuscation.
- Chapter IV — Dynamic Analysis Findings (max 1.5 p): Process behaviour, injections, privilege escalation · File system & registry changes (full paths) · Network behaviour (IPs, domains, ports, protocols, HTTP samples) · Persistence mechanisms · MITRE ATT&CK mappings (≥ 3).
- Chapter V — Detection Artefacts (max 0.75 p): Complete, formatted YARA rule with inline comments explaining each string and condition · IOC summary table: file hashes, IPs, domains, registry keys.
- Chapter VI — Conclusions & Recommendations (max 0.75 p): Overall risk classification · Recommended immediate containment actions · Long-term defensive improvements · Limitations of this analysis & suggested further investigation.
Screenshot, Image & Hyperlink Standards
- Annotate every screenshot with arrows, boxes, or callouts to direct the reader's eye to the relevant element.
- Add a caption below each screenshot in the format: "Figure X: [Brief description of what is shown and why it is significant]."
- Ensure screenshots are legible. Zoom in or crop before including. A screenshot where key data is unreadable serves no analytical purpose.
- Maximum 8 screenshots in the main body of the report. Additional screenshots may be placed in an appendix, clearly referenced.
- Do not reproduce copyrighted images or diagrams without proper attribution. Use the format: Author, Year, Title, URL, Date accessed.
- For icons, logos, or tool interface screenshots, note that tool interfaces (Wireshark, Process Monitor) are generally covered by the tool's own licence — use them for educational commentary.
- Prefer your own original screenshots over reproduced external images wherever possible.
- All tool names mentioned in your report must be hyperlinked to the tool's official website or documentation page.
- All external references — CVEs, MITRE ATT&CK techniques, threat reports — must include a full hyperlink and the date of access.
- Hyperlinks must be visible. Use descriptive anchor text: "official YARA documentation" is good; "click here" is not.
Academic Integrity & Anti-Plagiarism
All work submitted must be your own original analysis. The following points are strictly enforced:
- Cite every external source you use, including VirusTotal reports, blog posts, tool documentation, and academic papers. Use the following format: Author (Year). Title. Source. URL. Accessed: YYYY-MM-DD.
- Do not copy-paste analysis text from existing public malware analysis reports (e.g., Malware Traffic Analysis, Any.Run, Hybrid Analysis). Use these only as reference benchmarks and compare against your own findings.
- Your YARA rule must be original. You may use public YARA rules as a learning reference but the rule submitted in your report must be written by you, based on YOUR analysis of YOUR sample.
- AI-assisted writing is not prohibited but AI-generated analysis is. If you use AI tools (e.g., for grammar or formatting), you must disclose this. Your analytical findings — hashes, strings, registry keys, IOCs — must come from your own hands-on investigation.
Package, name, submit.
Submit all files through the designated Learning Management System (LMS) submission portal as instructed by your instructor. File naming conventions must be followed exactly — incorrectly named files may be returned unmarked.
- PDF — Malware Analysis Report (`LASTNAME_Firstname_Day06_MalwareAnalysis.pdf`): Your formal 5-page (max) report as a Portable Document Format file.
- YAR — YARA Rule (`LASTNAME_Firstname_Day06.yar`): Your original, custom YARA rule — plain text, ready to load with the `yara` CLI.
- CSV — IOC Summary (`LASTNAME_Firstname_Day06_IOCs.csv`): Comma-separated values with columns: `Type`, `Value`, `Description`.
- ZIP — Screenshot Archive (`LASTNAME_Firstname_Day06_Screenshots.zip`): All screenshots used in the report, collected in a single compressed archive.
Midnight (Eastern Time), Sunday, May 10, 2026.
Late submissions will be subject to the penalty policy communicated at the start of the course. Technical issues are not grounds for extension unless reported to the instructor BEFORE the deadline.
Malware analysis is one of the most demanding and rewarding disciplines in the cybersecurity field. The professionals who do this work daily — reverse engineers, threat intelligence analysts, incident responders — are among the most skilled in the industry.
This laboratory gives you a small but genuine taste of their craft. Approach it with curiosity, rigour, and a healthy respect for what you are handling. The skills you develop here form the foundation of your ability to defend, investigate, and respond to real-world threats.
© 2026 Cyber.SoHo — All Educational Materials Are the Exclusive Property of Cyber.SoHo
All Rights Reserved