Executive Summary: The Imperative for a New Dialogue
In the contemporary operational environment, defined by industrialized cyber warfare and systemic supply chain compromises—exemplified by the Salt Typhoon and Volt Typhoon campaigns—the evaluation of network vendors has shifted from a procurement checklist to a strategic imperative. Organizations can no longer rely solely on perimeter defenses; they must assess the intrinsic security posture of the products that constitute their infrastructure. The traditional vendor assessment model, often characterized by static “yes/no” spreadsheets and compliance checkboxes, has failed to prevent large-scale compromises. A vendor can be fully compliant on paper while remaining operationally fragile.

This report is an updated and expanded version of the “Meaningful Security Conversations” framework. It operationalizes the concept of a dialectic assessment, enabling non-technical and semi-technical stakeholders to sit down with their vendors over a series of meetings to ask core questions that foster a conversation about the product’s Digital Safety, Cybersecurity, and Product Security.
The goal is not to find a vendor with zero vulnerabilities, as no such product exists. The goal is to determine the vendor’s maturity, the depth of their security development lifecycle (SDL), and their capacity for rapid response during crisis scenarios.1 “Gaps are OK. Gaps can be resolved. Known gaps can have workarounds until a resolution is reached. Hidden gaps are not OK and would be a red flag to the trust with the vendor”.3 This document expands the conversation into nine distinct phases, incorporating modern requirements such as Software Bills of Materials (SBOMs), AI safety governance, and defense against “Living off the Land” (LotL) techniques. It provides an extensive reading and reference section to help newcomers navigate the complex landscape of standards and threat intelligence.
Introduction: Moving from Compliance to Dialectic Assessment
For decades, the industry has relied on the Request for Proposal (RFP) security questionnaire as the primary gatekeeper for vendor risk. However, history—marked by events such as SolarWinds, Log4Shell, and the sophisticated state-sponsored intrusions into telecommunications infrastructure—has proven that a vendor’s certification status is less predictive of security outcomes than its engineering culture, transparency, and architectural resilience.4

The “Meaningful Security Conversations” approach shifts the engagement from an adversarial audit to a collaborative risk assessment. It uses a dialectical approach: asking open-ended questions that require the vendor to explain how they achieve security, rather than whether they possess a specific control. This methodology forces the vendor to consult their internal Subject Matter Experts (SMEs) rather than relying on pre-canned sales responses.1
The Core Principles of Engagement
- Trust Your Business Instincts: You do not need to be a code auditor or a cryptographer to lead these conversations. If a vendor cannot explain their security process in plain language, they likely do not understand it themselves. Trust your “BS meter.” If an answer feels evasive, overly complicated, or filled with marketing jargon, it is a red flag.5
- Time is the Primary Investment: Security requires time. Executive leadership must mandate that engineering and procurement teams dedicate time to these dialogues. A rushed procurement process is a security vulnerability in itself. The number one investment required for security is not capital for tools, but the time for teams to engage in these detailed dialogues.1
- Documentation is Mandatory: Conversations must be documented. Asking questions in writing and requiring written responses forces the vendor to commit to a position. It creates an audit trail and often reveals gaps that verbal assurances gloss over. If a vendor creates a feature request to fix a security gap, that is a victory. If they hide it, that is a risk.1
- Acceptance of Imperfection: The goal is not to find a vendor with zero vulnerabilities. The goal is to find a vendor who is honest about their vulnerabilities and has a mature plan to fix them. A vendor who claims “we have no security issues” is either lying or incompetent.3

Phase 1: The Foundation – Crisis Management and Vulnerability Response
The first phase of the conversation focuses on reaction. Since vulnerabilities are a “when, not if” reality, the most critical attribute of a vendor is their ability to manage a crisis. If a vendor cannot handle a vulnerability report professionally, they are a liability to your organization. You need to know that when the next major vulnerability hits the news cycle on a Friday afternoon, your vendor is already working on it.

Core Question 1.1: The Vulnerability Management Process
Question: “Walk us through your official Vulnerability Management Process. When a researcher or customer finds a flaw in your product, what are the specific steps you take from intake to patch release?”
Context: You are looking for a defined, repeatable process. Ad hoc responses lead to chaos during a crisis. A mature vendor treats a vulnerability report like a fire alarm: they have a procedure to handle it efficiently and safely.
Expected Answer (The Gold Standard):
The vendor should describe a formal process that aligns with international standards such as ISO/IEC 30111 (Vulnerability Handling Processes) or ISO/IEC 29147 (Vulnerability Disclosure).7 They should mention:
- Triage: How they validate the report to ensure it is real.
- Assessment: How they score severity (e.g., using the Common Vulnerability Scoring System or CVSS).
- Remediation: Their internal Service Level Agreements (SLAs) for fixing critical versus low-severity bugs (e.g., “We fix critical bugs within 7 days”).
- Communication: How they notify customers (e.g., Security Advisories pushed to your inbox, not just posted on a hidden webpage).
Red Flags:
- “We fix bugs as they come up.” (Indicates no process).
- “Our software is secure, so we don’t get many reports.” (Indicates lack of visibility or arrogance).
- The vendor cannot cite a standard or framework they follow.
- They mention they only patch during major feature releases.
Core Question 1.2: Public Accessibility
Question: “Do you have a publicly accessible ‘/security’ page (e.g., vendor.com/security) that lists your security team’s contact information and PGP keys?”
Context: Security researchers need a way to report bugs without going through customer support. If a researcher cannot find a security contact, they may publish the vulnerability publicly (“full disclosure”) out of frustration, leaving you exposed before a patch exists. A visible front door is a sign of maturity when it comes to security.1
Expected Answer:
- “Yes, our security page is easy to find. It lists our email (security@vendor.com), our PGP key for encrypted communication, and our disclosure policy.”
- They should demonstrate participation in the security community, not isolation.
Red Flags:
- The vendor requires a support contract to report a security flaw.
- Reporting a bug requires navigating a sales IVR or ticketing system designed for feature requests.
- They ask you to email the general “info@” or “support@” address.
Core Question 1.3: Product Security Incident Response Team (PSIRT)
Question: “Do you have a dedicated Product Security Incident Response Team (PSIRT) that operates independently of the development team? Are they available 24/7/365?”
Context: Developers are incentivized to release features and meet deadlines. A PSIRT is incentivized to protect the product’s integrity and the customer’s safety. These interests sometimes conflict. You need to know there is a team empowered to stop a release if a security flaw is found. Reference the FIRST PSIRT Maturity Document for benchmarks.7
Expected Answer:
- The vendor describes a specific team with the authority to declare a security embargo or stop a shipment.
- They confirm availability for critical incidents, even on holidays (adversaries do not take holidays).
- They participate in industry trust groups to share intelligence.
Red Flags:
- “Our lead developer handles security.” (This creates a conflict of interest).
- “We handle security issues during business hours.”
- They don’t know what a PSIRT is.
Core Question 1.4: Coordinated Vulnerability Disclosure (CVD)
Question: “Do you have a published Coordinated Vulnerability Disclosure (CVD) policy? Do you provide ‘safe harbor’ for good-faith security researchers?”
Context: Legal threats against researchers chill the ecosystem. A mature vendor encourages researchers to report bugs by promising not to sue them if they follow the rules. This is a requirement under the new CISA Secure by Design Pledge.10 Without safe harbor, researchers will sell bugs to brokers or keep them secret, meaning the “bad guys” find them before you do.
Expected Answer:
- The vendor confirms they have a policy that authorizes testing (within limits) and commits not to pursue legal action against ethical reporters.
- They view researchers as an extension of their quality assurance, not as enemies.
Red Flags:
- The vendor has a history of threatening researchers or using DMCA takedowns to hide security research.
- Their policy is punitive rather than collaborative.
Phase 2: Secure by Design and Default
Once you establish how the vendor reacts to failure, the conversation shifts to prevention. This phase assesses whether the vendor treats security as a core architectural requirement or a “bolt-on” feature. We want to know if the product is designed to be hard to hack, or if security is left up to the user to configure correctly. This section draws heavily from the CISA Secure by Design principles.12

Core Question 2.1: Default Credentials
Question: “Does your product ship with any default passwords (e.g., admin/admin)? If so, are they unique per device, or shared across the product line?”
Context: Default passwords are a primary vector for botnets (such as Mirai) and state-sponsored actors (such as Salt Typhoon) to gain initial access.14 The UK PSTI Act and EU Cyber Resilience Act largely ban universal default passwords. A product that ships with “admin/admin” in 2026 is effectively broken by design.
Expected Answer:
- “We have eliminated default passwords. The product forces the user to set a strong password upon first boot,” OR “Each device comes with a unique, random password printed on a label.”
- Reference to the CISA Secure by Design Pledge goal of eliminating default passwords.12
Red Flags:
- “Yes, the default is admin/password, but we tell users to change it in the manual.” (Users rarely read the manual; security should not depend on this.)
- “We need a hardcoded backdoor account for support purposes.” (This is a critical security failure and an immediate disqualifier).
Core Question 2.2: Multi-Factor Authentication (MFA)
Question: “Does the product support Multi-Factor Authentication (MFA) for all administrative access? Is MFA enforced by default?”
Context: Credential stuffing—where attackers use passwords stolen from one site to log into another—is ubiquitous. Single-factor authentication is no longer sufficient for any management interface. CISA’s Secure by Design Pledge explicitly calls for measurable increases in MFA adoption.10
Expected Answer:
- “MFA is supported and enabled by default for all privileged accounts.”
- “We support standard TOTP (like Google Authenticator) and FIDO2/WebAuthn (hardware keys) for phishing-resistant MFA.”
- “We support SSO (Single Sign-On) so you can use your own Identity Provider (IdP) like Okta or Azure AD.”
Red Flags:
- “MFA is on our roadmap.” (A common delay tactic.)
- “MFA is available only in the ‘Enterprise’ licensing tier.” (Security should not be an upsell; it is a fundamental safety requirement).
Core Question 2.3: Memory Safety
Question: “What is your roadmap for migrating critical code components to memory-safe languages (e.g., Rust, Go, Java, Python) to eliminate buffer overflow vulnerabilities?”
Context: Research from Microsoft and Google indicates that approximately 70% of all critical vulnerabilities are memory safety issues (buffer overflows, use-after-free). Continuing to write new code in C/C++ without strict controls is a liability. This is a key focus of the CISA Secure by Design initiative.12
Expected Answer:
- “We have a defined roadmap. New components are written in Rust or Go. Legacy C++ code is being refactored or wrapped in safe interfaces.”
- “We use extensive fuzzing and memory sanitizers (ASan/MSan) for legacy code we cannot yet migrate.”
Red Flags:
- “We trust our developers to write secure C code.” (Human error is inevitable; this is not a strategy.)
- “We don’t use memory-safe languages because of performance.” (Modern safe languages are performant; this is often an excuse for technical debt).
Core Question 2.4: Attack Surface Reduction
Question: “What services and ports are enabled by default? Do you follow a ‘deny-by-default’ philosophy where features must be explicitly turned on?”
Context: Unnecessary services (Telnet, FTP, UPnP) increase the attack surface. Salt Typhoon exploited legacy protocols left open on edge devices.2 Every open port is a potential entry point.
Expected Answer:
- “The product ships in a ‘hardened’ state. Only essential services (e.g., HTTPS, SSH) are active. All legacy protocols (Telnet, HTTP) are disabled and removed if possible.”
- “We provide a hardening guide that aligns with NIST or CIS Benchmarks.”
Red Flags:
- “We leave everything open for ease of setup.” (Prioritizing convenience over safety is a design flaw).
- “You can disable them, but it might break functionality.”
Phase 3: Supply Chain Transparency (The SBOM & VEX Conversation)
Modern software is assembled, not just written. It typically consists of 80-90% open-source components. You cannot secure what you cannot see. This phase demands transparency into the “ingredients” of the software so you can manage your own risk.8

Core Question 3.1: SBOM Provision
Question: “Can you provide a machine-readable Software Bill of Materials (SBOM) for your product? Does it comply with the CISA 2025 Minimum Elements or the EU Cyber Resilience Act standards?”
Context: An SBOM is a list of ingredients. Without it, you don’t know if you are affected by the next major library vulnerability like Log4Shell. The CISA 2025 update adds requirements for fields like “Component Hash” (to prove the code hasn’t been tampered with) and “Generation Context” (how the SBOM was made).8
Expected Answer:
- “Yes, we provide SBOMs in standard formats like SPDX (ISO/IEC 5962:2021) or CycloneDX.”
- “Our SBOMs include transitive dependencies (the libraries that our libraries use).”
- “We update the SBOM with every release.”
Red Flags:
- “We consider our dependency list proprietary information.” (Security through obscurity).
- “We can give you a PDF list.” (Must be machine-readable: JSON/XML for automated scanning).
- “We only track our top-level dependencies.” (Most vulnerabilities are buried deep in the dependency tree).
Core Question 3.2: Vulnerability Exploitability eXchange (VEX)
Question: “Do you provide VEX documents alongside your SBOMs to tell us which vulnerabilities are actually exploitable?”
Context: An SBOM might list a vulnerable library (e.g., OpenSSL), but the product might not use the vulnerable function. VEX allows the vendor to say “Status: NOT_AFFECTED” so you don’t waste time patching phantom bugs. This is critical for reducing alert fatigue.8
Expected Answer:
- “Yes, we publish VEX statements using the OpenVEX or CSAF format.”
- “We use status justifications like ‘Vulnerable_code_not_in_execute_path’ to reduce false positives for our customers.”
Red Flags:
- “What is VEX?” (Indicates lack of maturity in modern supply chain security).
- “Just scan our product with your own tools.” (Shifts the burden of triage to the customer).
Core Question 3.3: Supply Chain Integrity
Question: “How do you protect your software signing keys? Do you verify the provenance of the third-party code you ingest?”
Context: If a threat actor steals the vendor’s signing key (like in the SolarWinds or recent Microsoft incidents), they can impersonate the vendor and push malware as a legitimate update. You need assurance that the software really comes from them.
Expected Answer:
- “Signing keys are stored in Hardware Security Modules (HSM) with strict access controls.”
- “We use the SLSA (Supply-chain Levels for Software Artifacts) framework to verify build integrity.”
- “We scan all third-party code for malware and license issues before inclusion.”
Red Flags:
- “Keys are stored on a developer’s laptop.”
- “We don’t sign our firmware/software.”
Phase 4: The Architecture of Trust – Management Plane & Access
This phase addresses the specific tactics used by advanced adversaries like Salt Typhoon and Volt Typhoon to compromise critical infrastructure. These actors target the “management plane”—the administrative interfaces used to control networks. If they control the management plane, they control the network.2

Core Question 4.1: Management Plane Isolation
Question: “How is the management plane isolated from the data plane? Can management traffic be strictly restricted to a specific Virtual Routing and Forwarding (VRF) instance or physical port?”
Context: Salt Typhoon exploited routers where management interfaces were exposed to the internet or the general user network. Management interfaces (SSH, Web UI) should never be reachable from the public internet. This is a critical architectural requirement.4
Expected Answer:
- “Our architecture completely separates management traffic. It uses a dedicated out-of-band port or a Management VRF that does not route to the internet.”
- “We support Access Control Lists (ACLs) that restrict management access to specific IP ranges.”
Red Flags:
- “The management interface listens on all ports by default.”
- “You cannot disable the web UI on the public interface.”
Core Question 4.2: Legacy Protocol Hygiene
Question: “Do you still support legacy protocols like Telnet, HTTP (non-SSL), or SNMPv1/v2? If so, are they disabled by default and hard to enable?”
Context: Salt Typhoon utilizes legacy protocols to move laterally. SNMP community strings are often sent in cleartext, allowing attackers to harvest credentials. “Legacy” equals “Liability”.2
Expected Answer:
- “We have removed Telnet code entirely.”
- “We default to SNMPv3 with encryption. If you must use SNMPv2, the system warns you of the risk.”
- “All web management is forced to HTTPS with strong TLS 1.3 encryption.”
Red Flags:
- “We keep Telnet for backward compatibility with older scripts.” (This leaves a door open for attackers).
Core Question 4.3: Living off the Land (LotL) Prevention
Question: “Does the device contain built-in tools like ‘Guest Shells,’ Python environments, or packet capture tools? Can these be disabled or restricted?”
Context: “Living off the Land” refers to attackers using the system’s own administration tools against it (e.g., using a router’s built-in Python to run a script, or its packet capture tool to sniff traffic). Salt Typhoon used Cisco Guest Shells to hide malicious activity inside a legitimate container.2
Expected Answer:
- “Features like Guest Shells are disabled by default.”
- “If enabled, they run in a strictly sandboxed container with no root access to the host system.”
- “Usage of these tools generates high-priority logs that cannot be locally deleted (Immutable Logging).”
Red Flags:
- “The device is an open Linux box; you can run whatever scripts you want.” (While flexible, this is a massive security risk if not hardened).
- “We don’t log usage of internal diagnostic tools.”
Phase 5: The Factory Floor – Secure Development Lifecycle (SDL) & Testing
You cannot test quality into a product at the end; it must be built in. This phase interrogates the vendor’s “factory floor” practices. It reveals if they treat security as a science or an art.1

Core Question 5.1: Automated Testing and Fuzzing
Question: “Do you use automated Fuzz Testing on your protocol parsers (e.g., HTTP, SNMP, IPsec)? How often is this testing performed?”
Context: Fuzzing involves throwing random, malformed data at a system to see if it crashes. It is the only way to find edge-case memory corruption bugs that human testers miss. If the vendor doesn’t fuzz their code, the attackers will.1
Expected Answer:
- “We run continuous fuzzing in our CI/CD pipeline.”
- “We use commercial fuzzers (like Defensics) and open-source tools (like AFL++).”
Red Flags:
- “We rely on manual QA testing.” (Manual testing cannot catch complex parser logic errors).
- “We fuzz once before a major version release.” (Code changes daily; testing must be continuous).
Core Question 5.2: Static and Dynamic Analysis
Question: “Do you use Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) as gating criteria for code release?”
Context: SAST scans the source code; DAST scans the running application. A mature SDL requires both. “Gating criteria” means the software build fails automatically if a vulnerability is found.
Expected Answer:
- “Yes, a build fails automatically if SAST identifies critical vulnerabilities.”
- “We perform DAST scans on every nightly build.”
Red Flags:
- “We run scans occasionally.”
- “We don’t let tools block a release; we fix it in the next patch.”
Core Question 5.3: Root Cause Corrective Action (RCCA)
Question: “When a vulnerability is found, do you perform Root Cause Corrective Action (RCCA) to eliminate the class of vulnerability, or do you just patch the specific instance?”
Context: If a vendor patches a SQL injection bug but doesn’t change their coding guidelines to require parameterized queries, they will have another SQL injection bug next week. RCCA improves the process, not just the product.1
Expected Answer:
- “We analyze the root cause. For example, after finding a buffer overflow, we audited all similar code and updated our linter rules to prevent that pattern.”
Phase 6: Cloud & SaaS Security (Shared Responsibility)
If the vendor provides a Cloud or SaaS solution, the questions must shift to data residency, tenancy, and cloud-specific controls. You are outsourcing your infrastructure; you must validate their stewardship.8

Core Question 6.1: Data Residency and Sovereignty
Question: “Where is our data stored and processed? Can we pin our data to a specific geographic region (e.g., EU, US) to meet compliance requirements?”
Context: Regulatory frameworks like GDPR and data sovereignty laws require strict control over where data lives.
Expected Answer:
- “You can select your data region. We guarantee data will not leave that region for processing or backups.”
- “We comply with local data sovereignty laws.”
Red Flags:
- “We optimize for performance, so data moves dynamically globally.”
- “We cannot guarantee data stays in the EU.”
Core Question 6.2: Tenant Isolation
Question: “How do you ensure tenant isolation? If another customer is compromised, how do you guarantee they cannot access our data?”
Context: In a multi-tenant cloud, a vulnerability in the hypervisor or application logic could allow “cross-tenant” attacks.
Expected Answer:
- “We use logical isolation at the database level and strict IAM policies.”
- “For high-security customers, we offer single-tenant instances (dedicated infrastructure).”
Red Flags:
- “Our software handles separation logically, but we use a shared database without row-level security.”
Phase 7: The AI Safety Conversation
As vendors rush to integrate AI and Large Language Models (LLMs) into their products, new risks emerge: data leakage, hallucinations, and prompt injection. This phase is critical for any product with “Smart” or “AI-powered” features.17

Core Question 7.1: Data Residency and Training
Question: “Is our data used to train your AI models? Is our data segregated from other customers? Where does the inference processing happen?”
Context: You need to know if your proprietary data is feeding a public model that could leak your secrets to a competitor.
Expected Answer:
- “Customer data is NOT used to train public models. We use a private instance for your organization.”
- “We have a clear ‘opt-in’ policy for data training.”
- “Data processing occurs within, and we have Data Processing Agreements (DPAs) in place.”
Red Flags:
- “We use all customer interactions to improve the product.” (This implies training on your data).
- “We use a third-party public API” (without clarifying data privacy protections).
Core Question 7.2: Model Security
Question: “How do you protect your AI models against prompt injection, poisoning, and inversion attacks?”
Context: Attackers can trick AIs into revealing system instructions or bypassing safety filters (prompt injection). They can also poison training data to manipulate outputs.17
Expected Answer:
- “We use input validation and sanitation layers before the prompt reaches the LLM.”
- “We constantly red-team our models against jailbreaking techniques.”
Red Flags:
- “Our AI is secure because nobody knows the prompt.” (Security through obscurity).

Phase 8: Long-Term Resilience and End-of-Life
Security is a lifecycle commitment. Hardware and software age. This phase ensures you aren’t left with “zombie” infrastructure that cannot be patched.

Core Question 8.1: Support Lifecycle
Question: “What is the guaranteed support period for security patches? When this product goes End-of-Life (EOL), will you offer extended security support?”
Context: Network equipment often stays in production for 10+ years. If the vendor only supports it for 3 years, you have a 7-year risk window. The EU CRA requires transparency on this support period.18
Expected Answer:
- “We guarantee security updates for 5 years after the last date of sale.”
- “We offer an extended support contract for critical security fixes.”
Red Flags:
- “We support the product as long as it is popular.”
- “Updates are free for the first year, then you must buy a new device.”
Core Question 8.2: Patch Velocity
Question: “What is your Mean Time to Remediate (MTTR) for critical vulnerabilities (CVEs)? Can you push emergency patches outside of your standard release cycle?”
Context: The “half-life” of a vulnerability is days. If the vendor waits for a quarterly release to fix a critical bug, you are exposed for months.1
Expected Answer:
- “For critical CVEs, our target release time is <72 hours.”
- “We support ‘hot patching’ or ‘live patching’ to fix issues without rebooting the system.”
Red Flags:
- “We release patches in our bi-annual firmware updates.”

Phase 9: Transparency and Partnership (Verification)
The final phase determines the cultural fit. You want a partner, not just a supplier. Trust, but verify.

Core Question 9.1: Admission of Failure
Question: “Tell us about a significant security failure you had in the past two years. How did you handle it, and what did you change in your process?”
Context: Every mature vendor has failures. How they talk about them reveals their integrity. A vendor who hides failures will hide them from you when it matters most.
Expected Answer:
- A candid discussion of a specific incident, the timeline of the fix, and the specific process changes (e.g., “We added a new step to our CI/CD pipeline”).
Red Flags:
- “We haven’t had any significant security issues.” (Dishonest or unaware).

Core Question 9.2: Joint Exercises
Question: “Would you be willing to participate in a joint tabletop exercise or ‘fire drill’ with our team to test our combined response to a supply chain breach?”
Context: Testing the human lines of communication before a crisis is invaluable. It ensures you know exactly who to call.16
Expected Answer:
- “Yes, we can schedule a tabletop exercise with our incident response team.”
Conclusion: The “All-In” Vendor
By working through these nine phases, you will categorize your vendors into two buckets:
- Compliance-Focused: They check the box with a “yes,” but struggle with the “how.” They view security as a cost center. They rely on marketing fluff.
- Resilience-Focused (“All-In”): They engage in the conversation. They admit gaps and provide roadmaps. They view security as a differentiator. They can explain why they made certain architectural choices.

The goal of “Meaningful Security Conversations” is to identify the “All-In” vendors and build strategic partnerships with them. For the Compliance-Focused vendors, the goal is to use these questions to drive them toward maturity—or to document the risk clearly enough to justify switching to a better partner.
Remember: You can outsource the work, but you cannot outsource the risk. These conversations are your primary tool for managing that risk. The gap between what a vendor says and what they do is where your risk lives. It is your job to close that gap.

Appendices: Extensive Reading and Reference Section
This section provides the authoritative sources used to build this guide. It is categorized to help readers deepen their knowledge in specific domains.
1. General Vendor Assessment Guides
- Meaningful Security Conversations with Your Vendors (Senki.org): The foundational text for this report. Provides conversation scripts and checklists. 1
- NCSC Vendor Security Assessment: UK government guidance on assessing network equipment security, focusing on sustained monitoring and spot checks. 17
- Bitsight Vendor Due Diligence: A framework for scoring vendors based on cybersecurity, compliance, and stability. 17
2. Supply Chain & SBOM (Software Bill of Materials)
- CISA 2025 Minimum Elements for SBOM: The definitive US government standard for what an SBOM must contain (Author, Timestamp, Hash, License, etc.). 8
- EU Cyber Resilience Act (CRA): Mandatory requirements for products in the EU, including SBOMs and 24-hour reporting. 18
- NTIA SBOM Resources: Historical context and framing for software transparency. 9
- CycloneDX & SPDX: The two primary machine-readable formats for SBOMs. 8
- VEX (Vulnerability Exploitability eXchange): CISA guidance on how to communicate whether a vulnerability in an SBOM is actually exploitable. 8
3. Secure by Design & Development
- CISA Secure by Design Pledge: A voluntary pledge for manufacturers to adopt goals like MFA, removing default passwords, and reducing vulnerability classes. 10
- NIST SP 800-218 (SSDF): The Secure Software Development Framework. Mapped to EO 14028. 7
- BSIMM (Building Security In Maturity Model): A descriptive model of real-world software security initiatives. 1
- OWASP SAMM: A prescriptive model for software assurance maturity. 1
4. Vulnerability Management & PSIRT
- FIRST PSIRT Maturity Document: A framework for grading the maturity of a Product Security Incident Response Team. 7
- ISO/IEC 30111 & 29147: International standards for vulnerability handling and disclosure. 7
- CERT Guide to Coordinated Vulnerability Disclosure (CVD): Practical guidance for vendors and researchers. 7
5. Threat Intelligence & Case Studies (Salt Typhoon/Volt Typhoon)
- CISA/NSA/FBI Joint Advisory on Salt Typhoon: Detailed TTPs including modification of router configurations, lateral movement, and living off the land. 2
- Living off the Land (LotL) Guidance: Joint guidance on how adversaries use built-in tools (PowerShell, wmic, Netsh) to evade detection. 19
- Vectra AI & Versa Networks Guides: Questions specifically for evaluating AI vendors and post-compromise detection capabilities. 17
6. Regulatory & Regional Frameworks
- EU Cyber Resilience Act (CRA): Regulation (EU) 2024/2847. Mandatory cybersecurity requirements for products with digital elements. 18
- NIST SP 800-161r1: Cybersecurity Supply Chain Risk Management Practices. 8
- BSI TR-03183: German technical guideline for CRA compliance and SBOMs. 8
7. AI Security
- NIST AI Risk Management Framework (AI RMF): Standards for managing risks in AI systems. 17
- Delve & Delinea AI Questionnaires: Specific questions for evaluating AI governance, data handling, and model security. 17
Table: Summary of Key Maturity Indicators
| Domain | Low Maturity Indicator (Red Flag) | High Maturity Indicator (Gold Standard) |
| Vuln Response | “We fix bugs when found.” | Published CVD Policy, ISO 30111 compliance, 24/7 PSIRT. |
| Authentication | Default passwords, no MFA. | No default passwords, MFA by default (FIDO2), SSO support. |
| Supply Chain | “It’s proprietary.” PDF lists. | Machine-readable SBOM (SPDX/CycloneDX), VEX documents. |
| Architecture | Management exposed to internet. | Management Plane Isolation (VRF/OOB), Legacy protocols removed. |
| Development | Manual QA testing. | Automated Fuzzing, SAST/DAST gating in CI/CD pipeline. |
| AI Safety | “We use your data to improve.” | Private instances, Opt-in training, Data Processing Agreements. |
| Resilience | End of support = End of security. | Extended support contracts, defined EOL transition plans. |
End of Report.
Works cited
- Comprehensive Guide to Evaluating Network Vendor Product Security: Maturity, Resilience, and Rapid Response
- Countering Chinese State-Sponsored Actors Compromise of … – CISA, accessed January 22, 2026, https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a
- Demand Security from your Vendors – APRICOT 2021, https://drive.google.com/open?id=1BKydk-7LFwL-kZvcaG6PD3T1CQIx09sQ2T93QUntH8o
- The Largest Telecommunications Attack in U.S. History: What Really …, accessed January 22, 2026, https://blog.checkpoint.com/security/the-largest-telecommunications-attack-in-u-s-history-what-really-happened-and-how-we-fight-back/
- Meaningful Security Conversions Questions to ask vendors to gauge their commitment to “Secure Products” and Demand Security, https://drive.google.com/open?id=18FFFfI5b5PJBLgamN65xbj2owhAOSwhBfCKu1oNnIpE
- Demand Security from your Vendors – Updated 2021, https://drive.google.com/open?id=17FGt2XhgcIohULS1krVUYysdEC0I9zUCG6XSxZwG19M
- Claude – Step 1 – Network Vendor Product Security Evaluation Guide
- Claude – Step 1 – Network Vendor Product Security Evaluation Guide 1.1
- ChatGTP – Step 1 – Report Generated: 2026-01-22
- Secure by Design Pledge – CISA, accessed January 22, 2026, https://www.cisa.gov/securebydesign/pledge
- Rewind signs the CISA “Secure by Design” pledge, accessed January 22, 2026, https://rewind.com/blog/rewind-signs-cisa-secure-by-design-pledge/
- CISA Secure by Design Pledge, accessed January 22, 2026, https://www.cisa.gov/resources-tools/resources/cisa-secure-design-pledge
- Trend Micro and CISA Secure-By-Design Pledge, accessed January 22, 2026, https://www.trendmicro.com/en_us/research/25/a/cisa-secure-design-pledge.html
- Salt Typhoon Attack: 3 Lessons for Tech and Cybersecurity Pros – Dice, accessed January 22, 2026, https://www.dice.com/career-advice/salt-typhoon-attack-3-lessons-for-tech-and-cybersecurity-pros
- Salt Typhoon – NJCCIC, accessed January 22, 2026, https://www.cyber.nj.gov/threat-landscape/nation-state-threat-analysis-reports/china-linked-cyber-operations-targeting-us-critical-infrastructure/salt-typhoon
- M3AAWG 46 – Demand Security from your Vendors, https://drive.google.com/open?id=1-S1QOqCRHQNdQvkWHAaTIqUJb9-PqOnobdY-rmk7UGQ
- Perplitity – Step 1 – Report Generated: 2026-01-22
- Grok – Step 1 – Report Generated: 2026-01-22
- Identifying and Mitigating Living Off the Land Techniques – CISA, accessed January 22, 2026, https://www.cisa.gov/sites/default/files/2025-03/Joint-Guidance-Identifying-and-Mitigating-LOTL508.pdf
- Identifying and Mitigating Living Off the Land Techniques – CISA, accessed January 22, 2026, https://www.cisa.gov/resources-tools/resources/identifying-and-mitigating-living-land-techniques
