CSCF Analyst Rating™ — how we score cybersecurity consulting firms.

The CSCF Analyst Rating is a 0–100 composite score. Five dimensions, 20 points each. Designed to give buyers a consistent, comparable signal across firms that otherwise present themselves in incomparable ways.

Score structure

Each dimension is scored from 0 to 20, for a maximum composite score of 100. Dimensions are equally weighted. We do not apply multipliers or bonuses. The composite score is rounded to the nearest integer.

Scores are assigned by CSCF Research analysts based on available evidence. When evidence is insufficient to score a dimension, we assign a conservative score and note the gap. We do not extrapolate or assume capability from marketing claims alone.
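The composite arithmetic is simple enough to sketch. A minimal illustration in Python, assuming five 0–20 dimension scores as inputs (the function and dimension names here are hypothetical, not CSCF's own tooling):

```python
# Illustrative composite calculation: five equally weighted 0-20 dimensions,
# no multipliers or bonuses, rounded to the nearest integer.
# Dimension keys are invented for this sketch.

DIMENSIONS = [
    "technical_capability",
    "specialization_depth",
    "client_scale_fit",
    "value_transparency",
    "market_presence",
]

def composite_score(scores: dict[str, float]) -> int:
    """Sum five 0-20 dimension scores into a 0-100 composite."""
    for name in DIMENSIONS:
        if not 0 <= scores[name] <= 20:
            raise ValueError(f"{name} must be between 0 and 20")
    # round() is a no-op when dimension scores are whole numbers
    return round(sum(scores[name] for name in DIMENSIONS))
```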

Tier thresholds

  • Leader (85–100): Exceptional across most dimensions. Clear market authority in primary service area.
  • Strong Performer (70–84): Strong in core area. Reliable choice with competitive value and meaningful track record.
  • Contender (55–69): Competent but with gaps or narrow focus. Suitable for specific buyer segments.
  • Challenger (40–54): Limited track record or emerging in primary area. Potential upside, less data to evaluate.

Rating dimensions

  • Technical Capability: 0–20
  • Specialization Depth: 0–20
  • Client Scale Fit: 0–20
  • Value & Transparency: 0–20
  • Market Presence & Track Record: 0–20
  • Composite: 0–100

Data sources

  • Firm websites and service documentation
  • LinkedIn company and staff profiles
  • Press releases and case studies
  • Conference speaker records
  • Industry analyst mentions
  • Practitioner community discussion
  • Regulatory filings (where applicable)

The five dimensions

01 / 05

Technical Capability

Max 20 points

Depth of hands-on technical expertise in the firm's claimed service areas. We look at staff certifications (CISSP, OSCP, CISA, GIAC, etc.), evidence of advanced practitioner knowledge through published research or conference presentations, technical training requirements for staff, and the firm's track record of discovering novel vulnerabilities or handling sophisticated threats.

  • Staff certification density (certified professionals per team size)
  • Advanced certifications: OSCP, OSCE, GREM, GCFA vs baseline CEH
  • Published CVEs, security research, tool development
  • Conference presence at DEF CON, Black Hat, RSA as speakers (not attendees)
  • Government or military security pedigree where relevant

02 / 05

Specialization Depth

Max 20 points

How focused and deep is the firm's expertise in their declared primary service area? A firm claiming SOC 2 expertise should have conducted hundreds of SOC 2 audits, not dozens. A healthcare-focused firm should employ former clinical staff or healthcare compliance officers, not just consultants with HIPAA training. We penalize breadth that dilutes depth.

  • Years of focused practice in the primary service area
  • Number of engagements completed in the specialty (where disclosed)
  • Industry-specific staff backgrounds (e.g., former hospital CISO, former QSA)
  • Dedicated practice groups vs general consultants assigned to specializations
  • Client case studies with specific, verifiable outcomes

03 / 05

Client Scale Fit

Max 20 points

Does the firm's engagement model, pricing, and methodology match the buyers they claim to serve? A firm that primarily serves Fortune 500 companies will often be mismatched for a 200-person SaaS company — and vice versa. We evaluate whether the firm's minimum engagement size, communication style, and deliverable complexity are calibrated to the right buyer segment.

  • Publicly disclosed minimum engagement size or pricing
  • Client portfolio composition (enterprise vs mid-market vs SMB)
  • Engagement duration and structure (project vs retainer)
  • Scalability of methodology across company sizes
  • Presence in relevant buyer communities (e.g., HIPAA compliance forums for healthcare firms)

04 / 05

Value & Transparency

Max 20 points

Does the firm make it easy to understand what they do, what it costs, and what buyers get? Pricing opacity is a consistent friction point in this market. Firms that publish pricing ranges, scope templates, or clear service definitions earn higher scores here. We also assess whether the firm's deliverables are genuinely useful or primarily compliance theater.

  • Published pricing, ranges, or per-unit rates
  • Clear service scope definitions on website or in proposals
  • Sample reports or methodology documents publicly available
  • Client-readable deliverables vs compliance checkbox artifacts
  • Transparent conflict of interest disclosures

05 / 05

Market Presence & Track Record

Max 20 points

Longevity, client volume, and market reputation are proxies for reliability and institutional knowledge. We weight years in business, publicly disclosed client counts, external recognition (Gartner mentions, Inc. 5000 lists, industry awards where meaningful), and practitioner community reputation. A newer firm can score well here if it has a strong community presence and a disclosed track record.

  • Years in business at primary service focus
  • Disclosed client count or engagement count
  • Industry recognition: Gartner, Forrester mentions, security awards
  • Practitioner community reputation (forums, LinkedIn, security Twitter)
  • Financial stability signals (funding, acquisitions, growth)

Limitations and known gaps

Phase 1 ratings are based on publicly available information only. This introduces structural limitations:

  • Firms that operate quietly — little public content, few case studies, minimal conference presence — will score lower on Market Presence regardless of actual quality. This is a known bias in our data.
  • Specialization Depth is harder to assess without access to engagement records or client interviews. Phase 1 scores in this dimension are more uncertain than others.
  • Value & Transparency scores reflect published information. Firms may have pricing structures that are reasonable but not publicly disclosed.
  • We have not independently verified certifications listed by firms. We rely on public disclosure accuracy.
  • Phase 1 ratings may not reflect very recent changes to firm capabilities, acquisitions, or leadership.

Phase 2 will address these gaps through structured firm interviews, client reference checks, and direct methodology validation. We will publish updated scores with Phase 2 inputs noted.

Rating update policy

Ratings are reviewed quarterly. We update individual firm scores when material changes occur: leadership changes, acquisitions, significant service expansion or contraction, or new public evidence affecting any dimension. Firms can request a review by contacting CSCF Research directly. We do not guarantee review timelines.

All rating changes are versioned. Historical scores are preserved in our methodology documentation.

Methodology version 1.0 · Published February 2026 · CSCF Research