
How we engage

Outcome-priced, not seat-priced. Measured, not promised.

We frame engagements around what changes in your queue, your dwell time, your audit posture, and your AI risk surface. List prices arrive when packaging is finalized; outcomes are committed before contracting.

Ideal customer profile

Three segments where the operating model fits cleanly.

We are direct about who this is for. If your stack and posture do not match, we will tell you in the first meeting and refer you to a partner that fits.

Cloud-native mid-market

200–2,000 employees, AWS / GCP / Azure first, EDR + SIEM in place, 1–10 named security engineers. Buying signal: dwell time and FP rate are tracked, not hidden.

Regulated mid-enterprise

Financial services, healthcare, life sciences, regulated SaaS. Audit cycles drive procurement. Buying signal: SOC 2 Type II is contractual, FedRAMP or HITRUST is on the roadmap.

AI-heavy SaaS and platforms

Companies shipping LLM agents to production. Real prompt-injection threat model, real OAuth-grant blast radius. Buying signal: 'we do not have a way to detect agent abuse today.'

Engagement tiers

Three tiers. Mix and match.

Each tier is priced to an outcome we will commit to before contracting. List prices land when packaging is finalized; until then, scoping is concrete and bounded.

Tier · Operate

Detection and response on your stack.

IronSOC operates as the detection, triage, and response layer above your existing SIEM and EDR. AI does evidence prep and recommends; analysts approve business-impacting actions; recovery runbooks are co-authored during onboarding.

  • Cross-surface detection: identity, cloud, AI, exploited-vuln
  • 24×7 coverage with bounded automation and approval gates
  • Quarterly business review on dwell time, FP rate, MTTR

Tier · AI defend

LLM and agent runtime defense.

Wraps your production AI surface — prompts, retrieved context, tool calls, MCP servers, OAuth grants — with detection, policy, and approval gates. Pairs with red-team campaigns so findings ship as runtime detections, not PDFs.

  • OWASP LLM Top 10 + MITRE ATLAS coverage
  • Tool registry, scope drift, shadow-AI detection
  • Eval set under version control with measurable lift over time
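To make the tool-registry and scope-drift bullets concrete, here is a minimal sketch of the idea. The registry schema, agent names, and scope strings are illustrative assumptions, not IronSOC's actual format: observed OAuth grants are compared against a registry of expected scopes per agent, and anything outside the registry surfaces as scope drift or shadow AI.

```python
# Hypothetical tool registry: agent name -> expected OAuth scopes.
# Names and scopes are illustrative, not a real IronSOC schema.
REGISTRY = {
    "billing-agent": {"invoices:read", "customers:read"},
    "support-agent": {"tickets:read", "tickets:write"},
}

def scope_findings(observed_grants):
    """Compare observed OAuth grants against the tool registry.

    observed_grants: dict of agent name -> set of granted scopes.
    Returns a list of (agent, issue, detail) findings.
    """
    findings = []
    for agent, scopes in observed_grants.items():
        expected = REGISTRY.get(agent)
        if expected is None:
            # Agent is granting scopes but was never registered: shadow AI.
            findings.append((agent, "shadow-ai", "agent not in registry"))
            continue
        drift = scopes - expected
        if drift:
            # Agent holds scopes beyond what the registry allows.
            findings.append((agent, "scope-drift", sorted(drift)))
    return findings
```

Feeding this a grant snapshot where `billing-agent` has picked up `payments:write` and an unregistered `rogue-agent` appears would yield one scope-drift and one shadow-ai finding.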

Tier · Vuln ops

Exploit-aware vulnerability operations.

Risk-ranked remediation queue driven by KEV, EPSS, asset reachability, and business criticality. The same ranking drives detection priority, so patch backlog and detection backlog share one risk model.

  • CISA KEV + EPSS + asset graph fused into one queue
  • Compensating controls applied automatically when patch windows slip
  • SLA reporting on mean-time-to-remediate by exploitation tier
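The fusion described above can be sketched as a single scoring function. The weights, field names, and multipliers here are illustrative assumptions rather than IronSOC's actual model; the point is that KEV membership, EPSS probability, reachability, and business criticality collapse into one number that ranks one queue.

```python
def risk_score(vuln):
    """Fuse exploitation and business signals into one ranking score.

    vuln fields (illustrative, not a real schema):
      kev (bool), epss (float 0-1), reachable (bool), criticality (int 1-5).
    """
    score = vuln["epss"]
    if vuln["kev"]:
        score += 1.0   # known-exploited outranks probability alone
    if vuln["reachable"]:
        score *= 2.0   # unreachable assets drop down the queue
    return score * vuln["criticality"]

def ranked_queue(vulns):
    """Return vulnerabilities ordered by descending fused risk."""
    return sorted(vulns, key=risk_score, reverse=True)
```

Because the same score drives both remediation order and detection priority, the patch backlog and the detection backlog stay sorted by one risk model, as the tier description claims.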

Analyst leverage thesis

Sublinear cost, measured monthly.

The whole AI-SOC bet is that the analyst-to-customer ratio collapses as you scale. We treat that as a measurable claim, not a pitch line. These are the four signals we report in every quarterly business review.

Tickets per analyst hour

We measure how many alerts move from open to closed — with full evidence — per analyst hour. The leverage from AI shows up here or it does not.

Cost-to-serve per protected estate

We track infrastructure and AI-token cost against the number of identities, workloads, and AI agents under coverage. Cost should compress as the estate scales; we report it openly to customers in QBRs.

Eval-set lift over time

Detections are versioned with positive and negative cases. Quarter-over-quarter precision and recall are visible to the customer. Drift is treated as a backlog item.
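A minimal sketch of how quarter-over-quarter precision and recall can be computed from versioned eval cases. The case format and the example detection predicate are assumptions for illustration; the mechanism is just scoring a detection against labeled positive and negative cases each quarter and comparing the numbers.

```python
def precision_recall(detection, cases):
    """Score a detection rule against labeled eval cases.

    detection: predicate over an event dict (illustrative shape).
    cases: list of (event, is_malicious) pairs, kept under version control.
    Returns (precision, recall).
    """
    tp = fp = fn = 0
    for event, is_malicious in cases:
        fired = detection(event)
        if fired and is_malicious:
            tp += 1          # true positive
        elif fired:
            fp += 1          # false positive
        elif is_malicious:
            fn += 1          # false negative (missed detection)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Running this on the same eval set before and after a detection change makes "lift" a diff of two number pairs, and a drop in either number is the drift that goes on the backlog.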

Time-to-first-detection

We commit to a time-to-first-detection target during onboarding. If we miss it, the contract pauses until the gap is closed.

Onboarding timeline

Time-to-first-detection is a contract term.

We publish the timeline before contracting and treat misses as our problem, not yours. The contract pauses on a missed acceptance metric until the gap is closed.

Standard scope: 8 weeks to QBR-ready. Faster on a focused AI-defend tier.

Week 0

Discovery and threat model

Map the stack, the AI surface, the asset graph, and the recovery requirements. Output: scoped operating model and acceptance metrics.

Week 1–2

Source connection

Connect SIEM, EDR, identity, cloud, and AI sources. Run integration smoke checks against a per-source detection sample.

Week 2–4

First detections live

Tier 1 detections promoted through eval CI to production. Time-to-first-detection captured against the contract target.

Week 4–8

Recovery rehearsal

Run tabletop drills against the customer-specific recovery playbooks. Adjust runbooks until the tabletop ends with restoration, not just containment.

Quarterly

Business review

QBR against published metrics: dwell, FP rate, MTTR, cost-to-serve, eval lift, recovery readiness. Misses are owned, not glossed.

References policy

We do not list customer logos until the customer has approved the listing in writing. This page replaces the absent logo wall with the policy itself.

  • References are taken under mutual NDA and after a security review. Direct logos appear here only when the customer has explicitly approved the listing.
  • Until that approval is in hand, we will not imply endorsement through an integration name, a partnership, or a procurement engagement.
  • Reference customers are matched on industry and stack, not just willingness. A FinServ buyer talks to a FinServ reference, not a SaaS one.

Partner posture

Channel, carriers, and implementation.

MSSP and channel

Channel program is targeted for Series A. Today, deals close direct. Resellers and MSSPs interested in the operating model can reach the founding team.

Cyber insurance

We engage carriers as customers go through underwriting. Carrier-specific reporting is available for IronSOC customers who request it.

Implementation partners

Selective. We work with security-focused VARs and SI partners who can co-staff onboarding. We do not sub-contract incident response — that stays with IronSOC.

Scope an engagement