apmp · 6 min read · 2026-04-29

Win themes vs discriminators: what differentiates an SDVOSB proposal that wins?

Win themes are claims about you. Discriminators are claims about you that are unique to you. SDVOSB-specific examples + the test that separates the two.

TL;DR. Win theme: 'We deliver fast.' Discriminator: 'We deliver in 4 weeks because our team is colocated with the customer's PMO and has been since 2022 — verifiable via CPARS V12345.' Discriminators must pass the substitution test: replace your name with a competitor's, and the claim must become false.

The 60-second answer

A win theme is a claim about your capability that supports your win strategy.

A discriminator is a claim about your capability that is uniquely true of your firm in a way the customer can verify.

Most losing SDVOSB proposals have win themes (every proposal does). Most winning SDVOSB proposals have 2-3 discriminators that survive an explicit "could a competitor say the same thing?" challenge.

The substitution test

Take a sentence from your draft. Replace your firm's name with your top competitor's. Re-read.

  • If the sentence is still plausibly true, it's a win theme (or possibly just a marketing claim).
  • If the sentence becomes obviously false or unverifiable, it's a discriminator.

Examples:

  • Sentence: "Northstar delivers high-quality cloud solutions tailored to mission requirements."
    Substitution: "Booz Allen delivers high-quality cloud solutions tailored to mission requirements."
    Result: both still plausibly true → win theme

  • Sentence: "Northstar's team includes 4 SDVOSB-certified engineers with TS/SCI clearance and 7+ years on VA OI&T contracts."
    Substitution: "Booz Allen's team includes 4 SDVOSB-certified engineers..."
    Result: Booz Allen isn't an SDVOSB → discriminator

  • Sentence: "Northstar offers 24/7 customer support."
    Substitution: "Leidos offers 24/7 customer support."
    Result: both still plausibly true → win theme

  • Sentence: "Northstar holds active 8(a) certification (case ID 309876) expiring 2031, allowing direct sole-source eligibility on this requirement."
    Substitution: "Leidos holds active 8(a) certification..."
    Result: Leidos is too large to qualify → discriminator

SDVOSB-specific discriminators that work

The set-aside itself is a discriminator only if the contract is SDVOSB-set-aside. Beyond that:

  1. Veteran-led leadership team with operational experience in the customer's mission space. "Our CTO commanded a Marine Corps signals intelligence company; our solution architect was a Navy nuclear engineer" — verifiable via DD-214 references in the past performance volume.

  2. Cleared bench at the SDVOSB tier. Most SDVOSBs claim TS/SCI-cleared staff; few have 5+ on the bench. Naming the count and citing the latest CAC enrollment date makes the claim discriminating.

  3. Specific past performance on the customer's mission set, not just the customer's agency. "Awarded VA OI&T cloud migration on PIID 36C10X-22-D-0042; CPARS Exceptional on technical execution" beats "Has experience with VA cloud."

  4. Geographic colocation with the customer. "Our team is based in Augusta, GA, 14 minutes from Fort Eisenhower" matters for a Cyber Center contract in ways "we have a national footprint" doesn't.

  5. Tooling or methodology developed in-house. "Our proprietary security baseline tool ATO-Ready accelerates ATO from 18 months to 4 months — used on our prior 6 federal cloud migrations" — assuming the tool actually exists and is sourceable.

What doesn't work as a discriminator

  • "Best-in-class," "world-class," "industry-leading." These are noise. They'd survive substitution; they're not even win themes — they're filler.
  • "Strong customer focus." Every losing proposal contains this phrase.
  • Quantitative claims without a source. "We reduce costs by 30%" with no contract reference is unverifiable; the evaluator marks it down.
  • Tooling claims that don't survive a Google check. If your "proprietary platform" doesn't have a marketing page, it doesn't exist.

The 3-discriminator rule

For a typical SDVOSB technical volume (10-25 pages), aim for 2-3 discriminators total, repeated 5-7 times each across the proposal.

Repetition matters. The evaluator skim-reads. A discriminator that appears once in the technical volume and never again will not register at scoring time.

Pattern: each discriminator appears in:

  1. Executive summary
  2. Win theme statement (top of the technical volume)
  3. The technical section where the discriminator is most relevant
  4. The management approach section ("our [discriminator] enables us to...")
  5. The past performance section ("on PIID X, we used [discriminator]")
  6. The cost narrative ("our [discriminator] reduces our cost by Y%")
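
This coverage check can be run mechanically once the proposal is split into sections. A rough sketch; the section names and keyphrase below are assumptions for the example, not a fixed schema:

```python
REQUIRED_SECTIONS = [
    "executive_summary",
    "win_theme_statement",
    "technical_section",
    "management_approach",
    "past_performance",
    "cost_narrative",
]

def repetition_report(sections: dict[str, str], keyphrase: str) -> dict[str, int]:
    """Count case-insensitive occurrences of a discriminator keyphrase per section."""
    return {
        name: sections.get(name, "").lower().count(keyphrase.lower())
        for name in REQUIRED_SECTIONS
    }

# Example with placeholder section text.
proposal = {name: "" for name in REQUIRED_SECTIONS}
proposal["executive_summary"] = "Our team sits 14 minutes from Fort Meade, enabling..."

report = repetition_report(proposal, "14 minutes from Fort Meade")
total = sum(report.values())
missing = [name for name, count in report.items() if count == 0]
print(f"total repetitions: {total}; sections missing the discriminator: {missing}")
```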

Win theme structure

For each discriminator, a one-sentence win theme:

[Customer], by [discriminator], you will [benefit/hot button], as proven by [proof point].

Example:

"DISA, by selecting Northstar's TS/SCI-cleared 7-engineer team — based 14 minutes from Fort Meade — you will achieve continuous on-site collaboration with no transit delays, as proven by our 96% on-site uptime over 24 months on PIID HC102819C0042 (CPARS Very Good)."

This is unwieldy as written prose; it's a structural exercise. The actual sentence in the proposal might be 1/3 the length, but the underlying logic is checked against this template.
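
Because the template is a fixed sentence frame, it can be written as a format string, which is mostly useful as a checklist that all four slots are actually filled. A throwaway sketch using the DISA example above:

```python
def win_theme(customer: str, discriminator: str, benefit: str, proof: str) -> str:
    """[Customer], by [discriminator], you will [benefit], as proven by [proof]."""
    return f"{customer}, by {discriminator}, you will {benefit}, as proven by {proof}."

print(win_theme(
    customer="DISA",
    discriminator="selecting Northstar's TS/SCI-cleared 7-engineer team based "
                  "14 minutes from Fort Meade",
    benefit="achieve continuous on-site collaboration with no transit delays",
    proof="our 96% on-site uptime over 24 months on PIID HC102819C0042 "
          "(CPARS Very Good)",
))
```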

How our agent handles this

When the agent drafts a capability statement or proposal section, the reflect_and_critique step explicitly runs the substitution test on every claim. If a claim survives substitution, it's flagged as "win theme — consider strengthening or removing." Only claims that fail substitution are kept as discriminators.

The agent also tracks discriminator repetition count per proposal. Below 5 repetitions = warning. Below 3 = forced rewrite.
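
A simplified sketch of that gating logic, not the agent's actual implementation; the verdict labels are placeholders:

```python
def repetition_verdict(count: int) -> str:
    """Map a discriminator's total repetition count to a review gate."""
    if count < 3:
        return "forced rewrite"
    if count < 5:
        return "warning"
    return "ok"

def classify_claim(survives_substitution: bool) -> str:
    """Claims that survive substitution are only win themes; the rest are kept as discriminators."""
    if survives_substitution:
        return "win theme: consider strengthening or removing"
    return "discriminator"

print(repetition_verdict(4))   # -> warning
print(classify_claim(True))    # -> win theme: consider strengthening or removing
```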

A 3-step exercise for tomorrow morning

  1. Pull your last 3 proposal submissions.
  2. Highlight every sentence that you believe is a discriminator.
  3. Apply the substitution test (replace your firm with the largest competitor).

If fewer than 30% of the highlighted sentences fail the substitution test (that is, actually qualify as discriminators), your discriminator hit rate is below industry average. Rewrite the next proposal with explicit substitution-test gates at the Pink and Red review stages.
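
The arithmetic of that check, for concreteness; the counts are placeholders:

```python
def discriminator_hit_rate(highlighted: int, failed_substitution: int) -> float:
    """Share of claimed discriminators that actually fail the substitution test."""
    return failed_substitution / highlighted if highlighted else 0.0

rate = discriminator_hit_rate(highlighted=20, failed_substitution=5)
print(f"hit rate: {rate:.0%}")  # 25%, under the 30% threshold, so tighten the next draft
```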

The improvement is measurable in CPARS scores within 6-9 months.