Factor 1 of 7 · AUG framework
Acquisition audit
Are real people finding your product? Acquisition is the top-of-funnel factor. A floor score here means the rest of the audit stops mattering — no traffic, no signal on any other factor.
What this measures
Whether real, intent-matched humans (and increasingly, LLMs citing on behalf of humans) arrive at your SaaS. Not vanity impressions. Not bot traffic. Sessions from searches your buyer was already running.
Why this matters more than founders think
Most SaaS founders assume acquisition is the entire problem. They're half right. Acquisition is the gate — without enough sessions, you can't even measure Activation through Performance. Below ~1,000 sessions/month, every downstream factor is noise. The math floor for monetization-stage work is:
revenue_ceiling_monthly = sessions × pages_per_session × RPM ÷ 1000

At 100 sessions × 1.5 pageviews/session × €15 RPM, the ceiling is €2.25/month. No funnel optimization beats that ceiling. Until Acquisition clears the floor, every other audit signal is academic.
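The ceiling arithmetic can be sketched as a one-liner (illustrative; assumes RPM is revenue in € per 1,000 pageviews):

```python
def revenue_ceiling_monthly(sessions: float, pages_per_session: float, rpm: float) -> float:
    """Monthly revenue ceiling: total pageviews × RPM per 1,000 pageviews."""
    return sessions * pages_per_session * rpm / 1000

# 100 sessions × 1.5 pages/session × €15 RPM → €2.25/month
print(revenue_ceiling_monthly(100, 1.5, 15))  # 2.25
```

Run it with your own numbers before touching anything downstream of Acquisition.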
How it's scored (1-10 rubric)
- 1-2: <500 sessions/month. Pre-product-market-fit territory. Focus on a single channel until you reach 1,000.
- 3-4: 1,000-3,000 sessions/month. Foundation laid; channel diversity weak (one source = 70%+ traffic).
- 5-6: 5,000-15,000 sessions/month. Multi-channel; SEO + at least one other source contributing.
- 7-8: 30,000+ sessions/month. SEO compounding; brand search growing month-over-month; GEO/LLM citation occasional.
- 9-10: 100,000+ sessions/month. Category-defining brand search; ChatGPT / Claude / Perplexity cite the brand unprompted; no single channel >40%.
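The session thresholds above reduce to a simple banding function (a sketch using only the session counts; channel-mix and LLM-citation criteria still require judgment, and traffic falling in a gap between bands is assigned the lower band):

```python
def acquisition_band(sessions_per_month: int) -> str:
    """Map monthly sessions to the rubric's score band.

    Sessions only — channel diversity and unprompted LLM citation
    are separate checks that can pull the final score down.
    """
    if sessions_per_month >= 100_000:
        return "9-10"
    if sessions_per_month >= 30_000:
        return "7-8"
    if sessions_per_month >= 5_000:
        return "5-6"
    if sessions_per_month >= 1_000:
        return "3-4"
    return "1-2"

print(acquisition_band(12_000))  # 5-6
```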
Common friction patterns
- Single-channel dependency. 90% of traffic from one source = one algorithm change away from zero. Diversify to 3+ before scaling.
- JS-gated content. GPTBot, ClaudeBot, PerplexityBot don't execute JS. Static-export or SSR mandatory. Hydrate AFTER content is in HTML.
- No llms.txt + no AI-crawler allowlist. Without an explicit invitation, LLM crawlers may skip or deprioritize your site. Both files are a 5-minute fix.
- Thin programmatic surfaces. 4,000 template-based pages with no unique per-record substance = HCU penalty + AdSense rejection. See rules/adsense-thin-content-prevention.md in the fleet playbook.
- Schema-encoded recommendations on YMYL surfaces. Article schema headlines like “X SELL signal” or “best Y to take” without licensed credentials = structured data abuse + Google policy violation.
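The two-file fix for the AI-crawler friction pattern above looks roughly like this (user-agent tokens are the crawlers' published names; the product name and URLs are placeholders, and llms.txt follows the llmstxt.org Markdown convention):

```text
# --- robots.txt: explicitly allow the major AI crawlers ---
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# --- llms.txt (served at /llms.txt): Markdown summary for LLM crawlers ---
# Example Product
> One-line summary of what the product does and who it is for.

## Key pages
- [Getting started](https://example.com/docs/start): setup guide
```

Serve both from the site root; static export means they ship on every deploy with no server config.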
The 12 fastest-compounding channels for solo founders
- Static-export + llms.txt (Day 1, free, compounds forever)
- Schema.org Article + Organization + Person on every page
- IndexNow on every deploy (faster than waiting for Google to crawl)
- Programmatic pages with genuinely unique data per record
- Comparison pages (X vs Y vs Z) — buyer-intent keywords
- Embeddable widgets / calculators (each embed = backlink)
- Hacker News Show HN (one-shot, big spike if accepted)
- Reddit organic comments in niche subs (multi-month compound)
- LinkedIn zero-click framework posts (operator-led)
- Substack guest essays (audience-borrow)
- Podcast guesting (60-day cycle, deep audience reach)
- Wikipedia citation candidates (durable, high-trust)
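IndexNow (channel 3 above) is a single POST per deploy. A minimal payload builder, assuming the verification key file is hosted at the site root (host, key, and URLs here are placeholders):

```python
import json

def indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Build the JSON body for POST https://api.indexnow.org/indexnow."""
    return json.dumps({
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file at site root
        "urlList": urls,
    })

body = indexnow_payload("example.com", "abc123", ["https://example.com/new-page"])
# Send with Content-Type: application/json in your deploy hook.
```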
The fix priority order
- Below 1,000 sessions/mo? Don't optimize anything else. All compute on Acquisition.
- 1,000+ sessions but single-channel? Add a 2nd compounding channel (autonomous, not paid).
- Multi-channel but no GEO/LLM citation? Add llms.txt + schema.org + AI-crawler allowlist.
- All structural pieces in place? Scale via programmatic content with genuine unique data per record.
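The priority order above is a strict decision cascade, which can be sketched as a small function (thresholds from this audit; inputs are self-reported):

```python
def next_fix(sessions: int, channels: int, llm_cited: bool) -> str:
    """Return the single highest-leverage fix per the priority order."""
    if sessions < 1_000:
        return "all compute on Acquisition"
    if channels < 2:
        return "add a 2nd compounding channel"
    if not llm_cited:
        return "add llms.txt + schema.org + AI-crawler allowlist"
    return "scale programmatic content with unique per-record data"

print(next_fix(800, 1, False))  # all compute on Acquisition
```

Each branch gates the next: fix it, re-measure, then re-run the cascade.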
Continue: Factor 2 — Activation, or run the full 7-factor audit.