Methodology
How we find, filter, and score open-source opportunities.
First Principle
If the pain is real, the wording is usually ugly: "too expensive", "overkill", "I just need", "for my small site", "can this be self-hosted", "why do I need 4 tools for this".
That ugly wording is gold. We follow it.
Core Filter
We do not want:
- Vague "AI for X" clones
- Ideas with no visible pain in the wild
- Products that need a startup team before day 1
- Infra-heavy beasts disguised as small tools
We do want:
- Painful but boring problems
- Things people currently overpay for
- Tools a senior dev can ship fast
- Obvious value in the first 5 minutes
- Open-source replacements for bloated SaaS
Signal Sources
- Reddit threads and RSS search feeds
- GitHub issue search, discussions, and feature requests
- LinkedIn public posts via search snippets
- X/Twitter public search snippets and mirrors
- Hacker News and indie founder overflow
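Several of these surfaces expose public search feeds. As a minimal sketch, here is one way to turn a pain phrase into a Reddit RSS search URL — the `search.rss` endpoint is Reddit's public search feed; the helper name and the example phrase are illustrative assumptions:

```python
from urllib.parse import urlencode

def reddit_search_feed(phrase: str) -> str:
    """Build a Reddit RSS search URL for an exact pain phrase.

    The quoted query keeps the ugly wording intact, and sort=new
    surfaces fresh complaints rather than popular old threads.
    """
    query = urlencode({"q": f'"{phrase}"', "sort": "new"})
    return "https://www.reddit.com/search.rss?" + query

print(reddit_search_feed("can this be self-hosted"))
```

The same pattern generalizes to any surface that accepts a query string; only the base URL and parameter names change.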
Scoring (1–5 each, 25 max)
- Severity — How painful is the problem?
- Frequency — How often does it appear across communities?
- Solvability — Can a small team build an MVP cheaply?
- OSS Displacement — How much paid SaaS can this replace?
- Distribution — Will people share / search for / talk about it?
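The rubric above can be sketched in code. This is a hypothetical helper, not part of the pipeline: the dimension names come from the list, but the `Score` dataclass and its validation are assumptions for illustration:

```python
from dataclasses import dataclass, fields

@dataclass
class Score:
    severity: int          # How painful is the problem?
    frequency: int         # How often does it appear across communities?
    solvability: int       # Can a small team build an MVP cheaply?
    oss_displacement: int  # How much paid SaaS can this replace?
    distribution: int      # Will people share / search for / talk about it?

    def total(self) -> int:
        """Sum the five dimensions; each must be 1-5, so the max is 25."""
        vals = [getattr(self, f.name) for f in fields(self)]
        if not all(1 <= v <= 5 for v in vals):
            raise ValueError("each dimension must be scored 1-5")
        return sum(vals)

print(Score(4, 3, 5, 4, 3).total())  # → 19 out of 25
```

Equal weighting is a deliberate choice here: the rubric is meant to be quick to apply across many candidates, not a precise forecast.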
Brief Workflow
- Collect signals from ≥3 surfaces
- Save raw notes to `research/YYYY-MM-DD-HHMM.md`
- Score and save brief to `briefs/YYYY-MM-DD-HHMM.md`
- Write channel post to `posts/YYYY-MM-DD-HHMM.md`
- Update `SCOREBOARD.md`
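The timestamped layout above can be generated mechanically. A minimal sketch, assuming the directory names from the workflow; the helper itself is illustrative:

```python
from datetime import datetime
from pathlib import Path

def brief_paths(now: datetime) -> dict[str, Path]:
    """Return the output paths for one run, stamped YYYY-MM-DD-HHMM."""
    stamp = now.strftime("%Y-%m-%d-%H%M")
    return {
        "research": Path("research") / f"{stamp}.md",    # raw notes
        "brief": Path("briefs") / f"{stamp}.md",         # scored brief
        "post": Path("posts") / f"{stamp}.md",           # channel post
        "scoreboard": Path("SCOREBOARD.md"),             # running totals
    }

paths = brief_paths(datetime(2025, 1, 2, 9, 30))
print(paths["brief"])  # briefs/2025-01-02-0930.md
```

Sharing one timestamp across all three files keeps a run's notes, brief, and post trivially linkable without a database.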
Quality Bar
A winning brief should make Bullwinkle say one of these:
- "Yes, this hurts."
- "This is small enough to ship."
- "People already pay stupid money for this."
- "Open source really can win here."