Editorial standards
How we decide what goes on the site, what doesn’t, and what we’ll never do. Most of this is what every editorial outlet should put in writing; we’re putting it in writing because the bar across this industry is low and the difference is the actual product.
No fabricated numbers
Every dollar figure on this site comes from a public pricing page, an invoice we’ve actually paid, or a source we can name. If we don’t have a number, we don’t write one.
This is the line we won’t cross. The temptation in stack-cost editorial is to invent a plausible-looking number when the real one is awkward to look up — an extra digit on a bandwidth bill, a guess at usage that lets the receipt total match a budget. We don’t do it. When a price has changed since publication, the page gets a “stale” tag and we update it; we don’t backfill the article with whatever number would make our recommendation look right.
When we’re uncertain, we soften: “around,” “often,” “tends to.” Not because hedging reads better, but because it’s honest. A definite number we made up reads more confidently than a soft real one; the cost is that the reader trusts you less the moment they verify.
Affiliates don’t buy placement
This site contains affiliate links. We earn a commission at no extra cost to you when you sign up for a product through one of those links. Every affiliate link is disclosed on the page where it appears.
Affiliate relationships do not influence which tools appear in our stack guides, comparisons, framework articles, or category roundups. We link to tools we actually use or have tested, and we list tools that don’t pay affiliate commissions when they’re the right pick (Cloudflare, AWS, most open-source projects). The “our pick” tag on a roundup means it was the editorial pick — not the highest-paying affiliate.
We will not write a paid review or sponsored post under cheapstack. If a vendor asks for that, we say no. If we ever change this policy — and we don’t plan to — this page gets updated first; the public git history will show every change.
We say what we’ve actually used
When we say “we run X in production,” we run X in production. When we say “we tested X,” we tested X for long enough to have a real opinion. When we’re writing about a tool we haven’t personally used — mostly the runners-up sections and some of the comparison-page editorial — we frame it as an observed pattern from public docs, community discussion, and technical blog posts. Not as first-hand experience.
The line is small but it matters. “A common pattern is” is honest editorial. “In our experience” would be a lie in some of those contexts. We use the first only when we mean it.
Where AI fits, and where it doesn’t
The site uses AI for code (Claude Code wrote much of the Next.js scaffolding and editorial-template HTML), and Claude Design produced the visual prototypes those templates are ported from. The editorial — the actual stack guides, comparisons, framework articles, tool roundups — is human-written, human-edited, and the numbers are human-verified against public sources.
We don’t publish AI-generated stack costs. We don’t publish AI-summarized vendor docs as our own analysis. The reason is simple: AI is good at writing plausible-sounding cost editorial, and bad at knowing when the plausible answer is wrong. The whole point of cheapstack is keeping the cost of being wrong low: if an AI-generated guide leads you to a $300/mo Vercel bill because it didn’t know about a free Cloudflare alternative, the value proposition collapses.
When we get something wrong
Pricing changes. Tools deprecate features. We make mistakes. When we update a page for a real correction (a wrong number, a sunsetted feature, a new recommendation), the page gets a fresh “updated” tag and the change shows up in the RSS feed. If we update a page in a way that meaningfully changes a recommendation, we keep the old text in the git history and you can see the diff via the public repo.
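The mechanics are plain git, and you can see them with a throwaway repo. Everything below — the file name, the tool, the prices, the commit messages — is invented for the demo, not real cheapstack content:

```shell
#!/bin/sh
# Illustrative only: file name, tool, and prices are made up for the demo.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "editor@example.com"
git config user.name "editor"

# First publish: a guide carrying a price that later turns out to be wrong.
printf 'exampletool Pro: $10/mo per seat\n' > exampletool-guide.md
git add exampletool-guide.md
git commit -qm "publish exampletool guide"

# Correction: the verified price replaces the wrong one in the working tree...
printf 'exampletool Pro: $12/mo per seat\n' > exampletool-guide.md
git commit -qam "correct exampletool pro price"

# ...but the old number stays recoverable, and the diff shows exactly what changed.
git log --oneline -- exampletool-guide.md
git diff HEAD~1 HEAD -- exampletool-guide.md
```

The same two commands against the public repo — `git log` on a page’s file, then `git diff` between any two commits — recover every correction we’ve ever shipped.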
If you spot a real error — a price that’s wrong by more than rounding, a deprecated feature we still recommend, a tool that has changed in a material way — tell us via the contact page. We’ll fix it and credit the catch.
Indie-bias is the feature
cheapstack is written from the perspective of someone building products on $50–$200/mo budgets, often outside the US tech ecosystem, optimizing for per-tool cost-per-feature rather than enterprise polish. That’s a real bias and we won’t pretend otherwise.
Enterprise-grade tools (Auth0, AWS, Datadog, etc.) get listed for context but rarely as the editorial pick. If you’re an enterprise buyer, the recommendations on this site will frequently be wrong for your actual constraints. That’s on us — we’re not writing for you. We link to enterprise vendors fairly when they’re the right answer at scale, and we tell you when the indie pick stops scaling and the enterprise alternative becomes correct.