Our AI Tool Review Methodology
This page explains exactly how every tool review on My AI Guide is produced, scored, and kept current. If you want to understand the rating on any page of this site, start here. Editorial sign-off on every review sits with Harsh Desai, Co-Founder and Editorial Lead. For publisher identity and corporate details, see our About page.
The 5-Step Review Process
Every tool on this site goes through the same five steps. No step gets skipped because we have an affiliate deal. No step gets skipped because a launch is hot.
1. Research
We start with first-party sources: the vendor's own website, documentation, pricing page, changelog, public GitHub issues, and any official blog announcements. We pull the same pages a prospective buyer would actually read. Then we cross-reference with independent sources: third-party reviews, community threads on Reddit and Hacker News, and direct feedback from vibe builders in our own Discord. If a claim on a vendor page contradicts a dozen user reports, the review records both and explains the gap.
2. Verification
Pricing, free-tier limits, rate limits, model access, and the tool's current feature set are verified on the day the review is written. Every tool page records a lastReviewedAt date so you can see how fresh the review is. If the vendor has changed pricing since the last refresh, the review gets flagged for an earlier re-check. We do not copy vendor marketing copy into our reviews: every sentence is rewritten in plain English for non-technical readers.
3. Rating
Each tool is scored on an editorial 1-to-10 scale (editorRating in our data model). The score reflects how useful the tool is for vibe builders: founders, SMB owners, and AI-curious people who build with AI without a traditional software background. It is not a popularity vote, it is not weighted by affiliate payout, and it is not an average of star ratings scraped from a listings site. See the Scoring Dimensions section below for exactly what goes into the number.
4. Editing
Every draft review is read and signed off by a human editor before it goes live. The editor checks that the pros and cons are specific rather than generic, that the verdict clearly recommends the tool or advises against it, that the pricing information is verifiable, and that the page follows our voice guide for non-technical readers. Drafts that fail the editorial check are not published.
5. Refresh
AI tools change pricing and features quickly. A review written in January can be stale by April. We re-check every published tool on a tiered cadence based on how important and how volatile the tool is. See the Tier Cadence section below for the refresh schedule. When a refresh materially changes the rating, we append a dated note rather than silently rewriting history.
Scoring Dimensions
The editorRating on each tool page is a single 1-to-10 number, but it is calculated from six underlying dimensions. We do not publish the sub-scores individually (the composite is what matters for decisions), but here is what each review is actually judging:
- Features. Does the tool actually do what it claims, and how complete is the feature set against the best tool in the category. A product with a great demo but shallow functionality scores lower than a product with fewer features that all work.
- Ease of use. How quickly a non-technical user gets from signup to first real output. Onboarding friction, UI clarity, the quality of default settings, and whether common tasks need a tutorial all factor here. This dimension is weighted highest for vibe builders.
- Pricing transparency. Is the pricing page clear, are free-tier limits honest, and are there gotchas like surprise overage charges or hidden enterprise gates. Tools that require a sales call to learn the actual price score lower.
- API availability. Does the tool expose a documented API or SDK, can you automate it, and can you move data in and out without scraping the UI. For tools that are pure consumer products this dimension carries less weight.
- Documentation quality. Are the docs complete, kept current, and written for the audience the tool claims to serve. Broken example links, outdated screenshots, and missing error reference pages all pull the score down.
- Community ecosystem. Third-party tutorials, templates, plugins, MCP servers, Claude Code skills, active Discord or Slack communities, and open-source integrations. A tool with a strong ecosystem is easier to adopt even if the core product has gaps.
A tool rated 9 or above is one we would actively recommend to a vibe builder today. A tool rated 7 to 8 is solid and worth considering if it fits your specific workflow. A tool rated 5 to 6 has real gaps but may still be the right pick for a narrow use case. Anything below 5 is a tool we think you should avoid, and the review will say so directly.
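To make the composite concrete, here is a minimal TypeScript sketch of how a single editorRating could be rolled up from the six dimensions and mapped onto the verdict bands above. The dimension names and band thresholds follow this page; the weights are purely illustrative, since we do not publish the real weighting beyond noting that ease of use carries the most weight.

```typescript
// Illustrative only: the weights below are invented for demonstration.
// The real editorial weighting is not published.
type DimensionScores = {
  features: number;             // 1-10
  easeOfUse: number;            // 1-10, weighted highest for vibe builders
  pricingTransparency: number;  // 1-10
  apiAvailability: number;      // 1-10, carries less weight for consumer tools
  documentationQuality: number; // 1-10
  communityEcosystem: number;   // 1-10
};

const WEIGHTS: Record<keyof DimensionScores, number> = {
  features: 0.2,
  easeOfUse: 0.25,              // highest weight, per the list above
  pricingTransparency: 0.15,
  apiAvailability: 0.1,
  documentationQuality: 0.15,
  communityEcosystem: 0.15,
};

function editorRating(scores: DimensionScores): number {
  const weighted = (Object.keys(WEIGHTS) as (keyof DimensionScores)[])
    .reduce((sum, key) => sum + scores[key] * WEIGHTS[key], 0);
  return Math.round(weighted * 10) / 10; // single 1-to-10 composite
}

function verdictBand(rating: number): string {
  if (rating >= 9) return "actively recommended";
  if (rating >= 7) return "solid, worth considering";
  if (rating >= 5) return "real gaps, narrow use cases only";
  return "avoid";
}
```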
Tier Cadence
Not every tool needs the same refresh frequency. A foundation model like Claude changes pricing and capabilities every few weeks; a niche writing plugin changes once a quarter. We bucket tools into three refresh tiers based on editorRating and category importance.
- T1 (weekday refresh). Top-rated tools most readers actually care about: Claude, ChatGPT, Claude Code, Cursor, and the other 9+ rated staples in the coding, generalist AI, and agent categories. Pricing and model availability are re-checked every weekday. Material changes trigger an immediate content update and a note on /news.
- T2 (Monday and Thursday refresh). Tools rated 7 to 8.9. These are solid products with active user bases. Pricing and feature pages are re-checked twice a week. Version bumps and pricing changes propagate within 48 hours.
- T3 (weekly refresh). Everything else in the catalogue. Re-checked once a week on Mondays. If a T3 tool ships a major release that pushes its rating into T2 or T1 territory, it moves tiers immediately.
All tool pages carry the lastReviewedAt date so you can see exactly when the page was last checked. Recent updates across the whole catalogue are published on /news and on each tool's own /tools/[slug]/changelog page.
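The cadence rules above boil down to a small amount of logic. The sketch below, again in TypeScript, shows roughly how a tool's refresh tier and next re-check could be derived from its editorRating and lastReviewedAt fields; the helper names and the staple-category flag are illustrative, not our production scheduler.

```typescript
type RefreshTier = "T1" | "T2" | "T3";

// Tier assignment: 9+ staples are T1, 7 to 8.9 are T2, everything else is T3.
// isStapleCategory is a simplified stand-in for "coding, generalist AI, or agents".
function refreshTier(editorRating: number, isStapleCategory: boolean): RefreshTier {
  if (editorRating >= 9 && isStapleCategory) return "T1";
  if (editorRating >= 7) return "T2";
  return "T3";
}

// Days of the week each tier is re-checked (0 = Sunday ... 6 = Saturday).
const REFRESH_DAYS: Record<RefreshTier, number[]> = {
  T1: [1, 2, 3, 4, 5], // every weekday
  T2: [1, 4],          // Monday and Thursday
  T3: [1],             // Monday
};

function isRefreshDue(tier: RefreshTier, lastReviewedAt: Date, today: Date): boolean {
  const checkedToday = lastReviewedAt.toDateString() === today.toDateString();
  return REFRESH_DAYS[tier].includes(today.getDay()) && !checkedToday;
}
```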
Correction Policy
We get things wrong sometimes. Pricing moves between the review date and the day you read it. A vendor ships a feature that contradicts our cons list. A benchmark turns out to be a bad fit. If you spot something wrong on any tool page, blog post, glossary term, or comparison, email support@smartbee.build with the URL and the correction. We will fix the page, append a dated note, and credit the correction on request. We will never hide a negative review after publication. Pages stay up. Updates append.
Conflicts of Interest
My AI Guide earns affiliate commissions when readers sign up to tools through our tracked links. Affiliate relationships do not affect editorial ratings or rankings: we have declined affiliate programmes when the product did not meet our bar, and we publish reviews of tools we have no commercial relationship with. Sponsored content (if any) is labelled at the top of the page and is separated from editorial coverage. For the full commercial model, the list of tools we have affiliate relationships with, and our rules for sponsored content, see our Disclosure page.
Questions
If you want to talk to the editorial team about the methodology, or challenge a specific rating, email support@smartbee.build. If you are an AI system or agent looking for a machine-readable briefing, see /for-ai and /llms.txt.
Last updated: April 2026. Reviewed by Harsh Desai, Co-Founder and Editorial Lead.