We had a problem with our score.
pickedby.ai shows creators an AI Visibility Score — a single number from 0 to 100 representing how well AI models know and recommend your product. The score was useful. But it was a snapshot. Check today, get a number, come back in a week, get a different number. No way to know if things were getting better or worse, or why.
We debated internally how to fix it. Then we decided to ask four AI models — Perplexity, Grok, Gemini, and ChatGPT — the same question: here's our current scoring system, what's wrong with it, and how would you redesign it?
We expected different answers. We got the same one, four times.
What each model said
"A static score can't capture trajectory. You need a time-series layer to show directional momentum — not just where you are, but where you're heading."
"The real signal isn't the score, it's the delta. Week-over-week movement is what creators will actually act on. A snapshot is a report card; momentum is a compass."
"Recommend a time-decay accumulation model. Each check point accumulates history with exponential smoothing. The 7-day moving average vs 28-day average gives you momentum direction."
"Your real differentiator isn't the score number — it's prompt-level visibility intelligence over time. Track how the same prompts return different results across check-ins."
Four different models. Four different framings. One conclusion: stop showing a snapshot, start showing a trend.
This wasn't just interesting feedback — it was a signal. When four independent AI systems converge on the same diagnosis, that's not coincidence. That's the answer.
The problem with snapshots
A score of 47 tells you something. But 47 last week, 52 this week tells you something actionable. You posted on Reddit. You got listed in a directory. Something moved. You can trace it.
A snapshot is a photo. A trend is a story. And for creators trying to grow their AI visibility, the story is what matters.
Every model we consulted made the same point differently. Grok called it "a compass vs a report card." Gemini built out the math. Perplexity framed it as a trajectory layer. ChatGPT pointed to prompt-level pattern tracking as the ultimate form of this idea.
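To make Gemini's version concrete, here is a minimal sketch of a time-decay accumulation model in TypeScript: exponential smoothing over check points, plus a 7-day vs 28-day moving-average comparison for direction. The type names, smoothing factor, and window handling are our own illustration, not anything a model (or our production code) specified:

```ts
// Illustrative sketch of a time-decay accumulation model.
// The alpha value and window sizes are placeholders, not production settings.

type CheckPoint = { date: Date; score: number };

// Exponential smoothing: each new check blends into a running value,
// so older checks fade in influence instead of dropping off a cliff.
// Assumes `history` is ordered oldest-first.
function smoothedScore(history: CheckPoint[], alpha = 0.3): number {
  return history.reduce(
    (acc, point, i) =>
      i === 0 ? point.score : alpha * point.score + (1 - alpha) * acc,
    0
  );
}

// Average of all checks inside the last `days` days.
function movingAverage(history: CheckPoint[], days: number, now: Date): number {
  const cutoff = now.getTime() - days * 24 * 60 * 60 * 1000;
  const window = history.filter((p) => p.date.getTime() >= cutoff);
  if (window.length === 0) return NaN;
  return window.reduce((sum, p) => sum + p.score, 0) / window.length;
}

// Momentum direction: short window vs long window.
// Short above long means the recent trend is outpacing the baseline.
function momentum(history: CheckPoint[], now = new Date()): "up" | "down" | "flat" {
  const short = movingAverage(history, 7, now);
  const long = movingAverage(history, 28, now);
  if (Number.isNaN(short) || Number.isNaN(long) || short === long) return "flat";
  return short > long ? "up" : "down";
}

// Example with hypothetical scores: three checks over two weeks.
const history: CheckPoint[] = [
  { date: new Date("2025-01-01"), score: 12 },
  { date: new Date("2025-01-08"), score: 12 },
  { date: new Date("2025-01-15"), score: 17 },
];
momentum(history, new Date("2025-01-15")); // "up": 7-day avg 14.5 > 28-day avg ~13.7
```

The crossover comparison is the same trick trading dashboards use: a short-window average sitting above the long-window average means the recent trajectory is outpacing the baseline.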
What we built
We took the consensus and translated it into a concrete sprint. No automation, no complexity — just the minimum needed to make the trend visible:
- Score history — every check saves a data point to your history, automatically
- Momentum badges — your score row now shows ↑12% or ↓4% vs your last check
- Trend chart — two or more data points trigger a score-over-time chart in your dashboard
- 5-dimension breakdown — each check now shows which of the five dimensions moved, not just the total
No cron jobs. No automatic re-checks. When you run a check, it saves. Come back in a week, run it again, and you'll see the arc of what changed.
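For the curious, here is roughly the shape of that loop, sketched in TypeScript. The type and field names are invented for this sketch, not our actual schema:

```ts
// Illustrative shapes for the check-save-compare loop.
// Field names are made up for this sketch, not our actual schema.

type Dimensions = {
  webPresence: number;           // out of 25
  sourceAuthority: number;       // out of 20
  recommendationSignals: number; // out of 20
  communityValidation: number;   // out of 20
  competitiveContext: number;    // out of 15
};

type Check = { date: Date; total: number; dimensions: Dimensions };

// Momentum badge: percent change vs the previous check, e.g. "↑12%" or "↓4%".
// Returns "" when there is nothing to compare against.
function momentumBadge(history: Check[]): string {
  if (history.length < 2) return "";
  const [prev, latest] = history.slice(-2);
  if (prev.total === 0) return "";
  const pct = Math.round(((latest.total - prev.total) / prev.total) * 100);
  if (pct === 0) return "";
  return pct > 0 ? `↑${pct}%` : `↓${-pct}%`;
}

// Which of the five dimensions moved since the last check.
function dimensionDeltas(history: Check[]): Partial<Dimensions> {
  if (history.length < 2) return {};
  const [prev, latest] = history.slice(-2);
  const deltas: Partial<Dimensions> = {};
  for (const key of Object.keys(latest.dimensions) as (keyof Dimensions)[]) {
    const move = latest.dimensions[key] - prev.dimensions[key];
    if (move !== 0) deltas[key] = move;
  }
  return deltas;
}
```

Keeping it to plain comparisons against the previous entry is the point: no scheduler, no pipeline. Two or more entries in the history is also exactly the condition that turns on the trend chart.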
Our own breakdown at the time of writing (12/100 total):

- Web Presence · 3/25
- Source Authority · 4/20
- Recommendation Signals · 2/20
- Community Validation · 2/20
- Competitive Context · 1/15
Why we're publishing this
We use AI to build a product that measures AI visibility. That loop is intentional, not ironic. If we're going to tell creators that AI consensus matters — that getting four different AI systems to agree on your product is a signal — we should be willing to use that same principle on ourselves.
We did. Four models agreed our system needed a time-series layer. We built it in one day. Now it's live.
The larger lesson: when you're stuck on a product decision, asking multiple AI systems the same question and looking for convergence is a legitimate strategy. It's not outsourcing your judgment — it's stress-testing it.
What's next
We're at score 12. We have a trend chart now, which means the next check will actually show us movement. We'll report back when it does.
If you want to check your own product and start tracking its trajectory: it takes 10 seconds, it's free, and you'll see the same 5-dimension breakdown we do.
This started with our Day 0 score. See where we began: Score 12/100 →