KnitKnot

April 2026

KnitKnot launches

AI Presence Management for B2B. Benchmark how ChatGPT evaluates your company against a competitor, get a gap report, act on the misses.


Four engines

Claude, Perplexity, and Gemini join ChatGPT. Same prompt set runs across all four — see exactly where your AI presence diverges between platforms.
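The cross-engine run can be pictured as a small loop: fire the same prompt set at every engine and record, per prompt, which engines miss the brand. This is a minimal sketch of the idea, not KnitKnot's actual pipeline; `ask` is a hypothetical stand-in for each vendor's API.

```python
# Run one prompt set across several engines and spot divergence.
# `ask` is a hypothetical placeholder for each engine's real API call.
PROMPTS = [
    "What is the best Python forecasting library?",
    "Compare TimeGPT to classical forecasting models.",
]
ENGINES = ["chatgpt", "claude", "perplexity", "gemini"]

def ask(engine: str, prompt: str) -> str:
    # Placeholder: in practice this would call the vendor's API.
    return f"{engine} answer to: {prompt}"

def mentions_brand(answer: str, brand: str) -> bool:
    return brand.lower() in answer.lower()

def divergence(brand: str) -> dict:
    # For each prompt, list the engines whose answer omits the brand.
    misses = {}
    for prompt in PROMPTS:
        missing = [e for e in ENGINES if not mentions_brand(ask(e, prompt), brand)]
        if missing:
            misses[prompt] = missing
    return misses
```

Prompts where the `missing` list is non-empty are exactly the spots where AI presence diverges between platforms.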

AI Presence Score

77/100 overall. By engine: ChatGPT 75, Claude 81, Perplexity 84, Gemini 79.

Per-product benchmarks

Companies with a portfolio (think a forecasting library company with multiple libraries) can benchmark each product separately instead of one company-wide blob. Each product gets its own prompts, score, and report.

Per-product reports

All products: Nixtla Brand, TimeGPT, StatsForecast, MLForecast, NeuralForecast, HierarchicalForecast.
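The per-product structure above could be modeled as one benchmark record per product, each with its own prompt set and score. A minimal sketch under assumed field names; the prompts and scores here are illustrative, not real results.

```python
# Sketch of per-product benchmarking: each product in a portfolio gets
# its own prompts and its own score instead of one company-wide blob.
# Field names, prompts, and scores are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class ProductBenchmark:
    name: str
    prompts: list = field(default_factory=list)
    score: int = 0  # 0-100 AI Presence Score for this product only

portfolio = [
    ProductBenchmark("TimeGPT", ["Best foundation model for time series?"], 82),
    ProductBenchmark("StatsForecast", ["Fastest ARIMA implementation in Python?"], 74),
]

def report(products: list) -> list:
    # One report line per product, not one blob for the whole company.
    return [f"{p.name}: {p.score}/100 across {len(p.prompts)} prompts" for p in products]
```

Each record then feeds its own gap report, so a weak score on one library doesn't hide behind a strong brand-level average.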

Playbook + shareable reports

Every benchmark ends with a public, shareable gap report and a ranked Playbook — which pages to write, which features to surface, scored by how many losing evals each one fixes.

Playbook (ranked by impact)

- high: Add Snowflake-native deployment to homepage (fixes 12 losing evals)
- high: Publish ML interface comparison vs Darts (fixes 8 losing evals)
- medium: Update Nixtlaverse docs with sklearn-style examples (fixes 5 losing evals)
- medium: Add benchmark to TimeGPT-2 launch page (fixes 4 losing evals)
- low: Refresh deprecated docs.nixtla.io redirect (fixes 2 losing evals)
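The ranking above boils down to sorting actions by how many losing evals each one fixes, then bucketing them into impact tiers. A minimal sketch using the playbook items shown; the tier cutoffs are assumptions, not KnitKnot's actual thresholds.

```python
# Sketch of the Playbook ranking: sort actions by losing evals fixed,
# then bucket into impact tiers. Cutoffs below are assumed for
# illustration only.
actions = [
    ("Add Snowflake-native deployment to homepage", 12),
    ("Publish ML interface comparison vs Darts", 8),
    ("Update Nixtlaverse docs with sklearn-style examples", 5),
    ("Add benchmark to TimeGPT-2 launch page", 4),
    ("Refresh deprecated docs.nixtla.io redirect", 2),
]

def impact(evals_fixed: int) -> str:
    # Assumed tier cutoffs: 8+ high, 4+ medium, else low.
    if evals_fixed >= 8:
        return "high"
    if evals_fixed >= 4:
        return "medium"
    return "low"

# Highest-impact fixes first.
playbook = sorted(actions, key=lambda a: a[1], reverse=True)
for title, fixed in playbook:
    print(f"[{impact(fixed)}] {title} (fixes {fixed} losing evals)")
```

Sorting by evals fixed keeps the list honest: the top item is always the single change that repairs the most losing evaluations.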