KnitKnot

[Chart: Brand Health, 8 runs: sentiment +24 pts over 8 weeks, Week 1 to now]

What a candidate asks AI about your company

Kevin Kho · Co-founder, KnitKnot · 4 min read

You get a job offer. What do you do next?

You check Glassdoor. You look up the executive team on LinkedIn. You find the Reddit threads discussing the product. You ask a friend who knows someone who worked there. And now, if you’re like most candidates, you ask AI. “What should I know about working at {company}?” “Is this a good place for software engineers?” Same diligence people have always done. AI just made it instant.

A senior leader at a mid-size company reached out. His team was struggling to recruit AI engineers specifically. He wanted something to bring back to his executive team that showed what candidates were reading about them.

We ran a couple of prompts together. What came back was a mix of things the company already knew and things that stung a little.

ChatGPT was telling candidates the company was below the band on comp for AI roles. Whether that was accurate or not, that’s what candidates were reading before they decided whether to engage. You can have a perfectly reasonable comp structure, but if the model says you’re below market, that’s the first thing a candidate sees when they ask “should I take this offer?”

The company had a history of layoffs, and the AI engines surfaced that prominently. Not buried in a footnote. Right there in the first paragraph of the response. Every candidate who asked about the company got a reminder that people had been let go.

And then there were comments from executives: remarks that were reasonable in context, anti-AI-hype positions that probably made sense in a press interview. But when AI strips the context and feeds them back to a candidate weighing an offer, they land differently. “The CTO doesn’t believe in AI” is a very different sentence when you’re an AI engineer deciding where to work.

It wasn’t all negative. The models also said this company was one of the leaders in their space. Competitive, well-positioned. And the AI work so far has been fairly minimal, which means it’s greenfield. That’s exciting if you’re the kind of person who wants to build something from scratch. Or it’s a red flag if you read “minimal AI investment” and see another indicator that the company isn’t serious about it. Depends on the candidate. AI doesn’t spin. It just presents both sides.

None of this was wrong, exactly. But it was shaping whether people showed up.

And this is the thing about brand presence in AI. It’s not a marketing problem. It’s not something you fix with better copy or a refreshed careers page. The model is pulling from press releases, earnings calls, Reddit threads, LinkedIn posts, news coverage. It’s synthesizing a story about your company in real time, and that story is what candidates read before they ever talk to a recruiter. Most companies have no idea what that story says.

We built KnitKnot for head-to-head sales bake-offs. Compare you vs your competitor, find where AI gets it wrong, fix it. Recruiting use cases weren’t on the roadmap.

But the infrastructure could support it. Same prompts, same scoring, different question. With barely any changes, he had prompts running across ChatGPT, Claude, Perplexity, and Gemini, tracking what shifts over time.
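The shape of that workflow is simple enough to sketch. Here's a minimal, illustrative Python version, with stand-in engine stubs and a naive keyword-based sentiment score; the real product's scoring and the provider API calls are assumptions, not shown.

```python
import re

# Hypothetical keyword lists for a crude sentiment score. A real pipeline
# would use an actual sentiment model, not word matching.
POSITIVE = {"leader", "competitive", "greenfield", "well-positioned"}
NEGATIVE = {"layoffs", "below", "uncertain", "attrition"}

def score(answer: str) -> int:
    """Naive sentiment: +1 per positive keyword, -1 per negative keyword."""
    words = re.findall(r"[a-z][a-z\-]*", answer.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def run_prompt(prompt: str, engines: dict) -> dict:
    """Send the same prompt to every engine and score each response."""
    return {name: score(ask(prompt)) for name, ask in engines.items()}

# Stub engines for illustration; each would wrap a real provider API call
# (OpenAI, Anthropic, Perplexity, Google) in practice.
engines = {
    "chatgpt": lambda p: "A leader in its space, but candidates cite layoffs.",
    "claude": lambda p: "Competitive comp overall; AI work is greenfield.",
}

results = run_prompt("What should I know about working at {company}?", engines)
print(results)
```

Running the same prompts on a schedule and storing each run's scores is what turns this into a trend line like the one at the top of this post.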

This is one of those moments where a customer walks in the door for one thing, sees what the product actually does, and asks “can I use it for this?” That’s the most honest signal you can get. We didn’t pitch recruiting. He saw the infrastructure and asked the question himself.

We’re not generating playbooks for this yet. Our processing pipelines are built mainly for head-to-head comparisons, so we don’t fully cover the use case. But we can capture the prompts, track them, and keep score. Now we know it works.

This isn’t a high priority compared to the head-to-head evaluations for B2B SaaS companies, but it is a clear reminder we’re just scratching the surface.