
How to name your AI agents (and why the ones with names get all the work)

Claude, Devin, Jules, Goose, Harvey — every AI agent people actually call by name went out the door with a human handle, not a version string. A short, warm read on why names matter, a lineup of real named agents in production, and three rules we now use when we ship a new one.

Apr 21, 2026 · 4 min read
Product · Engineering · naming AI agents · multi-agent systems · AI agent design · Claude Anthropic · Devin Cognition · Jules Google

Three agents walk into a chat. Two have names. The third is called frontend-specialist-v2.3. Guess which one is still waiting to be useful.

Every AI team eventually runs into this. You spin up an impressive roster of specialists, and a few of them get called all the time while the rest sit in the dropdown like forgotten Slack channels. The difference, nine times out of ten, isn’t capability. It’s names.

Three friendly round blobs gathered around a glowing laptop at night, each with a paper role card floating above its head — a crescent moon, a house, and a question mark. A warm illustration of a multi-agent team playing a role-guessing game.
Your agents already have personalities. You just haven’t named them yet.

Why the named AI agents win

Anthropic didn’t call their model assistant-v3. They called it Claude, after Claude Shannon — the father of information theory. That choice wasn’t branding fluff. A name gives people somewhere to put their expectations. It’s a handle.

Cognition did the same when they shipped Devin in March 2024 and called it the “first AI software engineer”. Whether or not you buy the claim, the name did the marketing for them. You remember Devin. You do not remember the number at the end of a config file.

Humans delegate to characters, not to slugs. We always have. A name is the cheapest possible mental model.

A smiling round blob character wearing a large paper name tag that reads "Claude" in bold text, with three unnamed blobs standing idle in the background.
The blob with the name gets the work. The nameless ones wait.

A short lineup of AI agents people actually call by name

Not a ranking, just a cast list. Every one of these went out the door with a real name attached, and every one of them gets talked about in meetings because of it.

Claude — Anthropic, 2023. Named after Claude Shannon. A name that sounds like someone you’d ask for advice.

Devin — Cognition Labs, March 2024. Pitched as an autonomous software engineer; went viral on launch day.

Jules — Google Labs, GA in August 2025. An async coding agent that lives in your GitHub repo. Short name, one job.

Goose — Block’s open-source agent, Apache 2.0. The name sounds like a workhorse, and that’s what it is.

Harvey — Counsel AI, used at more than half of the AmLaw 100. You hear the name and picture a lawyer. That’s the whole point.

Aider — open-source terminal pair programmer. Not a person’s name, but it reads like a role, not a version string. That’s enough.

Three rules for naming AI agents that stuck

1. One word, speakable. Claude, Devin, Jules, Goose, Harvey. You can say "hey Jules, open a PR" in Slack without feeling like you’re reading a build log.

2. Suggest the role, not the tech. Harvey feels like a lawyer before you’ve read a single feature. Goose feels like something you put to work. frontend-specialist-v2.3 feels like a JIRA ticket.

3. No version numbers. Ever. Nobody has ever said "let Claude-v3-turbo-preview handle it." When a prompt evolves, let the character evolve with it — or retire the agent and bring in a new one with a new name. Teammates, not SKUs.
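Two of the three rules are mechanical enough to lint. Here is a minimal sketch of what that check could look like — the function name and the exact length cutoff are our own choices, not from any shipped tool, and rule 2 (suggest the role) still needs a human:

```python
import re

def passes_naming_rules(name: str) -> bool:
    """Rough lint for rules 1 and 3: one speakable word, no version slug.

    Rule 2 ("suggest the role, not the tech") is a judgment call
    and can't be checked mechanically.
    """
    # Rule 3: reject digits and slug punctuation outright —
    # anything that smells like "frontend-specialist-v2.3".
    if re.search(r"[\d._-]", name):
        return False
    # Rule 1: one short word, letters only (2–12 chars is our guess
    # at "speakable in Slack without reading a build log").
    return bool(re.fullmatch(r"[A-Za-z]{2,12}", name))

for candidate in ["Jules", "Goose", "frontend-specialist-v2.3"]:
    print(candidate, "->", passes_naming_rules(candidate))
```

Run it against your own agent roster; the names that fail are usually the ones sitting unused in the dropdown.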

Split illustration: on the left, a happy blob holds a clean paper tag that reads "Jules". On the right, a confused blob holds a crumpled, messy tag with a long technical slug nobody can read.
One of these agents gets called. The other one lives in the dropdown forever.

The mirror moment

We write this post and then we look at our own workflow. In Selene’s System Specialists, our agents are called Explore, Plan, Frontend Developer, Backend Architect. One of those passes the rules. Three of them don’t.

Naming is a live project for us too. That’s kind of the joke and kind of the point — the right names almost always come after the agent has been doing the work for a while.

The party-game test

Here’s the vibe check. Picture your agents playing one of those role-guessing games — vampire, villager, the quiet one who always turns out to be the werewolf. Can you tell who’s who from their names alone?

If yes, your team has character. If no, you have a spreadsheet.

Naming agents feels silly at first. It’s a little like naming your Roombas. Do it anyway.

The ones you name get work. The ones you don’t sit in the dropdown forever. Give them faces. You’ll ship more — and the humans on your team will stop dreading the handoff.
