Selene Blog

Long-form notes on building local-first multi-agent products

Borrowing modern reading UX patterns from 2025-2026 blogs: clear cards, useful metadata, side navigation, and media-rich storytelling with practical implementation detail.

A friendly cream-colored blob character sits at a wooden desk holding a long paper receipt with clean line items, next to a small glowing laptop. Warm terracotta palette, cozy late-night coding vibe.

Apr 21, 2026 · 7 min read

How to predict your AI coding costs in 2026: a practical model for teams

Most AI coding bills look unpredictable because the pricing is built out of synthetic credits on top of real tokens. Here is a four-variable cost model, verified Anthropic and OpenAI rate cards, and an honest monthly bill for a five-developer team — $368 uncached or roughly $150 with prompt caching, not $3,000.
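The full post lays out the model; as a teaser, a per-token estimate can be sketched in a few lines. This is a minimal illustration, not the article's actual four-variable model — the variable names, usage figures, and the $3/$15-per-million-token rates below are all assumptions for demonstration:

```python
# Illustrative monthly-cost sketch for a team's AI coding spend.
# All variable names and rates here are assumptions, not verified rate-card figures.

def monthly_cost(devs, requests_per_day, input_tokens, output_tokens,
                 in_rate_per_mtok, out_rate_per_mtok, workdays=22):
    """Estimate monthly API spend from raw token usage at list rates."""
    monthly_requests = devs * requests_per_day * workdays
    input_cost = monthly_requests * input_tokens / 1e6 * in_rate_per_mtok
    output_cost = monthly_requests * output_tokens / 1e6 * out_rate_per_mtok
    return input_cost + output_cost

# Example: 5 devs, 40 requests/day, 8k input / 1k output tokens per request,
# at an illustrative $3 in / $15 out per million tokens.
print(round(monthly_cost(5, 40, 8_000, 1_000, 3.0, 15.0), 2))  # → 171.6
```

Even with deliberately rough inputs, the estimate lands in the low hundreds of dollars — the same order of magnitude as the post's $368 uncached figure, not $3,000.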

Industry · Engineering · predictable AI coding pricing · AI coding costs 2026
Three friendly round blobs gathered around a glowing laptop at night, each with a paper role card floating above its head — a crescent moon, a house, and a question mark. A warm illustration of a multi-agent team playing a role-guessing game.

Apr 21, 2026 · 4 min read

How to name your AI agents (and why the ones with names get all the work)

Claude, Devin, Jules, Goose, Harvey — every AI agent people actually call by name went out the door with a human handle, not a version string. A short, warm read on why names matter, a lineup of real named agents in production, and three rules we now use when we ship a new one.

Product · Engineering · naming AI agents · multi-agent systems
GitHub repository page for tercumantanumut/selene showing the repo is Public, MIT licensed, with 165 stars, 32 forks and 1,471 commits.

Apr 19, 2026 · 3 min read

Why an open-source license outlasts any vendor policy: the Selene promise explained

Every AI coding vendor eventually re-prices its plans — Augment Code is a recent example, and the same gravity pulls on everyone else. Here’s why subscription pricing is structurally unstable, what an MIT license actually guarantees, and how Selene’s BYOK + self-host architecture shrinks the surface where promises can break.

Product · Industry · open source AI agent · Augment Code policy changes
Selene Settings panel titled "Choose your AI provider" showing ten providers — Anthropic, OpenRouter, Ollama, vLLM, Moonshot Kimi, MiniMax, BlackBox AI, Codex, Claude Code and Antigravity — as radio options, with an OpenAI Codex connected account card beneath.

Apr 19, 2026 · 7 min read

Augment Code vs Claude Code: the $3,000 vs $200 math, and the BYOK option most teams miss

A team swapped Augment Code at ~$3,000/month for Claude Code at $200/seat and saved 90%. We break down the three AI coding pricing models (credit-metered, flat-rate, and BYOK self-host), price a five-developer month at provider list rates, and close with a migration checklist that works coming off any bundled tool.

Product · Industry · Augment Code vs Claude Code · AI coding pricing
Selene chat showing five parallel Agent tool calls — Correctness and Architecture reviews visible with their prompts and streaming results, and three more agent badges pinned at the top — while the composer shows an "Agent is processing in background" status.

Apr 17, 2026 · 9 min read

Distributed code review on Selene — a real run, five lenses, one commit

A transcript-with-commentary of a real code review run on Selene. Five subagents, five lenses, one small delegation-lifecycle commit, dispatched as a single parallel batch. We walk through the fan-out, the findings clustered by theme, and the synthesis that turned them into concrete follow-ups.

Engineering · Product · code review · multi-agent
Selene desktop app welcome screen with chat history sidebar and prompt input

Apr 17, 2026 · 7 min read

Owning your AI agent platform: why we built Selene open, BYOK, and self-hosted

A short, honest note on how AI coding tool pricing evolved over the last eighteen months — and the five architectural decisions (open source, BYOK, model-agnostic per-task models, open MCP tool protocol, real multi-agent delegation) that we made to give Selene users a platform they can depend on for years.

Product · Engineering · open source · BYOK
SWE-bench Lite CLI result showing 182 resolved out of 300

Mar 14, 2026 · 6 min read

My first SWE-bench Lite run with Selene cleared 60.67%

This was my first real SWE-bench Lite pass with Selene, not a polished rerun. I used Claude Opus 4.6 in non-thinking mode, kept the default Selene agent, ran tasks sequentially, and still landed at 182 resolved out of 300.
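The headline number is plain division over the reported result, which is easy to check:

```python
# Sanity check on the reported SWE-bench Lite pass rate: 182 resolved of 300 tasks.
resolved, total = 182, 300
rate = resolved / total * 100
print(f"{rate:.2f}%")  # → 60.67%
```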

Engineering · Benchmarks · SWE-bench Lite · benchmark