Today we are shipping aricode v0.5.0 -- the most ambitious release since the project began. This is not an incremental update. It is a rethinking of what a local-first coding agent can do when it has real memory, real autonomy, and real integration points.
If you have not heard of aricode before: it is a behavior-first coding agent designed for local models. You run it on your machine with Ollama or any OpenAI-compatible endpoint. It reads your codebase, builds a persistent knowledge graph, and works alongside you in a terminal REPL. No cloud account required. No data leaves your network.
v0.5.0 takes that foundation and builds six major features on top of it.
Autonomous Dreaming
This is the headline feature. Start a dream session before you leave for the night, and aricode will explore your entire codebase autonomously -- mapping architecture, tracing patterns, spotting issues, and imagining where the code could go next.
Dreaming happens in three phases:
- Light Sleep (Survey & Triage) -- aricode reads through your project with fresh eyes. It maps entry points, traces module boundaries, identifies naming inconsistencies, flags dead code, and builds a mental picture of how everything connects. Then it ranks what is worth investigating further.
- REM (Deep Dive & Futures) -- the interesting threads get pulled. aricode follows each question down the rabbit hole -- tracing call chains, reading test coverage, understanding why things were built the way they were. Then it imagines where the code could evolve: what new features would fit naturally, what refactors would pay off, what is one bad merge away from breaking.
- Deep Sleep (Synthesis) -- everything crystallizes into a dream journal. A first-person narrative of what it found, what surprised it, and what it recommends. It also extracts patterns, antipatterns, conventions, and a living schema of your codebase that persists into future sessions.
When you come back in the morning, you have a journal, a futures tree, extracted conventions, and an updated architecture schema waiting for you. It is like having a senior engineer do a deep code review while you sleep.
The Node SDK
aricode is no longer just a CLI. With v0.5.0, you can embed it directly in any Node.js application using the new SDK.
```typescript
import { createAricode } from 'aricode/sdk';

const ari = await createAricode({
  model: 'qwen2.5-coder:32b',
  cwd: '/path/to/project',
});

for await (const event of ari.run('Refactor the auth module')) {
  console.log(event.type, event.data);
}
```
The SDK gives you everything the CLI has, but programmable:
- createAricode() -- spin up a session with a model, working directory, and optional configuration.
- Event stream -- every action aricode takes (file reads, edits, commands, thinking steps) is emitted as a typed event you can subscribe to.
- Custom tools -- register your own tools that aricode can call during a session. Connect it to your database, your API, your deployment pipeline.
- Host-owned state -- the calling application controls session lifecycle. Pause, resume, fork, or terminate sessions programmatically.
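To make the event-stream idea concrete, here is a self-contained sketch of a consumer. The event names and payload shapes below are illustrative assumptions, not the SDK's actual types, and a mock async generator stands in for a live session:

```typescript
// Assumed event shapes for illustration -- a discriminated union keyed on `type`.
type AriEvent =
  | { type: 'file_read'; data: { path: string } }
  | { type: 'edit'; data: { path: string; diff: string } }
  | { type: 'done'; data: { summary: string } };

// Stand-in for ari.run(): a mock stream so the consumer logic runs anywhere.
async function* mockRun(): AsyncGenerator<AriEvent> {
  yield { type: 'file_read', data: { path: 'src/auth.ts' } };
  yield { type: 'edit', data: { path: 'src/auth.ts', diff: '+// refactored' } };
  yield { type: 'done', data: { summary: 'Refactored the auth module' } };
}

// Subscribe to the stream and collect the paths of every edited file.
async function collectEdits(): Promise<string[]> {
  const edited: string[] = [];
  for await (const event of mockRun()) {
    if (event.type === 'edit') edited.push(event.data.path);
  }
  return edited;
}
```

Because the stream is a standard async iterable, the same loop works whether you are logging progress, mirroring edits into an IDE, or gating actions in CI.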
This opens the door to building custom dev tools, CI integrations, and IDE plugins on top of aricode. The SDK reference has full documentation.
Behavioral Compilation
When tests fail, most tools dump raw output and hope you can figure it out. aricode takes a different approach: it compiles test failures into behavioral specifications.
Each failing assertion becomes a test witness -- a structured description of what a function is supposed to do. Instead of "expected '+14155550100' but got '(415)555-0100'", you get a specification: "formatPhone should transform a parenthesized US number into E.164 format."
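The compilation step can be pictured as a small transform. The record and field names below are assumptions for illustration, not aricode's internal types:

```typescript
// Assumed shape of a raw failing assertion, for illustration only.
interface Failure {
  fn: string;        // function under test
  input: string;     // input that triggered the failure
  expected: string;
  actual: string;
  behavior: string;  // human-readable description, e.g. from the test name
}

// A test witness: a behavioral specification derived from the failure.
interface Witness {
  fn: string;
  spec: string;
}

function compileWitness(f: Failure): Witness {
  return {
    fn: f.fn,
    spec: `${f.fn} should ${f.behavior}: ${JSON.stringify(f.input)} -> ` +
      `${JSON.stringify(f.expected)} (currently ${JSON.stringify(f.actual)})`,
  };
}

const w = compileWitness({
  fn: 'formatPhone',
  input: '(415) 555-0100',
  expected: '+14155550100',
  actual: '(415)555-0100',
  behavior: 'transform a parenthesized US number into E.164 format',
});
```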
aricode then uses the dependency graph to perform root cause analysis. It identifies which function, if fixed, would resolve the most failures at once. Across fix iterations, it tracks which witnesses are resolved, which regressed, and whether the overall trajectory is improving.
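The ranking idea behind that root cause analysis can be sketched in a few lines. The graph and the scoring rule here are simplified assumptions: score each function by how many failing entry points transitively depend on it, then pick the highest:

```typescript
// Hypothetical dependency graph: each function maps to the functions it calls.
const deps: Record<string, string[]> = {
  formatPhone: ['normalizeDigits'],
  formatIntl: ['normalizeDigits'],
  normalizeDigits: [],
};

// Functions whose tests are currently failing.
const failing = ['formatPhone', 'formatIntl'];

// Everything a function transitively depends on, including itself.
function reach(fn: string, seen = new Set<string>()): Set<string> {
  if (seen.has(fn)) return seen;
  seen.add(fn);
  for (const d of deps[fn] ?? []) reach(d, seen);
  return seen;
}

// The candidate reachable from the most failures is the likeliest root cause.
function rootCause(): string {
  const score = new Map<string, number>();
  for (const f of failing) {
    for (const fn of reach(f)) score.set(fn, (score.get(fn) ?? 0) + 1);
  }
  return [...score.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```

In this toy graph, both failing formatters call `normalizeDigits`, so it scores highest and is flagged as the single fix that would resolve both failures.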
The result is that fixing test failures goes from "read wall of red text and guess" to "here is exactly what is wrong, here is the root cause, and here is the best place to start fixing it."
Knowledge Graph Improvements
The persistent knowledge graph -- the core of aricode's memory -- gets several upgrades in v0.5.0:
- Semantic concepts -- beyond tracking files and symbols, aricode now extracts higher-level patterns it discovers during exploration. Things like "this is the auth flow" or "these three files form the validation pipeline."
- Incremental updates -- the graph now uses content hashing and git diffs to only re-index files that actually changed. On a large project, re-indexing after a pull goes from seconds to milliseconds.
- Relationship depth -- imports, function calls, inheritance chains, type references, and runtime relationships are all tracked with source locations and confidence scores.
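The incremental-update mechanism boils down to comparing content hashes. As a self-contained sketch (the index shape is an assumption, not aricode's actual storage format):

```typescript
import { createHash } from 'node:crypto';

function sha256(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

// Re-index only files whose content hash differs from the last indexed hash.
function changedFiles(
  previous: Map<string, string>, // path -> hash recorded at last index
  current: Map<string, string>,  // path -> current file content
): string[] {
  const dirty: string[] = [];
  for (const [path, content] of current) {
    if (previous.get(path) !== sha256(content)) dirty.push(path);
  }
  return dirty;
}
```

After a pull, only the files in `dirty` need re-indexing, which is why the cost scales with the size of the change rather than the size of the repository.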
The knowledge graph is what makes aricode fundamentally different from stateless AI tools. It remembers what it has learned, and every session builds on the last.
Hooks System
v0.5.0 introduces a hooks system that lets you wire your own automation into aricode's lifecycle. Hooks are shell commands triggered at specific points:
- post-edit -- run formatters or linters after every file edit
- post-write -- trigger builds or type checks after new files are created
- pre-command -- validate or transform commands before execution
- post-command -- capture output or run follow-up actions
- session-start -- set up environment, pull latest, run migrations
- session-end -- clean up, commit, generate reports
Configuration is a single JSON file. No plugins to install, no API to learn. If it runs in a shell, it works as a hook.
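As a sketch of what that file might look like (the key names and the `$ARI_FILE` variable are illustrative assumptions, not the documented schema):

```json
{
  "hooks": {
    "post-edit": "prettier --write $ARI_FILE",
    "pre-command": "scripts/validate-command.sh",
    "session-start": "git pull --ff-only && npm install"
  }
}
```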
Edit Intelligence
Every code change aricode makes now triggers a post-edit analysis pipeline. Before a change is considered done, aricode automatically:
- Runs your project's linter (ESLint, Pyright, clippy, or whatever you use)
- Checks the edit against your project's naming conventions and import style
- Traces the dependency graph to report the blast radius -- what other code might be affected
- Re-indexes the edited files in the knowledge graph so the next action has up-to-date context
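The blast-radius step is a walk over reverse dependency edges. A minimal sketch, assuming a reverse-import map as the graph representation:

```typescript
// Hypothetical reverse dependency map: file -> files that import it.
const importedBy: Record<string, string[]> = {
  'src/format.ts': ['src/auth.ts', 'src/api.ts'],
  'src/auth.ts': ['src/api.ts'],
  'src/api.ts': [],
};

// Everything that imports the edited file, directly or transitively.
function blastRadius(edited: string): string[] {
  const affected = new Set<string>();
  const queue = [edited];
  while (queue.length > 0) {
    const file = queue.pop()!;
    for (const dependent of importedBy[file] ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return [...affected].sort();
}
```

Editing `src/format.ts` in this toy graph flags both `src/auth.ts` and `src/api.ts`, which is exactly the report a reviewer wants before merging.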
This means aricode catches its own mistakes before they become your problem. It is self-correcting in a way that matters.
Why Local-First Matters
Every major AI coding tool today requires sending your code to someone else's servers. For many teams -- especially those working on proprietary software, in regulated industries, or simply with strong opinions about data ownership -- that is a dealbreaker.
aricode runs entirely on your machine. Your code never leaves your network. You can use Ollama with open-weight models, connect to a self-hosted inference server, or use any OpenAI-compatible cloud provider if you choose to. The point is that the choice is yours.
Local-first does not mean limited. With models like Qwen 2.5 Coder 32B running on consumer hardware, local inference is now genuinely capable. aricode is built to get the most out of these models -- through persistent context, structured tool use, and behavioral understanding of your code.
Get Started
Install aricode with a single command:
curl -fsSL https://install.aricode.dev | sh
Then launch it in any project directory and run /init to build the knowledge graph. Run /dream to start your first autonomous exploration.
For the full setup guide, head to the documentation. To explore every feature in depth, see the features page. And follow the project on GitHub -- full source releases at v1.
Ready to try it?
aricode is free, open source, and runs entirely on your machine. Install it in 30 seconds and give your local model superpowers.
Read the docs →