🧠 Purpose Agent
The framework where AI agents actually learn from experience.
Local-first · Self-improving · Domain-agnostic · Production-hardened
pip install purpose-agent
🎯 What Problem Does This Solve?
Most agent frameworks (LangChain, CrewAI, AutoGen) run the same way every time. Your agent fails at a task? Next time, it fails the exact same way. No learning. No memory. No improvement.
Purpose Agent is different. After every task:
┌────────────────────────────────────────────────────────┐
│                                                        │
│   Task → Execute → Score → Extract Lessons → Remember  │
│   ▲                                             │      │
│   └──────── Next task uses lessons ◄────────────┘      │
│                                                        │
│   Run 1: Agent struggles ──────────── Φ = 3.0          │
│   Run 2: Uses learned heuristics ──── Φ = 7.0          │
│   Run 3: Refined further ──────────── Φ = 9.5          │
│                                                        │
└────────────────────────────────────────────────────────┘
No fine-tuning. No GPU training. Just memory + experience.
⚡ 3-Line Quickstart
import purpose_agent as pa
team = pa.purpose("Help me write Python code")
result = team.run("Write a fibonacci function")
That's it. The framework auto-detects your model, builds the right team, executes the task, scores the result, and stores lessons for next time.
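Because lessons persist, the same task should score higher on later runs. A minimal sketch using only the calls shown in this README (what result contains beyond the task output isn't specified here, so we just print it):
import purpose_agent as pa

team = pa.purpose("Help me write Python code")
for attempt in range(3):
    # Each run is scored and its lessons are stored for the next one
    result = team.run("Write a fibonacci function")
    print(f"Run {attempt + 1}: {result}")

print(team.status())  # inspect the lessons the team has accumulated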
🏗️ Architecture at a Glance
┌────────────────────────────────────────────────────────────────────┐
│                         PURPOSE AGENT v3.0                         │
├────────────────────────────────────────────────────────────────────┤
│                                                                    │
│   ┌──────────┐     ┌─────────────┐     ┌──────────────────┐        │
│   │   YOU    │────►│  EASY API   │────►│   ORCHESTRATOR   │        │
│   │(purpose) │     │ (auto-team) │     │   (step loop)    │        │
│   └──────────┘     └─────────────┘     └────────┬─────────┘        │
│                                                 │                  │
│                    ┌──────────┐     ┌───────────▼──────┐           │
│            ┌──────►│  ACTOR   │────►│   ENVIRONMENT    │           │
│            │       │ (decide) │     │    (execute)     │           │
│            │       └──────────┘     └────────┬─────────┘           │
│            │                                 │                     │
│            │            ┌────────────────────▼────┐                │
│            │            │  PURPOSE FUNCTION (Φ)   │                │
│            │            │  Score: 0 ──────── 10   │                │
│            │            │  O(1) state-delta mode  │                │
│            │            └────────────┬────────────┘                │
│            │                         │                             │
│            │     ┌───────────────────▼─────────────┐               │
│            │     │     MEMORY (immune-scanned)     │               │
│            │     │  7 types · 5 statuses · scoped  │               │
│            │     │   quarantine → test → promote   │               │
│            │     └──────────────────────┬──────────┘               │
│            │                            │                          │
│            └──── SELF-IMPROVEMENT LOOP ◄┘                          │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘
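In code, one pass around that loop looks roughly like this. Everything below is illustrative pseudocode; the names are assumptions, not the library's internal API:
# Hypothetical sketch of one orchestrator step (all names illustrative)
def step(state, actor, environment, phi, memory):
    # The actor decides using at most the top-K promoted heuristics
    action = actor.decide(state, memory.top_heuristics(k=10))
    new_state = environment.execute(action)   # environment runs the action
    score = phi(new_state)                    # Φ scores the result, 0-10
    memory.record(state, action, new_state, score)  # feeds self-improvement
    return new_state, score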
🎨 Three Ways to Use It
🟢 Level 1 – Just Describe What You Want
import purpose_agent as pa
# Auto-detects the right team composition
team = pa.purpose("Write Python code and test it") # β architect + coder + tester
team = pa.purpose("Research quantum computing") # β researcher + analyst
team = pa.purpose("Analyze sales data") # β analyst + reporter
team = pa.purpose("Write a blog post") # β writer + editor
result = team.run("Create a sorting algorithm")
team.teach("Always handle edge cases") # Inject knowledge directly
print(team.status()) # See what it's learned
🟡 Level 2 – Choose Your Model & Add Knowledge
import purpose_agent as pa
# 10+ providers supported
team = pa.purpose("Code helper", model="ollama:qwen3:1.7b") # Local, free
team = pa.purpose("Code helper", model="openrouter:meta-llama/llama-3.3-70b-instruct")
team = pa.purpose("Code helper", model="groq:llama-3.3-70b-versatile")
team = pa.purpose("Code helper", model="openai:gpt-4o")
# Add your own documents as knowledge
team = pa.purpose("Answer questions about our product",
knowledge="./docs/", # Load entire folder
model="ollama:qwen3:1.7b",
)
answer = team.ask("What's our refund policy?")
🔴 Level 3 – Full Control
import purpose_agent as pa
# ── Spark: single intelligent agent ──
spark = pa.Spark("coder", model="ollama:qwen3:1.7b", tools=[pa.PythonExecTool()])
result = spark.run("Write fibonacci")
# ── Flow: workflow with conditional routing ──
flow = pa.Flow()
flow.add_node("research", pa.Spark("researcher"))
flow.add_node("write", pa.Spark("writer"))
flow.add_edge(pa.BEGIN, "research")
flow.add_edge("research", "write")
# check_fn(state) returns "pass" or "revise" to route the workflow
flow.add_conditional_edge("write", check_fn, {"pass": pa.DONE_SIGNAL, "revise": "research"})
result = flow.run(state)  # state: the initial workflow state
# ── swarm: parallel execution ──
results = pa.swarm(["task_a", "task_b", "task_c"], agents=[a1, a2, a3])
# ── Council: multi-agent deliberation ──
council = pa.Council([pa.Spark("alice"), pa.Spark("bob"), pa.Spark("carol")])
result = council.run("Should we use microservices?", rounds=3)
# ── Vault: knowledge RAG ──
vault = pa.Vault.from_directory("./research_papers/")
agent = pa.Spark("analyst", tools=[vault.as_tool()])
# ── Generate entire systems ──
from purpose_agent.mas_generator import generate
system = generate("Monitor GitHub repos for CVEs and alert the team")
# → 4 agents + workflow + tools + eval suite + routing policy
🛡️ Safety & Security
┌─────────────────────────────────────────────┐
│            MEMORY IMMUNE SYSTEM             │
│                                             │
│  candidate ──► immune scan ──► quarantine   │
│                     │              │        │
│                ┌────▼─────┐   ┌───▼────┐    │
│                │ REJECTED │   │  TEST  │    │
│                │ (5 scans)│   │(replay)│    │
│                └──────────┘   └───┬────┘    │
│                                   │         │
│                              ┌────▼─────┐   │
│                              │ PROMOTED │   │
│                              │ (active) │   │
│                              └──────────┘   │
└─────────────────────────────────────────────┘
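The five statuses in the diagram form a small lifecycle. A hypothetical sketch of how they might be modeled (status names are assumed from the diagram, not read from memory.py):
from enum import Enum

class MemoryStatus(Enum):
    CANDIDATE = "candidate"      # freshly extracted lesson
    QUARANTINED = "quarantined"  # passed the immune scan, held for testing
    TESTING = "testing"          # replayed against stored trajectories
    PROMOTED = "promoted"        # active: eligible for prompt injection
    REJECTED = "rejected"        # failed a scan or a replay test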
5 threat scanners: prompt injection, score manipulation, tool misuse, privacy leaks, scope overreach
PEP 578 sandbox: Audit hooks at the C-interpreter level that cannot be removed once registered. No Docker needed.
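PEP 578 hooks are plain CPython: sys.addaudithook registers a callback that fires on sensitive interpreter events and cannot be unregistered for the life of the process. A generic illustration of the mechanism, not the library's actual hook:
import sys

def audit(event, args):
    # Fires on built-in audit events such as "os.system" and
    # "subprocess.Popen"; once registered, the hook cannot be removed.
    if event in ("os.system", "subprocess.Popen"):
        raise RuntimeError(f"sandbox: blocked {event}")

sys.addaudithook(audit)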
Falsification critic: Code is scored by CPU-executed assertions, not LLM hallucinations.
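Falsification-style scoring reduces to arithmetic over assertion outcomes. A minimal sketch of the idea (names hypothetical; the real falsification_critic.py generates the assertions adversarially):
def falsification_score(code: str, assertions: list[str]) -> float:
    namespace: dict = {}
    exec(code, namespace)  # define the candidate function(s)
    passed = sum(1 for a in assertions if _holds(a, namespace))
    return 10.0 * passed / max(len(assertions), 1)

def _holds(assertion: str, namespace: dict) -> bool:
    try:
        exec(assertion, namespace)
        return True
    except Exception:
        return False  # a failing assertion is evidence against the code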
🔬 First-Principles Engineering
| Problem | Old Approach | Purpose Agent |
|---|---|---|
| Token cost grows O(N²) | Pass full history to critic | O(1) state-delta → only pass what changed |
| SLMs hallucinate scores | "Rate this 0-10" → guess | Falsification → generate asserts, CPU executes, score = math |
| Sandbox bypassed via dynamic code | AST analysis (weak) | PEP 578 audit hooks → interpreter-level, non-removable |
| Heuristics overflow context | Inject all 200 heuristics | MoH cap K=10 → only top heuristics by Q-value |
| UNKNOWN action crashes | Parse failure → crash | Safe fallback to DONE → never propagates garbage |
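The first and fourth rows are easy to picture in code. A hypothetical sketch of both ideas (function names are illustrative):
# O(1) state delta: the critic sees only the keys that changed this step,
# so its input size does not grow with history length.
def state_delta(prev: dict, curr: dict) -> dict:
    return {k: v for k, v in curr.items() if prev.get(k) != v}

# MoH cap: inject only the top-K heuristics ranked by Q-value.
def top_heuristics(heuristics: list[tuple[str, float]], k: int = 10) -> list[str]:
    ranked = sorted(heuristics, key=lambda h: h[1], reverse=True)
    return [text for text, _q in ranked[:k]]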
📦 What's Inside (45+ modules)
🔧 Core Engine
| Module | What |
|---|---|
| orchestrator.py | Main step loop with 3 critic modes (standard/delta/falsification) |
| actor.py | ReAct agent with 3-tier memory + heuristic cap |
| purpose_function.py | Φ(s) scorer with 7 anti-gaming rules |
| experience_replay.py | Thread-safe trajectory storage with Q-value retrieval |
| optimizer.py | Trajectory → heuristic distillation |
🧬 Self-Improvement
| Module | What |
|---|---|
| memory.py | 7 memory kinds × 5 statuses, scoped, versioned |
| memory_ci.py | Quarantine → immune scan → test → promote/reject |
| memory_homeostasis.py | Budget enforcement, consolidation, archive |
| immune.py | 5 threat scanners for memory safety |
| breakthroughs.py | Self-improving critic, MoH, hindsight relabeling, evolution |
⚡ First-Principles
| Module | What |
|---|---|
| state_delta.py | O(1) Markovian state-diff for the critic |
| falsification_critic.py | Popperian scoring via adversarial assertions |
| sandbox_hooks.py | PEP 578 interpreter-level audit hooks |
| hardening.py | Null safety, timeouts, validation, graceful degradation |
| sre_patches.py | 5 auto-applied critical vulnerability fixes |
🔌 Protocols & Interop
| Module | What |
|---|---|
| protocols/mcp_bridge.py | MCP tool server integration |
| protocols/a2a.py | Agent-to-Agent delegation with circuit breaker |
| protocols/agui.py | AG-UI frontend streaming |
| protocols/agents_md.py | AGENTS.md repo-local instructions |
| quorum.py | Consensus/disagreement topology switching |
🧠 Intelligence
| Module | What |
|---|---|
| routing.py | Smart model selection (local-first, cost-aware) |
| mas_generator.py | Use-case → complete multi-agent system |
| skills/schema.py | Versioned, evolvable, testable skill cards |
| skills/ci.py | Skill testing + rollback + Darwinian selection |
| llm_compiler.py | Parallel tool execution via DAG planning |
📈 Optimization
| Module | What |
|---|---|
| optimization/fingerprint.py | Capability profiling from traces |
| optimization/dataset.py | Trace → filtered training dataset |
| optimization/prompt_pack.py | Epigenetic prompt optimization |
| optimization/shadow_eval.py | Candidate vs baseline comparison |
| optimization/optimizer.py | Improving/plateau/degrading policy |
| optimization/lora_plan.py | LoRA/distillation dry-run planning |
🎛️ Runtime
| Module | What |
|---|---|
| runtime/events.py | 30 canonical event types |
| runtime/event_bus.py | Async pub/sub with backpressure |
| runtime/state.py | Typed execution state for checkpointing |
| runtime/checkpoint.py | InMemory/JSONL/SQLite durability |
| streaming_v3.py | AG-UI compatible stream adapters |
🌐 Supported Providers
from purpose_agent import resolve_backend
resolve_backend("ollama:qwen3:1.7b") # Local (free)
resolve_backend("openrouter:meta-llama/llama-3.3-70b-instruct")
resolve_backend("groq:llama-3.3-70b-versatile")
resolve_backend("openai:gpt-4o")
resolve_backend("together:meta-llama/Llama-3.3-70B-Instruct-Turbo")
resolve_backend("fireworks:accounts/fireworks/models/llama-v3p1-70b")
resolve_backend("cerebras:llama-3.3-70b")
resolve_backend("deepseek:deepseek-chat")
resolve_backend("mistral:mistral-large-latest")
resolve_backend("hf:Qwen/Qwen3-32B")
📊 Real-World Test Results
Tested with Llama-3.3-70B and Gemma-4-26B via OpenRouter:
| Test | Llama-70B | Gemma-26B |
|---|---|---|
| fibonacci (4 unit tests) | ✅ 100% | ✅ 100% |
| fizzbuzz (4 unit tests) | ✅ 100% | ✅ 100% |
| factorial (3 unit tests) | ✅ 100% | ✅ 100% |
| Self-improvement (heuristic growth) | 0→18 | 0→11 |
| Immune system (adversarial) | 93% catch | n/a |
| Production test (19 checks) | 19/19 ✅ | n/a |
250+ automated tests; every one must pass before a release ships.
📚 Research Foundation
Built on 13 published papers. Every module traces back to a specific result.
| Paper | Module | Contribution |
|---|---|---|
| Ng et al. 1999 (PBRS) | purpose_function | Φ preserves optimal policy |
| MUSE (2510.08002) | actor, optimizer | 3-tier memory hierarchy |
| REMEMBERER (2306.07929) | experience_replay | Q-value retrieval |
| Reflexion (2303.11366) | orchestrator | Verbal reinforcement |
| SPC (2504.19162) | immune | Anti-reward-hacking |
| Meta-Rewarding (2407.19594) | meta_rewarding | Self-improving critic |
| DSPy (2310.03714) | prompt_optimizer | Automatic few-shot bootstrap |
| LLMCompiler (2312.04511) | llm_compiler | Parallel tool DAG |
| Retroformer (2308.02151) | retroformer | Structured reflection |
| TinyAgent (2409.00608) | slm_backends | SLM-native patterns |
| DeepSeek MoE (2401.06066) | breakthroughs | MoH sparse selection |
| HER (1707.01495) | breakthroughs | Hindsight relabeling |
| Self-Taught Eval (2408.02666) | self_taught | Synthetic critic training |
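For the first row, the Ng et al. (1999) result is worth stating. Adding a potential-based shaping term to the reward,
r'(s, a, s') = r(s, a, s') + γΦ(s') − Φ(s),
leaves the optimal policy unchanged, which is why the Φ scorer can steer learning without distorting what counts as the best behavior.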
Full proofs: PURPOSE_LEARNING.md · Research trace: COMPILED_RESEARCH.md
🚀 Install
pip install purpose-agent # Core (zero dependencies)
pip install purpose-agent[openai] # + OpenAI/Groq/OpenRouter
pip install purpose-agent[ollama] # + Local Ollama
pip install purpose-agent[all] # Everything
For local models (recommended: free and private):
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull qwen3:1.7b
🖥️ CLI
python -m purpose_agent # Interactive wizard
purpose-agent # Same, via entry point
📄 License
MIT. Use it for anything.
Built on 13 papers. Zero fine-tuning. Agents that actually improve.
PyPI · Architecture · Formal Proofs · Changelog