Executive Summary
4-week build plan converting the CHER architecture into a working multi-agent system. Covers technology stack, file structure, build sequence, code examples, testing checkpoints, and hardware migration.
Core Framework: LangGraph
- Full control over agent routing logic
- Explicit graph structure — easier debugging
- Native LangChain integration
- Model-agnostic — swap without framework changes
- Superior state management for complex workflows
Alternative considered: CrewAI (simpler but less flexible)
Technology Stack
┌──────────────────────────────────────┐
│ CHER Technology Stack │
├──────────────────────────────────────┤
│ Orchestration: LangGraph + LangChain │
│ Models: LM Studio / Ollama │
│ Knowledge: PANIC (SQLite) │
│ Interface:     Open WebUI / Streamlit│
│ Storage: SQLite │
└──────────────────────────────────────┘
Key Dependencies
pip install --break-system-packages \
  langchain langchain-community \
  langgraph langchain-openai \
  streamlit sqlalchemy pydantic
Project Structure
~/CHER-1.0/
├── config/
│   ├── agents.yaml
│   ├── models.yaml
│   └── tools.yaml
├── cher/
│   ├── supervisor.py
│   ├── agents/
│   │   ├── code_agent.py
│   │   ├── research_agent.py
│   │   ├── content_agent.py
│   │   └── deploy_agent.py
│   ├── tools/
│   │   ├── script_executor.py
│   │   ├── file_ops.py
│   │   └── panic_query.py
│   ├── routing/
│   │   ├── planner.py
│   │   └── router.py
│   └── state/
│       └── graph_state.py
├── data/
│ ├── state.db
│ └── logs/
│ └── execution.log
├── tests/
│ ├── test_supervisor.py
│ ├── test_agents.py
│ └── test_tools.py
└── ui/
└── streamlit_app.py
~/Agentics/panic_agent/
├── ingest.py
├── embed.py
├── panic_load.py
├── panic_agent4.py
├── panic_query.py
├── app.py
└── config.json
▶ PHASE 1 — FOUNDATION (Week 1) | Goal: Basic supervisor routing to a single agent
Day 1–2: Setup & Configuration
- Create project structure
- Install dependencies
- Configure LM Studio connection
- Test model connectivity
config/models.yaml
models:
supervisor:
provider: lm_studio
base_url: http://localhost:1234/v1
model: mistral-nemo-12b-instruct
temperature: 0.1
max_tokens: 2000
worker:
provider: lm_studio
base_url: http://localhost:1234/v1
model: meta-llama-3-8b-instruct
temperature: 0.3
max_tokens: 1500
✓ Checkpoint: Run connection test, see "connection successful"
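The connection test can be sketched with just the standard library, assuming LM Studio's OpenAI-compatible server lists loaded models at `/v1/models` (the base URL comes from models.yaml above; `check_connection` and `models_url` are hypothetical helper names for this sketch):

```python
import json
import urllib.request

def models_url(base_url: str) -> str:
    # LM Studio's OpenAI-compatible server lists loaded models at /v1/models
    return base_url.rstrip("/") + "/models"

def check_connection(base_url: str = "http://localhost:1234/v1",
                     timeout: float = 5.0) -> bool:
    """Return True and print loaded model IDs if the server responds."""
    try:
        with urllib.request.urlopen(models_url(base_url), timeout=timeout) as resp:
            data = json.load(resp)
            print("connection successful:", [m["id"] for m in data.get("data", [])])
            return True
    except OSError as exc:
        print("connection failed:", exc)
        return False
```

Run it with LM Studio's server started; a failure here usually means the server isn't listening on port 1234 (see Common Pitfalls).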
Day 3–4: Supervisor Routing
import json

class CHERSupervisor:
    def route(self, user_request: str) -> dict:
        # Pipe the routing prompt into the LLM, then parse its JSON decision
        chain = self.routing_prompt | self.llm
        response = chain.invoke({"user_request": user_request})
        return json.loads(response.content)
✓ Checkpoint: Supervisor routes 5 request types correctly
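One way to make the supervisor's JSON output safe to act on is to validate it against the known agents before dispatch. A minimal sketch, assuming the agent names from the `cher/agents/` directory above (the prompt wording and the `parse_route` helper are illustrative, not the project's actual prompt):

```python
import json

# Agent names taken from the cher/agents/ modules in the project structure
AGENTS = {"code_agent", "research_agent", "content_agent", "deploy_agent"}

ROUTING_PROMPT = (
    "You are the CHER supervisor. Choose the agent for this request.\n"
    'Respond with JSON only: {"agent": "<agent_name>", "reason": "<one sentence>"}\n'
    "Valid agents: " + ", ".join(sorted(AGENTS)) + "\n"
    "Request: {user_request}"
)

def parse_route(raw: str) -> dict:
    """Parse the supervisor's JSON reply and reject unknown agent names."""
    decision = json.loads(raw)
    if decision.get("agent") not in AGENTS:
        raise ValueError(f"unknown agent: {decision.get('agent')!r}")
    return decision
```

Validating here keeps a hallucinated agent name from silently dropping a task later in the graph.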
Day 5–7: Code Agent
class CodeAgent:
    def execute(self, task: str) -> dict:
        chain = self.prompt | self.llm
        response = chain.invoke({"task": task})
        return {
            "filename": self._extract_filename(response.content),
            "code": self._extract_code(response.content),
            "explanation": ...,
        }
✓ Checkpoint: Code agent generates working Python functions
Week 1 Success Criteria
✅ LM Studio connected
✅ Supervisor routing verified
✅ Code agent functional
✅ All Phase 1 tests pass
Test Protocol
pytest tests/test_phase1.py # ✓ LM Studio connection # ✓ Supervisor routing # ✓ Code agent execution
▶ PHASE 2 — TOOL INTEGRATION (Week 2) | Goal: Agents execute existing scripts
Day 8–10: Script Executor
import subprocess
from pathlib import Path

scripts_dir = Path.home() / "scripts"  # adjust to the real scripts location

class ScriptExecutor:
    allowed_scripts = {
        "bmb": scripts_dir / "BMB.py",
        "risk": scripts_dir / "risk.py",
    }

    def run(self, script_name, args=None):
        script_path = self.allowed_scripts[script_name]  # allowlist lookup
        result = subprocess.run(
            ["python", str(script_path)] + (args or []),  # guard against args=None
            capture_output=True,
            text=True,  # decode stdout/stderr to str
            timeout=300,
        )
        return {"success": result.returncode == 0,
                "stdout": result.stdout}
✓ Checkpoint: Can trigger BMB.py run from CHER
Day 13–14: PANIC Integration
class PANICQuery:
def get_context(self, query, top_k=3):
# Filename match first
doc_id = self._find_doc_by_name(query)
if doc_id:
return self._chunks_for_doc(doc_id)
# Fall back to semantic search
return self.search(query, top_k)
✓ Checkpoint: Agents query PANIC for client context
Week 2 Success Criteria
✅ Script executor runs BMB, risk.py
✅ Agents write files to outputs
✅ PANIC KB accessible
✅ End-to-end audit test passes
▶ PHASE 3 — MULTI-AGENT COORDINATION (Week 3) | Goal: Agents collaborate on complex tasks
Day 15–17: LangGraph State Machine
class AgentState(TypedDict):
    messages: Sequence[BaseMessage]
    current_agent: str
    task: str
    context: dict
    results: dict
    next_action: str
Graph Construction
workflow = StateGraph(AgentState)
workflow.add_node("supervisor", ...)
workflow.add_node("code_agent", ...)
workflow.add_node("research_agent", ...)
workflow.add_node("review", ...)
workflow.set_entry_point("supervisor")
graph = workflow.compile()
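The snippet above elides the edge wiring. One way to route out of the supervisor node is a conditional-edge function keyed on the state; a sketch, assuming the `AgentState` fields defined above (`route_from_supervisor` is a hypothetical name, and the exact mapping passed to `add_conditional_edges` depends on which nodes are registered):

```python
from typing import TypedDict

class AgentState(TypedDict, total=False):
    task: str
    current_agent: str
    next_action: str
    results: dict

def route_from_supervisor(state: AgentState) -> str:
    # LangGraph calls this after the supervisor node and follows the returned
    # key through the mapping given to add_conditional_edges, e.g.:
    #   workflow.add_conditional_edges("supervisor", route_from_supervisor,
    #       {"code_agent": "code_agent", "research_agent": "research_agent",
    #        "review": "review", "end": END})
    if state.get("next_action") == "review":
        return "review"
    return state.get("current_agent", "end")
```

Keeping the routing decision in a plain function like this also makes it unit-testable without invoking any model.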
✓ Checkpoint: Supervisor → Research → Content → Review completes
Day 18–21: Remaining Agents + Testing
- Research Agent — queries PANIC, synthesizes findings
- Content Agent — document and report generation
- Deploy Agent — file ops, script execution
Week 3 Success Criteria
✅ 3+ agents coordinate
✅ State flows correctly
✅ Agents share context
✅ Audit → report workflow
pytest tests/test_phase3.py # ✓ Multi-agent coordination # ✓ State management # ✓ Complex workflows
▶ PHASE 4 — INTERFACE & POLISH (Week 4) | Goal: Production-ready UI and error handling
Day 22–24: Streamlit / Open WebUI
import streamlit as st

@st.cache_resource
def load_cher():
    return create_cher_graph(supervisor, agents, tools)

graph = load_cher()  # cached across Streamlit reruns

if prompt := st.chat_input("Task..."):
    result = graph.invoke({"task": prompt})
    response = result["results"]["output"]
    st.write(response)
✓ Checkpoint: Chat interface operational
Day 25–26: Logging
logging.basicConfig(
    filename="data/logs/execution.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
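On top of the basic config, per-agent timing can be captured with a small decorator; a sketch (`logged` is a hypothetical helper, not part of the existing codebase):

```python
import functools
import logging
import time

log = logging.getLogger("cher")

def logged(agent_name: str):
    """Decorator sketch: time each agent call and log success or failure."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                log.info("%s ok in %.2fs", agent_name, time.perf_counter() - start)
                return result
            except Exception:
                log.exception("%s failed", agent_name)  # full traceback to execution.log
                raise
        return inner
    return wrap
```

Applied to each agent's `execute` method, this gives the execution.log a per-step timing trail with no changes to the graph itself.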
Quick Reference — Start CHER
cd ~/CHER-1.0
source ~/.ai_env/bin/activate
streamlit run ui/streamlit_app.py
Quick Reference — PANIC Load
source ~/.ai_env/bin/activate && \
python3 ~/Agentics/panic_agent/panic_load.py ~/path/to/folder/
Week 4 Success Criteria
✅ UI functional
✅ Errors logged and handled
✅ Documentation complete
✅ Ready for Beast deployment
Hardware Migration — HAVOC → Beast
No Code Changes Required
- Same Python environment
- Same model endpoints
- Same file paths
Migration Steps
- rsync ~/CHER-1.0/ to Beast
- Install dependencies
- Configure LM Studio on Beast
- Test with same commands
What Changes on Beast
- Response time: ~30s → ~3s
- Parallel agents: 1 → 4–6
- RAM: 16GB → 128GB
- GPU: Vega 8 iGPU → Dual RX 7900 XTX
- Embedding: CPU sequential → GPU batched
Common Pitfalls & Solutions
LM Studio connection fails
→ Verify server on port 1234, check firewall
Agents return gibberish
→ Adjust temperature, check model quantization
State not persisting
→ Check SQLite write permissions, verify checkpoint config
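The write-permission check can be scripted as a quick diagnostic using only the standard library (`check_state_db` is a hypothetical helper; the default path matches `data/state.db` from the project structure):

```python
import os
import sqlite3

def check_state_db(path: str = "data/state.db") -> bool:
    """Confirm the checkpoint DB's directory exists and SQLite can write to it."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    try:
        con = sqlite3.connect(path)
        con.execute("CREATE TABLE IF NOT EXISTS _cher_write_test (id INTEGER)")
        con.execute("DROP TABLE _cher_write_test")
        con.commit()
        con.close()
        return True
    except sqlite3.OperationalError:
        return False  # typically a read-only file or directory
```

A `False` here points at filesystem permissions rather than the checkpoint configuration.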
Scripts won't execute
→ Verify paths in tools.yaml, check Python env
Slow response on HAVOC
→ Expected — reduce model size or await Beast
Embedding too slow
→ Switch to sentence-transformers batch mode (done)