This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Prompt-driven dialectical reasoning and autocoding system for LLMs. It generates structured prompts by default and can optionally execute them through MCP backends. Two modes:
- Dialectical Reasoning — Thesis → Antithesis → Synthesis (optional Council of perspectives and Judge)
- Autocoding (Player-Coach) — Based on Block AI's g3 agent research. Player implements, Coach independently verifies against requirements. Requirements are the single source of truth; the Coach discards the Player's self-report.
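A minimal sketch of the Player-Coach contract described above (function names and signatures are illustrative, not the actual Hegelion API):

```python
def player_coach_loop(requirements, implement, verify, max_turns=3):
    """Illustrative Player-Coach loop: the Coach checks the artifact
    against the requirements directly and discards the Player's
    self-report, so requirements stay the single source of truth."""
    for _ in range(max_turns):
        artifact, self_report = implement(requirements)
        del self_report  # deliberately ignored by the Coach
        failures = verify(artifact, requirements)  # independent check
        if not failures:
            return artifact
        # Feed concrete failures back to the Player for the next turn.
        requirements = requirements + [f"Fix: {f}" for f in failures]
    raise RuntimeError("Coach rejected all attempts")
```

The key design point is that `verify` never sees the Player's claims, only the artifact and the requirements.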
```bash
# Install dependencies (uv recommended)
uv sync --dev

# Run all tests
uv run pytest -v

# Run a single test file
uv run pytest tests/test_autocoding.py -v

# Run a single test
uv run pytest tests/test_autocoding.py::test_function_name -v

# Coverage
uv run pytest --cov=hegelion --cov-report=html

# Format
uv run black hegelion tests

# Lint
uv run ruff check hegelion tests

# Run MCP server
hegelion-server                # installed entry point
python -m hegelion.mcp.server  # from source

# Self-test MCP server tools
hegelion-server --self-test
```

- Black formatter, line length 100. Black excludes `hegelion/engine.py` and `.gemini/`.
- Ruff linter. Both are enforced in CI.
- Type hints on public functions, Google-style docstrings for public APIs.
- Pytest with `asyncio_mode = "auto"` — async test functions just work.
- Conventional commits: `feat:`, `fix(mcp):`, `docs:`, `refactor:`, `style:`, `chore:`.
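With `asyncio_mode = "auto"`, an async test needs no marker or decorator. A hypothetical example (not a test from this repo):

```python
# With asyncio_mode = "auto" in the pytest config, pytest-asyncio
# collects this coroutine as a test without @pytest.mark.asyncio.
import asyncio

async def test_round_trip():
    await asyncio.sleep(0)  # any awaitable works here
    assert 1 + 1 == 2
```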
The default MCP behavior is still prompt generation for the client (Claude Desktop, Cursor, etc.) to execute. Server-side execution is optional and now flows through a small backend layer (`prompt`, `cli`, `codex_mcp`, `auto`) so dialectic single-shot calls and coach turns can be executed without changing the core prompt/state model.
- `hegelion/core/` — Pure prompt generation and state management
  - `constants.py` — Enums: `DialecticPhase`, `AutocodingPhase`
  - `prompt_dialectic.py` — `PromptDrivenDialectic` class generates thesis/antithesis/synthesis prompts
  - `prompt_autocoding.py` — `PromptDrivenAutocoding` class generates player/coach prompts
  - `autocoding_state.py` — `AutocodingState` dataclass: stateless session state machine with save/load persistence
- `hegelion/mcp/` — MCP server layer
  - `server.py` — Entry point (`main()`), tool dispatcher
  - `tooling.py` — `build_tools()` returns MCP Tool definitions with schemas
  - `constants.py` — `ToolName` enum (4 tools), `MCP_SCHEMA_VERSION`
  - `execution.py` — Backend selection, env/config loading, retry handling
  - `codex_mcp_backend.py` — Codex MCP stdio orchestration for independent coach execution
  - `validation.py` — Input validation
  - `response.py` — Response formatting
  - `handlers/dialectic.py` — Dialectic tool handlers
  - `handlers/autocoding.py` — Autocoding tool handlers
- `hegelion/scripts/mcp_setup.py` — Cross-platform MCP config generator for various hosts
`AutocodingState` is a stateless dataclass passed explicitly between MCP tool calls. Each turn gets fresh context to prevent context pollution. State includes a `schema_version` field for client stability.

Backend thread/session metadata must stay out of `AutocodingState`; execution details are additive tool-output fields only.
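A minimal sketch of this stateless pattern (field names are illustrative stand-ins, not the actual `AutocodingState` fields):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class SessionState:
    # Illustrative stand-in for AutocodingState: every field is explicit,
    # so each tool call receives the full context it needs and nothing
    # leaks in from a previous turn.
    schema_version: int = 1
    phase: str = "player"
    requirements: list = field(default_factory=list)

    def save(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def load(cls, raw: str) -> "SessionState":
        return cls(**json.loads(raw))

state = SessionState(requirements=["tests pass"])
restored = SessionState.load(state.save())
assert restored == state  # round-trips losslessly between tool calls
```

Because the state travels as plain data, the server holds nothing between calls and any turn can be replayed from a saved snapshot.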
Three output formats: `sections`, `json`, `synthesis_only` — configurable per tool call via the `response_style` parameter.
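A tool call selecting a style might look like the following (the tool name and argument names other than `response_style` are assumptions, not the actual schema):

```json
{
  "name": "run_dialectic",
  "arguments": {
    "topic": "Should prompts be cached?",
    "response_style": "synthesis_only"
  }
}
```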
- `HEGELION_EXECUTION_BACKEND` — Default execution backend (`auto`, `prompt`, `cli`, `codex_mcp`)
- `HEGELION_AUTOCOACH_BACKEND` — Backend override for coach turns
- `HEGELION_LLM_COMMAND_JSON` / `HEGELION_LLM_COMMAND` — CLI command for optional server-side LLM execution
- `HEGELION_MCP_AUTO_EXECUTE=1` — Enable auto-execution mode
- `HEGELION_CODEX_MCP_COMMAND_JSON` — Codex MCP server command (defaults to `["codex", "mcp-server"]`)
- `HEGELION_CODEX_MODEL` — Optional Codex model override
- `HEGELION_CODEX_SANDBOX` — Codex sandbox mode (default: `read-only`)
- `HEGELION_CODEX_APPROVAL_POLICY` — Codex approval policy (default: `never`)
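For example, to route coach turns through Codex MCP while leaving everything else on the default backend (values are illustrative, using only the variables listed above):

```shell
export HEGELION_EXECUTION_BACKEND=auto
export HEGELION_AUTOCOACH_BACKEND=codex_mcp
export HEGELION_CODEX_SANDBOX=read-only
export HEGELION_CODEX_APPROVAL_POLICY=never
```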
GitHub Actions on push to main and PRs:
- Lint (black + ruff) on Python 3.12
- Tests on Python 3.10, 3.11, 3.12
- Publish to PyPI on version tags (`v*`)