Architecture

πŸ—οΈ Deep dive into ReasonKit’s internal design.

Understanding ReasonKit’s architecture helps you extend it, debug issues, and contribute effectively.

High-Level Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         CLI / API                                β”‚
β”‚                    (rk-core binary)                              β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     Orchestrator                                 β”‚
β”‚              (Profile selection, tool sequencing)                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                   ThinkTool Registry                             β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
β”‚   β”‚GigaThinkβ”‚LaserLogicβ”‚ BedRock β”‚ProofGuardβ”‚BrutalHonestyβ”‚    β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    LLM Provider Layer                            β”‚
β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”             β”‚
β”‚         β”‚Anthropicβ”‚ OpenAI  β”‚OpenRouterβ”‚ Ollama  β”‚             β”‚
β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Core Components

1. CLI Layer (src/main.rs)

Entry point for the application:

// Simplified structure
fn main() -> Result<()> {
    let args = Args::parse();
    let config = Config::load(&args)?;

    let runtime = Runtime::new()?;
    runtime.block_on(async {
        let result = orchestrator::run(&args.input, &config).await?;
        output::render(&result, &config.output_format)?;
        Ok(())
    })
}
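
The Args type is not shown here; a minimal sketch of what it might look like with clap's derive API (the field names and flags below are assumptions, not the actual CLI surface):

use clap::Parser;

/// Hypothetical argument struct for illustration; the real flags may differ.
#[derive(Parser, Debug)]
#[command(name = "rk-core")]
pub struct Args {
    /// The question or statement to analyze
    pub input: String,

    /// Reasoning profile to run (assumed flag)
    #[arg(long)]
    pub profile: Option<String>,

    /// Output format, e.g. "text" or "json" (assumed flag)
    #[arg(long)]
    pub format: Option<String>,
}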

Responsibilities:

  • Parse command-line arguments
  • Load and merge configuration
  • Initialize async runtime
  • Render output

2. Orchestrator (src/thinktool/executor.rs)

Coordinates ThinkTool execution based on profile:

pub struct Executor {
    registry: Registry,
    profile: Profile,
    provider: Box<dyn LlmProvider>,
}

impl Executor {
    pub async fn run(&self, input: &str) -> Result<Analysis> {
        let tools = self.profile.tools();
        let mut results = Vec::new();

        for tool in tools {
            let result = self.registry
                .get(tool)
                .ok_or_else(|| ReasonKitError::Config(format!("unknown tool: {tool}")))?
                .execute(input, &self.provider)
                .await?;
            results.push(result);
        }

        self.synthesize(input, results).await
    }
}

Responsibilities:

  • Select tools based on profile
  • Execute tools in sequence or parallel
  • Synthesize final analysis

3. ThinkTool Registry (src/thinktool/registry.rs)

Manages available ThinkTools:

pub struct Registry {
    tools: HashMap<String, Box<dyn ThinkTool>>,
}

impl Registry {
    pub fn new() -> Self {
        let mut tools: HashMap<String, Box<dyn ThinkTool>> = HashMap::new();
        tools.insert("gigathink".to_string(), Box::new(GigaThink::new()));
        tools.insert("laserlogic".to_string(), Box::new(LaserLogic::new()));
        tools.insert("bedrock".to_string(), Box::new(BedRock::new()));
        tools.insert("proofguard".to_string(), Box::new(ProofGuard::new()));
        tools.insert("brutalhonesty".to_string(), Box::new(BrutalHonesty::new()));
        Self { tools }
    }

    pub fn get(&self, name: &str) -> Option<&dyn ThinkTool> {
        self.tools.get(name).map(|t| t.as_ref())
    }
}

4. ThinkTool Trait (src/thinktool/mod.rs)

Interface all ThinkTools implement:

#[async_trait]
pub trait ThinkTool: Send + Sync {
    /// Human-readable name
    fn name(&self) -> &str;

    /// Short alias (e.g., "gt" for GigaThink)
    fn alias(&self) -> &str;

    /// Tool description
    fn description(&self) -> &str;

    /// Execute the tool
    async fn execute(
        &self,
        input: &str,
        provider: &dyn LlmProvider,
        config: &ToolConfig,
    ) -> Result<ToolResult>;

    /// Generate the prompt for the LLM
    fn build_prompt(&self, input: &str, config: &ToolConfig) -> String;

    /// Parse the LLM response into structured output
    fn parse_response(&self, response: &str) -> Result<ToolResult>;
}

5. LLM Provider Layer (src/thinktool/llm.rs)

Abstraction over different LLM providers:

#[async_trait]
pub trait LlmProvider: Send + Sync {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse>;

    fn name(&self) -> &str;
    fn supports_streaming(&self) -> bool;
}

pub struct AnthropicProvider {
    client: reqwest::Client,
    api_key: String,
    model: String,
}

#[async_trait]
impl LlmProvider for AnthropicProvider {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        let response = self.client
            .post("https://api.anthropic.com/v1/messages")
            .header("x-api-key", &self.api_key)
            .header("anthropic-version", "2023-06-01")
            .json(&self.build_request(request))
            .send()
            .await?;

        self.parse_response(response).await
    }
}
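
The CompletionRequest and CompletionResponse types are not shown in this excerpt; based on how they are used elsewhere in this chapter, a plausible minimal shape (the exact field set is an assumption) is:

// Sketch only: fields inferred from their usage in this chapter.
pub struct CompletionRequest {
    pub prompt: String,
    pub max_tokens: u32,
    pub temperature: f32,
}

pub struct CompletionResponse {
    /// Raw model output, later parsed by each ThinkTool's parse_response().
    pub text: String,
}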

Data Flow

Request Flow

User Input
    β”‚
    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  CLI Parse  β”‚  Parse args, load config
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β”‚
    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Executor   β”‚  Select profile, initialize tools
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β”‚
    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  GigaThink  │──┐
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
    β”‚            β”‚
    β–Ό            β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚  Sequential or parallel
β”‚ LaserLogic  │───  based on profile
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
    β”‚            β”‚
    β–Ό            β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚   BedRock   │───
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
    β”‚            β”‚
    β–Ό            β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚ ProofGuard  │───
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
    β”‚            β”‚
    β–Ό            β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚BrutalHonestyβ”‚β”€β”€β”˜
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β”‚
    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Synthesis  β”‚  Combine all tool outputs
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β”‚
    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Output    β”‚  Format and render
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
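
The synthesis step (Executor::synthesize above) is not shown in this chapter. A minimal sketch of what it could do, assuming a ToolResult with tool_name and summary fields and an Analysis with summary and tool_results fields (all of these field names are assumptions):

impl Executor {
    // Sketch only: the prompt shape and struct fields here are assumptions.
    async fn synthesize(&self, input: &str, results: Vec<ToolResult>) -> Result<Analysis> {
        // Fold every tool's output into one synthesis prompt.
        let mut prompt = format!("Question:\n{input}\n\nTool findings:\n");
        for result in &results {
            prompt.push_str(&format!("- [{}] {}\n", result.tool_name, result.summary));
        }
        prompt.push_str("\nCombine these findings into a single, coherent analysis.");

        let request = CompletionRequest {
            prompt,
            max_tokens: 2048,   // assumed default
            temperature: 0.2,   // assumed default
        };
        let response = self.provider.complete(&request).await?;

        Ok(Analysis {
            summary: response.text,
            tool_results: results,
        })
    }
}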

Tool Execution Flow

// Inside each ThinkTool
async fn execute(&self, input: &str, provider: &dyn LlmProvider) -> Result<ToolResult> {
    // 1. Build the prompt with tool-specific instructions
    let prompt = self.build_prompt(input);

    // 2. Call the LLM
    let request = CompletionRequest {
        prompt,
        max_tokens: self.config.max_tokens,
        temperature: self.config.temperature,
    };
    let response = provider.complete(&request).await?;

    // 3. Parse structured output
    let result = self.parse_response(&response.text)?;

    // 4. Validate and return
    self.validate(&result)?;
    Ok(result)
}

Configuration System

Configuration Hierarchy

Priority (highest to lowest):
1. Command-line flags
2. Environment variables
3. Config file (~/.config/reasonkit/config.toml)
4. Built-in defaults
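
A sketch of how Config::load might apply this precedence: layers are merged from lowest to highest priority, so later layers override earlier ones. The merge/from_* helpers and config_file_path() are assumptions, not actual API:

impl Config {
    pub fn load(args: &Args) -> Result<Config> {
        let mut config = Config::default();            // 4. built-in defaults
        if let Some(path) = config_file_path() {       // ~/.config/reasonkit/config.toml
            config.merge(Config::from_toml(&path)?);   // 3. config file
        }
        config.merge(Config::from_env());              // 2. environment variables
        config.merge(Config::from_args(args));         // 1. CLI flags override everything
        Ok(config)
    }
}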

Config Structure

#[derive(Debug, Clone, Deserialize)]
pub struct Config {
    pub profile: Profile,
    pub provider: ProviderConfig,
    pub output: OutputConfig,
    pub tools: ToolsConfig,
}

#[derive(Debug, Clone, Deserialize)]
pub struct ToolsConfig {
    pub gigathink: GigaThinkConfig,
    pub laserlogic: LaserLogicConfig,
    pub bedrock: BedRockConfig,
    pub proofguard: ProofGuardConfig,
    pub brutalhonesty: BrutalHonestyConfig,
}

Error Handling

Error Types

#[derive(Debug, thiserror::Error)]
pub enum ReasonKitError {
    #[error("Configuration error: {0}")]
    Config(String),

    #[error("Provider error: {0}")]
    Provider(String),

    #[error("Parse error: {0}")]
    Parse(String),

    #[error("Validation error: {0}")]
    Validation(String),

    #[error("Timeout after {0:?}")]
    Timeout(Duration),

    #[error("Rate limited, retry after {0:?}")]
    RateLimit(Duration),
}

Error Propagation

// Errors bubble up with context
async fn run_analysis(input: &str, config: &Config) -> Result<Analysis> {
    let provider = create_provider(config)
        .map_err(|e| ReasonKitError::Config(format!("Provider setup: {}", e)))?;

    let result = executor.run(input, &provider)
        .await
        .map_err(|e| ReasonKitError::Provider(format!("Execution: {}", e)))?;

    Ok(result)
}

Extension Points

Adding a New ThinkTool

  1. Implement the ThinkTool trait:
pub struct MyTool {
    config: MyToolConfig,
}

#[async_trait]
impl ThinkTool for MyTool {
    fn name(&self) -> &str { "MyTool" }
    fn alias(&self) -> &str { "mt" }

    async fn execute(
        &self,
        input: &str,
        provider: &dyn LlmProvider,
        config: &ToolConfig,
    ) -> Result<ToolResult> {
        // Implementation
    }

    // description(), build_prompt(), and parse_response() are also required.
}
  2. Register in the Registry:
registry.insert("mytool".to_string(), Box::new(MyTool::new()));

Adding a New Provider

  1. Implement LlmProvider:
#[async_trait]
impl LlmProvider for MyProvider {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        // API call implementation
    }

    // name() and supports_streaming() are also required by the trait.
}
  2. Add to provider factory:
fn create_provider(config: &ProviderConfig) -> Result<Box<dyn LlmProvider>> {
    match config.name.as_str() {
        "myprovider" => Ok(Box::new(MyProvider::new(config)?)),
        // ...
    }
}

Performance Considerations

Async Execution

ThinkTools that do not depend on another tool's output can run in parallel:

// Parallel execution for independent tools
let (gigathink, laserlogic) = tokio::join!(
    registry.get("gigathink").execute(input, provider),
    registry.get("laserlogic").execute(input, provider),
);

Caching

Responses are cached to avoid redundant LLM calls:

pub struct CachedProvider<P: LlmProvider> {
    inner: P,
    cache: Arc<RwLock<LruCache<String, CompletionResponse>>>,
}
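
A sketch of how the wrapper might consult the cache before delegating to the inner provider. The cache key, the use of tokio's async RwLock, and a Clone impl on CompletionResponse are all assumptions here:

#[async_trait]
impl<P: LlmProvider> LlmProvider for CachedProvider<P> {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        // Assumed cache key: the prompt text itself.
        let key = request.prompt.clone();

        // lru::LruCache::get needs &mut self, so take the write lock even for a lookup.
        if let Some(hit) = self.cache.write().await.get(&key) {
            return Ok(hit.clone());
        }

        // Miss: call the real provider, then store the response for next time.
        let response = self.inner.complete(request).await?;
        self.cache.write().await.put(key, response.clone());
        Ok(response)
    }

    fn name(&self) -> &str {
        self.inner.name()
    }

    fn supports_streaming(&self) -> bool {
        self.inner.supports_streaming()
    }
}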

Connection Pooling

HTTP clients use connection pooling:

let client = reqwest::Client::builder()
    .pool_max_idle_per_host(10)
    .timeout(Duration::from_secs(30))
    .build()?;