# Architecture

Technical overview of ReasonKit's design.
## Design Philosophy

ReasonKit follows these principles:

- **Rust-first** - Performance and safety as priorities
- **Modular** - Each ThinkTool is independent
- **Extensible** - Easy to add new tools and providers
- **Observable** - Clear visibility into the reasoning process
## High-Level Architecture

```text
┌────────────────────────────────────────────────────────────────┐
│                         USER INTERFACE                         │
├───────────────┬───────────────┬───────────────┬────────────────┤
│      CLI      │   REST API    │   Rust Lib    │   Python Lib   │
└───────────────┴───────────────┴───────────────┴────────────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│                          CORE ENGINE                           │
├────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐  ┌──────────────┐  ┌────────────────────────┐  │
│ │  Profiles   │  │  Execution   │  │         Output         │  │
│ │   Manager   │  │    Engine    │  │       Formatter        │  │
│ └─────────────┘  └──────────────┘  └────────────────────────┘  │
└────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│                           THINKTOOLS                           │
├──────────┬──────────┬──────────┬──────────┬────────┬───────────┤
│GigaThink │LaserLogic│ BedRock  │ProofGuard│ Brutal │PowerCombo │
│          │          │          │          │Honesty │           │
└──────────┴──────────┴──────────┴──────────┴────────┴───────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│                         LLM PROVIDERS                          │
├───────────────┬───────────────┬───────────────┬────────────────┤
│   Anthropic   │    OpenAI     │  OpenRouter   │     Ollama     │
└───────────────┴───────────────┴───────────────┴────────────────┘
```
## Core Components

### 1. User Interfaces

**CLI** (`src/main.rs`)

```rust
#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    Think { question: String },
    Gigathink { input: String },
    LaserLogic { input: String },
    // ...
}
```
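For illustration, here is a dependency-free sketch of the dispatch that the clap derive above generates. The subcommand names mirror the `Commands` enum; the handler bodies are hypothetical stand-ins:

```rust
// Sketch of subcommand dispatch without clap; in the real CLI this is
// generated from the Cli/Commands derive. The output strings here are
// placeholders for the actual handlers.
fn dispatch(args: &[&str]) -> Result<String, String> {
    match args {
        ["think", question] => Ok(format!("think: {question}")),
        ["gigathink", input] => Ok(format!("gigathink: {input}")),
        ["laserlogic", input] => Ok(format!("laserlogic: {input}")),
        _ => Err("unknown command".to_string()),
    }
}

fn main() {
    // Collect real CLI arguments, skipping the binary name.
    let owned: Vec<String> = std::env::args().skip(1).collect();
    let borrowed: Vec<&str> = owned.iter().map(|s| s.as_str()).collect();
    match dispatch(&borrowed) {
        Ok(out) => println!("{out}"),
        Err(e) => eprintln!("{e}"),
    }
}
```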
**REST API** (`src/server/`)

```rust
// Using `?` requires a Result return type; the error type must
// implement IntoResponse.
async fn analyze_handler(
    State(state): State<AppState>,
    Json(request): Json<AnalyzeRequest>,
) -> Result<Json<AnalysisResult>, ReasonKitError> {
    let result = state.engine.analyze(request).await?;
    Ok(Json(result))
}
```
### 2. Core Engine

**Profiles Manager** (`src/profiles/`)

```rust
pub struct ProfileManager {
    profiles: HashMap<String, Profile>,
}

impl ProfileManager {
    pub fn get(&self, name: &str) -> Option<&Profile> {
        self.profiles.get(name)
    }

    pub fn list(&self) -> Vec<&str> {
        self.profiles.keys().map(|s| s.as_str()).collect()
    }
}
```
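A self-contained sketch of how the manager might be populated and queried; the `Profile` shape and the `quick` profile name are illustrative, not the real types:

```rust
use std::collections::HashMap;

// Hypothetical minimal Profile; the real struct carries tool configs.
#[derive(Debug, Clone)]
pub struct Profile {
    pub tools: Vec<String>,
}

pub struct ProfileManager {
    profiles: HashMap<String, Profile>,
}

impl ProfileManager {
    // Illustrative constructor seeding one profile.
    pub fn with_defaults() -> Self {
        let mut profiles = HashMap::new();
        profiles.insert(
            "quick".to_string(),
            Profile { tools: vec!["GigaThink".into(), "LaserLogic".into()] },
        );
        Self { profiles }
    }

    pub fn get(&self, name: &str) -> Option<&Profile> {
        self.profiles.get(name)
    }

    pub fn list(&self) -> Vec<&str> {
        self.profiles.keys().map(|s| s.as_str()).collect()
    }
}

fn main() {
    let manager = ProfileManager::with_defaults();
    println!("profiles: {:?}", manager.list());
    if let Some(p) = manager.get("quick") {
        println!("quick runs: {:?}", p.tools);
    }
}
```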
**Execution Engine** (`src/engine/`)

```rust
pub struct ExecutionEngine {
    providers: ProviderRegistry,
    tools: ToolRegistry,
}

impl ExecutionEngine {
    pub async fn analyze(
        &self,
        question: &str,
        profile: &Profile,
    ) -> Result<AnalysisResult> {
        let provider = self.providers.get_default()?;
        let mut results = Vec::new();

        for tool_name in &profile.tools {
            let tool = self.tools.get(tool_name)?;
            let result = tool.analyze(question, &provider).await?;
            results.push(result);
        }

        let synthesis = self.synthesize(&results).await?;

        Ok(AnalysisResult {
            question: question.to_string(),
            tool_results: results,
            synthesis,
        })
    }
}
```
### 3. ThinkTools

Each ThinkTool implements the `ThinkTool` trait:

```rust
#[async_trait]
pub trait ThinkTool: Send + Sync {
    // DeserializeOwned avoids tying Output to a borrowed lifetime.
    type Output: Serialize + DeserializeOwned;

    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn prompt_template(&self) -> &str;

    async fn analyze(
        &self,
        input: &str,
        provider: &dyn LlmProvider,
    ) -> Result<Self::Output>;
}
```
**Example: GigaThink**

```rust
pub struct GigaThink {
    perspectives: usize,
    include_contrarian: bool,
}

#[async_trait]
impl ThinkTool for GigaThink {
    type Output = GigaThinkResult;

    fn name(&self) -> &str {
        "GigaThink"
    }

    // description() and prompt_template() elided for brevity

    async fn analyze(
        &self,
        input: &str,
        provider: &dyn LlmProvider,
    ) -> Result<GigaThinkResult> {
        let prompt = self.build_prompt(input);
        let response = provider.complete(&prompt).await?;
        let result = self.parse_response(&response)?;
        Ok(result)
    }
}
```
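To show the pattern without the async machinery, here is a synchronous, dependency-free analog in which the provider is reduced to a closure; `EchoTool` and its prompt format are invented for the example:

```rust
// Synchronous analog of the ThinkTool pattern: build prompt, call
// provider, parse response. The real trait is async and typed.
trait SyncThinkTool {
    fn name(&self) -> &str;
    fn analyze(&self, input: &str, provider: &dyn Fn(&str) -> String) -> String;
}

struct EchoTool;

impl SyncThinkTool for EchoTool {
    fn name(&self) -> &str {
        "Echo"
    }

    fn analyze(&self, input: &str, provider: &dyn Fn(&str) -> String) -> String {
        // a. build prompt, b. call provider; c. parsing is elided here
        let prompt = format!("Analyze: {input}");
        provider(&prompt)
    }
}

fn main() {
    // Stand-in for an LLM provider.
    let fake_provider = |prompt: &str| format!("LLM says: {prompt}");
    let tool = EchoTool;
    println!("{}", tool.analyze("Is Rust fast?", &fake_provider));
}
```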
### 4. LLM Providers

Provider abstraction:

```rust
#[async_trait]
pub trait LlmProvider: Send + Sync {
    fn name(&self) -> &str;

    async fn complete(&self, prompt: &str) -> Result<String>;

    // `impl Trait` is not allowed in an object-safe trait method,
    // so the stream is boxed.
    async fn stream(
        &self,
        prompt: &str,
    ) -> Result<Pin<Box<dyn Stream<Item = Result<String>> + Send>>>;
}
```
**Anthropic Provider**

```rust
pub struct AnthropicProvider {
    client: Client,
    api_key: String,
    model: String,
}

#[async_trait]
impl LlmProvider for AnthropicProvider {
    // name() and stream() elided for brevity

    async fn complete(&self, prompt: &str) -> Result<String> {
        let response = self
            .client
            .post("https://api.anthropic.com/v1/messages")
            .header("x-api-key", &self.api_key)
            // the Messages API requires a version header and max_tokens
            .header("anthropic-version", "2023-06-01")
            .json(&json!({
                "model": self.model,
                "max_tokens": 4096,
                "messages": [{"role": "user", "content": prompt}]
            }))
            .send()
            .await?;

        let data: AnthropicResponse = response.json().await?;
        Ok(data.content[0].text.clone())
    }
}
```
## Data Flow

```text
1. User Input
   │
   ▼
2. Profile Selection
   │  - Determine which tools to run
   │  - Load tool configurations
   │
   ▼
3. Execution Planning
   │  - Identify parallel vs. sequential steps
   │  - Set up execution context
   │
   ▼
4. Tool Execution (for each tool)
   │  ┌────────────────────────────┐
   │  │ a. Build prompt            │
   │  │ b. Send to LLM provider    │
   │  │ c. Parse response          │
   │  │ d. Validate output         │
   │  └────────────────────────────┘
   │
   ▼
5. Synthesis
   │  - Combine tool outputs
   │  - Generate overall insight
   │
   ▼
6. Output Formatting
   │  - Format for the requested output type
   │  - Apply styling/structure
   │
   ▼
7. Return to User
```
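The stages above can be sketched as plain functions over an evolving context; the stage bodies here are illustrative placeholders, not the real engine types:

```rust
// Minimal pipeline sketch: profile selection, per-tool execution,
// synthesis, and formatting, condensed into pure functions.
struct Context {
    question: String,
    tool_outputs: Vec<String>,
}

// Stage 2: pick tools for a (hypothetical) profile name.
fn select_tools(profile: &str) -> Vec<&'static str> {
    match profile {
        "quick" => vec!["GigaThink"],
        _ => vec!["GigaThink", "LaserLogic", "ProofGuard"],
    }
}

// Stage 4: stand-in for prompt building, the LLM call, and parsing.
fn run_tool(tool: &str, question: &str) -> String {
    format!("{tool}({question})")
}

// Stage 5: combine tool outputs into one insight.
fn synthesize(outputs: &[String]) -> String {
    outputs.join(" + ")
}

fn run_pipeline(question: &str, profile: &str) -> String {
    let mut ctx = Context { question: question.into(), tool_outputs: vec![] };
    for tool in select_tools(profile) {
        ctx.tool_outputs.push(run_tool(tool, &ctx.question));
    }
    let synthesis = synthesize(&ctx.tool_outputs);
    // Stage 6: formatting, then stage 7: return to the caller.
    format!("Result: {synthesis}")
}

fn main() {
    println!("{}", run_pipeline("Is Rust fast?", "quick"));
}
```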
## Configuration System

```rust
#[derive(Debug, Deserialize)]
pub struct Config {
    pub default: DefaultConfig,
    pub providers: HashMap<String, ProviderConfig>,
    pub thinktools: HashMap<String, ToolConfig>,
    pub profiles: HashMap<String, ProfileConfig>,
    pub output: OutputConfig,
}

impl Config {
    pub fn load() -> Result<Self> {
        let config_path = Self::default_path()?;
        let content = std::fs::read_to_string(&config_path)?;
        let config: Config = toml::from_str(&content)?;
        Ok(config)
    }
}
```
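A hypothetical config file mirroring the `Config` struct above; the table names follow the struct fields, but the concrete keys and values (model names, profile names) are illustrative:

```toml
# Illustrative ReasonKit config; field names mirror Config,
# values are placeholders.
[default]
provider = "anthropic"
profile = "balanced"

[providers.anthropic]
model = "claude-sonnet"
api_key_env = "ANTHROPIC_API_KEY"

[thinktools.gigathink]
perspectives = 5
include_contrarian = true

[profiles.balanced]
tools = ["gigathink", "laserlogic", "proofguard"]

[output]
format = "markdown"
```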
## Error Handling

Custom error types with context:

```rust
#[derive(Error, Debug)]
pub enum ReasonKitError {
    #[error("Configuration error: {0}")]
    Config(#[from] ConfigError),

    #[error("Provider error: {provider} - {message}")]
    Provider {
        provider: String,
        message: String,
        #[source]
        source: Option<Box<dyn std::error::Error + Send + Sync>>,
    },

    #[error("Tool execution failed: {tool} - {message}")]
    ToolExecution {
        tool: String,
        message: String,
    },

    #[error("Analysis timed out after {0} seconds")]
    Timeout(u64),
}
```
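For readers unfamiliar with `thiserror`, here is a derive-free sketch of the same enum showing roughly what the `#[error(...)]` attributes expand to, plus a hypothetical mapping from variants to CLI exit codes:

```rust
use std::fmt;

// Hand-written version of what the thiserror derive produces
// (simplified: ConfigError and the boxed source are reduced to String).
#[derive(Debug)]
pub enum ReasonKitError {
    Config(String),
    Provider { provider: String, message: String },
    ToolExecution { tool: String, message: String },
    Timeout(u64),
}

impl fmt::Display for ReasonKitError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::Config(e) => write!(f, "Configuration error: {e}"),
            Self::Provider { provider, message } => {
                write!(f, "Provider error: {provider} - {message}")
            }
            Self::ToolExecution { tool, message } => {
                write!(f, "Tool execution failed: {tool} - {message}")
            }
            Self::Timeout(secs) => write!(f, "Analysis timed out after {secs} seconds"),
        }
    }
}

impl std::error::Error for ReasonKitError {}

// Hypothetical exit-code mapping a CLI front end might use.
fn exit_code(err: &ReasonKitError) -> i32 {
    match err {
        ReasonKitError::Config(_) => 2,
        ReasonKitError::Provider { .. } => 3,
        ReasonKitError::ToolExecution { .. } => 4,
        ReasonKitError::Timeout(_) => 5,
    }
}

fn main() {
    let err = ReasonKitError::Timeout(30);
    eprintln!("{err} (exit {})", exit_code(&err));
}
```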
## Testing Architecture

```text
tests/
├── unit/         # Unit tests for individual components
├── integration/  # Integration tests
├── e2e/          # End-to-end tests
└── fixtures/     # Test data and mocks
```
Mock provider for testing:

```rust
pub struct MockProvider {
    responses: HashMap<String, String>,
}

#[async_trait]
impl LlmProvider for MockProvider {
    // name() and stream() elided for brevity

    async fn complete(&self, prompt: &str) -> Result<String> {
        let key = Self::hash_prompt(prompt);
        self.responses
            .get(&key)
            .cloned()
            // fail loudly when no canned response is registered
            .ok_or_else(|| Error::NotFound)
    }
}
```
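A dependency-free sketch of the mock's hash-keyed lookup (the real provider is async, and `hash_prompt` is an assumed helper that hashes the prompt text):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Synchronous mock keyed by a hash of the prompt, so fixtures do not
// need to store full prompt strings.
pub struct MockProvider {
    responses: HashMap<u64, String>,
}

impl MockProvider {
    fn hash_prompt(prompt: &str) -> u64 {
        let mut hasher = DefaultHasher::new();
        prompt.hash(&mut hasher);
        hasher.finish()
    }

    // Illustrative constructor registering one canned response.
    pub fn with_response(prompt: &str, response: &str) -> Self {
        let mut responses = HashMap::new();
        responses.insert(Self::hash_prompt(prompt), response.to_string());
        Self { responses }
    }

    pub fn complete(&self, prompt: &str) -> Result<String, String> {
        self.responses
            .get(&Self::hash_prompt(prompt))
            .cloned()
            .ok_or_else(|| format!("no canned response for prompt: {prompt}"))
    }
}

fn main() {
    let mock = MockProvider::with_response("ping", "pong");
    println!("{:?}", mock.complete("ping"));
}
```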
## Extension Points

- **New ThinkTools** - Implement the `ThinkTool` trait
- **New Providers** - Implement the `LlmProvider` trait
- **New Output Formats** - Implement the `OutputFormatter` trait
- **New Integrations** - Implement the `Integration` trait
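As a sketch of the registration side of these extension points, here is a simplified, hypothetical registry that accepts any `Tool` implementation (the real `ToolRegistry` API may differ):

```rust
use std::collections::HashMap;

// Simplified trait standing in for ThinkTool.
trait Tool {
    fn name(&self) -> &'static str;
    fn analyze(&self, input: &str) -> String;
}

// A toy third-party tool, invented for the example.
struct Reverser;

impl Tool for Reverser {
    fn name(&self) -> &'static str {
        "Reverser"
    }

    fn analyze(&self, input: &str) -> String {
        input.chars().rev().collect()
    }
}

// Registry holding tools as trait objects, keyed by name.
#[derive(Default)]
struct ToolRegistry {
    tools: HashMap<&'static str, Box<dyn Tool>>,
}

impl ToolRegistry {
    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name(), tool);
    }

    fn get(&self, name: &str) -> Option<&dyn Tool> {
        self.tools.get(name).map(|b| b.as_ref())
    }
}

fn main() {
    let mut registry = ToolRegistry::default();
    registry.register(Box::new(Reverser));
    if let Some(tool) = registry.get("Reverser") {
        println!("{}", tool.analyze("abc"));
    }
}
```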