# Configuration

ReasonKit can be configured via a config file, environment variables, or CLI flags.
## Configuration File

Create `~/.config/reasonkit/config.toml`:
```toml
# Default settings
[default]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
profile = "balanced"
output_format = "pretty"

# LLM providers
[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-sonnet-4-20250514"
max_tokens = 8192

[providers.openai]
api_key_env = "OPENAI_API_KEY"
model = "gpt-4o"
max_tokens = 8192

[providers.openrouter]
api_key_env = "OPENROUTER_API_KEY"
default_model = "anthropic/claude-sonnet-4"

[providers.ollama]
base_url = "http://localhost:11434"
model = "llama3"

# Output settings
[output]
format = "pretty" # pretty, json, markdown
color = true
show_timing = true
show_tokens = false

# ThinkTool configurations
[thinktools.gigathink]
min_perspectives = 10
include_contrarian = true

[thinktools.laserlogic]
fallacy_detection = true
assumption_analysis = true
show_math = true

[thinktools.bedrock]
decomposition_depth = 3
show_80_20 = true

[thinktools.proofguard]
min_sources = 3
require_citation = true
source_tier_threshold = 3

[thinktools.brutalhonesty]
severity = "high"
include_alternatives = true

# Profile customization
[profiles.custom_quick]
tools = ["gigathink", "laserlogic"]
gigathink_perspectives = 5
timeout = 30

[profiles.custom_thorough]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "deep"
proofguard_sources = 5
timeout = 600
```
## Environment Variables
```bash
# Required: your LLM provider API key
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export OPENROUTER_API_KEY="sk-or-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="gsk_..."

# Optional: defaults
export RK_PROVIDER="anthropic"
export RK_MODEL="claude-sonnet-4-20250514"
export RK_PROFILE="balanced"
export RK_OUTPUT_FORMAT="pretty"

# Optional: logging
export RK_LOG_LEVEL="info" # debug, info, warn, error
export RK_LOG_FILE="~/.local/share/reasonkit/logs/rk.log"
```
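Note that provider sections never store a key directly: `api_key_env` names the environment variable to read at runtime. The lookup amounts to something like the following (illustrative Python, not ReasonKit's actual code; `resolve_api_key` is a hypothetical helper):

```python
import os

def resolve_api_key(provider_cfg: dict) -> str:
    """Read the API key from the environment variable named in the config."""
    env_name = provider_cfg["api_key_env"]
    key = os.environ.get(env_name)
    if key is None:
        raise RuntimeError(f"{env_name} is not set; export it in your shell")
    return key

# Stand-in value; in practice the variable holds your real key.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"
key = resolve_api_key({"api_key_env": "ANTHROPIC_API_KEY"})
```

Keeping only the variable name in the file means the config can be committed or shared without leaking credentials.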
## CLI Flags

CLI flags override both the config file and environment variables:
```bash
# Provider and model
rk-core think "question" --provider anthropic --model claude-3-opus-20240229

# Profile
rk-core think "question" --profile deep

# Output format
rk-core think "question" --format json

# Specific tool settings
rk-core think "question" --min-perspectives 15 --min-sources 5

# Timeout
rk-core think "question" --timeout 300

# Verbosity
rk-core think "question" --verbose
rk-core think "question" --quiet
```
## Configuration Precedence

1. CLI flags (highest priority)
2. Environment variables
3. Config file
4. Built-in defaults (lowest priority)
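The precedence rules boil down to "the first layer that supplies a value wins". A hypothetical sketch of that resolution (not ReasonKit's actual resolver):

```python
# Layers ordered from highest to lowest priority; the first layer that
# defines the setting wins.
BUILTIN_DEFAULTS = {"provider": "anthropic", "profile": "balanced"}

def resolve(setting, cli_flags, env_vars, config_file):
    for layer in (cli_flags, env_vars, config_file, BUILTIN_DEFAULTS):
        if layer.get(setting) is not None:
            return layer[setting]
    raise KeyError(setting)

# A CLI flag beats an env var; an env var beats the built-in default.
provider = resolve("provider", {"provider": "openai"}, {"provider": "anthropic"}, {})
profile = resolve("profile", {}, {"profile": "deep"}, {})
```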
## Provider-Specific Configuration

### Anthropic Claude
```toml
[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-sonnet-4-20250514"
max_tokens = 8192
temperature = 0.7
```

Available models:

- `claude-opus-4-20250514` (most capable)
- `claude-sonnet-4-20250514` (balanced, recommended)
- `claude-haiku-3-5-20250514` (fastest)
### OpenAI

```toml
[providers.openai]
api_key_env = "OPENAI_API_KEY"
model = "gpt-4o"
max_tokens = 8192
temperature = 0.7
```

Available models:

- `gpt-4o` (most capable)
- `gpt-4o-mini` (fast, cost-effective)
- `o1` (reasoning-optimized)
### Google Gemini

```toml
[providers.google]
api_key_env = "GOOGLE_API_KEY"
model = "gemini-2.0-flash"
```
### Groq (Fast Inference)

```toml
[providers.groq]
api_key_env = "GROQ_API_KEY"
model = "llama-3.3-70b-versatile"
```
### OpenRouter

```toml
[providers.openrouter]
api_key_env = "OPENROUTER_API_KEY"
default_model = "anthropic/claude-sonnet-4"
```

300+ models are available; see [openrouter.ai/models](https://openrouter.ai/models).
### Ollama (Local)

```toml
[providers.ollama]
base_url = "http://localhost:11434"
model = "llama3"
```

Run `ollama list` to see locally installed models.
## Custom Profiles

Create custom profiles for common use cases:
```toml
# Optimized for career decisions
[profiles.career]
tools = ["gigathink", "laserlogic", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "deep"
brutalhonesty_severity = "high"

# Optimized for verifying claims
[profiles.fact_check]
tools = ["laserlogic", "proofguard"]
proofguard_sources = 5
proofguard_require_citation = true

# Fast sanity check
[profiles.quick_sanity]
tools = ["gigathink", "brutalhonesty"]
gigathink_perspectives = 5
timeout = 30
```
Use a custom profile:

```bash
rk-core think "Should I take this job?" --profile career
```
## Output Configuration

### Pretty (Default)

```toml
[output]
format = "pretty"
color = true
box_style = "rounded" # rounded, sharp, ascii
```
### JSON

```toml
[output]
format = "json"
pretty_print = true
```
### Markdown

```toml
[output]
format = "markdown"
include_metadata = true
```
## Logging

```toml
[logging]
level = "info" # debug, info, warn, error
file = "~/.local/share/reasonkit/logs/rk.log"
rotate = true
max_size = "10MB"
```
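The `max_size` value is a human-readable size string. How such a string might decompose into bytes can be sketched as follows (hypothetical `parse_size` helper; the units ReasonKit accepts are not documented here, so KB/MB/GB with binary multipliers are assumptions):

```python
def parse_size(text: str) -> int:
    """Convert a size string such as "10MB" to bytes (assumed binary units)."""
    units = {"GB": 1024**3, "MB": 1024**2, "KB": 1024}
    for suffix, factor in units.items():
        if text.upper().endswith(suffix):
            return int(text[: -len(suffix)]) * factor
    return int(text)  # bare number: assumed to already be bytes

limit = parse_size("10MB")
```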
## Validating Configuration

```bash
# Check config is valid
rk-core config validate

# Show effective config
rk-core config show

# Show config file path
rk-core config path
```
## Next Steps

- CLI Reference — Full command documentation
- Custom ThinkTools — Create your own tools