Introduction

Turn Prompts into Protocols

ReasonKit is a structured reasoning engine that forces AI to show its work. Every angle explored. Every assumption exposed. Every decision traceable.

The Problem

Most AI responses sound helpful but miss the hard questions.

You ask: “Should I take this job offer?”

AI says: “Consider salary, benefits, and culture fit.”

What’s missing: Manager quality, team turnover, company trajectory, your leverage, opportunity cost, where people go after 2-3 years…

ReasonKit solves this by making AI reasoning structured, auditable, and reliable.

The Solution: ThinkTools

ReasonKit provides five specialized ThinkTools, each designed to catch a specific type of oversight:

Tool          | Purpose               | Catches
--------------|-----------------------|----------------------------------
GigaThink     | Explore all angles    | Perspectives you forgot
LaserLogic    | Check reasoning       | Flawed logic hiding in cliches
BedRock       | Find first principles | Simple answers under complexity
ProofGuard    | Verify claims         | “Facts” that aren’t true
BrutalHonesty | See blind spots       | The gap between plan and reality

The 5-Step Process

Every deep analysis follows this pattern:

1. DIVERGE (GigaThink)     → Explore all angles
2. CONVERGE (LaserLogic)   → Check logic, find flaws
3. GROUND (BedRock)        → First principles, simplify
4. VERIFY (ProofGuard)     → Check facts, cite sources
5. CUT (BrutalHonesty)     → Be honest about weaknesses

Quick Example

# Install
cargo install reasonkit-core

# Ask a question with structured reasoning
rk-core think "Should I ask for a raise or look for a new job?" --profile balanced

Philosophy

Designed, Not Dreamed — Structure beats intelligence.

ReasonKit doesn’t make AI “smarter.” It makes AI show its work. The value is:

  • Structured output — Not a wall of text, but organized analysis
  • Auditability — See exactly what each tool caught
  • Catching blind spots — Five tools for five types of oversight

Who Is This For?

Anyone Making Decisions

  • Job offers, purchases, life changes
  • Career pivots, relationship decisions
  • Side projects and business ideas

Professionals

  • Strategic planning and due diligence
  • Research synthesis and fact-checking
  • Risk assessment and compliance

Teams

  • Architecture decisions
  • Product strategy
  • Investment analysis
  • Hiring decisions

Open Source

ReasonKit is open source under the Apache 2.0 license.

  • Free forever: 5 core ThinkTools + PowerCombo
  • Self-host: Run locally, own your data
  • Extensible: Create custom ThinkTools

View on GitHub

Quick Start

Get ReasonKit running in 30 seconds.

Installation

# Linux / macOS
curl -fsSL https://get.reasonkit.sh | bash

# Windows PowerShell
irm https://get.reasonkit.sh/windows | iex

Using Cargo

cargo install reasonkit-core

Using pip (with uv)

uv pip install reasonkit

From Source

git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core
cargo build --release

Set Up Your LLM Provider

ReasonKit needs an LLM to power its reasoning. Set your API key:

# Anthropic Claude (Recommended)
export ANTHROPIC_API_KEY="your-key-here"

# Or OpenAI
export OPENAI_API_KEY="your-key-here"

# Or use OpenRouter for 300+ models
export OPENROUTER_API_KEY="your-key-here"

Your First Analysis

# Ask a simple question (single quotes keep the shell from expanding "$2")
rk-core think 'Should I buy this $200 gadget?'

# Use a specific profile
rk-core think "Should I take this job offer?" --profile balanced

# See the difference ReasonKit makes
rk-compare "Is renting really throwing money away?" --profile balanced

Understanding the Output

ReasonKit shows structured analysis:

╔══════════════════════════════════════════════════════════════╗
║  GIGATHINK: Exploring Perspectives                           ║
╠══════════════════════════════════════════════════════════════╣
│  1. FINANCIAL: What's the total comp? 401k match? Equity?   │
│  2. CAREER: Where do people go after 2-3 years?             │
│  3. MANAGER: Your manager = 80% of job satisfaction         │
│  ...                                                         │
╚══════════════════════════════════════════════════════════════╝

╔══════════════════════════════════════════════════════════════╗
║  LASERLOGIC: Checking Reasoning                              ║
╠══════════════════════════════════════════════════════════════╣
│  ASSUMPTION DETECTED: "Higher salary = better"              │
│  HIDDEN VARIABLE: Cost of living in new location            │
│  ...                                                         │
╚══════════════════════════════════════════════════════════════╝

Choosing a Profile

Profile    | Time     | Best For
-----------|----------|---------------------------------------
--quick    | ~10 sec  | Daily decisions
--balanced | ~20 sec  | Important choices
--deep     | ~1 min   | Major decisions
--paranoid | ~2-3 min | High-stakes, can’t afford to be wrong

Installation

Get ReasonKit’s five ThinkTools for structured AI reasoning:

Tool          | Purpose                                | Use When
--------------|----------------------------------------|----------------------------------------
GigaThink     | Expansive thinking, 10+ perspectives   | Need creative solutions, brainstorming
LaserLogic    | Precision reasoning, fallacy detection | Validating arguments, logical analysis
BedRock       | First principles decomposition         | Foundational decisions, axiom building
ProofGuard    | Multi-source verification              | Fact-checking, claim validation
BrutalHonesty | Adversarial self-critique              | Reality checks, finding flaws

Quick Install

Linux / macOS

curl -fsSL https://get.reasonkit.sh | bash

Windows (PowerShell)

irm https://get.reasonkit.sh/windows | iex

Prerequisites

  • Git (for building from source)
  • Rust 1.70+ (auto-installed if missing)
  • An LLM API key (Anthropic, OpenAI, OpenRouter, or local Ollama)

Installation Methods

The installer auto-detects your OS and architecture:

# Linux/macOS
curl -fsSL https://get.reasonkit.sh | bash

# Windows PowerShell
irm https://get.reasonkit.sh/windows | iex

This will:

  1. Install Rust if not present
  2. Clone and build ReasonKit
  3. Add rk-core to your PATH

Cargo

For Rust developers:

cargo install reasonkit-core

From Source

For development or customization:

git clone https://github.com/reasonkit/reasonkit-core
cd reasonkit-core
cargo build --release
./target/release/rk-core --help

Verify Installation

rk-core --version
# reasonkit-core 0.1.0

rk-core --help

LLM Provider Setup

ReasonKit requires an LLM provider. Choose one:

Anthropic Claude (Recommended)

Best quality reasoning:

export ANTHROPIC_API_KEY="sk-ant-..."

OpenAI

export OPENAI_API_KEY="sk-..."

OpenRouter (300+ Models)

Access to many models through one API:

export OPENROUTER_API_KEY="sk-or-..."

# Specify a model
rk-core think "question" --model anthropic/claude-3-opus

Google Gemini

export GOOGLE_API_KEY="..."

Groq (Fast Inference)

export GROQ_API_KEY="..."

Local Models (Ollama)

For privacy-sensitive use cases:

ollama serve
rk-core think "question" --provider ollama --model llama3

Quick Test

Try each ThinkTool:

# GigaThink - Get 10+ perspectives
rk-core think "Should I start a business?" --tool gigathink

# LaserLogic - Check reasoning
rk-core think "This investment guarantees 50% returns" --tool laserlogic

# BedRock - Find first principles
rk-core think "What makes a good leader?" --tool bedrock

# ProofGuard - Verify claims
rk-core think "Coffee causes cancer" --tool proofguard

# BrutalHonesty - Reality check
rk-core think "My startup idea is perfect" --tool brutalhonesty

Configuration File

Create ~/.config/reasonkit/config.toml:

[default]
provider = "anthropic"
model = "claude-3-sonnet-20240229"
profile = "balanced"

[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"

[providers.openai]
api_key_env = "OPENAI_API_KEY"
model = "gpt-4-turbo-preview"

[output]
format = "pretty"
color = true

Docker

docker run -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  ghcr.io/reasonkit/reasonkit-core \
  think "Should I buy a house?"

Troubleshooting

“API key not found”

Make sure your API key is exported:

echo $ANTHROPIC_API_KEY  # Should print your key

“Rate limited”

Use a different provider or wait. Consider OpenRouter for high volume.

“Model not available”

Check that your provider supports the requested model:

rk-core models list  # Show available models

Your First Analysis

Let’s walk through a complete ReasonKit analysis step by step.

The Scenario

You’ve received a job offer. It pays 20% more than your current role, but requires relocating to a new city. You’re not sure whether to accept.

Running the Analysis

rk-core think "I received a job offer with 20% more salary but requires relocating to a new city. Should I take it?" --profile balanced

Understanding the Output

1. GigaThink: Exploring Perspectives

GIGATHINK: 10 Perspectives

1. FINANCIAL: 20% raise minus cost of living difference
2. CAREER: Is this a career accelerator or lateral move?
3. MANAGER: Who will you report to? Track record?
4. COMPANY: Growing, stable, or declining?
5. PERSONAL: How does your partner/family feel about relocating?
6. SOCIAL: What are you leaving behind? Friends, community?
7. RISK: What if this doesn't work out? Can you return?
8. OPPORTUNITY: Is this your best option or just the one in front of you?
9. TIMING: Is now the right time in your life for a major move?
10. GUT: When you imagine accepting, do you feel excited or anxious?

What to notice: GigaThink forces you to consider angles you might have missed while focused on the salary number.

2. LaserLogic: Checking Reasoning

LASERLOGIC: Hidden Assumptions

FLAW 1: "20% more salary = better"
→ Cost of living not factored in
→ $100K in Kansas City ≠ $100K in San Francisco

FLAW 2: "More money = happier"
→ Beyond $75K, correlation with happiness flattens
→ Other factors matter more at higher incomes

FLAW 3: "This opportunity won't come again"
→ Scarcity mindset may be causing rushed decision
→ Similar opportunities likely to arise

What to notice: LaserLogic catches the mental shortcuts you might have accepted uncritically.

3. BedRock: First Principles

BEDROCK: What Actually Matters

CORE QUESTION: Is this move aligned with your 5-year goals?

THE 80/20:
• Career growth potential (not just current salary)
• Quality of life in new location
• Reversibility if it doesn't work out

SIMPLIFIED DECISION FRAMEWORK:
If the role accelerates your career AND quality of life is acceptable
→ Strong yes

If it's a lateral move OR quality of life significantly worse
→ Probably no

What to notice: BedRock cuts through complexity to the essential question.

4. ProofGuard: Verifying Claims

PROOFGUARD: Fact Check

VERIFIED: Cost of living varies dramatically by city
→ San Francisco is 80% more expensive than national average
→ Your 20% raise might be a pay cut in real terms

VERIFIED: Job hopper stigma is decreasing
→ Average tenure is now 4.1 years (BLS, 2024)
→ Leaving after 1-2 years is increasingly common

TO VERIFY: Company's actual growth trajectory
→ Check their funding, revenue, layoff history
→ Ask for specific growth numbers in interview

What to notice: ProofGuard identifies claims worth verifying before deciding.

5. BrutalHonesty: Uncomfortable Truths

BRUTALHONESTY: Reality Check

UNCOMFORTABLE TRUTH 1:
You're probably overweighting the salary because it's quantifiable.
The harder-to-measure factors (manager quality, work-life balance)
often matter more for happiness.

UNCOMFORTABLE TRUTH 2:
Relocating is harder than you think.
Building new social connections takes years.
Most people underestimate the loneliness of a new city.

UNCOMFORTABLE TRUTH 3:
You might be running FROM something, not TO something.
Is there something about your current situation you're avoiding?

HONEST QUESTIONS:
• If the salary were the same, would you still want this move?
• Have you talked to people who work there (not recruiters)?
• What's your plan if this doesn't work out after 1 year?

What to notice: BrutalHonesty asks the questions you’ve been avoiding.

What to Do Next

Based on this analysis, you might:

  1. Gather more information

    • Calculate real cost-of-living adjusted salary
    • Talk to people who work at the company
    • Visit the new city before deciding
  2. Ask better questions

    • Why is this role open? Growth or replacement?
    • What does the career path look like?
    • What’s the team turnover like?
  3. Negotiate better

    • Armed with cost-of-living data, negotiate higher
    • Ask for relocation assistance
    • Negotiate a trial period if possible
  4. Make a decision framework

    • What would make this an obvious yes?
    • What would make this an obvious no?
    • Set a deadline to decide

Tips for Future Analyses

  1. Be specific — “Job offer” is better than “career question”

  2. Include context — Mention key constraints (timeline, family, etc.)

  3. Use appropriate profile — Major decisions deserve --deep or --paranoid

  4. Focus on BrutalHonesty — It’s usually the most valuable section

  5. Action the insights — Analysis is only useful if it changes behavior

Configuration

ReasonKit can be configured via config file, environment variables, or CLI flags.

Configuration File

Create ~/.config/reasonkit/config.toml:

# Default settings
[default]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
profile = "balanced"
output_format = "pretty"

# LLM Providers
[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-sonnet-4-20250514"
max_tokens = 8192

[providers.openai]
api_key_env = "OPENAI_API_KEY"
model = "gpt-4o"
max_tokens = 8192

[providers.openrouter]
api_key_env = "OPENROUTER_API_KEY"
default_model = "anthropic/claude-sonnet-4"

[providers.ollama]
base_url = "http://localhost:11434"
model = "llama3"

# Output settings
[output]
format = "pretty"  # pretty, json, markdown
color = true
show_timing = true
show_tokens = false

# ThinkTool configurations
[thinktools.gigathink]
min_perspectives = 10
include_contrarian = true

[thinktools.laserlogic]
fallacy_detection = true
assumption_analysis = true
show_math = true

[thinktools.bedrock]
decomposition_depth = 3
show_80_20 = true

[thinktools.proofguard]
min_sources = 3
require_citation = true
source_tier_threshold = 3

[thinktools.brutalhonesty]
severity = "high"
include_alternatives = true

# Profile customization
[profiles.custom_quick]
tools = ["gigathink", "laserlogic"]
gigathink_perspectives = 5
timeout = 30

[profiles.custom_thorough]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "deep"
proofguard_sources = 5
timeout = 600

Environment Variables

# Required: Your LLM provider API key
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export OPENROUTER_API_KEY="sk-or-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="gsk_..."

# Optional: Defaults
export RK_PROVIDER="anthropic"
export RK_MODEL="claude-sonnet-4-20250514"
export RK_PROFILE="balanced"
export RK_OUTPUT_FORMAT="pretty"

# Optional: Logging
export RK_LOG_LEVEL="info"  # debug, info, warn, error
export RK_LOG_FILE="~/.local/share/reasonkit/logs/rk.log"

CLI Flags

CLI flags override config file and environment variables:

# Provider and model
rk-core think "question" --provider anthropic --model claude-3-opus-20240229

# Profile
rk-core think "question" --profile deep

# Output format
rk-core think "question" --format json

# Specific tool settings
rk-core think "question" --min-perspectives 15 --min-sources 5

# Timeout
rk-core think "question" --timeout 300

# Verbosity
rk-core think "question" --verbose
rk-core think "question" --quiet

Configuration Precedence

  1. CLI flags (highest priority)
  2. Environment variables
  3. Config file
  4. Built-in defaults (lowest priority)
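
In code, this layering is just a chain of “first set value wins” lookups. A minimal sketch, with hypothetical field names rather than the actual reasonkit-core internals:

// Sketch only: CLI > env > config file > built-in defaults.
#[derive(Clone, Debug, Default)]
struct Settings {
    provider: Option<String>,
    model: Option<String>,
    profile: Option<String>,
}

impl Settings {
    /// Fill any unset field from `fallback`; the earlier layer wins.
    fn or(self, fallback: Settings) -> Settings {
        Settings {
            provider: self.provider.or(fallback.provider),
            model: self.model.or(fallback.model),
            profile: self.profile.or(fallback.profile),
        }
    }
}

fn main() {
    let defaults = Settings {
        provider: Some("anthropic".into()),
        profile: Some("balanced".into()),
        ..Default::default()
    };
    let env = Settings { model: Some("claude-sonnet-4-20250514".into()), ..Default::default() };
    let cli = Settings { profile: Some("deep".into()), ..Default::default() };
    let config_file = Settings::default(); // no config file in this example

    let effective = cli.or(env).or(config_file).or(defaults);
    println!("{:?}", effective); // profile from CLI, model from env, provider from defaults
}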

Provider-Specific Configuration

Anthropic Claude

[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-sonnet-4-20250514"
max_tokens = 8192
temperature = 0.7

Available models:

  • claude-opus-4-20250514 (most capable)
  • claude-sonnet-4-20250514 (balanced, recommended)
  • claude-haiku-3-5-20250514 (fastest)

OpenAI

[providers.openai]
api_key_env = "OPENAI_API_KEY"
model = "gpt-4o"
max_tokens = 8192
temperature = 0.7

Available models:

  • gpt-4o (most capable)
  • gpt-4o-mini (fast, cost-effective)
  • o1 (reasoning-optimized)

Google Gemini

[providers.google]
api_key_env = "GOOGLE_API_KEY"
model = "gemini-2.0-flash"

Groq (Fast Inference)

[providers.groq]
api_key_env = "GROQ_API_KEY"
model = "llama-3.3-70b-versatile"

OpenRouter

[providers.openrouter]
api_key_env = "OPENROUTER_API_KEY"
default_model = "anthropic/claude-sonnet-4"

300+ models available. See openrouter.ai/models.

Ollama (Local)

[providers.ollama]
base_url = "http://localhost:11434"
model = "llama3"

Run ollama list to see available models.

Custom Profiles

Create custom profiles for common use cases:

[profiles.career]
# Optimized for career decisions
tools = ["gigathink", "laserlogic", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "deep"
brutalhonesty_severity = "high"

[profiles.fact_check]
# Optimized for verifying claims
tools = ["laserlogic", "proofguard"]
proofguard_sources = 5
proofguard_require_citation = true

[profiles.quick_sanity]
# Fast sanity check
tools = ["gigathink", "brutalhonesty"]
gigathink_perspectives = 5
timeout = 30

Use custom profiles:

rk-core think "Should I take this job?" --profile career

Output Configuration

Pretty (Default)

[output]
format = "pretty"
color = true
box_style = "rounded"  # rounded, sharp, ascii

JSON

[output]
format = "json"
pretty_print = true

Markdown

[output]
format = "markdown"
include_metadata = true

Logging

[logging]
level = "info"  # debug, info, warn, error
file = "~/.local/share/reasonkit/logs/rk.log"
rotate = true
max_size = "10MB"

Validating Configuration

# Check config is valid
rk-core config validate

# Show effective config
rk-core config show

# Show config file path
rk-core config path

ThinkTools Overview

ThinkTools are specialized reasoning modules that catch specific types of oversight in AI analysis.

The Five Core ThinkTools

Tool          | Purpose               | Blind Spot It Catches
--------------|-----------------------|----------------------------------
GigaThink     | Explore all angles    | Perspectives you forgot
LaserLogic    | Check reasoning       | Flawed logic in cliches
BedRock       | First principles      | Simple answers under complexity
ProofGuard    | Verify claims         | “Facts” that aren’t true
BrutalHonesty | See blind spots       | Gap between plan and reality

How They Work Together

The ThinkTools follow a designed sequence:

┌─────────────────────────────────────────────────────────────┐
│                    THE 5-STEP PROCESS                        │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│   1. DIVERGE      →   Explore all possibilities first       │
│   (GigaThink)         Don't narrow too early                 │
│                                                              │
│   2. CONVERGE     →   Check logic, find flaws               │
│   (LaserLogic)        Question assumptions                   │
│                                                              │
│   3. GROUND       →   Strip to first principles             │
│   (BedRock)           What actually matters?                 │
│                                                              │
│   4. VERIFY       →   Check facts against sources           │
│   (ProofGuard)        Triangulate claims                     │
│                                                              │
│   5. CUT          →   Attack your own work                  │
│   (BrutalHonesty)     Find the uncomfortable truths          │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Why This Sequence?

The order is deliberate:

  1. Divergent → Convergent: Explore widely before focusing
  2. Abstract → Concrete: From ideas to principles to evidence
  3. Constructive → Destructive: Build up, then attack

Using Individual Tools

You can invoke any tool directly:

# Just explore perspectives
rk-core gigathink "Should I start a business?"

# Just check logic
rk-core laserlogic "Renting is throwing money away"

# Just find first principles
rk-core bedrock "How do I get healthier?"

# Just verify a claim
rk-core proofguard "You should drink 8 glasses of water a day"

# Just get brutal honesty
rk-core brutalhonesty "I want to start a YouTube channel"

Using PowerCombo

PowerCombo runs all five tools in sequence:

# Full analysis
rk-core think "Should I take this job offer?" --profile balanced

The profile determines how thorough each tool’s analysis is.

Tool Configurations

Each tool can be configured:

# ~/.config/reasonkit/config.toml

[thinktools.gigathink]
min_perspectives = 10
include_contrarian = true

[thinktools.laserlogic]
fallacy_detection = true
assumption_analysis = true

[thinktools.bedrock]
decomposition_depth = 3

[thinktools.proofguard]
min_sources = 3
require_citation = true

[thinktools.brutalhonesty]
severity = "high"  # low, medium, high

Next Steps

Dive deep into each tool in the chapters that follow.

GigaThink

See All The Angles

GigaThink catches the perspectives you forget when you’re excited about an opportunity or focused on one aspect of a decision.

The Blind Spot

You’re only thinking about a decision from one angle. When we’re excited or stressed, we tend to focus narrowly and miss important perspectives.

Example

Question: “Should I take this job offer?”

Without GigaThink:

“Consider salary, benefits, and culture fit.”

With GigaThink:

10 PERSPECTIVES YOU MIGHT MISS:

1. FINANCIAL: What's the TOTAL comp? 401k match? Equity?
   → Base salary is only part of the picture

2. CAREER TRAJECTORY: Where do people go after 2-3 years?
   → This job is a stepping stone. Where does it lead?

3. MANAGER QUALITY: Your manager = 80% of job satisfaction
   → Who will you report to? What's their reputation?

4. TEAM HEALTH: Why is this role open? Growth or replacement?
   → High turnover = red flag

5. COMPANY TRAJECTORY: Growing, stable, or declining?
   → Joining a sinking ship vs. rocket ship matters

6. WORK-LIFE REALITY: Glassdoor hours vs stated hours
   → "Flexible hours" sometimes means "always working"

7. LEARNING VELOCITY: Will you learn more here?
   → Stagnation kills careers

8. YOUR LEVERAGE: Do they NEED you or just WANT you?
   → Negotiate from strength

9. OPPORTUNITY COST: Is this your best option or first option?
   → Keep looking? Wait for better?

10. GUT CHECK: When you imagine accepting, relief or dread?
    → Your intuition knows something

Usage

CLI

# Direct invocation
rk-core gigathink "Should I start a business?"

# With options
rk-core gigathink "Career change?" --min-perspectives 15

Rust API

use reasonkit::thinktools::GigaThink;

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let gigathink = GigaThink::new()
        .min_perspectives(10)
        .include_contrarian(true);

    let result = gigathink.analyze("Should I take this job offer?").await?;

    for perspective in result.perspectives {
        println!("{}: {}", perspective.category, perspective.insight);
    }
    Ok(())
}

Python

from reasonkit import GigaThink

gt = GigaThink(min_perspectives=10)
result = gt.analyze("Should I take this job offer?")

for p in result.perspectives:
    print(f"{p.category}: {p.insight}")

Configuration

[thinktools.gigathink]
# Minimum number of perspectives to generate
min_perspectives = 10

# Include deliberately contrarian perspectives
include_contrarian = true

# Categories to always include
required_categories = [
    "financial",
    "career",
    "personal",
    "risk",
    "opportunity_cost"
]

# Maximum perspectives (to avoid analysis paralysis)
max_perspectives = 20

Output Format

{
    "tool": "gigathink",
    "query": "Should I take this job offer?",
    "perspectives": [
        {
            "category": "financial",
            "label": "TOTAL COMPENSATION",
            "insight": "What's the 401k match? Equity vesting schedule? Bonus structure?",
            "questions": ["What's the full compensation package?", "How does equity vest?"]
        }
    ],
    "meta": {
        "perspective_count": 10,
        "categories_covered": ["financial", "career", "personal", "risk", "opportunity"],
        "contrarian_included": true
    }
}

Best Practices

  1. Don’t stop at the obvious — The first 3-4 perspectives are usually obvious. The value is in perspectives 5-10.

  2. Question the frame — GigaThink might reveal that you’re asking the wrong question entirely.

  3. Note discomfort — If a perspective makes you uncomfortable, that’s often the most important one.

  4. Use for big decisions — GigaThink shines for career, financial, and relationship decisions.

Next Steps

  • LaserLogic — After exploring angles, check your reasoning
  • PowerCombo — Run all five tools in sequence

LaserLogic

Spot Bad Reasoning

LaserLogic catches arguments that sound wise but hide flawed logic. It exposes hidden assumptions and logical fallacies.

The Blind Spot

Arguments that sound wise but hide flawed logic. “Common wisdom” often contains false equivalences, missing variables, and unexamined assumptions.

Example

Claim: “Renting is throwing money away”

Without LaserLogic:

“Building equity is generally better long-term.”

With LaserLogic:

HIDDEN ASSUMPTIONS EXPOSED:

1. FALSE EQUIVALENCE
   Rent = 100% goes to housing (you get shelter)
   Mortgage = 60-80% goes to INTEREST (also "thrown away")
   → Early mortgage payments are mostly interest, not equity

2. MISSING VARIABLES
   - Down payment could be invested in S&P 500 (7-10% annual return)
   - Transaction costs: 6% realtor fees when selling
   - Maintenance: 1-2% of home value annually
   - Property taxes: ongoing cost that renters don't pay
   - Insurance: typically higher for owners
   - Opportunity cost of capital tied up in house

3. ASSUMES APPRECIATION
   "Houses always go up" — ask anyone who bought in 2007
   → Real estate is local and cyclical

4. IGNORES FLEXIBILITY
   Rent: 30 days to leave
   Own: 6+ months to sell, 6% transaction costs
   → Flexibility has economic value

5. SURVIVORSHIP BIAS
   You hear from people who made money on houses
   You don't hear from people who lost money

VERDICT: "Renting is throwing money away" is OVERSIMPLIFIED
Breakeven typically requires 5-7 years in same location.
The right answer depends on your specific situation.

Usage

CLI

# Direct invocation
rk-core laserlogic "Renting is throwing money away"

# Check specific argument
rk-core laserlogic "You should follow your passion" --check-fallacies

Rust API

use reasonkit::thinktools::LaserLogic;

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let laser = LaserLogic::new()
        .check_fallacies(true)
        .check_assumptions(true);

    let result = laser.analyze("Renting is throwing money away").await?;

    for flaw in result.flaws {
        println!("{}: {}", flaw.category, flaw.explanation);
    }
    Ok(())
}

Fallacy Detection

LaserLogic identifies common logical fallacies:

Fallacy              | Description                      | Example
---------------------|----------------------------------|-------------------------------------------
False equivalence    | Treating unlike things as equal  | “Rent = waste, mortgage = investment”
Missing variables    | Ignoring relevant factors        | Ignoring maintenance costs
Survivorship bias    | Only seeing successes            | “My friend got rich from real estate”
Sunk cost fallacy    | Over-valuing past investment     | “I’ve spent too much to quit now”
Appeal to authority  | Trusting credentials over logic  | “Experts say…”
Hasty generalization | Too few examples                 | “Everyone I know…”
False dichotomy      | Only two options when more exist | “Buy or rent” (ignores “rent and invest”)

Configuration

[thinktools.laserlogic]
# Check for logical fallacies
fallacy_detection = true

# Analyze hidden assumptions
assumption_analysis = true

# Show mathematical breakdowns where applicable
show_math = true

# Severity threshold (0.0 - 1.0)
min_severity = 0.3
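
To make the threshold concrete: flaws scoring below min_severity would simply be dropped from the report. A sketch with a hypothetical Flaw type (the real output fields are shown under Output Format below):

// Sketch only: gating reported flaws on the configured threshold.
struct Flaw {
    category: String,
    severity: f64, // 0.0 - 1.0, as in min_severity above
}

fn report(flaws: Vec<Flaw>, min_severity: f64) -> Vec<Flaw> {
    flaws.into_iter().filter(|f| f.severity >= min_severity).collect()
}

fn main() {
    let flaws = vec![
        Flaw { category: "false_equivalence".into(), severity: 0.8 },
        Flaw { category: "hasty_generalization".into(), severity: 0.2 },
    ];
    // With min_severity = 0.3, only the 0.8 flaw survives.
    for f in report(flaws, 0.3) {
        println!("{} ({:.1})", f.category, f.severity);
    }
}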

Output Format

{
    "tool": "laserlogic",
    "claim": "Renting is throwing money away",
    "flaws": [
        {
            "category": "false_equivalence",
            "severity": 0.8,
            "description": "Treating rent and mortgage interest as different",
            "explanation": "Early mortgage payments are 60-80% interest",
            "counter": "Both rent and interest provide shelter value"
        }
    ],
    "verdict": {
        "classification": "oversimplified",
        "confidence": 0.85,
        "nuance": "True under specific conditions (5-7 year horizon, stable location)"
    }
}

Best Practices

  1. Question cliches — If advice is short and catchy, it’s probably oversimplified.

  2. Follow the math — Many “wisdom” claims fall apart when you do the actual calculation.

  3. Ask “compared to what?” — Every choice has an alternative. What’s being compared?

  4. Look for hidden variables — What factors is the argument ignoring?

Common Myths LaserLogic Exposes

  • “A college degree always pays off”
  • “You can’t time the market”
  • “Networking is everything”
  • “Hard work always pays off”
  • “Early is always better than late”
Next Steps

  • GigaThink — First explore angles, then check logic
  • ProofGuard — After checking logic, verify facts

BedRock

Find the Simple Answer

BedRock cuts through complexity to find first principles. It identifies what actually matters when you’re overwhelmed with options.

The Blind Spot

Overwhelmed with options, missing what actually matters. Information overload leads to analysis paralysis and missing the simple truth.

Example

Question: “How do I get healthier?”

Without BedRock:

“Consider diet, exercise, sleep, stress management, supplements, biohacking, intermittent fasting, cold exposure, meditation…”

With BedRock:

FIRST PRINCIPLES ANALYSIS:

What actually moves the needle? (Research consensus)

1. SLEEP: 7-9 hours
   → Most ignored, highest impact
   → Affects hormones, recovery, decision-making
   → Foundation for everything else

2. MOVEMENT: 150 min/week moderate OR 75 min vigorous
   → Doesn't need to be fancy
   → Walking counts

3. NUTRITION: Mostly plants, enough protein, not too much
   → The specifics matter less than the basics
   → Most diets work by reducing total calories

═══════════════════════════════════════════════════════════════

THE 80/20 ANSWER:

If you do ONLY these three things:
1. Sleep 7+ hours (non-negotiable)
2. Walk 30 min daily
3. Eat one vegetable with every meal

→ You'll be healthier than 80% of people.

Everything else (supplements, biohacking, specific diets)
is optimization on top of these basics.

═══════════════════════════════════════════════════════════════

THE UNCOMFORTABLE TRUTH:

You probably already know what to do.
The problem isn't information, it's execution.

The question isn't "how do I get healthier?"
The question is "what's stopping me from doing what I already know?"

Usage

CLI

# Direct invocation
rk-core bedrock "How do I get healthier?"

# With depth level
rk-core bedrock "How do I build a business?" --depth 3

Rust API

use reasonkit::thinktools::BedRock;

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let bedrock = BedRock::new()
        .decomposition_depth(3)
        .show_80_20(true);

    let result = bedrock.analyze("How do I get healthier?").await?;

    println!("Core principles:");
    for principle in result.first_principles {
        println!("- {}: {}", principle.name, principle.description);
    }

    println!("\n80/20 answer:\n{}", result.pareto_answer);
    Ok(())
}

First Principles Method

BedRock follows a structured decomposition:

1. DECOMPOSE
   Break the question into fundamental components
   "Health" → Physical, Mental, Longevity

2. EVIDENCE CHECK
   What does research actually say?
   Filter signal from noise

3. PARETO ANALYSIS
   What 20% of actions give 80% of results?
   Find the vital few

4. UNCOMFORTABLE TRUTH
   What does the questioner already know but avoid?
   Address the real blocker
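
Step 1 is what the decomposition_depth setting below controls. A minimal sketch of depth-limited decomposition, assuming a hypothetical expand function in place of the LLM call that names a node’s components:

// Sketch only: recursive decomposition, cut off at a fixed depth.
struct Node {
    label: String,
    children: Vec<Node>,
}

fn decompose(label: &str, depth: u32, expand: &dyn Fn(&str) -> Vec<String>) -> Node {
    let children = if depth == 0 {
        Vec::new()
    } else {
        expand(label)
            .into_iter()
            .map(|child| decompose(&child, depth - 1, expand))
            .collect()
    };
    Node { label: label.to_string(), children }
}

fn main() {
    // Toy expansion table standing in for the model.
    let expand = |label: &str| match label {
        "Health" => vec!["Physical".to_string(), "Mental".to_string()],
        "Physical" => vec!["Sleep".to_string(), "Movement".to_string(), "Nutrition".to_string()],
        _ => Vec::new(),
    };
    let tree = decompose("Health", 2, &expand);
    println!("{} -> {} top-level components", tree.label, tree.children.len());
}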

Configuration

[thinktools.bedrock]
# How deep to decompose (1-5)
decomposition_depth = 3

# Include 80/20 analysis
show_80_20 = true

# Include uncomfortable truths
include_uncomfortable_truth = true

# Require research backing
require_evidence = true

Output Format

{
    "tool": "bedrock",
    "query": "How do I get healthier?",
    "first_principles": [
        {
            "name": "Sleep",
            "priority": 1,
            "evidence_level": "strong",
            "description": "7-9 hours is foundation for everything else",
            "why_first": "Affects hormones, recovery, decision-making"
        }
    ],
    "pareto_answer": {
        "actions": ["Sleep 7+ hours", "Walk 30 min daily", "Eat vegetables"],
        "expected_impact": "Better than 80% of population"
    },
    "uncomfortable_truth": {
        "insight": "You probably already know what to do",
        "real_question": "What's stopping you from doing what you already know?"
    }
}

When to Use BedRock

BedRock is most valuable when:

  • Information overload — Too many options, too many opinions
  • Analysis paralysis — Can’t decide because of complexity
  • Seeking the “best” approach — When “good enough” is actually optimal
  • Recurring problems — When you keep revisiting the same issue

The 80/20 Principle

BedRock applies Pareto’s principle:

  • 20% of actions give 80% of results
  • The “best” solution is often worse than “good enough done consistently”
  • Complexity is often a form of procrastination

ProofGuard

Verify Before You Believe

ProofGuard catches widely-believed “facts” that aren’t actually true. It triangulates claims against multiple sources.

The Blind Spot

Acting on “facts” you never verified. Many commonly accepted beliefs have no scientific basis or are misunderstood.

Example

Claim: “You should drink 8 glasses of water a day”

Without ProofGuard:

“Yes, staying hydrated is important. Drink plenty of water.”

With ProofGuard:

CLAIM VERIFICATION:
"You should drink 8 glasses of water a day"

═══════════════════════════════════════════════════════════════

SOURCE 1: British Medical Journal (2007)
"Drink at least eight glasses of water a day. Really?"
→ "No scientific evidence for 8x8 recommendation"
→ Origin traced to 1945 Food and Nutrition Board misinterpretation
→ The original recommendation included water from food

SOURCE 2: Mayo Clinic (2022)
→ "Adequate daily fluid intake is about 15.5 cups for men, 11.5 for women"
→ This is TOTAL fluids (includes food), not just water
→ "Most healthy people can stay hydrated by drinking water when thirsty"

SOURCE 3: National Academy of Sciences (2004)
"Dietary Reference Intakes for Water"
→ "Most people meet hydration needs through normal thirst"
→ No evidence of widespread dehydration in general population
→ Urine color is a better indicator than counting glasses

═══════════════════════════════════════════════════════════════

CROSS-REFERENCE ANALYSIS:
✓ All three sources agree: 8x8 has no scientific basis
✓ All three sources agree: thirst is generally reliable
✓ All three sources agree: food provides significant water

═══════════════════════════════════════════════════════════════

VERDICT: MOSTLY MYTH

• "8 glasses" has no scientific basis
• Food provides 20-30% of water intake
• Coffee/tea count toward hydration (mild diuretic effect is offset)
• Your body has a hydration sensor: thirst
• Overhydration (hyponatremia) is actually more dangerous than mild dehydration

PRACTICAL TRUTH:
Drink when thirsty. Check urine color (pale yellow = good).
No need to count glasses.

Usage

CLI

# Direct invocation
rk-core proofguard "You should drink 8 glasses of water a day"

# Require specific number of sources
rk-core proofguard "Breakfast is the most important meal" --min-sources 3

Rust API

use reasonkit::thinktools::ProofGuard;

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let proofguard = ProofGuard::new()
        .min_sources(3)
        .require_citation(true);

    let result = proofguard.verify("8 glasses of water a day").await?;

    println!("Verdict: {:?}", result.verdict);
    for source in result.sources {
        println!("- {}: {}", source.name, source.finding);
    }
    Ok(())
}

Source Tiers

ProofGuard prioritizes sources by reliability:

Tier | Source Type                                  | Weight
-----|----------------------------------------------|-------
1    | Peer-reviewed journals, meta-analyses        | 1.0
2    | Government health agencies (CDC, NHS)        | 0.9
3    | Major medical institutions (Mayo, Cleveland) | 0.8
4    | Established news with citations              | 0.5
5    | Uncited claims, social media                 | 0.1
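
To illustrate how these weights could feed a verdict, here is a sketch of a weighted agreement score; the scoring rule is an assumption for illustration, not ProofGuard’s actual algorithm:

// Sketch only: tier weights from the table above, combined into a
// signed agreement score in [-1.0, 1.0].
fn tier_weight(tier: u8) -> f64 {
    match tier {
        1 => 1.0,
        2 => 0.9,
        3 => 0.8,
        4 => 0.5,
        _ => 0.1,
    }
}

/// Each finding: (tier, supports_claim).
fn weighted_support(findings: &[(u8, bool)]) -> f64 {
    let total: f64 = findings.iter().map(|&(t, _)| tier_weight(t)).sum();
    if total == 0.0 {
        return 0.0;
    }
    let signed: f64 = findings
        .iter()
        .map(|&(t, supports)| if supports { tier_weight(t) } else { -tier_weight(t) })
        .sum();
    signed / total
}

fn main() {
    // Three high-tier sources all contradict the claim: strong "myth" signal.
    let findings = [(1, false), (2, false), (1, false)];
    println!("support = {:.2}", weighted_support(&findings)); // -1.00
}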

Verification Method

1. IDENTIFY CLAIM
   Extract the specific, falsifiable claim

2. MULTI-SOURCE SEARCH
   Find 3+ independent sources
   Prioritize Tier 1-2 sources

3. TRIANGULATION
   Do sources agree or conflict?
   What's the consensus?

4. ORIGIN TRACE
   Where did this claim originate?
   Is it misquoted or out of context?

5. VERDICT
   True / False / Partially True / Myth / Nuanced

Configuration

[thinktools.proofguard]
# Minimum sources required
min_sources = 3

# Require citations to be verified
require_citation = true

# Include origin tracing
trace_origin = true

# Source tier threshold (1-5)
min_source_tier = 3

Output Format

{
    "tool": "proofguard",
    "claim": "You should drink 8 glasses of water a day",
    "sources": [
        {
            "name": "British Medical Journal",
            "year": 2007,
            "tier": 1,
            "finding": "No scientific evidence for 8x8 recommendation",
            "url": "https://..."
        }
    ],
    "triangulation": {
        "agreement": "strong",
        "conflicts": null
    },
    "origin": {
        "traced_to": "1945 Food and Nutrition Board",
        "misinterpretation": "Original included water from food"
    },
    "verdict": {
        "classification": "myth",
        "confidence": 0.9,
        "nuance": "Thirst is generally reliable; no need to count glasses"
    }
}

Common Myths ProofGuard Exposes

  • “Breakfast is the most important meal of the day”
  • “We only use 10% of our brains”
  • “Sugar makes kids hyperactive”
  • “You need 10,000 steps per day”
  • “Cracking knuckles causes arthritis”
  • “Reading in dim light damages your eyes”

Best Practices

  1. Question “everyone knows” claims — The more universal a belief, the more worth verifying

  2. Trace origins — Many myths start from misquoted studies or marketing

  3. Check for conflicts of interest — Who benefits from this claim?

  4. Update beliefs — Science changes; what was “known” 20 years ago may be wrong

BrutalHonesty

See Your Blind Spots

BrutalHonesty catches the gap between your optimistic plan and reality. It tells you what you don’t want to hear—before reality does.

The Blind Spot

You love your plan and can’t see its flaws. Optimism bias makes us overestimate success and underestimate challenges.

Example

Statement: “I’m going to start a YouTube channel as a side hustle”

Without BrutalHonesty:

“Focus on a niche and be consistent!”

With BrutalHonesty:

UNCOMFORTABLE TRUTHS:

═══════════════════════════════════════════════════════════════

1. THE MATH DOESN'T WORK FOR 99%

   • Median YouTuber income: $0
   • Top 3% of channels get 90% of views
   • Ad revenue: ~$3-5 per 1,000 views
   • To make $50K/year: need 10-15 million views/year
   • That's 30,000+ views per DAY, every day

═══════════════════════════════════════════════════════════════

2. TIME INVESTMENT IS MASSIVE

   • 1 quality video = 10-40 hours
     - Research: 2-5 hours
     - Scripting: 2-5 hours
     - Filming: 2-8 hours
     - Editing: 4-20 hours
     - Thumbnail/title: 1-2 hours

   • That's a part-time job for months with zero return
   • "Consistency" means this schedule EVERY WEEK

═══════════════════════════════════════════════════════════════

3. YOU'RE COMPETING WITH PROFESSIONALS

   • MrBeast has a 100-person team
   • Your "side hustle" competes with their full-time career
   • Top creators work 60-80 hour weeks
   • They have professional equipment, editors, writers

═══════════════════════════════════════════════════════════════

4. BURNOUT IS THE ACTUAL OUTCOME

   • 95% of channels quit within first year
   • Common pattern: excitement → grind → disappointment → quit
   • Most channels never hit 1,000 subscribers

═══════════════════════════════════════════════════════════════

HONEST QUESTIONS:

• Can you commit 20+ hrs/week for 2 years with zero return?
• Why YouTube specifically? (Newsletter/podcast may be easier)
• Is this for money or creative expression? (Different strategies)
• What's your unique advantage? (Why would anyone watch YOU?)
• Have you made 10 videos already? (Most quit before 10)

═══════════════════════════════════════════════════════════════

IF YOU STILL WANT TO DO IT:

• Make 10 videos before "launching" (tests commitment)
• Treat it as hobby, not business, until proven
• Set a 6-month review point with specific metrics
• Have a "quit threshold" to avoid sunk cost fallacy
• Consider it successful if you enjoy the process, not the outcome

Usage

CLI

# Direct invocation
rk-core brutalhonesty "I'm going to start a YouTube channel"

# Adjust severity
rk-core brutalhonesty "I'm going to quit my job to write a novel" --severity high

Rust API

use reasonkit::thinktools::{BrutalHonesty, Severity}; // Severity import path assumed

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let bh = BrutalHonesty::new()
        .severity(Severity::High)
        .include_alternatives(true);

    let result = bh.analyze("I'm starting a YouTube channel").await?;

    println!("Uncomfortable truths:");
    for truth in result.uncomfortable_truths {
        println!("- {}", truth);
    }

    println!("\nHonest questions:");
    for question in result.questions {
        println!("- {}", question);
    }
    Ok(())
}

Severity Levels

Level  | Description     | Use Case
-------|-----------------|--------------------------
Low    | Gentle pushback | Early exploration
Medium | Direct feedback | Normal decisions
High   | No-holds-barred | High-stakes, need reality

The BrutalHonesty Method

1. STATISTICAL REALITY
   What do the actual numbers say?
   Base rates, not anecdotes

2. COMPETITION ANALYSIS
   Who are you actually competing against?
   What's their unfair advantage?

3. TIME/EFFORT AUDIT
   What's the true time investment?
   Opportunity cost calculation

4. FAILURE MODE MAPPING
   How do most attempts like this fail?
   What's the most likely outcome?

5. HONEST QUESTIONS
   Questions that force confrontation with reality
   What you'd ask a friend in this situation

6. CONDITIONAL ADVICE
   "If you still want to do this..."
   How to approach it wisely
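
Step 1 (“base rates, not anecdotes”) can be made concrete with a small calculation, reusing the YouTube numbers above. The shrinkage rule here is an illustrative assumption, not the tool’s actual method:

// Sketch only: pull an optimistic personal estimate toward the base rate
// when there is little evidence you differ from it.
fn blended_estimate(personal_estimate: f64, base_rate: f64, evidence_weight: f64) -> f64 {
    evidence_weight * personal_estimate + (1.0 - evidence_weight) * base_rate
}

fn main() {
    let base_rate = 0.01;         // ~1% of channels reach meaningful income
    let personal_estimate = 0.50; // "I'm different" optimism
    let evidence_weight = 0.1;    // almost no track record yet
    // Prints ~0.06: far closer to the base rate than to the optimism.
    println!("{:.2}", blended_estimate(personal_estimate, base_rate, evidence_weight));
}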

Configuration

[thinktools.brutalhonesty]
# Severity level: low, medium, high
severity = "high"

# Include alternative suggestions
include_alternatives = true

# Include conditional advice (if they proceed)
include_conditional = true

# Base rate lookup
use_statistics = true

Output Format

{
    "tool": "brutalhonesty",
    "plan": "Start a YouTube channel as a side hustle",
    "uncomfortable_truths": [
        {
            "category": "math",
            "truth": "Median YouTuber income is $0",
            "evidence": "Top 3% get 90% of views"
        }
    ],
    "questions": [
        "Can you commit 20+ hrs/week for 2 years with zero return?",
        "Why YouTube specifically?"
    ],
    "base_rates": {
        "success_rate": 0.01,
        "quit_rate_year_1": 0.95,
        "median_income": 0
    },
    "conditional_advice": [
        "Make 10 videos before launching",
        "Treat as hobby until proven",
        "Set a 6-month review point"
    ]
}

Common Plans BrutalHonesty Scrutinizes

  • “I’m going to become a content creator”
  • “I’m going to start a business”
  • “I’m going to write a book”
  • “I’m going to become a day trader”
  • “I’m going to become an influencer”
  • “I’m going to drop out and code”

When to Use BrutalHonesty

  • Before big commitments — Quitting job, major investment
  • When excited — Excitement impairs judgment
  • After being told “great idea!” — Friends are often too supportive
  • Recurring ideas — If you keep revisiting, get honest

The Value of Honest Feedback

BrutalHonesty isn’t about discouragement. It’s about:

  1. Informed decisions — Know what you’re getting into
  2. Better planning — Address challenges before they arise
  3. Appropriate expectations — Success metrics that make sense
  4. Early pivots — Recognize bad paths before sunk costs accumulate
Next Steps

  • GigaThink — Explore alternatives first
  • BedRock — Find what actually matters

PowerCombo

All Five Tools in Sequence

PowerCombo runs all five ThinkTools in the optimal sequence for comprehensive analysis.

The 5-Step Process

┌─────────────────────────────────────────────────────────────┐
│                      POWERCOMBO                              │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│   1. GigaThink      → Explore all angles                    │
│                        Cast a wide net first                 │
│                                                              │
│   2. LaserLogic     → Check the reasoning                   │
│                        Find logical flaws                    │
│                                                              │
│   3. BedRock        → Find first principles                 │
│                        Cut to what matters                   │
│                                                              │
│   4. ProofGuard     → Verify the facts                      │
│                        Triangulate claims                    │
│                                                              │
│   5. BrutalHonesty  → Face uncomfortable truths             │
│                        Attack your own conclusions           │
│                                                              │
└─────────────────────────────────────────────────────────────┘

Why This Order?

The sequence is deliberate:

  1. Divergent → Convergent

    • First explore widely (GigaThink)
    • Then narrow ruthlessly (LaserLogic, BedRock)
  2. Abstract → Concrete

    • Start with ideas (GigaThink)
    • Move to principles (BedRock)
    • End with evidence (ProofGuard)
  3. Constructive → Destructive

    • Build up possibilities first
    • Then attack your own work (BrutalHonesty)

Usage

CLI

# Run full analysis
rk-core think "Should I take this job offer?" --profile balanced

# Equivalent to:
rk-core powercombo "Should I take this job offer?" --profile balanced

With Profiles

Profile    | Time     | Depth
-----------|----------|-------------------------
--quick    | ~10 sec  | Light pass on each tool
--balanced | ~20 sec  | Standard depth
--deep     | ~1 min   | Thorough analysis
--paranoid | ~2-3 min | Maximum scrutiny

Rust API

use reasonkit::thinktools::PowerCombo;
use reasonkit::profiles::Profile;

// Async API: shown with tokio and anyhow; adapt to your runtime.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let combo = PowerCombo::new()
        .profile(Profile::Balanced);

    let result = combo.analyze("Should I take this job offer?").await?;

    // Access each tool's output
    println!("GigaThink found {} perspectives", result.gigathink.perspectives.len());
    println!("LaserLogic found {} flaws", result.laserlogic.flaws.len());
    println!("BedRock principles: {:?}", result.bedrock.first_principles);
    println!("ProofGuard verdict: {:?}", result.proofguard.verdict);
    println!("BrutalHonesty truths: {:?}", result.brutalhonesty.uncomfortable_truths);
    Ok(())
}

Example Output

Question: “Should I buy a house?”

╔══════════════════════════════════════════════════════════════╗
║  POWERCOMBO ANALYSIS                                         ║
║  Question: Should I buy a house?                             ║
║  Profile: balanced                                           ║
╚══════════════════════════════════════════════════════════════╝

┌──────────────────────────────────────────────────────────────┐
│  GIGATHINK: Exploring Perspectives                           │
├──────────────────────────────────────────────────────────────┤
│  1. FINANCIAL: Down payment, mortgage rates, total cost     │
│  2. LIFESTYLE: Stability vs. flexibility trade-off          │
│  3. CAREER: Does your job require mobility?                 │
│  4. MARKET: Is this a good time/location to buy?            │
│  5. OPPORTUNITY: What else could you do with that money?    │
│  6. MAINTENANCE: Are you prepared for ongoing costs?        │
│  7. TIMELINE: How long will you stay?                       │
│  8. EMOTIONAL: Ownership satisfaction vs. renting freedom   │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│  LASERLOGIC: Checking Reasoning                              │
├──────────────────────────────────────────────────────────────┤
│  FLAW: "Renting is throwing money away"                     │
│  → Mortgage interest is also "thrown away"                  │
│  → Early payments are 60-80% interest                       │
│                                                              │
│  FLAW: "Houses always appreciate"                           │
│  → Real estate is local and cyclical                        │
│  → 2007-2012 counterexample                                 │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│  BEDROCK: First Principles                                   │
├──────────────────────────────────────────────────────────────┤
│  CORE QUESTION: Will you be in the same place for 5-7 years?│
│                                                              │
│  THE 80/20:                                                  │
│  • Breakeven on transaction costs: 5-7 years                │
│  • If yes to stability → buying can make sense              │
│  • If no/uncertain → renting is financially rational        │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│  PROOFGUARD: Fact Check                                      │
├──────────────────────────────────────────────────────────────┤
│  VERIFIED: Transaction costs are 6-10% (realtor, closing)   │
│  VERIFIED: Average homeowner stays 13 years (NAR, 2024)     │
│  VERIFIED: Maintenance averages 1-2% of home value/year     │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│  BRUTALHONESTY: Uncomfortable Truths                         │
├──────────────────────────────────────────────────────────────┤
│  • You're asking because you want validation, not analysis  │
│  • "Investment" framing obscures lifestyle preferences      │
│  • Most people decide emotionally, then justify rationally  │
│                                                              │
│  HONEST QUESTION:                                            │
│  If rent and buy were exactly equal financially,            │
│  which would you choose? That's your real preference.       │
└──────────────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════════

SYNTHESIS:
The buy-vs-rent decision depends primarily on timeline.
If staying 5-7+ years in one location: buying can make sense.
If uncertain or likely to move: renting is financially rational.
Most "rent is throwing money away" arguments are oversimplified.

Configuration

[thinktools.powercombo]
# Tools to include (default: all)
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]

# Order (default: standard)
order = "standard"  # or "custom"

# Include synthesis at end
include_synthesis = true

Output Formats

# Pretty terminal output (default)
rk-core think "question" --format pretty

# JSON for programmatic use
rk-core think "question" --format json

# Markdown for documentation
rk-core think "question" --format markdown

Best Practices

  1. Use profiles appropriately — Quick for small decisions, paranoid for major ones

  2. Read all sections — Each tool catches different things

  3. Focus on BrutalHonesty — It’s often the most valuable

  4. Use the synthesis — The combined insight is greater than parts

Reasoning Profiles

Match your analysis depth to your decision stakes.

Profiles are pre-configured tool combinations optimized for different use cases. Think of them as “presets” that balance thoroughness against time.

The Four Profiles

┌─────────────────────────────────────────────────────────────────────────┐
│                         PROFILE SPECTRUM                                │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│   QUICK        BALANCED        DEEP          PARANOID                   │
│     │              │             │               │                       │
│    10s           20s           1min          2-3min                      │
│                                                                          │
│   "Should I     "Should I     "Should I      "Should I                  │
│    buy this?"   take this     move           invest my                  │
│                 job?"         cities?"       life savings?"             │
│                                                                          │
│   Low stakes    Important     Major life     Can't afford               │
│   Reversible    decisions     changes        to be wrong                │
│                                                                          │
└─────────────────────────────────────────────────────────────────────────┘

Profile Comparison

Profile  | Tools | Time    | Best For
---------|-------|---------|------------------------
Quick    | 2     | ~10s    | Low stakes, reversible
Balanced | 5     | ~20s    | Standard decisions
Deep     | 5+    | ~1min   | Major choices
Paranoid | All   | ~2-3min | High stakes

Choosing a Profile

Quick Profile

Use when:

  • Decision is easily reversible
  • Stakes are low
  • Time is limited
  • You just need a sanity check

Example: “Should I buy this $50 gadget?”

Balanced Profile (Default)

Use when:

  • Important but not life-changing
  • You have a few minutes
  • Standard analysis depth is appropriate

Example: “Should I take this job offer?”

Deep Profile

Use when:

  • Major life decision
  • Long-term consequences
  • Multiple stakeholders affected
  • You want thorough analysis

Example: “Should I move to a new city?”

Paranoid Profile

Use when:

  • Cannot afford to be wrong
  • Very high stakes
  • Need maximum verification
  • Irreversible consequences

Example: “Should I invest my life savings?”

Profile Details

Tool Inclusion by Profile

Tool             | Quick | Balanced | Deep | Paranoid
-----------------|-------|----------|------|---------
💡 GigaThink     | ✓     | ✓        | ✓    | ✓
⚡ LaserLogic    | ✓     | ✓        | ✓    | ✓
🪨 BedRock       | -     | ✓        | ✓    | ✓
🛡️ ProofGuard    | -     | ✓        | ✓    | ✓
🔥 BrutalHonesty | -     | ✓        | ✓    | ✓

Pro Tip: ReasonKit Pro adds HighReflect (meta-cognition) and RiskRadar (threat assessment) for even deeper analysis.

Depth Settings by Profile

Setting                | Quick | Balanced | Deep | Paranoid
-----------------------|-------|----------|------|-----------
GigaThink perspectives | 5     | 10       | 15   | 20
LaserLogic depth       | light | standard | deep | exhaustive
ProofGuard sources     | -     | 3        | 5    | 7
BrutalHonesty severity | -     | medium   | high | maximum
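
Read as data, the table is a straightforward lookup from profile to settings. A sketch with hypothetical types (values mirror the table; these are not reasonkit-core's actual types):

// Sketch only: the depth-settings table as a lookup.
#[allow(dead_code)]
enum Profile {
    Quick,
    Balanced,
    Deep,
    Paranoid,
}

#[allow(dead_code)]
struct Depth {
    gigathink_perspectives: u32,
    laserlogic_depth: &'static str,
    proofguard_sources: Option<u32>,              // None = tool not run
    brutalhonesty_severity: Option<&'static str>, // None = tool not run
}

fn depth_for(profile: Profile) -> Depth {
    match profile {
        Profile::Quick => Depth {
            gigathink_perspectives: 5,
            laserlogic_depth: "light",
            proofguard_sources: None,
            brutalhonesty_severity: None,
        },
        Profile::Balanced => Depth {
            gigathink_perspectives: 10,
            laserlogic_depth: "standard",
            proofguard_sources: Some(3),
            brutalhonesty_severity: Some("medium"),
        },
        Profile::Deep => Depth {
            gigathink_perspectives: 15,
            laserlogic_depth: "deep",
            proofguard_sources: Some(5),
            brutalhonesty_severity: Some("high"),
        },
        Profile::Paranoid => Depth {
            gigathink_perspectives: 20,
            laserlogic_depth: "exhaustive",
            proofguard_sources: Some(7),
            brutalhonesty_severity: Some("maximum"),
        },
    }
}

fn main() {
    let d = depth_for(Profile::Deep);
    println!("{} perspectives, {} logic pass", d.gigathink_perspectives, d.laserlogic_depth);
}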

Usage

# Explicit profile
rk-core think "question" --profile balanced

# Shorthand
rk-core think "question" --quick
rk-core think "question" --balanced
rk-core think "question" --deep
rk-core think "question" --paranoid

Custom Profiles

You can create custom profiles in your config file:

[profiles.my_profile]
tools = ["gigathink", "laserlogic", "proofguard"]
gigathink_perspectives = 8
laserlogic_depth = "deep"
proofguard_sources = 4
timeout = 120

See Custom Profiles for details.

Cost Implications

More thorough profiles use more tokens:

Profile  | ~Tokens | Claude Cost | GPT-4 Cost
---------|---------|-------------|-----------
Quick    | 2K      | ~$0.02      | ~$0.06
Balanced | 5K      | ~$0.05      | ~$0.15
Deep     | 15K     | ~$0.15      | ~$0.45
Paranoid | 40K     | ~$0.40      | ~$1.20

(Both columns scale linearly: roughly $10 per million tokens for Claude and $30 per million for GPT-4.)

Consider cost when choosing profiles, but don’t under-analyze high-stakes decisions to save money.

Quick Profile

Fast sanity check in ~10 seconds

The Quick profile provides a rapid analysis for low-stakes, easily reversible decisions.

When to Use

  • Decision is easily reversible
  • Stakes are low (<$100, no major consequences)
  • Time is limited
  • You just need a sanity check
  • Initial exploration before deeper analysis

Tools Included

Tool         | Settings
-------------|----------------
💡 GigaThink  | 5 perspectives
⚡ LaserLogic | Light depth

Usage

# Full form
rk-core think "question" --profile quick

# Shorthand
rk-core think "question" --quick

Example

Question: “Should I buy this $30 kitchen gadget?”

╔════════════════════════════════════════════════════════════╗
║  QUICK ANALYSIS                                            ║
║  Time: 28 seconds                                          ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 5 Quick Perspectives                        │
├────────────────────────────────────────────────────────────┤
│  1. UTILITY: Will you actually use it more than twice?    │
│  2. SPACE: Do you have room for another kitchen tool?     │
│  3. QUALITY: Is it well-reviewed or cheap junk?           │
│  4. ALTERNATIVE: Could existing tools do this job?        │
│  5. IMPULSE: Are you buying it or being sold it?          │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Quick Check                                │
├────────────────────────────────────────────────────────────┤
│  FLAW: "I might use it someday"                           │
│  → Kitchen drawer full of "someday" gadgets               │
│  → If you haven't needed it before, you probably won't    │
└────────────────────────────────────────────────────────────┘

VERDICT: Skip it. Low stakes but also low value.

Appropriate Decisions

  • Small purchases (<$100)
  • What to eat for dinner
  • Which movie to watch
  • Minor work decisions
  • Social plans

Not Appropriate For

  • Job changes
  • Major purchases (>$500)
  • Relationship decisions
  • Health decisions
  • Anything with lasting consequences

Upgrading Analysis

If Quick analysis reveals complexity, upgrade:

# Started with quick, found it's actually complex
rk-core think "question" --balanced

Configuration

[profiles.quick]
tools = ["gigathink", "laserlogic"]
gigathink_perspectives = 5
laserlogic_depth = "light"
timeout = 30

Cost

~2K tokens ≈ $0.02 (Claude) / $0.06 (GPT-4)

Balanced Profile

Standard analysis in ~2 minutes

The Balanced profile is the default—thorough enough for most decisions, fast enough to be practical.

When to Use

  • Important decisions with moderate stakes
  • Job offers, career moves
  • Purchases $100-$10,000
  • Relationship discussions
  • Business decisions
  • Most everyday important choices

Tools Included

| Tool | Settings |
|------|----------|
| 💡 GigaThink | 10 perspectives |
| ⚡ LaserLogic | Standard depth |
| 🪨 BedRock | Full decomposition |
| 🛡️ ProofGuard | 3 sources minimum |
| 🔥 BrutalHonesty | Medium severity |

Usage

# Full form
rk-core think "question" --profile balanced

# Shorthand (default)
rk-core think "question" --balanced

# Also the default
rk-core think "question"

Example

Question: “Should I accept this job offer with 20% higher salary but longer commute?”

╔════════════════════════════════════════════════════════════╗
║  BALANCED ANALYSIS                                         ║
║  Time: 1 minute 47 seconds                                 ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. FINANCIAL: 20% raise minus commute costs              │
│  2. TIME: Extra commute hours per week/year               │
│  3. CAREER: Growth potential at new company               │
│  4. MANAGER: Who will you report to?                      │
│  5. TEAM: Culture and people you'll work with             │
│  6. HEALTH: Commute stress and lost exercise time         │
│  7. FAMILY: Impact on family time and responsibilities    │
│  8. OPPORTUNITY: Is this the best option available?       │
│  9. REVERSIBILITY: Can you go back if it doesn't work?    │
│  10. GUT: What does your instinct say?                    │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Reasoning Check                            │
├────────────────────────────────────────────────────────────┤
│  FLAW 1: "20% more = better"                              │
│  → Commute costs (gas, wear, time) not subtracted         │
│  → 1 hour extra commute = 250 hours/year                  │
│                                                            │
│  FLAW 2: "I can always leave if it doesn't work"          │
│  → Job hopping has costs (reputation, vesting, etc.)      │
│  → Leaving within 1 year looks bad on resume              │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  CORE QUESTION:                                            │
│  What's your hourly rate for the extra commute time?       │
│                                                            │
│  THE MATH:                                                 │
│  • Extra 1hr/day × 250 days = 250 hours/year              │
│  • 20% raise on $80K = $16K                               │
│  • $16K ÷ 250 hours = $64/hour for your time              │
│  • Is your free time worth $64/hour to you?               │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Verification                              │
├────────────────────────────────────────────────────────────┤
│  VERIFIED: Long commutes correlate with lower happiness   │
│  VERIFIED: Average commute cost is ~$0.50/mile            │
│  TO VERIFY: Actual growth trajectory at new company       │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  • You're anchoring on the 20% number                     │
│  • The commute will feel worse than you think             │
│  • Have you talked to people who work there?              │
│                                                            │
│  HONEST QUESTION:                                          │
│  If the salary were the same, would you want this job?    │
└────────────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════════

SYNTHESIS:
The decision hinges on whether career growth justifies the
commute. If it's just a lateral move with more money,
probably not worth it. If it's a genuine career accelerator,
the commute is temporary pain for long-term gain.

Appropriate Decisions

  • Job offers and career changes
  • Purchases $100-$10,000
  • Moving apartments (same city)
  • Business partnerships
  • Hiring decisions
  • Relationship milestones

Configuration

[profiles.balanced]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 10
laserlogic_depth = "standard"
proofguard_sources = 3
brutalhonesty_severity = "medium"
timeout = 180

Cost

~5K tokens ≈ $0.05 (Claude) / $0.15 (GPT-4)

Deep Profile

Thorough analysis in ~5 minutes

The Deep profile provides comprehensive analysis for major life decisions with long-term consequences.

When to Use

  • Major life changes
  • Decisions affecting multiple years
  • Large financial commitments ($10K+)
  • Career pivots
  • Relocation decisions
  • Starting a business
  • Major relationship decisions

Tools Included

| Tool | Settings |
|------|----------|
| 💡 GigaThink | 15 perspectives |
| ⚡ LaserLogic | Deep analysis |
| 🪨 BedRock | Full decomposition |
| 🛡️ ProofGuard | 5 sources minimum |
| 🔥 BrutalHonesty | High severity |

Pro Tip: ReasonKit Pro adds HighReflect (meta-cognition) for even deeper self-analysis.

Usage

# Full form
rk-core think "question" --profile deep

# Shorthand
rk-core think "question" --deep

Example

Question: “Should I quit my job to start a business?”

╔════════════════════════════════════════════════════════════╗
║  DEEP ANALYSIS                                             ║
║  Time: 4 minutes 32 seconds                                ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 15 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. FINANCIAL: How long can you survive with no income?   │
│  2. MARKET: Is there actual demand for your idea?         │
│  3. COMPETITION: Who else is solving this problem?        │
│  4. TIMING: Why now? What makes this the right moment?    │
│  5. SKILLS: Do you have the skills to execute?            │
│  6. NETWORK: Do you have connections to get customers?    │
│  7. FAMILY: How does your family feel about the risk?     │
│  8. HEALTH: Can you handle the stress?                    │
│  9. OPPORTUNITY: What are you giving up?                  │
│  10. REVERSIBILITY: Can you go back if it fails?          │
│  11. MOTIVATION: Running TO something or FROM something?  │
│  12. VALIDATION: Have paying customers expressed interest?│
│  13. COFOUNDERS: Are you doing this alone?               │
│  14. RUNWAY: How long before you need revenue?           │
│  15. EXIT: What does success look like? Timeline?         │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Deep Reasoning Analysis                    │
├────────────────────────────────────────────────────────────┤
│  FLAW 1: Survivorship bias                                │
│  → You hear about successful founders, not the 90% who fail│
│  → Base rate: 90% of startups fail within 5 years         │
│                                                            │
│  FLAW 2: "I'll figure it out"                             │
│  → Planning fallacy: we underestimate time and difficulty │
│  → Most entrepreneurs underestimate by 2-3x               │
│                                                            │
│  FLAW 3: "I just need to work harder"                     │
│  → Hard work is necessary but not sufficient              │
│  → Market timing and luck matter more than most admit     │
│                                                            │
│  FLAW 4: Sunk cost setup                                  │
│  → Once you quit, you'll feel pressure to continue        │
│  → Define kill criteria BEFORE starting                   │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  CORE QUESTION:                                            │
│  Have customers paid for this (or similar) yet?            │
│                                                            │
│  THE 80/20:                                                │
│  • 80% of success = market timing + distribution          │
│  • 20% = product quality                                  │
│  • Most founders over-index on product, under-index on    │
│    distribution                                            │
│                                                            │
│  FIRST PRINCIPLE:                                          │
│  Don't quit until you have:                               │
│  1. Paying customers (not "interested" - PAYING)          │
│  2. 12 months runway                                      │
│  3. Spouse/family buy-in                                  │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Verification (5 sources)                  │
├────────────────────────────────────────────────────────────┤
│  VERIFIED: 90% startup failure rate (Startup Genome)      │
│  VERIFIED: Average time to profitability: 3-4 years       │
│  VERIFIED: Solo founders 2x more likely to fail           │
│  VERIFIED: Previous startup exp improves odds 2x          │
│  TO VERIFY: Your specific market demand assumptions       │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: High Severity                           │
├────────────────────────────────────────────────────────────┤
│  UNCOMFORTABLE TRUTH 1:                                    │
│  You're probably not special. Most people who think       │
│  they're ready to start a business are not.               │
│                                                            │
│  UNCOMFORTABLE TRUTH 2:                                    │
│  "I hate my job" is not a business plan.                  │
│  Running FROM something is different from running TO.     │
│                                                            │
│  UNCOMFORTABLE TRUTH 3:                                    │
│  If you can't get paying customers while employed,        │
│  quitting won't magically create them.                    │
│                                                            │
│  HONEST QUESTIONS:                                         │
│  • Have 10 people paid you for this already?              │
│  • Could you work on this evenings/weekends first?        │
│  • What's your spouse's honest opinion?                   │
│  • If this fails in 2 years, then what?                   │
└────────────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════════

SYNTHESIS:
Don't quit your job yet. Instead:
1. Validate with paying customers first (while employed)
2. Build 12-month runway minimum
3. Get family fully on board
4. Define specific "kill criteria" before starting

If you can get 10 paying customers while employed, you have
signal that it might work. If you can't, quitting won't help.

Appropriate Decisions

  • Quitting job to start business
  • Major relocations (new city/country)
  • Large investments ($50K+)
  • Career pivots
  • Marriage/divorce considerations
  • Major life direction choices

Configuration

[profiles.deep]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "deep"
proofguard_sources = 5
brutalhonesty_severity = "high"
timeout = 360

Note: ReasonKit Pro deep profile adds highreflect for meta-cognition analysis.

Cost

~15K tokens ≈ $0.15 (Claude) / $0.45 (GPT-4)

Paranoid Profile

Maximum scrutiny in ~10 minutes

The Paranoid profile applies every available check for decisions where you cannot afford to be wrong.

When to Use

  • Life savings at stake
  • Irreversible decisions
  • Legal/compliance matters
  • Due diligence requirements
  • Once-in-a-lifetime choices
  • When being wrong has catastrophic consequences

Tools Included

| Tool | Settings |
|------|----------|
| 💡 GigaThink | 20 perspectives |
| ⚡ LaserLogic | Exhaustive analysis |
| 🪨 BedRock | Deep decomposition |
| 🛡️ ProofGuard | 7 sources minimum |
| 🔥 BrutalHonesty | Maximum severity |

Pro Tip: ReasonKit Pro adds HighReflect (meta-cognition) and RiskRadar (threat assessment) for maximum paranoid analysis.

Usage

# Full form
rk-core think "question" --profile paranoid

# Shorthand
rk-core think "question" --paranoid

Example

Question: “Should I invest my $200K life savings in this real estate opportunity?”

╔════════════════════════════════════════════════════════════╗
║  PARANOID ANALYSIS                                         ║
║  Time: 9 minutes 18 seconds                                ║
║  ⚠️  HIGH STAKES MODE                                      ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 20 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. SCAM CHECK: Is this a legitimate opportunity?         │
│  2. LIQUIDITY: Can you get your money out if needed?      │
│  3. DIVERSIFICATION: Is this your only investment?        │
│  4. DUE DILIGENCE: Have you verified all claims?          │
│  5. LEGAL: Is the structure legally sound?                │
│  6. TAX: What are the tax implications?                   │
│  7. TIMELINE: What's the realistic return timeline?       │
│  8. DOWNSIDE: What's the worst case scenario?             │
│  9. TRACK RECORD: What's the sponsor's history?           │
│  10. CONFLICTS: Who benefits from you investing?          │
│  11. LEVERAGE: Is there debt involved?                    │
│  12. MARKET: What if real estate market crashes?          │
│  13. ALTERNATIVES: What else could you do with $200K?     │
│  14. OPPORTUNITY COST: What are you giving up?            │
│  15. PRESSURE: Are you being rushed to decide?            │
│  16. REFERRAL: Who told you about this? Incentive?        │
│  17. DOCUMENTS: Have you read ALL the fine print?         │
│  18. PROFESSIONAL: Have you consulted CPA/attorney?       │
│  19. SPOUSE: Does your partner agree?                     │
│  20. REGRET: If this fails, how will you feel?            │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Exhaustive Analysis                        │
├────────────────────────────────────────────────────────────┤
│  CRITICAL FLAW 1: "They showed me the returns"            │
│  → Past returns don't guarantee future performance        │
│  → Returns can be fabricated (see: every Ponzi scheme)    │
│  → VERIFY: Request audited financial statements           │
│                                                            │
│  CRITICAL FLAW 2: "The person who told me is successful"  │
│  → They may have gotten lucky                             │
│  → They may be getting referral fees                      │
│  → Survivorship bias: you don't hear from losers          │
│                                                            │
│  CRITICAL FLAW 3: "Real estate always goes up"            │
│  → 2008 counterexample                                    │
│  → Local markets can crash independently                  │
│  → Commercial ≠ residential ≠ land                        │
│                                                            │
│  CRITICAL FLAW 4: "I'm diversified because real estate"   │
│  → $200K in one deal = NOT diversified                    │
│  → True diversification = multiple asset classes          │
│                                                            │
│  CRITICAL FLAW 5: "Limited time offer"                    │
│  → MAJOR RED FLAG                                         │
│  → Legitimate investments don't pressure you              │
│  → This is a manipulation tactic                          │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  FUNDAMENTAL QUESTION:                                     │
│  Why is this opportunity available to YOU?                 │
│                                                            │
│  If returns are genuinely good:                           │
│  → Institutions would have already funded it              │
│  → Banks would be lending against it                      │
│  → It wouldn't need YOUR $200K                            │
│                                                            │
│  FIRST PRINCIPLES:                                         │
│  1. If it sounds too good, it probably is                 │
│  2. High returns = high risk (no exceptions)              │
│  3. Illiquid investments are MUCH riskier                 │
│  4. Never invest more than you can lose completely        │
│                                                            │
│  THE CORE TEST:                                            │
│  Would a wealthy, experienced investor do this deal?      │
│  If not, why do you think YOU should?                     │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Maximum Verification (7 sources)          │
├────────────────────────────────────────────────────────────┤
│  ⚠️ VERIFICATION FAILURES:                                │
│                                                            │
│  • CANNOT VERIFY: Claimed returns (no audited statements) │
│  • CANNOT VERIFY: Sponsor track record (no public record) │
│  • CANNOT VERIFY: Property valuations (no independent)    │
│                                                            │
│  ✓ VERIFIED:                                              │
│  • SEC has warnings about similar structures              │
│  • State AG has complaints about sponsor (3 found)        │
│  • BBB rating: F (multiple complaints)                    │
│  • Better known competitors have better terms             │
│                                                            │
│  🚨 RED FLAGS FOUND: 4                                     │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Maximum Severity                        │
├────────────────────────────────────────────────────────────┤
│  🚨 CRITICAL WARNING 1:                                    │
│  You are being targeted because you have money and        │
│  don't know enough to see the red flags.                  │
│                                                            │
│  🚨 CRITICAL WARNING 2:                                    │
│  The person who referred you is probably getting paid.    │
│  Ask them directly: "Are you getting a referral fee?"     │
│                                                            │
│  🚨 CRITICAL WARNING 3:                                    │
│  "Life savings" should NEVER go into a single illiquid    │
│  investment. This is a fundamental rule violation.        │
│                                                            │
│  🚨 CRITICAL WARNING 4:                                    │
│  If you lose this money, you cannot get it back.          │
│  Are you okay with that? Really?                          │
│                                                            │
│  HONEST QUESTIONS:                                         │
│  • Would Warren Buffett invest in this? (Probably not)    │
│  • Have you talked to people who LOST money here?         │
│  • What's your backup plan if this goes to zero?          │
│  • Why are you considering this instead of index funds?   │
└────────────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════════

🚨 FINAL VERDICT: DO NOT INVEST

This opportunity has multiple red flags:
1. Verification failures on key claims
2. Pressure tactics (limited time)
3. Concentration risk (life savings)
4. Illiquidity risk
5. Sponsor complaints on record

If you want real estate exposure, consider:
- Publicly traded REITs (liquid, regulated, diversified)
- Real estate index funds
- Smaller allocation to syndications (10% max)

Never put life savings in a single illiquid investment.

Appropriate Decisions

  • Life savings investments
  • Signing legal contracts
  • Major business acquisitions
  • Irreversible medical decisions
  • Due diligence requirements
  • Anything where being wrong is catastrophic

Configuration

[profiles.paranoid]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 20
laserlogic_depth = "exhaustive"
proofguard_sources = 7
brutalhonesty_severity = "maximum"
timeout = 600

Note: ReasonKit Pro paranoid profile adds highreflect and riskradar for maximum verification depth.

Cost

~40K tokens ≈ $0.40 (Claude) / $1.20 (GPT-4)

Worth every penny for decisions of this magnitude.

Custom Profiles

🎛️ Build your own reasoning presets

Custom profiles let you create specialized tool combinations for your specific use cases.

Creating Custom Profiles

In Config File

# ~/.config/reasonkit/config.toml

[profiles.career]
# Optimized for career decisions
tools = ["gigathink", "laserlogic", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "deep"
brutalhonesty_severity = "high"
timeout = 180

[profiles.fact_check]
# Optimized for verifying claims
tools = ["laserlogic", "proofguard"]
proofguard_sources = 5
proofguard_require_citation = true
timeout = 120

[profiles.investment]
# Optimized for financial decisions
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
proofguard_sources = 5
timeout = 300
# Pro: Add riskradar for risk quantification

[profiles.quick_sanity]
# Ultra-fast sanity check
tools = ["gigathink", "brutalhonesty"]
gigathink_perspectives = 5
brutalhonesty_severity = "medium"
timeout = 30

Usage

# Use custom profile
rk-core think "Should I take this job?" --profile career

# List available profiles
rk-core profiles list

# Show profile details
rk-core profiles show career

Profile Schema

[profiles.your_profile_name]
# Required: Which tools to include
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]

# Optional: Tool-specific settings
gigathink_perspectives = 10          # 5-25
laserlogic_depth = "standard"        # light, standard, deep, exhaustive
bedrock_decomposition = "standard"   # light, standard, deep
proofguard_sources = 3               # 1-10
proofguard_require_citation = true   # true/false
brutalhonesty_severity = "medium"    # low, medium, high, maximum

# Optional: Advanced tools (Pro features)
highreflect_enabled = false
riskradar_enabled = false
atomicbreak_enabled = false

# Optional: Execution settings
timeout = 180                        # seconds
include_synthesis = true             # Include final synthesis
parallel_execution = false           # Run tools in parallel
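
After editing the config, you can confirm that the new profile parses and carries the settings you intended:

# Verify the new profile loaded as expected
rk-core profiles list
rk-core profiles show your_profile_name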

Example Profiles

Research Profile

For academic or professional research:

[profiles.research]
tools = ["gigathink", "laserlogic", "proofguard"]
gigathink_perspectives = 15
laserlogic_depth = "deep"
proofguard_sources = 7
proofguard_require_citation = true
timeout = 300

Debate Prep Profile

For preparing arguments:

[profiles.debate]
tools = ["gigathink", "laserlogic", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "exhaustive"
brutalhonesty_severity = "high"
include_counterarguments = true
timeout = 240

Quick Decision Profile

For rapid decision support:

[profiles.rapid]
tools = ["gigathink", "brutalhonesty"]
gigathink_perspectives = 5
brutalhonesty_severity = "medium"
timeout = 30
parallel_execution = true

Due Diligence Profile

For business/investment vetting:

[profiles.due_diligence]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 20
laserlogic_depth = "exhaustive"
proofguard_sources = 10
brutalhonesty_severity = "maximum"
timeout = 600
# Pro: Add riskradar + highreflect for enterprise due diligence

Creative Exploration Profile

For brainstorming and ideation:

[profiles.creative]
tools = ["gigathink"]
gigathink_perspectives = 25
gigathink_include_contrarian = true
gigathink_include_absurd = true
timeout = 180

Tool Settings Reference

GigaThink Settings

| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| gigathink_perspectives | 5-25 | 10 | Number of perspectives |
| gigathink_include_contrarian | true/false | true | Include opposing views |
| gigathink_include_absurd | true/false | false | Include unconventional angles |

LaserLogic Settings

| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| laserlogic_depth | light/standard/deep/exhaustive | standard | Analysis depth |
| laserlogic_fallacy_detection | true/false | true | Check for fallacies |
| laserlogic_assumption_analysis | true/false | true | Identify assumptions |

BedRock Settings

| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| bedrock_decomposition | light/standard/deep | standard | Decomposition depth |
| bedrock_show_80_20 | true/false | true | Show 80/20 analysis |

ProofGuard Settings

| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| proofguard_sources | 1-10 | 3 | Minimum sources required |
| proofguard_require_citation | true/false | false | Require citation format |
| proofguard_source_tier_threshold | 1-3 | 3 | Minimum source quality |

BrutalHonesty Settings

| Setting | Values | Default | Description |
|---------|--------|---------|-------------|
| brutalhonesty_severity | low/medium/high/maximum | medium | Feedback intensity |
| brutalhonesty_include_alternatives | true/false | true | Suggest alternatives |

Sharing Profiles

Export Profile

# Export single profile
rk-core profiles export career > career_profile.toml

# Export all custom profiles
rk-core profiles export-all > my_profiles.toml

Import Profile

# Import from file
rk-core profiles import career_profile.toml

# Import from URL
rk-core profiles import https://example.com/profiles/research.toml
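
A typical team flow chains these commands together (the repository path is illustrative):

# Export a vetted profile and commit it for the team
rk-core profiles export career > profiles/career_profile.toml
git add profiles/career_profile.toml
git commit -m "Add shared career profile"

# Teammates import it locally
rk-core profiles import profiles/career_profile.toml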

Best Practices

  1. Start with a built-in profile — Modify balanced or deep rather than starting from scratch (see the sketch after this list)

  2. Match tools to use case — Don’t include tools you don’t need

  3. Test your profile — Run it on sample questions before relying on it

  4. Document your profiles — Add comments explaining when to use each

  5. Share within teams — Custom profiles ensure consistent analysis
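
Putting practices 1 and 4 together, a minimal sketch: a variant of balanced with comments documenting when to use it (the profile name and tweaked settings are illustrative, not built-ins):

# ~/.config/reasonkit/config.toml

[profiles.balanced_cited]
# Use for: decisions where every factual claim should carry a citation.
# Same as balanced, but with stricter verification.
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 10
laserlogic_depth = "standard"
proofguard_sources = 5
proofguard_require_citation = true
brutalhonesty_severity = "medium"
timeout = 240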

Career Decisions

💼 Navigate job offers, promotions, and career pivots with structured reasoning.

Career decisions are perfect for ReasonKit because they involve multiple factors, emotional bias, and long-term consequences.

Common Career Questions

“Should I take this job offer?”

rk-core think "I received a job offer with 30% higher salary but at a startup. Currently at stable Fortune 500. Should I take it?" --balanced

What ReasonKit catches:

  • Hidden costs (commute, benefits, work-life balance)
  • Startup risk factors (funding, runway, founder quality)
  • Career trajectory implications
  • Opportunity cost of staying

“Should I ask for a promotion?”

rk-core think "I've been at my company for 2 years and feel ready for promotion. My manager seems reluctant. Should I push for it?" --balanced

What ReasonKit catches:

  • Timing considerations
  • Relationship dynamics
  • Alternative paths (lateral move, leave)
  • Negotiation strategy

“Should I change careers entirely?”

rk-core think "I'm 35, making $120K in finance, but want to become a software engineer. Is this realistic?" --deep

What ReasonKit catches:

  • Financial runway requirements
  • Skills gap analysis
  • Age-related factors (bias, learning curve)
  • Reversibility assessment

Example Analysis

Question: “I’ve been offered a management role but I love being an IC (individual contributor). Should I take it?”

rk-core think "Offered management role, but I love being an IC. 15% raise. Should I take it?" --balanced
╔════════════════════════════════════════════════════════════╗
║  CAREER ANALYSIS: IC vs Management                         ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. COMPENSATION: 15% now, but management ceiling higher  │
│  2. SKILLS: Management skills are transferable anywhere   │
│  3. IDENTITY: "I love IC work" - is this core identity?   │
│  4. REVERSIBILITY: Can you go back to IC if you hate it?  │
│  5. GROWTH: Is there an IC track with similar ceiling?    │
│  6. ENERGY: What energizes you - code or people?          │
│  7. POLITICS: Management = more organizational politics   │
│  8. IMPACT: Which role has more impact at your company?   │
│  9. AGE: Management track often expected after 35-40      │
│  10. ALTERNATIVES: Staff/Principal IC roles available?    │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Hidden Assumptions                         │
├────────────────────────────────────────────────────────────┤
│  FLAW: "I love being an IC"                               │
│  → You love your CURRENT IC role at THIS company          │
│  → You might love management, you've never tried it       │
│  → Identity ≠ job title                                    │
│                                                            │
│  FLAW: "15% raise"                                        │
│  → Short-term thinking                                    │
│  → Management salary ceiling typically 2-3x IC ceiling    │
│  → Total compensation over 10 years differs dramatically  │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: The Core Question                             │
├────────────────────────────────────────────────────────────┤
│  Do you want to scale through PEOPLE or through CODE?     │
│                                                            │
│  IC path: Impact = your output × leverage (tools, libs)   │
│  Mgmt path: Impact = team output × your leadership        │
│                                                            │
│  Neither is wrong. Which resonates more?                  │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  UNCOMFORTABLE TRUTH:                                      │
│  If you turn this down, you may not be offered again.     │
│  Companies often interpret "no" as "not leadership        │
│  material" and stop considering you.                      │
│                                                            │
│  UNCOMFORTABLE TRUTH:                                      │
│  "I love IC work" often means "I'm afraid of failing      │
│  at something new." Fear dressed as preference.           │
│                                                            │
│  HONEST QUESTION:                                          │
│  If management were easy and you'd definitely succeed,    │
│  would you want it then?                                  │
└────────────────────────────────────────────────────────────┘

SYNTHESIS:
Try it for 6-12 months. Most companies allow reverting.
If you discover you hate it, you've learned something valuable.
If you discover you love it, your career ceiling just tripled.
The only real risk is not finding out.

Career-Specific Profile

Create a custom profile for career decisions:

[profiles.career]
tools = ["gigathink", "laserlogic", "bedrock", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "deep"
brutalhonesty_severity = "high"
timeout = 180

Common Career Biases ReasonKit Catches

| Bias | Example | ReasonKit Response |
|------|---------|--------------------|
| Status quo | “I’m comfortable here” | Quantify opportunity cost |
| Loss aversion | “I might lose my benefits” | Compare total packages |
| Social proof | “Everyone says it’s a great company” | Verify independently |
| Recency bias | “My last job change was bad” | Statistical base rates |
| Sunk cost | “I’ve been here 10 years” | Future focus, not past |

Tips for Career Analysis

  1. Be specific — Include numbers (salary, years, company size)

  2. Include constraints — Family situation, location requirements

  3. Use deep/paranoid — Career decisions deserve thorough analysis

  4. Run multiple times — Different phrasings reveal different angles (see the example after this list)

  5. Act on BrutalHonesty — The uncomfortable truths are usually most valuable
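
Applying tip 4, run the same decision under opposite framings and compare which angles each run surfaces (the questions are illustrative):

# Same decision, two framings
rk-core think "Should I leave my stable job for this startup offer?" --deep
rk-core think "What would have to be true for staying in my current job to be the right call?" --deep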

Financial Decisions

💵 Make smarter money decisions with structured analysis.

Financial decisions—from budgeting to major purchases—benefit from structured thinking that cuts through emotion and marketing.

Common Financial Questions

“Should I buy this?”

rk-core think "I want to buy a $800 smartphone. My current phone works fine but is 3 years old. Good idea?" --balanced

“Can I afford this?”

rk-core think "Thinking of buying a $35K car. Income is $75K. Is this financially responsible?" --deep

“Should I pay off debt or invest?”

rk-core think "I have $10K saved. Should I pay off my 6% car loan or invest in index funds?" --balanced

“Is this subscription worth it?”

rk-core think "I'm paying $200/month in subscriptions. Which ones should I cut?" --quick

Example Analysis

Question: “Should I finance a new car or buy a used one with cash?”

rk-core think "I have $15K cash. Should I buy a used car outright or finance a new car at 5% APR?" --balanced
╔════════════════════════════════════════════════════════════╗
║  FINANCIAL ANALYSIS: New vs Used Car                       ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. DEPRECIATION: New cars lose 20-30% in year one        │
│  2. FINANCING COST: 5% APR on $30K = $4K+ in interest     │
│  3. OPPORTUNITY COST: $15K invested at 7% = $1K/year      │
│  4. RELIABILITY: New car has warranty, used may not       │
│  5. INSURANCE: New cars cost more to insure               │
│  6. MAINTENANCE: Used cars may need more repairs          │
│  7. CASH FLOW: Monthly payment vs. one-time expense       │
│  8. EMERGENCY: Keeping cash = financial flexibility       │
│  9. PSYCHOLOGY: "New car smell" satisfaction factor       │
│  10. TOTAL COST: Calculate 5-year total cost of ownership │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Hidden Assumptions                         │
├────────────────────────────────────────────────────────────┤
│  FLAW: "New cars are more reliable"                       │
│  → Modern used cars (2-3 years old) are very reliable     │
│  → Reliability varies by brand more than age              │
│                                                            │
│  FLAW: "I can afford the payment"                         │
│  → Affordability ≠ wisdom                                 │
│  → Monthly payment hides total cost                       │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  CORE QUESTION:                                            │
│  A car is transportation from A to B.                      │
│  How much are you paying for that function?                │
│                                                            │
│  THE MATH:                                                 │
│  • Used $15K car, 5 years = $3K/year + maintenance        │
│  • New $30K car financed = $6K/year + interest            │
│  • Difference: $3K+/year = $15K+ over 5 years             │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Verification                              │
├────────────────────────────────────────────────────────────┤
│  VERIFIED: Average new car loses 20% value in year one    │
│  VERIFIED: Average used car repair costs $500-1500/year   │
│  VERIFIED: S&P 500 average return ~7% after inflation     │
│  TO VERIFY: Specific used car reliability ratings         │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  UNCOMFORTABLE TRUTH:                                      │
│  You probably want the new car because it's nicer,        │
│  not because it makes financial sense.                    │
│                                                            │
│  The $15K difference could be:                            │
│  • 6+ months of emergency fund                            │
│  • Start of retirement savings                            │
│  • Down payment on a house                                │
│                                                            │
│  HONEST QUESTION:                                          │
│  In 3 years, will you be happier with the nice car        │
│  or the extra $15K in savings?                            │
└────────────────────────────────────────────────────────────┘

SYNTHESIS:
Buy the used car with cash unless you have a specific reason
the new car's features are worth $15K+ to you. The math strongly
favors used—you avoid depreciation, interest, and preserve cash
for emergencies or investing.

Financial-Specific Profile

[profiles.financial]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 10
laserlogic_depth = "standard"
proofguard_sources = 3
brutalhonesty_severity = "high"
timeout = 180

Financial Decision Types

| Decision Type | Profile | Key Tools |
|---------------|---------|-----------|
| Daily purchases | --quick | GigaThink, LaserLogic |
| Major purchases ($500+) | --balanced | All 5 |
| Debt decisions | --balanced | BedRock, LaserLogic |
| Investment decisions | --paranoid | See Investments |

Common Financial Biases

ReasonKit helps you catch:

| Bias | Example | How ReasonKit Helps |
|------|---------|---------------------|
| Anchoring | “$1000 off!” (from inflated price) | BedRock: What’s the actual value? |
| Mental accounting | “It’s bonus money, I can spend it” | LaserLogic: Money is fungible |
| Lifestyle inflation | Spending more as income rises | BrutalHonesty: Do you need this? |
| Sunk cost | “I already spent $X on this” | LaserLogic: Past spending is irrelevant |
| Present bias | Preferring now over future | GigaThink: Future self perspective |

Tips for Financial Analysis

  1. Include all costs — Purchase price, maintenance, opportunity cost, time
  2. Use real numbers — Not “about $X” but exact figures
  3. Consider time horizon — 1 year vs. 5 years vs. lifetime
  4. Question “normal” — Just because others do it doesn’t make it smart
  5. Run the comparison — Always compare to alternatives (see the sketch below)
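
For tip 5, rk-compare contrasts the structured analysis with a baseline answer to the same question:

# Compare structured vs. baseline analysis of the same money question
rk-compare "Should I finance a new car or buy a used one with cash?" --profile balanced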

Fact-Checking

🔍 Verify claims before you believe or share them.

In an era of misinformation, ReasonKit’s ProofGuard tool is designed specifically to verify claims against multiple sources.

Common Fact-Checking Questions

“Is this claim true?”

rk-core think "Claim: You should drink 8 glasses of water a day. Is this true?" --balanced

“Is this statistic accurate?”

rk-core think "Someone said 90% of startups fail. Is this accurate? What's the real number?" --balanced

“Is this quote real?”

rk-core think "Did Einstein really say 'The definition of insanity is doing the same thing and expecting different results'?" --balanced

“Is this news story accurate?”

rk-core think "Article claims [specific claim]. How can I verify this?" --deep

Example Analysis

Question: “Is it true that we only use 10% of our brain?”

rk-core think "People say we only use 10% of our brain. Is this true?" --balanced
╔════════════════════════════════════════════════════════════╗
║  FACT CHECK: "We only use 10% of our brain"                ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: Examining the Claim                         │
├────────────────────────────────────────────────────────────┤
│  1. ORIGIN: Where did this claim come from?               │
│  2. DEFINITION: What does "use" mean here?                │
│  3. EVIDENCE: What does neuroscience say?                 │
│  4. PERSISTENCE: Why do people believe this?              │
│  5. INCENTIVES: Who benefits from this myth?              │
│  6. VARIATIONS: Are there kernels of truth?               │
│  7. EXPERTS: What do neurologists say?                    │
│  8. IMAGING: What do brain scans show?                    │
│  9. DAMAGE: What happens with brain injuries?             │
│  10. EVOLUTION: Would evolution allow 90% waste?          │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Analyzing the Logic                        │
├────────────────────────────────────────────────────────────┤
│  LOGICAL PROBLEM 1: Definition ambiguity                  │
│  → "Use" is undefined. At any moment? Over a day?         │
│  → We don't use 100% of muscles simultaneously either     │
│                                                            │
│  LOGICAL PROBLEM 2: Evolutionary absurdity                │
│  → Brain uses 20% of body's energy                        │
│  → Evolution wouldn't maintain expensive unused tissue    │
│                                                            │
│  LOGICAL PROBLEM 3: Medical evidence                      │
│  → Small brain damage causes significant impairment       │
│  → If 90% were unused, damage would be mostly harmless    │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Source Verification                       │
├────────────────────────────────────────────────────────────┤
│  SOURCE 1: Scientific American                            │
│  → "We use virtually every part of the brain"             │
│  → Brain imaging shows activity throughout                │
│  → VERDICT: Myth is FALSE                                 │
│                                                            │
│  SOURCE 2: Johns Hopkins Medicine                         │
│  → All brain regions have known functions                 │
│  → No "unused" 90% has ever been found                    │
│  → VERDICT: Myth is FALSE                                 │
│                                                            │
│  SOURCE 3: Nature Reviews Neuroscience                    │
│  → fMRI studies show all regions active                   │
│  → Different tasks activate different regions             │
│  → VERDICT: Myth is FALSE                                 │
│                                                            │
│  ORIGIN TRACED: Misattributed to Einstein (no evidence)   │
│  Possibly from 1900s self-help movement                   │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Why This Myth Persists                  │
├────────────────────────────────────────────────────────────┤
│  WHY PEOPLE BELIEVE IT:                                    │
│  • It's flattering: "Imagine if you unlocked 100%!"       │
│  • Self-help industry profits from it                     │
│  • It "explains" why we're not geniuses                   │
│  • It sounds scientific enough to be plausible            │
│                                                            │
│  THE REAL STORY:                                           │
│  We use all of our brain, just not all at once.           │
│  Like a keyboard—you don't press all keys simultaneously. │
│  Different tasks activate different regions.              │
└────────────────────────────────────────────────────────────┘

VERDICT: FALSE
The "10% of brain" claim is a well-documented myth with no
scientific basis. We use virtually all of our brain—just
different parts for different tasks at different times.

Fact-Checking Profile

[profiles.factcheck]
tools = ["laserlogic", "proofguard", "brutalhonesty"]
proofguard_sources = 5
proofguard_require_citation = true
brutalhonesty_severity = "medium"
timeout = 180
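
Once defined, invoke it like any other profile:

# Use the custom fact-checking profile
rk-core think "Claim: we only use 10% of our brain. True?" --profile factcheck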

Source Quality Tiers

ProofGuard categorizes sources by reliability:

| Tier | Source Types | Trust Level |
|------|--------------|-------------|
| Tier 1 | Peer-reviewed journals, official statistics, primary sources | High |
| Tier 2 | Major news outlets, established institutions, expert interviews | Medium-High |
| Tier 3 | Wikipedia, general news, secondary sources | Medium |
| Tier 4 | Blogs, social media, opinion pieces | Low |

Red Flags for Misinformation

ReasonKit watches for:

| Red Flag | Example | What to Do |
|----------|---------|------------|
| No sources cited | “Studies show…” without citation | Ask for specific study |
| Emotional language | “SHOCKING discovery!” | Seek neutral sources |
| Single source | Entire claim rests on one study | Triangulate |
| Old data | “Research from 1995” | Find recent data |
| Conflicts of interest | Study funded by interested party | Note potential bias |
| Appeals to authority | “Einstein said…” | Verify attribution |

Verification Checklist

When fact-checking, ReasonKit helps you answer:

  • Who made this claim originally?
  • What’s their expertise or potential bias?
  • Can I find the primary source?
  • Do multiple independent sources confirm it?
  • Are there credible sources that dispute it?
  • Is the data current and relevant?
  • Am I emotionally invested in the answer?

Tips for Better Fact-Checking

  1. Start skeptical — Assume claims need verification
  2. Find the original — Trace claims to primary sources
  3. Check multiple sources — One source isn’t enough
  4. Consider motives — Who benefits from this being believed?
  5. Update beliefs — Be willing to change your mind with evidence
  6. Note uncertainty — It’s okay to say “I don’t know”

Business Decisions

📊 Strategic analysis for business owners, founders, and executives.

Business decisions benefit from structured reasoning because they involve multiple stakeholders, uncertain outcomes, and significant resources.

Common Business Questions

“Should I start this business?”

rk-core think "I want to start a SaaS business in project management space. Market seems crowded but I have a unique angle. Worth pursuing?" --deep

“Should I raise funding or bootstrap?”

rk-core think "My startup is doing $20K MRR after 1 year bootstrapped. VCs are interested. Should I raise?" --paranoid

“Should I hire this person?”

rk-core think "Candidate is brilliant but has job-hopped 4 times in 3 years. Red flag or okay?" --balanced

“Should I pivot or persist?”

rk-core think "6 months in, $50K spent, no traction. 3 users give great feedback but can't get more. Pivot or persist?" --deep

Example Analysis

Question: “Should I launch my product now (80% done) or wait until 100%?”

rk-core think "Product is 80% complete. Launch now and iterate, or wait 2 more months for polish?" --balanced
╔════════════════════════════════════════════════════════════╗
║  BUSINESS ANALYSIS: Launch Timing                          ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. MVP PHILOSOPHY: Launch early, iterate with feedback   │
│  2. FIRST IMPRESSION: You only launch once                │
│  3. COMPETITIVE: What are competitors doing?              │
│  4. CASH FLOW: 2 months = 2 months more burn             │
│  5. MARKET TIMING: Is there a window closing?             │
│  6. TEAM MORALE: Is team ready to ship?                   │
│  7. CUSTOMER: Are beta users asking to pay?               │
│  8. PERFECTIONISM: Is "100%" a moving target?            │
│  9. LEARNING: What will you learn from launch?            │
│  10. REVERSIBILITY: Can you iterate after launch?         │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Reasoning Analysis                         │
├────────────────────────────────────────────────────────────┤
│  FLAW: "80% complete"                                     │
│  → 80% by whose standard?                                 │
│  → The last 20% often takes 80% of the time              │
│  → "2 more months" is almost certainly wrong              │
│                                                            │
│  FLAW: "Polish before launch"                             │
│  → You don't know what to polish until users tell you     │
│  → You're polishing based on guesses, not data           │
│  → Effort likely wasted on wrong features                │
│                                                            │
│  FLAW: "You only launch once"                             │
│  → False for software; you can iterate indefinitely       │
│  → First users are usually early adopters who forgive     │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  CORE QUESTION:                                            │
│  Can someone pay you money for this today?                 │
│                                                            │
│  If YES: Launch. Everything else is premature optimization│
│  If NO: What's the minimum needed to get there?           │
│                                                            │
│  THE 80/20:                                                │
│  • 80% of value comes from 20% of features               │
│  • The 20% you're missing may not be in that 20%         │
│  • Real usage data > your assumptions                     │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  UNCOMFORTABLE TRUTH:                                      │
│  "Waiting for polish" is often fear of rejection.         │
│  If you're afraid to launch, that fear won't go away      │
│  when you reach "100%" - the bar will just move.          │
│                                                            │
│  UNCOMFORTABLE TRUTH:                                      │
│  Most products fail because of bad product-market fit,    │
│  not because of missing features. Launching tells you     │
│  if you have PMF. Not launching keeps you guessing.       │
│                                                            │
│  HONEST QUESTION:                                          │
│  What specifically are you afraid will happen if you      │
│  launch today?                                            │
└────────────────────────────────────────────────────────────┘

SYNTHESIS:
Launch now unless there's a specific, critical blocker.
"Polish" is a trap. Real user feedback is more valuable
than hypothetical improvements. The market will tell you
what's actually missing.

Business-Specific Profile

[profiles.business]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 12
laserlogic_depth = "deep"
proofguard_sources = 3
brutalhonesty_severity = "high"
timeout = 240

Business Framework Integration

ReasonKit complements standard business frameworks:

| Framework | ReasonKit Enhancement |
|-----------|------------------------|
| SWOT Analysis | GigaThink expands perspectives |
| Porter’s Five Forces | LaserLogic validates logic |
| Lean Canvas | BrutalHonesty stress-tests assumptions |
| OKRs | BedRock ensures first-principles alignment |

Common Business Biases

| Bias | Business Context | ReasonKit Response |
|------|------------------|--------------------|
| Sunk cost | “We’ve invested too much to stop” | Future-focused analysis |
| Optimism | “Our projections are conservative” | Base rate comparison |
| Groupthink | “Everyone on the team agrees” | Contrarian perspectives |
| Survivorship | “Successful startups did X” | Full dataset analysis |

Tips for Business Analysis

  1. Include financials — Numbers matter; include them

  2. Specify timeline — “Should I hire?” vs “Should I hire this quarter?”

  3. Name competitors — Generic questions get generic answers

  4. Use paranoid for big bets — Funding rounds, pivots, major hires

  5. Revisit decisions — Run analysis again as conditions change (see the example below)
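
For tip 5, re-running the earlier pivot-or-persist question with updated figures takes one command (the numbers are illustrative):

# Re-run the same decision as conditions change
rk-core think "9 months in, $70K spent, 10 paying users but flat growth. Pivot or persist?" --deep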

Growth Hacking

🚀 Scientific marketing analysis for rapid user acquisition and scale.

Growth hacking often suffers from survivorship bias, unverified “hacks”, and channel fatigue. ReasonKit applies structured reasoning to validate growth strategies before you burn cash.

Common Growth Questions

“How can I double my user base in 30 days?”

rk-core think "I have 1000 users. I want to hit 2000 in 30 days. Budget $500. How?" --scientific

“Which acquisition channel should I focus on?”

rk-core think "B2B SaaS product, $49/mo. Should I focus on LinkedIn Ads, cold email, or SEO?" --balanced

“Is my viral loop realistic?”

rk-core think "I expect each user to refer 1.2 friends. Is this K-factor realistic for a productivity tool?" --paranoid

Example Analysis

Question: “How can I double my app’s user base in 30 days?”

rk-core think "I want to double my app's user base in 30 days" --scientific
╔════════════════════════════════════════════════════════════╗
║  GROWTH ANALYSIS: User Acquisition                         ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. PATH 1 (VIRAL): Incentivize referrals ("Give 1mo, Get 1mo")
│  2. PATH 2 (CONTENT): "Ultimate Guide" SEO series
│  3. PATH 3 (PARTNER): Co-marketing with non-competing SaaS
│  4. PATH 4 (SALES): Cold outreach to high-value targets
│  5. PATH 5 (PRODUCT): Product-led growth (freemium)
│  6. PATH 6 (COMMUNITY): Build Discord/Slack community
│  7. PATH 7 (PAID): FB/LinkedIn Ads (instant but expensive)
│  8. PATH 8 (INFLUENCER): Sponsor niche creators
│  9. PATH 9 (MARKETPLACE): Launch on AppSumo/ProductHunt
│  10. PATH 10 (ACQUISITION): Buy a smaller newsletter/tool
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Reasoning Analysis                         │
├────────────────────────────────────────────────────────────┤
│  FLAW: "We just need to go viral"
│  → Hope is not a strategy. Viral loops require K-factor > 1,
│  which is mathematically rare for most utilities.
│
│  FLAW: "Paid ads scale infinitely"
│  → CAC rises as you exhaust early adopters. Unit economics
│  usually break at scale.
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: First Principles                              │
├────────────────────────────────────────────────────────────┤
│  CORE QUESTION:
│  Do you have Product-Market Fit?
│
│  If YES: Pour fuel (paid/sales).
│  If NO: Fixing the bucket (retention) matters more than
│  filling it (acquisition).
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🛡️ PROOFGUARD: Fact Verification                          │
├────────────────────────────────────────────────────────────┤
│  VERIFIED: Average SaaS growth is 10-20% YoY.
│  VERIFIED: "Doubling in 30 days" usually requires paid spend
│  or viral coefficient > 1.
│  TO VERIFY: Your current churn rate.
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  You want to double users in 30 days? Unless you have a
│  massive ad budget or a truly viral product, this is a
│  vanity metric that will kill your business. You'll likely
│  acquire low-quality users who churn immediately.
│  Focus on doubling revenue or engagement instead.
└────────────────────────────────────────────────────────────┘

SYNTHESIS:
For a 30-day sprint, Path 1 (Viral Loop) + Path 3 (Partnerships)
is the only realistic way to double without massive ad spend.
But warning: solving for "user count" usually hides a
retention problem. Fix the leaky bucket first.

Growth Framework Integration

ReasonKit complements standard growth frameworks:

| Framework | ReasonKit Enhancement |
|-----------|-----------------------|
| AARRR (Pirate Metrics) | BedRock identifies the weakest bottleneck |
| Bullseye Framework | GigaThink brainstorms traction channels |
| ICE Score | LaserLogic validates “Confidence” estimates |
| Viral Loop | ProofGuard verifies mathematical assumptions |

Common Growth Fallacies

| Fallacy | Growth Context | ReasonKit Response |
|---------|----------------|--------------------|
| Magic Bullet | “We just need one big PR hit” | Probability analysis of PR impact |
| Premature Scaling | “Let’s pour ads before retention is fixed” | Churn rate warnings (BedRock) |
| Cargo Culting | “Dropbox did it, so we should too” | Context validation (LaserLogic) |
| Vanity Metrics | “Look at our signups (ignoring DAU)” | Metric integrity check (BrutalHonesty) |

Life Decisions

🌟 Navigate major personal decisions with clarity and structure.

Life decisions are often the most important and most emotionally charged. ReasonKit helps cut through the emotion to find clarity.

Common Life Questions

Relationships

rk-core think "We've been together 3 years. Partner wants to get married. I'm unsure. What should I consider?" --deep

Family

rk-core think "We're considering having children. Are we ready? What should we think about?" --deep

Health

rk-core think "Doctor recommends [procedure]. What questions should I ask? What should I consider?" --balanced

Life Direction

rk-core think "I'm 30, have a stable career, but feel unfulfilled. What's actually going on?" --deep

Example Analysis

Question: “I’m 35, single, successful career, but everyone asks when I’m settling down. Should I prioritize finding a partner?”

rk-core think "35, single, successful career. Family pressures me about settling down. Should finding a partner be my priority?" --deep
╔════════════════════════════════════════════════════════════╗
║  LIFE ANALYSIS: Relationship Priority                      ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 15 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. DESIRE: Do YOU want a partner, or do OTHERS want it?  │
│  2. TIMELINE: If kids desired, biology is a factor        │
│  3. LONELINESS: Are you lonely, or just alone?            │
│  4. FULFILLMENT: What specifically feels missing?         │
│  5. SOCIAL: Is this about belonging or partnership?       │
│  6. IDENTITY: Is "successful single" an identity you like?│
│  7. OPPORTUNITY: What would you sacrifice for dating?     │
│  8. PATTERNS: Why are you single? By choice or failure?   │
│  9. ALTERNATIVES: Fulfillment without traditional path?   │
│  10. PRESSURE: Are you reacting to external expectations? │
│  11. COMPARISON: Comparing to friends' timelines?         │
│  12. HISTORY: What do past relationships teach you?       │
│  13. GROWTH: What personal work would help you partner?   │
│  14. MARKET: Are you dating? What's working/not working?  │
│  15. VALUES: What does "partner" mean to you specifically?│
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Hidden Assumptions                         │
├────────────────────────────────────────────────────────────┤
│  FLAW: "Everyone asks"                                    │
│  → Other people's questions are about THEIR discomfort    │
│  → Their timeline isn't your timeline                     │
│  → "Normal" varies dramatically by culture/context        │
│                                                            │
│  FLAW: "Should prioritize"                                │
│  → Binary framing: career OR relationship                 │
│  → Many people have both without "prioritizing" either    │
│  → Dating doesn't require abandoning career               │
│                                                            │
│  FLAW: "Settling down"                                    │
│  → Implies current state is unsettled/wrong               │
│  → Framing assumes partnership = stability                │
│  → Many partnered people are less settled than singles    │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🪨 BEDROCK: The Core Questions                            │
├────────────────────────────────────────────────────────────┤
│  THREE QUESTIONS THAT MATTER:                              │
│                                                            │
│  1. Do you WANT a partner (regardless of what others say)?│
│     → If yes: Why haven't you found one? What's blocking? │
│     → If no: Why are you asking this question?            │
│                                                            │
│  2. Do you want children?                                  │
│     → If yes: Timeline matters biologically               │
│     → If no/unsure: Less time pressure                    │
│                                                            │
│  3. What's actually missing from your life right now?     │
│     → Partner may or may not solve it                     │
│     → Be specific: companionship? family? status?         │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  🔥 BRUTALHONESTY: Reality Check                           │
├────────────────────────────────────────────────────────────┤
│  UNCOMFORTABLE TRUTH 1:                                    │
│  You're asking this question, which means you're not      │
│  100% confident in your current path. That's worth        │
│  examining honestly.                                       │
│                                                            │
│  UNCOMFORTABLE TRUTH 2:                                    │
│  "Successful career" is sometimes a deflection from       │
│  doing the emotional work relationships require.          │
│  It's easier to excel at work than at intimacy.           │
│                                                            │
│  UNCOMFORTABLE TRUTH 3:                                    │
│  If you wanted a partner badly enough, you'd be actively  │
│  pursuing one. The fact that you're not suggests either:  │
│  a) You don't actually want one that much, or             │
│  b) Something is blocking you that you're not addressing  │
│                                                            │
│  UNCOMFORTABLE TRUTH 4:                                    │
│  35 is not "running out of time," but it IS a time when   │
│  the dating pool dynamics change. Ignoring this fact      │
│  is different from deciding it doesn't matter to you.     │
│                                                            │
│  HONEST QUESTIONS:                                         │
│  • When was your last serious relationship attempt?       │
│  • What happened?                                          │
│  • If a perfect partner appeared tomorrow, would you      │
│    make room in your life?                                │
└────────────────────────────────────────────────────────────┘

SYNTHESIS:
This isn't a prioritization question—it's a self-knowledge question.

1. If you genuinely want a partner: Start actively dating with intent.
   Your career won't suffer from a few hours a week.

2. If you genuinely don't: Stop asking the question. Set boundaries
   with people who pressure you. Own your choice.

3. If you're unsure: That's the real issue. Explore what you actually
   want before deciding how to pursue it.

The family pressure is noise. What matters is what YOU want.

Life-Specific Profile

[profiles.life]
tools = ["gigathink", "laserlogic", "bedrock", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "deep"
brutalhonesty_severity = "high"
timeout = 300

Pro Tip: ReasonKit Pro adds highreflect for deeper meta-cognition and bias analysis.

Life Decision Framework

ReasonKit helps you distinguish:

| Question Type | What It Really Asks |
|---------------|---------------------|
| “Should I do X?” | Do I WANT X? (desire) |
| “Is it time for X?” | Is this MY timeline or others’? |
| “Am I ready for X?” | What would ready look like? |
| “Is X the right choice?” | By whose definition of right? |

Common Life Biases

| Bias | Example | ReasonKit Response |
|------|---------|--------------------|
| Social comparison | “Friends are married” | Your timeline isn’t theirs |
| Sunk cost | “We’ve been together 8 years” | Future matters more than past |
| Status quo | “This is comfortable” | Comfort ≠ right |
| External validation | “Everyone says…” | What do YOU say? |

Sensitive Topics

ReasonKit can help with difficult questions:

  • Grief: Processing loss decisions
  • Health: Medical decision support
  • Relationships: Honest assessment
  • Identity: Life direction questions

For mental health crises, please contact professional support. ReasonKit is for decision clarity, not therapy.

Tips for Life Analysis

  1. Be honest in your question — The real question may differ from what you type

  2. Include context — Age, situation, constraints all matter

  3. Use deep or paranoid — Life decisions deserve thorough analysis

  4. Focus on BrutalHonesty — It usually surfaces what you’re avoiding

  5. Sleep on it — Run analysis, wait 24 hours, then decide

CLI Commands

Complete reference for all ReasonKit CLI commands.

Overview

rk-core [COMMAND] [OPTIONS] [ARGUMENTS]

Core Commands

think

Run a full analysis using PowerCombo (all tools in sequence).

rk-core think "Your question or statement" [OPTIONS]

Options:

| Flag | Short | Description |
|------|-------|-------------|
| --profile | -p | Profile to use (quick/balanced/deep/paranoid) |
| --format | -f | Output format (pretty/json/markdown) |
| --timeout | -t | Maximum execution time in seconds |
| --verbose | -v | Show detailed progress |
| --quiet | -q | Minimal output |
| --provider | - | LLM provider (anthropic/openai/openrouter/ollama) |
| --model | -m | Specific model to use |

Examples:

# Basic usage
rk-core think "Should I take this job?"

# With profile
rk-core think "Should I invest my savings?" --profile paranoid

# With specific model
rk-core think "Is this a good business idea?" --provider anthropic --model claude-3-opus

# Output as JSON
rk-core think "question" --format json > analysis.json

Individual ThinkTools

Run specific tools directly:

# GigaThink - Generate perspectives
rk-core gigathink "question" [OPTIONS]

# LaserLogic - Check reasoning
rk-core laserlogic "claim or argument" [OPTIONS]

# BedRock - First principles
rk-core bedrock "question" [OPTIONS]

# ProofGuard - Verify claims
rk-core proofguard "claim" [OPTIONS]

# BrutalHonesty - Reality check
rk-core brutalhonesty "plan or idea" [OPTIONS]

# PowerCombo - All tools
rk-core powercombo "question" [OPTIONS]

Tool-Specific Options:

# GigaThink
rk-core gigathink "question" --perspectives 15

# LaserLogic
rk-core laserlogic "argument" --depth deep

# ProofGuard
rk-core proofguard "claim" --sources 5

# BrutalHonesty
rk-core brutalhonesty "plan" --severity high

Configuration Commands

config

Manage ReasonKit configuration.

# Validate configuration
rk-core config validate

# Show effective configuration
rk-core config show

# Show config file path
rk-core config path

# Edit config in default editor
rk-core config edit

# Reset to defaults
rk-core config reset

profiles

Manage reasoning profiles.

# List all profiles
rk-core profiles list

# Show profile details
rk-core profiles show balanced

# Export profile
rk-core profiles export career > career.toml

# Import profile
rk-core profiles import career.toml

Provider Commands

providers

Manage LLM providers.

# List available providers
rk-core providers list

# Test provider connection
rk-core providers test anthropic

# Set default provider
rk-core providers default anthropic

# Show provider models
rk-core providers models openrouter

models

Work with models.

# List available models
rk-core models list

# Test model
rk-core models test claude-3-sonnet

# Set default model
rk-core models default claude-3-sonnet

Utility Commands

version

Show version information.

rk-core version
# reasonkit-core 0.1.0 (built 2025-01-15)

help

Get help for any command.

# General help
rk-core help

# Command-specific help
rk-core help think
rk-core help gigathink

update

Update ReasonKit.

# Check for updates
rk-core update check

# Update to latest
rk-core update

Global Options

These options work with any command:

| Flag | Description |
|------|-------------|
| --help, -h | Show help |
| --version, -V | Show version |
| --config | Use specific config file |
| --no-color | Disable colored output |
| --debug | Enable debug output |
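
These can be combined with any subcommand; for example, pointing one run at a project-local config file (path illustrative) while disabling colors for log capture:

rk-core think "Is this release ready?" --config ./ci-config.toml --no-color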

Environment Variables

| Variable | Description |
|----------|-------------|
| ANTHROPIC_API_KEY | Anthropic API key |
| OPENAI_API_KEY | OpenAI API key |
| OPENROUTER_API_KEY | OpenRouter API key |
| RK_PROVIDER | Default provider |
| RK_MODEL | Default model |
| RK_PROFILE | Default profile |
| RK_OUTPUT | Default output format |
| RK_LOG_LEVEL | Logging level |

Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Configuration error |
| 4 | Provider error |
| 5 | Authentication error |
| 6 | Rate limit |
| 7 | Timeout |

See the Exit Codes chapter for the complete reference.

Shell Completions

# Bash
rk-core completions bash > /etc/bash_completion.d/rk-core

# Zsh
rk-core completions zsh > ~/.zfunc/_rk-core

# Fish
rk-core completions fish > ~/.config/fish/completions/rk-core.fish
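
For Zsh, the target directory must be on fpath before compinit runs; if ~/.zfunc is not already there, add this to ~/.zshrc:

# Make the completions directory discoverable, then initialize completion
fpath=(~/.zfunc $fpath)
autoload -Uz compinit && compinit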

Command-Line Options

🎛️ Complete reference for all CLI flags and options.

ReasonKit’s CLI is designed for power users and automation. Most options have both a short and a long form.

Global Options

These options work with all commands:

| Short | Long | Default | Description |
|-------|------|---------|-------------|
| -p | --profile | balanced | Reasoning profile to use |
| -o | --output | pretty | Output format (pretty, json, markdown) |
| -v | --verbose | false | Enable verbose output |
| -q | --quiet | false | Suppress all non-essential output |
| -h | --help | - | Show help message |
| -V | --version | - | Show version information |
| - | --config | ~/.config/reasonkit/config.toml | Config file path |
| - | --no-color | false | Disable colored output |

Profile Selection

Quick Access

# Using --profile flag
rk-core think "question" --profile balanced

# Using shorthand flags
rk-core think "question" --quick      # ~10 seconds
rk-core think "question" --balanced   # ~20 seconds (default)
rk-core think "question" --deep       # ~1 minute
rk-core think "question" --paranoid   # ~2-3 minutes

Profile Options

| Flag | Equivalent | Time | Use Case |
|------|------------|------|----------|
| --quick | --profile quick | ~10s | Low-stakes, time-sensitive |
| --balanced | --profile balanced | ~20s | Most decisions |
| --deep | --profile deep | ~1m | High-stakes decisions |
| --paranoid | --profile paranoid | ~2-3m | Critical, irreversible |

Provider Options

Provider Selection

# Auto-detect (uses first available key)
rk-core think "question"

# Explicit provider
rk-core think "question" --provider anthropic
rk-core think "question" --provider openai
rk-core think "question" --provider openrouter
rk-core think "question" --provider ollama

Model Selection

# Use default model for provider
rk-core think "question" --provider anthropic

# Specify exact model
rk-core think "question" --model claude-sonnet-4-20250514
rk-core think "question" --model gpt-4-turbo
rk-core think "question" --model llama3.2

Provider-Specific Options

| Option | Description | Example |
|--------|-------------|---------|
| --provider | LLM provider to use | anthropic, openai, openrouter, ollama |
| --model | Specific model ID | claude-sonnet-4-20250514 |
| --api-key | API key (overrides env) | sk-... |
| --base-url | Custom API endpoint | http://localhost:11434 |
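
As a sketch, --provider, --model, and --base-url can be combined to target a self-hosted endpoint (URL and model are illustrative):

rk-core think "question" --provider ollama --base-url http://localhost:11434 --model llama3.2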

Output Options

Format Selection

# Pretty output (default, human-readable)
rk-core think "question" --output pretty

# JSON output (machine-readable)
rk-core think "question" --output json

# Markdown output (documentation-ready)
rk-core think "question" --output markdown

# Short alias
rk-core think "question" -o json

Output Control

| Option | Description | Example |
|--------|-------------|---------|
| --output, -o | Output format | pretty, json, markdown |
| --no-color | Disable ANSI colors | For piping/logging |
| --quiet, -q | Minimal output | Only final result |
| --verbose, -v | Detailed output | Include debug info |

Output Redirection

# Save to file
rk-core think "question" --output json > analysis.json

# Pipe to jq
rk-core think "question" -o json | jq '.synthesis'

# Suppress progress, keep result
rk-core think "question" -q > result.txt

Timing Options

Timeout Control

# Default timeout (180 seconds)
rk-core think "question"

# Custom timeout
rk-core think "question" --timeout 300

# No timeout (wait indefinitely)
rk-core think "question" --timeout 0

Streaming

# Enable streaming (see output as it generates)
rk-core think "question" --stream

# Disable streaming (wait for complete response)
rk-core think "question" --no-stream

ThinkTool Options

Tool Selection

# Use specific tools only
rk-core think "question" --tools gigathink,laserlogic

# Exclude specific tools
rk-core think "question" --exclude proofguard

# Tool aliases
rk-core think "question" --tools gt,ll,br,pg,bh

Tool-Specific Settings

# GigaThink perspectives
rk-core think "question" --gigathink-perspectives 15

# ProofGuard source count
rk-core think "question" --proofguard-sources 5

# BrutalHonesty severity
rk-core think "question" --brutalhonesty-severity high

| Option | Values | Default |
|--------|--------|---------|
| --gigathink-perspectives | 3-20 | 10 |
| --laserlogic-depth | quick, standard, deep | standard |
| --proofguard-sources | 1-10 | 3 |
| --brutalhonesty-severity | low, medium, high | medium |

Input Options

Reading Input

# Direct question
rk-core think "Is this a good idea?"

# From stdin
echo "Should I do this?" | rk-core think -

# From file
rk-core think --file question.txt

# From clipboard (requires xclip/pbpaste)
rk-core think --clipboard

Context Addition

# Add context file
rk-core think "Should I merge this PR?" --context diff.txt

# Multiple context files
rk-core think "question" --context file1.txt --context file2.txt

# URL context (fetches and includes)
rk-core think "question" --context-url https://example.com/docs

Cache Options

# Enable caching (default)
rk-core think "question" --cache

# Disable caching
rk-core think "question" --no-cache

# Clear cache before running
rk-core think "question" --clear-cache

# Set cache TTL (seconds)
rk-core think "question" --cache-ttl 3600

Debug Options

# Show debug information
rk-core think "question" --debug

# Show API requests/responses
rk-core think "question" --debug-api

# Dry run (show what would be sent)
rk-core think "question" --dry-run

# Show timing breakdown
rk-core think "question" --timing

Combining Options

Options can be combined freely:

# Full example
rk-core think "Should I accept this job offer?" \
  --profile deep \
  --provider anthropic \
  --model claude-sonnet-4-20250514 \
  --output json \
  --timeout 300 \
  --proofguard-sources 5 \
  --verbose

# Shorthand version
rk-core think "Should I accept this job offer?" \
  --deep -o json -v --proofguard-sources 5

Option Precedence

Options are applied in this order (later overrides earlier):

  1. Built-in defaults
  2. Config file settings
  3. Environment variables
  4. Command-line flags
# Config says balanced, but CLI overrides to deep
rk-core think "question" --deep

Environment Variables

🌍 Configure ReasonKit through environment variables.

Environment variables provide a way to configure ReasonKit without modifying config files, making it ideal for CI/CD, Docker, and multi-environment setups.

API Keys

LLM Provider Keys

# Anthropic Claude (Recommended)
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# OpenRouter (300+ models)
export OPENROUTER_API_KEY="sk-or-..."

# Google Gemini
export GOOGLE_API_KEY="..."

# Local (Ollama) - no key needed
# Just ensure Ollama is running

Priority Order

If multiple keys are set, ReasonKit uses this priority:

  1. ANTHROPIC_API_KEY (Claude)
  2. OPENAI_API_KEY (GPT)
  3. OPENROUTER_API_KEY (OpenRouter)
  4. GOOGLE_API_KEY (Gemini)
  5. Local Ollama (if available)

Override with --provider:

rk-core think "question" --provider openai

Configuration Variables

Core Settings

# Default profile
export RK_PROFILE="balanced"

# Default provider
export RK_PROVIDER="anthropic"

# Default model
export RK_MODEL="claude-sonnet-4-20250514"

# Output format
export RK_OUTPUT="pretty"  # pretty, json, markdown

# Timeout (seconds)
export RK_TIMEOUT="180"

# Verbosity
export RK_VERBOSE="false"

Path Settings

# Config file location
export RK_CONFIG="$HOME/.config/reasonkit/config.toml"

# Cache directory
export RK_CACHE_DIR="$HOME/.cache/reasonkit"

# Log file location
export RK_LOG_FILE="$HOME/.local/share/reasonkit/reasonkit.log"

Feature Flags

# Enable/disable features
export RK_STREAMING="true"      # Stream output as it generates
export RK_CACHE="true"          # Cache responses
export RK_TELEMETRY="false"     # Anonymous usage stats
export RK_COLOR="auto"          # auto, always, never

Provider-Specific Variables

Anthropic (Claude)

export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-sonnet-4-20250514"
export ANTHROPIC_MAX_TOKENS="4096"

OpenAI

export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4-turbo"
export OPENAI_ORG_ID="org-..."  # Optional
export OPENAI_BASE_URL="https://api.openai.com/v1"  # For proxies

OpenRouter

export OPENROUTER_API_KEY="sk-or-..."
export OPENROUTER_MODEL="anthropic/claude-sonnet-4"
export OPENROUTER_SITE_URL="https://yourapp.com"  # For rankings
export OPENROUTER_SITE_NAME="YourApp"

Ollama (Local)

export OLLAMA_HOST="http://localhost:11434"
export OLLAMA_MODEL="llama3.2"
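
A fully local run might then look like this, assuming the model has already been pulled (ollama pull llama3.2):

export OLLAMA_HOST="http://localhost:11434"
export OLLAMA_MODEL="llama3.2"
rk-core think "question" --provider ollama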

Profile Variables

Override profile settings:

# GigaThink settings
export RK_GIGATHINK_PERSPECTIVES="15"
export RK_GIGATHINK_INCLUDE_CONTRARIAN="true"

# LaserLogic settings
export RK_LASERLOGIC_DEPTH="deep"
export RK_LASERLOGIC_FALLACY_DETECTION="true"

# ProofGuard settings
export RK_PROOFGUARD_SOURCES="5"
export RK_PROOFGUARD_REQUIRE_CITATION="true"

# BrutalHonesty settings
export RK_BRUTALHONESTY_SEVERITY="high"
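
Because these are ordinary environment variables, they can also be set for a single run without exporting:

# Stricter verification and harsher critique for one invocation only
RK_PROOFGUARD_SOURCES=5 RK_BRUTALHONESTY_SEVERITY=high rk-core think "question"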

Development Variables

# Debug mode
export RK_DEBUG="true"

# Log level
export RK_LOG_LEVEL="debug"  # trace, debug, info, warn, error

# Disable SSL verification (dev only!)
export RK_INSECURE="false"

# Mock responses (for testing)
export RK_MOCK="false"

Docker Usage

FROM rust:latest
RUN cargo install reasonkit-core

ENV ANTHROPIC_API_KEY=""
ENV RK_PROFILE="balanced"
ENV RK_OUTPUT="json"

ENTRYPOINT ["rk-core"]

# Then pass the key through at runtime
docker run -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
    reasonkit think "question"
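
To persist configuration and cache between container runs, mount them from the host (paths assume the container runs as root, as the rust base image does by default):

docker run -e ANTHROPIC_API_KEY \
    -v "$HOME/.config/reasonkit:/root/.config/reasonkit" \
    -v "$HOME/.cache/reasonkit:/root/.cache/reasonkit" \
    reasonkit think "question"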

CI/CD Examples

GitHub Actions

jobs:
  analyze:
    runs-on: ubuntu-latest
    env:
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      RK_PROFILE: balanced
      RK_OUTPUT: json
    steps:
      - uses: actions/checkout@v4
      - name: Install ReasonKit
        run: cargo install reasonkit-core
      - name: Run Analysis
        run: rk-core think "Is this ready to ship?" > analysis.json

GitLab CI

analyze:
  variables:
    ANTHROPIC_API_KEY: $ANTHROPIC_API_KEY
    RK_PROFILE: balanced
  script:
    - cargo install reasonkit-core
    - rk-core think "question" --output json

Precedence Order

Settings are applied in this order (later overrides earlier):

  1. Built-in defaults
  2. Config file (~/.config/reasonkit/config.toml)
  3. Environment variables (RK_*)
  4. Command-line flags (--profile, etc.)

Exit Codes

🔢 Understand CLI exit codes for scripting and automation.

ReasonKit uses standard exit codes to indicate success or failure, making it easy to integrate into scripts and CI/CD pipelines.

Exit Code Reference

| Code | Name | Description |
|------|------|-------------|
| 0 | Success | Command completed successfully |
| 1 | General Error | Unspecified error occurred |
| 2 | Invalid Arguments | Invalid command-line arguments |
| 3 | Configuration Error | Invalid or missing configuration |
| 4 | Provider Error | LLM provider connection failed |
| 5 | Authentication Error | API key invalid or missing |
| 6 | Rate Limit | Provider rate limit exceeded |
| 7 | Timeout | Operation timed out |
| 8 | Parse Error | Failed to parse input or output |
| 10 | Validation Failed | Confidence threshold not met |

Using Exit Codes in Scripts

Bash

#!/bin/bash

# Run analysis and check result
if rk-core think "Should we deploy?" --profile quick; then
    echo "Analysis complete"
else
    exit_code=$?
    case $exit_code in
        5)
            echo "Error: API key not set"
            ;;
        6)
            echo "Error: Rate limited, try again later"
            ;;
        7)
            echo "Error: Analysis timed out"
            ;;
        *)
            echo "Error: Analysis failed (code: $exit_code)"
            ;;
    esac
    exit $exit_code
fi

Check Specific Conditions

# Retry on rate limit
max_retries=3
retry_count=0

while [ $retry_count -lt $max_retries ]; do
    rk-core think "question" --profile balanced
    exit_code=$?

    if [ $exit_code -eq 0 ]; then
        break
    elif [ $exit_code -eq 6 ]; then
        echo "Rate limited, waiting 60s..."
        sleep 60
        retry_count=$((retry_count + 1))
    else
        exit $exit_code
    fi
done

CI/CD Integration

# GitHub Actions example: each step runs in a fresh shell, so $? does not
# survive between steps. Capture the exit code as a step output instead.
- name: Run ReasonKit Analysis
  id: analysis
  run: |
    set +e
    rk-core think "Is this PR ready to merge?" --profile balanced --output json > analysis.json
    echo "exit_code=$?" >> "$GITHUB_OUTPUT"

- name: Check Analysis Result
  run: |
    if [ "${{ steps.analysis.outputs.exit_code }}" -eq 10 ]; then
      echo "::warning::Analysis confidence below threshold"
    fi

Verbose Exit Information

Use --verbose to get more details on errors:

rk-core think "question" --profile balanced --verbose

On error, this outputs:

  • Error message
  • Error code
  • Suggested resolution
  • Debug information (if available)

Exit Code Categories

Success (0)

  • Analysis completed
  • Output written successfully
  • All validations passed

Client Errors (1-3)

  • User-fixable issues
  • Configuration problems
  • Invalid input

Provider Errors (4-8)

  • LLM provider issues
  • Network problems
  • Authentication, rate-limit, timeout, and parse failures

Validation Errors (10)

  • Confidence thresholds not met
  • Output validation failed
  • Quality gates not passed

Scripting Best Practices

  1. Always check exit codes — Don’t assume success
  2. Handle rate limits — Implement exponential backoff (see the sketch after this list)
  3. Log failures — Capture stderr for debugging
  4. Use timeouts — Set reasonable --timeout values
  5. Fail fast — Exit early on critical errors
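
A minimal exponential-backoff sketch for rate limits (exit code 6); the retry count and delays are illustrative:

#!/bin/bash
delay=5
for attempt in 1 2 3 4 5; do
    rk-core think "question" --profile balanced && exit 0
    code=$?
    if [ "$code" -ne 6 ]; then
        exit "$code"   # not a rate limit: fail fast
    fi
    echo "Rate limited (attempt $attempt), sleeping ${delay}s..." >&2
    sleep "$delay"
    delay=$((delay * 2))
done
exit 6   # still rate limited after all retries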

Rust API

Native Rust integration for maximum performance.

Installation

Add to your Cargo.toml:

[dependencies]
reasonkit = "0.1"
tokio = { version = "1", features = ["full"] }

Quick Start

use reasonkit::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize with API key from environment
    let rk = ReasonKit::from_env()?;

    // Run analysis
    let result = rk
        .think("Should I take this job offer?")
        .profile(Profile::Balanced)
        .execute()
        .await?;

    // Access results
    println!("Synthesis: {}", result.synthesis);

    Ok(())
}

ReasonKit Builder

#![allow(unused)]
fn main() {
use reasonkit::{ReasonKit, Provider, Model};
use std::time::Duration;

// Configure explicitly
let rk = ReasonKit::builder()
    .provider(Provider::Anthropic)
    .model(Model::Claude3Sonnet)
    .api_key("sk-ant-...")
    .timeout(Duration::from_secs(120))
    .build()?;

// Or from config file
let rk = ReasonKit::from_config("~/.config/reasonkit/config.toml")?;

// Or from environment
let rk = ReasonKit::from_env()?;
}

Individual ThinkTools

GigaThink

#![allow(unused)]
fn main() {
use reasonkit::thinktools::GigaThink;

let gt = GigaThink::new()
    .perspectives(15)
    .include_contrarian(true);

let result = gt.analyze("Should I start a business?").await?;

for perspective in result.perspectives {
    println!("{}: {}", perspective.category, perspective.content);
}
}

LaserLogic

#![allow(unused)]
fn main() {
use reasonkit::thinktools::{LaserLogic, Depth};

let ll = LaserLogic::new()
    .depth(Depth::Deep)
    .fallacy_detection(true)
    .assumption_analysis(true);

let result = ll.analyze("Renting is throwing money away").await?;

for flaw in result.flaws {
    println!("FLAW: {}", flaw.claim);
    println!("  Issue: {}", flaw.issue);
    println!("  Evidence: {}", flaw.evidence);
}
}

BedRock

#![allow(unused)]
fn main() {
use reasonkit::thinktools::BedRock;

let br = BedRock::new()
    .decomposition_depth(3)
    .show_80_20(true);

let result = br.analyze("How should I think about this decision?").await?;

println!("Core question: {}", result.core_question);
println!("First principles: {:?}", result.first_principles);
println!("80/20: {}", result.eighty_twenty);
}

ProofGuard

#![allow(unused)]
fn main() {
use reasonkit::thinktools::{ProofGuard, SourceTier};

let pg = ProofGuard::new()
    .min_sources(3)
    .require_citation(true)
    .source_tier_threshold(SourceTier::Tier2);

let result = pg.analyze("8 glasses of water a day is necessary").await?;

println!("Verdict: {:?}", result.verdict);
for claim in result.verified {
    println!("✓ {}: {}", claim.claim, claim.evidence);
}
for claim in result.unverified {
    println!("? {}", claim.claim);
}
}

BrutalHonesty

#![allow(unused)]
fn main() {
use reasonkit::thinktools::{BrutalHonesty, Severity};

let bh = BrutalHonesty::new()
    .severity(Severity::High)
    .include_alternatives(true);

let result = bh.analyze("I'm going to become a day trader").await?;

for truth in result.uncomfortable_truths {
    println!("🔥 {}", truth);
}
for question in result.questions {
    println!("❓ {}", question);
}
}

PowerCombo

#![allow(unused)]
fn main() {
use reasonkit::thinktools::PowerCombo;
use reasonkit::profiles::Profile;

let combo = PowerCombo::new()
    .profile(Profile::Balanced);

let result = combo.analyze("Should I buy a house?").await?;

// Access individual tool results
println!("GigaThink: {} perspectives", result.gigathink.perspectives.len());
println!("LaserLogic: {} flaws", result.laserlogic.flaws.len());
println!("BedRock: {}", result.bedrock.core_question);
println!("ProofGuard: {:?}", result.proofguard.verdict);
println!("BrutalHonesty: {} truths", result.brutalhonesty.uncomfortable_truths.len());

// Or the synthesis
println!("Synthesis: {}", result.synthesis);
}

Profiles

#![allow(unused)]
fn main() {
use reasonkit::profiles::Profile;
use reasonkit::{Tool, Severity};
use std::time::Duration;

// Built-in profiles
let profile = Profile::Quick;      // Fast, 2 tools
let profile = Profile::Balanced;   // Standard, 5 tools
let profile = Profile::Deep;       // Thorough, 6 tools
let profile = Profile::Paranoid;   // Maximum, all tools

// Custom profile
let profile = Profile::custom()
    .tools(vec![Tool::GigaThink, Tool::BrutalHonesty])
    .gigathink_perspectives(8)
    .brutalhonesty_severity(Severity::High)
    .timeout(Duration::from_secs(60))
    .build();
}

Output Formats

#![allow(unused)]
fn main() {
use reasonkit::output::{Format, OutputOptions};

let result = rk.think("question")
    .execute()
    .await?;

// Pretty print
println!("{}", result.format(Format::Pretty));

// JSON
let json = result.format(Format::Json);
std::fs::write("analysis.json", json)?;

// Markdown
let md = result.format(Format::Markdown);

// Custom formatting
let options = OutputOptions::builder()
    .include_metadata(true)
    .max_length(1000)
    .build();
let output = result.format_with(Format::Pretty, options);
}

Error Handling

#![allow(unused)]
fn main() {
use reasonkit::error::ReasonKitError;

match rk.think("question").execute().await {
    Ok(result) => println!("{}", result.synthesis),
    Err(ReasonKitError::ApiError(e)) => eprintln!("API error: {}", e),
    Err(ReasonKitError::ConfigError(e)) => eprintln!("Config error: {}", e),
    Err(ReasonKitError::Timeout) => eprintln!("Analysis timed out"),
    Err(e) => eprintln!("Error: {}", e),
}
}

Async Streaming

#![allow(unused)]
fn main() {
use futures::StreamExt;

let mut stream = rk.think("question")
    .stream()
    .await?;

while let Some(chunk) = stream.next().await {
    match chunk {
        Ok(ToolResult::GigaThink(gt)) => {
            println!("GigaThink complete: {} perspectives", gt.perspectives.len());
        }
        Ok(ToolResult::LaserLogic(ll)) => {
            println!("LaserLogic complete: {} flaws", ll.flaws.len());
        }
        // ... other tools
        Err(e) => eprintln!("Error: {}", e),
    }
}
}

Concurrent Analysis

#![allow(unused)]
fn main() {
use futures::future::join_all;

let questions = vec![
    "Should we launch feature A?",
    "Should we launch feature B?",
    "Should we launch feature C?",
];

let analyses: Vec<_> = questions
    .iter()
    .map(|q| rk.think(q).execute())
    .collect();

let results = join_all(analyses).await;

for (question, result) in questions.iter().zip(results) {
    match result {
        Ok(r) => println!("{}: {}", question, r.synthesis),
        Err(e) => eprintln!("{}: Error - {}", question, e),
    }
}
}

Full Example

use reasonkit::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize
    let rk = ReasonKit::builder()
        .provider(Provider::Anthropic)
        .model(Model::Claude3Sonnet)
        .timeout(Duration::from_secs(300))
        .from_env()?
        .build()?;

    // Run deep analysis
    let result = rk
        .think("Should I quit my job to start a business?")
        .profile(Profile::Deep)
        .execute()
        .await?;

    // Process results
    println!("=== Analysis Complete ===\n");

    println!("Perspectives:");
    for p in &result.gigathink.perspectives {
        println!("  - {}: {}", p.category, p.content);
    }

    println!("\nLogical Flaws:");
    for f in &result.laserlogic.flaws {
        println!("  - {}", f.claim);
    }

    println!("\nCore Question: {}", result.bedrock.core_question);

    println!("\nUncomfortable Truths:");
    for t in &result.brutalhonesty.uncomfortable_truths {
        println!("  🔥 {}", t);
    }

    println!("\n=== Synthesis ===");
    println!("{}", result.synthesis);

    // Export as JSON
    std::fs::write(
        "analysis.json",
        result.format(Format::Json)
    )?;

    Ok(())
}

Python API

Python bindings for ReasonKit via PyO3.

Installation

pip install reasonkit
# Or with uv (recommended)
uv pip install reasonkit

Quick Start

from reasonkit import ReasonKit, Profile

# Initialize (uses ANTHROPIC_API_KEY from environment)
rk = ReasonKit()

# Run analysis
result = rk.think("Should I take this job offer?", profile=Profile.BALANCED)

# Access results
print(result.synthesis)

Configuration

from reasonkit import ReasonKit, Provider, Model

# Explicit configuration
rk = ReasonKit(
    provider=Provider.ANTHROPIC,
    model=Model.CLAUDE_3_SONNET,
    api_key="sk-ant-...",  # Or from env
    timeout=120
)

# From config file
rk = ReasonKit.from_config("~/.config/reasonkit/config.toml")

# From environment
rk = ReasonKit.from_env()

Individual ThinkTools

GigaThink

from reasonkit.thinktools import GigaThink

gt = GigaThink(perspectives=15, include_contrarian=True)
result = gt.analyze("Should I start a business?")

for p in result.perspectives:
    print(f"{p.category}: {p.content}")

LaserLogic

from reasonkit.thinktools import LaserLogic, Depth

ll = LaserLogic(depth=Depth.DEEP, fallacy_detection=True)
result = ll.analyze("Renting is throwing money away")

for flaw in result.flaws:
    print(f"FLAW: {flaw.claim}")
    print(f"  Issue: {flaw.issue}")

BedRock

from reasonkit.thinktools import BedRock

br = BedRock(decomposition_depth=3, show_80_20=True)
result = br.analyze("How should I think about this decision?")

print(f"Core question: {result.core_question}")
print(f"First principles: {result.first_principles}")

ProofGuard

from reasonkit.thinktools import ProofGuard, SourceTier

pg = ProofGuard(
    min_sources=3,
    require_citation=True,
    source_tier_threshold=SourceTier.TIER2
)
result = pg.analyze("8 glasses of water a day is necessary")

print(f"Verdict: {result.verdict}")
for claim in result.verified:
    print(f"✓ {claim.claim}")

BrutalHonesty

from reasonkit.thinktools import BrutalHonesty, Severity

bh = BrutalHonesty(severity=Severity.HIGH, include_alternatives=True)
result = bh.analyze("I'm going to become a day trader")

for truth in result.uncomfortable_truths:
    print(f"🔥 {truth}")

PowerCombo

from reasonkit import ReasonKit, Profile

rk = ReasonKit()
result = rk.think("Should I buy a house?", profile=Profile.BALANCED)

# Access individual tool results
print(f"Perspectives: {len(result.gigathink.perspectives)}")
print(f"Flaws: {len(result.laserlogic.flaws)}")
print(f"Core question: {result.bedrock.core_question}")
print(f"Verdict: {result.proofguard.verdict}")
print(f"Truths: {len(result.brutalhonesty.uncomfortable_truths)}")

# Or the synthesis
print(f"Synthesis: {result.synthesis}")

Profiles

from reasonkit import Profile

# Built-in profiles
Profile.QUICK      # Fast, 2 tools
Profile.BALANCED   # Standard, 5 tools
Profile.DEEP       # Thorough, 6 tools
Profile.PARANOID   # Maximum, all tools

# Custom profile
from reasonkit import CustomProfile, Tool, Severity

profile = CustomProfile(
    tools=[Tool.GIGATHINK, Tool.BRUTALHONESTY],
    gigathink_perspectives=8,
    brutalhonesty_severity=Severity.HIGH,
    timeout=60
)

result = rk.think("question", profile=profile)

Async Support

import asyncio
from reasonkit import ReasonKit, Profile

async def main():
    rk = ReasonKit()

    # Single async analysis
    result = await rk.think_async(
        "Should I take this job?",
        profile=Profile.BALANCED
    )
    print(result.synthesis)

    # Concurrent analyses
    questions = [
        "Should we launch feature A?",
        "Should we launch feature B?",
        "Should we launch feature C?",
    ]

    tasks = [rk.think_async(q) for q in questions]
    results = await asyncio.gather(*tasks)

    for question, result in zip(questions, results):
        print(f"{question}: {result.synthesis[:100]}...")

asyncio.run(main())

Output Formats

from reasonkit import Format

result = rk.think("question")

# Pretty print
print(result.format(Format.PRETTY))

# JSON
json_str = result.format(Format.JSON)
with open("analysis.json", "w") as f:
    f.write(json_str)

# Markdown
md = result.format(Format.MARKDOWN)

# As dict
data = result.to_dict()

# As dataclass
from dataclasses import asdict
data = asdict(result)

Error Handling

from reasonkit import ReasonKit, ReasonKitError, ApiError, ConfigError, TimeoutError

rk = ReasonKit()

try:
    result = rk.think("question")
except ApiError as e:
    print(f"API error: {e}")
except ConfigError as e:
    print(f"Config error: {e}")
except TimeoutError:
    print("Analysis timed out")
except ReasonKitError as e:
    print(f"Error: {e}")

Streaming

from reasonkit import ReasonKit

rk = ReasonKit()

# Stream results as they complete
for tool_result in rk.think_stream("question"):
    if tool_result.tool == "gigathink":
        print(f"GigaThink: {len(tool_result.perspectives)} perspectives")
    elif tool_result.tool == "laserlogic":
        print(f"LaserLogic: {len(tool_result.flaws)} flaws")
    # ... etc

Context Manager

from reasonkit import ReasonKit

# Automatic cleanup
with ReasonKit() as rk:
    result = rk.think("question")
    print(result.synthesis)

Integration with pandas

import pandas as pd
from reasonkit import ReasonKit

rk = ReasonKit()

# Analyze multiple questions
questions = pd.Series([
    "Should we invest in marketing?",
    "Should we hire more engineers?",
    "Should we expand to Europe?"
])

# Apply analysis
results = questions.apply(lambda q: rk.think(q).synthesis)

# Create DataFrame
df = pd.DataFrame({
    "question": questions,
    "analysis": results
})

Full Example

#!/usr/bin/env python3
"""Complete ReasonKit analysis example."""

from reasonkit import ReasonKit, Profile, Format
from reasonkit.thinktools import BrutalHonesty, Severity

def main():
    # Initialize
    rk = ReasonKit.from_env()

    question = "Should I quit my job to start a business?"

    # Run deep analysis
    print(f"Analyzing: {question}\n")
    result = rk.think(question, profile=Profile.DEEP)

    # Process results
    print("=== Perspectives ===")
    for p in result.gigathink.perspectives[:5]:  # Top 5
        print(f"  - {p.category}: {p.content}")

    print("\n=== Logical Flaws ===")
    for f in result.laserlogic.flaws[:3]:  # Top 3
        print(f"  - {f.claim}")

    print(f"\n=== Core Question ===")
    print(f"  {result.bedrock.core_question}")

    print("\n=== Uncomfortable Truths ===")
    for t in result.brutalhonesty.uncomfortable_truths[:3]:
        print(f"  🔥 {t}")

    print("\n=== Synthesis ===")
    print(result.synthesis)

    # Export
    with open("analysis.json", "w") as f:
        f.write(result.format(Format.JSON))

    print("\nAnalysis saved to analysis.json")

if __name__ == "__main__":
    main()

Output Formats

📄 Understanding ReasonKit’s output options for different use cases.

ReasonKit supports multiple output formats for human readability, machine processing, and documentation.

Available Formats

| Format | Flag | Best For |
|--------|------|----------|
| Pretty | --output pretty | Interactive use, terminals |
| JSON | --output json | Scripts, APIs, processing |
| Markdown | --output markdown | Documentation, reports |

Pretty Output (Default)

Human-readable output with colors and box drawing.

rk-core think "Should I learn Rust?" --output pretty
╔════════════════════════════════════════════════════════════╗
║  BALANCED ANALYSIS                                         ║
║  Time: 1 minute 32 seconds                                 ║
╚════════════════════════════════════════════════════════════╝

┌────────────────────────────────────────────────────────────┐
│  💡 GIGATHINK: 10 Perspectives                             │
├────────────────────────────────────────────────────────────┤
│  1. CAREER: Rust is in high demand for systems/WebAssembly │
│  2. LEARNING: Steep initial curve, strong long-term value  │
│  3. COMMUNITY: Excellent docs, helpful community           │
│  4. ECOSYSTEM: Growing rapidly, some gaps remain           │
│  5. ALTERNATIVES: Consider Go, Zig as alternatives         │
│  ...                                                        │
└────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────┐
│  ⚡ LASERLOGIC: Reasoning Check                            │
├────────────────────────────────────────────────────────────┤
│  FLAW 1: "Rust is hard"                                    │
│  → Difficulty is front-loaded, not total                   │
│  → Initial investment pays off in fewer bugs later         │
└────────────────────────────────────────────────────────────┘

═══════════════════════════════════════════════════════════════

SYNTHESIS:
Yes, learn Rust if you're interested in systems programming,
WebAssembly, or want to level up your understanding of memory
management. The steep learning curve is worth the payoff.

CONFIDENCE: 85%

Disabling Colors

# Via flag
rk-core think "question" --no-color

# Via environment
export NO_COLOR=1
rk-core think "question"

# Via config
[output]
color = "never"  # "auto", "always", "never"

JSON Output

Machine-readable structured output.

rk-core think "Should I learn Rust?" --output json
{
  "id": "analysis_2025011512345",
  "input": "Should I learn Rust?",
  "profile": "balanced",
  "timestamp": "2025-01-15T10:30:00Z",
  "duration_ms": 92000,
  "confidence": 0.85,
  "synthesis": "Yes, learn Rust if you're interested in systems programming...",
  "tools": [
    {
      "name": "GigaThink",
      "alias": "gt",
      "duration_ms": 25000,
      "result": {
        "perspectives": [
          {
            "id": 1,
            "label": "CAREER",
            "content": "Rust is in high demand for systems/WebAssembly"
          },
          {
            "id": 2,
            "label": "LEARNING",
            "content": "Steep initial curve, strong long-term value"
          }
        ],
        "summary": "Multiple perspectives suggest learning Rust is worthwhile..."
      }
    },
    {
      "name": "LaserLogic",
      "alias": "ll",
      "duration_ms": 18000,
      "result": {
        "flaws": [
          {
            "claim": "Rust is hard",
            "issue": "Difficulty is front-loaded, not total",
            "correction": "Initial investment pays off in fewer bugs later"
          }
        ],
        "valid_points": [
          "Memory safety without garbage collection is valuable",
          "Systems programming skills transfer to other domains"
        ]
      }
    },
    {
      "name": "BedRock",
      "alias": "br",
      "duration_ms": 20000,
      "result": {
        "core_question": "Is learning Rust worth the time investment?",
        "first_principles": [
          "Programming languages are tools for solving problems",
          "Learning investment should match problem frequency",
          "Difficulty is an upfront cost, not ongoing"
        ],
        "decomposition": "..."
      }
    },
    {
      "name": "ProofGuard",
      "alias": "pg",
      "duration_ms": 15000,
      "result": {
        "claims_verified": [
          {
            "claim": "Rust has excellent documentation",
            "status": "verified",
            "sources": ["rust-lang.org", "doc.rust-lang.org"]
          }
        ],
        "claims_unverified": [],
        "contradictions": []
      }
    },
    {
      "name": "BrutalHonesty",
      "alias": "bh",
      "duration_ms": 14000,
      "result": {
        "harsh_truths": [
          "You might be avoiding learning by asking this question",
          "The 'best' language is one you actually use"
        ],
        "blind_spots": [
          "What problem are you trying to solve with Rust?"
        ]
      }
    }
  ],
  "metadata": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514",
    "tokens": {
      "prompt": 1234,
      "completion": 2345,
      "total": 3579
    },
    "version": "0.1.0"
  }
}

JSON Schema

Full JSON schema for validation:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["id", "input", "profile", "confidence", "synthesis", "tools"],
  "properties": {
    "id": { "type": "string" },
    "input": { "type": "string" },
    "profile": { "type": "string", "enum": ["quick", "balanced", "deep", "paranoid"] },
    "timestamp": { "type": "string", "format": "date-time" },
    "duration_ms": { "type": "integer" },
    "confidence": { "type": "number", "minimum": 0, "maximum": 1 },
    "synthesis": { "type": "string" },
    "tools": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "alias", "result"],
        "properties": {
          "name": { "type": "string" },
          "alias": { "type": "string" },
          "duration_ms": { "type": "integer" },
          "result": { "type": "object" }
        }
      }
    },
    "metadata": { "type": "object" }
  }
}
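
To validate an analysis programmatically, one option is the jsonschema package (assuming the schema above has been saved as schema.json):

import json
from jsonschema import validate  # pip install jsonschema

with open("schema.json") as f:
    schema = json.load(f)
with open("analysis.json") as f:
    analysis = json.load(f)

# Raises jsonschema.ValidationError if the document does not conform
validate(instance=analysis, schema=schema)
print("analysis.json matches the ReasonKit output schema")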

Parsing JSON Output

jq examples:

# Get just the synthesis
rk-core think "question" -o json | jq -r '.synthesis'

# Get confidence as number
rk-core think "question" -o json | jq '.confidence'

# List all tool names
rk-core think "question" -o json | jq -r '.tools[].name'

# Get GigaThink perspectives
rk-core think "question" -o json | jq '.tools[] | select(.name == "GigaThink") | .result.perspectives'

# Filter to high-confidence analyses
rk-core think "question" -o json | jq 'select(.confidence > 0.8)'

Python:

import json
import subprocess

result = subprocess.run(
    ["rk-core", "think", "question", "-o", "json"],
    capture_output=True,
    text=True,
)
analysis = json.loads(result.stdout)

print(f"Confidence: {analysis['confidence']}")
print(f"Synthesis: {analysis['synthesis']}")

for tool in analysis['tools']:
    print(f"- {tool['name']}: {tool['duration_ms']}ms")

Markdown Output

Documentation-ready format.

rk-core think "Should I learn Rust?" --output markdown
# Analysis: Should I learn Rust?

**Profile:** Balanced
**Time:** 1 minute 32 seconds
**Confidence:** 85%

---

## 💡 GigaThink: 10 Perspectives

| # | Perspective | Insight |
|---|-------------|---------|
| 1 | CAREER | Rust is in high demand for systems/WebAssembly |
| 2 | LEARNING | Steep initial curve, strong long-term value |
| 3 | COMMUNITY | Excellent docs, helpful community |
| 4 | ECOSYSTEM | Growing rapidly, some gaps remain |
| 5 | ALTERNATIVES | Consider Go, Zig as alternatives |

---

## ⚡ LaserLogic: Reasoning Check

### Flaws Identified

1. **"Rust is hard"**
   - Issue: Difficulty is front-loaded, not total
   - Correction: Initial investment pays off in fewer bugs later

### Valid Points

- Memory safety without garbage collection is valuable
- Systems programming skills transfer to other domains

---

## 🪨 BedRock: First Principles

**Core Question:** Is learning Rust worth the time investment?

**First Principles:**
1. Programming languages are tools for solving problems
2. Learning investment should match problem frequency
3. Difficulty is an upfront cost, not ongoing

---

## 🛡️ ProofGuard: Verification

| Claim | Status | Sources |
|-------|--------|---------|
| Rust has excellent documentation | ✅ Verified | rust-lang.org, doc.rust-lang.org |

---

## 🔥 BrutalHonesty: Reality Check

**Harsh Truths:**
- You might be avoiding learning by asking this question
- The "best" language is one you actually use

**Blind Spots:**
- What problem are you trying to solve with Rust?

---

## Synthesis

Yes, learn Rust if you're interested in systems programming,
WebAssembly, or want to level up your understanding of memory
management. The steep learning curve is worth the payoff.

---

*Generated by ReasonKit v0.1.0 | Profile: balanced | Confidence: 85%*

Streaming Output

For real-time feedback during analysis:

rk-core think "question" --stream

Streaming outputs each tool’s result as it completes:

[GigaThink] Starting...
[GigaThink] Perspective 1: CAREER - Rust is in high demand...
[GigaThink] Perspective 2: LEARNING - Steep initial curve...
[GigaThink] Complete (25s)

[LaserLogic] Starting...
[LaserLogic] Analyzing logical structure...
[LaserLogic] Complete (18s)

[Synthesis] Combining results...
[Complete] Confidence: 85%

Quiet Mode

Suppress progress, show only final result:

# Just the synthesis
rk-core think "question" --quiet

# Combine with JSON for scripts
rk-core think "question" -q -o json | jq -r '.synthesis'

Output to File

# Redirect stdout
rk-core think "question" -o json > analysis.json

# Use --output-file flag
rk-core think "question" -o markdown --output-file report.md

# Multiple outputs
rk-core think "question" \
  --output json --output-file analysis.json \
  --output markdown --output-file report.md

Custom Templates

For advanced formatting, use templates:

rk-core think "question" --template my-template.hbs

Template example (Handlebars):

{{! my-template.hbs }}
# {{input}}

Analyzed with {{profile}} profile in {{duration_ms}}ms.

{{#each tools}}
## {{name}}
{{#each result.perspectives}}
- {{label}}: {{content}}
{{/each}}
{{/each}}

**Bottom Line:** {{synthesis}}

Architecture

🏗️ Deep dive into ReasonKit’s internal design.

Understanding ReasonKit’s architecture helps you extend it, debug issues, and contribute effectively.

High-Level Overview

┌─────────────────────────────────────────────────────────────────┐
│                         CLI / API                                │
│                    (rk-core binary)                              │
└─────────────────────┬───────────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────────┐
│                     Orchestrator                                 │
│              (Profile selection, tool sequencing)                │
└─────────────────────┬───────────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────────┐
│                   ThinkTool Registry                             │
│      ┌─────────┬──────────┬─────────┬──────────┬─────────────┐  │
│      │GigaThink│LaserLogic│ BedRock │ProofGuard│BrutalHonesty│  │
│      └─────────┴──────────┴─────────┴──────────┴─────────────┘  │
└─────────────────────┬───────────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────────┐
│                    LLM Provider Layer                            │
│      ┌─────────┬─────────┬──────────┬─────────┐                 │
│      │Anthropic│ OpenAI  │OpenRouter│ Ollama  │                 │
│      └─────────┴─────────┴──────────┴─────────┘                 │
└─────────────────────────────────────────────────────────────────┘

Core Components

1. CLI Layer (src/main.rs)

Entry point for the application:

// Simplified structure
fn main() -> Result<()> {
    let args = Args::parse();
    let config = Config::load(&args)?;

    let runtime = Runtime::new()?;
    runtime.block_on(async {
        let result = orchestrator::run(&args.input, &config).await?;
        output::render(&result, &config.output_format)?;
        Ok(())
    })
}

Responsibilities:

  • Parse command-line arguments
  • Load and merge configuration
  • Initialize async runtime
  • Render output

2. Orchestrator (src/thinktool/executor.rs)

Coordinates ThinkTool execution based on profile:

#![allow(unused)]
fn main() {
pub struct Executor {
    registry: Registry,
    profile: Profile,
    provider: Box<dyn LlmProvider>,
}

impl Executor {
    pub async fn run(&self, input: &str) -> Result<Analysis> {
        let tools = self.profile.tools();
        let mut results = Vec::new();

        for tool in tools {
            let result = self.registry
                .get(tool)
                .expect("tool in profile must be registered")
                .execute(input, &self.provider)
                .await?;
            results.push(result);
        }

        self.synthesize(input, results).await
    }
}
}

Responsibilities:

  • Select tools based on profile
  • Execute tools in sequence or parallel
  • Synthesize final analysis

3. ThinkTool Registry (src/thinktool/registry.rs)

Manages available ThinkTools:

#![allow(unused)]
fn main() {
pub struct Registry {
    tools: HashMap<String, Box<dyn ThinkTool>>,
}

impl Registry {
    pub fn new() -> Self {
        let mut tools = HashMap::new();
        tools.insert("gigathink".to_string(), Box::new(GigaThink::new()));
        tools.insert("laserlogic".to_string(), Box::new(LaserLogic::new()));
        tools.insert("bedrock".to_string(), Box::new(BedRock::new()));
        tools.insert("proofguard".to_string(), Box::new(ProofGuard::new()));
        tools.insert("brutalhonesty".to_string(), Box::new(BrutalHonesty::new()));
        Self { tools }
    }

    pub fn get(&self, name: &str) -> Option<&dyn ThinkTool> {
        self.tools.get(name).map(|t| t.as_ref())
    }
}
}

4. ThinkTool Trait (src/thinktool/mod.rs)

Interface all ThinkTools implement:

#![allow(unused)]
fn main() {
#[async_trait]
pub trait ThinkTool: Send + Sync {
    /// Human-readable name
    fn name(&self) -> &str;

    /// Short alias (e.g., "gt" for GigaThink)
    fn alias(&self) -> &str;

    /// Tool description
    fn description(&self) -> &str;

    /// Execute the tool
    async fn execute(
        &self,
        input: &str,
        provider: &dyn LlmProvider,
        config: &ToolConfig,
    ) -> Result<ToolResult>;

    /// Generate the prompt for the LLM
    fn build_prompt(&self, input: &str, config: &ToolConfig) -> String;

    /// Parse the LLM response into structured output
    fn parse_response(&self, response: &str) -> Result<ToolResult>;
}
}

5. LLM Provider Layer (src/thinktool/llm.rs)

Abstraction over different LLM providers:

#[async_trait]
pub trait LlmProvider: Send + Sync {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse>;

    fn name(&self) -> &str;
    fn supports_streaming(&self) -> bool;
}

pub struct AnthropicProvider {
    client: reqwest::Client,
    api_key: String,
    model: String,
}

#[async_trait]
impl LlmProvider for AnthropicProvider {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        let response = self.client
            .post("https://api.anthropic.com/v1/messages")
            .header("x-api-key", &self.api_key)
            .header("anthropic-version", "2023-06-01")
            .json(&self.build_request(request))
            .send()
            .await?;

        self.parse_response(response).await
    }
}

Data Flow

Request Flow

User Input
    │
    ▼
┌─────────────┐
│  CLI Parse  │  Parse args, load config
└─────────────┘
    │
    ▼
┌─────────────┐
│  Executor   │  Select profile, initialize tools
└─────────────┘
    │
    ▼
┌─────────────┐
│  GigaThink  │──┐
└─────────────┘  │
    │            │
    ▼            │
┌─────────────┐  │  Sequential or parallel
│ LaserLogic  │──┤  based on profile
└─────────────┘  │
    │            │
    ▼            │
┌─────────────┐  │
│   BedRock   │──┤
└─────────────┘  │
    │            │
    ▼            │
┌─────────────┐  │
│ ProofGuard  │──┤
└─────────────┘  │
    │            │
    ▼            │
┌─────────────┐  │
│BrutalHonesty│──┘
└─────────────┘
    │
    ▼
┌─────────────┐
│  Synthesis  │  Combine all tool outputs
└─────────────┘
    │
    ▼
┌─────────────┐
│   Output    │  Format and render
└─────────────┘

Tool Execution Flow

// Inside each ThinkTool
async fn execute(&self, input: &str, provider: &dyn LlmProvider) -> Result<ToolResult> {
    // 1. Build the prompt with tool-specific instructions
    let prompt = self.build_prompt(input);

    // 2. Call the LLM
    let request = CompletionRequest {
        prompt,
        max_tokens: self.config.max_tokens,
        temperature: self.config.temperature,
    };
    let response = provider.complete(&request).await?;

    // 3. Parse structured output
    let result = self.parse_response(&response.text)?;

    // 4. Validate and return
    self.validate(&result)?;
    Ok(result)
}

Configuration System

Configuration Hierarchy

Priority (highest to lowest):
1. Command-line flags
2. Environment variables
3. Config file (~/.config/reasonkit/config.toml)
4. Built-in defaults
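A sketch of how that layering could look inside Config::load (merge, from_file, from_env, and from_args are illustrative names, not the crate's actual methods):

impl Config {
    pub fn load(args: &Args) -> Result<Config> {
        let mut config = Config::default();        // 4. built-in defaults
        if let Some(file) = Config::from_file()? {
            config.merge(file);                    // 3. config file
        }
        config.merge(Config::from_env()?);         // 2. environment variables
        config.merge(Config::from_args(args));     // 1. CLI flags win last
        Ok(config)
    }
}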

Config Structure

#[derive(Debug, Clone, Deserialize)]
pub struct Config {
    pub profile: Profile,
    pub provider: ProviderConfig,
    pub output: OutputConfig,
    pub tools: ToolsConfig,
}

#[derive(Debug, Clone, Deserialize)]
pub struct ToolsConfig {
    pub gigathink: GigaThinkConfig,
    pub laserlogic: LaserLogicConfig,
    pub bedrock: BedRockConfig,
    pub proofguard: ProofGuardConfig,
    pub brutalhonesty: BrutalHonestyConfig,
}

Error Handling

Error Types

#[derive(Debug, thiserror::Error)]
pub enum ReasonKitError {
    #[error("Configuration error: {0}")]
    Config(String),

    #[error("Provider error: {0}")]
    Provider(String),

    #[error("Parse error: {0}")]
    Parse(String),

    #[error("Validation error: {0}")]
    Validation(String),

    #[error("Timeout after {0:?}")]
    Timeout(Duration),

    #[error("Rate limited, retry after {0:?}")]
    RateLimit(Duration),
}

Error Propagation

// Errors bubble up with context
async fn run_analysis(input: &str, config: &Config) -> Result<Analysis> {
    let provider = create_provider(config)
        .map_err(|e| ReasonKitError::Config(format!("Provider setup: {}", e)))?;

    let result = executor.run(input, &provider)
        .await
        .map_err(|e| ReasonKitError::Provider(format!("Execution: {}", e)))?;

    Ok(result)
}

Extension Points

Adding a New ThinkTool

  1. Implement the ThinkTool trait:
pub struct MyTool {
    config: MyToolConfig,
}

#[async_trait]
impl ThinkTool for MyTool {
    fn name(&self) -> &str { "MyTool" }
    fn alias(&self) -> &str { "mt" }

    async fn execute(&self, input: &str, provider: &dyn LlmProvider) -> Result<ToolResult> {
        // Implementation
    }
}
  2. Register in the Registry:
registry.insert("mytool".to_string(), Box::new(MyTool::new()));

Adding a New Provider

  1. Implement LlmProvider:
#[async_trait]
impl LlmProvider for MyProvider {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        // API call implementation
    }
}
  2. Add to provider factory:
fn create_provider(config: &ProviderConfig) -> Result<Box<dyn LlmProvider>> {
    match config.name.as_str() {
        "myprovider" => Ok(Box::new(MyProvider::new(config)?)),
        // ...
    }
}

Performance Considerations

Async Execution

ThinkTools can run in parallel when independent:

// Parallel execution for independent tools
let (gigathink, laserlogic) = tokio::join!(
    registry.get("gigathink").execute(input, provider),
    registry.get("laserlogic").execute(input, provider),
);

Caching

Responses are cached to avoid redundant LLM calls:

pub struct CachedProvider<P: LlmProvider> {
    inner: P,
    cache: Arc<RwLock<LruCache<String, CompletionResponse>>>,
}
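A sketch of how the wrapper can satisfy LlmProvider, assuming tokio's RwLock, the lru crate, and a Clone-able CompletionResponse (none of which this chapter pins down):

#[async_trait]
impl<P: LlmProvider> LlmProvider for CachedProvider<P> {
    async fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse> {
        // Key on the prompt text; hashing the whole request also works
        let key = request.prompt.clone();

        // lru's get() takes &mut self because it updates recency,
        // so we take the write lock even for reads
        if let Some(hit) = self.cache.write().await.get(&key) {
            return Ok(hit.clone());
        }

        let response = self.inner.complete(request).await?;
        self.cache.write().await.put(key, response.clone());
        Ok(response)
    }

    fn name(&self) -> &str {
        self.inner.name()
    }

    fn supports_streaming(&self) -> bool {
        self.inner.supports_streaming()
    }
}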

Connection Pooling

HTTP clients use connection pooling:

let client = reqwest::Client::builder()
    .pool_max_idle_per_host(10)
    .timeout(Duration::from_secs(30))
    .build()?;

LLM Providers

🤖 Configure and optimize different LLM providers with ReasonKit.

ReasonKit supports multiple LLM providers, each with different strengths, pricing, and capabilities.

Supported Providers

Provider   | Models                  | Best For                   | Pricing
Anthropic  | Claude 4, Sonnet, Haiku | Best quality, safety       | $$$
OpenAI     | GPT-4, GPT-4 Turbo      | Broad compatibility        | $$$
OpenRouter | 300+ models             | Variety, cost optimization | $ - $$$
Ollama     | Llama, Mistral, etc.    | Privacy, free              | Free
Google     | Gemini Pro, Flash       | Long context               | $$

Provider Configuration

Anthropic Claude (Recommended)

Claude models provide the best reasoning quality for ThinkTools.

# Set API key
export ANTHROPIC_API_KEY="sk-ant-..."

# Use explicitly
rk-core think "question" --provider anthropic --model claude-sonnet-4-20250514

Config file:

[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"  # Use env var
model = "claude-sonnet-4-20250514"
max_tokens = 4096

Available models:

Model                     | Context | Speed   | Quality
claude-opus-4-20250514    | 200K    | Slow    | Best
claude-sonnet-4-20250514  | 200K    | Fast    | Excellent
claude-haiku-3-5-20241022 | 200K    | Fastest | Good

OpenAI

export OPENAI_API_KEY="sk-..."

rk-core think "question" --provider openai --model gpt-4-turbo

Config file:

[providers.openai]
api_key = "${OPENAI_API_KEY}"
model = "gpt-4-turbo"
organization_id = "org-..."  # Optional
base_url = "https://api.openai.com/v1"  # For proxies

Available models:

Model         | Context | Speed   | Quality
gpt-4-turbo   | 128K    | Fast    | Excellent
gpt-4         | 8K      | Medium  | Excellent
gpt-3.5-turbo | 16K     | Fastest | Good

OpenRouter

Access 300+ models through a single API. Great for cost optimization and experimentation.

export OPENROUTER_API_KEY="sk-or-..."

rk-core think "question" --provider openrouter --model anthropic/claude-sonnet-4

Config file:

[providers.openrouter]
api_key = "${OPENROUTER_API_KEY}"
model = "anthropic/claude-sonnet-4"
site_url = "https://yourapp.com"  # For rankings
site_name = "Your App"

Popular models:

Model                     | Provider  | Quality   | Price
anthropic/claude-sonnet-4 | Anthropic | Excellent | $$
openai/gpt-4-turbo        | OpenAI    | Excellent | $$
google/gemini-pro         | Google    | Good      | $
mistralai/mistral-large   | Mistral   | Good      | $
meta-llama/llama-3-70b    | Meta      | Good      | $

Ollama (Local)

Run models locally for privacy and zero API costs.

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

# Use with ReasonKit
rk-core think "question" --provider ollama --model llama3.2

Config file:

[providers.ollama]
host = "http://localhost:11434"
model = "llama3.2"

Recommended models:

Model          | Size | Quality     | RAM Required
llama3.2       | 8B   | Good        | 8GB
llama3.2:70b   | 70B  | Excellent   | 48GB
mistral        | 7B   | Good        | 8GB
mixtral        | 8x7B | Excellent   | 32GB
deepseek-coder | 33B  | Good (code) | 24GB

Google Gemini

export GOOGLE_API_KEY="..."

rk-core think "question" --provider google --model gemini-pro

Config file:

[providers.google]
api_key = "${GOOGLE_API_KEY}"
model = "gemini-pro"

Provider Selection

Automatic Selection

By default, ReasonKit auto-selects based on available API keys:

# Priority order:
# 1. ANTHROPIC_API_KEY
# 2. OPENAI_API_KEY
# 3. OPENROUTER_API_KEY
# 4. GOOGLE_API_KEY
# 5. Ollama (if running)

rk-core think "question"  # Uses first available
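A sketch of that fallback chain (only AnthropicProvider appears earlier in these docs; the other provider type names and the ollama_is_running probe are illustrative):

fn detect_provider() -> Result<Box<dyn LlmProvider>> {
    use std::env;

    if let Ok(key) = env::var("ANTHROPIC_API_KEY") {
        return Ok(Box::new(AnthropicProvider::new(key)));
    }
    if let Ok(key) = env::var("OPENAI_API_KEY") {
        return Ok(Box::new(OpenAiProvider::new(key)));
    }
    if let Ok(key) = env::var("OPENROUTER_API_KEY") {
        return Ok(Box::new(OpenRouterProvider::new(key)));
    }
    if let Ok(key) = env::var("GOOGLE_API_KEY") {
        return Ok(Box::new(GoogleProvider::new(key)));
    }
    if ollama_is_running() {
        // e.g. probe http://localhost:11434
        return Ok(Box::new(OllamaProvider::default()));
    }
    Err(ReasonKitError::Config("no LLM provider available".into()))
}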

Per-Profile Provider

Configure different providers for different profiles:

[profiles.quick]
provider = "ollama"
model = "llama3.2"

[profiles.balanced]
provider = "anthropic"
model = "claude-sonnet-4-20250514"

[profiles.deep]
provider = "anthropic"
model = "claude-opus-4-20250514"

Cost Optimization

# Use cheaper models for simple tasks
[profiles.quick]
provider = "openrouter"
model = "mistralai/mistral-7b-instruct"  # Very cheap

[profiles.balanced]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"  # Good balance

[profiles.paranoid]
provider = "anthropic"
model = "claude-opus-4-20250514"  # Best quality

Advanced Configuration

Timeouts

[providers.anthropic]
timeout_secs = 120
connect_timeout_secs = 10

Retries

[providers.anthropic]
max_retries = 3
retry_delay_ms = 1000
retry_multiplier = 2.0  # Exponential backoff
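With those values, the wait before retry n is retry_delay_ms × retry_multiplier^n:

attempt 0 → 1000 ms × 2.0^0 = 1 s
attempt 1 → 1000 ms × 2.0^1 = 2 s
attempt 2 → 1000 ms × 2.0^2 = 4 s

After max_retries (3) failed attempts, the request errors out.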

Rate Limiting

[providers.anthropic]
requests_per_minute = 50
tokens_per_minute = 100000

Custom Endpoints

For proxies or enterprise deployments:

[providers.openai]
base_url = "https://your-proxy.com/v1"
api_key = "${PROXY_API_KEY}"

Temperature and Sampling

[providers.anthropic]
temperature = 0.7        # 0.0-1.0, lower = more deterministic
top_p = 0.9             # Nucleus sampling
top_k = 40              # Top-k sampling

Provider-Specific Features

Anthropic Extended Thinking

Enable extended thinking for complex analysis:

[providers.anthropic]
extended_thinking = true
thinking_budget = 16000  # Max thinking tokens

OpenAI Function Calling

[providers.openai]
function_calling = true

OpenRouter Fallbacks

[providers.openrouter]
model = "anthropic/claude-sonnet-4"
fallback_models = [
    "openai/gpt-4-turbo",
    "google/gemini-pro",
]

Monitoring and Debugging

Token Usage

# Show token usage after each analysis
rk-core think "question" --verbose

# Output includes:
# Tokens: 1,234 prompt + 567 completion = 1,801 total
# Cost: ~$0.0054

Request Logging

# Log all API requests (for debugging)
export RK_DEBUG_API=true
rk-core think "question"

Provider Health Check

# Check if provider is working
rk-core provider test anthropic
rk-core provider test openai
rk-core provider test ollama

Switching Providers

Migration Checklist

When switching providers:

  1. Test compatibility — Run same prompts, compare quality
  2. Adjust timeouts — Different providers have different latencies
  3. Check token limits — Models have different context windows
  4. Update rate limits — Different quotas per provider
  5. Review costs — Pricing varies significantly

Quality Comparison

# Run same analysis with different providers
rk-core think "question" --provider anthropic --output json > anthropic.json
rk-core think "question" --provider openai --output json > openai.json
rk-core think "question" --provider ollama --output json > ollama.json

# Compare results
diff anthropic.json openai.json

Troubleshooting

Common Issues

Issue                | Cause               | Solution
“API key invalid”    | Wrong/expired key   | Regenerate API key
“Rate limited”       | Too many requests   | Add retry logic, reduce frequency
“Model not found”    | Wrong model ID      | Check provider’s model list
“Context too long”   | Input exceeds limit | Use model with larger context
“Connection refused” | Ollama not running  | ollama serve

Error Codes

Code | Meaning             | Action
401  | Unauthorized        | Check API key
429  | Rate limited        | Wait and retry
500  | Server error        | Retry or switch provider
503  | Service unavailable | Try fallback provider

Custom ThinkTools

Build your own reasoning modules.

Overview

ReasonKit’s architecture allows you to create custom ThinkTools that integrate seamlessly with the framework.

ThinkTool Anatomy

Every ThinkTool has:

  1. Input - A question, claim, or statement to analyze
  2. Process - Structured reasoning steps
  3. Output - Formatted analysis results
pub trait ThinkTool {
    type Output;

    fn name(&self) -> &str;
    fn description(&self) -> &str;
    async fn analyze(&self, input: &str) -> Result<Self::Output>;
}

Creating a Custom Tool

1. Define the Output Structure

use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StakeholderAnalysis {
    pub stakeholders: Vec<Stakeholder>,
    pub conflicts: Vec<Conflict>,
    pub recommendations: Vec<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Stakeholder {
    pub name: String,
    pub interests: Vec<String>,
    pub power_level: PowerLevel,
    pub stance: Stance,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum PowerLevel {
    High,
    Medium,
    Low,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Stance {
    Supportive,
    Neutral,
    Opposed,
}

2. Implement the Tool

use reasonkit::prelude::*;

pub struct StakeholderMap {
    min_stakeholders: usize,
    include_conflicts: bool,
}

impl StakeholderMap {
    pub fn new() -> Self {
        Self {
            min_stakeholders: 5,
            include_conflicts: true,
        }
    }

    pub fn min_stakeholders(mut self, n: usize) -> Self {
        self.min_stakeholders = n;
        self
    }
}

impl ThinkTool for StakeholderMap {
    type Output = StakeholderAnalysis;

    fn name(&self) -> &str {
        "StakeholderMap"
    }

    fn description(&self) -> &str {
        "Identifies and analyzes stakeholders affected by a decision"
    }

    async fn analyze(&self, input: &str) -> Result<Self::Output> {
        let prompt = format!(
            r#"Analyze the stakeholders for this decision: "{}"

Identify at least {} stakeholders. For each:
1. Name/category
2. Their interests
3. Power level (High/Medium/Low)
4. Likely stance (Supportive/Neutral/Opposed)

Also identify conflicts between stakeholders.

Format as JSON."#,
            input, self.min_stakeholders
        );

        let response = self.llm().complete(&prompt).await?;
        let analysis: StakeholderAnalysis = serde_json::from_str(&response)?;

        Ok(analysis)
    }
}

3. Create the Prompt Template

impl StakeholderMap {
    fn build_prompt(&self, input: &str) -> String {
        format!(
            r#"
STAKEHOLDER ANALYSIS

# Input Decision
{input}

# Your Task
Identify all parties affected by this decision.

# Required Analysis

## 1. Stakeholder Identification
List at least {min} stakeholders, considering:
- Direct participants
- Indirect affected parties
- Decision makers
- Influencers
- Silent stakeholders (often forgotten)

## 2. For Each Stakeholder
- **Name/Category**: Who they are
- **Interests**: What they want/need
- **Power Level**: High (can block/enable), Medium (can influence), Low (affected but limited voice)
- **Likely Stance**: Supportive, Neutral, or Opposed

## 3. Conflict Analysis
Identify where stakeholder interests conflict.

## 4. Recommendations
How to navigate the stakeholder landscape.

# Output Format
Respond in JSON matching this structure:
{{
  "stakeholders": [...],
  "conflicts": [...],
  "recommendations": [...]
}}
"#,
            input = input,
            min = self.min_stakeholders
        )
    }
}

Configuration

Make your tool configurable:

# In config.toml
[thinktools.stakeholdermap]
min_stakeholders = 5
include_conflicts = true
power_analysis = true

impl StakeholderMap {
    pub fn from_config(config: &Config) -> Self {
        Self {
            min_stakeholders: config.get("min_stakeholders").unwrap_or(5),
            include_conflicts: config.get("include_conflicts").unwrap_or(true),
        }
    }
}

Adding CLI Support

// In main.rs or cli module
use clap::Parser;

#[derive(Parser)]
pub struct StakeholderMapArgs {
    /// Input decision to analyze
    input: String,

    /// Minimum stakeholders to identify
    #[arg(long, default_value = "5")]
    min_stakeholders: usize,

    /// Include conflict analysis
    #[arg(long, default_value = "true")]
    conflicts: bool,
}

pub async fn run_stakeholder_map(args: StakeholderMapArgs) -> Result<()> {
    let tool = StakeholderMap::new()
        .min_stakeholders(args.min_stakeholders);

    let result = tool.analyze(&args.input).await?;
    println!("{}", result.format(Format::Pretty));

    Ok(())
}

Example Custom Tools

Devil’s Advocate

Argues against the proposed idea:

pub struct DevilsAdvocate {
    aggression_level: u8,  // 1-10
}

impl ThinkTool for DevilsAdvocate {
    type Output = CounterArguments;

    async fn analyze(&self, input: &str) -> Result<Self::Output> {
        // Generate strongest possible arguments against
    }
}

Timeline Analyst

Evaluates time-based factors:

pub struct TimelineAnalyst {
    horizon_years: u32,
}

impl ThinkTool for TimelineAnalyst {
    type Output = TimelineAnalysis;

    async fn analyze(&self, input: &str) -> Result<Self::Output> {
        // Analyze short/medium/long term implications
    }
}

Reversibility Checker

Assesses how reversible a decision is:

pub struct ReversibilityChecker;

impl ThinkTool for ReversibilityChecker {
    type Output = ReversibilityAnalysis;

    async fn analyze(&self, input: &str) -> Result<Self::Output> {
        // Analyze cost and feasibility of reversal
    }
}

Testing Custom Tools

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_stakeholder_map() {
        let tool = StakeholderMap::new().min_stakeholders(3);

        let result = tool
            .analyze("Should we open source our codebase?")
            .await
            .unwrap();

        assert!(result.stakeholders.len() >= 3);
        assert!(!result.recommendations.is_empty());
    }
}

Publishing Custom Tools

Share your tools with the community:

# Package as crate
cargo publish --package reasonkit-stakeholdermap

# Or contribute to main repo
git clone https://github.com/reasonkit/reasonkit-core
# Add tool in src/thinktools/contrib/

Best Practices

  1. Clear purpose - Each tool should do one thing well
  2. Structured output - Use typed structs, not free text
  3. Configurable - Allow customization via config
  4. Tested - Include unit and integration tests
  5. Documented - Explain what it does and when to use it

Integration Patterns

🔌 Embed ReasonKit into your applications and workflows.

ReasonKit is designed to integrate seamlessly with your existing tools, pipelines, and applications.

Integration Methods

Method     | Best For                    | Complexity
CLI        | Scripts, CI/CD, manual use  | Low
Library    | Rust applications           | Medium
HTTP API   | Any language, microservices | Medium
MCP Server | AI assistants, Claude       | Low

CLI Integration

Shell Scripts

#!/bin/bash
# decision-helper.sh

QUESTION="$1"
PROFILE="${2:-balanced}"

# Run analysis and capture output
RESULT=$(rk-core think "$QUESTION" --profile "$PROFILE" --output json)

# Parse with jq
CONFIDENCE=$(echo "$RESULT" | jq -r '.confidence')
SYNTHESIS=$(echo "$RESULT" | jq -r '.synthesis')

# Act on result
if (( $(echo "$CONFIDENCE > 0.8" | bc -l) )); then
    echo "High confidence decision: $SYNTHESIS"
else
    echo "Low confidence, consider more research"
fi

CI/CD Integration

GitHub Actions:

name: PR Analysis
on: pull_request

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install ReasonKit
        run: cargo install reasonkit-core

      - name: Analyze PR
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          # Get PR description
          PR_BODY=$(gh pr view ${{ github.event.number }} --json body -q .body)

          # Analyze with ReasonKit
          rk-core think "Should this PR be merged? Context: $PR_BODY" \
            --profile balanced \
            --output json > analysis.json

      - name: Post Comment
        run: |
          SYNTHESIS=$(jq -r '.synthesis' analysis.json)
          BODY=$(printf '## ReasonKit Analysis\n\n%s' "$SYNTHESIS")
          gh pr comment ${{ github.event.number }} --body "$BODY"

GitLab CI:

analyze_mr:
  stage: review
  script:
    - cargo install reasonkit-core
    - |
      rk-core think "Review this merge request: $CI_MERGE_REQUEST_DESCRIPTION" \
        --profile balanced \
        --output json > analysis.json
    - cat analysis.json
  artifacts:
    paths:
      - analysis.json

Cron Jobs

# Daily decision review (crontab entries must be a single line)
0 9 * * * /usr/local/bin/rk-core think "Review yesterday's decisions" --profile deep --output markdown >> /var/log/daily-review.md

Rust Library Integration

Add Dependency

# Cargo.toml
[dependencies]
reasonkit-core = "0.1"
tokio = { version = "1", features = ["full"] }

Basic Usage

use reasonkit_core::{run_analysis, Config, Profile};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config {
        profile: Profile::Balanced,
        ..Config::default()
    };

    let analysis = run_analysis(
        "Should I refactor this module?",
        &config,
    ).await?;

    println!("Confidence: {}", analysis.confidence);
    println!("Synthesis: {}", analysis.synthesis);

    Ok(())
}

Custom ThinkTool Pipeline

use reasonkit_core::thinktool::{
    GigaThink, LaserLogic, ProofGuard,
    ThinkTool, ToolConfig,
};

async fn custom_analysis(input: &str) -> Result<CustomResult> {
    let provider = create_provider()?;

    // Run specific tools in sequence
    let perspectives = GigaThink::new()
        .with_perspectives(15)
        .execute(input, &provider)
        .await?;

    let logic = LaserLogic::new()
        .with_depth(Depth::Deep)
        .execute(input, &provider)
        .await?;

    // Custom synthesis
    Ok(CustomResult {
        perspectives: perspectives.items,
        logic_issues: logic.flaws,
    })
}

Streaming Results

use reasonkit_core::stream::AnalysisStream;
use futures::StreamExt;

async fn stream_analysis(input: &str) -> Result<()> {
    let config = Config::default();
    let mut stream = AnalysisStream::new(input, &config);

    while let Some(event) = stream.next().await {
        match event? {
            StreamEvent::ToolStarted(name) => {
                println!("Starting {}...", name);
            }
            StreamEvent::ToolProgress(name, progress) => {
                println!("{}: {}%", name, progress);
            }
            StreamEvent::ToolCompleted(name, result) => {
                println!("{} complete: {:?}", name, result);
            }
            StreamEvent::Synthesis(text) => {
                println!("Final: {}", text);
            }
        }
    }

    Ok(())
}

HTTP API Integration

Running the API Server

# Start ReasonKit as an HTTP server
rk-core serve --port 8080

API Endpoints

POST /v1/analyze
  Request:
    {
      "input": "Should I do X?",
      "profile": "balanced",
      "options": {
        "proofguard_sources": 5
      }
    }

  Response:
    {
      "id": "analysis_abc123",
      "status": "completed",
      "confidence": 0.85,
      "synthesis": "...",
      "tools": [...]
    }

GET /v1/analysis/{id}
  Returns analysis status and results

GET /v1/profiles
  Lists available profiles

GET /v1/health
  Health check endpoint

Client Examples

Python:

import requests

def analyze(question: str, profile: str = "balanced") -> dict:
    response = requests.post(
        "http://localhost:8080/v1/analyze",
        json={
            "input": question,
            "profile": profile,
        },
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    response.raise_for_status()
    return response.json()

result = analyze("Should I invest in this stock?", "paranoid")
print(f"Confidence: {result['confidence']}")

JavaScript/TypeScript:

interface AnalysisResult {
  id: string;
  confidence: number;
  synthesis: string;
  tools: ToolResult[];
}

async function analyze(
  input: string,
  profile: string = "balanced"
): Promise<AnalysisResult> {
  const response = await fetch("http://localhost:8080/v1/analyze", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ input, profile }),
  });

  if (!response.ok) {
    throw new Error(`Analysis failed: ${response.statusText}`);
  }

  return response.json();
}

curl:

curl -X POST http://localhost:8080/v1/analyze \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "input": "Should I accept this job offer?",
    "profile": "deep"
  }'

MCP Server Integration

ReasonKit can run as an MCP (Model Context Protocol) server for AI assistants.

Setup

# Install MCP server
cargo install reasonkit-mcp

# Configure in Claude Desktop
# ~/.config/claude/claude_desktop_config.json
{
  "mcpServers": {
    "reasonkit": {
      "command": "reasonkit-mcp",
      "args": ["--profile", "balanced"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key"
      }
    }
  }
}

Available Tools

When connected, Claude can use:

  • reasonkit_think — Full analysis
  • reasonkit_gigathink — Multi-perspective brainstorm
  • reasonkit_laserlogic — Logic analysis
  • reasonkit_proofguard — Fact verification

Webhook Integration

Outgoing Webhooks

# Configure webhook endpoint
rk-core config set webhook.url "https://your-server.com/webhook"
rk-core config set webhook.events "analysis.completed,analysis.failed"

# Webhook payload format:
{
  "event": "analysis.completed",
  "timestamp": "2025-01-15T10:30:00Z",
  "analysis_id": "abc123",
  "input_hash": "sha256:...",
  "confidence": 0.85,
  "profile": "balanced"
}
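If the receiving end is Rust, a serde struct matching that payload might look like this sketch (the type is yours to define; ReasonKit does not ship it):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct WebhookEvent {
    event: String,        // e.g. "analysis.completed"
    timestamp: String,    // ISO 8601; a chrono::DateTime works too
    analysis_id: String,
    input_hash: String,   // "sha256:..."
    confidence: f64,
    profile: String,
}

let event: WebhookEvent = serde_json::from_str(&body)?;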

Incoming Webhooks

# Trigger analysis via webhook
curl -X POST http://localhost:8080/webhook/analyze \
  -H "X-Webhook-Secret: your-secret" \
  -d '{"input": "Question from external system"}'

Database Integration

SQLite Logging

# Enable SQLite logging
export RK_LOG_DB="$HOME/.local/share/reasonkit/analyses.db"

# Query past analyses
sqlite3 "$RK_LOG_DB" "SELECT * FROM analyses WHERE confidence > 0.8"

Schema

CREATE TABLE analyses (
    id TEXT PRIMARY KEY,
    input_text TEXT NOT NULL,
    input_hash TEXT NOT NULL,
    profile TEXT NOT NULL,
    confidence REAL,
    synthesis TEXT,
    raw_result TEXT,  -- JSON blob
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    duration_ms INTEGER
);

CREATE INDEX idx_confidence ON analyses(confidence);
CREATE INDEX idx_created_at ON analyses(created_at);
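Reading the log back from Rust works with any SQLite client; a rusqlite sketch (the crate choice is ours, not something ReasonKit ships):

use rusqlite::Connection;

fn recent_high_confidence(db_path: &str) -> rusqlite::Result<Vec<(String, f64)>> {
    let conn = Connection::open(db_path)?;
    let mut stmt = conn.prepare(
        "SELECT input_text, confidence FROM analyses
         WHERE confidence > 0.8
         ORDER BY created_at DESC
         LIMIT 10",
    )?;
    // Map each row to (input_text, confidence) and collect errors eagerly
    let rows = stmt
        .query_map([], |row| Ok((row.get(0)?, row.get(1)?)))?
        .collect::<rusqlite::Result<Vec<_>>>()?;
    Ok(rows)
}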

Best Practices

Rate Limiting

use std::num::NonZeroU32;

use governor::{Quota, RateLimiter};

async fn analyze_with_limit(input: &str) -> Result<Analysis> {
    // At most 30 analyses per minute; in real code, build the limiter
    // once and share it rather than rebuilding it per call
    let limiter = RateLimiter::direct(Quota::per_minute(NonZeroU32::new(30).unwrap()));
    limiter.until_ready().await;
    run_analysis(input, &Config::default()).await
}

Error Handling

match run_analysis(input, &config).await {
    Ok(analysis) => process_result(analysis),
    Err(ReasonKitError::RateLimit(retry_after)) => {
        tokio::time::sleep(retry_after).await;
        // Retry
    }
    Err(ReasonKitError::Timeout(_)) => {
        // Use cached result or default
    }
    Err(e) => {
        log::error!("Analysis failed: {}", e);
        return fallback_response();
    }
}

Caching

use std::time::Duration;

use moka::sync::Cache;

// One-hour TTL, at most 1,000 cached analyses
fn build_cache() -> Cache<String, Analysis> {
    Cache::builder()
        .max_capacity(1000)
        .time_to_live(Duration::from_secs(3600))
        .build()
}

async fn cached_analysis(cache: &Cache<String, Analysis>, input: &str) -> Result<Analysis> {
    let key = hash(input); // any stable hash of the input text

    if let Some(cached) = cache.get(&key) {
        return Ok(cached);
    }

    let result = run_analysis(input, &Config::default()).await?;
    cache.insert(key, result.clone());
    Ok(result)
}

Performance

Optimize ReasonKit for speed and cost efficiency.

Performance Overview

ReasonKit’s performance depends on:

  1. LLM Provider - Response times vary by provider/model
  2. Profile Depth - More tools = more time
  3. Network Latency - Distance to API servers
  4. Token Count - Longer prompts/responses = more time

Benchmarks

Typical execution times (Claude 3 Sonnet):

Profile  | Tools | Avg Time | Tokens
Quick    | 2     | ~15s     | ~2K
Balanced | 5     | ~45s     | ~5K
Deep     | 6     | ~90s     | ~15K
Paranoid | 7     | ~180s    | ~40K

Optimization Strategies

1. Choose Appropriate Profile

Don’t use paranoid for everything:

# Low stakes = quick
rk-core think "Should I buy this $20 item?" --quick

# High stakes = paranoid
rk-core think "Should I invest my savings?" --paranoid

2. Use Faster Models

Trade reasoning depth for speed:

# Fastest (Claude Haiku)
rk-core think "question" --model claude-3-haiku

# Balanced (Claude Sonnet)
rk-core think "question" --model claude-3-sonnet

# Best reasoning (Claude Opus)
rk-core think "question" --model claude-3-opus

Model speed comparison:

Model           | Relative Speed | Relative Quality
Claude 3 Haiku  | 1.0x (fastest) | Good
GPT-3.5 Turbo   | 1.1x           | Good
Claude 3 Sonnet | 2.5x           | Great
GPT-4 Turbo     | 3.0x           | Great
Claude 3 Opus   | 5.0x           | Best

3. Parallel Execution

Run tools concurrently when possible:

[execution]
parallel = true  # Run independent tools in parallel
max_concurrent = 3

Tools that can run in parallel:

  • GigaThink + LaserLogic (no dependencies)
  • ProofGuard (can run independently)

Tools that must be sequential:

  • BrutalHonesty (benefits from prior analysis)
  • Synthesis (requires all tool outputs)
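A sketch of how a profile might interleave the two phases (run_tool, run_tool_with_context, and synthesize are hypothetical helpers standing in for the executor's internals):

// Phase 1: independent tools run concurrently
let (gigathink, laserlogic, proofguard) = tokio::join!(
    run_tool("gigathink", input),
    run_tool("laserlogic", input),
    run_tool("proofguard", input),
);

// Phase 2: dependent steps run in order
let earlier = vec![gigathink?, laserlogic?, proofguard?];
let brutal = run_tool_with_context("brutalhonesty", input, &earlier).await?;
let analysis = synthesize(input, &earlier, &brutal).await?;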

4. Caching

Cache identical queries:

[cache]
enabled = true
ttl_seconds = 3600  # 1 hour
max_entries = 1000
storage = "memory"  # or "disk"
# First run: Full analysis
rk-core think "Should I take this job?" --profile balanced
# Time: 45s

# Second run (same query): Cached
rk-core think "Should I take this job?" --profile balanced
# Time: <1s

5. Streaming

Get results as they complete:

# Stream mode
rk-core think "question" --stream

Shows each tool’s output as it completes rather than waiting for all.

6. Local Models

For maximum privacy and no network latency:

# Use Ollama
ollama serve
rk-core think "question" --provider ollama --model llama3

# Performance varies by hardware:
# - M2 MacBook Pro: ~2-5 tokens/sec (Llama 3 8B)
# - RTX 4090: ~20-50 tokens/sec (Llama 3 8B)

Cost Optimization

Token Costs

Approximate costs per analysis (as of 2024):

Profile  | Claude Sonnet | GPT-4 Turbo | Claude Opus
Quick    | $0.02         | $0.06       | $0.10
Balanced | $0.05         | $0.15       | $0.25
Deep     | $0.15         | $0.45       | $0.75
Paranoid | $0.40         | $1.20       | $2.00
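Every Claude Sonnet row works out to roughly $0.01 per 1K tokens blended, so a quick back-of-envelope estimate is:

cost ≈ (total tokens / 1,000) × $0.01
Deep: (15,000 / 1,000) × $0.01 ≈ $0.15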

Cost Reduction Strategies

  1. Use cheaper models for simple questions

    rk-core think "simple question" --model claude-3-haiku
    
  2. Limit perspectives/sources

    rk-core think "question" --perspectives 5 --sources 2
    
  3. Use summary mode

    rk-core think "question" --summary-only
    
  4. Set token limits

    [limits]
    max_input_tokens = 2000
    max_output_tokens = 2000
    

Budget Controls

[budget]
daily_limit_usd = 10.00
alert_threshold = 0.80  # Alert at 80% of limit
hard_stop = true  # Stop if limit reached

Monitoring

Built-in Metrics

# Show execution stats
rk-core think "question" --show-stats

# Output:
# Execution time: 45.2s
# Tokens used: 4,892
# Estimated cost: $0.05
# Cache hits: 0

Logging

[logging]
level = "info"  # debug for detailed timing
file = "~/.local/share/reasonkit/logs/rk.log"

[telemetry]
enabled = true
endpoint = "http://localhost:4317"  # OpenTelemetry

Prometheus Metrics

# Start with metrics endpoint
rk-core serve --metrics-port 9090

# Metrics available:
# reasonkit_analysis_duration_seconds
# reasonkit_tokens_used_total
# reasonkit_cache_hits_total
# reasonkit_errors_total

Hardware Requirements

Minimum

  • 2 CPU cores
  • 4GB RAM
  • Network connection

Recommended

  • 4+ CPU cores
  • 8GB RAM
  • SSD storage (for caching)
  • Fast network connection

For Local Models

  • Apple Silicon (M1/M2/M3) or
  • NVIDIA GPU with 8GB+ VRAM
  • 32GB+ RAM for larger models

Development Setup

Get started contributing to ReasonKit.

Prerequisites

  • Rust 1.75+ (install via rustup)
  • Git for version control
  • LLM API key (Anthropic, OpenAI, or OpenRouter)

Optional:

  • Python 3.10+ for Python bindings
  • Node.js 18+ for documentation site
  • Docker for containerized development

Quick Start

# Clone the repository
git clone https://github.com/reasonkit/reasonkit-core.git
cd reasonkit-core

# Install dependencies and build
cargo build

# Run tests
cargo test

# Run the CLI
cargo run -- think "Test question"

Environment Setup

API Keys

# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
export OPENAI_API_KEY="sk-..."
# OR
export OPENROUTER_API_KEY="sk-or-..."

IDE Setup

VS Code

Recommended extensions:

  • rust-analyzer
  • CodeLLDB (for debugging)
  • Even Better TOML
  • Error Lens

// .vscode/settings.json
{
  "rust-analyzer.check.command": "clippy",
  "rust-analyzer.cargo.features": "all"
}

JetBrains (RustRover/IntelliJ)

Install Rust plugin and configure:

  • Toolchain: Use rustup default
  • Cargo features: all

Git Hooks

# Install pre-commit hooks
./scripts/install-hooks.sh

# Manual hook installation
cp hooks/pre-commit .git/hooks/
chmod +x .git/hooks/pre-commit

Project Structure

reasonkit-core/
├── src/
│   ├── lib.rs           # Library entry point
│   ├── main.rs          # CLI entry point
│   ├── thinktools/      # ThinkTool implementations
│   │   ├── mod.rs
│   │   ├── gigathink.rs
│   │   ├── laserlogic.rs
│   │   ├── bedrock.rs
│   │   ├── proofguard.rs
│   │   ├── brutalhonesty.rs
│   │   └── powercombo.rs
│   ├── profiles/        # Reasoning profiles
│   ├── providers/       # LLM provider implementations
│   ├── output/          # Output formatters
│   └── config/          # Configuration handling
├── tests/               # Integration tests
├── benches/             # Benchmarks
├── docs/                # Documentation (mdBook)
└── examples/            # Example usage

Development Workflow

Building

# Debug build
cargo build

# Release build (optimized)
cargo build --release

# Build with all features
cargo build --all-features

Testing

# Run all tests
cargo test

# Run specific test
cargo test test_gigathink

# Run tests with output
cargo test -- --nocapture

# Run integration tests
cargo test --test integration

# Run with coverage
cargo llvm-cov

Linting

# Run clippy
cargo clippy -- -D warnings

# Format code
cargo fmt

# Check formatting
cargo fmt -- --check

Benchmarks

# Run benchmarks
cargo bench

# Run specific benchmark
cargo bench gigathink

Documentation

# Build Rust docs
cargo doc --open

# Build mdBook docs
cd docs && mdbook serve

Running Locally

CLI

# Run directly
cargo run -- think "Your question here"

# With profile
cargo run -- think "Question" --profile deep

# With specific tool
cargo run -- gigathink "Question"

As Library

# Run example
cargo run --example basic_usage

# Run with release optimizations
cargo run --release --example full_analysis

Docker Development

# Build image
docker build -t reasonkit-dev .

# Run container
docker run -it \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  -v $(pwd):/app \
  reasonkit-dev

# Run tests in container
docker run reasonkit-dev cargo test

Debugging

VS Code

// .vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug CLI",
      "cargo": {
        "args": ["build", "--bin=rk-core"],
        "filter": {
          "name": "rk-core",
          "kind": "bin"
        }
      },
      "args": ["think", "Test question"],
      "cwd": "${workspaceFolder}"
    }
  ]
}

Logging

# Enable debug logging
RUST_LOG=debug cargo run -- think "question"

# Trace level for maximum detail
RUST_LOG=trace cargo run -- think "question"

Common Issues

“API key not found”

# Verify key is set
echo $ANTHROPIC_API_KEY

# Or use .env file
cp .env.example .env
# Edit .env with your key

Build failures

# Update Rust
rustup update

# Clean and rebuild
cargo clean && cargo build

# Update dependencies
cargo update

Tests failing

# Run with verbose output
cargo test -- --nocapture

# Check if API key is valid
rk-core provider test anthropic

Next Steps

Code Style

🎨 Coding standards and style guidelines for ReasonKit contributors.

ReasonKit is written in Rust and follows strict code quality standards. This guide helps you write code that fits seamlessly into the codebase.

Core Philosophy

  1. Clarity over cleverness — Readable code wins
  2. Explicit over implicit — Don’t hide behavior
  3. Fail fast, fail loud — No silent failures
  4. Performance matters — But not at the cost of correctness

Rust Style Guide

Formatting

We use rustfmt with project-specific settings. Always run before committing:

cargo fmt

Configuration (.rustfmt.toml):

edition = "2021"
max_width = 100
tab_spaces = 4
use_small_heuristics = "Default"

Naming Conventions

Item              | Convention      | Example
Types/Traits      | PascalCase      | ThinkTool, ReasoningProfile
Functions/Methods | snake_case      | run_analysis(), get_config()
Variables         | snake_case      | user_input, analysis_result
Constants         | SCREAMING_SNAKE | DEFAULT_TIMEOUT, MAX_RETRIES
Modules           | snake_case      | thinktool, retrieval
Feature flags     | kebab-case      | embeddings-local

Error Handling

Use the crate’s error types consistently:

use crate::error::{ReasonKitError, Result};

// Good: Use ? operator with context
fn process_input(input: &str) -> Result<Analysis> {
    let parsed = parse_input(input)
        .map_err(|e| ReasonKitError::Parse(format!("Invalid input: {}", e)))?;

    analyze(parsed)
}

// Bad: Unwrap in library code
fn process_input_bad(input: &str) -> Analysis {
    parse_input(input).unwrap()  // Don't do this!
}

Documentation

Every public item must have documentation:

/// Executes the GigaThink reasoning module.
///
/// Generates multiple perspectives on a problem by exploring
/// it from different viewpoints, stakeholders, and frames.
///
/// # Arguments
///
/// * `input` - The question or problem to analyze
/// * `config` - GigaThink configuration options
///
/// # Returns
///
/// A `GigaThinkResult` containing all generated perspectives
/// and a synthesis of the analysis.
///
/// # Errors
///
/// Returns `ReasonKitError::Provider` if the LLM call fails.
///
/// # Example
///
/// ```rust
/// use reasonkit::thinktool::{gigathink, GigaThinkConfig};
///
/// let config = GigaThinkConfig::default();
/// let result = gigathink("Should I switch jobs?", &config)?;
/// println!("Found {} perspectives", result.perspectives.len());
/// ```
pub fn gigathink(input: &str, config: &GigaThinkConfig) -> Result<GigaThinkResult> {
    // implementation
}

Module Organization

// mod.rs structure
//
// 1. Module documentation
// 2. Re-exports (pub use)
// 3. Public types
// 4. Private types
// 5. Public functions
// 6. Private functions
// 7. Tests

//! ThinkTool execution module.
//!
//! This module provides the core reasoning tools that power ReasonKit.

pub use self::executor::Executor;
pub use self::profiles::{Profile, ProfileConfig};

mod executor;
mod profiles;
mod registry;

/// Main entry point for ThinkTool execution.
pub fn run(input: &str, profile: &Profile) -> Result<Analysis> {
    let executor = Executor::new(profile)?;
    executor.run(input)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_run_with_default_profile() {
        // test implementation
    }
}

Imports

Organize imports in this order:

// 1. Standard library
use std::collections::HashMap;
use std::path::PathBuf;

// 2. External crates
use serde::{Deserialize, Serialize};
use tokio::sync::mpsc;

// 3. Internal crates (workspace members)
use reasonkit_db::VectorStore;

// 4. Crate modules
use crate::error::Result;
use crate::thinktool::Profile;

// 5. Super/self
use super::Config;

Async Code

ReasonKit uses Tokio for async operations:

// Good: Use async properly
pub async fn call_llm(prompt: &str) -> Result<String> {
    let client = Client::new();
    let response = client
        .post(&api_url)
        .json(&request)
        .send()
        .await
        .map_err(|e| ReasonKitError::Provider(e.to_string()))?;

    response.text().await
        .map_err(|e| ReasonKitError::Parse(e.to_string()))
}

// Good: Spawn tasks when parallelism helps
pub async fn run_tools_parallel(
    input: &str,
    tools: &[Tool],
) -> Result<Vec<ToolResult>> {
    let handles: Vec<_> = tools
        .iter()
        .map(|tool| {
            let input = input.to_string();
            let tool = tool.clone();
            tokio::spawn(async move { tool.run(&input).await })
        })
        .collect();

    // JoinError (a panicked or cancelled task) becomes an internal error;
    // collect() then surfaces the first tool error, if any
    let joined = futures::future::try_join_all(handles)
        .await
        .map_err(|e| ReasonKitError::Internal(e.to_string()))?;
    joined.into_iter().collect()
}

Linting

All code must pass Clippy with no warnings:

cargo clippy -- -D warnings

Common Clippy fixes:

// Bad: Unnecessary clone
let s = some_string.clone();
do_something(&s);

// Good: Borrow instead
do_something(&some_string);

// Bad: Redundant pattern matching
match result {
    Ok(v) => Some(v),
    Err(_) => None,
}

// Good: Use .ok()
result.ok()

Performance Guidelines

Avoid Allocations in Hot Paths

// Bad: Allocates on every call
fn format_error(code: u32) -> String {
    format!("Error code: {}", code)
}

// Good: Return static str when possible
fn error_message(code: u32) -> &'static str {
    match code {
        1 => "Invalid input",
        2 => "Timeout",
        _ => "Unknown error",
    }
}

Use Iterators Over Vectors

// Bad: Creates intermediate vector
let results: Vec<_> = items.iter()
    .filter(|x| x.is_valid())
    .collect();
let sum: u32 = results.iter().map(|x| x.value).sum();

// Good: Chain iterator operations
let sum: u32 = items.iter()
    .filter(|x| x.is_valid())
    .map(|x| x.value)
    .sum();

Testing Requirements

See Testing Guide for full details. Quick summary:

  • Unit tests for all public functions
  • Integration tests for cross-module behavior
  • Benchmarks for performance-critical code

Pre-Commit Checklist

Before every commit:

# Format code
cargo fmt

# Run linter
cargo clippy -- -D warnings

# Run tests
cargo test

# Check docs compile
cargo doc --no-deps

Testing

🧪 How to write and run tests for ReasonKit.

Testing is essential for maintaining quality. ReasonKit uses Rust’s built-in testing framework with additional tooling for benchmarks and integration tests.

Test Types

Type        | Location     | Purpose                   | Run Command
Unit        | src/**/*.rs  | Test individual functions | cargo test
Integration | tests/*.rs   | Test module interactions  | cargo test --test '*'
Doc tests   | Doc comments | Ensure examples work      | cargo test --doc
Benchmarks  | benches/*.rs | Performance regression    | cargo bench

Running Tests

All Tests

# Run all tests
cargo test

# Run with output (see println! in tests)
cargo test -- --nocapture

# Run in release mode (faster, catches different bugs)
cargo test --release

Specific Tests

# Run tests matching a name
cargo test gigathink

# Run tests in a specific module
cargo test thinktool::

# Run a single test
cargo test test_gigathink_default_config

# Run ignored tests (slow/expensive)
cargo test -- --ignored

Test Features

# Run with all features
cargo test --all-features

# Run with specific feature
cargo test --features embeddings-local

Writing Unit Tests

Basic Structure

// In src/thinktool/gigathink.rs

pub fn count_perspectives(config: &Config) -> usize {
    config.perspectives.unwrap_or(10)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_count_perspectives_default() {
        let config = Config::default();
        assert_eq!(count_perspectives(&config), 10);
    }

    #[test]
    fn test_count_perspectives_custom() {
        let config = Config {
            perspectives: Some(15),
            ..Default::default()
        };
        assert_eq!(count_perspectives(&config), 15);
    }
}

Testing Errors

#[test]
fn test_invalid_input_returns_error() {
    let result = parse_input("");
    assert!(result.is_err());

    // Check error type
    let err = result.unwrap_err();
    assert!(matches!(err, ReasonKitError::Parse(_)));
}

#[test]
#[should_panic(expected = "cannot be empty")]
fn test_panics_on_empty() {
    validate_required("");  // Should panic
}

Testing Async Code

use tokio;

#[tokio::test]
async fn test_async_llm_call() {
    let client = MockClient::new();
    let result = call_llm(&client, "test prompt").await;
    assert!(result.is_ok());
}

#[tokio::test]
async fn test_timeout_handling() {
    let client = SlowMockClient::new(Duration::from_secs(10));
    let result = tokio::time::timeout(
        Duration::from_secs(1),
        call_llm(&client, "test"),
    ).await;

    assert!(result.is_err());  // Should timeout
}

Test Fixtures

// In tests/common/mod.rs
pub fn sample_config() -> Config {
    Config {
        profile: Profile::Balanced,
        provider: Provider::Mock,
        timeout: Duration::from_secs(30),
    }
}

pub fn sample_input() -> &'static str {
    "Should I accept this job offer with 20% higher salary?"
}

// In tests/integration_test.rs
mod common;

#[test]
fn test_with_fixtures() {
    let config = common::sample_config();
    let input = common::sample_input();
    // ...
}

Writing Integration Tests

Integration tests go in the tests/ directory:

// tests/thinktool_integration.rs

use reasonkit_core::{run_analysis, Config, Profile};

#[test]
fn test_full_analysis_pipeline() {
    let config = Config {
        profile: Profile::Quick,
        provider: Provider::Mock,
        ..Default::default()
    };

    let result = run_analysis("Test question", &config);

    assert!(result.is_ok());
    let analysis = result.unwrap();
    assert!(!analysis.synthesis.is_empty());
    assert!(analysis.confidence > 0.0);
}

#[test]
fn test_profile_affects_depth() {
    let quick = run_with_profile(Profile::Quick).unwrap();
    let deep = run_with_profile(Profile::Deep).unwrap();

    // Deep should have more perspectives
    assert!(deep.perspectives.len() > quick.perspectives.len());
}

Mocking

Mock LLM Provider

use mockall::{automock, predicate::*};

#[automock]
pub trait LlmProvider {
    async fn complete(&self, prompt: &str) -> Result<String>;
}

#[tokio::test]
async fn test_with_mock_provider() {
    let mut mock = MockLlmProvider::new();
    mock.expect_complete()
        .with(predicate::str::contains("GigaThink"))
        .returning(|_| Ok("Mocked response".to_string()));

    let result = gigathink("test", &mock).await;
    assert!(result.is_ok());
}

Test Doubles

// Simple test double for deterministic testing
pub struct TestProvider {
    responses: HashMap<String, String>,
}

impl TestProvider {
    pub fn new() -> Self {
        Self {
            responses: HashMap::new(),
        }
    }

    pub fn with_response(mut self, contains: &str, response: &str) -> Self {
        self.responses.insert(contains.to_string(), response.to_string());
        self
    }
}

impl LlmProvider for TestProvider {
    async fn complete(&self, prompt: &str) -> Result<String> {
        for (key, value) in &self.responses {
            if prompt.contains(key) {
                return Ok(value.clone());
            }
        }
        Ok("Default response".to_string())
    }
}

Benchmarks

Writing Benchmarks

// benches/thinktool_bench.rs

use criterion::{black_box, criterion_group, criterion_main, Criterion};
use reasonkit_core::thinktool;

fn benchmark_gigathink(c: &mut Criterion) {
    let config = Config::default();
    let input = "Test question for benchmarking";

    c.bench_function("gigathink_default", |b| {
        b.iter(|| {
            thinktool::gigathink(black_box(input), black_box(&config))
        })
    });
}

fn benchmark_profiles(c: &mut Criterion) {
    let mut group = c.benchmark_group("profiles");

    for profile in [Profile::Quick, Profile::Balanced, Profile::Deep] {
        group.bench_function(format!("{:?}", profile), |b| {
            b.iter(|| run_with_profile(black_box(profile)))
        });
    }

    group.finish();
}

criterion_group!(benches, benchmark_gigathink, benchmark_profiles);
criterion_main!(benches);

Running Benchmarks

# Run all benchmarks
cargo bench

# Run specific benchmark
cargo bench gigathink

# Compare against baseline
cargo bench -- --baseline main

# Generate HTML report
cargo bench -- --noplot  # Skip plots if no gnuplot

Test Coverage

Measuring Coverage

# Install coverage tool
cargo install cargo-tarpaulin

# Generate coverage report
cargo tarpaulin --out Html

# Coverage with specific features
cargo tarpaulin --all-features --out Html

Coverage Goals

Component   | Target Coverage
Core logic  | > 80%
Error paths | > 70%
Edge cases  | > 60%
Overall     | > 75%

CI Integration

Tests run automatically on every PR:

# .github/workflows/test.yml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable

      - name: Run tests
        run: cargo test --all-features

      - name: Run clippy
        run: cargo clippy -- -D warnings

      - name: Check formatting
        run: cargo fmt --check

Test Best Practices

Do

  • Test one thing per test
  • Use descriptive test names
  • Test edge cases and error conditions
  • Keep tests fast (< 100ms each)
  • Use fixtures for common setup

Don’t

  • Test private implementation details
  • Rely on test execution order
  • Use sleep() for timing (use mocks)
  • Write flaky tests that sometimes fail
  • Skip writing tests “for now”

Debugging Tests

# Run with debug output
RUST_BACKTRACE=1 cargo test -- --nocapture

# Run single test with logging
RUST_LOG=debug cargo test test_name -- --nocapture

# Run test in debugger
rust-gdb target/debug/deps/reasonkit_core-*

Pull Requests

🔀 How to submit code changes to ReasonKit.

We love contributions! This guide walks you through the PR process from start to merge.

Before You Start

1. Check Existing Issues

Before writing code, check if:

  • There’s an existing issue for your change
  • Someone else is already working on it
  • The change aligns with project direction

# Search issues on GitHub
gh issue list --search "your feature"

2. Fork and Clone

# Fork on GitHub, then clone your fork
git clone https://github.com/YOUR-USERNAME/reasonkit-core.git
cd reasonkit-core

# Add upstream remote
git remote add upstream https://github.com/reasonkit/reasonkit-core.git

3. Create a Branch

# Always branch from main
git checkout main
git pull upstream main
git checkout -b your-branch-name

Branch naming:

Type     | Pattern              | Example
Feature  | feat/description     | feat/add-streaming-output
Bug fix  | fix/description      | fix/timeout-handling
Docs     | docs/description     | docs/update-api-reference
Refactor | refactor/description | refactor/thinktool-registry

Making Changes

1. Write Code

Follow the Code Style Guide:

# Format as you go
cargo fmt

# Check for issues
cargo clippy -- -D warnings

2. Write Tests

All changes need tests. See Testing Guide:

# Run tests frequently
cargo test

# Run specific test
cargo test test_name

3. Update Documentation

If your change affects:

  • Public API → Update doc comments
  • CLI behavior → Update docs/
  • Configuration → Update docs/

4. Commit Changes

We follow Conventional Commits:

# Format: type(scope): description
git commit -m "feat(thinktool): add streaming support for GigaThink"
git commit -m "fix(cli): handle timeout correctly in quiet mode"
git commit -m "docs(api): document new output format options"

Commit types:

Type     | When to Use
feat     | New feature
fix      | Bug fix
docs     | Documentation only
refactor | Code change that neither fixes nor adds
test     | Adding/updating tests
perf     | Performance improvement
chore    | Build, CI, dependencies

Submitting the PR

1. Push Your Branch

git push origin your-branch-name

2. Create the PR

# Using GitHub CLI
gh pr create --title "feat(thinktool): add streaming support" --body-file .github/PULL_REQUEST_TEMPLATE.md

# Or use GitHub web interface

3. PR Template

Every PR should include:

## Summary
Brief description of what this PR does.

## Changes
- [ ] Added streaming support to GigaThink
- [ ] Updated CLI to handle streaming output
- [ ] Added tests for streaming behavior

## Testing
How did you test this?
- `cargo test thinktool::streaming`
- Manual testing with `rk-core think "test" --stream`

## Screenshots (if applicable)
[Add terminal screenshots for UI changes]

## Checklist
- [ ] Code follows project style guidelines
- [ ] Tests pass locally (`cargo test`)
- [ ] Linting passes (`cargo clippy -- -D warnings`)
- [ ] Documentation updated (if needed)
- [ ] Commit messages follow conventional commits

Review Process

What to Expect

  1. Automated Checks — CI runs tests, linting, formatting
  2. Maintainer Review — Usually within 48 hours
  3. Feedback — May request changes
  4. Approval — At least one maintainer approval needed
  5. Merge — Squash-merged to main

Responding to Feedback

# Make requested changes
git add .
git commit -m "refactor: address review feedback"
git push origin your-branch-name

For substantial changes, consider force-pushing a cleaner history:

# Rebase to clean up commits
git rebase -i HEAD~3  # Squash last 3 commits
git push --force-with-lease origin your-branch-name

CI Requirements

All PRs must pass:

Check   | Command                     | Requirement
Build   | cargo build --release       | Must compile
Tests   | cargo test                  | All tests pass
Linting | cargo clippy -- -D warnings | No warnings
Format  | cargo fmt --check           | Properly formatted
Docs    | cargo doc --no-deps         | Docs compile

After Merge

Your PR gets squash-merged to main. After merge:

# Update your local main
git checkout main
git pull upstream main

# Clean up your branch
git branch -d your-branch-name
git push origin --delete your-branch-name

PR Size Guidelines

Size | Lines Changed | Review Time
XS   | < 50          | Same day
S    | 50-200        | 1-2 days
M    | 200-500       | 2-3 days
L    | 500-1000      | 3-5 days
XL   | > 1000        | Consider splitting

Tip: Smaller PRs get reviewed faster and merged sooner.

Special Cases

Breaking Changes

PRs with breaking changes need:

  • BREAKING CHANGE: in commit body
  • Migration guide in PR description
  • Explicit maintainer approval

Security Fixes

For security issues:

  1. Don’t open a public PR
  2. Email security@reasonkit.sh
  3. We’ll coordinate a fix and disclosure

Dependencies

For dependency updates:

  • Use cargo update for minor/patch updates
  • Create separate PR for major version bumps
  • Include changelog review in PR description

Getting Help

Stuck? Need guidance?

  • Ask in the PR comments
  • Join our Discord
  • Check existing PRs for examples

Frequently Asked Questions

General

How is this different from just asking ChatGPT to “think step by step”?

“Think step by step” is a hint. ReasonKit is a process.

Each ThinkTool has a specific job:

  • GigaThink forces 10+ perspectives
  • LaserLogic checks for logical fallacies
  • ProofGuard triangulates sources

You see exactly what each step caught. It’s structured, auditable reasoning—not just “try harder.”

Does this actually make AI smarter?

Honest answer: No.

ReasonKit doesn’t make LLMs smarter—it makes them show their work. The value is:

  • Structured output (not a wall of text)
  • Auditability (see what each tool caught)
  • Catching blind spots (five tools for five types of oversight)

Run the benchmarks yourself to verify.

Who actually uses this?

Anyone making decisions they want to think through properly:

  • Job offers and career changes
  • Major purchases
  • Business strategies
  • Life decisions

Also professionals in due diligence, compliance, and research.

Can I use my own LLM?

Yes. ReasonKit works with:

  • Anthropic Claude
  • OpenAI GPT-4
  • Google Gemini
  • Mistral
  • Groq
  • 300+ models via OpenRouter
  • Local models via Ollama

You bring your own API key.

Technical

What models work best?

Recommended:

  • Anthropic Claude Opus 4 / Sonnet 4 (best reasoning)
  • GPT-4o (good balance)
  • Claude Haiku 3.5 (fast, cheap, decent)

Good alternatives:

  • Gemini 2.0 Flash
  • Mistral Large
  • Llama 3.3 70B
  • DeepSeek V3

Not recommended:

  • Small models (<7B parameters)
  • Models without good instruction following

How much does it cost to run?

Depends on your profile and provider:

| Profile  | ~Tokens | Claude Cost | GPT-4 Cost |
|----------|---------|-------------|------------|
| Quick    | 2K      | ~$0.02      | ~$0.06     |
| Balanced | 5K      | ~$0.05      | ~$0.15     |
| Deep     | 15K     | ~$0.15      | ~$0.45     |
| Paranoid | 40K     | ~$0.40      | ~$1.20     |

Local models (Ollama) are free but slower.
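
For scale: ten Balanced runs a day on Claude works out to about 10 × $0.05 = $0.50 per day, or roughly $15 a month.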

Can I run it offline?

Yes, with local models:

```bash
ollama serve        # In one terminal (or as a background service)
ollama pull llama3  # Fetch the model once
rk-core think "question" --provider ollama --model llama3
```

Performance won’t match cloud models but works for privacy-sensitive use.

Is my data sent anywhere?

Only to your chosen LLM provider. ReasonKit itself:

  • Doesn’t collect telemetry
  • Doesn’t store your queries
  • Runs entirely locally except for LLM calls

Can I customize the prompts?

Yes. See Custom ThinkTools for details.

You can modify existing tools or create entirely new ones.

Usage

When should I use which profile?

| Decision                           | Profile  | Why                      |
|------------------------------------|----------|--------------------------|
| “Should I buy this $50 thing?”     | Quick    | Low stakes               |
| “Should I take this job?”          | Balanced | Important but reversible |
| “Should I move cities?”            | Deep     | Major life change        |
| “Should I invest my life savings?” | Paranoid | Can’t afford to be wrong |

Can I use just one ThinkTool?

Yes:

```bash
rk-core gigathink "Should I start a business?"
rk-core laserlogic "Renting is throwing money away"
rk-core proofguard "8 glasses of water a day"
```

What questions work best?

Great questions:

  • Decisions with trade-offs (“Should I X or Y?”)
  • Claims to verify (“Is it true that X?”)
  • Plans to stress-test (“I’m going to X”)
  • Complex situations (“How should I think about X?”)

Less suited:

  • Pure factual lookups (“What year was X?”)
  • Math problems
  • Code generation
  • Creative writing

How do I interpret the output?

Focus on:

  1. BrutalHonesty — Usually the most valuable section
  2. LaserLogic flaws — Arguments you might have accepted uncritically
  3. ProofGuard sources — Are claims actually verified?
  4. GigaThink perspectives — Especially ones that make you uncomfortable

Pricing

Is the free tier really free?

Yes. The open source core includes:

  • All 5 ThinkTools
  • PowerCombo
  • All profiles
  • Local execution
  • Apache 2.0 license

You only pay your LLM provider (or use free local models).

What’s in Pro?

Pro ($15/week) adds:

  • Advanced modules (AtomicBreak, HighReflect, etc.)
  • Team collaboration
  • Cloud execution
  • Priority support

What’s in Enterprise?

Enterprise ($45/week) adds:

  • Unlimited usage
  • Custom integrations
  • SLA guarantees
  • On-premise deployment option
  • Dedicated support

Troubleshooting

“API key not found”

Make sure the key is exported:

```bash
export ANTHROPIC_API_KEY="your-key"
echo $ANTHROPIC_API_KEY  # Should print your key
```
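
If the key vanishes whenever you open a new terminal, the export only applied to the old session. Persist it in your shell profile (~/.bashrc here is a bash example; zsh users would use ~/.zshrc):

```bash
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
source ~/.bashrc  # reload the current session
```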

Analysis is slow

Try:

  1. Use the quick profile for faster results
  2. Use a faster model (Claude Haiku 3.5, GPT-4o-mini)
  3. Check your internet connection

Output is too long

Use output options:

```bash
rk-core think "question" --summary-only
rk-core think "question" --max-length 500
```

Model gives poor results

Try:

  1. A better model (Claude Opus 4, GPT-4o)
  2. A more specific question
  3. The deep profile for more thorough prompting (see the combined example below)
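
Combining all three suggestions in one invocation; the model identifier is illustrative, and the flags are the same ones used in the offline example above:

```bash
rk-core think \
  'Given a $140k offer with 0.1% equity at a 50-person startup, should I negotiate salary or equity?' \
  --profile deep --provider anthropic --model claude-opus-4
```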

Contributing

How can I contribute?

See Contributing Guide:

  • Report bugs on GitHub Issues
  • Propose features in Discussions
  • Submit PRs for fixes and features
  • Improve documentation

Can I create custom ThinkTools?

Yes! See Custom ThinkTools.

Share your creations with the community.

Changelog

All notable changes to ReasonKit are documented here.

[Unreleased]

Added

  • HighReflect meta-cognition tool (Pro)
  • RiskRadar risk assessment tool (Pro)
  • Streaming output support
  • Custom profile creation

Changed

  • Improved BrutalHonesty severity levels
  • Better error messages for provider failures

Fixed

  • Timeout handling in parallel execution
  • Cache invalidation on config change

[0.1.0] - 2025-01-15

Added

Core ThinkTools

  • GigaThink - Multi-perspective exploration (5-25 perspectives)
  • LaserLogic - Logical analysis and fallacy detection
  • BedRock - First principles decomposition
  • ProofGuard - Source verification and triangulation
  • BrutalHonesty - Adversarial self-critique
  • PowerCombo - All tools in sequence

Profiles

  • Quick (~10s) - Fast sanity check
  • Balanced (~20s) - Standard analysis
  • Deep (~1min) - Thorough examination
  • Paranoid (~2-3min) - Maximum scrutiny

Providers

  • Anthropic Claude (Claude Opus 4 / Sonnet 4 / Haiku 3.5)
  • OpenAI (GPT-4o, o1)
  • Google Gemini (Gemini 2.0)
  • Groq (fast inference)
  • OpenRouter (300+ models)
  • Ollama (local models)

Output Formats

  • Pretty (terminal with colors)
  • JSON (machine-readable)
  • Markdown (documentation-friendly)

CLI

  • rk-core think - Full analysis
  • rk-core gigathink - Single tool
  • rk-core config - Configuration management
  • rk-core providers - Provider management

Configuration

  • TOML config file support
  • Environment variable overrides
  • CLI flag overrides
  • Custom profiles

Technical

  • Async/await throughout
  • Parallel tool execution option
  • Structured error handling
  • Comprehensive logging

Version History

| Version | Date       | Highlights      |
|---------|------------|-----------------|
| 0.1.0   | 2025-01-15 | Initial release |

Upgrade Guide

From 0.0.x to 0.1.0

This is the first stable release. No migration needed.

Future Upgrades

We follow semantic versioning:

  • Major (1.0.0) - Breaking changes
  • Minor (0.2.0) - New features, backward compatible
  • Patch (0.1.1) - Bug fixes

Roadmap

0.2.0 (Planned)

  • AtomicBreak tool (Pro)
  • DeciDomatic decision matrix (Pro)
  • Webhook integrations
  • VS Code extension

0.3.0 (Planned)

  • Team collaboration features
  • Analysis history and search
  • Custom tool marketplace
  • Mobile companion app

1.0.0 (Planned)

  • Stable API guarantee
  • Enterprise features
  • Self-hosted option
  • SOC 2 compliance

Contributing

See Contributing Guidelines for how to help.

Report bugs at GitHub Issues.