# Frequently Asked Questions

## General

### How is this different from just asking ChatGPT to “think step by step”?
“Think step by step” is a hint. ReasonKit is a process.
Each ThinkTool has a specific job:
- GigaThink forces 10+ perspectives
- LaserLogic checks for logical fallacies
- ProofGuard triangulates sources
You see exactly what each step caught. It’s structured, auditable reasoning—not just “try harder.”
### Does this actually make AI smarter?
Honest answer: No.
ReasonKit doesn’t make LLMs smarter—it makes them show their work. The value is:
- Structured output (not a wall of text)
- Auditability (see what each tool caught)
- Catching blind spots (five tools for five types of oversight)
Run the benchmarks yourself to verify.
### Who actually uses this?
Anyone making decisions they want to think through properly:
- Job offers and career changes
- Major purchases
- Business strategies
- Life decisions
Also professionals in due diligence, compliance, and research.
### Can I use my own LLM?
Yes. ReasonKit works with:
- Anthropic Claude
- OpenAI GPT-4
- Google Gemini
- Mistral
- Groq
- 300+ models via OpenRouter
- Local models via Ollama
You bring your own API key.
## Technical

### What models work best?
Recommended:
- Anthropic Claude Opus 4 / Sonnet 4 (best reasoning)
- GPT-4o (good balance)
- Claude Haiku 3.5 (fast, cheap, decent)
Good alternatives:
- Gemini 2.0 Flash
- Mistral Large
- Llama 3.3 70B
- DeepSeek V3
Not recommended:
- Small models (<7B parameters)
- Models without good instruction following
### How much does it cost to run?

Depends on your profile and provider:

| Profile | ~Tokens | Claude Cost | GPT-4 Cost |
|---|---|---|---|
| Quick | 2K | ~$0.02 | ~$0.06 |
| Balanced | 5K | ~$0.05 | ~$0.15 |
| Deep | 15K | ~$0.15 | ~$0.45 |
| Paranoid | 40K | ~$0.40 | ~$1.20 |
Local models (Ollama) are free but slower.
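The table's figures work out to roughly $0.01 per 1K tokens for Claude and $0.03 per 1K for GPT-4. For budgeting, that arithmetic can be sketched as below; the rates and the `estimate_cost` helper are illustrative assumptions inferred from the table, not ReasonKit code or official provider pricing.

```python
# Back-of-envelope cost estimator for one ReasonKit run.
# NOTE: the per-1K-token rates are assumptions inferred from the table
# above, not official provider pricing -- check your provider's price page.
RATE_PER_1K_USD = {"claude": 0.01, "gpt-4": 0.03}

PROFILE_TOKENS = {
    "quick": 2_000,
    "balanced": 5_000,
    "deep": 15_000,
    "paranoid": 40_000,
}

def estimate_cost(profile: str, provider: str) -> float:
    """Approximate USD cost of one run: tokens / 1K * rate per 1K."""
    tokens = PROFILE_TOKENS[profile]
    return round(tokens / 1_000 * RATE_PER_1K_USD[provider], 2)

print(estimate_cost("deep", "claude"))     # ~0.15
print(estimate_cost("paranoid", "gpt-4"))  # ~1.20
```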
### Can I run it offline?

Yes, with local models:

```bash
ollama serve
rk-core think "question" --provider ollama --model llama3
```
Performance won’t match cloud models but works for privacy-sensitive use.
### Is my data sent anywhere?
Only to your chosen LLM provider. ReasonKit itself:
- Doesn’t collect telemetry
- Doesn’t store your queries
- Runs entirely locally except for LLM calls
### Can I customize the prompts?
Yes. See Custom ThinkTools for details.
You can modify existing tools or create entirely new ones.
## Usage

### When should I use which profile?

| Decision | Profile | Why |
|---|---|---|
| “Should I buy this $50 thing?” | Quick | Low stakes |
| “Should I take this job?” | Balanced | Important but reversible |
| “Should I move cities?” | Deep | Major life change |
| “Should I invest my life savings?” | Paranoid | Can’t afford to be wrong |
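If you script ReasonKit from a wrapper, the rubric above can be encoded as a small decision function. Everything here, including `pick_profile` and the stakes labels, is a hypothetical illustration and not part of the ReasonKit CLI or API:

```python
# Illustrative (hypothetical) helper encoding the profile rubric:
# low stakes -> quick; can't-afford-to-be-wrong -> paranoid;
# otherwise balanced if the decision is reversible, deep if it is not.
def pick_profile(stakes: str, reversible: bool) -> str:
    """stakes is one of 'low', 'high', or 'critical'."""
    if stakes == "low":
        return "quick"
    if stakes == "critical":
        return "paranoid"
    return "balanced" if reversible else "deep"

print(pick_profile("high", reversible=False))  # a major life change
```

A wrapper script could then map the returned name onto the matching profile flag when invoking `rk-core`.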
### Can I use just one ThinkTool?

Yes:

```bash
rk-core gigathink "Should I start a business?"
rk-core laserlogic "Renting is throwing money away"
rk-core proofguard "8 glasses of water a day"
```
### What questions work best?
Great questions:
- Decisions with trade-offs (“Should I X or Y?”)
- Claims to verify (“Is it true that X?”)
- Plans to stress-test (“I’m going to X”)
- Complex situations (“How should I think about X?”)
Less suited:
- Pure factual lookups (“What year was X?”)
- Math problems
- Code generation
- Creative writing
### How do I interpret the output?
Focus on:
- BrutalHonesty — Usually the most valuable section
- LaserLogic flaws — Arguments you might have accepted uncritically
- ProofGuard sources — Are claims actually verified?
- GigaThink perspectives — Especially ones that make you uncomfortable
## Pricing

### Is the free tier really free?
Yes. The open source core includes:
- All 5 ThinkTools
- PowerCombo
- All profiles
- Local execution
- Apache 2.0 license
You only pay your LLM provider (or use free local models).
### What’s in Pro?
Pro ($15/week) adds:
- Advanced modules (AtomicBreak, HighReflect, etc.)
- Team collaboration
- Cloud execution
- Priority support
### What’s in Enterprise?
Enterprise ($45/week) adds:
- Unlimited usage
- Custom integrations
- SLA guarantees
- On-premise deployment option
- Dedicated support
## Troubleshooting

### “API key not found”

Make sure the key is exported:

```bash
export ANTHROPIC_API_KEY="your-key"
echo $ANTHROPIC_API_KEY  # Should print your key
```
### Analysis is slow

Try:

- Use the `--quick` profile for faster results
- Use a faster model (Claude Haiku 3.5, GPT-4o-mini)
- Check your internet connection
### Output is too long

Use output options:

```bash
rk-core think "question" --summary-only
rk-core think "question" --max-length 500
```
### Model gives poor results

Try:

- A better model (Claude Opus 4, GPT-4o)
- A more specific question
- The `--deep` profile for more thorough prompting
## Contributing

### How can I contribute?
See Contributing Guide:
- Report bugs on GitHub Issues
- Propose features in Discussions
- Submit PRs for fixes and features
- Improve documentation
### Can I create custom ThinkTools?
Yes! See Custom ThinkTools.
Share your creations with the community.