# REST API

HTTP API for web integration and remote access.

## Overview

ReasonKit can run as an HTTP server for web applications and remote access.

```bash
# Start server
rk serve --port 9100
```
## Authentication

Pass your API key in the `Authorization` header:

```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
  http://localhost:9100/api/v1/think
```

Or require authentication in the server configuration:

```toml
[server]
require_auth = true
api_keys = ["key1", "key2"]
```
## Endpoints

### POST /api/v1/think

Run a full PowerCombo analysis.

Request:

```json
{
  "question": "Should I take this job offer?",
  "profile": "balanced",
  "options": {
    "format": "json",
    "include_metadata": true
  }
}
```
Response:

```json
{
  "success": true,
  "data": {
    "question": "Should I take this job offer?",
    "profile": "balanced",
    "results": {
      "gigathink": {
        "perspectives": [...]
      },
      "laserlogic": {
        "flaws": [...]
      },
      "bedrock": {
        "core_question": "...",
        "first_principles": [...]
      },
      "proofguard": {
        "verified": [...],
        "verdict": "..."
      },
      "brutalhonesty": {
        "uncomfortable_truths": [...],
        "questions": [...]
      }
    },
    "synthesis": "...",
    "metadata": {
      "execution_time_ms": 12500,
      "tokens_used": 4892
    }
  }
}
```
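The per-tool results above share a common shape, so a client can walk them generically. A minimal sketch (the helper name `summarizeResults` and the sample data are illustrative, not part of the API):

```javascript
// Walk the "results" object of a /think response and produce one summary
// line per tool, listing the output fields that tool returned.
function summarizeResults(results) {
  return Object.entries(results).map(
    ([tool, output]) => `${tool}: ${Object.keys(output).join(", ")}`
  );
}

// Sample data mirroring the documented response shape
const lines = summarizeResults({
  gigathink: { perspectives: ["a", "b"] },
  laserlogic: { flaws: [] },
});
console.log(lines); // ["gigathink: perspectives", "laserlogic: flaws"]
```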
### POST /api/v1/tools/:tool

Run a specific ThinkTool.

Available tools: `gigathink`, `laserlogic`, `bedrock`, `proofguard`, `brutalhonesty`

Request:

```
POST /api/v1/tools/brutalhonesty
```

```json
{
  "input": "I'm going to start a YouTube channel",
  "options": {
    "severity": "high"
  }
}
```
Response:

```json
{
  "success": true,
  "data": {
    "tool": "brutalhonesty",
    "input": "I'm going to start a YouTube channel",
    "uncomfortable_truths": [...],
    "questions": [...],
    "conditional_advice": [...]
  }
}
```
### GET /api/v1/profiles

List available profiles.

Response:

```json
{
  "profiles": [
    {
      "name": "quick",
      "tools": ["gigathink", "laserlogic"],
      "description": "Fast analysis for low-stakes decisions"
    },
    {
      "name": "balanced",
      "tools": [
        "gigathink",
        "laserlogic",
        "bedrock",
        "proofguard",
        "brutalhonesty"
      ],
      "description": "Standard analysis for most decisions"
    }
  ]
}
```
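Clients can use this listing to resolve a profile name before submitting a request. A small sketch (the helper name `profileTools` is illustrative):

```javascript
// Look up a profile by name in a GET /api/v1/profiles response and
// return its tool list, or null if the profile does not exist.
function profileTools(listing, name) {
  const profile = listing.profiles.find((p) => p.name === name);
  return profile ? profile.tools : null;
}

const listing = {
  profiles: [{ name: "quick", tools: ["gigathink", "laserlogic"] }],
};
console.log(profileTools(listing, "quick")); // ["gigathink", "laserlogic"]
```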
### GET /api/v1/health

Health check endpoint.

Response:

```json
{
  "status": "healthy",
  "version": "0.1.0",
  "uptime_seconds": 3600
}
```
## Streaming

For long-running analyses, use Server-Sent Events:

```bash
curl -N -X POST \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"question": "...", "profile": "deep"}' \
  http://localhost:9100/api/v1/think/stream
```

Events:

```
event: tool_start
data: {"tool": "gigathink"}

event: tool_complete
data: {"tool": "gigathink", "perspectives": [...]}

event: tool_start
data: {"tool": "laserlogic"}

event: tool_complete
data: {"tool": "laserlogic", "flaws": [...]}

...

event: complete
data: {"synthesis": "..."}
```
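Outside the browser's built-in `EventSource`, these messages can be parsed by hand. A minimal sketch, assuming each message is one `event:` line followed by one `data:` line (as in the examples above); the function name `parseSSE` is illustrative:

```javascript
// Parse raw SSE text into {event, data} records. Messages are separated
// by blank lines; "data:" payloads are assumed to be JSON.
function parseSSE(text) {
  const events = [];
  let current = {};
  for (const line of text.split("\n")) {
    if (line.startsWith("event: ")) {
      current.event = line.slice(7);
    } else if (line.startsWith("data: ")) {
      current.data = JSON.parse(line.slice(6));
    } else if (line.trim() === "" && current.event) {
      events.push(current); // blank line ends the message
      current = {};
    }
  }
  if (current.event) events.push(current); // stream ended mid-message
  return events;
}

const raw =
  'event: tool_start\ndata: {"tool": "gigathink"}\n\n' +
  'event: complete\ndata: {"synthesis": "ok"}\n';
console.log(parseSSE(raw).length); // 2
```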
Error Responses
{
"success": false,
"error": {
"code": "INVALID_REQUEST",
"message": "Missing required field: question",
"details": {...}
}
}
Error Codes:
| Code | HTTP Status | Description |
|---|---|---|
INVALID_REQUEST | 400 | Bad request format |
UNAUTHORIZED | 401 | Invalid or missing API key |
RATE_LIMITED | 429 | Too many requests |
INTERNAL_ERROR | 500 | Server error |
PROVIDER_ERROR | 502 | LLM provider error |
TIMEOUT | 504 | Analysis timed out |
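Since every error uses the same envelope, a client can unwrap responses in one place. A sketch assuming the `{success, data, error}` shape documented above (the helper name `unwrap` is illustrative):

```javascript
// Return `data` from a successful response envelope, or throw an Error
// carrying the API error code from the "error" object.
function unwrap(body) {
  if (body.success) return body.data;
  const err = new Error(body.error?.message ?? "Unknown API error");
  err.code = body.error?.code;
  throw err;
}

// Successful /think envelope
const data = unwrap({
  success: true,
  data: { synthesis: "Take the offer." },
});
console.log(data.synthesis); // "Take the offer."
```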
## Rate Limiting

Default limits:

| Tier | Requests/minute | Concurrent |
|---|---|---|
| Free | 10 | 2 |
| MCP (Pro) | 60 | 10 |
| Enterprise | Unlimited | 100 |

Rate limit headers:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1642000000
```
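A client hitting `RATE_LIMITED` can use these headers to decide how long to back off. A sketch assuming (as is common, though not stated above) that `X-RateLimit-Reset` is a Unix timestamp in seconds; the helper name `retryDelayMs` is illustrative:

```javascript
// Milliseconds to wait before retrying, given lowercased response headers.
// Returns 0 while requests remain in the current window.
function retryDelayMs(headers, nowMs = Date.now()) {
  const remaining = Number(headers["x-ratelimit-remaining"]);
  if (remaining > 0) return 0; // still under the limit
  const resetMs = Number(headers["x-ratelimit-reset"]) * 1000;
  return Math.max(0, resetMs - nowMs);
}

// 15 requests left in the window: no wait needed
console.log(retryDelayMs({ "x-ratelimit-remaining": "15" })); // 0
// Exhausted, window resets 30 s in the future
console.log(
  retryDelayMs(
    { "x-ratelimit-remaining": "0", "x-ratelimit-reset": "1642000030" },
    1642000000000
  )
); // 30000
```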
## JavaScript Client

```javascript
const API_KEY = process.env.RK_API_KEY; // however you store your key

// Using fetch
async function analyze(question, profile = "balanced") {
  const response = await fetch("http://localhost:9100/api/v1/think", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ question, profile }),
  });

  if (!response.ok) {
    throw new Error(`Analysis failed: ${response.statusText}`);
  }

  return response.json();
}

// Usage
const result = await analyze("Should I take this job?");
console.log(result.data.synthesis);
```
### With Streaming

```javascript
async function* analyzeStream(question, profile = "balanced") {
  const response = await fetch("http://localhost:9100/api/v1/think/stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
      Accept: "text/event-stream",
    },
    body: JSON.stringify({ question, profile }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = ""; // carries a partial line across chunk boundaries

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the incomplete trailing line for next chunk

    for (const line of lines) {
      if (line.startsWith("data: ")) {
        yield JSON.parse(line.slice(6));
      }
    }
  }
}

// Usage
for await (const event of analyzeStream("Should I take this job?")) {
  console.log(event);
}
```
## Server Configuration

```toml
# ~/.config/reasonkit/server.toml
[server]
host = "0.0.0.0"
port = 9100
workers = 4

[server.auth]
require_auth = true
api_keys = ["key1", "key2"]

[server.rate_limit]
enabled = true
requests_per_minute = 60

[server.cors]
allowed_origins = ["https://yourdomain.com"]
allowed_methods = ["GET", "POST"]

[server.tls]
enabled = false
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
```
## Docker Deployment

```dockerfile
FROM rust:1.75-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release --bin rk-server

FROM debian:bookworm-slim
COPY --from=builder /app/target/release/rk-server /usr/local/bin/
EXPOSE 9100
CMD ["rk-server", "--port", "9100"]
```

```yaml
# docker-compose.yml
version: "3.8"
services:
  reasonkit:
    build: .
    ports:
      - "9100:9100"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    restart: unless-stopped
```