duckduckGO-chat-api

repo
Created Jun 2025
Original
Go
Stars
4
Forks
1
Size
100 KB
Last Update
7 months ago

A minimalist Go API for DuckDuckGo AI Chat Complete reverse engineering with streaming support and session management

README.md

🦆 DuckDuckGo AI Chat API


✨ Features

🤖 AI Models

  • 5 AI models available
  • Real-time streaming
  • Model switching
  • Session persistence

🔧 Technical

  • Complete reverse engineering
  • Auto error 418 recovery
  • VQD token management
  • Dynamic headers

🌐 REST API

  • Simple endpoints
  • JSON responses
  • Session management
  • Health monitoring

🤖 Available Models

Model NameIntegration IDAliasStrengthBest ForCharacteristics
GPT-4o minigpt-4o-minigpt-4o-miniGeneral purposeEveryday questions• Fast• Well-balanced
Claude 3 Haikuclaude-3-haiku-20240307claude-3-haikuCreative writingExplanations & summaries• Clear responses• Concise
Llama 3.3 70Bmeta-llama/Llama-3.3-70B-Instruct-TurbollamaProgrammingCode-related tasks• Technical precision• Detailed
Mistral Smallmistralai/Mistral-Small-24B-Instruct-2501mixtralKnowledge & analysisComplex topics• Reasoning• Logic-focused
o4-minio4-minio4miniSpeedQuick answers• Very fast• Compact responses

� Installation

git clone https://github.com/benoitpetit/duckduckGO-chat-cli cd duckduckGO-chat-cli/duckduckGO-chat-api go mod tidy go run .

The API will be available at http://localhost:8080

📖 API Endpoints

🔍 API Health Check

GET /api/v1/health

Response:

{ "status": "ok", "service": "DuckDuckGo Chat API", "version": "1.0.0", "timestamp": "1749828577156" }

🤖 List Available Models

GET /api/v1/models

Response:

{ "models": [ { "id": "gpt-4o-mini", "name": "GPT-4o Mini", "description": "Fast and balanced general purpose model", "alias": "gpt-4o-mini" } ], "success": true, "count": 5 }

💬 Chat (Complete Response)

POST /api/v1/chat Content-Type: application/json { "message": "Hello, how are you?", "model": "gpt-4o-mini", "session_id": "session_1" }

Response:

{ "message": "Hello! I'm doing very well, thank you...", "model": "gpt-4o-mini", "session_id": "session_1", "success": true }

🌊 Streaming Chat

POST /api/v1/chat/stream Content-Type: application/json { "message": "Write me a poem", "model": "claude-3-haiku", "session_id": "session_1" }

Response (Server-Sent Events):

event: chunk
data: {"chunk":"Here","done":false,"session_id":"session_1"}

event: chunk
data: {"chunk":" is","done":false,"session_id":"session_1"}

event: done
data: {"done":true,"session_id":"session_1"}

🧹 Clear Session

DELETE /api/v1/chat/clear?session_id=session_1

Response:

{ "success": true, "message": "Session cleared successfully", "session_id": "session_1" }

🎯 Usage Examples

JavaScript (Fetch API)

// Simple chat const response = await fetch('http://localhost:8080/api/v1/chat', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ message: 'Explain Go programming to me', model: 'llama', session_id: 'my_session' }) }); const data = await response.json(); console.log(data.message);

JavaScript (Streaming)

// Streaming chat const response = await fetch('http://localhost:8080/api/v1/chat/stream', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ message: 'Tell me a story', model: 'claude-3-haiku' }) }); const reader = response.body.getReader(); const decoder = new TextDecoder(); while (true) { const { done, value } = await reader.read(); if (done) break; const chunk = decoder.decode(value); const lines = chunk.split('\n'); for (const line of lines) { if (line.startsWith('data: ')) { const data = JSON.parse(line.slice(6)); if (data.chunk) { console.log(data.chunk); } } } }

cURL

# Simple chat curl -X POST http://localhost:8080/api/v1/chat \ -H "Content-Type: application/json" \ -d '{ "message": "Hello!", "model": "gpt-4o-mini" }' # Streaming curl -X POST http://localhost:8080/api/v1/chat/stream \ -H "Content-Type: application/json" \ -d '{ "message": "Write Python code", "model": "llama" }'

⚙️ Configuration

Environment Variables

# Server port (default: 8080) export PORT=3000 # Debug mode (default: debug) export GIN_MODE=release # DuckDuckGo requests debug export DEBUG=true

Production Deployment

GIN_MODE=release PORT=8080 go run .

🔧 Technical Architecture

DuckDuckGo Reverse Engineering

  • Automatic VQD tokens via /duckchat/v1/status
  • Dynamic headers with authenticated values
  • Complete session cookie management
  • Auto error 418 recovery (98.3% success rate)

Advanced Features

  • Persistent sessions in memory
  • Automatic retry with backoff
  • Real-time streaming with Server-Sent Events
  • Model validation and error handling

📊 Performance

  • Latency: ~200-500ms for first response
  • Throughput: Real-time streaming
  • Error recovery: 98.3% automatic success
  • Memory: ~10-50MB per active session

⚠️ Limitations

  • In-memory sessions: Lost on restart
  • Rate limiting: Respects DuckDuckGo limits
  • Static headers: May need updates

🚨 Troubleshooting

Error 418 (I'm a teapot)

The API automatically handles these errors with retry and token refresh.

Unable to get VQD

# Check connectivity curl -I https://duckduckgo.com/duckchat/v1/status # Debug mode DEBUG=true go run .

Lost sessions

Sessions are stored in memory. Restarting the API clears them.


🔧 Unofficial API based on DuckDuckGo Chat reverse engineering

Repository Topics
#api#duckduckgo#iachatbot