API Documentation

Unified access to world-class AI models. OpenAI-compatible API — one key for all models.

API Base URL

https://tokenhub.store/api/v1

All API requests use this base URL. Fully compatible with OpenAI SDKs.

Quick Start

Get Started in Minutes

  1. Create an account and log in to your Dashboard
  2. Go to API Keys and create a new API key
  3. Visit Billing to add credits to your account
  4. Start making API calls with any supported model

Supported Models

Click on a provider to view available models and code examples.

OpenAI

6 models

Leading AI research company, creators of the GPT series. Known for state-of-the-art language models with exceptional reasoning and coding capabilities.

Anthropic

6 models

AI safety company known for the Claude models, which excel at nuanced conversation, code, and complex reasoning with strong safety features.

Google

5 models

Google's Gemini family offers cutting-edge multimodal capabilities with industry-leading context windows up to 2M tokens.

xAI

3 models

Elon Musk's AI company. Grok models are known for real-time knowledge, witty responses, and excellent coding assistance.

Alibaba (Qwen)

3 models

Alibaba's Qwen series offers powerful multilingual models with strong performance in both English and Chinese tasks.

Zhipu AI (GLM)

3 models

Zhipu AI's GLM series. Advanced Chinese-English bilingual models with up to 200K context window.

MiniMax

2 models

MiniMax's M2 series. Ultra-long 205K context with strong multilingual capabilities.

Xiaomi (MiMo)

2 models

Xiaomi's MiMo V2 models. Large context windows up to 1M tokens.

Alibaba (Qwen3)

3 models

Alibaba's Qwen3 series. Massive MoE models with vision-language and coding specializations.

Moonshot AI (Kimi)

3 models

Moonshot AI's Kimi K2 series. Strong reasoning with extended context windows.

DeepSeek

6 models

DeepSeek's latest models. Highly cost-effective with strong reasoning and coding capabilities.

ByteDance (Seedance)

2 models

ByteDance's Seedance video generation models. Cinema-quality text-to-video, image-to-video, and multi-modal video editing with audio support.

ByteDance (Seedream)

3 models

ByteDance's Seedream image generation models. High-quality text-to-image with photorealistic and artistic styles.

Kwaivgi (Kling)

2 models

Kwaivgi's Kling video generation models. High-quality text-to-video and image-to-video with standard (std) and professional (pro) modes. Durations: 3–15s. Pricing: std $0.168/s, pro $0.224/s (audio not supported).
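Because Kling is billed per second of output, the cost of a clip is just rate × duration. A small illustrative helper (the function and constant names are ours; the rates come from the entry above):

```python
KLING_RATES = {"std": 0.168, "pro": 0.224}  # USD per second of generated video

def kling_cost(mode, seconds):
    """Estimated charge for a Kling generation of the given mode and duration."""
    if mode not in KLING_RATES:
        raise ValueError(f"unknown mode: {mode!r}")
    if not 3 <= seconds <= 15:
        raise ValueError("Kling supports durations of 3-15 seconds")
    return round(KLING_RATES[mode] * seconds, 3)

# A 5-second std clip costs 0.168 * 5 = $0.84
```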

API Endpoints

POST /chat/completions

Create a chat completion with streaming support

Headers

headers
Authorization: Bearer th-your-api-key
Content-Type: application/json

Request Body

json
{
  "model": "openai/gpt-4.1",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}

POST /videos/generations

Generate a video from a text or image prompt (Seedance & Wan 2.2 models)

Headers

headers
Authorization: Bearer th-your-api-key
Content-Type: application/json

Request Body

json
{
  "model": "bytedance/doubao-seedance-2.0",
  "prompt": "A golden retriever running on a sunny beach",
  "duration": 5,
  "resolution": "720p",
  "aspect_ratio": "16:9"
}
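The OpenAI SDKs do not expose a videos endpoint, so one option is a plain HTTP POST. A minimal stdlib sketch, assuming the endpoint and fields shown above (the key is a placeholder and the network call is left commented out):

```python
import json
import urllib.request

API_KEY = "th-your-api-key"  # placeholder; use the key from your Dashboard
BASE_URL = "https://tokenhub.store/api/v1"

def video_request(prompt, model="bytedance/doubao-seedance-2.0",
                  duration=5, resolution="720p", aspect_ratio="16:9"):
    """Build the JSON body for POST /videos/generations."""
    return {
        "model": model,
        "prompt": prompt,
        "duration": duration,
        "resolution": resolution,
        "aspect_ratio": aspect_ratio,
    }

def submit_video(body):
    """POST the request; returns the parsed task object (including its "id")."""
    req = urllib.request.Request(
        f"{BASE_URL}/videos/generations",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# task = submit_video(video_request("A golden retriever running on a sunny beach"))
```

The returned task id is then passed to GET /videos/generations/{id} to retrieve the result.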

Request Body (content array — first_frame & last_frame)

json
{
  "model": "bytedance/doubao-seedance-2.0",
  "prompt": "Hand picks a fresh apple from the tree, the scene smoothly transitions to a hand holding an apple smoothie drink, cinematic lighting in an orchard",
  "content": [
    {
      "type": "image_url",
      "image_url": { "url": "https://ark-project.tos-cn-beijing.volces.com/doc_image/r2v_tea_pic1.jpg" },
      "role": "first_frame"
    },
    {
      "type": "image_url",
      "image_url": { "url": "https://ark-project.tos-cn-beijing.volces.com/doc_image/r2v_tea_pic2.jpg" },
      "role": "last_frame"
    }
  ],
  "duration": 5,
  "resolution": "720p"
}

Parameters

content (optional): Array of media assets. Each item has:

  • type: "image_url"
  • image_url.url: URL of the image
  • role: "first_frame" (max 1), "last_frame" (max 1), "ref_image", "ref_video", "ref_audio"
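For the first/last-frame pattern shown above, the content array can be assembled with a small helper (the function name is illustrative, not part of the API):

```python
def frame_content(first_url, last_url=None):
    """Build a `content` array with a first_frame item and an optional last_frame."""
    items = [{
        "type": "image_url",
        "image_url": {"url": first_url},
        "role": "first_frame",
    }]
    if last_url is not None:
        items.append({
            "type": "image_url",
            "image_url": {"url": last_url},
            "role": "last_frame",
        })
    return items
```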

GET /videos/generations/{id}

Poll the status of a video generation task

Headers

headers
Authorization: Bearer th-your-api-key

Response

json
{
  "id": "tsk-xxx",
  "object": "video.generation.task",
  "model": "bytedance/doubao-seedance-2.0",
  "status": "succeeded",  // "queued" | "running" | "succeeded" | "failed"
  "data": [
    {
      "video_url": "https://..."
    }
  ]
}
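Generation is asynchronous: the POST returns a task id, and clients poll this endpoint until the status leaves "queued"/"running". A sketch of such a loop; `fetch_task` stands in for any callable that GETs `/videos/generations/{id}` and returns the parsed JSON:

```python
import time

TERMINAL = {"succeeded", "failed"}

def wait_for_video(fetch_task, task_id, interval=5.0, timeout=600.0):
    """Poll until the task reaches a terminal status or the timeout expires.

    fetch_task(task_id) must return a task object like the one shown above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_task(task_id)
        if task.get("status") in TERMINAL:
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not finished after {timeout}s")
```

On success, the video URL is at task["data"][0]["video_url"]; a "failed" status should be surfaced to the caller rather than retried blindly.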

POST /images/generations

Generate images from text prompts (Seedream models)

Headers

headers
Authorization: Bearer th-your-api-key
Content-Type: application/json

Request Body

json
{
  "model": "bytedance/doubao-seedream-5.0",
  "prompt": "A serene mountain landscape at sunset",
  "n": 1,
  "size": "2048x2048"
}

GET /models

List all available models
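Model ids are prefixed with their provider (e.g. "openai/gpt-4.1"), so the listing is easy to group client-side. A small sketch; with the OpenAI SDK the ids would come from `client.models.list()`:

```python
def models_by_provider(model_ids):
    """Group model ids of the form "provider/name" by their provider prefix."""
    groups = {}
    for model_id in model_ids:
        provider, _, _name = model_id.partition("/")
        groups.setdefault(provider, []).append(model_id)
    return groups

# e.g. with the OpenAI SDK: ids = [m.id for m in client.models.list().data]
```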

Code Examples

cURL

bash
curl https://tokenhub.store/api/v1/chat/completions \
  -H "Authorization: Bearer th-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4.1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Python (OpenAI SDK)

python
from openai import OpenAI

client = OpenAI(
    api_key="th-your-api-key",
    base_url="https://tokenhub.store/api/v1"
)

# Use any supported model
response = client.chat.completions.create(
    model="openai/gpt-5.4",  # Or any other model
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

JavaScript / TypeScript

typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'th-your-api-key',
  baseURL: 'https://tokenhub.store/api/v1',
});

// Use any supported model
const response = await client.chat.completions.create({
  model: 'anthropic/claude-sonnet-4-6',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);

Streaming Example (Python)

python
from openai import OpenAI

client = OpenAI(
    api_key="th-your-api-key",
    base_url="https://tokenhub.store/api/v1"
)

stream = client.chat.completions.create(
    model="google/gemini-2.5-pro",
    messages=[{"role": "user", "content": "Write a short story."}],
    stream=True
)

for chunk in stream:
    # Some chunks (e.g. a final usage chunk) may arrive with no choices
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Need Help?