DeepSeek API Guide

DeepSeek V4 on TokenHub: OpenAI-compatible reasoning models

Access DeepSeek's 2026 flagship V4 series (V4-Pro and V4-Flash) through TokenHub's single /chat/completions endpoint. Fully OpenAI-compatible: the official openai SDK works out of the box. Streaming, tool use, and thinking mode with reasoning_content are all supported. 1M context window, 384K max output, per-token billing based on DeepSeek catalog prices.

OpenAI-Compatible · Streaming · Thinking Mode · Tool Use · 1M Context · 384K Output

1. Get an API Key

  1. Sign in at tokenhub.store and register an account (GitHub / Google sign-in supported)
  2. Go to Dashboard → API Keys and click "Create New Key"
  3. Go to Dashboard → Billing to add credits (1 Credit = $1 USD)
  4. Copy the API Key (format: th-xxxxxxxxxxxx...)
⚠️ The API Key is shown only once, at creation time. Store it securely; if you lose it, create a new one.

2. API overview

Base URL

https://tokenhub.store/api/v1

Authentication

Send your API Key in the Authorization header:

Header
Authorization: Bearer th-your-api-key

Endpoint (OpenAI-compatible)

POST
/chat/completions

Chat completion. Same schema as OpenAI /v1/chat/completions, with streaming, tools, JSON mode, and DeepSeek-specific thinking fields.

Works directly with the official openai SDK: just point base_url at TokenHub and use your TokenHub API Key. No other code changes are required.

3. Models and pricing

Prices are per 1 million tokens (USD), based on DeepSeek catalog list prices (no promotional discounts applied). Both the canonical ID and the deepseek/* alias are accepted. Billing uses the completion_tokens figure returned by the upstream (which already includes reasoning_tokens).

| Tier | Model ID | Input | Output | Notes |
| --- | --- | --- | --- | --- |
| V4-Pro | deepseek-v4-pro | $1.80 | $3.60 | Top-tier 2026 flagship. Best reasoning and coding quality. |
| V4-Flash | deepseek-v4-flash | $0.15 | $0.30 | Highly economical flagship, roughly 12× cheaper than Pro; an excellent default choice for production. |
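
As a quick sanity check on the table above, here is a minimal cost-estimation sketch. The prices are hard-coded from the table, and the helper name is ours, not part of the API:

```python
# Per-1M-token USD prices, copied from the pricing table above.
PRICES = {
    "deepseek-v4-pro":   {"input": 1.80, "output": 3.60},
    "deepseek-v4-flash": {"input": 0.15, "output": 0.30},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost; completion_tokens already includes reasoning tokens."""
    p = PRICES[model.removeprefix("deepseek/")]  # accept the deepseek/* alias too
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# e.g. 10K prompt + 2K completion tokens on V4-Flash
print(round(estimate_cost("deepseek/deepseek-v4-flash", 10_000, 2_000), 6))  # -> 0.0021
```

Because billing is driven by the upstream usage object, feeding `usage.prompt_tokens` and `usage.completion_tokens` from a real response into this helper should approximate the charge for that call.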

4. Request parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | string | Required | | DeepSeek V4 model ID. For example: "deepseek/deepseek-v4-flash". |
| messages | array | Required | | Chat history. Each element has the shape { role, content }. role is one of system, user, assistant, or tool. |
| max_tokens | integer | Optional | upstream default | Maximum output tokens. If omitted, the DeepSeek upstream default applies (up to 384K). In thinking mode the counter ALSO includes reasoning tokens, so do not set it too low. |
| temperature | number | Optional | 1.0 | Sampling temperature, 0.0–2.0. Lower = more deterministic. DeepSeek recommends 0.0 for code and 1.3 for creative writing. |
| top_p | number | Optional | 1.0 | Nucleus sampling. Use temperature OR top_p, not both together. |
| stream | boolean | Optional | false | If true, returns Server-Sent Events (SSE) deltas. |
| thinking | object | Optional | {type:'enabled'} | DeepSeek-specific. Send { type: 'disabled' } via extra_body to skip the reasoning phase for faster/cheaper responses. Default: enabled. |
| reasoning_effort | string | Optional | medium | Thinking depth: low, medium, or high. Higher = more reasoning tokens, better quality, higher cost. |
| tools | array | Optional | | List of tool/function definitions for tool use (function calling). |
| tool_choice | string or object | Optional | auto | Controls tool selection: auto, none, required, or { type:'function', function:{ name } }. |
| response_format | object | Optional | | JSON mode: { "type": "json_object" } forces the model to return valid JSON. |
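
To see how these parameters fit together, here is a sketch of a request body for a latency-sensitive call: thinking disabled and JSON mode on. This is a plain dict mirroring the table above, not an SDK call, and the message contents are illustrative:

```python
import json

# Request body combining parameters from the table above:
# thinking disabled for a fast call, JSON mode forcing valid JSON output.
payload = {
    "model": "deepseek/deepseek-v4-flash",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List two CAP theorem properties."},
    ],
    "max_tokens": 512,
    "temperature": 0.0,                 # deterministic, per DeepSeek's coding recommendation
    "thinking": {"type": "disabled"},   # skip the reasoning phase entirely
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

With the openai SDK, the non-standard `thinking` key would go through `extra_body` rather than as a top-level keyword argument.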

5. curl example

```bash
curl https://tokenhub.store/api/v1/chat/completions \
  -H "Authorization: Bearer th-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-v4-flash",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Explain CAP theorem in 3 bullets."}
    ],
    "temperature": 0.3
  }'
```
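
With "stream": true added to the body above, the endpoint returns SSE lines of the form `data: {...}`, terminated by `data: [DONE]`. A minimal offline sketch of extracting text deltas from such lines (the sample chunks below are illustrative, not a recorded response):

```python
import json

def extract_deltas(sse_lines):
    """Yield content deltas from OpenAI-style SSE 'data:' lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            yield delta["content"]

# Illustrative sample stream:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(extract_deltas(sample)))  # -> Hello
```

In practice the SDKs below handle this framing for you; hand-parsing is mainly useful when calling the endpoint with a bare HTTP client.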

6. Python example

```python
from openai import OpenAI

client = OpenAI(
    api_key="th-your-api-key",
    base_url="https://tokenhub.store/api/v1",
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-v4-flash",
    temperature=0.3,
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain CAP theorem in 3 bullets."},
    ],
)

msg = resp.choices[0].message
# DeepSeek V4 returns the chain-of-thought in a separate field
print("Thinking:", getattr(msg, "reasoning_content", None))
print("Answer:  ", msg.content)
print("Usage:   ", resp.usage)
```
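
Streaming works the same way with stream=True; in thinking mode the deltas carry reasoning_content before content. A sketch of separating the two, written against plain delta objects so it also works on mocked chunks (the helper name is ours):

```python
def split_stream(chunks):
    """Accumulate reasoning_content and content deltas separately."""
    thinking, answer = [], []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        # reasoning deltas arrive first in thinking mode
        if getattr(delta, "reasoning_content", None):
            thinking.append(delta.reasoning_content)
        if getattr(delta, "content", None):
            answer.append(delta.content)
    return "".join(thinking), "".join(answer)

# With the real SDK you would pass the generator from
# client.chat.completions.create(..., stream=True) straight in:
# thinking, answer = split_stream(stream)
```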

7. JavaScript / Node.js example

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "th-your-api-key",
  baseURL: "https://tokenhub.store/api/v1",
});

const resp = await client.chat.completions.create({
  model: "deepseek/deepseek-v4-flash",
  temperature: 0.3,
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Explain CAP theorem in 3 bullets." },
  ],
});

// reasoning_content is a DeepSeek-specific field not present in the SDK types
const msg: any = resp.choices[0].message;
console.log("Thinking:", msg.reasoning_content);
console.log("Answer:  ", msg.content);
console.log("Usage:   ", resp.usage);
```

8. Thinking Mode in detail

DeepSeek V4 runs a separate reasoning phase before writing the final answer. What you need to know:

  • reasoning_content is returned as a separate field on the assistant message, not inside content. Do not send it back in subsequent turns.
  • completion_tokens in usage already includes reasoning_tokens, and that is what billing is based on. Check completion_tokens_details.reasoning_tokens to see how many tokens went to thinking.
  • Setting max_tokens too low in thinking mode leads to empty content (all tokens are spent on reasoning). Leave it unset, or allow at least 2000+.
  • For latency-sensitive scenarios (chat, classification, simple extraction), disable it via extra_body: { thinking: { type: 'disabled' } }.
  • reasoning_effort: 'low' | 'medium' | 'high' controls how much the model thinks. 'high' gives the best results on math/coding; 'low' is faster.
  • Prompt caching: if you reuse the same system prompt, DeepSeek reports prompt_cache_hit_tokens separately. TokenHub currently bills everything at the single miss-rate tariff (a small premium in exchange for predictable pricing).
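
The first bullet above is easy to get wrong in multi-turn chat. Here is a sketch of appending an assistant turn to the history while dropping reasoning_content; the message shapes are assumed to be plain OpenAI-style dicts and the helper name is ours:

```python
def append_assistant_turn(history, message):
    """Append an assistant reply to the chat history, keeping only
    role/content/tool_calls; reasoning_content must not be resent."""
    turn = {"role": "assistant", "content": message["content"]}
    if message.get("tool_calls"):
        turn["tool_calls"] = message["tool_calls"]
    return history + [turn]

history = [{"role": "user", "content": "Hi"}]
reply = {
    "role": "assistant",
    "content": "Hello!",
    "reasoning_content": "The user greeted me...",  # never goes back upstream
}
history = append_assistant_turn(history, reply)
print(history[-1])  # the appended turn carries no reasoning_content key
```

Building the next request from this cleaned history keeps prompts smaller and avoids feeding the model its own chain-of-thought.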


Ready to get started?

Sign up for TokenHub and start using DeepSeek V4 through the OpenAI-compatible API