# DeepSeek V4 Pro

Long-context reasoning model for coding agents and high-volume workflows.
Tags: LLM, reasoning, coding, tools, long context

- Model ID: `deepseek/deepseek-v4-pro`
- Status: available
- Context: 1M tokens
- Max output: 64K tokens
## Pricing

| Type | Price per 1M tokens |
| --- | --- |
| Input | $1.91 |
| Output | $3.83 |
| Cached input | $0.190 |
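The per-token prices above can be turned into a per-request cost estimate. A minimal sketch in Python; the token counts are hypothetical and the function name is ours, not part of the Tokmux API:

```python
# Prices in dollars per 1M tokens, taken from the pricing table above.
PRICE_INPUT = 1.91
PRICE_CACHED = 0.190
PRICE_OUTPUT = 3.83

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request from its token counts."""
    return (input_tokens * PRICE_INPUT
            + cached_tokens * PRICE_CACHED
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# Hypothetical workload: 800K fresh input, 200K cached input, 50K output.
print(round(request_cost(800_000, 200_000, 50_000), 4))  # → 1.7575
```

Cached input is roughly a tenth of the regular input price, so routing repeated prompt prefixes through the cache dominates savings on high-volume workloads.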
## Benchmarks

Static frontend preview data.

| Metric | Value | Notes |
| --- | --- | --- |
| Cost index | 0.18x | vs frontier blend |
| p95 latency | 2.0s | streaming start |
| Agent tasks | 91.2% | workflow solve rate |
## API example

Use the Tokmux base URL with your key:

```shell
curl https://api.tokmux.ai/v1/chat/completions \
  -H "Authorization: Bearer $TOKMUX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-v4-pro",
    "messages": [
      {"role": "user", "content": "Give me a launch checklist."}
    ]
  }'
```
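The same request can be built programmatically. A minimal sketch using only the Python standard library; it assumes the endpoint accepts the OpenAI-style chat-completions payload shown in the curl example, and the `build_request` helper is ours, not part of any Tokmux SDK:

```python
import json
import urllib.request

# Base URL and model ID from the curl example above.
API_URL = "https://api.tokmux.ai/v1/chat/completions"
MODEL_ID = "deepseek/deepseek-v4-pro"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a ready-to-send POST request for the chat completions endpoint."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-example", "Give me a launch checklist.")
print(req.get_full_url())
# Send with urllib.request.urlopen(req) — that step needs network access and a real key.
```

Keeping the key in an environment variable (as the curl example does with `$TOKMUX_KEY`) rather than hard-coding it is the safer pattern for real use.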