
Inception: Mercury 2

128K context · $0.25/M input · $0.75/M output
inception/mercury-2

Description

Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving...

Quick Start

curl https://router.tangle.tools/v1/chat/completions \
  -H "Authorization: Bearer $TANGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "inception/mercury-2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
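The same Quick Start request can be sketched from Python with only the standard library. The endpoint, model ID, and auth header are taken from the curl example above; the request body follows the OpenAI-style chat completions format the endpoint path implies.

```python
# Build the Quick Start request from Python (stdlib only).
# Endpoint and payload shape mirror the curl example above.
import json
import os
import urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "inception/mercury-2",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://router.tangle.tools/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('TANGLE_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello!")
# urllib.request.urlopen(req) sends it; in an OpenAI-style response the
# reply text is under choices[0].message.content of the decoded JSON.
```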

Modalities

Input: Text
Output: Text

Pricing

Input: $0.25/M tokens
Output: $0.75/M tokens
Context: 128K tokens
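The listed rates make per-request cost easy to estimate. A small sketch using the table's figures ($0.25 per million input tokens, $0.75 per million output tokens; the example token counts are illustrative):

```python
# Per-request cost at Mercury 2's listed rates.
INPUT_RATE = 0.25 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.75 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one completion at the listed per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${request_cost(2000, 500):.6f}")  # → $0.000875
```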

Model Info

Provider: inception
ID: inception/mercury-2