
meta/llama-3.2-11b-vision-instruct

131K context · $0.24/M input · $0.24/M output

Description

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and...

Quick Start

curl https://router.tangle.tools/v1/chat/completions \
  -H "Authorization: Bearer $TANGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nvidia/meta/llama-3.2-11b-vision-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Modalities

Input: Text, Image
Output: Text
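Because the model accepts image input, a chat request can carry both text and image content parts. The sketch below builds such a request body in Python; it assumes the router exposes an OpenAI-compatible chat completions API with `image_url` content parts, and the image URL is a placeholder:

```python
import json

# Build a multimodal chat request body. Assumes the router accepts
# OpenAI-style "image_url" content parts; the URL below is a placeholder.
payload = {
    "model": "nvidia/meta/llama-3.2-11b-vision-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}

# Serialized body, ready to POST to /v1/chat/completions
# with the Authorization: Bearer header shown in Quick Start.
body = json.dumps(payload)
print(body)
```

POST the resulting `body` to `https://router.tangle.tools/v1/chat/completions` exactly as in the Quick Start curl example, keeping the same `Authorization` and `Content-Type` headers.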

Pricing

Input: $0.24/M
Output: $0.24/M
Context: 131K tokens

Model Info

Provider: nvidia
ID: nvidia/meta/llama-3.2-11b-vision-instruct