Become an Operator

Run a Tangle Blueprint and earn rewards for serving AI inference to the network. Operators set their own pricing and compete on quality.

Revenue Model

80% of inference revenue goes to you

You set pricing

You control per-token rates, set individually for each model you serve

Instant payouts

On-chain settlement, no invoicing
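
The split above is simple arithmetic; here is a tiny illustration. Only the 80% operator share comes from this page, and the token count and rate are made-up example numbers:

```rust
/// Operator payout for a request, given tokens served and a per-token rate.
/// The 0.80 share is the documented operator cut; the rest goes to the network.
fn operator_payout(tokens: u64, price_per_token_tnt: f64) -> f64 {
    const OPERATOR_SHARE: f64 = 0.80;
    tokens as f64 * price_per_token_tnt * OPERATOR_SHARE
}

fn main() {
    // Example: 1M tokens at 0.000002 TNT/token -> 2.0 TNT gross, ~1.6 TNT to you.
    let payout = operator_payout(1_000_000, 0.000_002);
    println!("operator payout: {payout} TNT");
}
```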

Requirements

Hardware

GPU-capable machine (NVIDIA A100/H100 recommended) or a Modal/cloud account for serverless deployments.

Stake

Minimum 10,000 TNT staked on Tangle. Your stake backs your SLA commitment and is slashable for sustained downtime.

Blueprint

Deploy a Tangle Blueprint that serves inference endpoints. Use our vLLM, Modal, or custom Blueprint templates.

Connectivity

A publicly accessible HTTPS endpoint that responds in under 200ms. A health-check endpoint is required at /health.
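
A /health responder is little more than a canned 200. The sketch below is std-only (a real operator would sit behind a proper HTTP stack); it answers a single request on an ephemeral port and then checks itself the way a gateway probe would:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

/// Minimal /health reply: a 200 with a small JSON body.
fn health_response() -> String {
    let body = r#"{"status":"ok"}"#;
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() -> std::io::Result<()> {
    // Bind an ephemeral port and answer one request to demo the handler.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let server = thread::spawn(move || {
        if let Ok((mut stream, _)) = listener.accept() {
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf); // read (and ignore) the request
            let _ = stream.write_all(health_response().as_bytes());
        }
    });

    // Act as the gateway's health check: GET /health and print the status line.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"GET /health HTTP/1.1\r\nHost: localhost\r\n\r\n")?;
    let mut reply = String::new();
    client.read_to_string(&mut reply)?;
    println!("{}", reply.lines().next().unwrap_or(""));
    server.join().unwrap();
    Ok(())
}
```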

Blueprint Templates

vLLM Blueprint

Run open-weight models locally with vLLM. Best for operators with dedicated GPU hardware.

Models: Llama, Mistral, Qwen, DeepSeek, etc.

Modal Blueprint

Serverless GPU inference via Modal. Auto-scales, no hardware management. Great for getting started.

Models: Any model deployable on Modal

API Proxy Blueprint

Proxy requests to existing API providers (OpenAI, Anthropic, Google). Resell access with your own pricing.

Models: Claude, GPT, Gemini, etc.

Custom Blueprint

Build your own Blueprint from the SDK. Full control over inference pipeline, billing, and model serving.

Models: Anything you can serve

How It Works

1. Choose a Blueprint

Pick a Blueprint template based on your infrastructure. Fork the repo and configure your models.

2. Deploy & Test

Build and deploy your operator. Verify health checks pass and inference works on your local setup.

3. Register On-Chain

Stake TNT and register your operator on the Tangle network. Your Blueprint ID and endpoint URL go on-chain.

4. Start Serving

The gateway discovers your operator automatically. Requests start flowing based on your models, pricing, and reputation.
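
The gateway's actual selection logic is not specified on this page, but the idea that traffic flows toward operators that serve the requested model at a good price with good reputation can be sketched. Everything below (the struct fields, the reputation-per-cost score) is illustrative, not the real routing algorithm:

```rust
// Purely illustrative routing sketch: favors operators that serve the
// requested model and have the best reputation relative to their price.

struct Operator {
    name: &'static str,
    models: &'static [&'static str],
    price_per_token: f64, // TNT per token
    reputation: f64,      // 0.0..=1.0
}

/// Pick the operator with the best reputation-per-cost for a model.
fn route<'a>(ops: &'a [Operator], model: &str) -> Option<&'a Operator> {
    ops.iter()
        .filter(|o| o.models.iter().any(|m| *m == model))
        .max_by(|a, b| {
            let score = |o: &Operator| o.reputation / o.price_per_token;
            score(a).partial_cmp(&score(b)).unwrap()
        })
}

fn main() {
    let ops = [
        Operator { name: "op-a", models: &["llama-3"], price_per_token: 2e-6, reputation: 0.9 },
        Operator { name: "op-b", models: &["llama-3"], price_per_token: 1e-6, reputation: 0.5 },
    ];
    if let Some(chosen) = route(&ops, "llama-3") {
        println!("routing to {}", chosen.name);
    }
}
```

Under this toy score, a cheaper operator can win traffic even with lower reputation, which is the competitive dynamic the page describes.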

Quick Start

# Clone the vLLM Blueprint template
git clone https://github.com/tangle-network/vllm-inference-blueprint
cd vllm-inference-blueprint

# Configure your operator
cp operator/config.example.toml operator/config.toml
# Edit config.toml: set model, GPU count, pricing, endpoint URL

# Build the operator
cargo build --release

# Run locally for testing
./target/release/operator --config operator/config.toml

# Register on Tangle (requires TNT stake)
tangle operator register \
  --blueprint-id <your-blueprint-id> \
  --endpoint https://your-operator.example.com \
  --stake 10000
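
For the configuration step, a sketch of what config.toml might contain. The field names here are hypothetical; the page only says to set the model, GPU count, pricing, and endpoint URL, so check the template's own config.example.toml for the real schema:

```toml
# Hypothetical sketch only; field names in the actual template may differ.

[operator]
endpoint = "https://your-operator.example.com"

[model]
name = "meta-llama/Llama-3.1-8B-Instruct"  # example model ID
gpu_count = 1

[pricing]
price_per_token_tnt = 0.000002
```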