Smart Router for LLMs

Route
Smarter.
Not Harder.

The intelligent layer for your AI stack. We analyze complexity and route every prompt to the right model, cutting your costs without sacrificing quality.

Live demo: "Explain quantum physics in simple terms..." is routed to Haiku (low cost, fast) instead of GPT-4o (high cost).

Routing is the new
Prompt Engineering

Don't waste money using the smartest model for simple tasks. Hopline optimizes your traffic automatically.

Smart Routing

We analyze complexity on the fly and route to the most cost-effective model.

Prompt Adaptation

Our engine automatically tweaks your prompt to match the formatting conventions of the destination model.
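For illustration only (Hopline's adaptation engine is not public), reshaping one generic prompt into the payload each provider family expects might look roughly like the sketch below; the adapt_prompt helper and model names are placeholders.

```python
# Illustrative sketch, not Hopline's engine: reshape one generic prompt into
# the request shape each provider family expects.
def adapt_prompt(prompt: str, system: str, target: str) -> dict:
    if target.startswith("claude"):
        # Anthropic-style request: system prompt is a top-level field,
        # and max_tokens is required.
        return {
            "model": target,
            "system": system,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }
    # OpenAI-style request: system prompt travels as the first message.
    return {
        "model": target,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }

payload = adapt_prompt(
    "Explain quantum physics in simple terms.",
    system="You are a concise, friendly tutor.",
    target="claude-3-haiku",  # placeholder model id
)
```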

Unified API

One endpoint for OpenAI, Anthropic, Mistral, and Llama. Switch providers without code changes.
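As a sketch of what "one endpoint" could look like in practice, the snippet below points a standard OpenAI-compatible client at a router URL; the base URL, API key placeholder, and the "auto" model alias are assumptions for illustration, not documented values.

```python
from openai import OpenAI

# Hypothetical setup: any OpenAI-compatible client pointed at the router.
client = OpenAI(
    base_url="https://api.hopline.example/v1",  # illustrative URL
    api_key="YOUR_HOPLINE_KEY",
)

# "auto" is an assumed alias meaning "let the router choose the model".
resp = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Explain quantum physics in simple terms."}],
)
print(resp.choices[0].message.content)
```

Because the request shape never changes, swapping GPT-4o for Haiku or Mistral becomes a routing decision rather than a code change.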

Optimization on Autopilot

1

Input Prompt

Send your prompt to our unified API endpoint. We support streaming.

2

Smart Analysis

Hopline scores complexity and context to determine the ideal model.

3

Optimized Output

We route to the model, adapt the prompt, and return the response.
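Putting the three steps together, a streaming call might look like the sketch below (same caveats as above: the endpoint and model alias are illustrative). Steps 2 and 3 happen server-side, so the client only sends the prompt and reads tokens back.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.hopline.example/v1",  # illustrative URL
    api_key="YOUR_HOPLINE_KEY",
)

# Step 1: send the prompt with stream=True. Analysis and routing (steps 2-3)
# happen server-side; tokens stream back from whichever model was selected.
stream = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Translate 'good morning' into French."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```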

Transparent Pricing

Simple Usage-Based Pricing

Only pay for the tokens you route. No hidden fees.

Developer

For side projects & testing.

$0 / mo
  • 5k routed requests/mo
  • Basic prompt adaptation
  • Community support
Most Popular

Startup

For production applications.

$49 / mo
  • Unlimited requests
  • Advanced routing rules
  • Custom model fallback
  • Analytics Dashboard

Questions?

Everything you need to know about the router.

Does routing add latency?

Minimal. Our routing layer adds ~15ms of overhead, which is usually offset by routing the request to a faster, smaller model when possible.

Do you store my prompts or completions?

No. We are a pass-through service. We do not store request bodies or completions unless you opt in to debugging logs.

Can I bring my own API keys?

Yes. You can bring your own OpenAI/Anthropic keys to keep your negotiated rates, or use our aggregated billing.

How do you determine prompt complexity?

We use a specialized BERT-based classifier trained on thousands of prompts to determine complexity with 98% accuracy.
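The classifier itself isn't public, but as a rough sketch of the general approach, a fine-tuned BERT-style text classifier could gate routing like this; the checkpoint name and labels below are placeholders, not Hopline's actual model.

```python
from transformers import pipeline

# Placeholder checkpoint: Hopline's real classifier is proprietary.
complexity = pipeline(
    "text-classification",
    model="your-org/prompt-complexity-bert",
)

result = complexity("Explain quantum physics in simple terms.")[0]
# result is e.g. {"label": "LOW", "score": 0.97} with placeholder labels.
target = "claude-3-haiku" if result["label"] == "LOW" else "gpt-4o"
```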