llm402.ai
jamba-large-1.7
Tags: L402, AI
Status: Healthy
P50 Latency: 90ms
Uptime: 100.0%
Price Paid
ai21/jamba-large-1.7 inference via L402 (Ollama /api/chat format): 487 sats
Score Breakdown
Reliable · Fast · Stable pricing
30-Day Trends
Uptime and Latency trend charts (interactive, not reproduced here)
Price varies per request
Endpoint Details
URL: https://llm402.ai/api/chat/jamba-large-1.7 (canonical)
Sources: 402index
Last Scan: Apr 4
Scan Cycles: 11
Consecutive Failures: 0
P99 Latency: 100ms
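Since the endpoint advertises the Ollama /api/chat format, a request body can be sketched as below. The field names follow Ollama's documented chat schema; whether llm402.ai honors the `stream` flag or any extra options is an assumption.

```python
import json

def build_chat_request(prompt: str, model: str = "jamba-large-1.7") -> str:
    """Build an Ollama-style /api/chat request body.

    Field names follow Ollama's documented /api/chat schema; which
    optional fields llm402.ai accepts is an assumption.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one JSON response instead of chunked output
    }
    return json.dumps(payload)
```

POSTing this body to the canonical URL above would be the unauthenticated first request in the L402 flow.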
Quick Start
Use with an Agent
1. Install MCP server
Paste in your terminal:
claude mcp add boltzpay -- npx -y @boltzpay/mcp
2. Ask your agent
"Fetch https://llm402.ai/api/chat/jamba-large-1.7 and return the result"
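Under the hood, an L402-priced endpoint answers an unauthenticated request with HTTP 402 and a WWW-Authenticate: L402 challenge carrying a macaroon and a Lightning invoice; after paying the invoice, the client retries with an Authorization header built from the macaroon and the payment preimage. A minimal sketch of that header handling, assuming the standard L402 challenge shape (the sample values in the usage note are hypothetical):

```python
import re

def parse_l402_challenge(www_authenticate: str) -> dict:
    """Parse an L402 WWW-Authenticate challenge into its parts.

    Expects the form: L402 macaroon="<base64>", invoice="<bolt11>"
    (scheme and field names per the L402 spec; the exact header
    llm402.ai sends is an assumption).
    """
    fields = dict(re.findall(r'(\w+)="([^"]+)"', www_authenticate))
    return {"macaroon": fields.get("macaroon"), "invoice": fields.get("invoice")}

def l402_auth_header(macaroon: str, preimage_hex: str) -> str:
    """Build the Authorization header for the paid retry."""
    return f"L402 {macaroon}:{preimage_hex}"
```

For example, a challenge like `L402 macaroon="MDAxM2xvY2F0aW9u", invoice="lnbc4870n1testinvoice"` (hypothetical values) parses into its macaroon and invoice, and after payment the retry carries `Authorization: L402 <macaroon>:<preimage>`.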