llm402.ai
Llama-4-Maverick-17B-128E-Instruct-FP8
Protocol: x402
New
Status: degraded
P50 Latency: 10ms
Uptime: 100.0%
Price: Free
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 inference via x402 USDC on Base, 21 sats
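The listing says inference is paid via x402 with USDC on Base. A minimal sketch of that flow, assuming the usual x402 shape (an unpaid request is answered with HTTP 402 and a JSON body listing `accepts` payment offers, and the client retries with an `X-PAYMENT` header): `signPayment` and the requirement fields here are illustrative placeholders, not a confirmed SDK API.

```typescript
// Hypothetical x402 client loop. The `accepts` array and X-PAYMENT header
// follow the x402 convention; `signPayment` stands in for a real wallet hook.
type PaymentRequirements = {
  scheme: string;
  network: string;          // e.g. "base" for USDC on Base
  maxAmountRequired: string;
  payTo: string;
};

type MiniResponse = { status: number; json: () => any };

async function fetchWithX402(
  url: string,
  doFetch: (url: string, headers?: Record<string, string>) => Promise<MiniResponse>,
  signPayment: (req: PaymentRequirements) => string, // placeholder wallet signing
): Promise<any> {
  const first = await doFetch(url);
  if (first.status !== 402) return first.json(); // free or already paid
  // The 402 body lists acceptable payments; take the first offer.
  const requirements: PaymentRequirements = first.json().accepts[0];
  const paymentHeader = signPayment(requirements);
  // Retry the same request, now carrying the signed payment payload.
  return (await doFetch(url, { "X-PAYMENT": paymentHeader })).json();
}
```

In practice the CLI and MCP server in the Quick Start below handle this retry-with-payment loop for you.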
Score Breakdown
Uptime: Intermittent
Latency: Fast
Pricing: Stable
30-Day Trends
[Uptime and latency trend charts] Price varies per request.
Endpoint Details
URL: https://llm402.ai/v1/chat/completions/Llama-4-Maverick-17B-128E-Instruct-FP8
Canonical URL: https://llm402.ai/v1/chat/completions/Llama-4-Maverick-17B-128E-Instruct-FP8
Source: 402index
Last Scan: Apr 12
Scan Cycles: 1
Consecutive Failures: 0
P99 Latency: 10ms
Quick Start
Use with an Agent
1. Install the MCP server (paste in your terminal):
claude mcp add boltzpay -- npx -y @boltzpay/mcp
2. Ask your agent:
Fetch https://llm402.ai/v1/chat/completions/Llama-4-Maverick-17B-128E-Instruct-FP8 and return the result

Use in Code
CLI
npx boltzpay fetch "https://llm402.ai/v1/chat/completions/Llama-4-Maverick-17B-128E-Instruct-FP8"
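The CLI above wraps a plain HTTP POST. Assuming the endpoint accepts an OpenAI-style chat-completions body (an assumption based on the /v1/chat/completions path; the listing only gives the URL), the request payload can be sketched as:

```typescript
// Assumed OpenAI-style chat-completions payload for this endpoint; the field
// names (model, messages, role, content) are an assumption, not confirmed
// by the listing.
const ENDPOINT =
  "https://llm402.ai/v1/chat/completions/Llama-4-Maverick-17B-128E-Instruct-FP8";
const MODEL = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8";

function chatCompletionBody(prompt: string): string {
  return JSON.stringify({
    model: MODEL,
    messages: [{ role: "user", content: prompt }],
  });
}
```

A client would POST this body to ENDPOINT with Content-Type: application/json, letting the x402 layer settle payment if the server answers 402.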