
Llama-4-Maverick-17B-128E-Instruct-FP8 (Ollama)

L402 · AI · New · Degraded
P50 Latency 3.7s
Uptime 0.0%
Price Free

Llama-4-Maverick-17B-128E-Instruct-FP8 (Ollama) — an AI endpoint offering L402-metered access.

Score Breakdown

Uptime 50% (40pt weight)
Latency 26% (30pt weight)
Price Stability 100% (30pt weight)
Weighted total 40/100
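The breakdown above pairs each component score with a point weight. A minimal sketch of the obvious score × weight combination follows; the directory's actual formula is not published here, so this is an assumption, and note that it yields 57.8 rather than the displayed 40/100, which suggests the site applies additional penalties (e.g., for the degraded status or the 0.0% live uptime):

```python
# Hypothetical reconstruction of the score weighting; the directory's
# real formula is undocumented, so treat this as an illustration only.

def weighted_total(components: dict[str, tuple[float, float]]) -> float:
    """components maps name -> (score_fraction, point_weight)."""
    return sum(score * weight for score, weight in components.values())

total = weighted_total({
    "uptime": (0.50, 40),           # 50% of a 40-point weight
    "latency": (0.26, 30),          # 26% of a 30-point weight
    "price_stability": (1.00, 30),  # 100% of a 30-point weight
})
print(round(total, 1))  # 57.8 under this assumed formula, not the listed 40
```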

30-Day Trends

No trend data available yet.

Endpoint Details

URL https://llm402.ai/api/generate/Llama-4-Maverick-17B-128E-Instruct-FP8
Canonical URL https://llm402.ai/api/generate/Llama-4-Maverick-17B-128E-Instruct-FP8
Sources 402index
Last Scan Mar 26, 2026
Scan Cycles 1
Consecutive Failures 0
P99 Latency 3.7s

Quick Start

Use with an Agent
1. Install MCP server
claude mcp add boltzpay -- npx -y @boltzpay/mcp


2. Ask your agent
Fetch https://llm402.ai/api/generate/Llama-4-Maverick-17B-128E-Instruct-FP8 and return the result
Use in Code
CLI
npx boltzpay fetch "https://llm402.ai/api/generate/Llama-4-Maverick-17B-128E-Instruct-FP8"
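For direct use without the CLI, the endpoint can also be called over plain HTTP. The sketch below assumes the endpoint follows Ollama's `/api/generate` request shape and the standard L402 credential format for the Authorization header (`L402 <macaroon>:<preimage>`); neither is confirmed by this page, and an unpaid request will typically be rejected with 402 Payment Required:

```python
import json
import urllib.request

ENDPOINT = "https://llm402.ai/api/generate/Llama-4-Maverick-17B-128E-Instruct-FP8"

def l402_header(macaroon: str, preimage: str) -> str:
    # Conventional L402 credential format: "L402 <macaroon>:<preimage>".
    return f"L402 {macaroon}:{preimage}"

def build_request(prompt: str, macaroon: str, preimage: str) -> urllib.request.Request:
    # Assumed Ollama-style /api/generate body; the endpoint's real schema may differ.
    body = json.dumps({"prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": l402_header(macaroon, preimage),
        },
        method="POST",
    )

# Example (requires valid L402 credentials obtained after payment):
# req = build_request("Hello", macaroon="...", preimage="...")
# with urllib.request.urlopen(req) as resp:  # raises HTTPError 402 if unpaid
#     print(json.load(resp))
```

Keeping request construction separate from the network call makes the credential handling easy to verify before spending anything against a paid endpoint.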