## Endpoints

### GET /api/gpus

List available GPUs across all servers.
```bash
curl https://portal.spuric.com/api/gpus \
  -H "Authorization: Bearer $SPUR_API_KEY"
```

```python
import requests

r = requests.get(
    "https://portal.spuric.com/api/gpus",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(r.json())
```

```javascript
const r = await fetch("https://portal.spuric.com/api/gpus", {
  headers: { Authorization: `Bearer ${API_KEY}` },
});
console.log(await r.json());
```
### POST /api/sessions

Launch a GPU compute session.

```bash
curl -X POST https://portal.spuric.com/api/sessions \
  -H "Authorization: Bearer $SPUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"server":"spur509002","gpu_count":1,"image":"pytorch","duration_hours":4}'
```
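The same request in Python, as a minimal sketch using `requests`: the payload fields mirror the curl example above, while the helper names (`build_session_payload`, `launch_session`) are illustrative and not part of any official client.

```python
import requests

def build_session_payload(server, gpu_count=1, image="pytorch", duration_hours=4):
    """Request body for POST /api/sessions (fields from the curl example)."""
    return {
        "server": server,
        "gpu_count": gpu_count,
        "image": image,
        "duration_hours": duration_hours,
    }

def launch_session(api_key, **kwargs):
    """POST /api/sessions — launch a GPU compute session."""
    r = requests.post(
        "https://portal.spuric.com/api/sessions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_session_payload(**kwargs),
    )
    r.raise_for_status()  # surface 4xx/5xx responses as exceptions
    return r.json()       # response shape depends on the server
```

For example, `launch_session(API_KEY, server="spur509002")` launches a one-GPU PyTorch session for four hours using the defaults above.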
### POST /api/inference

Run AI inference (OpenAI-compatible).

```bash
curl https://portal.spuric.com/api/inference \
  -H "Authorization: Bearer $SPUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "messages": [{"role":"user","content":"Hello!"}],
    "max_tokens": 512
  }'
```
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://portal.spuric.com/api",
    api_key=API_KEY,
)
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```
```javascript
const r = await fetch("https://portal.spuric.com/api/inference", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "Qwen/Qwen2.5-72B-Instruct",
    messages: [{ role: "user", content: "Hello!" }],
    max_tokens: 512
  })
});
console.log(await r.json());
```
### GET /api/models

List available AI models.

### GET /api/balance

Get SPUR token balance.

### DELETE /api/sessions/{id}

Terminate a compute session.

### GET /api/pricing

Get current pricing info.

### GET /api/chain/stats

Get SPUR blockchain statistics.
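The remaining endpoints are simple authenticated GETs plus one DELETE, so they can be wrapped in a small helper. The sketch below assumes only what the endpoint list states; the `SpurClient` class and its method names are illustrative, not an official SDK.

```python
import requests

BASE_URL = "https://portal.spuric.com/api"

class SpurClient:
    """Thin illustrative wrapper over the portal REST API."""

    def __init__(self, api_key: str):
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def _get(self, path: str):
        r = requests.get(f"{BASE_URL}{path}", headers=self.headers)
        r.raise_for_status()
        return r.json()

    def models(self):       # GET /api/models
        return self._get("/models")

    def balance(self):      # GET /api/balance
        return self._get("/balance")

    def pricing(self):      # GET /api/pricing
        return self._get("/pricing")

    def chain_stats(self):  # GET /api/chain/stats
        return self._get("/chain/stats")

    def terminate_session(self, session_id: str):
        """DELETE /api/sessions/{id} — terminate a compute session."""
        r = requests.delete(f"{BASE_URL}/sessions/{session_id}",
                            headers=self.headers)
        r.raise_for_status()
        return r.json()
```

Typical usage: `SpurClient(API_KEY).balance()` to check your SPUR token balance before launching a session, and `terminate_session(session_id)` to stop one early.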