🖥️ Available Nodes
Select a node to see available models and start a session
🧠 Models on Node
📥 Load Custom Model from HuggingFace
Format: owner/repo:quantization (e.g. unsloth/Llama-3.2-3B-Instruct-GGUF:Q4_K_M)
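The spec format above can be split into its parts with a few lines of code — a minimal sketch (the function name and fallback behavior are illustrative, not LightPhon's actual parser):

```python
def parse_model_spec(spec: str):
    """Split an 'owner/repo:quantization' spec into its parts.

    The quantization suffix is optional; None is returned when absent.
    """
    repo_part, _, quant = spec.partition(":")
    owner, _, repo = repo_part.partition("/")
    if not owner or not repo:
        raise ValueError(f"expected 'owner/repo[:quantization]', got {spec!r}")
    return owner, repo, quant or None

print(parse_model_spec("unsloth/Llama-3.2-3B-Instruct-GGUF:Q4_K_M"))
# → ('unsloth', 'Llama-3.2-3B-Instruct-GGUF', 'Q4_K_M')
```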
📦 Pre-loaded Models
⚙️ Session Configuration
🎛️ LLM Generation Settings
🎲 Sampling
⚖️ Penalties
🏜️ DRY (Don't Repeat Yourself)
📝 Generation
📋 Sampler Order
📚 RAG Knowledge Base
Upload documents to provide context for AI responses. The AI will search your documents and use relevant information when answering your questions.
Add Document
Indexed Documents (0)
No documents indexed yet. Upload documents to enable RAG.
📝 Add Text Document
💳 Payment Required
Amount: - sats
💰 Your balance: 0 sats
⏳ Payment will be deducted automatically from your balance...
Or scan/copy to pay with external wallet
🧭 Model Router
Use LightPhon as a drop-in replacement for the OpenAI API. Connect any client, framework, or AI tool — LightPhon routes requests to the best available GPU node in the decentralized network.
⚡ How It Works
LightPhon is an OpenAI-compatible proxy. Instead of sending requests to OpenAI, you point your client to LightPhon — it selects the best GPU node and forwards the request. You get back a standard OpenAI response.
Client (OpenClaw, Python, curl…) → LightPhon (Proxy & Router) → GPU Node (runs the model)
Models are served by two types of sources:
- 🖥️ Local GPU Nodes — real hardware nodes running LLMs via the LightPhon node software. Connected via WebSocket.
- 🐾 OpenClaw — an external OpenAI-compatible AI platform. The LightPhon server queries its /v1/models endpoint and makes those models available alongside local nodes. OpenClaw models are marked with a badge in the list below.
The router treats both sources equally — it picks the best match based on availability, speed, and your preferences.
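How such a selection might look in code — a toy sketch, not LightPhon's actual algorithm; the field names (available, tokens_per_sec, preferred) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    model: str
    source: str            # "local" or "openclaw" — treated identically
    available: bool
    tokens_per_sec: float  # rough speed estimate

def pick_model(candidates, preferred=None):
    """Return the best candidate: honor an explicit preference if it's
    available, otherwise fall back to the fastest available model ('auto')."""
    usable = [c for c in candidates if c.available]
    if preferred:
        for c in usable:
            if c.model == preferred:
                return c
    return max(usable, key=lambda c: c.tokens_per_sec, default=None)

nodes = [
    Candidate("llama-3.2-3b", "local", True, 85.0),
    Candidate("gpt-proxy", "openclaw", True, 40.0),
    Candidate("mixtral", "local", False, 120.0),  # offline, never picked
]
print(pick_model(nodes).model)                # → llama-3.2-3b (fastest available)
print(pick_model(nodes, "gpt-proxy").model)   # → gpt-proxy (explicit preference)
```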
Quick Start (3 steps):
- Create an API key — your personal token to authenticate requests
- Select a model — pick one from the list, or leave it on "auto" to let the router decide
- Copy the config — ready-to-use snippet for OpenClaw, Python, Node.js, curl, or Aider
🔑 Step 1 — Create an API Key
This key acts as your apiKey in any OpenAI-compatible client. It's personal and never expires. You can create up to 5 keys.
🔒 Login or Register to generate your API key and start using the API.
🌐 Step 2 — Choose a Model
Click a model card to set it as your preferred model. If you don't select one, the router uses "auto" — it picks the best available model automatically.
📋 Step 3 — Connect Your Client
Add this to your OpenClaw models.providers config to use LightPhon as a backend:
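The exact schema depends on your OpenClaw version; a sketch of what such a provider entry might look like (the URL, key, and field names are assumptions, not LightPhon's documented config):

```json
{
  "models": {
    "providers": [
      {
        "name": "lightphon",
        "baseURL": "https://api.lightphon.example/v1",
        "apiKey": "lp_your_key_here",
        "model": "auto"
      }
    ]
  }
}
```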
Using the openai Python SDK — just change base_url:
Using the openai Node.js SDK — just change baseURL:
Quick test from the command line:
For Aider, Continue.dev, Cursor, or any IDE plugin that supports custom OpenAI endpoints:
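Whatever the client, the request on the wire is the same OpenAI chat-completions shape. A stdlib-only sketch that builds it (URL and key are placeholders):

```python
import json
import urllib.request

BASE_URL = "https://api.lightphon.example/v1"  # placeholder endpoint
API_KEY = "lp_your_key_here"                   # placeholder key

payload = {
    "model": "auto",  # or any model id from Step 2
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# Send it (commented out here since the endpoint is a placeholder):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```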
💰 My Wallet
🔄 EUR ↔ Sats Converter
⚠️ EUR deposits are credited at 60% of the real BTC value.
This means node hosts receiving EUR-sourced payments get 60% of the sats compared to direct Bitcoin payments. This policy exists to:
- 📈 Incentivize direct BTC payments (you get 100% value)
- 🔄 Avoid exchange-rate risks for node operators
- ⚡ Keep the Lightning ecosystem strong
💡 Tip: Deposit via Lightning (⚡) to get full sats value!
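To make the 60% policy concrete — a worked example with an assumed (not live) rate of 60,000 EUR per BTC:

```python
SATS_PER_BTC = 100_000_000
EUR_CREDIT_RATE = 0.60  # EUR deposits credited at 60% of BTC value

def eur_to_credited_sats(eur: float, eur_per_btc: float) -> int:
    """Sats credited for a EUR deposit after the 60% haircut."""
    full_sats = eur / eur_per_btc * SATS_PER_BTC
    return round(full_sats * EUR_CREDIT_RATE)

# Assumed rate: 60,000 EUR/BTC. A 6 EUR deposit is worth 10,000 sats,
# but gets credited at 60% of that value.
print(eur_to_credited_sats(6, 60_000))  # → 6000
```

A direct Lightning deposit of the same value would credit the full 10,000 sats.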