We spin up an actual server in a real datacenter, install the Inference-X engine, load an AI model, and give you a live chat interface. Everything is erased when you're done. No data stored. No account created.
./ix --config ix-config.json
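The `--config` flag points at a JSON file describing the demo session. The schema isn't documented in this section, so every field below is a hypothetical illustration of what such a file might contain, not the actual IX format:

```json
{
  "model": "example-model",
  "port": 8080,
  "session_ttl_minutes": 30,
  "wipe_on_exit": true
}
```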
Every free demo needs real compute. Community providers contribute their idle server capacity. In return, they're credited publicly, gain early access to future IX frameworks, and become part of the infrastructure that democratizes AI.
Pioneer providers will have priority integration when the Echo Relay (federated inference network) launches.
| Backend | Nodes | Avg tok/s | Load | Status |
|---|---|---|---|---|
| Loading community nodes... | | | | |
Run `./ix --scout report` to appear on this map. Your IP is never shown.