Launching the Chat Interface
The Chat interface lets you interact with your trained models in a browser.

Quick Start

Launch the chat server; the interface is served at http://localhost:7860/inference.
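A minimal launch sketch, assuming no flags are required (the same chat subcommand appears with flags under Troubleshooting below):

```bash
aitraining chat
```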
Command Options
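Only the flags that appear elsewhere on this page are sketched here; the full option set may be longer.

```bash
# Flags referenced in this guide:
#   --port <number>    serve the chat UI on a different port (the default URL uses 7860)
#   --host <address>   address to bind; 0.0.0.0 makes the UI reachable from other machines
aitraining chat --port 7860
```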
Examples
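A few invocation sketches built from the flags referenced in this guide (the port and address values are taken from the Troubleshooting section below):

```bash
# Launch with defaults and chat at http://localhost:7860/inference
aitraining chat

# Serve on a different port
aitraining chat --port 3000

# Bind to all interfaces so other machines on the network can connect
aitraining chat --host 0.0.0.0
```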
Requirements
Before launching, you need:

- A trained model (local path or Hugging Face model ID)
- Sufficient memory to load the model
Typical Workflow
1. Train a model.
2. Launch chat (see the command sketch after this list).
3. The browser opens automatically to localhost:7860/inference.
4. Load your trained model from ./my-model.
5. Start chatting.
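A sketch of steps 2 and 3 from the command line (the training command in step 1 depends on your task and is omitted here):

```bash
# Launch the chat server; the browser should open to
# localhost:7860/inference automatically.
aitraining chat
```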
Stopping the Server
Press Ctrl+C in the terminal to stop the chat server.
Troubleshooting
Port already in use
Another process is using port 7860. Either:
- Stop the other process
- Use a different port:
aitraining chat --port 3000
Model won't load
Check:
- Model path is correct
- Sufficient GPU/RAM memory
- Model format is compatible (Hugging Face format)
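Two quick checks from the shell, as a sketch (the GPU check assumes an NVIDIA card; ./my-model is the example path from the workflow above):

```bash
# A Hugging Face-format model directory should contain config.json plus weight files.
ls ./my-model

# Show free GPU memory (NVIDIA GPUs only); use your OS tools to check RAM.
nvidia-smi
```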
Can't access from other machines
Use --host 0.0.0.0 to bind to all interfaces:
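```bash
aitraining chat --host 0.0.0.0
```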