Launching the Chat Interface

The Chat interface lets you interact with your trained models in a browser.

Quick Start

aitraining chat
The browser will open automatically to http://localhost:7860/inference.

Command Options

aitraining chat [OPTIONS]

Options:
  --port PORT    Port to run on (default: 7860)
  --host HOST    Host to bind to (default: 127.0.0.1)

Examples

# Default (localhost:7860)
aitraining chat

# Custom port
aitraining chat --port 3000

# Accessible from other machines
aitraining chat --host 0.0.0.0 --port 7860

Requirements

Before launching, you need:
  • A trained model (local path or Hugging Face model ID)
  • Sufficient memory to load the model
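
To judge whether you have "sufficient memory", a common rule of thumb is parameter count times bytes per parameter. The helper below is a hypothetical illustration of that arithmetic, not part of the aitraining CLI, and it ignores inference overhead (KV cache, activations), which adds more on top of the weights:

```python
def estimate_load_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GiB needed just to hold model weights in memory.

    bytes_per_param: 2 for fp16/bf16 weights, 4 for fp32.
    """
    return n_params * bytes_per_param / (1024 ** 3)

# A 1B-parameter model in fp16 needs roughly 2 GiB for weights alone.
print(f"{estimate_load_gib(1e9):.1f} GiB")
```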

Typical Workflow

  1. Train a model:
    aitraining llm --train \
      --model meta-llama/Llama-3.2-1B \
      --data-path ./my-data \
      --project-name my-model
    
  2. Launch chat:
    aitraining chat
    
  3. Browser opens automatically to localhost:7860/inference
  4. Load your trained model from ./my-model
  5. Start chatting
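
If you script steps 2 and 3 above, it can help to wait until the chat server is actually accepting connections before pointing a browser at it. A minimal standard-library sketch (the default host and port match those documented above; `wait_for_server` is a hypothetical helper, not an aitraining command):

```python
import socket
import time

def wait_for_server(host: str = "127.0.0.1", port: int = 7860,
                    timeout: float = 30.0) -> bool:
    """Poll until a TCP listener on host:port accepts connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```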

Stopping the Server

Press Ctrl+C in the terminal to stop the chat server.

Troubleshooting

Port already in use

Another process is using port 7860. Either:
  • Stop the other process
  • Use a different port: aitraining chat --port 3000
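
If you just need any free port to pass to --port, you can let the OS pick one by binding to port 0. This is a generic sketch, not part of the aitraining CLI:

```python
import socket

def find_free_port() -> int:
    """Bind to port 0 so the OS assigns an unused ephemeral port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(find_free_port())
```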
Model fails to load

Check that:
  • The model path is correct
  • There is sufficient GPU or system memory
  • The model format is compatible (Hugging Face format)
Interface not reachable from other machines

By default the server binds only to 127.0.0.1. Use --host 0.0.0.0 to bind to all interfaces:
aitraining chat --host 0.0.0.0

Next Steps