
When to Use the Chat Interface

The Chat interface lets you test and interact with your trained models in a browser.

What It Does

The Chat interface (aitraining chat) provides:
  • Interactive conversation with your trained models
  • Real-time response generation
  • Conversation history
  • Model parameter adjustment (temperature, max tokens, etc.)

Best For

  • Testing trained models - Verify your fine-tuned model works as expected
  • Quick experiments - Try different prompts and parameters
  • Demos - Show stakeholders what your model can do
  • Debugging - Identify issues in model responses

What It Looks Like

Open the chat interface in your browser to:
  • Type messages in a chat box
  • See model responses in real-time
  • Adjust generation parameters
  • View conversation history

Starting the Chat Interface

# Start the chat interface
aitraining chat

# With custom port
aitraining chat --port 7860

# With custom host
aitraining chat --host 0.0.0.0 --port 7860
Then open http://localhost:7860 in your browser.

Workflow Example

  1. Train your model with the CLI: aitraining llm --train ...
  2. Start the chat interface: aitraining chat
  3. Open your browser to http://localhost:7860
  4. Select your trained model
  5. Start chatting to test responses
  6. Adjust temperature/parameters as needed
  7. Iterate on training if needed
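Steps 2-3 above can also be scripted. A minimal sketch, assuming the `aitraining` CLI is installed on your PATH and using the default port shown on this page:

```python
import shutil
import subprocess

# Launch the chat UI (steps 2-3 above) and report the local URL.
# Assumes the `aitraining` CLI is installed; 7860 is the port used
# throughout this page.
PORT = 7860

def start_chat_ui(port: int = PORT) -> str:
    """Start `aitraining chat` in the background if the CLI is available,
    and return the URL to open in a browser."""
    if shutil.which("aitraining"):
        subprocess.Popen(["aitraining", "chat", "--port", str(port)])
    return f"http://localhost:{port}"

print(start_chat_ui())  # open this URL, select your model, start chatting
```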

Advantages

  • Immediate feedback - See responses instantly
  • No coding required - Just type and chat
  • Visual interface - Easy to use
  • Parameter tuning - Adjust generation settings in real-time

Limitations

  • Not for training - use the CLI or API to train models
  • Local by default - runs on the machine you start it from; for remote access, bind with --host 0.0.0.0 or tunnel the port over SSH
  • Single model - test one model at a time

When to Use Something Else

Use the CLI when you:
  • Need to train models
  • Want to automate workflows
  • Need batch processing
  • Want reproducible experiments
Use the API when you:
  • Build applications
  • Need programmatic control
  • Integrate with other systems
  • Deploy to production

Common Use Cases

Post-Training Verification

“Did my fine-tuning work?”
  • Load trained model
  • Test with sample prompts
  • Verify response quality

Parameter Exploration

“What temperature works best?”
  • Try different generation settings
  • See effects immediately
  • Find optimal parameters
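The effect of temperature is easy to see in the softmax that turns a model's logits into token probabilities: low temperature sharpens the distribution (more deterministic), high temperature flattens it (more varied). An illustrative sketch with toy three-token logits, not a real model vocabulary:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, rescaling by temperature first.
    T < 1 sharpens the distribution; T > 1 flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near one-hot: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform: more variety
print(cold)
print(hot)
```

This is why the chat interface's temperature slider changes behavior so visibly: you are reshaping this distribution on every generated token.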

Demo Preparation

“Show the team what we built”
  • Visual, easy to understand
  • Interactive demonstration
  • No technical setup needed

Tips

  1. Start with a low temperature - more consistent responses make testing easier
  2. Save good prompts - Document what works
  3. Compare models - Test before/after fine-tuning
  4. Check edge cases - Try unusual inputs
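Tip 2 can be as simple as appending to a local JSON file. A minimal sketch; the file name and fields below are our own convention, not something the aitraining CLI provides:

```python
import json
import os

def save_prompt(path: str, prompt: str, temperature: float, note: str = "") -> int:
    """Append a prompt that worked, plus the settings that produced it,
    to a local JSON log. Path and schema are a local convention, not
    part of the aitraining CLI. Returns the number of logged prompts."""
    entries = []
    if os.path.exists(path):
        with open(path) as f:
            entries = json.load(f)
    entries.append({"prompt": prompt, "temperature": temperature, "note": note})
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)
    return len(entries)

# Usage: save_prompt("good_prompts.json", "Summarize this ticket:", 0.2, "crisp")
```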

Next Steps