# Global Options

These options work across most AITraining CLI commands.
## Version

Check the installed version:

```bash
aitraining --version
aitraining -v
```
## Help

Get help for any command:

```bash
aitraining --help
aitraining llm --help
aitraining text-classification --help
```
## Config File

Load parameters from a YAML configuration file:

```bash
aitraining --config path/to/config.yaml
```

This is useful for:

- Reproducible experiments
- Complex configurations
- Sharing settings with teammates
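As a sketch, the workflow might look like the following. The field names in the YAML are illustrative assumptions, not the authoritative schema; check `aitraining --help` for the exact parameters your version accepts.

```bash
# Write a hypothetical minimal config. The keys below (task, base_model,
# data, params) are placeholders for illustration only.
cat > config.yaml <<'EOF'
task: llm
base_model: meta-llama/Llama-2-7b-hf
data:
  path: ./data
params:
  epochs: 3
  batch_size: 4
EOF

# Launch only if the CLI is actually installed on this machine
if command -v aitraining >/dev/null 2>&1; then
  aitraining --config config.yaml
fi
```

Committing the config file alongside your data makes the run reproducible and easy to share with teammates.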
## Backend

Specify where training runs:

```bash
aitraining llm --train --backend local ...
```

Available backends:

| Backend | Description |
|---|---|
| `local` | Run on the local machine (default). Variants: `local-cli`, `local-ui` |
| `spaces-*` | Run on Hugging Face Spaces |
| `ep-*` | Run on Hugging Face Endpoints |
| `ngc-*` | Run on NVIDIA NGC/DGX Cloud |
| `nvcf-*` | Run on NVIDIA Cloud Functions |
### Spaces Backend Options

| Backend | GPU |
|---|---|
| `spaces-t4-small` | T4 (small) |
| `spaces-t4-medium` | T4 (medium) |
| `spaces-a10g-small` | A10G (small) |
| `spaces-a10g-large` | A10G (large) |
| `spaces-a10g-largex2` | 2x A10G |
| `spaces-a10g-largex4` | 4x A10G |
| `spaces-a100-large` | A100 |
| `spaces-l4x1` | 1x L4 |
| `spaces-l4x4` | 4x L4 |
| `spaces-l40sx1` | 1x L40S |
| `spaces-l40sx4` | 4x L40S |
| `spaces-l40sx8` | 8x L40S |
| `spaces-cpu-basic` | CPU only |
| `spaces-cpu-upgrade` | CPU (upgraded) |
**Remote backends require authentication:** when using non-local backends (`spaces-*`, `ep-*`, `ngc-*`, `nvcf-*`), you must provide `--username` and `--token` for Hugging Face authentication.

**Push to Hub also requires authentication:** even with `--backend local`, using `--push-to-hub` requires `--username` and `--token` to upload the model to the Hugging Face Hub.
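Putting the pieces together, a remote launch might look like the sketch below. It only assembles and echoes the command rather than executing it, and any task-specific flags your project needs would still be appended; the credential values are placeholders.

```bash
# Placeholder credentials -- replace with your real username and token
HF_USERNAME="your-hf-username"
HF_TOKEN="hf_xxx"

# Assemble a remote-backend training command using only flags documented
# on this page (--train, --backend, --username, --token, --push-to-hub)
CMD="aitraining llm --train --backend spaces-a10g-large \
--username $HF_USERNAME --token $HF_TOKEN --push-to-hub"

echo "$CMD"
```

Echoing the command first is a cheap way to review exactly what will be submitted to a paid backend before running it.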
## Environment Variables

Set these before running commands.

### Authentication

```bash
export HF_TOKEN="hf_..."       # Hugging Face token
export WANDB_API_KEY="..."     # Weights & Biases key
```
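A small guard before a long run can catch a missing token early. This sketch (with placeholder values) only verifies that the variables are set in the current shell:

```bash
# Placeholder values for illustration
export HF_TOKEN="hf_xxx"
export WANDB_API_KEY="xxx"

# Abort with an error message if HF_TOKEN is unset or empty
: "${HF_TOKEN:?HF_TOKEN is not set}"
echo "credentials configured"
```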
### GPU Configuration

```bash
export CUDA_VISIBLE_DEVICES=0,1                       # Use specific GPUs
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512  # Memory management
```
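For example, to pin a run to the first two GPUs and confirm the settings took effect before launching training:

```bash
# Restrict training to GPUs 0 and 1 and cap the allocator's split size
export CUDA_VISIBLE_DEVICES=0,1
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

echo "Visible GPUs: ${CUDA_VISIBLE_DEVICES}"
```

Note that `CUDA_VISIBLE_DEVICES` renumbers the selected devices, so inside the process they appear as devices 0 and 1 regardless of their physical indices.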
### MPS (Apple Silicon) Control

```bash
export AUTOTRAIN_DISABLE_MPS=1   # Force CPU training on Mac
export AUTOTRAIN_ENABLE_MPS=1    # Force MPS even with quantization
```
## Interactive Mode

Launch the configuration wizard:

```bash
aitraining                     # No arguments = wizard mode
aitraining llm --interactive   # Explicit interactive mode
```
## Next Steps