Command Structure

The AITraining CLI follows a consistent pattern for all commands.

Basic Syntax

aitraining <command> [options]

Available Commands

| Command | Description |
|---|---|
| llm | Train large language models (LLMs) |
| chat | Launch chat interface for inference |
| api | Start the training API server |
| text-classification | Train text classification models |
| text-regression | Train text regression models |
| image-classification | Train image classification models |
| image-regression | Train image regression models |
| token-classification | Train NER/token classification models |
| seq2seq | Train sequence-to-sequence models |
| tabular | Train tabular data models |
| sentence-transformers | Train sentence embedding models |
| object-detection | Train object detection models |
| vlm | Train vision-language models |
| extractive-qa | Train extractive QA models |
| tools | Utility tools: merge-llm-adapter, convert_to_kohya |
| setup | Initial setup and configuration |
| spacerunner | Run training on Hugging Face Spaces |
SpaceRunner requirements: The spacerunner command requires --project-name, --script-path, --username, --token, and --backend to be specified.
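Putting those required flags together, a spacerunner invocation might look like the following sketch. The values are placeholders: the project name, script path, username, and backend identifier shown here are illustrative, not values from the documentation.

```
aitraining spacerunner \
  --project-name my-space-run \
  --script-path ./train.py \
  --username your-hf-username \
  --token $HF_TOKEN \
  --backend <backend-id>
```

Passing the token via an environment variable such as HF_TOKEN keeps it out of shell history.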

Getting Help

General Help

aitraining --help

Command-Specific Help

aitraining llm --help

Trainer-Specific Help

For LLM training, see parameters for a specific trainer:
aitraining llm --trainer sft --help
aitraining llm --trainer dpo --help
aitraining llm --trainer orpo --help
aitraining llm --trainer ppo --help
Or use preview mode to inspect a trainer's parameters without committing to it:
aitraining llm --preview-trainer dpo --help

Global Options

These options are truly global (work at the top level):
| Option | Description |
|---|---|
| --help, -h | Show help message |
| --version, -v | Show version |
| --config | Load options from a YAML config file |
The --backend option is available on most training commands but is registered per-command, not globally. See Global Options for backend details.

Config File Usage

Instead of passing command-line arguments, you can load all options from a YAML config file:
aitraining --config training_config.yaml
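As a sketch, a config mirroring the CLI flags used in the examples below might look like this. The exact key names and structure depend on the task and trainer and are assumptions here, not documented values:

```yaml
# training_config.yaml -- illustrative only; key names are assumed
task: llm
model: google/gemma-3-270m
data_path: ./data
project_name: my-model
trainer: sft
```

Keeping the config in version control makes training runs easier to reproduce than long one-off command lines.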

Examples

Basic LLM Training

aitraining llm --train \
  --model google/gemma-3-270m \
  --data-path ./data \
  --project-name my-model \
  --trainer sft

With LoRA

aitraining llm --train \
  --model meta-llama/Llama-3.2-1B \
  --data-path ./data \
  --project-name my-lora-model \
  --peft \
  --lora-r 16 \
  --lora-alpha 32

Text Classification

aitraining text-classification \
  --model bert-base-uncased \
  --data-path ./reviews.csv \
  --text-column text \
  --target-column label \
  --project-name sentiment-model

Interactive Mode

Run the CLI without any arguments to start the interactive wizard:
aitraining
Or explicitly:
aitraining llm --interactive

Next Steps