Quick Start

Get up and running with MarineGym in minutes. This guide covers the minimal setup for launching training runs, benchmarking large batches, and validating results.

Prerequisites

  • Follow the Source Setup or Docker Setup guide to provision Isaac Sim, IsaacLab, and MarineGym.

  • Activate the sim Conda environment (or the equivalent environment used during installation).

  • Change into the repository root so relative paths resolve correctly:

    conda activate sim
    cd ~/MarineGym
    

Warm-Up Training Runs

Each command below invokes scripts/train.py with a different task. Start with a small number of environments to validate your install before scaling up.

python scripts/train.py task=Hover algo=ppo headless=false enable_livestream=false
python scripts/train.py task=Track algo=ppo headless=false enable_livestream=false
python scripts/train.py task=Landing algo=ppo headless=false enable_livestream=false

Tips:

  • Set headless=true when running on a remote machine without display access.

  • Append seed=<value> for reproducible experiments.
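
For example, combining both tips into a single reproducible headless run (the seed value is arbitrary):

# Headless, reproducible warm-up run
python scripts/train.py task=Hover algo=ppo headless=true enable_livestream=false seed=42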

Evaluation and Resume

Switch to evaluation mode or resume an interrupted run by adjusting the command-line options:

# Evaluate an existing checkpoint
python scripts/train.py task=Hover algo=ppo mode=evaluate headless=true

# Resume training from a specific checkpoint file
python scripts/train.py task=Hover algo=ppo resume_path=./checkpoints/hover_latest.pt

Weights & Biases logging is enabled by default. To log offline instead:

export WANDB_MODE=offline
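
Offline runs are stored locally; to upload them to the dashboard later, the standard wandb CLI can sync them (the wandb/ directory below is the client's default offline run location and may differ in your setup):

# Upload previously recorded offline runs (default wandb run directory assumed)
wandb sync wandb/offline-run-*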

High-Throughput Benchmark

For the headline benchmark configuration (4096 parallel environments with the iAUV model):

python scripts/train.py task=HoverRand algo=ppo headless=true enable_livestream=false \
    mode=train task.drone_model.name=iAUV task.env.num_envs=4096 total_frames=50000000

Monitor GPU utilisation with nvidia-smi and reduce task.env.num_envs or the rendering resolution if you run low on GPU memory.
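
For example, to print utilisation and memory usage every five seconds during the benchmark:

# Periodic GPU utilisation and memory report
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 5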

Logging and Outputs

  • Checkpoints: outputs/<experiment>/checkpoints/

  • Episode statistics (JSON/CSV): outputs/<experiment>/logs/

  • Visualisations (if enable_livestream is true): outputs/<experiment>/media/

Use python analyze.py --run <experiment> (if provided) or open the Weights & Biases dashboard to explore learning curves.
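
For instance, to pick up the newest checkpoint from the shell and pass it back via resume_path (the hover_ppo experiment name is illustrative; substitute your own run directory):

# Show the most recently written checkpoint (experiment name is a placeholder)
ls -t outputs/hover_ppo/checkpoints/ | head -n 1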

Next Steps

  • Automate large sweeps with the scripts in Demos.

  • Explore environment configuration files under cfg/ to fine-tune actuators, sensors, and disturbance settings (see the command-line override example after this list).

  • Share reproducible runs by exporting the command history: python scripts/train.py --dry-run > command.txt.
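
If you prefer not to edit the files under cfg/ directly, the same dotted overrides used throughout this guide can be combined on the command line; the example below reuses only overrides already shown above, with arbitrary values:

# Override configuration values from the command line instead of editing cfg/ files
python scripts/train.py task=HoverRand algo=ppo headless=true task.drone_model.name=iAUV task.env.num_envs=1024 seed=7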