
run-experiment

The run-experiment skill in Claude Code deploys training jobs to a local GPU, remote servers, Vast.ai, or a Modal serverless environment. It auto-detects the target machine's hardware status and conda environment, syncs code, and executes commands, sparing you repeated SSH and configuration work. It suits researchers who frequently switch between multiple GPU machines for ablation experiments or hyperparameter searches.

Deploy and run ML experiments on local, remote, Vast.ai, or Modal serverless GPU. Use when user says "run experiment", "deploy to server", "跑实验", or needs to launch training jobs.

Repo
Chanw-research/claude-code-paper-writing
Slug
run-experiment

SKILL.md

Run Experiment

Deploy and run ML experiment: $ARGUMENTS

Workflow

Step 1: Detect Environment

Read the project's CLAUDE.md to determine the experiment environment:

  • Local GPU (gpu: local): Look for local CUDA/MPS setup info
  • Remote server (gpu: remote): Look for SSH alias, conda env, code directory
  • Vast.ai (gpu: vast): Check for vast-instances.json at project root — if a running instance exists, use it. Also check CLAUDE.md for a ## Vast.ai section.
  • Modal (gpu: modal): Serverless GPU via Modal. No SSH, no Docker, auto scale-to-zero. Delegate to /serverless-modal.

Modal detection: If CLAUDE.md has gpu: modal or a ## Modal section, the entire deployment is handled by /serverless-modal. Jump to Step 4: Deploy (Modal) — Steps 2-3 are not needed (Modal handles code sync and GPU allocation automatically).

Vast.ai detection priority:

  1. If CLAUDE.md has gpu: vast or a ## Vast.ai section:
    • If vast-instances.json exists and has a running instance → use that instance
    • If no running instance → call /vast-gpu provision which analyzes the task, presents cost-optimized GPU options, and rents the user's choice
  2. If no server info is found in CLAUDE.md, ask the user.
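
In concrete terms, detection comes down to two lookups, sketched below in Python. The gpu: key is the one described above; treating vast-instances.json as a JSON list of objects with a status field is an illustrative assumption, not a format fixed by this skill.

import json
import re
from pathlib import Path

def detect_gpu_target(project: Path) -> str:
    # Read the gpu: setting (local / remote / vast / modal) from CLAUDE.md
    claude_md = project / "CLAUDE.md"
    text = claude_md.read_text() if claude_md.exists() else ""
    match = re.search(r"^gpu:\s*(\w+)", text, re.MULTILINE)
    return match.group(1) if match else "ask-user"

def running_vast_instance(project: Path):
    # Return the first running instance listed in vast-instances.json, if any
    path = project / "vast-instances.json"
    if not path.exists():
        return None
    instances = json.loads(path.read_text())
    return next((inst for inst in instances if inst.get("status") == "running"), None)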

Step 2: Pre-flight Check

Check GPU availability on the target machine:

Remote (SSH):

ssh <server> nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader

Remote (Vast.ai):

ssh -p <PORT> root@<HOST> nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader

(Read ssh_host and ssh_port from vast-instances.json, or run vastai ssh-url <INSTANCE_ID> which returns ssh://root@HOST:PORT)
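
If the host and port are needed programmatically, the ssh:// URL splits cleanly with the standard library; a small sketch (the URL literal is a placeholder, not a real instance):

from urllib.parse import urlparse

ssh_url = "ssh://root@176.10.0.1:41234"   # example output of `vastai ssh-url <INSTANCE_ID>`
parts = urlparse(ssh_url)
host, port = parts.hostname, parts.port    # -> "176.10.0.1", 41234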

Local:

nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader
# or for Mac MPS:
python -c "import torch; print('MPS available:', torch.backends.mps.is_available())"

Free GPU = memory.used < 500 MiB.
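
Applied to the nvidia-smi queries above, GPU selection can be as simple as the following sketch; it adds ,nounits to the format string so the memory columns parse as plain integers:

import subprocess

def free_gpus(threshold_mib: int = 500) -> list[int]:
    # Query index, memory.used, memory.total as bare numbers (nounits drops the "MiB" suffix)
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    free = []
    for line in out.strip().splitlines():
        index, used, _total = (field.strip() for field in line.split(","))
        if int(used) < threshold_mib:          # free GPU = memory.used < 500 MiB
            free.append(int(index))
    return free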

Step 3: Sync Code (Remote Only)

Check the project's CLAUDE.md for a code_sync setting. If not specified, default to rsync.

Option A: rsync (default)

Only sync necessary files — NOT data, checkpoints, or large files:

rsync -avz --include='*/' --include='*.py' --exclude='*' --prune-empty-dirs <local_src>/ <server>:<remote_dst>/

Option B: git (when code_sync: git is set in CLAUDE.md)

Push local changes to remote repo, then pull on the server:

# 1. Push from local
git add -A && git commit -m "sync: experiment deployment" && git push

# 2. Pull on server
ssh <server> "cd <remote_dst> && git pull"

Benefits: version-tracked, multi-server sync with one push, no rsync include/exclude rules needed.

Option C: Vast.ai instance

Sync code to the Vast.ai instance (always rsync; the code dir is /workspace/project/):

rsync -avz -e "ssh -p <PORT>" \
  --include='*.py' --include='*.yaml' --include='*.yml' --include='*.json' \
  --include='*.txt' --include='*.sh' --include='*/' \
  --exclude='*.pt' --exclude='*.pth' --exclude='*.ckpt' \
  --exclude='__pycache__' --exclude='.git' --exclude='data/' \
  --exclude='wandb/' --exclude='outputs/' \
  ./ root@<HOST>:/workspace/project/

If requirements.txt exists, install dependencies:

scp -P <PORT> requirements.txt root@<HOST>:/workspace/
ssh -p <PORT> root@<HOST> "pip install -q -r /workspace/requirements.txt"

Step 3.5: W&B Integration (when wandb: true in CLAUDE.md)

Skip this step entirely if wandb is not set or is false in CLAUDE.md.

Before deploying, ensure the experiment scripts have W&B logging:

  1. Check if wandb is already in the script — look for import wandb or wandb.init. If present, skip to Step 4.

  2. If not present, add W&B logging to the training script:

    import wandb
    wandb.init(project=WANDB_PROJECT, name=EXP_NAME, config={...hyperparams...})
    
    # Inside training loop:
    wandb.log({"train/loss": loss, "train/lr": lr, "step": step})
    
    # After eval:
    wandb.log({"eval/loss": eval_loss, "eval/ppl": ppl, "eval/accuracy": acc})
    
    # At end:
    wandb.finish()
    
  3. Metrics to log (add whichever apply to the experiment):

    • train/loss — training loss per step
    • train/lr — learning rate
    • eval/loss, eval/ppl, eval/accuracy — eval metrics per epoch
    • gpu/memory_used — GPU memory (via torch.cuda.max_memory_allocated())
    • speed/samples_per_sec — throughput
    • Any custom metrics the experiment already computes
  4. Verify wandb login on the target machine:

    ssh <server> "wandb status"  # should show logged in
    # If not logged in:
    ssh <server> "wandb login <WANDB_API_KEY>"
    

The W&B project name and API key come from CLAUDE.md (see example below). The experiment name is auto-generated from the script name + timestamp.
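
The exact naming scheme is not fixed by the skill; a minimal sketch of "script name + timestamp" could look like this:

from datetime import datetime
from pathlib import Path

def make_exp_name(script: str) -> str:
    # e.g. "train_lora.py" run on Jan 1 at 14:30 -> "train_lora_20250101_1430"
    return f"{Path(script).stem}_{datetime.now():%Y%m%d_%H%M}"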

Step 4: Deploy

Remote (via SSH + screen)

For each experiment, create a dedicated screen session with GPU binding:

ssh <server> "screen -dmS <exp_name> bash -c '\
  eval \"\$(<conda_path>/conda shell.bash hook)\" && \
  conda activate <env> && \
  CUDA_VISIBLE_DEVICES=<gpu_id> python <script> <args> 2>&1 | tee <log_file>'"

Vast.ai instance

No conda needed — the Docker image has the environment. Use /workspace/project/ as working dir:

ssh -p <PORT> root@<HOST> "screen -dmS <exp_name> bash -c '\
  cd /workspace/project && \
  CUDA_VISIBLE_DEVICES=<gpu_id> python <script> <args> 2>&1 | tee /workspace/<log_file>'"

After launching, update the experiment field in vast-instances.json for this instance.

Modal (serverless)

When gpu: modal is detected, delegate to /serverless-modal:

  1. Analyze task — determine VRAM needs, choose GPU, estimate cost
  2. Generate launcher — create a modal_launcher.py that wraps the training script using modal.Mount.from_local_dir for code and modal.Volume for results
  3. Run the launcher — modal run modal_launcher.py (runs locally, GPU executes remotely)
  4. Collect results — results return via Volume or stdout, no manual download needed

Key Modal settings from CLAUDE.md:

  • modal_gpu: GPU override (default: auto-select based on VRAM analysis)
  • modal_timeout: Max seconds (default: 21600 = 6 hours)
  • modal_volume: Named volume for persistent results

No SSH, no code sync, no screen sessions needed. Modal handles everything.
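
For orientation, a stripped-down modal_launcher.py might look roughly like the sketch below. The app name, volume name, image, remote paths, and the train.py entry point are illustrative assumptions; the GPU type, timeout, and volume would come from the CLAUDE.md settings listed above.

import subprocess
import modal

app = modal.App("run-experiment")

# Persistent storage for results (maps to modal_volume in CLAUDE.md)
results = modal.Volume.from_name("experiment-results", create_if_missing=True)

image = modal.Image.debian_slim().pip_install_from_requirements("requirements.txt")

@app.function(
    image=image,
    gpu="A10G",                                   # or the modal_gpu override
    timeout=21600,                                # modal_timeout, default 6 hours
    volumes={"/results": results},
    mounts=[modal.Mount.from_local_dir(".", remote_path="/root/project")],
)
def train(args: str):
    # The GPU container runs the training script; anything written to /results persists
    subprocess.run(f"cd /root/project && python train.py {args}", shell=True, check=True)
    results.commit()

@app.local_entrypoint()
def main(args: str = ""):
    # `modal run modal_launcher.py` executes this locally and train() remotely
    train.remote(args)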

Local

# Linux with CUDA
CUDA_VISIBLE_DEVICES=<gpu_id> python <script> <args> 2>&1 | tee <log_file>

# Mac with MPS (PyTorch uses MPS automatically)
python <script> <args> 2>&1 | tee <log_file>

For local long-running jobs, use run_in_background: true to keep the conversation responsive.

Step 5: Verify Launch

Remote (SSH):

ssh <server> "screen -ls"

Remote (Vast.ai):

ssh -p <PORT> root@<HOST> "screen -ls"

Modal:

modal app list         # Check app is running
modal app logs <app>   # Stream logs

Local: Check process is running and GPU is allocated.

Step 6: Feishu Notification (if configured)

After deployment is verified, check ~/.claude/feishu.json:

  • Send experiment_done notification: which experiments launched, which GPUs, estimated time
  • If config absent or mode "off": skip entirely (no-op)
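
As a sketch, this step could be implemented as below. The webhook key inside feishu.json is an assumption (only the file location and the mode field are described above); the payload follows Feishu's custom-bot message format.

import json
from pathlib import Path
from urllib import request

def notify_experiment_done(text: str) -> None:
    config_path = Path.home() / ".claude" / "feishu.json"
    if not config_path.exists():
        return                                        # no config: no-op
    config = json.loads(config_path.read_text())
    if config.get("mode") == "off":
        return                                        # notifications turned off: no-op
    payload = {"msg_type": "text", "content": {"text": text}}
    req = request.Request(
        config["webhook"],                            # assumed key holding the bot webhook URL
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

A call like notify_experiment_done("exp_a launched on GPU 0, ETA ~2 h") then carries the content listed above.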

Step 7: Auto-Destroy Vast.ai Instance (when gpu: vast and auto_destroy: true)

Skip this step if not using Vast.ai or if auto_destroy is false.

After the experiment completes (detected via /monitor-experiment or screen session ending):

  1. Download results from the instance:

    rsync -avz -e "ssh -p <PORT>" root@<HOST>:/workspace/project/results/ ./results/
    
  2. Download logs:

    scp -P <PORT> root@<HOST>:/workspace/*.log ./logs/
    
  3. Destroy the instance to stop billing:

    vastai destroy instance <INSTANCE_ID>
    
  4. Update vast-instances.json — mark status as destroyed (a sketch follows this list).

  5. Report cost:

    Vast.ai instance <ID> destroyed. Total runtime and cost: <hours> h × <$/hr> ≈ $<total>.
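
For step 4, a sketch of the bookkeeping update, assuming (as in the earlier sketches) that vast-instances.json is a JSON list of objects with id and status fields:

import json
from pathlib import Path

def mark_destroyed(instance_id: int, path: Path = Path("vast-instances.json")) -> None:
    # Flip the matching entry's status so later runs don't try to reuse the instance
    instances = json.loads(path.read_text())
    for inst in instances:
        if inst.get("id") == instance_id:
            inst["status"] = "destroyed"
    path.write_text(json.dumps(instances, indent=2))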
