Track Refinement with Weights & Biases

Log ROCKET refinement metrics and outputs in real time with W&B

ROCKET can stream rk.refine metrics to Weights & Biases (W&B) while a run is active.

Use this when you want live curves, run comparison, and shared experiment history.

Note: This works for both cryo-EM and X-ray refinement. You only need to update the YAML you pass to rk.refine.

1. Install W&B

Install wandb in the same environment where you run ROCKET:

pip install wandb

2. Choose a logging mode

Online mode: use this when you want a hosted dashboard and easy sharing.

  1. Create an account at wandb.ai.

  2. Log in from the same shell where you run ROCKET:

wandb login

Paste the API key from wandb.ai/authorize. (If you prefer local-only logging, run wandb offline before refinement and upload the run later with wandb sync.)

3. Update your refinement config

Open the same YAML file you pass to rk.refine.

Add the W&B fields you want to use:

use_wandb: true
wandb_entity: my-lab
wandb_project: rocket-refinement
wandb_name: 8p4pH-phase1
wandb_tags:
  - cryo-em
  - phase1
wandb_notes: First ROCKET run with live tracking

  • wandb_entity — the username or team workspace that owns the run.

  • wandb_project — groups related runs.

  • wandb_name — the run label.

  • wandb_tags and wandb_notes — optional metadata.

If you leave wandb_entity unset, W&B uses your default logged-in account.
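For orientation, the fields above map naturally onto the arguments of wandb.init. A minimal sketch of that mapping, assuming ROCKET passes the fields through directly (the dict below stands in for your parsed YAML; check your ROCKET version for the actual behavior):

```python
# Stand-in for the parsed YAML config (e.g. from yaml.safe_load).
config = {
    "use_wandb": True,
    "wandb_entity": "my-lab",
    "wandb_project": "rocket-refinement",
    "wandb_name": "8p4pH-phase1",
    "wandb_tags": ["cryo-em", "phase1"],
    "wandb_notes": "First ROCKET run with live tracking",
}

def wandb_init_kwargs(config):
    """Build keyword arguments for wandb.init from the wandb_* fields."""
    return {
        # None lets W&B fall back to your default logged-in account.
        "entity": config.get("wandb_entity"),
        "project": config.get("wandb_project"),
        "name": config.get("wandb_name"),
        "tags": config.get("wandb_tags", []),
        "notes": config.get("wandb_notes"),
    }

kwargs = wandb_init_kwargs(config)
# wandb.init(**kwargs) would then start the tracked run.
```

Leaving a field out of the YAML simply leaves the corresponding argument as its default, which matches the note above about wandb_entity.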

4. Run refinement as usual

Launch refinement with your updated config.

You can use the same setup for either Launch with Your Own Cryo-EM Data or Launch with Your Own X-ray Data.

If you run phase 2, keep the same W&B fields in the phase 2 config.

5. View results

Open your project in the browser at wandb.ai/<entity>/<project>.

Use it to compare runs, inspect metrics, and share results.

What ROCKET logs to W&B

When use_wandb: true, ROCKET can log:

  • Refinement settings from your YAML

  • Per-iteration metrics such as loss and confidence

  • Run metadata such as names, tags, and notes

  • Saved models and trajectory artifacts when enabled

Exactly which metrics appear depends on your ROCKET version and enabled outputs.
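To make the per-iteration logging concrete, here is a sketch of its shape. The metric names ("loss", "plddt") are illustrative assumptions rather than ROCKET's exact keys; with W&B enabled, the analogous call inside the refinement loop is wandb.log(metrics, step=i):

```python
history = []  # local stand-in for the W&B run history

def log_metrics(metrics, step):
    """Record one step's metrics, as wandb.log(metrics, step=step) would."""
    history.append({"step": step, **metrics})

for i in range(3):  # stand-in for the refinement loop
    loss = 1.0 / (i + 1)      # fake decreasing loss
    plddt = 70.0 + 5.0 * i    # fake rising confidence
    log_metrics({"loss": loss, "plddt": plddt}, step=i)

# Each entry becomes one point on the live W&B curves.
```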

A practical pattern

A simple naming scheme makes comparison much easier:

Keep one project per dataset family, then use names and tags for phase, target, and hyperparameters.
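One way to make that scheme mechanical is a small helper that derives wandb_name and wandb_tags from the run's parameters. The function name and encoding below are suggestions, not ROCKET conventions:

```python
def run_identity(target, phase, lr):
    """Derive a W&B run name and tags from run parameters.

    Example scheme only: one project per dataset family, with
    target, phase, and hyperparameters encoded in the name and tags.
    """
    name = f"{target}-phase{phase}-lr{lr:g}"
    tags = [target, f"phase{phase}", f"lr={lr:g}"]
    return name, tags

name, tags = run_identity("8p4pH", 1, 1e-3)
# name == "8p4pH-phase1-lr0.001"
```

Generating names this way keeps the workspace filterable: every run for a target shares a tag, and hyperparameter variants sort next to each other.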
