Specifically, when working with small or single-node dev/homelab clusters, like in my Docker Swarm Setup

With MicroCeph

I couldn’t find a clean way to follow in the footsteps of this Proxmox forum post on a MicroCeph deployment, since I didn’t have a separate service for each OSD. I could stop an OSD (ceph osd stop 1), but then couldn’t restart it without bouncing all the OSDs.
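
For reference, the snap-based MicroCeph install runs all of the OSDs under one shared service rather than one unit per OSD, which is why a single OSD can’t be bounced on its own. A quick way to see this (assuming the standard microceph snap service names):

    # List the services provided by the microceph snap; note the single
    # shared "osd" service rather than one unit per OSD.
    snap services microceph

    # Restarting that service bounces every OSD on the node at once:
    sudo snap restart microceph.osd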

So growing the existing OSDs in place isn’t really an option. Instead, I replace them with larger ones (a consolidated shell sketch follows the list):

  1. Add new, larger virtual disks to the VM.
  2. sudo microceph disk list to verify the new disks are available.
  3. Disable the PG autoscaler (which would otherwise likely bump the configured number of PGs based on the temporarily increased OSD availability, and then make things awkward when that availability drops again) with sudo ceph osd pool set noautoscale.
  4. For each new disk, run sudo microceph disk add $DISK_DEVICE_PATH to adopt them as OSDs.
  5. watch sudo ceph -s to wait for the cluster to become healthy again (after redistributing data to the new OSDs).
  6. sudo ceph osd df to confirm that the new disks are listed as OSDs with state “up”, and to grab the OSD ID of each old disk.
  7. For each old disk, sudo microceph disk remove $OSD_ID, then check sudo ceph -s to verify cluster health before moving on to the next.
  8. Re-enable the PG autoscaler with sudo ceph osd pool unset noautoscale.
  9. The old virtual disks can then be detached from the VM and deleted.
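
Putting the MicroCeph steps together, the whole migration looks roughly like the following shell session. The disk paths and OSD IDs are placeholders for illustration; in practice, wait for sudo ceph -s to report a healthy cluster between removals rather than scripting them blindly.

    # 2. Verify the newly attached virtual disks are visible to MicroCeph.
    sudo microceph disk list

    # 3. Keep the PG autoscaler from reacting to the temporary extra capacity.
    sudo ceph osd pool set noautoscale

    # 4. Adopt each new disk as an OSD (example device paths).
    for disk in /dev/sdc /dev/sdd; do
        sudo microceph disk add "$disk"
    done

    # 5. Wait for backfill to finish and the cluster to return to HEALTH_OK.
    watch sudo ceph -s

    # 6. Note the IDs of the old, smaller OSDs.
    sudo ceph osd df

    # 7. Remove the old OSDs one at a time, checking health in between.
    for osd_id in 0 1; do            # example IDs of the old OSDs
        sudo microceph disk remove "$osd_id"
        sudo ceph -s                 # confirm health before the next removal
    done

    # 8. Re-enable the autoscaler once only the new OSDs remain.
    sudo ceph osd pool unset noautoscale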

With cephadm

With a not-micro Ceph deployment, you can probably make do with expanding the existing OSDs in place, one by one (sketched after the list):

  1. Expand virtual disks.
  2. For each OSD:
     1. sudo systemctl stop ceph-osd@${OSD_ID}.service
     2. sudo ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${OSD_ID}
     3. sudo systemctl start ceph-osd@${OSD_ID}.service
     4. sudo ceph -s to verify the cluster is healthy.
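
As a loop, that might look like the sketch below. The OSD IDs are example placeholders, and it assumes the OSDs run as ceph-osd@<id> systemd units as in the steps above; in practice, wait for the cluster to settle before moving on to the next OSD.

    # Expand each OSD after its backing virtual disk has been grown.
    # The OSD IDs below are examples; substitute your own.
    for OSD_ID in 0 1 2; do
        sudo systemctl stop "ceph-osd@${OSD_ID}.service"

        # Tell BlueStore to grow into the new space on the underlying device.
        sudo ceph-bluestore-tool bluefs-bdev-expand \
            --path "/var/lib/ceph/osd/ceph-${OSD_ID}"

        sudo systemctl start "ceph-osd@${OSD_ID}.service"

        # Check cluster health before touching the next OSD.
        sudo ceph -s
    done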

See also: