Specifically when working with small or single-node dev/homelab clusters, like in my Docker Swarm Setup…
With MicroCeph
I couldn’t find a clean way to follow in the footsteps of this Proxmox forum post with a MicroCeph deployment, since I didn’t have a separate service for each OSD. I could stop an OSD (`ceph osd stop 1`) but then couldn’t restart it without bouncing all the OSDs.
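As a quick way to see why, you can inspect the snap’s service layout. This is a sketch assuming a default MicroCeph snap install; the exact unit names are my assumption and may vary by release:

```bash
# List the services the MicroCeph snap runs (assuming a default snap install).
snap services microceph

# The OSDs share one systemd unit, so restarting it bounces every OSD at once.
systemctl status snap.microceph.osd.service
```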
So growing existing OSDs isn’t really an option. Instead, I do the following (sketched as a single script after the list):
- Add new, larger virtual disks to the VM.
- `sudo microceph disk list` to verify the new disks are available.
- Disable the PG autoscaler (which would otherwise likely increase the configured number of PGs based on the new OSD availability and then make things awkward when you reduce the availability again) with `sudo ceph osd pool set noautoscale`.
- For each new disk, run `sudo microceph disk add $DISK_DEVICE_PATH` to adopt it as an OSD.
- `watch sudo ceph -s` to wait for the cluster to become healthy again (after redistributing data to the new OSDs).
- `sudo ceph osd df` to confirm that the new disks are listed as OSDs with state “up”, and to grab the OSD ID of each old disk.
- For each old disk, `sudo microceph disk remove $OSD_ID`, then check `sudo ceph -s` to verify cluster health before moving on to the next.
- Re-enable the PG autoscaler with `sudo ceph osd pool unset noautoscale`.
- The old virtual disks can then be detached from the VM and deleted.
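Putting those steps together, here’s a rough end-to-end sketch. The disk paths and OSD IDs are placeholders for whatever `microceph disk list` and `ceph osd df` report in your setup, and the health-polling loops stand in for the interactive `watch sudo ceph -s` checks above:

```bash
#!/usr/bin/env bash
# Sketch of the MicroCeph disk-swap flow described above.
# NEW_DISKS and OLD_OSD_IDS are placeholders -- substitute the device paths
# from `microceph disk list` and the OSD IDs from `ceph osd df`.
set -euo pipefail

NEW_DISKS=(/dev/sdc /dev/sdd /dev/sde)   # hypothetical new, larger virtual disks
OLD_OSD_IDS=(0 1 2)                      # hypothetical IDs of the old OSDs

# Confirm the new disks are visible and keep the PG count stable while we churn.
sudo microceph disk list
sudo ceph osd pool set noautoscale

# Adopt each new disk as an OSD.
for disk in "${NEW_DISKS[@]}"; do
  sudo microceph disk add "$disk"
done

# Wait until the cluster is healthy again before touching the old OSDs.
until sudo ceph health | grep -q HEALTH_OK; do sleep 30; done
sudo ceph osd df   # new OSDs should show up with state "up"

# Drop the old OSDs one at a time, letting the cluster settle in between.
for osd_id in "${OLD_OSD_IDS[@]}"; do
  sudo microceph disk remove "$osd_id"
  until sudo ceph health | grep -q HEALTH_OK; do sleep 30; done
done

# Let the autoscaler manage PG counts again.
sudo ceph osd pool unset noautoscale
```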
With cephadm
With a non-micro Ceph deployment, you can probably make do with expanding the existing OSDs one by one (see the sketch after the list):
- Expand virtual disks.
- For each OSD:
  - `sudo systemctl stop ceph-osd@${OSD_ID}.service`
  - `sudo ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${OSD_ID}`
  - `sudo systemctl start ceph-osd@${OSD_ID}.service`
  - `sudo ceph -s` to verify the cluster is healthy.
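The same loop as a rough script. `OSD_IDS` is a placeholder for the IDs on the host, and this assumes the packaged `ceph-osd@<id>.service` units and the default `/var/lib/ceph/osd/ceph-<id>` data paths used in the steps above:

```bash
#!/usr/bin/env bash
# Sketch of the per-OSD expansion loop above. OSD_IDS is a placeholder;
# adjust it to the OSDs actually running on this host.
set -euo pipefail

OSD_IDS=(0 1 2)   # hypothetical OSD IDs on this host

for osd_id in "${OSD_IDS[@]}"; do
  sudo systemctl stop "ceph-osd@${osd_id}.service"
  # Grow BlueFS onto the newly enlarged block device.
  sudo ceph-bluestore-tool bluefs-bdev-expand --path "/var/lib/ceph/osd/ceph-${osd_id}"
  sudo systemctl start "ceph-osd@${osd_id}.service"
  # Wait for the cluster to report healthy before touching the next OSD.
  until sudo ceph health | grep -q HEALTH_OK; do sleep 10; done
done
```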
See also: