---
tags:
  - ceph
  - devstack
  - openstack
---
[[Installing Ceph on devstack]]

The pool names are:
- `volumes` for Cinder volumes
- `images` for Glance images
- `vms` for Nova "local storage" RBD volumes
- `backups` for Cinder volume backups
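
To double-check which pools actually exist, `ceph osd pool ls` lists them (standard Ceph command, not captured from this session; internal pools like `.mgr` may show up too):

```
ubuntu@gobs-devstack:~$ sudo ceph osd pool ls
```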

https://docs.ceph.com/en/reef/rbd/rados-rbd-cmds/

```
ubuntu@gobs-devstack:~$ sudo rbd ls volumes
volume-5b6cc251-8f3e-4572-a8b9-52efa390ebc3
volume-6f5e81dd-40f3-42f9-aad3-afcf5696387c
volume-ac39d5be-9606-4711-948e-e76c035e2a25
```
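
`rbd ls -l` gives a long listing with size and parent per image, handy before drilling into a single volume (just the command here, not a captured run):

```
ubuntu@gobs-devstack:~$ sudo rbd ls -l volumes
```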

Specify the pool as a `pool/image` prefix in the `info` command:
```
ubuntu@gobs-devstack:~$ sudo rbd info volumes/volume-6f5e81dd-40f3-42f9-aad3-afcf5696387c
rbd image 'volume-6f5e81dd-40f3-42f9-aad3-afcf5696387c':
	size 10 GiB in 2560 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 17d385edc7a94
	block_name_prefix: rbd_data.17d385edc7a94
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff
	op_features:
	flags:
	create_timestamp: Mon Mar 31 12:56:46 2025
	access_timestamp: Mon Mar 31 12:56:46 2025
	modify_timestamp: Mon Mar 31 12:56:46 2025
	parent: volumes/volume-5b6cc251-8f3e-4572-a8b9-52efa390ebc3@snapshot-3d1f148a-5fe4-4df2-87c3-4272565b82c2
	overlap: 10 GiB
```
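
The `parent:` line means this volume is a copy-on-write clone of a snapshot on another volume. To walk the chain the other way, `rbd children` lists the clones of a snapshot; run against the parent above, it should report at least this volume (sketch, not a captured run):

```
ubuntu@gobs-devstack:~$ sudo rbd children volumes/volume-5b6cc251-8f3e-4572-a8b9-52efa390ebc3@snapshot-3d1f148a-5fe4-4df2-87c3-4272565b82c2
```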

Using `rbd du` we can get actual usage stats (provisioned vs. used, per image and per snapshot):
```
ubuntu@gobs-devstack:~$ sudo rbd du volumes/volume-d66216fc-0b4a-4480-be0e-5530fb8e6004
NAME                                                                                                PROVISIONED  USED
volume-d66216fc-0b4a-4480-be0e-5530fb8e6004@volume-e65e7bcf-88a8-4afb-ba5c-0dbb27f402d9.clone_snap        1 GiB  44 MiB
volume-d66216fc-0b4a-4480-be0e-5530fb8e6004@snapshot-05b07e40-8cbc-4f2d-896c-bbfc0cd2c801                10 GiB  52 MiB
volume-d66216fc-0b4a-4480-be0e-5530fb8e6004                                                              10 GiB  10 GiB
<TOTAL>                                                                                                  10 GiB  10 GiB
```
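
`rbd du` also takes `-p`/`--pool` to report every image in a pool at once, useful for spotting where the space went (not captured here):

```
ubuntu@gobs-devstack:~$ sudo rbd du -p volumes
```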