Single node PVE with Ceph
Usually when I'm testing or learning things in my lab, I do it on one of the machines that is currently unused, so in the case of PVE with Ceph it would probably be a NUC with two drives - one for ZFS and the second for Ceph. In such a config it's easy to get up and running, because the only thing required is setting the pool's replica count to 1:
ceph osd pool set <pool_name> size 1
ceph osd pool set <pool_name> min_size 1
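To confirm the change took effect, you can read the settings back (use whatever pool name you set above):
ceph osd pool get <pool_name> size
ceph osd pool get <pool_name> min_size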
However, I recently got an Advance STOR-1 from OVH with a single 500GB NVMe and four 4TB HDDs, mainly because I've decided to stop using multiple ARM-2T servers as OSD hosts in my Ceph cluster. One of the reasons was that under heavier traffic they were too unstable, and the nonexistent support outweighed the benefits of the low cost.
After installing Proxmox 6.1 via IPMI on the main NVMe drive and adding the four SATA drives as OSDs, the next step was changing the failure domain from the default host to osd. Since I wanted to use an erasure coded pool to get as much usable storage as possible, the simplest way was to start by creating a pool with the profile I was interested in (in my case the default k=3, m=1), which inserts the relevant rule into the CRUSH map. Of course such a pool won't be usable yet and Ceph will report the cluster as unhealthy, because at this point it still expects at least four OSDs on separate hosts.
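For reference, creating such a pool from the CLI could look roughly like this (the pool name and PG count are just examples, and I'm assuming the default erasure code profile):
ceph osd erasure-code-profile get default
ceph osd pool create <ec_pool_name> 32 32 erasure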
Having all the rules we'll need in place, we can move on to adjusting the CRUSH map. First we have to dump it and decompile it to text:
ceph osd getcrushmap -o crush_map_compressed
crushtool -d crush_map_compressed -o crush_map_decompressed
Next we have to modify the lines starting with "step chooseleaf" in the replicated_rule and erasure-code rules:
sed -r 's/step chooseleaf (firstn|indep) 0 type host/step chooseleaf \1 0 type osd/g' crush_map_decompressed > new_crush_map_decompressed
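After the substitution, the replicated rule in the decompiled map should look roughly like this (the rule id and size limits may differ on your cluster); the erasure-code rule gets the same change, just with indep instead of firstn:
rule replicated_rule {
	id 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type osd
	step emit
}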
After this change we can compile and import our new CRUSH map:
crushtool -c new_crush_map_decompressed -o new_crush_map_compressed
ceph osd setcrushmap -i new_crush_map_compressed
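It's worth double-checking that the imported rules really use osd as the failure domain:
ceph osd crush rule dump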
This way we've forced Ceph to work like a simple local software RAID, and the cluster should no longer report errors.
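A quick status check should now report HEALTH_OK (assuming nothing else is wrong with the cluster):
ceph -s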