Understanding Ceph Placement Groups (TOO_MANY_PGS)

The Issue

My first foray into Ceph was at the end of last year. We had a small 72TB cluster that was split across 2 OSD nodes. I was tasked with upgrading the Ceph release running on the cluster from Jewel to Luminous, so that we could try out the new Bluestore storage backend, and with adding two more OSD nodes to the cluster, which brought us up to a humble 183TB....

March 14, 2018 · 6 min · Eugene de Beste