OpenStack backed with Ceph Erasure Coded Pools

If you're impatient, skip to the solution section 😄 Over the last few months I've been working with the University of Cape Town on the Ilifu research cloud project. The focus for the initial release of the cloud is mainly to provide compute and storage to astronomy and bioinformatics use cases. The technology powering this cloud is the ever-growing-in-popularity combination of OpenStack (Queens release) as the virtualization platform and Ceph (Luminous) as the storage backend....
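As a rough sketch of the kind of setup the post covers (the profile, pool, and image names below are placeholders, not the cluster's actual configuration), an erasure coded pool that can back RBD images on Luminous is created along these lines:

    # Define an erasure code profile: k data chunks + m coding chunks
    ceph osd erasure-code-profile set ec-profile k=4 m=2
    # Create the erasure coded data pool with that profile
    ceph osd pool create ec-data 128 128 erasure ec-profile
    # RBD needs partial overwrites on the EC pool (supported since Luminous)
    ceph osd pool set ec-data allow_ec_overwrites true
    ceph osd pool application enable ec-data rbd
    # Image metadata still lives in a replicated pool; only the data goes to the EC pool
    rbd create --size 10G --data-pool ec-data rbd/my-image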

August 23, 2018 · 5 min · Eugene de Beste

Understanding Ceph Placement Groups (TOO_MANY_PGS)

The Issue My first foray into Ceph was at the end of last year. We had a small 72TB cluster that was split across 2 OSD nodes. I was tasked with upgrading the Ceph release running on the cluster from Jewel to Luminous, so that we could try out the new BlueStore storage backend, and with adding two more OSD nodes to the cluster, which brought us up to a humble 183TB....
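For context on why the warning appears, the usual rule of thumb for sizing placement groups, and the commands for inspecting how they are spread across OSDs, look roughly like this (the figures are illustrative, not the cluster's actual values):

    # Rule of thumb: total PGs ~= (number of OSDs * 100) / replica size,
    # rounded to a power of two and divided across the pools.
    # e.g. 24 OSDs with size=3: (24 * 100) / 3 = 800 -> 1024 (or 512 to stay conservative)
    ceph health detail        # shows the TOO_MANY_PGS warning and the measured PGs-per-OSD ratio
    ceph osd df               # the PGS column shows how many PGs each OSD is carrying
    ceph osd pool ls detail   # pg_num / pgp_num for every pool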

March 14, 2018 · 6 min · Eugene de Beste

Removing CephFS from a Ceph Cluster (Luminous)

While upgrading the packages for the Ceph cluster at SANBI, I encountered an issue where the Ceph MDS daemon was causing the CephFS filesystem to become unresponsive and get stuck in the active(laggy) state. I decided to strip down the CephFS deployment and reinstall it, since the existing one was a test deployment (set up before my time) and I wanted to go through the process of setting it up from scratch. It was surprisingly difficult to find a simple process for removing an MDS, but after some digging I ended up using the following:...
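The full article has the exact commands, but as a rough sketch of the general CephFS teardown sequence on Luminous (the filesystem and pool names are placeholders, and this is not necessarily the exact sequence the post ends up with):

    # Stop the MDS daemon(s) first; "ceph fs rm" refuses to run while an MDS is still active
    systemctl stop ceph-mds.target
    # Remove the filesystem definition from the cluster
    ceph fs rm cephfs --yes-i-really-mean-it
    # Optionally delete the now-unused pools (requires mon_allow_pool_delete=true)
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it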

March 13, 2018 · 1 min · Eugene de Beste