
OpenShift 4: Red Hat OpenShift Container Storage 4.5 Lab Installation


Official documentation of Red Hat OpenShift Container Storage can be found here.

This blog is written only to show the technical feasibility of such a configuration; it does not take official supportability scope into account.

As you may already know, OpenShift Container Storage (OCS) 4.x went GA about two weeks ago.

OCS uses Ceph Storage as its backend (to learn more about Ceph, please read here) and NooBaa for S3-compatible storage.

In this blog we will install OCS:

  • on a bare-metal UPI OpenShift 4.5 installation (libvirt guest VMs).
  • using Local Volume as the storageClass provider for the OCS OSD disks.
  • deploying the initial cluster via the CLI.
  • customizing the OSD size from the default 2Ti down to 50Gi.
  • customizing resource limits and requests for the components.

The Local Volume disk topology:

  • Worker01: /dev/vdc(50Gi)
  • Worker02: /dev/vdc(50Gi)
  • Worker03: /dev/vdc(50Gi)

Configuration Prerequisites:

  • Local Volumes are already configured.
  • In this case, our local volume storageClass looks like this:
# oc get sc
NAME                          PROVISIONER                             AGE
local-sc-osd                  kubernetes.io/no-provisioner            40m
  • local-sc-mon points to /dev/vdb and will be used by the OCS mon component.
  • local-sc-osd points to /dev/vdc and will be used by the OCS OSD component.

NOTE: OCS mon requires volumeMode: Filesystem and OCS OSD requires volumeMode: Block.

Example of the LocalVolume YAML for the OSD disks:

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-osd"
  namespace: "local-storage" 
spec:
  nodeSelector: 
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker01
          - worker02
          - worker03
  storageClassDevices:
    - storageClassName: "local-sc-osd"
      volumeMode: Block
      devicePaths: 
        - /dev/vdc
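
For completeness, a similar LocalVolume sketch can be used for the mon disks, with volumeMode: Filesystem as required by the OCS mon component (the name local-disks-mon and fsType ext4 here are assumptions for illustration):

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-mon"          # hypothetical name for illustration
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker01
          - worker02
          - worker03
  storageClassDevices:
    - storageClassName: "local-sc-mon"
      volumeMode: Filesystem       # mon requires Filesystem volumeMode
      fsType: ext4                 # assumed filesystem type
      devicePaths:
        - /dev/vdb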

Host and Guest Configurations:

  • Lenovo P50.
  • Intel(R) Core(TM) i7-6820HQ.
  • Samsung 64 GB DDR4 SODIMM (16 GB x 4 Slots).
  • /var/lib/libvirt mounted on Samsung NVMe SSD Controller SM961/PM961 (512 GB) as LVM.
  • Fedora 31 OS.
  • OpenShift 4.2.14.
  • All workers (libvirt guests) have their memory set to 24GB. I only have 64GB of physical RAM in total on the Lenovo P50 (Fedora 31), so CPU/memory overcommitting is required. This is how the memory footprint and CPU load look after the OCP + OCS installation:

# oc version
Client Version: openshift-clients-4.2.1-201910220950
Server Version: 4.2.14
Kubernetes Version: v1.14.6+b294fe5
# free -m
              total        used        free      shared  buff/cache   available
Mem:          64213       53613         463         315       10136        9597
Swap:          8191          59        8132
# lscpu | grep node
NUMA node(s):                    1
NUMA node0 CPU(s):               0-7
# cat /proc/loadavg 
10.76 11.66 11.90 27/1540 13455
# dmidecode --type system
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.
Handle 0x000F, DMI type 1, 27 bytes
System Information
	Manufacturer: LENOVO
	Product Name: XXXXX
	Version: ThinkPad P50
	Serial Number: XXXXXX
	UUID: XXXXXXX
	Wake-up Type: Power Switch
	SKU Number: LENOVO_MT_20EQ_BU_Think_FM_ThinkPad P50
	Family: ThinkPad P50

VM resources assigned (the block.1 disks, /dev/vdb, are not being used in this deployment):

# virsh domstats --vcpu --balloon --block | egrep "Domain|balloon.current|vcpu.current|block.[[:digit:]].(name|capacity)"
Domain: 'master01.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=32212254720
Domain: 'master02.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=32212254720
Domain: 'master03.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=42949672960
Domain: 'worker01.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200
Domain: 'worker02.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200
Domain: 'worker03.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200
  • balloon.current = current memory allocation in KiB
  • vcpu.current = number of vCPUs currently allocated
  • block.[digit].capacity = virtual disk capacity in bytes

Before proceeding to the next section, ensure that the OCS operator is already installed as described in the official documentation. We won't cover the operator installation here since the official docs are sufficient.
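
As a quick sanity check (assuming the operator was installed into the openshift-storage namespace), confirm the operator CSV and pods before continuing:

# oc get csv -n openshift-storage
# oc get pods -n openshift-storage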

Configurations:

  1. Label each storage node with the labels below (example commands follow the note). Note the rack label, which provides the topology awareness used by the CRUSH map for HA and resiliency. Each node should have its own rack value (e.g. worker01 uses topology.rook.io/rack: rack0, worker02 uses topology.rook.io/rack: rack1, and so on).

NOTE: You need to relabel the nodes if you uninstalled a previous OCS cluster.
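
A minimal sketch of the labeling commands for this topology (node names as used in this lab; add --overwrite when relabeling after an uninstall):

# oc label node worker01 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack0
# oc label node worker02 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack1
# oc label node worker03 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack2

Then verify the labels: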

# oc get node -l  beta.kubernetes.io/arch=amd64 -o yaml | egrep 'kubernetes.io/hostname: worker|cluster.ocs.openshift.io/openshift-storage|topology.rook.io/rack'
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker01
      topology.rook.io/rack: rack0
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker02
      topology.rook.io/rack: rack1
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker03
      topology.rook.io/rack: rack2

NOTE: This step is automated when the cluster is created from the operator page in the GUI. However, the GUI creates the YAML resource with default values instead of the values we want to use; hence this manual step.

2. Define a StorageCluster CR called ocscluster.yaml (note the storage size of 50Gi) to provision the new OCS cluster, then run oc create -f ocscluster.yaml:

apiVersion: ocs.openshift.io/v1    
kind: StorageCluster    
metadata:    
  namespace: openshift-storage    
  name: ocs-storagecluster    
spec:    
  manageNodes: false    
  monDataDirHostPath: /var/lib/rook
  resources:      
    mon:      
      requests: {}
      limits: {}
    mds:
      requests: {}
      limits: {}
    rgw:      
      requests: {}
      limits: {}
    mgr:
      requests: {}
      limits: {}
    noobaa-core:      
      requests: {}
      limits: {}
    noobaa-db:        
      requests: {}
      limits: {}
  storageDeviceSets:    
  - name: ocs-deviceset    
    count: 1
    resources: {}   
    placement: {}    
    dataPVCTemplate:    
      spec:    
        storageClassName: local-sc-osd
        accessModes:    
        - ReadWriteOnce    
        volumeMode: Block    
        resources:    
          requests:    
            storage: 50Gi    
    portable: false
    replica: 3
  • Empty resources (‘{}’) change the QoS class from Guaranteed to BestEffort, since we don’t have many resources on the host.
  • The OSD device set uses 50Gi instead of the default 2Ti size.
  • monDataDirHostPath will be used by the mon pods. Clean this directory before attempting an OCS reinstallation (see the sketch below).
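
If you need to clean /var/lib/rook on the workers before a reinstall, a sketch using oc debug (with the worker names from this lab) would look like this:

# oc debug node/worker01 -- chroot /host rm -rf /var/lib/rook
# oc debug node/worker02 -- chroot /host rm -rf /var/lib/rook
# oc debug node/worker03 -- chroot /host rm -rf /var/lib/rook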

3. This triggers the OCS operator (a meta operator) to provision the required operators that build the OCS cluster components and services.
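
You can watch the provisioning progress with commands like these (a sketch; pod names and counts will differ per cluster):

# oc get storagecluster -n openshift-storage
# oc get pods -n openshift-storage -w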


4. Finally you will see:

  • Four new storageClasses:
    • ocs-storagecluster-cephfs (CephFS filesystem-based storage)
    • ocs-storagecluster-ceph-rbd (RBD block-based storage; see the sample PVC below)
    • openshift-storage.noobaa.io (S3-compatible storage – this requires further configuration in the NooBaa portal)
    • ocs-storagecluster-ceph-rgw (object storage)
  • Home > Dashboard > Persistent Storage will show:
(Screenshot: Health card in the Persistent Storage dashboard)
  • Home > Dashboard > Object Service will show:
(Screenshot: Health card in the Object Service dashboard)
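
To consume the new storage, here is a minimal PVC sketch against the ocs-storagecluster-ceph-rbd class (the PVC name, namespace and size are arbitrary examples):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-rbd-pvc        # example name
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi          # example size
  storageClassName: ocs-storagecluster-ceph-rbd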

This concludes the blog. We anticipate that OCS 4.3 and beyond will bring better support for external Ceph clusters, independent mode, and other great cloud-native storage features. Stay tuned!




https://www.linkedin.com/in/muhammad-aizuddin-zali-4807b552/
Red Hat ASEAN Senior Platform Consultant. Kubernetes, OpenShift and DevSecOps evangelist.
