
Create Gluster Filesystem Cluster

GlusterFS distributed file system creation and use


Introduction to glusterfs

GlusterFS is a distributed network filesystem that can serve as shared persistent storage in kubernetes, so I took the opportunity to study it.

Build process

Preparation

  1. Prepare three servers that can reach each other over the network (see the /etc/hosts sketch below)
  2. Operating system: Linux (Ubuntu or CentOS)
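
The commands below refer to the peers by the hostnames server1, server2, and server3, so every node must be able to resolve the others' names. A minimal sketch of /etc/hosts entries, with placeholder addresses:

# append to /etc/hosts on every node (addresses below are placeholders)
192.168.1.101 server1
192.168.1.102 server2
192.168.1.103 server3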

Install gluster server

For the quick installation process, see the Quick Start Guide.

See the Install Guide for the full installation details.

Since I chose CentOS 7.x as the operating system, all of the following operations are done on CentOS.

The three servers are server1, server2, and server3.

Install gluster server on each of the three servers

yum install centos-release-gluster && yum install glusterfs-server # install server
yum install centos-release-gluster && yum install glusterfs-client # install client
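
The glusterd service must be running on every node before peers can be probed:

systemctl enable glusterd # start glusterd on boot
systemctl start glusterd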

Run on server1

gluster peer probe server2
gluster peer probe server3

Run on server3

gluster peer probe server1

Display node and peer status

gluster peer status

[root@glusterFS-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: 172.16.131.78
Uuid: 921a61da-3b40-443d-96cc-a2f8d00ffef0
State: Peer in Cluster (Connected)

Hostname: 172.16.131.79
Uuid: 71daa40c-ef3a-421d-86ac-af3e9aa6febd
State: Peer in Cluster (Connected)

Create and mount storage capacity disks

Execute on each of the three servers

mkdir -pv /mnt/pv0

Run on any one node

gluster volume create pv0 replica 3 server1:/mnt/pv0 server2:/mnt/pv0 server3:/mnt/pv0 # with three bricks the replica count must be 3 (brick count must be a multiple of the replica count)
gluster volume start pv0
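
To confirm the volume was created and started, inspect it on any node:

gluster volume info pv0 # shows the volume type, status, and brick list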

Execute on the system with glusterfs-client installed

mount -t glusterfs server1:/pv0 /mnt

Now the pv0 volume can be used through /mnt.
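
A quick way to verify replication is to write a file through the client mount and check that a copy appears in the brick directory on each server; a minimal sketch:

echo hello > /mnt/test.txt # on the client, write through the gluster mount
ls /mnt/pv0/               # on server1/server2/server3, each brick holds a copy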

Use in kubernetes

This installation is independent of the k8s system, meaning that gluster runs outside of the k8s cluster.

Install heketi

heketi is a framework for managing the lifecycle of glusterfs storage; it handles operations such as volume creation and destruction.

Project address: heketi github

Installation documentation

Since this is a standalone deployment, choose Standalone.

Install the server and client

yum install heketi
yum install heketi-client

My configuration in /etc/heketi/heketi.json:

{
  "port": "8080",

  "use_auth": false,

  "jwt": {
    "admin": {
      "key": "custom key"
    },
    "user": {
      "key": "custom key"
    }
  },

  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "brick_max_size_gb" : 1400,
    "brick_min_size_gb" : 1,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

If "use_auth" is set to true, admin is the account used for authenticated access to the API, and key is the secret value used for authentication.

"port": "8080" is the port heketi listens on.

Since we are using a standalone deployment, heketi uses the sshexec mode when operating on the glusterfs cluster:

"sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    }

Copy the generated SSH public key to each of the three servers; the heketi node needs SSH access to all glusterfs nodes.
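
A minimal sketch for generating and distributing the key, assuming the keyfile path from the config above:

ssh-keygen -t rsa -f /root/.ssh/id_rsa # skip if the key already exists
ssh-copy-id -i /root/.ssh/id_rsa.pub root@server1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@server2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@server3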

Start heketi service

systemctl enable heketi
systemctl start heketi
systemctl status heketi
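
heketi exposes a simple health endpoint, so you can verify the service is answering:

curl http://127.0.0.1:8080/hello # should return a hello message from heketi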

Add the disk topology file topology.json

Make an alias for heketi-cli; the --secret value must match the admin key in heketi.json

alias heketi-cli='heketi-cli --server http://127.0.0.1:8080 --user admin --secret adminkey'

Each of my nodes has a disk /dev/vdb installed; my topology.json:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.130.253"
                            ],
                            "storage": [
                                "172.16.130.253"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.131.78"
                            ],
                            "storage": [
                                "172.16.131.78"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.131.79"
                            ],
                            "storage": [
                                "172.16.131.79"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                }
            ]
        }
    ]
}

Final execution

heketi-cli topology load --json topology.json
heketi-cli volume create --size=2

If the 2G volume is created successfully, heketi is ready.
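
The test volume can then be listed and cleaned up; <volume id> below stands for the Id printed by volume list:

heketi-cli volume list               # note the Id of the 2G test volume
heketi-cli volume delete <volume id> # remove the test volume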

k8s config

Add the admin key to k8s as a Secret and save it.

Generate the base64 value with echo -n "mypassword" | base64, and fill the generated value in at {{ key }}.

glusterfs-secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: ops
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: {{ key }}
type: kubernetes.io/glusterfs

kubectl apply -f glusterfs-secret.yaml

glusterfs-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi_server_ip:port"
  clusterid: "{{ cluster id }}"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "ops"
  secretName: "heketi-secret"
  volumetype: "none"

kubectl apply -f glusterfs-class.yaml

For volumetype: "none" see k8s Glusterfs Storage Class. Currently I am using **distributed mode with no redundancy**.
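
The clusterid to fill into the StorageClass can be read back from heketi:

heketi-cli cluster list # prints the cluster Id for the clusterid field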

Apply PVC

nginx-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

kubectl apply -f nginx-pvc.yaml
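
The claim should be bound to a dynamically provisioned volume before it is used:

kubectl get pvc glusterfs-nginx # STATUS should show Bound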

Create Pod

nginx.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default

spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
    spec:
      containers:
        - name: disk-pod
          image: nginx
          imagePullPolicy: Always
          volumeMounts:
            - name: disk-share-fs
              mountPath: "/mnt"
      volumes:
        - name: disk-share-fs
          persistentVolumeClaim:
            claimName: glusterfs-nginx

kubectl apply -f nginx.yaml

Finally, if kubectl get pod shows that the nginx pod is running, then glusterfs is working as a persistent distributed storage filesystem for k8s.

NAME READY STATUS RESTARTS AGE
nginx-f9567b98c-mv665 1/1 Running 0 16s
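
You can also confirm the gluster volume is mounted inside the pod, using the pod name from the output above:

kubectl exec nginx-f9567b98c-mv665 -- df -h /mnt # the filesystem should be a glusterfs mount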

Glusterfs architecture (diagram)
