@lijiang

Sculpting in time


Create a GlusterFS Cluster

Creating and using the GlusterFS distributed filesystem


GlusterFS overview

GlusterFS is a distributed network filesystem. I plan to use it as shared persistent storage for Kubernetes, so this is a good opportunity to study it.

Setup

Prerequisites

  1. Three servers that can reach one another over the network
  2. A Linux operating system (Ubuntu or CentOS)
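If DNS does not already resolve the server names, one simple approach is identical /etc/hosts entries on every node. The hostname-to-IP mapping below is an assumption for illustration, using the IPs that appear later in topology.json:

```
# /etc/hosts on every node -- mapping assumed for illustration
172.16.130.253  server1
172.16.131.78   server2
172.16.131.79   server3
```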

Installing the GlusterFS server

For a quick walkthrough, see the Quick Start Guide.

For detailed installation steps, see the Install Guide.

Since my operating system is CentOS 7.x, all of the steps below are performed on CentOS.

The three servers are Server1, Server2, and Server3.

Install the GlusterFS server on each of the three servers:

yum install centos-release-gluster && yum install glusterfs-server # install the server
yum install centos-release-gluster && yum install glusterfs-client # install the client (on machines that only mount volumes)
systemctl enable --now glusterd                                    # start the gluster daemon before probing peers

On Server1, run:

gluster peer probe server2
gluster peer probe server3

On Server2 and Server3, run:

gluster peer probe server1

Check the status of the added peers:

gluster peer status

Each server will show its peer information:

[root@glusterFS-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: 172.16.131.78
Uuid: 921a61da-3b40-443d-96cc-a2f8d00ffef0
State: Peer in Cluster (Connected)

Hostname: 172.16.131.79
Uuid: 71daa40c-ef3a-421d-86ac-af3e9aa6febd
State: Peer in Cluster (Connected)

Creating and mounting a storage volume

On each of the three servers, run:

mkdir -pv /mnt/pv0

Then, on any one node, run:

# 3 bricks require replica 3 (the brick count must be a multiple of the replica count);
# append "force" if the bricks live on the root filesystem
gluster volume create pv0 replica 3 server1:/mnt/pv0 server2:/mnt/pv0 server3:/mnt/pv0
gluster volume start pv0

On a machine with glusterfs-client installed, run:

mount -t glusterfs server1:/pv0 /mnt

The pv0 volume is now ready to use.
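To make the mount persist across reboots, a matching /etc/fstab entry can be added on the client (a sketch; the _netdev option defers mounting until the network is up):

```
# /etc/fstab on the client
server1:/pv0  /mnt  glusterfs  defaults,_netdev  0 0
```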

Using it from Kubernetes

This deployment is independent of Kubernetes; that is, GlusterFS runs outside the k8s cluster.

Installing heketi

heketi is a framework that manages the lifecycle of GlusterFS storage: it handles volume creation, deletion, and the rest of the GlusterFS operations.

Project page: heketi github

Installation docs

Since this is a standalone deployment, choose the Standalone mode.

Install the server and the client:

yum install heketi
yum install heketi-client

My own /etc/heketi/heketi.json:

{
  "port": "8080",

  "use_auth": false,

  "jwt": {
    "admin": {
      "key": "fill in your own admin key"
    },
    "user": {
      "key": "fill in your own user key"
    }
  },

  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "brick_max_size_gb" : 1400,
    "brick_min_size_gb" : 1,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

If "use_auth": true is enabled, admin is the administrator account used to authenticate API access, and its key value is the authentication secret.

"port": "8080" is the port heketi listens on.

Since this is a standalone deployment, heketi operates on the GlusterFS cluster in sshexec mode:

"sshexec": {
  "keyfile": "/root/.ssh/id_rsa",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
}

Copy the generated ssh public key to each of the three servers; the heketi node must be able to log in over ssh to every GlusterFS node.
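A minimal sketch of the key setup, assuming a dedicated key pair (the key path here is illustrative; the heketi.json above expects /root/.ssh/id_rsa when heketi runs as root):

```shell
# Path must match "keyfile" in /etc/heketi/heketi.json (example path)
KEYFILE="$HOME/.ssh/heketi_id_rsa"
mkdir -p "$(dirname "$KEYFILE")"

# Generate a key pair for heketi if one does not exist yet
test -f "$KEYFILE" || ssh-keygen -t rsa -N "" -f "$KEYFILE" -q

# Copy the public key to every GlusterFS node (uncomment on the real hosts):
# for node in server1 server2 server3; do ssh-copy-id -i "$KEYFILE.pub" root@"$node"; done
echo "key ready: $KEYFILE"
```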

Start the heketi service:

systemctl enable heketi
systemctl start heketi
systemctl status heketi

Adding the disk topology file topology.json

Define an alias for heketi-cli (the --secret value must match the admin key configured in heketi.json):

alias heketi-cli='heketi-cli --server http://127.0.0.1:8080 --user admin --secret adminkey'

In my setup each node has a single disk, /dev/vdb. topology.json:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.130.253"
                            ],
                            "storage": [
                                "172.16.130.253"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.131.78"
                            ],
                            "storage": [
                                "172.16.131.78"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "172.16.131.79"
                            ],
                            "storage": [
                                "172.16.131.79"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": false
                        }
                    ]
                }
            ]
        }
    ]
}

Finally, run:

heketi-cli topology load --json topology.json
heketi-cli volume create --size=2

If the 2G volume is created successfully, heketi is ready; the test volume can be removed afterwards with heketi-cli volume delete.

Kubernetes configuration

Store the admin key in a Kubernetes Secret.

Generate the base64 value with echo -n "mypassword" | base64 and put the result into the {{ key }} placeholder.
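For example, encoding the article's placeholder password:

```shell
# Base64-encode the Secret value ("mypassword" is the article's example;
# substitute the real admin key from heketi.json)
echo -n "mypassword" | base64
# → bXlwYXNzd29yZA==
```

The decoded value must match the admin key configured in heketi.json, or heketi will reject the provisioner's requests.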

glusterfs-secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: ops
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: {{ key }}
type: kubernetes.io/glusterfs

kubectl apply -f glusterfs-secret.yaml

glusterfs-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi_server_ip:port"
  clusterid: "{{ cluster client id }}"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "ops"
  secretName: "heketi-secret"
  volumetype: "none"

kubectl apply -f glusterfs-class.yaml

For volumetype: "none", see the k8s Glusterfs Storage Class docs; I am currently using the plain distributed mode with no redundancy (a value such as "replicate:3" would give three-way replication instead). The clusterid placeholder can be filled from the output of heketi-cli cluster list.

Requesting a PVC

nginx-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

kubectl apply -f nginx-pvc.yaml

Creating a Pod

nginx.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default

spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
    spec:
      containers:
        - name: disk-pod
          image: nginx
          imagePullPolicy: Always
          volumeMounts:
            - name: disk-share-fs
              mountPath: "/mnt"
      volumes:
        - name: disk-share-fs
          persistentVolumeClaim:
            claimName: glusterfs-nginx

kubectl apply -f nginx.yaml

Finally, if the nginx pod shows as running, GlusterFS is now serving as the distributed persistent storage backend for Kubernetes.

NAME                    READY   STATUS    RESTARTS   AGE
nginx-f9567b98c-mv665   1/1     Running   0          16s

GlusterFS architecture

(in progress)

