Ceph High-Availability Distributed Storage Cluster 05: Exporting RGW as an NFS File Interface with nfs-ganesha

Overview
Ceph is a unified distributed storage system that offers applications three data-access interfaces: object (RGW), block (RBD), and file (CephFS). The object interface is normally accessed over HTTP.
Below is another way to consume Ceph's object (RGW) interface: nfs-ganesha. It lets you access the Ceph Object Gateway namespace through file-based protocols such as NFSv3 and NFSv4. For more detail, see the Ceph documentation: http://docs.ceph.com/docs/master/radosgw/nfs/.
1. Environment Preparation
1.1. Prepare the virtual machines
This is purely an availability test, so I set up a small test Ceph cluster on virtual machines and then configured nfs-ganesha to export RGW as an NFS file interface.
[root@ceph-osd-232 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@ceph-osd-232 ~]# uname -a
Linux ceph-osd-232 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1.2. Configure the yum repositories
[root@ceph-osd-232 ~]# ll /etc/yum.repos.d/
total 48
-rw-r--r--. 1 root root 1664 Nov 23  2018 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Nov 23  2018 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Nov 23  2018 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Nov 23  2018 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Nov 23  2018 CentOS-Media.repo
-rw-r--r--  1 root root  717 Mar 24  2020 CentOS-NFS-Ganesha-28.repo
-rw-r--r--. 1 root root 1331 Nov 23  2018 CentOS-Sources.repo
-rw-r--r--  1 root root  353 Jul 31  2018 CentOS-Storage-common.repo
-rw-r--r--. 1 root root 5701 Nov 23  2018 CentOS-Vault.repo
-rw-r--r--  1 root root  557 Feb  7 16:39 ceph.repo
-rw-r--r--  1 root root  664 Dec 26 19:31 epel.repo
1.2.1. Base repository
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

1.2.2. EPEL repository
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

1.2.3. Ceph repository
Here I configure the Nautilus Ceph repository:
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

2. Package Preparation
2.1. Install the Ceph packages
Note: the librgw2-devel package is needed later when building nfs-ganesha against librgw, so be sure to install it.
[root@ceph-osd-232 ~]# yum install ceph librgw2-devel libcephfs2 -y
[root@ceph-osd-232 ~]# ceph -v

ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
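When scripting a deployment, it can be handy to check the installed Ceph version programmatically before choosing the matching nfs-ganesha branch. A minimal sketch; the `ceph_version` helper is hypothetical (not a Ceph tool), and the demo pipes in the `ceph -v` line shown above instead of calling the live cluster:

```shell
# Hypothetical helper: extract the version number from `ceph -v` output,
# e.g. "ceph version 14.2.9 (...) nautilus (stable)" -> "14.2.9"
ceph_version() {
  awk '/^ceph version/ {print $3}'
}

# In a real script you would pipe `ceph -v` in; here we reuse the output above.
echo 'ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)' | ceph_version
# -> 14.2.9
```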

Next, deploy the Ceph cluster. Below is the status of the cluster I already deployed:
[root@ceph-osd-232 ~]# ceph -s

  cluster:
    id:     56863ba7-82f6-45db-b687-987f3d4cfa7c
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
  services:
    mon: 3 daemons, quorum ceph-osd-231,ceph-osd-232,ceph-osd-233 (age 2d)
    mgr: ceph-osd-232(active, since 2d), standbys: ceph-osd-231, ceph-osd-233
    osd: 8 osds: 8 up (since 2d), 8 in (since 6d)
    rgw: 3 daemons active (ceph-osd-231, ceph-osd-232, ceph-osd-233)
  data:
    pools:   8 pools, 480 pgs
    objects: 2.17M objects, 6.5 TiB
    usage:   9.9 TiB used, 48 TiB / 58 TiB avail
    pgs:     480 active+clean

2.2. Install nfs-ganesha
Since my Ceph is Nautilus 14.2.9, the matching nfs-ganesha release is V2.8.
Configure the nfs-ganesha repository on the ganesha node:
# vi /etc/yum.repos.d/nfs-ganesha.repo

[nfs-ganesha]
name=nfs-ganesha
baseurl=http://us-west.ceph.com/nfs-ganesha/rpm-V2.8-stable/nautilus/x86_64/
enabled=1
priority=1

 
# yum install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw -y
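After installing, it is worth confirming that the RGW FSAL plugin actually landed on disk, since nfs-ganesha loads its FSAL backends as shared objects at startup. A rough sketch; both the `check_fsal` helper and the `/usr/lib64/ganesha/` plugin path are assumptions for this EL7 build:

```shell
# Hypothetical post-install check: nfs-ganesha FSAL backends are shared objects;
# on EL7 builds they are assumed to live under /usr/lib64/ganesha/.
check_fsal() {
  if [ -e "$1" ]; then
    echo "present: $1"
  else
    echo "missing: $1"
  fi
}

check_fsal /usr/lib64/ganesha/libfsalrgw.so
```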
 
3. Configuration
3.1. Create an RGW user
# radosgw-admin user create --uid=qf --display-name="qf" --access-key=DS1WUIMV2NZHK6KGURTG --secret=ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH
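To double-check the user and its keys, `radosgw-admin user info --uid=qf` prints the user as JSON. A small sketch that pulls the access key back out of that JSON; the `user_access_key` helper is hypothetical, and the JSON below is a trimmed stand-in for the real command output:

```shell
# Hypothetical helper: extract the first "access_key" value from the JSON
# that `radosgw-admin user info --uid=qf` prints.
user_access_key() {
  sed -n 's/.*"access_key": "\([^"]*\)".*/\1/p' | head -n 1
}

# Trimmed stand-in for the real `radosgw-admin user info` output:
echo '{ "keys": [ { "user": "qf", "access_key": "DS1WUIMV2NZHK6KGURTG", "secret_key": "..." } ] }' | user_access_key
# -> DS1WUIMV2NZHK6KGURTG
```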
3.2. Prepare the nfs-ganesha configuration file
The configuration is quite simple (this is the minimal config):
[root@ceph-osd-232 ~]# cat /etc/ganesha/ganesha.conf

EXPORT
{
        Export_ID=1;
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
                Name = RGW;
                User_Id = "qf";
                Access_Key_Id = "DS1WUIMV2NZHK6KGURTG";
                Secret_Access_Key = "ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH";
        }
}
RGW {
        ceph_conf = “/etc/ceph/ceph.conf”;
        # for vstart cluster, name = "client.admin"
        name = "client.rgw.ceph-osd-232";
        cluster = "ceph";
#       init_args = "-d --debug-rgw=16";
}

The value of name in the RGW block can be looked up with the command ceph auth list.
 
A template for the RGW-NFS configuration ships with the package at:
/usr/share/doc/ganesha/config_samples/rgw.conf
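The Path = "/" export above exposes the whole RGW namespace, so every bucket appears as a top-level directory. The RGW FSAL can also export a single bucket instead; a sketch of such an additional export block reusing the same user (the bucket name testbucket is just an example):

```
EXPORT
{
        Export_ID=2;
        Path = "testbucket";        # export only this bucket, not the whole namespace
        Pseudo = "/testbucket";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
                Name = RGW;
                User_Id = "qf";
                Access_Key_Id = "DS1WUIMV2NZHK6KGURTG";
                Secret_Access_Key = "ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH";
        }
}
```

Each EXPORT block needs its own unique Export_ID.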
 
3.3. Start the nfs-ganesha service
systemctl start nfs-ganesha
systemctl enable nfs-ganesha
After starting, run ps -ef | grep ganesha.nfsd to check whether the process exists; if it does not, inspect /var/log/ganesha/ganesha.log and troubleshoot based on the messages there.
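When the daemon fails to start, the reason is usually logged at high severity (ganesha's log levels follow the NIV_* scheme, e.g. EVENT, MAJ, CRIT). A hypothetical helper for pulling those lines out of a log, demonstrated on a fabricated two-line sample rather than a real log:

```shell
# Hypothetical helper: pull high-severity lines out of a ganesha log file.
ganesha_errors() {
  grep -E ':(CRIT|MAJ|FATAL) :' "$1"
}

# Fabricated sample log for demonstration only:
cat > /tmp/ganesha-demo.log <<'EOF'
01/01/2024 10:00:00 : ganesha.nfsd-100[main] main :MAIN :EVENT :ganesha.nfsd started
01/01/2024 10:00:01 : ganesha.nfsd-100[main] create_export :FSAL :CRIT :RGW module: librgw init failed
EOF

ganesha_errors /tmp/ganesha-demo.log   # prints only the CRIT line
```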
 
3.4. Verify that the nfs-ganesha service started correctly
[root@ceph-osd-232 ganesha]# ps aux|grep ganesha
root       68675  0.3  0.3 7938392 55348 ?       Ssl  16:44   0:02 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
You can see that the nfs-ganesha service is running. The cluster status now also shows an rgw-nfs daemon:
# ceph -w

  cluster:
    id:     9e9cc600-9f75-4621-8094-26082d390578
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
            1 daemons have recently crashed
  services:
    mon:     3 daemons, quorum ceph-osd-231,ceph-osd-232,ceph-osd-233 (age 97m)
    mgr:     ceph-osd-231(active, since 9h), standbys: ceph-osd-233, ceph-osd-232
    osd:     12 osds: 12 up (since 97m), 12 in (since 9h)
    rgw:     3 daemons active (ceph-osd-231, ceph-osd-232, ceph-osd-233)
    rgw-nfs: 1 daemon active (ceph-osd-232)
  data:
    pools:   9 pools, 528 pgs
    objects: 642.62k objects, 2.1 TiB
    usage:   6.4 TiB used, 13 TiB / 19 TiB avail
    pgs:     528 active+clean
  io:
    client:   6.7 KiB/s rd, 9.9 MiB/s wr, 9 op/s rd, 14 op/s wr
    cache:    2.3 MiB/s flush, 8.0 MiB/s evict, 1 op/s promote

 
4. Mount the Export from an NFS Client
Now switch to another server.
4.1. Install nfs-utils
[root@host-10-2-110-11 ~]# yum install -y nfs-utils
4.2. Mount the NFS export
Here 10.2.110.232 is the ganesha server configured above, and 10.2.110.11 is the client:
[root@host-10-2-110-11 ~]# mount -t nfs4 10.2.110.232:/ /mnt/
[root@host-10-2-110-11 ~]# mount |grep mnt

10.2.110.232:/ on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.2.110.11,local_lock=none,addr=10.2.110.232)
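To make the mount survive reboots, a matching /etc/fstab entry might look like the following (a sketch; tune the options to your environment):

```
10.2.110.232:/   /mnt   nfs4   rw,hard,timeo=600,_netdev   0 0
```

The _netdev option delays mounting until the network is up.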

 
Right now this user's rgw data pool holds no objects. Try creating a directory under the NFS mount point:
[root@host-10-2-110-11 ~]# mkdir -pv /mnt/testbucket
You can see that one more bucket has appeared:
[root@ceph-osd-233 ~]# radosgw-admin bucket list
[
    “qfpool”,
    “testbucket”
]
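This works because the RGW FSAL maps the file namespace onto S3: the first path component under the mount point becomes the bucket, and the rest of the path becomes the object key. A toy sketch of that mapping (the `path_to_s3` helper is illustrative only, not part of ganesha, and the mount point /mnt is assumed):

```shell
# Toy illustration of the NFS-to-S3 name mapping used by the RGW export:
# first component under the mount point = bucket, remainder = object key.
path_to_s3() {
  local rel="${1#/mnt/}"        # strip the mount point (assumed /mnt)
  local bucket="${rel%%/*}"
  local key="${rel#*/}"
  printf 'bucket=%s key=%s\n' "$bucket" "$key"
}

path_to_s3 /mnt/testbucket/dir1/hello.txt
# -> bucket=testbucket key=dir1/hello.txt
```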
 
Summary
The main steps above were:
* Install Ceph and set up the Ceph cluster
* Create an RGW user with radosgw-admin (creating the user automatically creates the pools RGW needs)
* Install nfs-ganesha and its RGW FSAL from the pre-built repository
* Configure nfs-ganesha to export the RGW namespace
* Start the nfs-ganesha service, then mount and test it from an NFS client
 
A friendly warning: when you export RGW as NFS through nfs-ganesha, performance becomes very poor once the object store holds millions of files; consider goofys instead.
 
Author: Dexter_Wang | Position: senior cloud computing and storage engineer at an Internet company | Contact: 993852246@qq.com

Published by 风君子
