Detailed Ceph installation and deployment tutorial (multiple monitor nodes)
I. Preparation: install the ceph-deploy tool

All servers are logged in as root.

1. Environment: CentOS 6.5. Machines: 1 admin-node (runs ceph-deploy), 1 monitor, 2 OSD nodes.

2. Stop the firewall and disable SELinux on every node, then reboot:
service iptables stop
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
chkconfig iptables off

3. Edit the Ceph yum repository on the admin-node:
vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

4. Install the Sohu EPEL repository:
rpm -ivh

5. Update the admin-node's yum sources:
yum clean all
yum update -y

6. Create a Ceph cluster directory on the admin-node:
mkdir /ceph
cd /ceph

7. Install the ceph-deploy tool on the admin-node:
yum install ceph-deploy -y

8. Edit the admin-node's hosts file:
vi /etc/hosts
10.240.240.210 admin-node
10.240.240.211 node1
10.240.240.212 node2
10.240.240.213 node3

II. Configure passwordless SSH logins from ceph-deploy to every Ceph node

1. Install an SSH server on every Ceph node:
[ceph@node3 ~]$ yum install openssh-server -y

2. Configure passwordless SSH access from the admin-node to every Ceph node:
[root@ceph-deploy ceph]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

3. Copy the admin-node's key to every Ceph node:
ssh-copy-id root@admin-node
ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@node3

4. Check that each Ceph node can be logged into without a password:
ssh root@node1
ssh root@node2
ssh root@node3

5. Edit the admin-node's ~/.ssh/config so that ceph-deploy logs in to the Ceph nodes as the user it created:
Host admin-node
  Hostname admin-node
  User root
Host node1
  Hostname node1
  User root
Host node2
  Hostname node2
  User root
Host node3
  Hostname node3
  User root

III. Deploy the Ceph cluster with the ceph-deploy tool

1. Create a new Ceph cluster on the admin-node:
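The four Host entries in section II, step 5 all follow one pattern, so the file can be generated with a short loop instead of typed by hand. A minimal sketch; it writes to a throwaway path so it is safe to try (point cfg at /root/.ssh/config for real use):

```shell
# Generate the per-node SSH client config from section II, step 5.
# cfg is a demo path; for a real deployment use /root/.ssh/config.
cfg=/tmp/ssh_config.demo
: > "$cfg"                       # truncate/create the file
for host in admin-node node1 node2 node3; do
    printf 'Host %s\n  Hostname %s\n  User root\n' "$host" "$host" >> "$cfg"
done
grep -c '^Host ' "$cfg"          # one entry per node: prints 4
```

The hostnames are the ones from the tutorial's /etc/hosts; adjust the list if your nodes are named differently.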
[root@admin-node ceph]# ceph-deploy new node1 node2 node3
(After this command node1, node2 and node3 all act as monitor nodes; multiple mon nodes back each other up.)
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy new node1 node2 node3
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 10.240.240.211
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: admin-node
[node1][INFO ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy.new][DEBUG ] Resolving host node2
[ceph_deploy.new][DEBUG ] Monitor node2 at 10.240.240.212
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host: admin-node
[node2][INFO ] Running command: ssh -CT -o BatchMode=yes node2
[ceph_deploy.new][DEBUG ] Resolving host node3
[ceph_deploy.new][DEBUG ] Monitor node3 at 10.240.240.213
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: admin-node
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.240.240.211', '10.240.240.212', '10.240.240.213']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

Check the generated files:
[root@admin-node ceph]# ls
ceph.conf ceph.log ceph.mon.keyring

Check the Ceph configuration file; all three nodes have become monitor nodes:
[root@admin-node ceph]# cat ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 10.240.240.211,10.240.240.212,10.240.240.213
mon_initial_members = node1, node2, node3
fsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b
[root@admin-node ceph]#

2. Before deploying, make sure no Ceph node carries leftover Ceph packages or data (wipe all previous Ceph data first; a fresh install can skip this step, but when redeploying run the commands below):
[root@ceph-deploy ceph]# ceph-deploy purgedata admin-node node1 node2 node3
[root@ceph-deploy ceph]# ceph-deploy forgetkeys
[root@ceph-deploy ceph]# ceph-deploy purge admin-node node1 node2 node3
A fresh install has no data at all.

3. Edit the admin-node's Ceph configuration file and add the following setting to ceph.conf:
osd pool default size = 2

4. From the admin-node, install Ceph on every node with the ceph-deploy tool:
[root@admin-node ceph]#
ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts admin-node node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.5 Final
[admin-node][INFO ] installing ceph on admin-node
[admin-node][INFO ] Running command: yum clean all
[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security
[admin-node][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[admin-node][DEBUG ] Cleaning up Everything
[admin-node][DEBUG ] Cleaning up list of fastest mirrors
[admin-node][INFO ] Running command: yum -y install wget
[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security
[admin-node][DEBUG ] Determining fastest mirrors
[admin-node][DEBUG ] * base: mirrors.btte.net
[admin-node][DEBUG ] * epel: mirrors.neusoft.edu.cn
[admin-node][DEBUG ] * extras: mirrors.btte.net
[admin-node][DEBUG ] * updates: mirrors.btte.net
[admin-node][DEBUG ] Setting up Install Process
[admin-node][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version
[admin-node][DEBUG ] Nothing to do
[admin-node][INFO ] adding EPEL repository
[admin-node][INFO ] Running command: wget ...6-8.noarch.rpm
[admin-node][WARNIN] --2014-06-07 22:05:34--
[admin-node][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...
[admin-node][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.
[admin-node][WARNIN] HTTP request sent, awaiting response... 200 OK
[admin-node][WARNIN] Length: 14540 (14K) [application/x-rpm]
[admin-node][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1'
[admin-node][WARNIN] 0K .......... .... 100% 73.8K=0.2s
[admin-node][WARNIN] 2014-06-07 22:05:35 (73.8 KB/s) - `epel-release-6-8.noarch.rpm.1' saved [14540/14540]
[admin-node][INFO ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm
[admin-node][DEBUG ] Preparing...    ##################################################
[admin-node][DEBUG ] epel-release    ##################################################
[admin-node][INFO ] Running command: rpm --import ...;a=blob_plain;f=keys/release.asc
[admin-node][INFO ] Running command: rpm -Uvh --replacepkgs
[admin-node][DEBUG ] Retrieving
[admin-node][DEBUG ] Preparing...    ##################################################
[admin-node][DEBUG ] ceph-release    ##################################################
[admin-node][INFO ] Running command: yum -y -q install ceph
[admin-node][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version
[admin-node][INFO ] Running command: ceph --version
[admin-node][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS 6.4 Final
[node1][INFO ] installing ceph on node1
[node1][INFO ] Running command: yum clean all
[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security
[node1][DEBUG ] Cleaning repos: base extras updates
[node1][DEBUG ] Cleaning up Everything
[node1][DEBUG ] Cleaning up list of fastest mirrors
[node1][INFO ] Running command: yum -y install wget
[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security
[node1][DEBUG ] Determining fastest mirrors
[node1][DEBUG ] * base: mirrors.btte.net
[node1][DEBUG ] * extras: mirrors.btte.net
[node1][DEBUG ] * updates: mirrors.btte.net
[node1][DEBUG ] Setting up Install Process
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.8.el6 will be updated
[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.11.el6_5 will be an update
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ] ================================================================================
[node1][DEBUG ]  Package    Arch      Version            Repository    Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Updating:
[node1][DEBUG ]  wget       x86_64    1.12-1.11.el6_5    updates       483 k
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Upgrade       1 Package(s)
[node1][DEBUG ] Total download size: 483 k
[node1][DEBUG ] Downloading Packages:
[node1][DEBUG ] Running rpm_check_debug
[node1][DEBUG ] Running Transaction Test
[node1][DEBUG ] Transaction Test Succeeded
[node1][DEBUG ] Running Transaction
[node1][DEBUG ]   Updating  : wget-1.12-1.11.el6_5.x86_64    1/2
[node1][DEBUG ]   Cleanup   : wget-1.12-1.8.el6.x86_64       2/2
[node1][DEBUG ]   Verifying : wget-1.12-1.11.el6_5.x86_64    1/2
[node1][DEBUG ]   Verifying : wget-1.12-1.8.el6.x86_64       2/2
[node1][DEBUG ] Updated:
[node1][DEBUG ]   wget.x86_64 0:1.12-1.11.el6_5
[node1][DEBUG ] Complete!
[node1][INFO ] adding EPEL repository
[node1][INFO ] Running command: wget
[node1][WARNIN] --2014-06-07 22:06:57--
[node1][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.23, 209.132.181.24, 209.132.181.25, ...
[node1][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.23|:80... connected.
[node1][WARNIN] HTTP request sent, awaiting response... 200 OK
[node1][WARNIN] Length: 14540 (14K) [application/x-rpm]
[node1][WARNIN] Saving to: `epel-release-6-8.noarch.rpm'
[node1][WARNIN] 0K .......... .... 100% 69.6K=0.2s
[node1][WARNIN] 2014-06-07 22:06:58 (69.6 KB/s) - `epel-release-6-8.noarch.rpm' saved [14540/14540]
[node1][INFO ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm
[node1][DEBUG ] Preparing...    ##################################################
[node1][DEBUG ] epel-release    ##################################################
[node1][WARNIN] warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
[node1][INFO ] Running command: rpm --import ...;a=blob_plain;f=keys/release.asc
[node1][INFO ] Running command: rpm -Uvh --replacepkgs
[node1][DEBUG ] Retrieving
[node1][DEBUG ] Preparing...    ##################################################
[node1][DEBUG ] ceph-release    ##################################################
[node1][INFO ] Running command: yum -y -q install ceph
[node1][WARNIN] warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
[node1][WARNIN] Importing GPG key 0x0608B895:
[node1][WARNIN]  Userid : EPEL (6)
[node1][WARNIN]  Package: epel-release-6-8.noarch (installed)
[node1][WARNIN]  From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[node1][WARNIN] Warning: RPMDB altered outside of yum.
[node1][INFO ] Running command: ceph --version
[node1][WARNIN] Traceback (most recent call last):
[node1][WARNIN]   File "/usr/bin/ceph", line 53, in <module>
[node1][WARNIN]     import argparse
[node1][WARNIN] ImportError: No module named argparse
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version

The fix for the error above is to run the following command on the failing node:
[root@admin-node ~]# yum install *argparse* -y

5. Add the initial monitor(s) and gather the keys (ceph-deploy v1.1.3 and later):
[root@admin-node ceph]# ceph-deploy mon
create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mon][INFO ] distro info: CentOS 6.4 Final
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitor keyring file
[node1][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] ceph-mon: mon.noname-a 10.240.240.211:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsid to 369daf5a-e844-4e09-a9b1-46bb985aec79
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][DEBUG ] locating the `service` executable...
[node1][INFO ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1
[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.mon][ERROR ] Failed to execute command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

To fix the error above, run the following command by hand on node1, node2 and node3:
[root@node1 ~]# yum install redhat-lsb -y

Running the command again now activates the monitor nodes successfully:
[root@admin-node ceph]# ceph-deploy mon
create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mon][INFO ] distro info: CentOS 6.4 Final
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][DEBUG ] locating the `service` executable...
[node1][INFO ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1
[node1][DEBUG ] === mon.node1 ===
[node1][DEBUG ] Starting Ceph mon.node1 on node1...already running
[node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ]   "election_epoch": 6,
[node1][DEBUG ]   "extra_probe_peers": [
[node1][DEBUG ]     "10.240.240.212:6789/0",
[node1][DEBUG ]     "10.240.240.213:6789/0"
[node1][DEBUG ]   ],
[node1][DEBUG ]   "monmap": {
[node1][DEBUG ]     "created": "0.000000",
[node1][DEBUG ]     "epoch": 2,
[node1][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b",
[node1][DEBUG ]     "modified": "2014-06-07 22:38:29.435203",
[node1][DEBUG ]     "mons": [
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "10.240.240.211:6789/0",
[node1][DEBUG ]         "name": "node1",
[node1][DEBUG ]         "rank": 0
[node1][DEBUG ]       },
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "10.240.240.212:6789/0",
[node1][DEBUG ]         "name": "node2",
[node1][DEBUG ]         "rank": 1
[node1][DEBUG ]       },
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "10.240.240.213:6789/0",
[node1][DEBUG ]         "name": "node3",
[node1][DEBUG ]         "rank": 2
[node1][DEBUG ]       }
[node1][DEBUG ]     ]
[node1][DEBUG ]   },
[node1][DEBUG ]   "name": "node1",
[node1][DEBUG ]   "outside_quorum": [],
[node1][DEBUG ]   "quorum": [
[node1][DEBUG ]     0,
[node1][DEBUG ]     1,
[node1][DEBUG ]     2
[node1][DEBUG
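The mon_status dump above can also be checked mechanically, for example to confirm that all three monitors are in quorum. A minimal grep-only sketch; the sample JSON is abridged from the output above (on a live node the same dump comes from the admin-daemon mon_status command shown in the log):

```shell
# Count quorum members in a saved mon_status dump. The sample JSON
# reproduces the "quorum" and "mons" fields from the log above.
cat > /tmp/mon_status.json <<'EOF'
{
  "name": "node1",
  "quorum": [0, 1, 2],
  "monmap": {
    "mons": [
      {"name": "node1", "rank": 0},
      {"name": "node2", "rank": 1},
      {"name": "node3", "rank": 2}
    ]
  }
}
EOF
# Isolate the quorum array, then count the ranks inside it.
quorum_size=$(grep -o '"quorum": \[[^]]*\]' /tmp/mon_status.json | grep -o '[0-9]' | wc -l)
echo "monitors in quorum: $((quorum_size))"
```

With three mon ranks in "quorum", the cluster has its full monitor quorum; fewer entries than in "mons" would mean a monitor is down or still probing.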