How to use Ceph as the backend storage for OpenStack Pike

Published: 2021-12-17 | Author: 小新 | Source: 亿速云

This article walks through configuring OpenStack Pike to use Ceph as the backend storage for Glance, Cinder, and Nova. Hopefully you find it useful.

Node layout
10.1.1.1 controller
10.1.1.2 compute
10.1.1.3 middleware
10.1.1.4 network
10.1.1.5 compute2
10.1.1.6 compute3
10.1.1.7 cinder
## Distributed storage
The backend storage is Ceph, with mon_host = 10.1.1.2,10.1.1.5,10.1.1.6
## Create the database, service, and endpoints for cinder
mysql -u root -p
create database cinder;
grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
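Optionally verify that the new account works (a quick sanity check, not in the original article; 123456 is the sample password used throughout):
mysql -u cinder -p123456 -e 'show databases;'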


cat admin-openrc 
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=123456
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:35357/v3
source admin-openrc
Create the cinder user
openstack user create --domain default --password-prompt cinder
Grant the cinder user the admin role on the service project
openstack role add --project service --user cinder admin
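You can confirm the role assignment with (an optional check):
openstack role assignment list --user cinder --project service --names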
Create the services
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the API endpoints
openstack endpoint create --region RegionOne volumev2 public http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://cinder:8776/v2/%\(tenant_id\)s


openstack endpoint create --region RegionOne volumev3 public http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://cinder:8776/v3/%\(tenant_id\)s
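To confirm all six endpoints were registered (an optional check):
openstack endpoint list --service volumev2
openstack endpoint list --service volumev3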


Create the Ceph pools
Run the following commands on the Ceph cluster:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
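Verify the pools, and if your Ceph release is Luminous or newer, tag them for rbd use (the application-enable step is an assumption about your Ceph version, not part of the original article):
ceph osd lspools
ceph osd pool application enable volumes rbd
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd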


Ceph user authorization
Because the backend storage is Ceph, the Ceph clients must be authorized so they can access the corresponding pools. Glance, cinder, and nova-compute all talk to Ceph.
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'


ceph auth list
client.cinder
key: AQDQEWdaNU9YGBAAcEhKd6KQKHN9HeFIIS4+fw==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rwx pool=images
client.glance
key: AQD4EWdaTdZjJhAAuj8CvNY59evhiGtEa9wLzw==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children,allow rwx pool=images


Create the /etc/ceph directory on the controller, cinder, and compute nodes, then push the keyring files to them:
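A minimal sketch for creating the directories in one pass, assuming passwordless SSH to each node:
for node in controller cinder compute compute2 compute3; do ssh $node mkdir -p /etc/ceph; done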
ceph auth get-or-create client.glance |ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder |ssh cinder sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
Give ceph.client.glance.keyring to the glance user and ceph.client.cinder.keyring to the cinder user (run on the respective nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
Copy the Ceph configuration file /etc/ceph/ceph.conf into the /etc/ceph directory on the glance, cinder, and compute nodes.
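For example, pushed from the Ceph admin node (a sketch, assuming the same SSH access as above):
for node in controller cinder compute compute2 compute3; do scp /etc/ceph/ceph.conf $node:/etc/ceph/; done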
Install and configure the components
Cinder node
yum install -y openstack-cinder python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@cinder ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf


cp /etc/cinder/cinder.conf{,.bak}
>/etc/cinder/cinder.conf


cat /etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@middleware
log_dir = /var/log/cinder
enabled_backends = ceph


[database]
connection = mysql+pymysql://cinder:123456@middleware/cinder


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456


[oslo_concurrency]
lock_path = /var/lib/cinder/tmp




[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
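Before restarting the services, populate the cinder database and confirm the node can reach the Ceph cluster as client.cinder (the db sync step comes from the standard Pike install guide and is not shown in the original article):
su -s /bin/sh -c "cinder-manage db sync" cinder
ceph --id cinder -s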

Restart the cinder services:
systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
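A quick way to confirm the ceph backend is working (run with admin credentials loaded; the volume name is arbitrary):
openstack volume service list
openstack volume create --size 1 test-vol
rbd ls volumes --id cinder
The new volume should appear in the volumes pool as volume-<uuid>.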




Glance node
Install the Ceph client:
yum install -y python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@controller ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 glance glance  64 Jan 23 19:31 ceph.client.glance.keyring
-rw-r--r-- 1 root   root   416 1月  24 10:32 ceph.conf
Ceph-related configuration in /etc/glance/glance-api.conf:
[DEFAULT]
#enable image locations and take advantage of copy-on-write cloning for images
show_image_direct_url = true
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart the glance service:
systemctl restart openstack-glance-api.service
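To verify, upload an image and check that it lands in the images pool (raw format is recommended with the rbd backend so copy-on-write cloning works; the cirros filename here is a placeholder):
openstack image create "cirros" --disk-format raw --container-format bare --file cirros-0.3.5-x86_64-disk.raw
rbd ls images --id glance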


Compute node
Install the Ceph client:
yum install -y python-ceph ceph-common python-rbd
Generate a UUID with uuidgen; it must match the rbd_secret_uuid in /etc/cinder/cinder.conf:
f85def47-c1ac-46fe-a1d5-c0139c46d91a
Create the secret file:
cat secret.xml 
<secret ephemeral='no' private='no'>
  <uuid>f85def47-c1ac-46fe-a1d5-c0139c46d91a</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
Define the secret and set its value:
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret f85def47-c1ac-46fe-a1d5-c0139c46d91a --base64 $(cat ceph.client.cinder.keyring |awk '/key/{print $3}')


virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 f85def47-c1ac-46fe-a1d5-c0139c46d91a  ceph client.cinder secret


/etc/nova/nova.conf configuration:
[libvirt]
virt_type = qemu
cpu_mode = none
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
disk_cachemodes="network=writeback"
live_migration_tunnelled = true
inject_password = false
inject_key = false
inject_partition = -2
The /etc/ceph directory must contain the following files:
[root@compute ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf
Restart the nova-compute service:
systemctl restart openstack-nova-compute.service
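As a final end-to-end check, boot an instance and confirm its root disk appears in the vms pool (the flavor, image, and network names are placeholders for whatever exists in your deployment):
openstack server create --flavor m1.tiny --image cirros --network demo-net test-vm
rbd ls vms --id cinder
The instance disk should show up as <instance-uuid>_disk.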

That's everything for "How to use Ceph as the backend storage for OpenStack Pike." Thanks for reading, and I hope it helps.
