Setting up OpenStack Yoga (all-in-one) on Ubuntu 20.04

2022-11-17

Most articles build an all-in-one with devstack; here I install the individual components manually instead.

Environment Preparation

Environment. Some basic setup is needed first. My system runs in a virtual machine, and the password chosen during installation is not the root password, so root has to be reset: run the command below and enter the password you set during installation. sudo passwd root Strictly speaking the firewall and SELinux should also be disabled here, but neither is installed on this system, which saves the trouble.
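
If a firewall such as ufw does happen to be installed on your image, a minimal sketch for keeping it out of the way (assumes the ufw package is present) is:

ufw status     # check whether ufw is installed and active at all
ufw disable    # stop filtering traffic now and across reboots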

Switching the APT Mirror

Switch to a faster mirror first to speed up downloads. Edit the source list: gedit /etc/apt/sources.list

deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse 
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse 
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse 
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse 
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse 
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse 
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse 
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse 
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse 
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse


Then refresh the package metadata (using upgrade instead would actually upgrade the installed packages): sudo apt-get update

Network Configuration

First, NetworkManager must be stopped, otherwise the static IP will not take effect: it conflicts with /etc/network/interfaces, and if both are present NetworkManager manages the network by default. Its netplan configuration apparently lives under /etc/netplan/. (A sketch for switching fully over to ifupdown follows the two commands below.)

systemctl stop NetworkManager
systemctl disable NetworkManager
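
On a stock Ubuntu 20.04 server the /etc/network/interfaces file used below is only honoured once the classic ifupdown tooling is installed; otherwise netplan/systemd-networkd keeps managing the NICs. A sketch of switching over, under the assumption that ifupdown should own the interfaces:

apt install ifupdown -y                       # provides interfaces(5) support and the "networking" service restarted later
mkdir -p /root/netplan-backup
mv /etc/netplan/*.yaml /root/netplan-backup/  # keep netplan from also configuring the NICs (backup path is just an example)
systemctl enable networking                   # the ifupdown service used for restarts in this guide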

Next, enable IP forwarding by editing /etc/sysctl.conf:

net.ipv4.ip_forward=1    # uncomment this line

Run sysctl -p to apply the change.
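
A quick check that the change is active:

sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1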

Static IP

Configure the network next: switch the VM to bridged mode and assign a static IP. ifupdown reads /etc/network/interfaces; edit it as follows:

auto lo
iface lo inet loopback

# The primary network interface

auto ens33
iface ens33 inet static
address 192.168.1.210
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1

DNS can be set in /etc/resolv.conf, but a change made there is temporary and is lost after a reboot:

nameserver 114.114.114.114

Then restart networking with systemctl restart networking. At this point external hosts (e.g. baidu.com) should be reachable by ping.

Bridging

In practice I found a bridge works better: the host connects through a bridge bound to the physical NIC. A simple configuration follows (manual leaves the interface itself unconfigured, which is typical for a bridge member; static assigns a fixed IP). The bridge-utils package this needs is noted right after the example.

auto lo
iface lo inet loopback

# The primary network interface

auto ens33
iface ens33 inet manual

# inside bridge network port

auto br-mgmt
iface br-mgmt inet static
	address 192.168.1.210
	#network 192.168.1.0
	netmask 255.255.255.0
	#broadcast 192.168.1.255
	gateway 192.168.1.1

	# set static route for LAN
	#post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
	#post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
	bridge_ports ens33
	bridge_stp off
	bridge_fd 0
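
The bridge_ports / bridge_stp / bridge_fd stanzas above are handled by the bridge-utils helper, which is not installed by default; installing it first is assumed here:

apt install bridge-utils -y    # provides brctl, needed by ifupdown to build the br-mgmt bridge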

Binding multiple bridges also seems to work. With more than one bridge, give only the externally facing bridge a gateway; configuring a gateway on every bridge may cause errors. (Keep in mind that a physical interface can only be enslaved to one bridge at a time, so a second bridge would normally need its own port.)

auto lo
iface lo inet loopback

# The primary network interface

auto ens33
iface ens33 inet manual

# inside bridge network port

auto br-ens33
iface br-ens33 inet static
	address 192.168.1.210
	#network 192.168.1.0
	netmask 255.255.255.0
	#broadcast 192.168.1.255
	gateway 192.168.1.1

	# set static route for LAN
	#post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
	#post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
	bridge_ports ens33
	bridge_stp off
	bridge_fd 1
	

auto br-mgmt
iface br-mgmt inet static
	address 10.17.23.10
	netmask 255.255.255.0

	# set static route for LAN
	#post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
	#post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
	bridge_ports ens33
	bridge_stp off
	bridge_fd 1

After this change, restarting just the networking service may fail; I simply rebooted the machine.

Making the DNS Change Permanent

Edit /etc/systemd/resolved.conf, uncomment the DNS line inside, and fill in the desired nameserver.
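
A sketch of the relevant lines (using the same 114.114.114.114 server as above) and the restart that makes them take effect:

# /etc/systemd/resolved.conf
[Resolve]
DNS=114.114.114.114

systemctl restart systemd-resolved    # re-read the file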

Hostname

Edit /etc/hosts and /etc/hostname. I forget how to make the change take effect, so just reboot. (A reboot-free sketch follows.)
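
A sketch of what that looks like for this all-in-one host; controller and 10.17.23.10 are the name and management IP used throughout the rest of this guide, and hostnamectl avoids the reboot:

hostnamectl set-hostname controller
echo "10.17.23.10 controller" >> /etc/hosts    # let the name controller resolve everywhere it is used below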

Base Services

Time Service

Network Time Protocol (NTP). Install chrony: apt install chrony. Then edit /etc/chrony/chrony.conf and add a time server. On the controller node, also allow the other nodes to reach it by permitting their subnet:

# comment out the pool lines
server controller iburst
allow 10.0.0.0/8
local stratum 10
# the full configuration file follows
file:/etc/chrony/chrony.conf
# Welcome to the chrony configuration file. See chrony.conf(5) for more
# information about usable directives.
# This will use (up to):
# - 4 sources from ntp.ubuntu.com which some are ipv6 enabled
# - 2 sources from 2.ubuntu.pool.ntp.org which is ipv6 enabled as well
# - 1 source from [01].ubuntu.pool.ntp.org each (ipv4 only atm)
# This means by default, up to 6 dual-stack and up to 2 additional IPv4-only
# sources will be used.
# At the same time it retains some protection against one of the entries being
# down (compare to just using one of the lines). See (LP: #1754358) for the
# discussion.
#
# About using servers from the NTP Pool Project in general see (LP: #104525).
# Approved by Ubuntu Technical Board on 2011-02-08.
# See http://www.pool.ntp.org/join.html for more information.
#pool ntp.ubuntu.com iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
server controller
allow 0.0.0.0/0
# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys
# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift
# Uncomment the following line to turn logging on.
#log tracking measurements statistics
# Log files location.
logdir /var/log/chrony
# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0
# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
rtcsync
# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3
local stratum 10

Restart the service: service chrony restart

Verify with chronyc sources. An entry starting with ^* means the source is synced; ? means the server could not be reached.

210 Number of sources = 1
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |            \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                   10   6    77   209     49ns[ 1267ns] +/- 2958ns

timedatectl is another way to verify: once synchronization succeeds, System clock synchronized changes from no to yes.

root@controller:/home/kang# timedatectl
               Local time: Thu 2022-04-21 16:09:45 +08
           Universal time: Thu 2022-04-21 08:09:45 UTC
                 RTC time: Thu 2022-04-21 08:09:45
                Time zone: Asia/Ulaanbaatar (+08, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

OpenStack Packages

OpenStack packages for Ubuntu: choose the release to install; here Yoga is used. Enable the archive with add-apt-repository cloud-archive:yoga, then install the client: apt install python3-openstackclient python3-pip. (The commands are collected into one block below.)
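
Collected as one runnable block (the extra apt update after enabling the archive is my addition, but it is normally needed before installing):

add-apt-repository cloud-archive:yoga
apt update                                          # refresh indexes for the new repository
apt install -y python3-openstackclient python3-pip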

Database

SQL database for Ubuntu. Install the database: apt install mariadb-server python3-pymysql. Then create the configuration file /etc/mysql/mariadb.conf.d/99-openstack.cnf:

filename:/etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 10.17.23.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the database with service mysql restart, then set a password by running mysql_secure_installation.

Message Queue

Message queue for Ubuntu. Install it: apt install rabbitmq-server. Then create the openstack user and grant it permissions:

rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

memcached

Memcached for Ubuntu. Install: apt install memcached python3-memcache -y. In /etc/memcached.conf, replace 127.0.0.1 with the controller IP: sed -i 's/127.0.0.1/10.17.23.10/g' /etc/memcached.conf. Then restart: service memcached restart

etcd

Etcd for Ubuntu. Install it directly: apt install etcd -y. Then edit /etc/default/etcd, replacing the addresses with your controller node's management IP:

ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://10.17.23.10:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.17.23.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.17.23.10:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.17.23.10:2379"

This can be scripted as well; make sure the script is run from the directory that contains the etcd template file, otherwise cp will not find it:

cp etcd /etc/default/etcd
sed -i "s/127.0.0.1/$controller_ip/g" /etc/default/etcd

Enable and start the service (a quick verification sketch follows the commands):

systemctl enable etcd
systemctl restart etcd
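
A quick sanity check, assuming the etcdctl client was pulled in alongside the etcd package and the v3 API is used:

ETCDCTL_API=3 etcdctl --endpoints=http://10.17.23.10:2379 member list    # should show the single controller member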

OpenStack

OpenStack Yoga Installation Guides. Now move on to installing the individual components.

Keystone

Keystone Installation Tutorial for Ubuntu. Create the database:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' 
IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' 
IDENTIFIED BY 'openstack';

Install the keystone package:

apt install keystone

Along the way, install a tool that makes editing configuration files easier. openstack-utils is not in Ubuntu's repositories (it is a CentOS package); the Ubuntu equivalent is crudini, which is essentially the same tool, written in Python by the same author (see "OpenStack配置文件的快速修改方法"). Install it:

apt install crudini

Edit /etc/keystone/keystone.conf:

crudini --set /etc/keystone/keystone.conf database connection "mysql+pymysql://keystone:${password}@controller/keystone"
crudini --set /etc/keystone/keystone.conf token provider fernet

Populate the database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the admin user:

keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

Then configure Apache by adding a ServerName directive:

filename:/etc/apache2/apache2.conf
ServerName controller

Restart Apache: service apache2 restart. Then create an environment-variable script, substituting the password configured above:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

Create the required domain, projects, users, and roles:

#source adminrc_keystone
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create role_demo
openstack role add --project demo --user demo role_demo

Finally, verify by running openstack token issue.
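
It is also convenient to keep a second rc file for the demo user just created; a sketch (the password demo matches the user-create command above):

# demorc
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3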

Glance

Install and configure (Ubuntu). First create the database; put the SQL into a script:

filename:glance_sql.sql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstack';

Load it: mysql -u root -proot < glance_sql.sql. Then source the adminrc credentials file and install:

apt install glance -y

Create the corresponding user, service, and endpoints:

openstack user create --domain default --password openstack glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Edit /etc/glance/glance-api.conf. I am not sure what the final oslo_limit section is for; it seems to be for talking to keystone, and I had not seen it before:

[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[oslo_limit]
auth_url = http://controller:5000
auth_type = password
user_domain_id = default
username = glance
system_scope = all
password = openstack
service_name = glance
region_name = RegionOne

I still have not worked out what that last section actually does; service_name can apparently be replaced by an endpoint_id. There should also be one more step here, granting a role to that service account (shown below), but I have not done it:

openstack role add --user MY_SERVICE --user-domain Default --system all reader

Populate the database:

su -s /bin/sh -c "glance-manage db_sync" glance

Restart the service:

service glance-api restart

To verify, download the cirros image and upload it to glance:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
glance image-create --name "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility=public

The image can then be listed with openstack image list or glance image-list.

Placement

Install and configure Placement for Ubuntu. First set up the database:

filename:placement_sql.sql
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'openstack';

Load it:

mysql -u root -proot < placement_sql.sql

Configure the placement user, service, and endpoints in OpenStack:

source adminrc
openstack user create --domain default --password $password placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region $region placement public http://controller:8778
openstack endpoint create --region $region placement internal http://controller:8778
openstack endpoint create --region $region placement admin http://controller:8778

Then install placement (the Ubuntu package is placement-api):

apt install placement-api

Edit /etc/placement/placement.conf:

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = openstack

Then sync the database:

su -s /bin/sh -c "placement-manage db sync" placement

Restart Apache:

service apache2 restart

Verify:

placement-status upgrade check

A Python plugin for the openstack client is also needed (verification commands follow):

pip3 install osc-placement
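
With osc-placement installed, Placement can be exercised directly; these are the verification commands from the install guide:

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
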
The full shell script:
#must source adminrc before run this script
controller_ip='10.17.23.10'
password='openstack'
region='RegionOne'
admin_password='admin'
admin_username='admin'
mysql -u root -proot < placement_sql.sql
echo 'succeed to init placement database'
openstack user create --domain default --password $password placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region $region placement public http://controller:8778
openstack endpoint create --region $region placement internal http://controller:8778
openstack endpoint create --region $region placement admin http://controller:8778
echo 'succeed to config placement user and endpoints in openstack'
apt install placement-api -y
crudini --set /etc/placement/placement.conf placement_database connection "mysql+pymysql://placement:${password}@controller/placement"
crudini --set /etc/placement/placement.conf api auth_strategy "keystone"
crudini --set /etc/placement/placement.conf keystone_authtoken auth_url "http://controller:5000/v3"
crudini --set /etc/placement/placement.conf keystone_authtoken memcached_servers "controller:11211"
crudini --set /etc/placement/placement.conf keystone_authtoken auth_type "password"
crudini --set /etc/placement/placement.conf keystone_authtoken project_domain_name "Default"
crudini --set /etc/placement/placement.conf keystone_authtoken user_domain_name "Default"
crudini --set /etc/placement/placement.conf keystone_authtoken project_name "service"
crudini --set /etc/placement/placement.conf keystone_authtoken username "placement"
crudini --set /etc/placement/placement.conf keystone_authtoken password $password
echo 'succeed to config placement.conf [keystone_authtoken]'
su -s /bin/sh -c "placement-manage db sync" placement
echo 'succeed to fullfill placement database'
service apache2 restart
echo 'succeed restart apache2'

Nova

From here on the work splits into controller-node and compute-node parts. For an all-in-one deployment, run the controller steps first and then the compute steps on the same host.

Controller Node

First create the databases and grant privileges:

filename:nova_sql.sql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'openstack';

Then create the nova user, service, and endpoints:

openstack user create --domain default --password $password nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region $region compute public http://controller:8774/v2.1
openstack endpoint create --region $region compute internal http://controller:8774/v2.1
openstack endpoint create --region $region compute admin http://controller:8774/v2.1

Now install the packages:

apt install nova-api nova-conductor nova-novncproxy nova-scheduler

Next, edit /etc/nova/nova.conf:

[api_database]
connection = mysql+pymysql://nova:openstack@controller/nova_api
[database]
connection = mysql+pymysql://nova:openstack@controller/nova
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller:5672/
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = openstack
[DEFAULT]
my_ip = $my_ip
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = openstack

Then sync the databases:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Verify:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Restart the services:

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
The full shell script:
#must source adminrc before run this script
controller_ip='10.17.23.10'
password='openstack'
region='RegionOne'
admin_password='admin'
admin_username='admin'
mysql -u root -proot < nova_sql.sql
echo 'succeed to load nova database'
openstack user create --domain default --password $password nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region $region compute public http://controller:8774/v2.1
openstack endpoint create --region $region compute internal http://controller:8774/v2.1
openstack endpoint create --region $region compute admin http://controller:8774/v2.1
echo 'succeed init openstack nova user'
apt install nova-api nova-conductor nova-novncproxy nova-scheduler -y
echo 'succeed to install nova software'
crudini --set /etc/nova/nova.conf api_database connection "mysql+pymysql://nova:${password}@controller/nova_api"
crudini --set /etc/nova/nova.conf database connection "mysql+pymysql://nova:${password}@controller/nova"
crudini --set /etc/nova/nova.conf DEFAULT transport_url "rabbit://openstack:${password}@controller:5672/"
crudini --set /etc/nova/nova.conf api auth_strategy "keystone"
echo 'succeed to config database ,DEFAULT ,api in nova.conf'
crudini --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri "http://controller:5000/"
crudini --set /etc/nova/nova.conf keystone_authtoken auth_url "http://controller:5000/"
crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers "controller:11211"
crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password
crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
crudini --set /etc/nova/nova.conf keystone_authtoken project_name service
crudini --set /etc/nova/nova.conf keystone_authtoken username nova
crudini --set /etc/nova/nova.conf keystone_authtoken password $password
echo 'succeed to config keystone_authtoken in nova.conf'
crudini --set /etc/nova/nova.conf DEFAULT my_ip $controller_ip
crudini --set /etc/nova/nova.conf vnc enabled true
crudini --set /etc/nova/nova.conf vnc server_listen $controller_ip
crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $controller_ip
crudini --set /etc/nova/nova.conf glance api_servers "http://controller:9292"
crudini --set /etc/nova/nova.conf oslo_concurrency lock_path "/var/lib/nova/tmp"
echo 'succeed config my_ip ,vnc ,glance in nova.conf'
crudini --set /etc/nova/nova.conf placement region_name $region
crudini --set /etc/nova/nova.conf placement project_domain_name Default
crudini --set /etc/nova/nova.conf placement project_name service
crudini --set /etc/nova/nova.conf placement auth_type password
crudini --set /etc/nova/nova.conf placement user_domain_name Default
crudini --set /etc/nova/nova.conf placement auth_url "http://controller:5000/v3"
crudini --set /etc/nova/nova.conf placement username placement
crudini --set /etc/nova/nova.conf placement password $password
echo 'succeed to config placement in nova.conf'
su -s /bin/sh -c "nova-manage api_db sync" nova
echo 'succeed to sync nova api database'
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
echo 'succeed to register cell0 database'
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
echo 'succeed to fullfill nova database'
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Compute Node

First install nova-compute:

apt install nova-compute

Then edit /etc/nova/nova.conf:

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = openstack
[DEFAULT]
my_ip = $my_ip
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = openstack

Edit /etc/nova/nova-compute.conf (see the note after the snippet about choosing between qemu and kvm):

[libvirt]
virt_type = qemu
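
virt_type = qemu means full software emulation. For reference, the install guide's check for hardware virtualization support; if the count is greater than 0, kvm can be used instead:

egrep -c '(vmx|svm)' /proc/cpuinfo
# if the count is > 0, optionally:
# crudini --set /etc/nova/nova-compute.conf libvirt virt_type kvm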

Restart the nova-compute service:

service nova-compute restart

Registering the Compute Node on the Controller

The compute node now has to be registered from the controller. Switch to the controller and check the service list; the state should be up. The first time I got this wrong by leaving an IP out of the config file, and the service showed down.

openstack compute service list --service nova-compute

Discover the compute node (an automatic-discovery option is sketched after the command):

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
The full shell script:
#must source adminrc before run this script
controller_ip='10.17.23.10'
password='openstack'
region='RegionOne'
admin_password='admin'
admin_username='admin'
apt install nova-compute -y
crudini --set /etc/nova/nova.conf DEFAULT transport_url "rabbit://openstack:${password}@controller"
crudini --set /etc/nova/nova.conf api auth_strategy keystone
echo 'succeed to config database ,api in nova.conf'
crudini --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri "http://controller:5000/"
crudini --set /etc/nova/nova.conf keystone_authtoken auth_url "http://controller:5000/"
crudini --set /etc/nova/nova.conf keystone_authtoken memcached_servers "controller:11211"
crudini --set /etc/nova/nova.conf keystone_authtoken auth_type password
crudini --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
crudini --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
crudini --set /etc/nova/nova.conf keystone_authtoken project_name service
crudini --set /etc/nova/nova.conf keystone_authtoken username nova
crudini --set /etc/nova/nova.conf keystone_authtoken password $password
echo 'succeed to config keystone_authtoken in nova.conf'
crudini --set /etc/nova/nova.conf DEFAULT my_ip $controller_ip
crudini --set /etc/nova/nova.conf vnc enabled true
crudini --set /etc/nova/nova.conf vnc server_listen "0.0.0.0"
crudini --set /etc/nova/nova.conf vnc server_proxyclient_address $controller_ip
crudini --set /etc/nova/nova.conf vnc novncproxy_base_url "http://controller:6080/vnc_auto.html"
echo 'succeed to config vnc in nova.conf'
crudini --set /etc/nova/nova.conf glance api_servers "http://controller:9292"
crudini --set /etc/nova/nova.conf oslo_concurrency lock_path "/var/lib/nova/tmp"
echo 'succeed to config glance ,oslo_concurrency in nova.conf'
crudini --set /etc/nova/nova.conf placement region_name $region
crudini --set /etc/nova/nova.conf placement project_domain_name Default
crudini --set /etc/nova/nova.conf placement project_name service
crudini --set /etc/nova/nova.conf placement auth_type password
crudini --set /etc/nova/nova.conf placement user_domain_name Default
crudini --set /etc/nova/nova.conf placement auth_url "http://controller:5000/v3"
crudini --set /etc/nova/nova.conf placement username placement
crudini --set /etc/nova/nova.conf placement password $password
echo 'succeed to config placement in nova.conf'
crudini --set /etc/nova/nova-compute.conf libvirt virt_type qemu
service nova-compute restart
echo 'succeed to restart nova-compute'

Cinder

If cinder is needed, a spare disk has to be prepared first, and the steps differ between physical machines and virtual machines.

Neutron

Install and configure for Ubuntu. This again splits into controller-node and compute-node parts.

Controller Node

Create the database:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';

Create the neutron user, service, and endpoints:

openstack user create --domain default --password openstack neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
network public http://controller:9696
openstack endpoint create --region RegionOne \
network internal http://controller:9696
openstack endpoint create --region RegionOne \
network admin http://controller:9696

Networking Option 2: Self-service networks is chosen here. Install the packages:

apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent

Edit /etc/neutron/neutron.conf:

filename:/etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:openstack@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = openstack
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = openstack
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

filename: /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini; my_ip here is the controller node's management IP:

filename:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = $my_ip
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit /etc/neutron/l3_agent.ini:

filename:/etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

Edit /etc/neutron/dhcp_agent.ini:

filename:/etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Edit /etc/neutron/metadata_agent.ini. The shared secret can be any string, but it must match the one set in the [neutron] section of nova.conf below:

filename:/etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = openstack

Edit /etc/nova/nova.conf; the secret here must match the one above:

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = true
metadata_proxy_shared_secret = openstack

Populate and sync the neutron database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the services:

service nova-api restart
service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
The full shell script:
#must source adminrc before run this script
controller_ip='10.17.23.10'
password='openstack'
region='RegionOne'
admin_password='admin'
admin_username='admin'
network_interface_name='ens33'
mysql -u root -proot < neutron_sql.sql
openstack user create --domain default --password $password neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region $region \
network public http://controller:9696
openstack endpoint create --region $region \
network internal http://controller:9696
openstack endpoint create --region $region \
network admin http://controller:9696
echo 'succeed to init openstack neutron user ,service ,endpoint'
apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent -y
crudini --set /etc/neutron/neutron.conf database connection "mysql+pymysql://neutron:${password}@controller/neutron"
crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
crudini --set /etc/neutron/neutron.conf DEFAULT transport_url "rabbit://openstack:${password}@controller"
crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
crudini --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
echo 'succeed to config DEFAULT in neutron.conf'
crudini --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri "http://controller:5000"
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url "http://controller:5000"
crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers "controller:11211"
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name Default
crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name Default
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service
crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron
crudini --set /etc/neutron/neutron.conf keystone_authtoken password $password
echo 'succeed to config keystone_authtoken in neutron.conf'
crudini --set /etc/neutron/neutron.conf nova auth_url "http://controller:5000"
crudini --set /etc/neutron/neutron.conf nova auth_type password
crudini --set /etc/neutron/neutron.conf nova project_domain_name Default
crudini --set /etc/neutron/neutron.conf nova user_domain_name Default
crudini --set /etc/neutron/neutron.conf nova region_name $region
crudini --set /etc/neutron/neutron.conf nova project_name service
crudini --set /etc/neutron/neutron.conf nova username nova
crudini --set /etc/neutron/neutron.conf nova password $password
echo 'succeed to config nova in neutron.conf'
crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path "/var/lib/neutron/tmp"
echo 'succeed to config oslo_concurrency in neutron.conf'
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers "flat,vlan,vxlan"
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers "linuxbridge,l2population"
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
echo 'succeed to config ml2 in ml2_conf.conf'
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges "1:1000"
echo 'succeed to config ml2_type_* in ml2_conf.conf'
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings "provider:$network_interface_name"
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $controller_ip
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver "neutron.agent.linux.iptables_firewall.IptablesFirewallDriver"
echo 'succeed to config linuxbridge.conf'
crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
echo 'succeed to config l3_agent.conf'
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver "neutron.agent.linux.dhcp.Dnsmasq"
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
echo 'succeed to config dhcp_agent.conf'
crudini --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
crudini --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $password
echo 'succeed to config metadata_agent.conf'
crudini --set /etc/nova/nova.conf neutron auth_url "http://controller:5000"
crudini --set /etc/nova/nova.conf neutron auth_type password
crudini --set /etc/nova/nova.conf neutron project_domain_name Default
crudini --set /etc/nova/nova.conf neutron user_domain_name Default
crudini --set /etc/nova/nova.conf neutron region_name $region
crudini --set /etc/nova/nova.conf neutron project_name service
crudini --set /etc/nova/nova.conf neutron username neutron
crudini --set /etc/nova/nova.conf neutron password $password
crudini --set /etc/nova/nova.conf neutron service_metadata_proxy true
crudini --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $password
echo 'succeed to config neutron in nova.conf'
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
echo 'succeed to fullfill neutron database'
service nova-api restart
service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
echo 'succeed to restart services'

Compute Node

First install the package:

apt install neutron-linuxbridge-agent

Edit /etc/neutron/neutron.conf:

filename:/etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = openstack
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Again, Networking Option 2: Self-service networks is used. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini; the IP address here is the compute node's IP:

filename:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = $my_ip
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit /etc/nova/nova.conf:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = openstack

Then restart the services:

service nova-compute restart
service neutron-linuxbridge-agent restart

Verify:

openstack extension list --network
openstack network agent list

Errors

I hit one error during verification: keystoneauth1.exceptions.auth_plugins.MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url
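
The message means some config section that authenticates against keystone is missing its auth_url. A way to narrow it down, sketched with the same crudini tool used throughout (the expected values are the ones configured above):

crudini --get /etc/neutron/neutron.conf keystone_authtoken auth_url    # expect http://controller:5000
crudini --get /etc/nova/nova.conf neutron auth_url                     # expect http://controller:5000
# re-set whichever comes back empty, e.g.:
# crudini --set /etc/nova/nova.conf neutron auth_url "http://controller:5000"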

The full shell script:
#must source adminrc before run this script
compute_ip='10.17.23.10'
password='openstack'
region='RegionOne'
admin_password='admin'
admin_username='admin'
network_interface_name='ens33'
apt install neutron-linuxbridge-agent -y
crudini --set /etc/neutron/neutron.conf DEFAULT transport_url "rabbit://openstack:${password}@controller"
crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
echo 'succeed to config DEFAULT in neutron.conf'
crudini --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri "http://controller:5000"
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url "http://controller:5000"
crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers "controller:11211"
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name Default
crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name Default
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service
crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron
crudini --set /etc/neutron/neutron.conf keystone_authtoken password $password
echo 'succeed to config keystone_authtoken in neutron.conf'
crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path "/var/lib/neutron/tmp"
echo 'succeed to config oslo in neutron.conf'
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings "provider:${network_interface_name}"
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $compute_ip
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver "neutron.agent.linux.iptables_firewall.IptablesFirewallDriver"
echo 'succeed to config linuxbridge_agent.ini'
crudini --set /etc/nova/nova.conf neutron auth_url "http://controller:5000"
crudini --set /etc/nova/nova.conf neutron auth_type password
crudini --set /etc/nova/nova.conf neutron project_domain_name Default
crudini --set /etc/nova/nova.conf neutron user_domain_name Default
crudini --set /etc/nova/nova.conf neutron region_name $region
crudini --set /etc/nova/nova.conf neutron project_name service
crudini --set /etc/nova/nova.conf neutron username neutron
crudini --set /etc/nova/nova.conf neutron password $password
echo 'succeed to config neutron in nova.conf'
service nova-compute restart
service neutron-linuxbridge-agent restart
echo 'succeed to restart services'

Horizon

Set up the dashboard. Install it first:

apt install openstack-dashboard

Edit /etc/openstack-dashboard/local_settings.py:

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = { 

'default': { 

'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s/identity:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = { 

"identity": 3,
"image": 2,
"volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
OPENSTACK_NEUTRON_NETWORK = { 

...
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"

Edit /etc/apache2/conf-available/openstack-dashboard.conf and make sure this directive is present:

WSGIApplicationGroup %{GLOBAL}

Restart the service (a quick access check follows):

systemctl reload apache2.service
service apache2 restart
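
The dashboard should now be reachable at http://controller/horizon with the admin or demo credentials created earlier; a quick check from the shell (just a sketch):

curl -sI http://controller/horizon | head -n 1    # expect HTTP 200 or a redirect to the login page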
