Building OpenStack Pike from Scratch (Ubuntu 16.04 Desktop)

2022-01-12 10:09:33

【Step 1: Create the Virtual Machines】 -----------

Find a server (your own PC also works) and create two virtual machines from the Ubuntu-16.04.7 image.

Name them openstack-controller and openstack-compute.

ubuntu-16.04.7-desktop-amd64.iso can be downloaded directly from the official site, https://releases.ubuntu.com/16.04.7/. Note that the Pike release only works on Ubuntu 16.04; other releases have compatibility problems.

As for VM sizing, err slightly on the generous side; the controller node needs more memory and disk.

Controller node [CPU: 1 * 4 cores, RAM 8 GB, disk 40 GB]

Compute node [CPU: 1 * 4 cores, RAM 4 GB, disk 30 GB]

At initialization each VM needs one management NIC connected to the Internet; it only has to reach the official apt mirror (cn.archive.ubuntu.com), so no proxy is required.

【Step 2: Set Up the VM Networks】 -----------

Using vSphere, create three networks on the server: vxlan-net, vlan-net and flat-net.

Once created, you can see the VM port groups attached to the physical adapters.

Open the VM settings dialog and choose Add -- Add Network Adapter. Add four network adapters in total, corresponding to the management, VXLAN, VLAN and flat networks. These NICs will be used later when installing the Neutron components.

The network names must be identical on both VMs: if the controller node's network is called vxlan-net, the compute node's must also be called vxlan-net, so that both ends sit on the same LAN.

Inside each VM, assign the IP addresses. Only the management network needs a gateway; the other NICs need no gateway and can use directly connected routes.

IP addressing is fairly flexible; just avoid conflicts within each subnet. For example:

| Network    | controller node                  | compute node                     |
|------------|----------------------------------|----------------------------------|
| Management | xx.xx.xx.61 (Internet-facing IP) | xx.xx.xx.62 (Internet-facing IP) |
| VXLAN      | 10.0.1.61/24                     | 10.0.1.62/24                     |
| VLAN       | 10.0.0.61/24                     | 10.0.0.62/24                     |
| Flat       | 172.31.0.61/24                   | 172.31.0.62/24                   |

【Step 3: Install Base Software】 -----------

1、Install SSH: sudo apt install ssh

2、Install vim: sudo apt install vim

3、Install net-tools: sudo apt install net-tools

Install any other tools you like. The apt sources need no change; the official Ubuntu China mirror cn.archive.ubuntu.com is fine, or switch to another mirror for faster downloads if you want.

4、Set up host aliases: sudo vim /etc/hosts and add two lines

```
10.0.0.61 controller
10.0.0.62 compute
```

Test it: run ping compute on the controller node and ping controller on the compute node.
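The two hosts entries can also be added idempotently with a small helper so that re-running the setup never duplicates lines. This is a sketch under stated assumptions: the `add_host_entry` function name is mine, and it is demonstrated against a temporary file instead of the real /etc/hosts.

```shell
# add_host_entry FILE IP NAME -- append "IP NAME" only if NAME is not already mapped
add_host_entry() {
  file="$1"; ip="$2"; name="$3"
  grep -qE "[[:space:]]${name}\$" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}

HOSTS=$(mktemp)                                # stand-in for /etc/hosts
add_host_entry "$HOSTS" 10.0.0.61 controller
add_host_entry "$HOSTS" 10.0.0.62 compute
add_host_entry "$HOSTS" 10.0.0.61 controller   # duplicate call is a no-op
cat "$HOSTS"
```

On the real nodes you would point it at /etc/hosts (with sudo) instead of the temp file.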

5、Allow root login over SSH: sudo vim /etc/ssh/sshd_config and change the # Authentication block to

```
# Authentication
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
```

6、If the VMs can reach the Internet, time synchronization on the Ubuntu desktop edition works out of the box; check the VM time with the date command.

【Step 4: Install the OpenStack Shared Components】 -----------

1、!!! Important: add the OpenStack Pike repository for Ubuntu 16.04

```
sudo add-apt-repository cloud-archive:pike
```

This creates the file /etc/apt/sources.list.d/cloudarchive-pike.list; just check that it exists.

2、Install the OpenStack command-line client

```
sudo apt install python-openstackclient
$ openstack --version
openstack 3.12.0
```

3、Install MariaDB

MariaDB is a fork of MySQL; after MySQL was acquired by Oracle there were licensing concerns, so MariaDB is the common choice.

```
sudo apt install mariadb-server python-pymysql
```

After installation, create the file /etc/mysql/mariadb.conf.d/99-openstack.cnf with the following contents:

```
[mysqld]
bind-address = 0.0.0.0
# bind-address = 10.0.0.61
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

The bind address can also be the 10.x address, as long as the compute node can reach it; 0.0.0.0 binds all addresses.

Restart the mysql service (yes, the service is still named mysql):

```
sudo service mysql restart
```

Try logging in with mysql -u root. If you get the error "Access denied for user 'root'@'localhost'", switch to root with sudo su and try again.

If that still fails, the problem is not the password but the authentication plugin.

Workaround: add the line skip-grant-tables to /etc/mysql/mariadb.conf.d/99-openstack.cnf

```
[mysqld]
bind-address = 0.0.0.0
# bind-address = 10.0.0.61
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
skip-grant-tables
```

```
$ sudo service mysql restart
$ mysql -u root
MariaDB [(none)]> use mysql;
MariaDB [mysql]> select user, plugin from user;
### check the plugin type; if root's plugin is auth_socket or unix_socket, that is the problem
MariaDB [mysql]> update user set authentication_string=password("your-root-password"), plugin='mysql_native_password' where user='root';
```

Replace the string "your-root-password" accordingly.

Finally, remove skip-grant-tables from 99-openstack.cnf and restart mysql.

```
$ sudo service mysql restart
$ mysql -u root -p
```
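The enable/remove cycle for skip-grant-tables is easy to script so the rescue flag is never left behind by accident. A minimal sketch against a temporary copy of the config file (the file path and contents here are illustrative stand-ins for 99-openstack.cnf):

```shell
CNF=$(mktemp)                                   # stand-in for 99-openstack.cnf
printf '[mysqld]\nbind-address = 0.0.0.0\n' > "$CNF"

# enable the rescue flag only if it is not already present
grep -q '^skip-grant-tables$' "$CNF" || echo 'skip-grant-tables' >> "$CNF"
grep -c '^skip-grant-tables$' "$CNF"            # prints 1 -- flag is active

# ... fix the root plugin in mysql, then remove the flag again ...
sed -i '/^skip-grant-tables$/d' "$CNF"
tail -n 1 "$CNF"                                # prints the original last line again
```

After each change you would restart mysql, exactly as in the steps above.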

4、Install the message queue -- RabbitMQ

```
sudo apt install rabbitmq-server
```

Add an openstack user; the password here is admin, change it as needed:

```
rabbitmqctl add_user openstack admin
```

Grant the openstack user configure, write and read permissions:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

5、Install the caching service -- Memcached

```
sudo apt install memcached python-memcache
```

Edit /etc/memcached.conf; it contains a -l option. Change it to the controller's IP, 10.0.0.61 (remember this IP, it appears again later):

```
$ sudo vim /etc/memcached.conf
-l 10.0.0.61
```

6、Etcd is not needed here; skip it.

【Step 5: Install the OpenStack Keystone Service】 -----------

------------------ All of the following is done on the controller node ---------------------------

1、Create the database and user

Log in to mysql: if you set a password earlier, use mysql -u root -p; otherwise switch to root with sudo su and run mysql directly.

```
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
```

Replace KEYSTONE_DBPASS with a suitable password, e.g. replace it with admin as this guide does.

2、Install the Keystone packages, then edit /etc/keystone/keystone.conf

```
$ sudo apt install keystone apache2 libapache2-mod-wsgi

$ sudo vim /etc/keystone/keystone.conf
```

```
[database]
# ...
connection = mysql+pymysql://keystone:admin@controller/keystone

[token]
# ...
provider = fernet
```

The password here has already been changed to admin; keep the config file in sync with whatever password you chose.

3、Populate the database

```
$ sudo su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
$ sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the Identity service (note that the password here has been changed to admin):

```
keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
```

4、Configure the Apache HTTP server: sudo vim /etc/apache2/apache2.conf and add one line

```
ServerName controller
```

Restart Apache: sudo service apache2 restart

5、Create the domain, projects, users and roles (run straight through; the user create command prompts for the demo user's password)

```
$ openstack project create --domain default --description "Service Project" service
$ openstack project create --domain default --description "Demo Project" demo
$ openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
$ openstack role create user
$ openstack role add --project demo --user demo user
```

Verify it: for the admin user enter admin's password, for the demo user enter demo's password.

```
$ unset OS_AUTH_URL OS_PASSWORD
$ openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
```

Create a script named admin-openrc:

```
sudo vim admin-openrc
```

```
### environment variable settings; change OS_PASSWORD as needed -- here it is already admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
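Before running openstack client commands it is worth checking that sourcing the credential file really exported everything. A hedged sketch (the temp file stands in for admin-openrc, and only a subset of the variables is shown):

```shell
OPENRC=$(mktemp)                    # stand-in for admin-openrc
cat > "$OPENRC" <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
. "$OPENRC"

# verify every required variable is non-empty
missing=0
for v in OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_IDENTITY_API_VERSION; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "missing: $v"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "openrc OK"
```

On the real node you would simply `. admin-openrc` and run the same loop over the full variable list.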

Test it:

```
$ . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:44:35.659723Z                                     |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+
```

【Step 6: Install the OpenStack Glance Service】 -----------

------------------ All of the following is done on the controller node ---------------------------

1、As with Keystone, first create the database and user

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
```

Replace GLANCE_DBPASS with a suitable password; this guide again assumes admin.

2、Create the user, service and endpoints

```
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image

### create the API endpoints for the glance service
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292
```

3、Install the Glance packages

Edit the config file /etc/glance/glance-api.conf:

```
$ sudo apt install glance
$ sudo vim /etc/glance/glance-api.conf
```

```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Edit the config file /etc/glance/glance-registry.conf:

```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone
```

Populate the glance database:

```
$ sudo su -s /bin/sh -c "glance-manage db_sync" glance
```

Finally, restart the services:

```
# service glance-registry restart
# service glance-api restart
```

Test it:

```
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
$ openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
```

If wget hangs, try https://github.com/Areturn/openstack-install/blob/master/cirros-0.3.5-x86_64-disk.img, or find the image elsewhere online; it is small and widely mirrored.
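Wherever the image comes from, it is sensible to check the download before `openstack image create`. A minimal sketch that computes size and md5 for an arbitrary file (the generated temp file stands in for the real cirros image, and the "published checksum" here is simulated rather than fetched from a mirror):

```shell
IMG=$(mktemp)                        # stand-in for cirros-0.3.5-x86_64-disk.img
printf 'fake-image-data' > "$IMG"

size=$(wc -c < "$IMG")
sum=$(md5sum "$IMG" | awk '{print $1}')
echo "size=$size md5=$sum"

# in reality, take the expected value from the mirror's MD5SUMS file
expected=$(md5sum "$IMG" | awk '{print $1}')
[ "$sum" = "$expected" ] && echo "checksum OK"
```

A truncated wget download fails this comparison immediately, before you waste time debugging a broken upload.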

【Step 7: Install the OpenStack Nova Service】 -----------

------------------ Part 1: controller node configuration (run on the controller node) ---------------------------

1、As usual, create the databases and user first

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
```

Replace NOVA_DBPASS with a suitable password; this guide again uses admin in the examples.

```
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin

$ openstack service create --name nova --description "OpenStack Compute" compute

$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```

Then create a placement user (password again assumed to be admin), along with its service and endpoints:

```
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:

$ openstack role add --project service --user placement admin

$ openstack service create --name placement --description "Placement API" placement

$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
```

2、Install the Nova packages. Note that nova-compute is not in this list; the controller node does not need it.

```
# apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api
```

Configure /etc/nova/nova.conf. There is quite a lot here; remember to substitute the passwords.

```
[DEFAULT]
# comment out the log_dir line
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.61
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
```

If juggling passwords gets tedious, just set them all to the same value.

3、Register the cells and populate the databases

```
$ sudo su -s /bin/sh -c "nova-manage api_db sync" nova
$ sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
$ sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
$ sudo su -s /bin/sh -c "nova-manage db sync" nova
$ sudo nova-manage cell_v2 list_cells
```

4、Restart all the services

```
$ service nova-api restart
$ service nova-consoleauth restart
$ service nova-scheduler restart
$ service nova-conductor restart
$ service nova-novncproxy restart
```

------------------ Part 2: compute node configuration (run on the compute node) ---------------------------

1、Install the nova-compute package

```
# apt install nova-compute
```

2、Configure /etc/nova/nova.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.62

use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
```

3、Check for hardware virtualization support

```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
4
```

If this prints >= 1, no change is needed. If it prints 0, edit /etc/nova/nova-compute.conf and change kvm to qemu:

```
[libvirt]
# virt_type = kvm
virt_type = qemu
```
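The check above can be folded into a single snippet that decides virt_type automatically. This is a sketch: it reads a sample flags line so it runs anywhere; on a real compute node you would read /proc/cpuinfo as indicated in the comment.

```shell
# on a real host use: flags=$(grep -E '^flags' /proc/cpuinfo)
flags='flags : fpu vme de pse svm'   # sample line with AMD-V (svm) present

# vmx = Intel VT-x, svm = AMD-V; either one allows kvm, otherwise fall back to qemu
if printf '%s\n' "$flags" | grep -Eq 'vmx|svm'; then
  virt_type=kvm
else
  virt_type=qemu
fi
echo "virt_type = $virt_type"
```

The echoed line matches exactly what you would write into the [libvirt] section of nova-compute.conf.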

4、Restart the service

```
$ sudo service nova-compute restart
```

5、Verify ------- run on the controller node

```
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
$ sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
```

6、Enable automatic host discovery (optional)

```
sudo vim /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300
```

【Step 8: Install the OpenStack Neutron Service】 -----------

The Neutron service is by far the most complex; it requires configuring networks and a virtual switch. This guide uses Open vSwitch.

------------------ Part 1: controller node configuration (run on the controller node) ---------------------------

1、As usual, create the database and user first

```
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
```

2、Create the user, service and endpoints -- by now you know this routine by heart

```
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network

$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
```

3、We build self-service networks here, which requires neutron-openvswitch-agent

```
sudo apt install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
```

4、Configure neutron-server

```
sudo vim /etc/neutron/neutron.conf
```

```
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
transport_url = rabbit://openstack:RABBIT_PASS@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

5、Configure the ML2 plugin

```
sudo vim /etc/neutron/plugins/ml2/ml2_conf.ini
```

```
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Note that flat_networks, vni_ranges and enable_ipset go in their own sections, not under [ml2].
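The Pike reference layout for ml2_conf.ini splits the options across [ml2], [ml2_type_flat], [ml2_type_vxlan] and [securitygroup]; a quick sanity check that all required section headers survived your edit can be scripted. This sketch runs against a temporary copy of the file (the path and contents are illustrative):

```shell
INI=$(mktemp)                        # stand-in for /etc/neutron/plugins/ml2/ml2_conf.ini
cat > "$INI" <<'EOF'
[ml2]
type_drivers = flat,vlan,vxlan
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
EOF

# verify each expected ini section header exists exactly as "[name]"
ok=1
for s in ml2 ml2_type_flat ml2_type_vxlan securitygroup; do
  grep -q "^\[$s\]$" "$INI" || { echo "missing section: [$s]"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "ml2_conf.ini sections OK"
```

Pointing `INI` at the real file gives you the same check on a live node.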

6、Configure the L3 agent, DHCP agent and metadata agent

```
sudo vim /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = openvswitch
```

```
sudo vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

```
sudo vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

7、Go back and configure Nova to talk to Neutron

```
sudo vim /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```

8、Populate the database

```
$ sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

9、Restart the services

```
$ service nova-api restart
$ service neutron-server restart
$ service neutron-openvswitch-agent restart
$ service neutron-dhcp-agent restart
$ service neutron-metadata-agent restart
$ service neutron-l3-agent restart
```

------------------ Part 2: compute node configuration (run on the compute node) ---------------------------

1、Install the Open vSwitch agent

```
apt install neutron-openvswitch-agent
```

2、Edit the Neutron configuration

```
sudo vim /etc/neutron/neutron.conf
```

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

3、Configure Nova

```
sudo vim /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```

4、Restart the service

```
$ sudo service nova-compute restart
```

------------------ Part 3: Open vSwitch configuration (install on both the controller and compute nodes) ---------------------------

1、Install and configure Open vSwitch

```
$ sudo apt install openvswitch-switch
```

2、Add the external bridge br-ex

```
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex <original NIC>
```

As mentioned earlier, each VM has four NICs; use the fourth one here, the one attached to flat-net.

Remove the original NIC's configuration and configure br-ex instead. Replace <original NIC> with your NIC's name, usually something like ensxx.

```
auto br-ex
iface br-ex inet static
address 172.31.0.61
netmask 255.255.255.0

## External network interface
auto <original NIC>
iface <original NIC> inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
```

Bring the br-ex interface back up:

```
$ sudo ifdown br-ex
$ sudo ifup br-ex
```

3、Configure neutron-openvswitch-agent

```
sudo vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

[agent]
tunnel_types = vxlan
l2_population = true

[ovs]
local_ip = 10.0.1.61
bridge_mappings = provider:br-ex

[securitygroup]
firewall_driver = openvswitch
```

Remember to adjust local_ip: in this guide the controller node is 10.0.1.61 and the compute node is 10.0.1.62.

4、Restart the services (every node uses the same configuration apart from the IP; restart after configuring each one)

```
$ sudo service neutron-l3-agent restart
$ sudo service neutron-openvswitch-agent restart
```

5、That completes essentially all of the Neutron configuration; finally, restart neutron-server on the controller node

```
sudo service neutron-server restart
```

6、Check the OVS connection status

```
$ sudo ovs-vsctl show
ee4c4da7-6be5-4c98-9312-1de608132b4d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "ens35"
            Interface "ens35"
    ovs_version: "2.8.4"
```

If every is_connected is true, the plugin installation succeeded.
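Scanning that output by eye gets tedious across several nodes; counting the is_connected lines can be scripted. A sketch over saved output (the sample string below is a shortened copy of the output above; on a real node you would capture `sudo ovs-vsctl show` as shown in the comment):

```shell
# normally: ovs_out=$(sudo ovs-vsctl show)
ovs_out='Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
            is_connected: true
    Bridge br-tun
            is_connected: true
    Bridge br-ex
            is_connected: true'

# one manager connection plus one controller connection per bridge
connected=$(printf '%s\n' "$ovs_out" | grep -c 'is_connected: true')
echo "connected endpoints: $connected"
```

For this deployment (manager + three bridges) you expect four true lines; fewer means one of the agents or bridges is down.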

【Step 9: Install the Dashboard and Test via the UI】 -----------

1、Install openstack-dashboard and configure the Apache2 HTTP server

```
$ sudo apt install openstack-dashboard
```

If any of the settings below conflict with the original file, delete the original ones.

```
sudo vim /etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "controller"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
```

```
sudo vim /etc/apache2/conf-available/openstack-dashboard.conf
## add one line
WSGIApplicationGroup %{GLOBAL}
```

```
$ sudo service apache2 reload
```

That completes the dashboard installation. Open it in a browser at the first (external) NIC's IP: http://xxx.xxx.xxx.61/horizon

Log in with domain Default, user admin, password admin.

【Step 10: Use the Dashboard to Create a VM and Test the Neutron Network】 -----------

1、Create a centralized (legacy) router, a VXLAN-type network with subnet 192.168.1.0/24, and a virtual machine.

As shown in the figure:

2、On the controller node, inspect the DHCP and router namespaces with ip netns

```
$ ip netns
qdhcp-b169637c-45ff-4753-ad01-434abce4aac0 (id: 2)
qrouter-04567b6a-df3b-428f-a341-027ee12deb0e (id: 1)
```
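The qdhcp-/qrouter- namespace names embed the Neutron network and router UUIDs, which is handy when cross-checking against `openstack network list`. A sketch that extracts them from saved `ip netns` output (the sample IDs are copied from the listing above):

```shell
# normally: ns_out=$(ip netns)
ns_out='qdhcp-b169637c-45ff-4753-ad01-434abce4aac0 (id: 2)
qrouter-04567b6a-df3b-428f-a341-027ee12deb0e (id: 1)'

# strip the qdhcp-/qrouter- prefix and the trailing "(id: n)" into "type uuid" pairs
mapped=$(printf '%s\n' "$ns_out" | sed -E 's/^(qdhcp|qrouter)-([0-9a-f-]+) .*/\1 \2/')
echo "$mapped"
```

Each emitted line pairs the namespace type with the UUID you would look up in the dashboard or CLI.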

Enter the qdhcp-xxx namespace; inside it there is a port named tap14b0644d-05 whose IP is 192.168.1.2:

```
sudo ip netns exec qdhcp-b169637c-45ff-4753-ad01-434abce4aac0 bash
# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tap14b0644d-05 Link encap:Ethernet  HWaddr fa:16:3e:fe:7a:be  
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fefe:7abe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:483 errors:0 dropped:0 overruns:0 frame:0
          TX packets:468 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:39560 (39.5 KB)  TX bytes:45319 (45.3 KB)
```

Enter the qrouter-xxx namespace; inside it there is a port named qr-a694c798-e2 whose IP is 192.168.1.1:

```
$ sudo ip netns exec qrouter-04567b6a-df3b-428f-a341-027ee12deb0e bash
# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:235 errors:0 dropped:0 overruns:0 frame:0
          TX packets:235 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:26152 (26.1 KB)  TX bytes:26152 (26.1 KB)

qr-a694c798-e2 Link encap:Ethernet  HWaddr fa:16:3e:0c:88:20  
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe0c:8820/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:632 errors:0 dropped:0 overruns:0 frame:0
          TX packets:607 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:55404 (55.4 KB)  TX bytes:62657 (62.6 KB)
```

Looking at the OVS bridges, both ports are attached to br-int (the integration bridge) and carry the same VLAN tag:

```
Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap14b0644d-05"
            tag: 2
            Interface "tap14b0644d-05"
                type: internal
        Port "qr-a694c798-e2"
            tag: 2
            Interface "qr-a694c798-e2"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
```

3、From inside the qrouter-xxx namespace, ping the VM's IP. It is reachable, which shows the L2 network works:

```
sudo ip netns exec qrouter-04567b6a-df3b-428f-a341-027ee12deb0e bash
# ping 192.168.1.17
PING 192.168.1.17 (192.168.1.17) 56(84) bytes of data.
64 bytes from 192.168.1.17: icmp_seq=1 ttl=64 time=7.98 ms
64 bytes from 192.168.1.17: icmp_seq=2 ttl=64 time=0.783 ms
64 bytes from 192.168.1.17: icmp_seq=3 ttl=64 time=0.516 ms
64 bytes from 192.168.1.17: icmp_seq=4 ttl=64 time=0.399 ms
64 bytes from 192.168.1.17: icmp_seq=5 ttl=64 time=0.414 ms
^C
--- 192.168.1.17 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4061ms
rtt min/avg/max/mdev = 0.399/2.019/7.985/2.986 ms
```

4、SSH into the cirros VM; the default password is cubswin:)

```
# ssh cirros@192.168.1.17
cirros@192.168.1.17's password: 
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:02:01:FD  
          inet addr:192.168.1.17  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe02:1fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:180 errors:0 dropped:0 overruns:0 frame:0
          TX packets:207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:25044 (24.4 KiB)  TX bytes:24391 (23.8 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

5、Check the routing table and gateway reachability

```
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.169.254 192.168.1.1     255.255.255.255 UGH   0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
$
$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: seq=0 ttl=64 time=1.200 ms
64 bytes from 192.168.1.1: seq=1 ttl=64 time=0.782 ms
64 bytes from 192.168.1.1: seq=2 ttl=64 time=0.466 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.466/0.816/1.200 ms
```

6、Still inside the cirros VM, ping 192.168.1.2 and watch the OVS flow table change:

```
watch -n 1 -d 'sudo ovs-ofctl dump-flows -O openflow13 br-int'
```

You can clearly see the matching OVS flows being hit, with port IDs that match the VM port and the DHCP port.

At this point the whole OpenStack Pike environment is essentially complete.

-------------- END -------------------
