Deploying OpenStack Ocata on CentOS 7: A Detailed Configuration Guide


I previously wrote "OpenStack Mitaka Configuration Explained", but Aliyun no longer provides a Mitaka repository, so I recently started learning the Ocata release instead. This document summarizes that work.

Official OpenStack Ocata installation guide: https://docs.openstack.org/ocata/install-guide-rdo/environment.html

If you would rather not install step by step, an installation script is available at: http://www.cnblogs.com/yaohong/p/7251852.html

Part 1: Environment

1.1 Host networking

  • OS: CentOS 7
  • Controller node: 1 processor, 4 GB RAM, 5 GB storage
  • Compute node: 1 processor, 2 GB RAM, 10 GB storage

   Notes:

  1: Install two machines from a CentOS 7 image (for installation details see http://www.cnblogs.com/yaohong/p/7240387.html); be sure to give each machine two NICs and to size the memory as listed above.

  2: Set the hostnames of the two machines to controller and compute1 respectively:

#hostnamectl set-hostname hostname

  3: Edit the /etc/hosts file on both controller and compute1:

#vi /etc/hosts

  4: Verify:

Ping each node from the other, and ping an external site such as www.baidu.com; an example hosts file and the checks are shown below.
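A minimal sketch of the hosts entries (both addresses are illustrative; this guide's other examples sit in 192.168.1.0/24, so those are used here):

192.168.1.73   controller
192.168.1.146  compute1

The verification step is then simply:

#ping -c 4 controller
#ping -c 4 compute1
#ping -c 4 www.baidu.com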

1.2 Network Time Protocol (NTP)

[Install NTP on the controller node]

NTP keeps the clocks synchronized; if time drifts between nodes, you may be unable to launch instances.

#yum install chrony (install the package)

#vi /etc/chrony.conf and add:

server NTP_SERVER iburst
allow <your network CIDR> (the allow line is optional; it lets hosts in that network reach this NTP server)
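For example, with a public CentOS pool server and the 192.168.1.0/24 management network used elsewhere in this guide (both values are illustrative; substitute your own):

server 0.centos.pool.ntp.org iburst
allow 192.168.1.0/24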

#systemctl enable chronyd.service (enable at boot)

#systemctl start chronyd.service (start the NTP service)

[Install NTP on the compute node]

# yum install chrony

#vi /etc/chrony.conf — comment out or remove all ``server`` entries and add one that references the controller node:

server controller iburst

# systemctl enable chronyd.service (enable at boot)

# systemctl start chronyd.service (start the NTP service)

[Verify NTP]

Run #chronyc sources on both the controller and the compute node; output similar to the following indicates success.

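Illustrative output (not a literal capture); the ^* marker shows the source chrony is currently synchronized to:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3    6   377    35  +12us[ +20us] +/-  42ms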

1.3 OpenStack packages

[Install the OpenStack packages on both the controller and compute nodes]

Install the current OpenStack release repository:

#yum install centos-release-openstack-ocata
#yum install https://rdoproject.org/repos/rdo-release.rpm

#yum upgrade (upgrade the packages on the host)
#yum install python-openstackclient (install the required OpenStack client)
#yum install openstack-selinux

1.4 SQL database

Installed on the controller node. Depending on the distribution, the guide uses MariaDB or MySQL; OpenStack services also support other SQL databases.

#yum install mariadb mariadb-server python2-PyMySQL

#vi /etc/my.cnf.d/openstack.cnf (on CentOS, MariaDB reads drop-in files from /etc/my.cnf.d/)

Add:

[mysqld]
bind-address = 192.168.1.73 (the IP of the machine running MySQL; here, the controller's address)
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8

#systemctl enable mariadb.service (enable the database service at boot)
#systemctl start mariadb.service (start the database service)

Secure the installation:

#mysql_secure_installation (see pitfall 1 in http://www.cnblogs.com/yaohong/p/7352386.html)
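As a quick optional check that the server is up and accepting connections (this uses the root password set by mysql_secure_installation):

#mysql -u root -p -e "SHOW DATABASES;"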

1.5 Message queue

The message queue plays a critical role, acting as the traffic hub of the OpenStack architecture. Precisely because an OpenStack deployment is flexible, loosely coupled, and flat, it depends heavily on the message queue (not necessarily RabbitMQ; other message queue products work too), so the queue's throughput and HA capability directly affect OpenStack's performance. If rabbitmq is not running, your whole OpenStack platform is unusable. RabbitMQ uses port 5672.

#yum install rabbitmq-server
#systemctl enable rabbitmq-server.service (enable at boot)
#systemctl start rabbitmq-server.service (start)
#rabbitmqctl add_user openstack RABBIT_PASS (add the user openstack; replace RABBIT_PASS with your own password)
#rabbitmqctl set_permissions openstack ".*" ".*" ".*" (grant the new user permissions; without them it cannot send or receive messages)
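As an optional sanity check (standard rabbitmqctl subcommands), confirm the user exists and has permissions on the default vhost:

#rabbitmqctl list_users
#rabbitmqctl list_permissions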

1.6 Memcached

Memcached is an optional component here. It uses port 11211.

[Controller node]

#yum install memcached python-memcached

Change OPTIONS in /etc/sysconfig/memcached to:

OPTIONS="-l 127.0.0.1,::1,controller"

#systemctl enable memcached.service

#systemctl start memcached.service
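A quick throwaway test that memcached responds, using the python-memcached client installed above (assumes the controller hostname resolves to this node):

#python -c "import memcache; c = memcache.Client(['controller:11211']); c.set('k', 'v'); print(c.get('k'))"

If it prints v, memcached is reachable.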

Part 2: Identity service

2.1 Installation and configuration

Log in to the database and create the keystone database.

[Controller node only]

#mysql -u root -p
#CREATE DATABASE keystone;

Grant access with a password of your choice:

#GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'your password';
#GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'your password';

Install and configure the components:

#yum install openstack-keystone httpd mod_wsgi
#vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:password@controller/keystone

[token]
provider = fernet

Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone (be sure to verify that the tables were actually created in the database)

Initialize the Fernet keys:

#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service:

keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
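To confirm that the db_sync step really created the schema (as the note above advises), list the tables with the keystone database credentials granted earlier:

#mysql -u keystone -p -h controller keystone -e "SHOW TABLES;"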

Configure Apache:

#vi /etc/httpd/conf/httpd.conf

ServerName controller (set ServerName to the hostname to prevent a startup error)

Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start httpd:

#systemctl enable httpd.service
#systemctl start httpd.service
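Optionally confirm that keystone is now answering under httpd (iproute2 ss; keystone serves ports 5000 and 35357):

#ss -tnlp | grep -E '5000|35357'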

Configure the admin account:

#vi admin and add:

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

2.2 Create domains, projects, users, and roles

Create the Service project:

#openstack project create --domain default --description "Service Project" service

Create the Demo project:

#openstack project create --domain default --description "Demo Project" demo

Create the demo user:

#openstack user create --domain default --password-prompt demo

Create the user role:

#openstack role create user

Add the role to the demo project and user:

#openstack role add --project demo --user demo user
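As an optional check, list what was just created:

#openstack project list
#openstack user list
#openstack role list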

2.3 Verification

#vi /etc/keystone/keystone-paste.ini

Remove ``admin_token_auth`` from the ``[pipeline:public_api]``, ``[pipeline:admin_api]``, and ``[pipeline:api_v3]`` sections.

Unset the ``OS_AUTH_URL`` and ``OS_PASSWORD`` environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:

#openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

You may hit an HTTP error here. Since it comes from Apache, return to the Apache HTTP server configuration step, restart the Apache service, and re-export the admin credentials:

  # systemctl restart httpd.service

  $ export OS_USERNAME=admin

  $ export OS_PASSWORD=ADMIN_PASS

  $ export OS_PROJECT_NAME=admin

  $ export OS_USER_DOMAIN_NAME=Default

  $ export OS_PROJECT_DOMAIN_NAME=Default

  $ export OS_AUTH_URL=http://controller:35357/v3

  $ export OS_IDENTITY_API_VERSION=3

After that, run the token request again:

#openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

If, after entering the password, the command returns valid output, the configuration is correct.

Figure 2.4: Verifying the admin authentication service

As the ``demo`` user, request an authentication token:

#openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

2.4 Create OpenStack client environment scripts

The environment variables can be kept in scripts:

#vi admin-openrc and add:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456 (the password you set for admin)
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#vi demo-openrc and add:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456 (the password you set for demo)
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#. admin-openrc (source the ``admin-openrc`` file to load the Identity service endpoint and the ``admin`` project and user credentials)
#openstack token issue (request an authentication token)

Figure 2.6: Requesting an authentication token

Part 3: Image service

3.1 Installation and configuration

Create the glance database. Log in to MySQL:

#mysql -u root -p (connect to the database server as root)
#CREATE DATABASE glance; (create the glance database)

Grant proper access to the ``glance`` database:

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'password';

Load the environment variables:

#. admin-openrc

Create the glance user:

#openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:

#openstack role add --project service --user glance admin

Create the ``glance`` service entity:

#openstack service create --name glance --description "OpenStack Image" image

Figure 3.1: Creating the glance service entity

Create the Image service API endpoints:

#openstack endpoint create --region RegionOne image public http://controller:9292

Figure 3.2: Creating the public Image API endpoint

#openstack endpoint create --region RegionOne image internal http://controller:9292

Figure 3.3: Creating the internal Image API endpoint

#openstack endpoint create --region RegionOne image admin http://controller:9292

Figure 3.4: Creating the admin Image API endpoint

Install and configure:

#yum install openstack-glance
#vi /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:password@controller/glance

[keystone_authtoken] (configure Identity service access; add:)
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = xxxx

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

#vi /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:password@controller/glance

[keystone_authtoken] (configure Identity service access; add:)
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = xxxx

[paste_deploy]
flavor = keystone

Populate the Image service database:

#su -s /bin/sh -c "glance-manage db_sync" glance

Start glance:

#systemctl enable openstack-glance-api.service openstack-glance-registry.service
#systemctl start openstack-glance-api.service openstack-glance-registry.service

3.2 Verification

Load the environment variables:

#. admin-openrc

Download a small test image:

#wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

If the command is missing, install wget and retry:

yum -y install wget
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image:

#openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

Figure 3.5: Uploading the image

Check:

#openstack image list

Figure 3.6: Confirming the image upload

Output here shows that glance is configured correctly.

Part 4: Compute service

4.1 Install and configure the controller node

Create the nova databases. Log in to MySQL:

#mysql -u root -p (connect to the database server as root)
#CREATE DATABASE nova_api;
#CREATE DATABASE nova; (create the nova_api and nova databases)

#CREATE DATABASE nova_cell0;

Grant proper access to the databases:

#GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
#GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'password';
#GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
#GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';
#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'password';

Load the environment variables:

#. admin-openrc

Create the nova user and add the admin role:

#openstack user create --domain default --password-prompt nova
#openstack role add --project service --user nova admin

Create the nova service entity:

#openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API endpoints:

#openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
#openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
#openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create the placement user, add the admin role, and create the Placement service entity and its endpoints:

#openstack user create --domain default --password-prompt placement
#openstack role add --project service --user placement admin
#openstack service create --name placement --description "Placement API" placement
#openstack endpoint create --region RegionOne placement public http://controller:8778
#openstack endpoint create --region RegionOne placement internal http://controller:8778
#openstack endpoint create --region RegionOne placement admin http://controller:8778

Install the packages:

# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

#vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11 (the controller's management IP; 192.168.1.73 in this guide's addressing)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS (the nova user's password)

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

#vi /etc/httpd/conf.d/00-nova-placement-api.conf

Add:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service:

#systemctl restart httpd

Populate the nova-api database:

#su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

#su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:

#nova-manage cell_v2 list_cells

#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

4.2 Install and configure the compute node

#yum install openstack-nova-compute

Edit the configuration:

#vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (the compute node's IP address)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

#egrep -c '(vmx|svm)' /proc/cpuinfo (check whether the compute node supports hardware acceleration for virtual machines)

If it returns 0, the node lacks hardware acceleration and libvirt must use QEMU instead of KVM; edit #vi /etc/nova/nova.conf:

[libvirt]
virt_type = qemu

Start the Compute service and its dependencies, and configure them to start automatically at boot:

#systemctl enable libvirtd.service openstack-nova-compute.service
#systemctl start libvirtd.service openstack-nova-compute.service

Add the compute node to the cell database.

Run the following on the controller node:

#. admin-openrc

# openstack hypervisor list

#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

To discover new compute nodes automatically instead of rerunning the command above, set an interval:

#vi /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300

4.3 Verification

Verify on the controller node. Load the environment variables:

#. admin-openrc
#openstack compute service list (normal output here means the configuration is correct)

#openstack catalog list

#openstack image list

#nova-status upgrade check

Part 5: Networking service

5.1 Install and configure the controller node

Create the neutron database:

#mysql -u root -p
#CREATE DATABASE neutron;

Grant proper access to the ``neutron`` database, replacing ``NEUTRON_DBPASS`` with a suitable password:

#GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
#GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

Load the environment variables:

#. admin-openrc

Create the ``neutron`` user and add the ``admin`` role to it:

#openstack user create --domain default --password-prompt neutron
#openstack role add --project service --user neutron admin

Create the ``neutron`` service entity:

#openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints:

#openstack endpoint create --region RegionOne network public http://controller:9696
#openstack endpoint create --region RegionOne network internal http://controller:9696
#openstack endpoint create --region RegionOne network admin http://controller:9696

Set up the vxlan network. Install the packages:

#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
#vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxx (the neutron user's password)

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the ML2 plug-in:

#vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

Configure the Linux bridge agent:

  #vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (the name of the second NIC)

[vxlan]
enable_vxlan = true
local_ip = 192.168.1.146 (this node's overlay network IP)
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
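If you are unsure of the second NIC's name, list the interfaces first (a generic iproute2 command, not specific to this setup):

#ip addr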

Configure the layer-3 agent:

#vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

Configure the DHCP agent:

#vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent:

#vi /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Configure the Compute service on the controller to use the Networking service:

#vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = xxxx
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

Create the plug-in symlink:

#ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

#systemctl restart openstack-nova-api.service

Enable and start the Networking services:

#systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
#systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Enable and start the layer-3 service:

# systemctl enable neutron-l3-agent.service
#systemctl start neutron-l3-agent.service

5.2 Install and configure the compute node

#yum install openstack-neutron-linuxbridge ebtables ipset
#vi /etc/neutron/neutron.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxx (the neutron user's password)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure vxlan:

#vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (the name of the second NIC)

[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS (this node's overlay network IP)
l2_population = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

#vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = xxxx

Restart the Compute service, then enable and start the Linux bridge agent:

#systemctl restart openstack-nova-compute.service
#systemctl enable neutron-linuxbridge-agent.service
#systemctl start neutron-linuxbridge-agent.service

5.3 Verification

Load the environment variables:

#. admin-openrc

#openstack extension list --network

#openstack network agent list (you should see the dhcp, l3, metadata, and linuxbridge agents on the controller, plus a linuxbridge agent on the compute node)

Part 6: Dashboard

6.1 Configuration

#yum install openstack-dashboard
#vi /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] (hosts allowed to reach the dashboard; use ['*'] to accept all, for testing only)
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "TIME_ZONE" (replace TIME_ZONE with your time zone identifier, e.g. Asia/Shanghai)

Restart the services:

#systemctl restart httpd.service memcached.service

6.2 Logging in

In a browser, open http://<controller IP>/dashboard/auth/login

Domain: default

Username: admin or demo

Password: the one you set

Figure 6.1: The login page
