Tungsten Fabric Knowledge Base: Supplementary Notes on OpenStack, K8s, and CentOS Installation

2020-09-22 17:50:54

Author: Tatsuya Naganawa / Translated by the TF compilation team

Multi-kube-master deployment

3 Tungsten Fabric controller nodes: m3.xlarge (4 vCPU) -> c3.4xlarge (16 vCPU) (I needed to add resources, since schema-transformer requires CPU for ACL calculation). 100 kube-masters, 800 workers: m3.medium

The tf-controller installation and first-containers.yaml are the same as in the following link:

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#2-tungstenfabric-up-and-running

The AMI is also the same (ami-3185744e), but the kernel is updated with yum -y update kernel (converted to an image, which is then used to launch the instances)

/tmp/aaa.pem is the keypair specified for the EC2 instances

The cni.yaml file is attached:

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/multi-kube-master-deployment-cni-tungsten-fabric.yaml
(type the commands on one of the Tungsten Fabric controller nodes)
yum -y install epel-release
yum -y install parallel

aws ec2 describe-instances --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text | tr '\t' '\n' > /tmp/all.txt
head -n 100 /tmp/all.txt > masters.txt
tail -n 800 /tmp/all.txt > workers.txt

ulimit -n 4096
cat /tmp/all.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} id
cat /tmp/all.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{2} sudo kubeadm init --token aaaaaa.aaaabbbbccccdddd --ignore-preflight-errors=NumCPU --pod-network-cidr=10.32.{1}.0/24 --service-cidr=10.96.{1}.0/24 --service-dns-domain=cluster{1}.local
-
vi assign-kube-master.py
computenodes=8
with open('masters.txt') as aaa:
 with open('workers.txt') as bbb:
  for masternode in aaa.read().rstrip().split('\n'):
   for i in range(computenodes):
    tmp=bbb.readline().rstrip()
    print("{}\t{}".format(masternode, tmp))
python assign-kube-master.py > join.txt
-
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo chmod 644 /tmp/admin.conf
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' scp -i /tmp/aaa.pem centos@{2}:/tmp/admin.conf kubeconfig-{1}
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} get node
-
cat -n join.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{3} sudo kubeadm join {2}:6443 --token aaaaaa.aaaabbbbccccdddd --discovery-token-unsafe-skip-ca-verification
-
(modify controller-ip in cni-tungsten-fabric.yaml)
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' cp cni-tungsten-fabric.yaml cni-{1}.yaml
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' sed -i -e "s/k8s2/k8s{1}/" -e "s/10.32.2/10.32.{1}/" -e "s/10.64.2/10.64.{1}/" -e "s/10.96.2/10.96.{1}/"  -e "s/172.31.x.x/{2}/" cni-{1}.yaml
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} apply -f cni-{1}.yaml
-
sed -i 's!kubectl!kubectl --kubeconfig=/etc/kubernetes/admin.conf!' set-label.sh 
cat masters.txt | parallel -j1000 scp -i /tmp/aaa.pem set-label.sh centos@{}:/tmp
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo bash /tmp/set-label.sh
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} create -f first-containers.yaml
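The index-based addressing scheme used by the kubeadm init and sed commands above can be sketched in a few lines (an illustration only, not part of the original scripts):

```python
# Sketch of the per-cluster addressing scheme: each kube-master with index i
# (the first column produced by `cat -n masters.txt`) gets disjoint pod and
# service CIDRs plus its own DNS domain, so 100 independent clusters can share
# one Tungsten Fabric controller without address overlap.
def cluster_plan(i):
    return {
        "pod_cidr": f"10.32.{i}.0/24",      # --pod-network-cidr
        "service_cidr": f"10.96.{i}.0/24",  # --service-cidr
        "dns_domain": f"cluster{i}.local",  # --service-dns-domain
    }

# e.g. master #2 uses 10.32.2.0/24 for pods and cluster2.local for DNS
```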

Nested Kubernetes installation on OpenStack

A nested Kubernetes installation can be tried on an all-in-one OpenStack node.

After that node is set up by the ansible-deployer,

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#openstack-1

In addition, a link-local service needs to be created manually for the vRouter TCP/9091 connection

  • https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/kubernetes/nested-kubernetes.md#option-1-fabric-snat--link-local-preferred

This configuration creates a DNAT/SNAT, e.g. from src: 10.0.1.3:xxxx, dst-ip: 10.1.1.11:9091 to src: compute's vhost0 ip:xxxx, dst-ip: 127.0.0.1:9091, so the CNI in the OpenStack VM can talk directly to the vrouter-agent on the compute node and retrieve port/IP information for the containers.

  • The IP address can be from inside or outside the subnet.
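The DNAT/SNAT translation described above can be illustrated with a small sketch (the vhost0 address used below is a hypothetical placeholder):

```python
# Illustration of the link-local DNAT/SNAT described above: packets from a
# VM's CNI to the link-local service IP (e.g. 10.1.1.11:9091) are rewritten so
# they reach the vrouter-agent at 127.0.0.1:9091 on the compute node.
# vhost0_ip is a hypothetical placeholder for the compute node's vhost0 address.
def translate(src_ip, src_port, dst_ip, dst_port,
              vip="10.1.1.11", vhost0_ip="172.31.10.212"):
    if (dst_ip, dst_port) == (vip, 9091):
        # DNAT to localhost:9091, SNAT to the compute's vhost0 address
        return (vhost0_ip, src_port, "127.0.0.1", 9091)
    return (src_ip, src_port, dst_ip, dst_port)  # other traffic untouched
```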

On that node, two CentOS 7 (or Ubuntu Bionic) nodes are created, and a Kubernetes cluster is installed with the same procedure (see the link below),

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#kubeadm

Of course, the yaml file needs to be the one for a nested install.

./resolve-manifest.sh contrail-nested-kubernetes.yaml > cni-tungsten-fabric.yaml

KUBEMANAGER_NESTED_MODE: "{{ KUBEMANAGER_NESTED_MODE }}" ## this needs to be "1"
KUBERNESTES_NESTED_VROUTER_VIP: {{ KUBERNESTES_NESTED_VROUTER_VIP }} ## this parameter needs to be the same IP with the one defined in link-local service (such as 10.1.1.11)

If coredns receives an IP, the nested install is working.
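That health check can be sketched as follows (a minimal illustration; it assumes kubectl access and the standard kube-dns label selector):

```python
# Minimal sketch of the check described above: the nested install is healthy
# once the coredns pods have a pod IP assigned by the CNI. Feed it the output
# of: kubectl -n kube-system get pods -l k8s-app=kube-dns -o json
import json

def pods_have_ip(kubectl_json):
    pods = json.loads(kubectl_json)["items"]
    # True only if at least one pod exists and every pod has a podIP
    return bool(pods) and all(p["status"].get("podIP") for p in pods)
```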

vRouter ml2 plugin

I tried the ml2 feature of the vRouter Neutron plugin.

  • https://opendev.org/x/networking-opencontrail/
  • https://www.youtube.com/watch?v=4MkkMRR9U2s

Three CentOS 7.5 instances on AWS are used (4 CPU, 16 GB memory, 30 GB disk, AMI: ami-3185744e).

The attached steps are based on this document.

  • https://opendev.org/x/networking-opencontrail/src/branch/master/doc/source/installation/playbooks.rst
openstack-controller: 172.31.15.248
tungsten-fabric-controller (vRouter): 172.31.10.212
nova-compute (ovs): 172.31.0.231

(the commands are typed on the tungsten-fabric-controller, as the centos user (not root))

sudo yum -y remove PyYAML python-requests
sudo yum -y install git patch
sudo easy_install pip
sudo pip install PyYAML requests ansible==2.8.8
ssh-keygen
 add id_rsa.pub to authorized_keys on all three nodes (centos user (not root))

git clone https://opendev.org/x/networking-opencontrail.git
cd networking-opencontrail
patch -p1 < ml2-vrouter.diff 

cd playbooks
cp -i hosts.example hosts
cp -i group_vars/all.yml.example group_vars/all.yml

(ssh to all the nodes once, to update known_hosts)

ansible-playbook main.yml -i hosts

 - the devstack log is in /opt/stack/logs/stack.sh.log
 - the OpenStack process logs are written to /var/log/messages
 - 'systemctl list-unit-files | grep devstack' shows the systemctl entries for the OpenStack processes

(openstack controller node)
  Once devstack fails with a mariadb login error, type these commands to fix it. (the last two lines need to be modified with the openstack controller's IP and FQDN)
  The commands are typed by the "centos" user (not root).
   mysqladmin -u root password admin
   mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''%'\'' identified by '\''admin'\'';'
   mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''172.31.15.248'\'' identified by '\''admin'\'';'
   mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''ip-172-31-15-248.ap-northeast-1.compute.internal'\'' identified by '\''admin'\'';'

The hosts file, group_vars/all, and the patch are attached below. (Some changes are just bug fixes, but some change the default behavior.)

[centos@ip-172-31-10-212 playbooks]$ cat hosts
controller ansible_host=172.31.15.248 ansible_user=centos

# This host should be one of the hosts in the compute group.
# A playbook that deploys a separate Tungsten Fabric compute node is not ready yet.
contrail_controller ansible_host=172.31.10.212 ansible_user=centos local_ip=172.31.10.212

[contrail]
contrail_controller

[openvswitch]
other_compute ansible_host=172.31.0.231 local_ip=172.31.0.231 ansible_user=centos

[compute:children]
contrail
openvswitch
[centos@ip-172-31-10-212 playbooks]$ cat group_vars/all.yml
---
# IP address for OpenContrail (e.g. 192.168.0.2)
contrail_ip: 172.31.10.212

# Gateway address for OpenContrail (e.g. 192.168.0.1)
contrail_gateway:

# Interface name for OpenContrail (e.g. eth0)
contrail_interface:

# IP address for OpenStack VM (e.g. 192.168.0.3)
openstack_ip: 172.31.15.248

# OpenStack branch used on the VM.
openstack_branch: stable/queens

# Optionally, a different plugin version can be used (defaults to the OpenStack branch)
networking_plugin_version: master

# Tungsten Fabric docker image tag for contrail-ansible-deployer
contrail_version: master-latest

# If true, the networking_bgpvpn plugin is installed with the Tungsten Fabric driver
install_networking_bgpvpn_plugin: false

# If true, integrate with Device Manager (which will be started) and vRouter;
# the encapsulation priority will be set to 'VXLAN,MPLSoUDP,MPLSoGRE'.
dm_integration_enabled: false

# Optional path to a file with the topology for DM integration. When set and DM integration is enabled, the topology.yaml file will be copied to this location
dm_topology_file:

# If true, the password of the instance account created for the current ansible user is set to the value of instance_password
change_password: false
# instance_password: uberpass1

# If set, override the docker daemon /etc config file with this data
# docker_config:
[centos@ip-172-31-10-212 playbooks]$ 



[centos@ip-172-31-10-212 networking-opencontrail]$ cat ml2-vrouter.diff 
diff --git a/playbooks/roles/contrail_node/tasks/main.yml b/playbooks/roles/contrail_node/tasks/main.yml
index ee29b05..272ee47 100644
--- a/playbooks/roles/contrail_node/tasks/main.yml
+++ b/playbooks/roles/contrail_node/tasks/main.yml
@@ -7,7 +7,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -61,20 +60,20 @@
     chdir: ~/contrail-ansible-deployer/
     executable: /bin/bash

-- name: Generate ssh key for provisioning other nodes
-  openssh_keypair:
-    path: ~/.ssh/id_rsa
-    state: present
-  register: contrail_deployer_ssh_key
-
-- name: Propagate generated key
-  authorized_key:
-    user: "{{ ansible_user }}"
-    state: present
-    key: "{{ contrail_deployer_ssh_key.public_key }}"
-  delegate_to: "{{ item }}"
-  with_items: "{{ groups.contrail }}"
-  when: contrail_deployer_ssh_key.public_key
+#- name: Generate ssh key for provisioning other nodes
+#  openssh_keypair:
+#    path: ~/.ssh/id_rsa
+#    state: present
+#  register: contrail_deployer_ssh_key
+#
+#- name: Propagate generated key
+#  authorized_key:
+#    user: "{{ ansible_user }}"
+#    state: present
+#    key: "{{ contrail_deployer_ssh_key.public_key }}"
+#  delegate_to: "{{ item }}"
+#  with_items: "{{ groups.contrail }}"
+#  when: contrail_deployer_ssh_key.public_key

 - name: Provision Node before deploy contrail
   shell: |
@@ -105,4 +104,4 @@
     sleep: 5
     host: "{{ contrail_ip }}"
     port: 8082
-    timeout: 300
\ No newline at end of file
+    timeout: 300
diff --git a/playbooks/roles/contrail_node/templates/instances.yaml.j2 b/playbooks/roles/contrail_node/templates/instances.yaml.j2
index e3617fd..81ea101 100644
--- a/playbooks/roles/contrail_node/templates/instances.yaml.j2
+++ b/playbooks/roles/contrail_node/templates/instances.yaml.j2
@@ -14,6 +14,7 @@ instances:
       config_database:
       config:
       control:
+      analytics:
       webui:
 {% if "contrail_controller" in groups["contrail"] %}
       vrouter:
diff --git a/playbooks/roles/docker/tasks/main.yml b/playbooks/roles/docker/tasks/main.yml
index 8d7971b..5ed9352 100644
--- a/playbooks/roles/docker/tasks/main.yml
+++ b/playbooks/roles/docker/tasks/main.yml
@@ -6,7 +6,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -62,4 +61,4 @@
       - docker-py==1.10.6
       - docker-compose==1.9.0
     state: present
-    extra_args: --user
\ No newline at end of file
+    extra_args: --user
diff --git a/playbooks/roles/node/tasks/main.yml b/playbooks/roles/node/tasks/main.yml
index 0fb1751..d9ab111 100644
--- a/playbooks/roles/node/tasks/main.yml
+++ b/playbooks/roles/node/tasks/main.yml
@@ -1,13 +1,21 @@
 ---
-- name: Update kernel
+- name: Install required utilities
   become: yes
   yum:
-    name: kernel
-    state: latest
-  register: update_kernel
+    name:
+      - python3-devel
+      - libibverbs  ## needed by openstack controller node
+    state: present

-- name: Reboot the machine
-  become: yes
-  reboot:
-  when: update_kernel.changed
-  register: reboot_machine
+#- name: Update kernel
+#  become: yes
+#  yum:
+#    name: kernel
+#    state: latest
+#  register: update_kernel
+#
+#- name: Reboot the machine
+#  become: yes
+#  reboot:
+#  when: update_kernel.changed
+#  register: reboot_machine
diff --git a/playbooks/roles/restack_node/tasks/main.yml b/playbooks/roles/restack_node/tasks/main.yml
index a11e06e..f66d2ee 100644
--- a/playbooks/roles/restack_node/tasks/main.yml
+++ b/playbooks/roles/restack_node/tasks/main.yml
@@ -9,7 +9,7 @@
   become: yes
   pip:
     name:
-      - setuptools
+      - setuptools==43.0.0
       - requests
     state: forcereinstall


[centos@ip-172-31-10-212 networking-opencontrail]$

The installation takes about 50 minutes to complete.

Although /home/centos/devstack/openrc can be used to log in as the "demo" user, admin access is needed to specify the network type (empty for vRouter, "vxlan" for ovs), so an adminrc needs to be created manually.

[centos@ip-172-31-15-248 ~]$ cat adminrc 
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
export OS_PASSWORD=admin
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://172.31.15.248/identity  ## this needs to be modified
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_VOLUME_API_VERSION=2
[centos@ip-172-31-15-248 ~]$ 


openstack network create testvn
openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
openstack network create --provider-network-type vxlan testvn-ovs
openstack subnet create --subnet-range 192.168.110.0/24 --network testvn-ovs subnet1-ovs

 - two virtual networks are created
[centos@ip-172-31-15-248 ~]$ openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| d4e08516-71fc-401b-94fb-f52271c28dc9 | testvn-ovs | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| e872b73e-100e-4ab0-9c53-770e129227e8 | testvn     | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
+--------------------------------------+------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

 - testvn's provider:network_type is empty

[centos@ip-172-31-15-248 ~]$ openstack network show testvn
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:42Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e872b73e-100e-4ab0-9c53-770e129227e8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | testvn                               |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | local                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:44Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$ 

 - it is created in the Tungsten Fabric database

(venv) [root@ip-172-31-10-212 ~]# contrail-api-cli --host 172.31.10.212 ls -l virtual-network
virtual-network/e872b73e-100e-4ab0-9c53-770e129227e8  default-domain:admin:testvn
virtual-network/5a88a460-b049-4114-a3ef-d7939853cb13  default-domain:default-project:dci-network
virtual-network/f61d52b0-6577-42e0-a61f-7f1834a2f45e  default-domain:default-project:__link_local__
virtual-network/46b5d74a-24d3-47dd-bc82-c18f6bc706d7  default-domain:default-project:default-virtual-network
virtual-network/52925e2d-8c5d-4573-9317-2c346fb9edf0  default-domain:default-project:ip-fabric
virtual-network/2b0469cf-921f-4369-93a7-2d73350c82e7  default-domain:default-project:_internal_vn_ipv6_link_local
(venv) [root@ip-172-31-10-212 ~]# 


 - testvn-ovs's provider:network_type, on the other hand, is vxlan, and the segmentation ID and MTU are assigned automatically

[centos@ip-172-31-15-248 ~]$ openstack network show testvn-ovs
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:47Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | d4e08516-71fc-401b-94fb-f52271c28dc9 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | testvn-ovs                           |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 50                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:49Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

CentOS 8 installation procedure

centos8.2
the ansible-deployer is used

only python3 is used (no python2)
 - ansible 2.8.x is required

1 x tf-controller and kube-master, 1 x vRouter

(all nodes)
yum install python3 chrony
alternatives --set python /usr/bin/python3

(vRouter nodes)
yum install network-scripts
 - this is required because the vRouter currently does not support NetworkManager

(ansible node)
sudo yum -y install git
sudo pip3 install PyYAML requests ansible           
cirros-deployment-86885fbf85-tjkwn   1/1     Running   0          13s   10.47.255.249   ip-172-31-2-120.ap-northeast-1.compute.internal              
[root@ip-172-31-7-20 ~]# 
[root@ip-172-31-7-20 ~]# 
[root@ip-172-31-7-20 ~]# kubectl exec -it cirros-deployment-86885fbf85-7z78k sh
/ # ip -o a
1: lo    inet 127.0.0.1/8 scope host lo       valid_lft forever preferred_lft forever
17: eth0    inet 10.47.255.250/12 scope global eth0       valid_lft forever preferred_lft forever
/ # ping 10.47.255.249
PING 10.47.255.249 (10.47.255.249): 56 data bytes
64 bytes from 10.47.255.249: seq=0 ttl=63 time=0.657 ms
64 bytes from 10.47.255.249: seq=1 ttl=63 time=0.073 ms
^C
--- 10.47.255.249 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.365/0.657 ms
/ # 


 - for chrony to work correctly after the vRouter is installed, the chronyd service may need to be restarted

[root@ip-172-31-4-206 ~]#  chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 169.254.169.123               3   4     0   906  -8687ns[  -12us] +/-  428us
^? 129.250.35.250                2   7     0  1002    429us[  428us] +/-   73ms
^? 167.179.96.146                2   7     0   937    665us[  662us] +/- 2859us
^? 194.0.5.123                   2   6     0  1129    477us[  473us] +/-   44ms
^? 103.202.216.35                3   6     0   933   9662ns[ 6618ns] +/-  145ms
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:00:34 UTC; 33min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 727 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 2.1M
   CGroup: /system.slice/chronyd.service
           └─727 /usr/sbin/chronyd

Jun 28 16:00:33 localhost.localdomain chronyd[727]: Using right/UTC timezone to obtain leap second data
Jun 28 16:00:34 localhost.localdomain systemd[1]: Started NTP client/server.
Jun 28 16:00:42 localhost.localdomain chronyd[727]: Selected source 169.254.169.123
Jun 28 16:00:42 localhost.localdomain chronyd[727]: System clock TAI offset set to 37 seconds
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 167.179.96.146 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 103.202.216.35 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 129.250.35.250 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 194.0.5.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 169.254.169.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Can't synchronise: no selectable sources
[root@ip-172-31-4-206 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:34:41 UTC; 2s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 25252 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 25247 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 25250 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 1.0M
   CGroup: /system.slice/chronyd.service
           └─25250 /usr/sbin/chronyd

Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Starting NTP client/server...
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND>
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Frequency 35.298 +/- 0.039 ppm read from /var/lib/chrony/drift
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Using right/UTC timezone to obtain leap second data
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Started NTP client/server.
[root@ip-172-31-4-206 ~]#  chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 169.254.169.123               3   4    17     4  -2369ns[  -27us] +/-  451us
^- 94.154.96.7                   2   6    17     5     30ms[   30ms] +/-  148ms
^- 185.51.192.34                 2   6    17     3  -2951us[-2951us] +/-  150ms
^- 188.125.64.6                  2   6    17     3   9526us[ 9526us] +/-  143ms
^- 216.218.254.202               1   6    17     5     15ms[   15ms] +/-   72ms
[root@ip-172-31-4-206 ~]# 


[root@ip-172-31-4-206 ~]# contrail-status 
Pod      Service      Original Name           Original Version  State    Id            Status         
         rsyslogd                             nightly-master    running  5fc76e57c156  Up 16 minutes  
vrouter  agent        contrail-vrouter-agent  nightly-master    running  bce023d8e6e0  Up 5 minutes   
vrouter  nodemgr      contrail-nodemgr        nightly-master    running  9439a304cbcf  Up 5 minutes   
vrouter  provisioner  contrail-provisioner    nightly-master    running  1531b1403e49  Up 5 minutes   

WARNING: container with original name '' have Pod or Service empty. Pod: '' / Service: 'rsyslogd'. Please pass NODE_TYPE with pod name to container's env

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active

[root@ip-172-31-4-206 ~]#

Original article: https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricKnowledgeBase.md
