Installing Greenplum 6 on CentOS 8.4

2023-11-22 16:21:02

1. Greenplum Cluster Planning

hostname    IP               os          user    password    role
gp-mdw      193.169.100.151  CentOS 8.4  root    admin123    Master
gp-smdw     193.169.100.152  CentOS 8.4  root    admin123    Master Standby
gp-sdw01    193.169.100.153  CentOS 8.4  root    admin123    Primary Segment / Mirror Segment
gp-sdw02    193.169.100.154  CentOS 8.4  root    admin123    Primary Segment / Mirror Segment
gp-sdw03    193.169.100.155  CentOS 8.4  root    admin123    Primary Segment / Mirror Segment

  • The official documentation appears to involve many commands and configuration files, but most of them only exist to make it easier to operate on multiple nodes; a configuration file is only really needed at initialization time.
  • In the official documentation, mdw refers to the Master host, smdw to the Standby Master host, and sdw to the Segment hosts.
  • After loading the LD_LIBRARY_PATH set by /usr/local/greenplum-db/greenplum_path.sh, yum and apt will no longer work.

https://network.pivotal.io/products/vmware-tanzu-greenplum#/releases/1163282/file_groups/9837

2. Platform Requirements

  1. Operating system: 64-bit CentOS 7.3 or later; set swap equal to the amount of physical memory.
  2. Dependency packages: apr apr-util bash bzip2 curl krb5 libcurl libevent libxml2 libyaml zlib openldap openssh openssl openssl-libs perl readline rsync R sed tar zip
  3. Java
  • Open JDK 8 or Open JDK 11
  • Oracle JDK 8 or Oracle JDK 11
  4. Hardware and network: (1) at least 16 GB of physical memory; (2) all hosts in the cluster on the same LAN (connected to a 10 GbE switch), with at least two 10 GbE NICs per host recommended, bonded in mode 4; (3) XFS for the data partitions. The master and standby master hosts need only one data partition, /data; segment hosts need two data partitions, /data1 and /data2, used for primary and mirror segments.

Official documentation: http://docs.greenplum.org/6-12/install_guide/platform-requirements.html

  5. User data space calculation: disk space * 0.7 = usable space = (2 * user data space) + (user data space / 3), where 2 * user data space is the space required for primary and mirror copies and user data space / 3 is the space required for the work space.

For example: with 2 TB of disk space, the space available for user data = 2 TB * 0.7 * 3/7 ≈ 600 GB.

Official documentation: http://docs.greenplum.org/6-12/install_guide/capacity_planning.html

3. System Configuration

3.1. Basic System Settings

step 1. Set host names and add the following entries to /etc/hosts. Edit /etc/hosts and add the IP address, hostname, and alias of every host in the Greenplum cluster. By convention the master alias is mdw, the standby master alias is smdw, and the segment aliases are sdw1, sdw2, and so on.

cat >> /etc/hosts <<EOF
# Greenplum DB
193.169.100.151  gp-mdw
193.169.100.152  gp-smdw
193.169.100.153  gp-sdw01
193.169.100.154  gp-sdw02
193.169.100.155  gp-sdw03
EOF

A commonly suggested naming convention is project_gp_node, for example: Master: dis_gp_mdw; Standby Master: dis_gp_smdw; Segment hosts: dis_gp_sdw1, dis_gp_sdw2, and so on.

If the standby is hosted on one of the segment hosts, it can be named dis_gp_sdw3_smdw.

step 2. Disable SELinux and the firewall

setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl stop firewalld && systemctl disable firewalld

step 3. Add the following kernel parameter settings to /etc/sysctl.d/99-sysctl.conf:

cat >> /etc/sysctl.d/99-sysctl.conf <<EOF
# kernel.shmall = _PHYS_PAGES / 2 # See Shared Memory Pages
kernel.shmall = 197951838
# kernel.shmmax = kernel.shmall * PAGE_SIZE 
kernel.shmmax = 810810728448
kernel.shmmni = 4096
vm.overcommit_memory = 2 # See Segment Host Memory
vm.overcommit_ratio = 95 # See Segment Host Memory

net.ipv4.ip_local_port_range = 10000 65535 # See Port Settings
kernel.sem = 250 2048000 200 8192
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 31457280
net.ipv4.ipfrag_time = 60
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100

# Recommended for hosts with more than 64 GB of memory
# vm.dirty_background_ratio = 0 # See System Memory
# vm.dirty_ratio = 0
# vm.dirty_background_bytes = 1610612736
# vm.dirty_bytes = 4294967296
EOF
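
The kernel.shmall and kernel.shmmax values above are specific to the author's hosts; a sketch of deriving values for your own machine from the formulas in the comments:

# half of the physical pages, and that number multiplied by the page size
echo "kernel.shmall = $(expr $(getconf _PHYS_PAGES) / 2)"
echo "kernel.shmmax = $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))"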

Configure vm.min_free_kbytes to 3% of system memory:

awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.d/99-sysctl.conf

Apply the configuration.
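
A minimal example; the second form assumes the file name used above:

sysctl --system
# or reload just that file:
sysctl -p /etc/sysctl.d/99-sysctl.conf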

step 4. Edit /etc/security/limits.d/99-nproc.conf and add (or modify) the following settings:

cat > /etc/security/limits.d/99-nproc.conf <<EOF
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
EOF

step 5. Set the XFS mount options. Edit /etc/fstab and add the mount options rw,nodev,noatime,nobarrier,inode64 for the XFS data file systems (note that on RHEL/CentOS 8 the nobarrier option is no longer supported and should be omitted), for example:

/dev/data /data xfs rw,nodev,noatime,nobarrier,inode64 0 0
/dev/data1 /data1 xfs rw,nodev,noatime,nobarrier,inode64 0 0
/dev/data2 /data2 xfs rw,nodev,noatime,nobarrier,inode64 0 0

Apply the configuration.
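
One way to apply the new mount options without rebooting, assuming the file systems are already mounted at the paths above and the options are accepted by your kernel, is to remount them:

mount -o remount /data1
mount -o remount /data2
# confirm the active options
mount | grep -E '/data1|/data2'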

step 6. Set the device read-ahead value to 16384

# Get the current value, for example:
/sbin/blockdev --getra /dev/sda

# Set the value, for example:
/sbin/blockdev --setra 16384 /dev/sda

Add the setting command to /etc/rc.d/rc.local and make the file executable so that it runs automatically at boot.

chmod +x /etc/rc.d/rc.local
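
For example, appending the read-ahead command itself (sda is only an illustration; repeat the line for each of your data disks):

echo "/sbin/blockdev --setra 16384 /dev/sda" >> /etc/rc.d/rc.local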

step 7. Set the disk I/O scheduler policy. On CentOS 8 the multi-queue scheduler is named mq-deadline (the legacy deadline scheduler no longer exists); adjust the device names to match your disks:

echo mq-deadline > /sys/block/sda/queue/scheduler
echo mq-deadline > /sys/block/fd0/queue/scheduler
echo mq-deadline > /sys/block/hdc/queue/scheduler

Add the setting command to /etc/rc.d/rc.local so that it runs automatically at boot.

echo "echo mq-deadline > /sys/block/sda/queue/scheduler" >> /etc/rc.d/rc.local

The following approach does not take effect after a reboot (the elevator= kernel parameter is deprecated on RHEL/CentOS 8):

grubby --update-kernel=ALL --args="elevator=mq-deadline"
grubby --info=ALL
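
A persistent alternative, sketched here under the assumption that the data disks are sd* devices (the rule file name is arbitrary), is a udev rule that sets the scheduler at boot:

cat > /etc/udev/rules.d/60-io-scheduler.rules <<EOF
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/scheduler}="mq-deadline"
EOF

# apply immediately without a reboot
udevadm control --reload-rules && udevadm trigger --type=devices --action=change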

step 8. Disable Transparent Huge Pages (THP)

# Check the current setting
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable it
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Persist across reboots via rc.local
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.d/rc.local

To make it take effect automatically after a system reboot, via the kernel command line:

grubby --update-kernel=ALL --args="transparent_hugepage=never"
grubby --info=ALL

step 9. Prevent removal of IPC objects. Edit /etc/systemd/logind.conf and set the RemoveIPC parameter:

RemoveIPC=no

Restart the service for the setting to take effect:

service systemd-logind restart

step 10. Set SSH connection thresholds. Edit /etc/ssh/sshd_config and set the following parameters:

# MaxStartups limits the number of concurrent unauthenticated SSH connections; 0 means no limit.
# It can also be given in the form A:B:C, where the three values mean:
# 10:  once 10 unauthenticated connections exist, start refusing new ones (but not all of them)
# 30:  beyond 10 connections, each new connection is refused with 30% probability
# 200: once 200 unauthenticated connections exist, all further connections are refused
MaxStartups 10:30:200

# Maximum number of sessions that can be opened in parallel over one connection; the default is 10.
MaxSessions 200

# Send a keepalive message to the client every 30 seconds to keep the connection open
ClientAliveInterval 30

# Disconnect the session after 3 consecutive unanswered client-alive messages
ClientAliveCountMax 3

Or simply append the settings to the end of the file:

cat >> /etc/ssh/sshd_config <<EOF
MaxStartups 10:30:200
MaxSessions 200
EOF

Restart the service for the changes to take effect:

systemctl restart sshd

step 11. Confirm or configure the time zone. The output of the date command should show the UTC+8 time zone, for example: Thu Feb 25 08:13:00 CST 2021. If the wrong time zone was chosen during OS installation, run tzselect and choose Asia -> China -> Beijing Time -> YES. Make sure the time zone is correct before installing Greenplum, because the LC_COLLATE and LC_CTYPE values cannot be changed after the Greenplum system has been initialized.

3.2. Configure System Clock Synchronization

step 1. On the gp-mdw host, configure the upstream time server in /etc/chrony.conf:

pool 193.169.100.106 iburst

step 2. On the gp-smdw host, configure the time servers in /etc/chrony.conf:

pool 193.169.100.107 iburst
pool gp-mdw iburst

step 3. On all segment hosts, add the NTP servers in /etc/chrony.conf:

pool gp-mdw iburst
pool gp-smdw iburst

step 4. Start the chronyd service on all hosts and check the time synchronization status.
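
A minimal example of step 4 (chronyc sources lists the servers chronyd is polling and whether the clock is synchronized):

systemctl enable --now chronyd
chronyc sources -v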

3.3. Create the gpadmin Account

step 1. Create the group and user

groupadd -r -g 1001 gpadmin
useradd gpadmin -r -m -g gpadmin -u 1001
passwd gpadmin

step 2. Use visudo to grant the gpadmin user sudo access: uncomment the following line (around line 110 of the sudoers file):

%wheel        ALL=(ALL)       NOPASSWD: ALL

Add the gpadmin user to the wheel group:

usermod -aG wheel gpadmin

3.4. Install Java (Optional)

# Search the yum repositories for Java packages
yum search java | grep -i --color JDK

# Install Java 1.8
yum install -y java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64

# Verify the installation
java -version

Reboot the hosts so that all of the settings take effect. Official documentation: http://docs.greenplum.org/6-12/install_guide/prep_os.html

4. Install the Greenplum Database Software

4.1. Install the Greenplum Database

Run the following steps as root on all hosts.

step 1. Download the installation package

wget https://github.com/greenplum-db/gpdb/releases/download/6.22.0/open-source-greenplum-db-6.22.0-rhel8-x86_64.rpm

step 2. Install

dnf install open-source-greenplum-db-6.22.0-rhel8-x86_64.rpm -y

step 3. Change the owner and group of the installation directory

chown -R gpadmin:gpadmin /usr/local/greenplum*
chgrp -R gpadmin /usr/local/greenplum*

4.2. Use gpssh-exkeys to Set Up n-to-n Passwordless SSH

Run the following steps on the master host, as both root and gpadmin.

step 1. Set up the Greenplum environment

source /usr/local/greenplum-db/greenplum_path.sh

# Confirm gpssh is on the PATH
which gpssh

step 2. Enable passwordless SSH for both the root and gpadmin users

ssh-keygen -t rsa -b 4096

ssh-copy-id gp-mdw
ssh-copy-id gp-smdw
ssh-copy-id gp-sdw01
ssh-copy-id gp-sdw02
ssh-copy-id gp-sdw03

step 3. In the gpadmin user's home directory, create a file named all_host containing all of the Greenplum host names, for example:

cat > all_host <<EOF
gp-mdw
gp-smdw
gp-sdw01
gp-sdw02
gp-sdw03
EOF

step 4. Exchange SSH keys between all hosts (do this for both root and gpadmin)

[gpadmin@gp-mdw ~]$ gpssh-exkeys -f all_host
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts
  ... send to gp-smdw
  ... send to gp-sdw01
  ... send to gp-sdw02
  ... send to gp-sdw03

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with gp-smdw
  ... finished key exchange with gp-sdw01
  ... finished key exchange with gp-sdw02
  ... finished key exchange with gp-sdw03

[INFO] completed successfully
[gpadmin@gp-mdw ~]$

Once the hosts are connected, gpssh can be used to run commands on all of them at once:

[gpadmin@gp-mdw ~]$ gpssh -f /home/gpadmin/all_host
=> pwd
[gp-sdw02] /home/gpadmin
[  gp-mdw] /home/gpadmin
[ gp-smdw] /home/gpadmin
[gp-sdw03] /home/gpadmin
[gp-sdw01] /home/gpadmin
=>

4.3. Confirm the Software Installation

Run the following step on the master host as the gpadmin user.

gpssh -f all_host -e 'ls -l /usr/local/greenplum-db-6.22.0'

If Greenplum is installed successfully, you should be able to log in to all hosts without being prompted for a password and see the installation directory listing.

Official documentation: http://docs.greenplum.org/6-12/install_guide/install_gpdb.html

5. Create the Data Storage Areas

source /usr/local/greenplum-db/greenplum_path.sh
gpssh -f /home/gpadmin/all_host

5.1. Create the Data Storage Area on the Master and Standby Master Hosts

Run the following commands on the master host as root (as shown in the prompts below).

[root@gp-mdw ~]# source /usr/local/greenplum-db/greenplum_path.sh

[root@gp-mdw ~]# gpssh -h gp-mdw -e 'mkdir -p /opt/greenplum/data/master'
[gp-mdw] mkdir -p /opt/greenplum/data/master
[root@gp-mdw ~]# gpssh -h gp-mdw -e 'chown gpadmin:gpadmin /opt/greenplum/data/master'
[gp-mdw] chown gpadmin:gpadmin /opt/greenplum/data/master
[root@gp-mdw ~]#

[root@gp-mdw ~]# gpssh -h gp-smdw -e 'mkdir -p /opt/greenplum/data/master'
[gp-smdw] mkdir -p /opt/greenplum/data/master
[root@gp-mdw ~]# gpssh -h gp-smdw -e 'chown gpadmin:gpadmin /opt/greenplum/data/master'
[gp-smdw] chown gpadmin:gpadmin /opt/greenplum/data/master
[root@gp-mdw ~]# 

5.2. Create the Data Storage Areas on the Segment Hosts

Run the following steps on the master host as root.

step 1. Create a file named seg_hosts containing all of the segment host names, for example:

cat > seg_hosts <<EOF
gp-sdw01
gp-sdw02
gp-sdw03
EOF

step 2. Create the primary and mirror data directory locations on all segment hosts

[root@gp-mdw ~]# gpssh -f seg_hosts 
=> mkdir -p /opt/greenplum/data1/primary
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=> mkdir -p /opt/greenplum/data1/mirror
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=> mkdir -p /opt/greenplum/data2/primary
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=> mkdir -p /opt/greenplum/data2/mirror
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=> chown -R gpadmin /opt/greenplum/data1/*
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=> chown -R gpadmin /opt/greenplum/data2/*
[gp-sdw02]
[gp-sdw01]
[gp-sdw03]
=>

Official documentation: http://docs.greenplum.org/6-12/install_guide/create_data_dirs.html

5.3. Copy the System Configuration to the Other Nodes (gpscp can only copy regular files)

source /usr/local/greenplum-db/greenplum_path.sh

gpscp -f /home/gpadmin/seg_hosts /etc/hosts root@=:/etc/hosts

gpscp -f seg_hosts /etc/sysctl.d/99-sysctl.conf root@=:/etc/sysctl.d/99-sysctl.conf

gpscp -f seg_hosts /etc/security/limits.d/99-nproc.conf root@=:/etc/security/limits.d/99-nproc.conf

gpssh -f seg_hosts  -e 'sysctl -p'
gpssh -f seg_hosts  -e 'reboot'
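
Note that seg_hosts contains only the three segment hosts, so gp-smdw does not receive these files; the gpinitsystem warning later in this article about the standby master's open file limit is a symptom of exactly that. A sketch of pushing the same files to the standby as well:

gpscp -h gp-smdw /etc/hosts root@=:/etc/hosts
gpscp -h gp-smdw /etc/sysctl.d/99-sysctl.conf root@=:/etc/sysctl.d/99-sysctl.conf
gpscp -h gp-smdw /etc/security/limits.d/99-nproc.conf root@=:/etc/security/limits.d/99-nproc.conf
gpssh -h gp-smdw -e 'sysctl --system'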

6. Validate the System

6.1. Validate Network Performance

Run the following steps on the master host as the gpadmin user.

step 1. Set up the Greenplum environment

source /usr/local/greenplum-db/greenplum_path.sh

step 2. Check the point-to-point network transfer speed:

# Parallel pair test (packets sent in both directions at once), suitable for an even number of network interfaces
gpcheckperf -f all_host -r N -d /tmp > subnet.out

# Serial pair test (packets sent in one direction at a time), suitable for an odd or even number of network interfaces
gpcheckperf -f all_host -r n -d /tmp > subnet.out

step 3. Check the full-matrix, many-to-many network transfer speed:

gpcheckperf -f all_host -r M -d /tmp > subnet.out

The result should be greater than 100 MB/s.

6.2. Validate Disk I/O and Memory Bandwidth Performance

Run the following steps on the master host as the gpadmin user.

step 1. Set up the Greenplum environment

source /usr/local/greenplum-db/greenplum_path.sh

step 2. Check disk I/O (dd) and memory bandwidth (stream) performance

gpcheckperf -f seg_hosts -r ds -D -d /opt/greenplum/data1/primary -d /opt/greenplum/data2/primary -d /opt/greenplum/data1/mirror -d /opt/greenplum/data2/mirror > io.out

Official documentation: http://docs.greenplum.org/6-12/install_guide/validate.html

7. Initialize the Greenplum Database System

7.1. Initialize the Greenplum Database (as the gpadmin user)

Run the following steps on the master host as the gpadmin user.

step 1. Set up the Greenplum environment

source /usr/local/greenplum-db/greenplum_path.sh

step 2. Create the Greenplum Database configuration file

cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpinitsystem_config

step 3. Edit /home/gpadmin/gpinitsystem_config so that its contents look like the following

# FILE NAME: gpinitsystem_config

# Configuration file needed by the gpinitsystem

################################################
#### REQUIRED PARAMETERS
################################################

#### A name for this Greenplum system, enclosed in quotes.
ARRAY_NAME="Greenplum Data Platform"

#### Naming convention for the data directories generated by the utility.
SEG_PREFIX=gpseg

#### Base number used to calculate the primary segment port numbers.
PORT_BASE=6000

#### File system locations where the primary segment data directories will be created. The number of locations in the list determines how many primary segments are created per physical host (if a host has multiple addresses listed in the host file, the segments are spread evenly across the specified interface addresses).
#declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data1/primary /data2/primary /data2/primary /data2/primary)
declare -a DATA_DIRECTORY=(/opt/greenplum/data1/primary /opt/greenplum/data1/primary /opt/greenplum/data1/primary /opt/greenplum/data2/primary /opt/greenplum/data2/primary /opt/greenplum/data2/primary)

#### OS-configured hostname or IP address of the master host.
MASTER_HOSTNAME=gp-mdw

#### File system location where the master data directory will be created.
MASTER_DIRECTORY=/opt/greenplum/data/master

#### Port number for the master instance.
MASTER_PORT=5432

#### Shell used by the utilities to connect to remote hosts.
TRUSTED_SHELL=ssh

#### Maximum number of WAL log file segments between automatic checkpoints.
CHECK_POINT_SEGMENTS=8

#### Default server-side character set encoding.
ENCODING=UNICODE

################################################
#### OPTIONAL MIRROR PARAMETERS
################################################

#### Base number used to calculate the mirror segment port numbers.
#MIRROR_PORT_BASE=7000

#### File system locations where the mirror segment data directories will be created. The number of mirror locations must equal the number of primary locations specified in the DATA_DIRECTORY parameter.
#declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data1/mirror /data2/mirror /data2/mirror /data2/mirror)
declare -a MIRROR_DATA_DIRECTORY=(/opt/greenplum/data1/mirror /opt/greenplum/data1/mirror /opt/greenplum/data1/mirror /opt/greenplum/data2/mirror /opt/greenplum/data2/mirror /opt/greenplum/data2/mirror)

################################################
#### OTHER OPTIONAL PARAMETERS
################################################

#### Create a database with this name after initialization.
#DATABASE_NAME=name_of_database

#### Specify the location of the host address file here instead of using the -h option of gpinitsystem.
#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
MACHINE_LIST_FILE=/home/gpadmin/seg_hosts

(Aside, from a single-node example: create the host list file and put only your own hostname in it, e.g. MACHINE_LIST_FILE=./hostlist_singlenode. Point DATA_DIRECTORY at existing directories you want to use for primaries, e.g. declare -a DATA_DIRECTORY=(/home/gpadmin/primary /home/gpadmin/primary); the number of times a directory is repeated controls the number of segments. Set MASTER_HOSTNAME to your machine's hostname, which in that example is "ubuntu": MASTER_HOSTNAME=ubuntu. Update the master data directory entry and make sure the directory exists: MASTER_DIRECTORY=/home/gpadmin/master. That is enough to get the database initialized and running; with that configuration you get one master instance and two primary segment instances. In a more advanced setup, such as the cluster described in this article, you configure a standby master and segment mirrors on other hosts, and data is automatically sharded (distributed) across the primary segments and mirrored from the primaries to the mirrors.)

step 4. Initialize the database (on the master node only)

source /usr/local/greenplum-db/greenplum_path.sh

gpinitsystem -c gpinitsystem_config -s gp-smdw

Here gp-smdw is the node that will host the master standby; putting the standby on the last node is probably just a matter of convention.

The gpinitsystem utility validates the system configuration, making sure it can connect to every host and access the data directories specified in the configuration.

[gpadmin@gp-mdw ~]$ gpinitsystem -c gpinitsystem_config -s gp-smdw
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Locale set to en_US.utf8
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20220928:12:43:09:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20220928:12:43:10:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-Standby Master open file limit is 1024 should be >= 65535
20220928:12:43:10:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checking configuration parameters, Completed
20220928:12:43:10:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
...
20220928:12:43:11:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Configuring build for standard array
20220928:12:43:11:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20220928:12:43:11:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Building primary segment instance array, please wait...
..................
20220928:12:43:23:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checking Master host
20220928:12:43:23:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checking new segment hosts, please wait...
..................20220928:12:43:48:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-Host gp-sdw03 open files limit is 1024 should be >= 65535

20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checking new segment hosts, Completed
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:---------------------------------------
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master Configuration
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:---------------------------------------
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master instance name       = Greenplum Data Platform
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master hostname            = gp-mdw
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master port                = 5432
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master instance dir        = /opt/greenplum/data/master/gpseg-1
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master LOCALE              = en_US.utf8
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master Database            = 
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master connections         = 250
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master buffers             = 128000kB
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Segment connections        = 750
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Segment buffers            = 128000kB
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Checkpoint segments        = 8
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Encoding                   = UNICODE
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Postgres param file        = Off
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db-6.22.0/bin/initdb
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db-6.22.0/lib
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-HEAP_CHECKSUM is           = on
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-HBA_HOSTNAMES is           = 0
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-Ulimit check               = Warnings generated, see log file <<<<<
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master IP address [1]      = ::1
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master IP address [2]      = 193.169.100.151
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Master IP address [3]      = fe80::84e5:72ff:fe50:5dde
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Standby Master             = gp-smdw
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Number of primary segments = 6
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Standby IP address         = ::1
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Standby IP address         = 193.169.100.152
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Standby IP address         = fe80::107f:1eff:fe76:6fe1
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total Database segments    = 18
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Trusted shell              = ssh
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Number segment hosts       = 3
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Mirroring config           = OFF
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:----------------------------------------
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:----------------------------------------
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6000     gp-sdw01     /opt/greenplum/data1/primary/gpseg0     2
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6001     gp-sdw01     /opt/greenplum/data1/primary/gpseg1     3
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6002     gp-sdw01     /opt/greenplum/data1/primary/gpseg2     4
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6003     gp-sdw01     /opt/greenplum/data2/primary/gpseg3     5
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6004     gp-sdw01     /opt/greenplum/data2/primary/gpseg4     6
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw01     6005     gp-sdw01     /opt/greenplum/data2/primary/gpseg5     7
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6000     gp-sdw02     /opt/greenplum/data1/primary/gpseg6     8
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6001     gp-sdw02     /opt/greenplum/data1/primary/gpseg7     9
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6002     gp-sdw02     /opt/greenplum/data1/primary/gpseg8     10
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6003     gp-sdw02     /opt/greenplum/data2/primary/gpseg9     11
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6004     gp-sdw02     /opt/greenplum/data2/primary/gpseg10     12
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw02     6005     gp-sdw02     /opt/greenplum/data2/primary/gpseg11     13
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6000     gp-sdw03     /opt/greenplum/data1/primary/gpseg12     14
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6001     gp-sdw03     /opt/greenplum/data1/primary/gpseg13     15
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6002     gp-sdw03     /opt/greenplum/data1/primary/gpseg14     16
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6003     gp-sdw03     /opt/greenplum/data2/primary/gpseg15     17
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6004     gp-sdw03     /opt/greenplum/data2/primary/gpseg16     18
20220928:12:43:49:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-gp-sdw03     6005     gp-sdw03     /opt/greenplum/data2/primary/gpseg17     19

If all of the checks succeed, the utility prompts you to confirm the configuration, for example:

Continue with Greenplum creation Yy|Nn (default=N):
> y

Type y to start the initialization. When the installation finishes successfully, the utility starts the Greenplum Database system and you should see:

20220928:12:43:58:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Building the Master instance database, please wait...
20220928:12:44:33:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Starting the Master in admin mode
20220928:12:44:38:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20220928:12:44:38:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
..................
20220928:12:44:38:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
....................................................................................
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Parallel process exit status
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as completed           = 18
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Removing back out file
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-No errors generated from parallel processes
20220928:12:46:03:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /opt/greenplum/data/master/gpseg-1
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source'
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Master segment instance directory=/opt/greenplum/data/master/gpseg-1
20220928:12:46:03:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Stopping master segment and waiting for user connections to finish ...
server shutting down
20220928:12:46:04:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20220928:12:46:04:061581 gpstop:gp-mdw:gpadmin-[INFO]:-Terminating processes for segment /opt/greenplum/data/master/gpseg-1
20220928:12:46:04:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /opt/greenplum/data/master/gpseg-1
20220928:12:46:04:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20220928:12:46:04:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source'
20220928:12:46:04:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20220928:12:46:04:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20220928:12:46:05:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20220928:12:46:05:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220928:12:46:05:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Setting new master era
20220928:12:46:05:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Master Started...
20220928:12:46:05:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Shutting down master
20220928:12:46:06:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Process results...
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 18
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Successfully started 18 of 18 segment instances 
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:12:46:09:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance gp-mdw directory /opt/greenplum/data/master/gpseg-1 
20220928:12:46:10:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Command pg_ctl reports Master gp-mdw instance active
20220928:12:46:10:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20220928:12:46:10:061603 gpstart:gp-mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20220928:12:46:10:061603 gpstart:gp-mdw:gpadmin-[INFO]:-Database successfully started
20220928:12:46:10:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20220928:12:46:11:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Starting initialization of standby master gp-smdw
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking for data directory /opt/greenplum/data/master/gpseg-1 on gp-smdw
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master hostname               = gp-mdw
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master data directory         = /opt/greenplum/data/master/gpseg-1
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = gp-smdw
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /opt/greenplum/data/master/gpseg-1
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20220928:12:46:11:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20220928:12:46:12:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-The packages on gp-smdw are consistent.
20220928:12:46:12:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20220928:12:46:12:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20220928:12:46:12:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20220928:12:46:13:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20220928:12:46:23:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Starting standby master
20220928:12:46:23:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking if standby master is running on host: gp-smdw  in directory: /opt/greenplum/data/master/gpseg-1
20220928:12:46:26:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20220928:12:46:27:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20220928:12:46:27:061695 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Successfully created standby master on gp-smdw
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Successfully completed standby master initialization
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-*******************************************************
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-were generated during the array creation
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Please review contents of log file
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20220928.log
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To determine level of criticality
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[WARN]:-*******************************************************
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To complete the environment configuration, please 
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1"
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   or, use -d /opt/greenplum/data/master/gpseg-1 option for the Greenplum scripts
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   Example gpstate -d /opt/greenplum/data/master/gpseg-1
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20220928.log
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Standby Master gp-smdw has been configured
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To activate the Standby Master Segment in the event of Master
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-failure review options for gpactivatestandby
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-The Master /opt/greenplum/data/master/gpseg-1/pg_hba.conf post gpinitsystem
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-located in the /usr/local/greenplum-db-6.22.0/docs directory
20220928:12:46:27:050132 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
[gpadmin@gp-mdw ~]$ 

If any errors occur during initialization, the whole process fails and may leave behind a partially created system. Review the error messages and the log to determine what failed and where. The log is written on the master node to /home/gpadmin/gpAdminLogs/gpinitsystem_2016XXXX.log.

If initialization fails, read this log file carefully; blindly repeating the installation is of little use, and what matters is finding the root cause.

Depending on when the error occurred, you may need to clean up before retrying gpinitsystem. For example, if some segment instances were created but others failed, you may need to stop the postgres processes and remove any data directories created by gpinitsystem from the data storage areas. If needed, a backout script is created to help with this cleanup.

If gpinitsystem fails and leaves the system in a partially installed state, the following backout script is created:

~/gpAdminLogs/backout_gpinitsystem_<user>_<timestamp>

This script can be used to clean up a partially created Greenplum Database system. It removes any data directories, postgres processes, and log files created by gpinitsystem.

sh backout_gpinitsystem_gpadmin_20071031_121053

After correcting the error that caused gpinitsystem to fail and running the backout script, re-initialize the Greenplum database.

Spread mirror mode: in spread mode, a host's first mirror is placed on the next host, its second mirror on the host after that, its third mirror on the host after that, and so on. To use it, run gpinitsystem with the -S option, which switches the mirror placement to spread:

gpinitsystem -c gpinitsystem_config -h seg_hosts -s gp-smdw -S

Now test the installed Greenplum database.

7.2. Set the Greenplum Environment Variables

Run the following steps on the master host as the gpadmin user.

step 1. Edit ~/.bashrc and add the following environment variables

cat >>  ~/.bashrc <<EOF
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/opt/greenplum/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=postgres
export LD_PRELOAD=/lib64/libz.so.1
EOF

step 2. Apply the configuration

source ~/.bashrc

step 3. Copy the environment file to the standby master

cd ~
scp .bashrc gp-smdw:`pwd`

7.3. Allow Client Connections

On the master host, /opt/greenplum/data/master/gpseg-1/pg_hba.conf has already been configured to allow all hosts within the new array to communicate with one another.

Edit /opt/greenplum/data/master/gpseg-1/pg_hba.conf and add the client IP addresses or networks that should be allowed to connect. The following entry allows connections from any address:

host   all   all    0.0.0.0/0    md5

Entries in pg_hba.conf are matched in order. The general principle is that earlier entries should have stricter match conditions but weaker authentication methods, while later entries should have looser match conditions but stronger authentication methods. Local socket connections use ident authentication.
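
After editing pg_hba.conf, the new entries can be loaded without restarting the cluster, for example:

gpstop -u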

7.4. Adjust Parameters

Values for the postgresql.conf settings can be estimated from your hardware configuration with https://pgtune.leopard.in.ua/#/.

step 1. Check the cluster status

[gpadmin@gp-mdw ~]$ gpstate 
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-Starting gpstate with args: 
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source'
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit compiled on Sep  8 2022 22:39:10'
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-Gathering data from segments...
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-Greenplum instance status summary
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Master instance                                = Active
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Master standby                                 = gp-smdw
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby master state                           = Standby host passive
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total segment instance count from metadata     = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Primary Segment Status
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total primary segments                         = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total primary segment valid (at master)        = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total primary segment failures (at master)     = 0
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing   = 0
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found     = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing    = 0
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found      = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing        = 0
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found          = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number postmaster processes missing      = 0
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Total number postmaster processes found        = 18
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Mirror Segment Status
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-   Mirrors not configured on this array
20220928:13:00:57:062018 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@gp-mdw ~]$ 

step 2. Show segments with mirror status issues. This displays details of primary/mirror segment pairs with potential problems, for example: (1) an active segment is running in change-tracking mode, which means its peer segment is down; (2) an active segment is in resynchronization mode, which means it is catching the mirror up with changes; (3) a segment is not in its preferred role, for example a segment that was a primary at system initialization is now acting as a mirror, which means one or more segment hosts may have an unbalanced processing load.

[gpadmin@gp-mdw ~]$ gpstate -e
20220928:13:01:02:062124 gpstate:gp-mdw:gpadmin-[INFO]:-Starting gpstate with args: -e
20220928:13:01:02:062124 gpstate:gp-mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source'
20220928:13:01:02:062124 gpstate:gp-mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit compiled on Sep  8 2022 22:39:10'
20220928:13:01:02:062124 gpstate:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220928:13:01:02:062124 gpstate:gp-mdw:gpadmin-[INFO]:-Physical mirroring is not configured
[gpadmin@gp-mdw ~]$

step 3. Show information about the standby master configuration:

[gpadmin@gp-mdw ~]$ gpstate -f
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source'
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.22.0 build commit:4b6c079bc3aed35b2f161c377e208185f9310a69 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit compiled on Sep  8 2022 22:39:10'
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-Standby master details
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-----------------------
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby address          = gp-smdw
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby data directory   = /opt/greenplum/data/master/gpseg-1
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby port             = 5432
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby PID              = 22993
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:-   Standby status           = Standby host passive
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--------------------------------------------------------------
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--pg_stat_replication
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--------------------------------------------------------------
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--WAL Sender State: streaming
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--Sync state: sync
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--Sent Location: 0/C003C00
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--Flush Location: 0/C003C00
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--Replay Location: 0/C003B80
20220928:13:01:15:062147 gpstate:gp-mdw:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@gp-mdw ~]$

step 4. Set parameters

gpconfig -c max_connections -v 2500 -m 500
gpconfig -c max_prepared_transactions -v 500
gpconfig -c shared_buffers -v 5GB -m 32GB
gpconfig -c effective_cache_size -v 16GB -m 96GB
gpconfig -c maintenance_work_mem -v 1280MB -m 2GB
gpconfig -c checkpoint_completion_target -v 0.9
gpconfig -c wal_buffers -v 16MB -m 16MB
# gpconfig -c checkpoint_segments -v 32 --skipvalidation
gpconfig -c effective_io_concurrency -v 200
gpconfig -c default_statistics_target -v 100
gpconfig -c random_page_cost -v 1.1
gpconfig -c log_statement -v none
gpconfig -c gp_enable_global_deadlock_detector -v on
gpconfig -c gp_workfile_compression -v on
gpconfig -c gp_max_partition_level -v 1
# Physical memory * 0.9 / (number of primary + mirror segments), in MB. For example, with 256 GB of memory, 6 primaries, and 6 mirrors: 256 * 1024 * 0.9 / 12 ≈ 19660.
gpconfig -c gp_vmem_protect_limit -v 19660
# On dedicated master/standby hosts set this to the number of CPU cores; on segment hosts set it to cores / (number of primary + mirror segments). For example, with 64 cores and 6 primaries + 6 mirrors:
gpconfig -c gp_resqueue_priority_cpucores_per_segment -v 5.3 -m 64
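
To verify a value after setting it, gpconfig can also read parameters back, for example:

gpconfig -s max_connections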

step 5. Run a checkpoint

psql -c "CHECKPOINT"

step 6. Restart Greenplum

gpstop -r

Official documentation: http://docs.greenplum.org/6-12/install_guide/init_gpdb.html

8. Adding a Standby and Mirrors

8.1. Add a Standby Master

Run on the standby host:

mkdir /opt/greenplum/data/master
chown gpadmin:gpadmin /opt/greenplum/data/master

Run on the master host:

gpinitstandby -s <standby_name>

Enter Y once when prompted.

8.2. Add Mirrors

Mirrors are redundant copies of the segment data and are important for data protection. If you forgot to configure mirrors in the gpinitsystem configuration file earlier, add them as follows:

gpaddmirrors -p 1000

While it runs, you will be prompted twice to enter a mirror path: /opt/greenplum/data1/mirror

8.3. Access Methods

You can access the database either through a desktop client (such as pgAdmin) or from the command line.

9. Test the Database

9.1. Create a Temporary Tablespace

create tablespace tmptbs location '/data/tmptbs';
alter role all set temp_tablespaces='tmptbs';
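
The location given to CREATE TABLESPACE must already exist and be owned by gpadmin on the master, standby, and every segment host before the statement above is run; a sketch using gpssh (the /data/tmptbs path matches the SQL above):

gpssh -f all_host -e 'sudo mkdir -p /data/tmptbs && sudo chown gpadmin:gpadmin /data/tmptbs'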

9.2. Create a User

create role dwtest with password '123456' login createdb;

9.3. Test Logging In

psql -U dwtest -h gp-mdw -d postgres
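
A quick end-to-end sanity check, assuming the dwtest role created above; the database and table names here are only examples:

createdb -U dwtest -h gp-mdw dwtest_db
psql -U dwtest -h gp-mdw -d dwtest_db -c "CREATE TABLE t1 (id int) DISTRIBUTED BY (id);"
psql -U dwtest -h gp-mdw -d dwtest_db -c "INSERT INTO t1 SELECT generate_series(1,1000);"
psql -U dwtest -h gp-mdw -d dwtest_db -c "SELECT gp_segment_id, count(*) FROM t1 GROUP BY 1 ORDER BY 1;"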
