Big Data Environment Setup: Installing Kerberos on CentOS


Installation and Configuration

Installing the Packages

On the master machine, install the Kerberos server, libraries, and workstation packages:

yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation -y

On the other two worker machines, install krb5-devel and krb5-workstation:

yum install krb5-devel krb5-workstation -y
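
To confirm the packages are present on each node, you can list the installed Kerberos RPMs (a quick check, not part of the original steps):

# List installed Kerberos-related packages
rpm -qa | grep -i krb5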

Configuration

Edit /etc/krb5.conf. Change the realm from the default EXAMPLE.COM to the value you want to use (you can also keep the default), e.g. HADOOP.COM.

The following parameters need to be modified:

default_realm: the default realm, e.g. HADOOP.COM
kdc: the location of the KDC, given as a hostname
admin_server: the location of the admin server, given as a hostname
default_domain: the default domain name

krb5.conf

Note: this file must be configured on both the Master and Worker nodes, and its contents must be identical on every node (see the distribution sketch after the file contents below).

vi /etc/krb5.conf

Contents

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = HADOOP.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 10000d
renew_lifetime = 10000d
forwardable = true

[realms]
HADOOP.COM = {
   kdc = hadoop01:88
   admin_server = hadoop01:749
}

[domain_realm]
.example.com = HADOOP.COM
example.com = HADOOP.COM
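
Since every node must have an identical copy, one simple approach is to edit the file once on the Master and push it to the Workers. A minimal sketch, assuming the Worker hostnames are hadoop02 and hadoop03 (the article does not name them):

# Push the Master's krb5.conf to each Worker so all copies stay identical
scp /etc/krb5.conf hadoop02:/etc/krb5.conf
scp /etc/krb5.conf hadoop03:/etc/krb5.conf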

kdc.conf

Configure this on the Master node only; if the file does not exist, create it.

vi /var/kerberos/krb5kdc/kdc.conf

Contents

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
    #master_key_type = aes256-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    max_life = 10000d
    max_renewable_life = 10000d
    supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

kadm5.acl

vi /var/kerberos/krb5kdc/kadm5.acl

Contents (this single line grants full administrative privileges to any principal of the form */admin@HADOOP.COM)

*/admin@HADOOP.COM     *

Creating the Kerberos Database

Creating the Kerberos database requires setting an administrator (master key) password. Once the database is created successfully, a set of files is generated under /var/kerberos/krb5kdc/.

If you need to recreate the database, first delete the principal* files under /var/kerberos/krb5kdc:

cd /var/kerberos/krb5kdc/
rm -rf principal*

Run the following command as root on the Master node to create the database (the -s option stores the master key in a stash file so the KDC can start without prompting for it):

kdb5_util create -s -r HADOOP.COM

After the database has been created successfully, start the Kerberos services:

service krb5kdc start
service kadmin start
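
Optionally, you may also want the KDC and admin services to start on boot and confirm the new database is usable. A small sketch, assuming a systemd-based CentOS 7 host (on CentOS 6 use chkconfig krb5kdc on and chkconfig kadmin on instead):

# Enable automatic start at boot (systemd hosts)
systemctl enable krb5kdc
systemctl enable kadmin

# Sanity check: list the principals in the freshly created database
kadmin.local -q "listprincs"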

Creating the Kerberos Administrator

As root on the Master node, run the following commands:

kadmin.local

Add the administrator:

addprinc admin/admin@HADOOP.COM
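
After exiting kadmin.local (the quit command), you can confirm the administrator principal works by requesting a ticket for it. This is a quick check, not part of the original steps; the password is whatever you set when running addprinc:

# Obtain a TGT for the admin principal (prompts for the password set above)
kinit admin/admin@HADOOP.COM

# Show the ticket cache to confirm the ticket was issued
klist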

Creating a Regular Kerberos User

Create a regular Kerberos principal and its keytab file so that the nodes can authenticate to one another once Hadoop is configured.

  1. As root on the Master node, run the following commands:
kadmin.local
# Create the user
addprinc root/hadoop@HADOOP.COM

# Generate the keytab file (written to the current directory)
xst -k hadoop.keytab root/hadoop@HADOOP.COM
  2. Copy the keytab to the /var/kerberos/krb5kdc/ directory on the Master and Worker nodes, set the appropriate owner and group, and change the permissions to 400 (a distribution sketch follows after this list):
chown root:root hadoop.keytab
chmod 400 hadoop.keytab
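
The copy step itself is not shown in the article; a minimal distribution-and-verification sketch, assuming the Worker hostnames are hadoop02 and hadoop03 (they are not named in the original):

# Put the keytab in place on the Master, then push it to each Worker (hostnames are assumptions)
cp hadoop.keytab /var/kerberos/krb5kdc/
scp /var/kerberos/krb5kdc/hadoop.keytab hadoop02:/var/kerberos/krb5kdc/
scp /var/kerberos/krb5kdc/hadoop.keytab hadoop03:/var/kerberos/krb5kdc/

# Verify the keytab contents and test a password-less login with it
klist -kt /var/kerberos/krb5kdc/hadoop.keytab
kinit -kt /var/kerberos/krb5kdc/hadoop.keytab root/hadoop@HADOOP.COM
klist

Note that the hdfs-site.xml below points at the keytab under /data/tools/bigdata/kerberos/hadoop.keytab, so the file must also be available at that path (or the two paths must be reconciled).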

Hadoop Configuration

core-site.xml

<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

hdfs-site.xml

<!-- kerberos start -->
<!-- namenode -->
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/data/tools/bigdata/kerberos/hadoop.keytab</value>
</property>

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>

<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>


<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/data/tools/bigdata/kerberos/hadoop.keytab</value>
</property>



<!-- datanode -->
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/data/tools/bigdata/kerberos/hadoop.keytab</value>
</property>

<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>

<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<!-- 
<property>
<name>dfs.https.port</name>
<value>50470</value>
</property>
-->



<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>

<!--
<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:50475</value>
</property> -->

<!-- journalnode -->

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/data/tools/bigdata/kerberos/hadoop.keytab</value>
</property>

<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>



<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>root/hadoop@HADOOP.COM</value>
</property>

<!-- kerberos end-->

hadoop-env.sh

export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=${JAVA_HOME}/lib -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=hadoop01:88"

ssl-server.xml

<property>
  <name>ssl.server.truststore.location</name>
  <value>/data/tools/bigdata/hadoop-2.7.7/etc/hadoop/truststore</value>
  <description>Truststore to be used by NN and DN. Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".
  </description>
</property>

<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>

<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.
    Default value is 10000 (10 seconds).
  </description>
</property>

<property>
  <name>ssl.server.keystore.location</name>
  <value>/data/tools/bigdata/hadoop-2.7.7/etc/hadoop/keystore</value>
  <description>Keystore to be used by NN and DN. Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.password</name>
  <value>123456</value>
  <description>Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>123456</value>
  <description>Must be specified.
  </description>
</property>

<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>


<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
    SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
    SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
    SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
</property>
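
The keystore and truststore referenced above are not generated anywhere in this article. One possible way to create self-signed ones for testing, assuming the JDK's keytool and the paths and passwords used in ssl-server.xml (the alias, -dname fields, and validity are placeholders to adjust):

cd /data/tools/bigdata/hadoop-2.7.7/etc/hadoop

# Generate a self-signed key pair into the keystore (passwords must match ssl-server.xml)
keytool -genkeypair -alias hadoop -keyalg RSA -keysize 2048 -validity 3650 \
  -dname "CN=hadoop01, OU=bigdata, O=example, L=city, ST=state, C=CN" \
  -keystore keystore -storepass 123456 -keypass 123456

# Export the certificate and import it into the truststore
keytool -exportcert -alias hadoop -keystore keystore -storepass 123456 -file hadoop.cer
keytool -importcert -alias hadoop -file hadoop.cer -keystore truststore -storepass 123456 -noprompt

In a real multi-node cluster each host needs a keystore with its own certificate and a truststore that trusts its peers; the single self-signed pair above is only enough for a quick test.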
