
Deploying Hive 3.x on Hadoop 3.x with Kerberos Authentication

jack
2023-04-23 / 0 comments / 0 likes / 791 views / 2,179 words

MySQL Deployment

Create a Linux mysql user in the IPA UI (ipa1)

In the IPA web UI, create the mysql user and add it to the bigdata group.
(screenshot)
Change the mysql user's default login shell.
(screenshot)

Create the MySQL software directory

Switch to root, create the MySQL software directory, and set its owner to the mysql user:

su

mkdir /opt/bigdata/mysql
chown -R   mysql:bigdata  /opt/bigdata/mysql

Create the MySQL 5.7 data directory

su

mkdir -pv /data/bigdata/mysql/mysql57/data

chown -R   mysql:bigdata /data/bigdata/mysql

Download and extract

su mysql
 cd /opt/bigdata/mysql/

Download

wget https://qiniu.tobehacker.com/bigdata/mysql/mysql-5.7.11-Linux-glibc2.5-x86_64.tar.gz 

Extract

tar -xvf   mysql-5.7.11-Linux-glibc2.5-x86_64.tar.gz  

Rename

mv mysql-5.7.11-linux-glibc2.5-x86_64/  mysql-5.7

Initialize the installation and generate the initial password

cd /opt/bigdata/mysql/mysql-5.7/bin

# Switch to the mysql user, then run the initialization command
su mysql

./mysqld --initialize --user=mysql --datadir=/data/bigdata/mysql/mysql57/data --basedir=/opt/bigdata/mysql/mysql-5.7

# The temporary password is at the end of the output log, e.g.:
[Warning] Changed limits: max_open_files: 1024 (requested 5000)
[Warning] Changed limits: table_open_cache: 431 (requested 2000)
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
 [Warning] InnoDB: New log files created, LSN=45790
[Warning] InnoDB: Creating foreign key constraint system tables.
[Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 488e99a1-e242-11ed-afb8-000c294cd313.
[Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
[Note] A temporary password is generated for root@localhost: /TI%%e<cy0av
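As a small sketch (not part of the original deployment steps), the temporary password can be pulled out of the initialization log with sed; the sample log line below is copied from the output above:

```shell
# Sketch: extract the temporary root password from the mysqld --initialize log.
# The sample line is copied verbatim from the output shown above.
log_line='[Note] A temporary password is generated for root@localhost: /TI%%e<cy0av'
tmp_pw=$(printf '%s\n' "$log_line" | sed -n 's/.*root@localhost: //p')
echo "$tmp_pw"
```

In practice you would grep the real log file instead of a hard-coded string.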

Create and configure /etc/my.cnf

su

touch /etc/my.cnf

Configuration:

[mysqld]
datadir=/data/bigdata/mysql/mysql57/data
port = 3306
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
symbolic-links=0
max_connections=600
innodb_file_per_table=1

Adjust the config file permissions

chmod 644 /etc/my.cnf
chown mysql:bigdata /etc/my.cnf

Edit mysql.server

su mysql

vim /opt/bigdata/mysql/mysql-5.7/support-files/mysql.server 

Change the two paths to the ones used during initialization above:

basedir=/opt/bigdata/mysql/mysql-5.7
datadir=/data/bigdata/mysql/mysql57/data
# ... some lines omitted ...
if test -z "$basedir"
then
  basedir=/opt/bigdata/mysql/mysql-5.7
  bindir=/opt/bigdata/mysql/mysql-5.7/bin
  if test -z "$datadir"
  then
    datadir=/data/bigdata/mysql/mysql57/data
  fi
  sbindir=/opt/bigdata/mysql/mysql-5.7/bin
  libexecdir=/opt/bigdata/mysql/mysql-5.7/bin
else
  bindir="$basedir/bin"
  if test -z "$datadir"
  then
    datadir="$basedir/data"
  fi
  sbindir="$basedir/sbin"
  libexecdir="$basedir/libexec"
fi

Configure environment variables

vim /home/mysql/.bashrc

# Add the following
export PATH=$PATH:/opt/bigdata/mysql/mysql-5.7/support-files
export PATH=$PATH:/opt/bigdata/mysql/mysql-5.7/bin

# Apply the configuration
source /home/mysql/.bashrc
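The two export lines above only append the MySQL directories to the shell's search path; a minimal sketch of the effect (paths taken from this guide):

```shell
# Sketch: append the MySQL directories to PATH and confirm the bin
# directory is now on the search path.
export PATH="$PATH:/opt/bigdata/mysql/mysql-5.7/support-files:/opt/bigdata/mysql/mysql-5.7/bin"
case ":$PATH:" in
  *":/opt/bigdata/mysql/mysql-5.7/bin:"*) echo "mysql bin on PATH" ;;
  *) echo "mysql bin missing" ;;
esac
```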

Start the MySQL server

mysql.server start

Change the root password

# Log in with the temporary password generated earlier: /TI%%e<cy0av
mysql -u root -p
# Change the password
set password for root@localhost = password('admin123456');
# Enable remote connections
use mysql;
update user set user.Host='%' where user.User='root';
FLUSH PRIVILEGES;

Create the hive database user

CREATE USER 'hive'@'%' IDENTIFIED BY 'admin123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Firewall port (optional)

# Open port 3306
firewall-cmd --zone=public --add-port=3306/tcp --permanent

# Apply immediately
firewall-cmd --reload

# Or stop the firewall entirely
systemctl stop firewalld

Hive Deployment

Create a hive user in the IPA UI (ipa1)

In the IPA web UI, create the hive user and add it to the bigdata group
(same procedure as the mysql user creation above).

Create the Hive installation directory

su

mkdir /opt/bigdata/hive/

Change the owner

chown -R  hive:bigdata  /opt/bigdata/hive/

Create the Hive data directory

su

mkdir -pv /data/bigdata/hive/hive-3.1.2/data/

Change the owner

chown -R   hive:bigdata  /data/bigdata/hive/hive-3.1.2/data/

Download and extract

su hive
cd /opt/bigdata/hive/
wget https://qiniu.tobehacker.com/bigdata/hive/apache-hive-3.1.2-bin.tar.gz
tar -xvf   apache-hive-3.1.2-bin.tar.gz

mv apache-hive-3.1.2-bin hive-3.1.2

Download the MySQL JDBC driver

su hive

cd /opt/bigdata/hive/hive-3.1.2/lib/

wget https://qiniu.tobehacker.com/bigdata/mysql/mysql-connector-java-5.1.27-bin.jar

Configure hive-env.sh

Edit Hive's environment script to add the Hive configuration and dependency directories:

mv /opt/bigdata/hive/hive-3.1.2/conf/hive-env.sh.template /opt/bigdata/hive/hive-3.1.2/conf/hive-env.sh

vim /opt/bigdata/hive/hive-3.1.2/conf/hive-env.sh

# Append the following; change the Hadoop and Hive install directories to your own:

export HADOOP_HOME=/opt/bigdata/hadoop/hadoop-3.3.4
export HIVE_CONF_DIR=/opt/bigdata/hive/hive-3.1.2/conf/
export HIVE_AUX_JARS_PATH=/opt/bigdata/hive/hive-3.1.2/lib

Configure hive-site.xml

cd /opt/bigdata/hive/hive-3.1.2/conf
touch hive-site.xml
vim /opt/bigdata/hive/hive-3.1.2/conf/hive-site.xml

The configuration is as follows; adjust the values for your environment:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://slave2.tobehacker.com:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>
    <!-- Note the MySQL version: this is the 5.x driver class; for MySQL 8 use the 8.x driver (com.mysql.cj.jdbc.Driver) -->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <!-- MySQL username -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <!-- MySQL password -->
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>admin123456</value>
    </property>

    <!-- Host that HiveServer2 binds to -->
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>slave2.tobehacker.com</value>
    </property>

    <!-- Metastore service address for remote-mode deployment -->
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://slave2.tobehacker.com:9083</value>
    </property>

    <!-- Disable metastore event DB notification API authorization -->
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>

    <!-- Disable metastore schema version verification -->
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>

    <!-- Hadoop Kerberos integration -->

    <property>
      <name>hive.server2.authentication</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>hive.server2.authentication.kerberos.principal</name>
      <value>hive/_HOST@TOBEHACKER.COM</value>
    </property>

    <property>
      <name>hive.server2.authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>

    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/_HOST@TOBEHACKER.COM</value>
    </property>

    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>

    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>true</value>
    </property>

</configuration>
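One easy-to-miss detail in the configuration above: inside XML, the & separators of the JDBC URL must be escaped as &amp;amp;. A small sketch of what the driver actually receives after the XML parser decodes the value (URL shortened to two parameters for illustration):

```shell
# Sketch: the JDBC URL as written in hive-site.xml (XML-escaped) vs. what
# the driver sees after the XML parser decodes &amp; back to a literal &.
xml_url='jdbc:mysql://slave2.tobehacker.com:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false'
decoded=$(printf '%s\n' "$xml_url" | sed 's/&amp;/\&/g')
echo "$decoded"
```

Writing a bare & in the XML value makes the whole file unparseable, which is a common cause of metastore startup failures.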

Add the hive service principal on ipa1

sudo ipa service-add hive/slave2.tobehacker.com@TOBEHACKER.COM

Generate the keytab for the hive service principal

ipa-getkeytab -s ipa1.tobehacker.com -p hive/slave2.tobehacker.com@TOBEHACKER.COM  -k /etc/security/keytabs/hive.service.keytab

Copy the keytab to slave2 via scp

scp /etc/security/keytabs/hive.service.keytab root@slave2:/etc/security/keytabs/

On slave2, run:

su
chown hive:bigdata /etc/security/keytabs/hive.service.keytab

Add the following to Hadoop's core-site.xml

	<!-- Required for beeline clients connecting to HiveServer2 -->
    <property>
        <name>hadoop.proxyuser.hive.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hive.groups</name>
        <value>*</value>
    </property>

Note: restart Hadoop after making this change.

Configure environment variables

vim /home/hive/.bashrc

Append the following at the end:

export HIVE_HOME=/opt/bigdata/hive/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin

Apply the configuration

source /home/hive/.bashrc

Troubleshooting

Guava conflict

Hive's bundled guava must match the version shipped with Hadoop, otherwise Hive fails at startup. Delete Hive's guava jar:

rm /opt/bigdata/hive/hive-3.1.2/lib/guava-19.0.jar

Copy over the jar that ships with Hadoop:

cp /opt/bigdata/hadoop/hadoop-3.3.4/share/hadoop/common/lib/guava-27.0-jre.jar /opt/bigdata/hive/hive-3.1.2/lib/
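The mismatch can be spotted by comparing the guava jar names in the two lib directories. A sketch, simulated with empty files in temp directories (jar names taken from this guide); on a real cluster, point the two variables at the actual lib directories:

```shell
# Sketch: detect a guava version mismatch between the Hive and Hadoop lib
# directories. Simulated with empty files; the jar names match this guide.
hive_lib=$(mktemp -d); hadoop_lib=$(mktemp -d)
touch "$hive_lib/guava-19.0.jar" "$hadoop_lib/guava-27.0-jre.jar"
h=$(basename "$hive_lib"/guava-*.jar)
a=$(basename "$hadoop_lib"/guava-*.jar)
[ "$h" = "$a" ] && echo "guava versions match" || echo "mismatch: hive=$h hadoop=$a"
```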

Handle log4j

Rename Hive's bundled SLF4J binding to avoid a logging conflict with Hadoop:

cd /opt/bigdata/hive/hive-3.1.2/lib
 
mv log4j-slf4j-impl-2.10.0.jar log4j-slf4j-impl-2.10.0.jar.bak

Initialize the Hive metastore schema

schematool -initSchema -dbType mysql -verbose

Verify that initialization created all 74 tables.
(screenshot)

Start the Metastore service

nohup hive --service metastore > /tmp/hive.log 2>&1  &

Possible problem:

 Could not read password file: /opt/bigdata/hadoop/hadoop-3.3.4/etc/hadoop-ldapbind-pwd.txt

Fix:

chmod 777 /opt/bigdata/hadoop/hadoop-3.3.4/etc/hadoop-ldapbind-pwd.txt
chown hadoop:bigdata /opt/bigdata/hadoop/hadoop-3.3.4/etc/hadoop-ldapbind-pwd.txt

Check the metastore service port

lsof -i:9083

COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java    51941 hive  567u  IPv4 4341611      0t0  TCP *:emc-pp-mgmtsvc (LISTEN)

(Port 9083 shows up as emc-pp-mgmtsvc because lsof resolves port names via /etc/services.)

Start HiveServer2

nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &

Problems
If startup fails, check the configuration first; if the configuration is correct, the cause is usually some kind of permission problem.

1. HDFS permission problem
# Temporary workaround, TODO: tighten later
hadoop fs -chmod 777 /

2. /tmp directory permission problem
su
# Temporary workaround, TODO: tighten later
chmod -R 777 /tmp/

Use the Hive CLI

kinit hive

hive 

show databases;

create database bxc;

Use the Beeline client

kinit hive
beeline
!connect jdbc:hive2://slave2.tobehacker.com:10000/;principal=hive/slave2.tobehacker.com@TOBEHACKER.COM
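The connection URL follows a fixed pattern; as an illustration, a hypothetical helper (hs2_url is not a real tool, and port 10000 is HiveServer2's default thrift port) that assembles it:

```shell
# Hypothetical helper (not from the original guide): assemble the Kerberized
# HiveServer2 JDBC URL used by beeline. Port 10000 is HiveServer2's default.
hs2_url() {
  local host="$1" port="${2:-10000}"
  echo "jdbc:hive2://${host}:${port}/;principal=hive/${host}@TOBEHACKER.COM"
}
hs2_url slave2.tobehacker.com
```

The principal in the URL is the server's principal (from hive-site.xml), not the client's; the client identity comes from the kinit ticket.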
