What are the detailed steps for installing Hadoop 3 in FinalShell?
# FinalShell + CentOS 7: Complete Step-by-Step Installation of Hadoop 3.3.6

All steps run inside **FinalShell**; the commands can be copied and executed as-is. Tested against CentOS 7 + Hadoop 3.3.6 + JDK 8, and applicable to both pseudo-distributed and fully distributed setups.

---

# Preparation (all inside FinalShell)

## 1. Disable the firewall & SELinux

```bash
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux (immediately, and persistently after reboot)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```

## 2. Install dependencies

```bash
yum install -y wget net-tools vim openssh-clients
```

---

# Step 1: Install JDK 8 (required by Hadoop)

## 1. One-command install of OpenJDK 8

```bash
yum install -y java-1.8.0-openjdk-devel
```

## 2. Find the real JDK path

```bash
readlink -f $(which java)
```

Example path:
`/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402-0.b06.el7_9.x86_64`

## 3. Configure global environment variables

```bash
vim /etc/profile
```

Append at the very bottom of the file:

```sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402-0.b06.el7_9.x86_64
export PATH=$JAVA_HOME/bin:$PATH
```

Save and exit: `ESC` → type `:wq` → Enter.

## 4. Apply the configuration

```bash
source /etc/profile
java -version
```

If the output shows `1.8.0`, the JDK is installed correctly.

---

# Step 2: Set the hostname + hosts mapping

## 1. Set the hostname

```bash
hostnamectl set-hostname hadoop-master
```

## 2. Configure hosts

```bash
vim /etc/hosts
```

Add one line (use your own VM's IP):

```
192.168.122.130 hadoop-master
```

---

# Step 3: Set up passwordless SSH login

```bash
# Generate a key pair (press Enter through all prompts)
ssh-keygen -t rsa

# Authorize the key
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Test passwordless login
ssh hadoop-master date
```

---

# Step 4: Download & extract Hadoop 3.3.6

## 1. Change to the /opt directory

```bash
cd /opt
```

## 2. Download Hadoop from the Tsinghua mirror

```bash
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
```

## 3. Extract + rename

```bash
tar -zxvf hadoop-3.3.6.tar.gz
mv hadoop-3.3.6 hadoop
```

## 4. Configure the Hadoop environment variables

```bash
vim /etc/profile
```

Append:

```sh
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Apply and verify:

```bash
source /etc/profile
hadoop version
```

---

# Step 5: Edit the Hadoop core configuration files

Config directory: `/opt/hadoop/etc/hadoop`

## 1. Configure hadoop-env.sh

```bash
vim /opt/hadoop/etc/hadoop/hadoop-env.sh
```

Append at the end:

```sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402-0.b06.el7_9.x86_64
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

## 2. Configure core-site.xml

```bash
vim /opt/hadoop/etc/hadoop/core-site.xml
```

Replace the entire contents with:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
```

## 3. Configure hdfs-site.xml

```bash
vim /opt/hadoop/etc/hadoop/hdfs-site.xml
```

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
```

## 4. Configure mapred-site.xml

```bash
vim /opt/hadoop/etc/hadoop/mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
</configuration>
```

## 5. Configure yarn-site.xml

```bash
vim /opt/hadoop/etc/hadoop/yarn-site.xml
```

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
</configuration>
```

## 6. Edit workers

```bash
vim /opt/hadoop/etc/hadoop/workers
```

Clear the existing contents and write:

```
hadoop-master
```

---

# Step 6: Initialize & start Hadoop

## 1. Format the NameNode (**run only once**)

```bash
hdfs namenode -format
```

## 2. Start HDFS

```bash
start-dfs.sh
```

## 3. Start YARN

```bash
start-yarn.sh
```

## 4. Check the processes

```bash
jps
```

Expected processes:

- NameNode
- DataNode
- SecondaryNameNode
- ResourceManager
- NodeManager

---

# Step 7: Open the web UIs

With the ports open (or the firewall disabled), visit in a browser:

- HDFS: `http://<VM IP>:9870`
- YARN: `http://<VM IP>:8088`

---

# Common stop commands

```bash
stop-dfs.sh
stop-yarn.sh
```
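As a convenience, the `JAVA_HOME` value used in the JDK step can be derived from the `readlink -f $(which java)` output instead of being copied by hand. A minimal sketch; `java_path` below is hard-coded to the example path from the guide, so on a real machine substitute `java_path=$(readlink -f "$(which java)")`:

```shell
# Derive JAVA_HOME by stripping the trailing .../jre/bin/java (or .../bin/java)
# from the resolved java binary path.
# On a real machine: java_path=$(readlink -f "$(which java)")
java_path="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402-0.b06.el7_9.x86_64/jre/bin/java"

java_home="${java_path%/jre/bin/java}"   # OpenJDK 8 layout: <JAVA_HOME>/jre/bin/java
java_home="${java_home%/bin/java}"       # fallback for layouts without a jre/ dir
echo "$java_home"
```

With the example path above, this prints `/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402-0.b06.el7_9.x86_64`, which is exactly the value exported in `/etc/profile`.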

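After editing the XML files, a quick sanity check is to confirm that each one still contains a closing `</configuration>` tag, since a truncated paste is a common cause of cryptic startup failures. A hedged sketch that runs against a throwaway temp directory with one sample file; on a real node set `conf_dir=/opt/hadoop/etc/hadoop` instead:

```shell
# Check that every *.xml config file still has its closing </configuration> tag.
# For illustration we create a temp dir with one sample file; on a real node
# point conf_dir at /opt/hadoop/etc/hadoop instead.
conf_dir=$(mktemp -d)
cat > "$conf_dir/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

for f in "$conf_dir"/*.xml; do
  if grep -q '</configuration>' "$f"; then
    echo "$(basename "$f"): OK"
  else
    echo "$(basename "$f"): missing </configuration>"
  fi
done
```

This only catches truncation, not malformed XML in general; `xmllint --noout <file>` (from the libxml2 package) is a stricter alternative if it is installed.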

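The `jps` check after startup can also be scripted so that a missing daemon stands out immediately. A minimal sketch; `jps_out` below is a hard-coded sample listing for illustration, so on the actual node replace it with `jps_out=$(jps)`:

```shell
# Flag any of the five expected Hadoop daemons absent from the jps output.
# Sample listing shown for illustration; on a real node use: jps_out=$(jps)
jps_out='12001 NameNode
12102 DataNode
12203 SecondaryNameNode
12304 ResourceManager
12405 NodeManager
12506 Jps'

missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$jps_out" | grep -qw "$d"; then
    echo "$d: running"
  else
    echo "$d: MISSING"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all daemons running"
```

Note that `grep -w` matches whole words only, so the `NameNode` check is not fooled by the `SecondaryNameNode` line.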



