Setting up a Hadoop and Spark cluster on Ubuntu

Set up a Hadoop and Spark cluster on Ubuntu: 1 master (namenode) and 3 slaves (datanodes).


1. Install Java

sudo mkdir /usr/local/java/
sudo tar xvf jdk-7u79-linux-x64.tgz -C /usr/local/java/
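As a quick sanity check (assuming the archive unpacks into a jdk1.7.0_79 directory, the same path used later in /etc/profile), the bundled binary can be run directly before the PATH is set up:

# verify the unpacked JDK works; the jdk1.7.0_79 directory name is an assumption
/usr/local/java/jdk1.7.0_79/bin/java -version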

2. Install Scala

$ sudo mkdir /usr/local/scala
$ sudo tar xvf scala-2.10.5.tgz -C /usr/local/scala/

3. Install Hadoop

sudo mkdir /usr/local/hadoop
# strip the top-level hadoop-2.7.1/ directory so that HADOOP_HOME can be /usr/local/hadoop
sudo tar xvf hadoop-2.7.1.tar.gz -C /usr/local/hadoop/ --strip-components=1

4. Install Spark

Spark runs on Java 6+, Python 2.6+ and R 3.1+. For the Scala API, Spark 1.4.1 uses Scala 2.10. You will need to use a compatible Scala version (2.10.x).
sudo mkdir /usr/local/spark
sudo tar xvf spark-1.4.1-bin-hadoop2.6.tgz -C /usr/local/spark/

5. Set up environment variables

# config /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
export SCALA_HOME=/usr/local/scala/scala-2.10.5
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/usr/local/hadoop
export SPARK_HOME=/usr/local/spark/spark-1.4.1-bin-hadoop2.6
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin
# then reload the profile in the current shell:
source /etc/profile
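After sourcing /etc/profile, a quick check that each tool is now on the PATH might look like this (the exact version strings printed depend on the archives you downloaded):

# confirm the environment variables took effect
java -version
scala -version
hadoop version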

6. Create the hadoop user and distribute from the master to the slaves


# on the master, in /usr/local, give the hadoop user ownership:
sudo chown -R hadoop:hadoop hadoop
sudo chown -R hadoop:hadoop spark
sudo chown -R hadoop:hadoop scala
# copy each directory to slave1 (repeat for slave2 and slave3);
# the mv that follows each scp is run on the slave to move the copy into place:
sudo scp -r /usr/local/hadoop hadoop@slave1:~/
sudo mv ~/hadoop /usr/local/
sudo scp -r /usr/local/scala hadoop@slave1:~/
sudo mv ~/scala /usr/local/
sudo scp -r /usr/local/spark hadoop@slave1:~/
sudo mv ~/spark /usr/local/
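The step title assumes a hadoop user already exists on every node and that ssh/scp between the nodes works without a password. If that is not yet the case, a minimal sketch using the standard Ubuntu adduser and OpenSSH tools could be:

# create the hadoop user (run on the master and on every slave)
sudo adduser hadoop

# as the hadoop user on the master: generate a key and copy it to each slave
ssh-keygen -t rsa -P ""
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2
ssh-copy-id hadoop@slave3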


7. Configure Hadoop

<!-- /usr/local/hadoop/etc/hadoop/core-site.xml -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master_ip:9000</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/hdfs-site.xml -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/datalog1</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/data1</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/mapred-site.xml -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/hadoop-env.sh -->
# The java implementation to use.
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
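With this configuration copied to all nodes, HDFS is normally formatted once on the master and then started. A minimal sketch, assuming HADOOP_HOME is set as in /etc/profile above and the datanodes are listed in etc/hadoop/slaves:

# on the master, format the namenode once (this destroys any existing HDFS metadata)
hdfs namenode -format

# start the HDFS daemons on the master and slaves
$HADOOP_HOME/sbin/start-dfs.sh

# list the Java daemons running on this node
jps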

8. Configure Spark

# /usr/local/spark/spark-1.4.1-bin-hadoop2.6/conf/spark-env.sh
#jdk
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
#scala
export SCALA_HOME=/usr/local/scala/scala-2.10.5
#spark master ip
export SPARK_MASTER_IP=192.168.1.1
export SPARK_WORKER_MEMORY=2g
#hadoop config folder
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
# /usr/local/spark/spark-1.4.1-bin-hadoop2.6/conf/slaves
master
slave1
slave2
slave3
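Once spark-env.sh and the slaves file have been copied to every node, the standalone cluster can be started from the master. A sketch, assuming SPARK_HOME as set in /etc/profile and the default master web UI port 8080:

# start the Spark master and all workers listed in conf/slaves
$SPARK_HOME/sbin/start-all.sh

# check the master web UI at http://192.168.1.1:8080
# and try an interactive shell against the cluster:
$SPARK_HOME/bin/spark-shell --master spark://192.168.1.1:7077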
