Set up a Hadoop and Spark cluster on Ubuntu: 1 master (namenode) and 3 slaves (datanodes)
1. Install Java
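A minimal install sketch, assuming the Oracle JDK 7u79 tarball has already been downloaded; the target path matches the JAVA_HOME set in step 5:

# assumption: jdk-7u79-linux-x64.tar.gz downloaded from Oracle
sudo mkdir -p /usr/local/java
sudo tar -zxf jdk-7u79-linux-x64.tar.gz -C /usr/local/java
# sanity check: should print java version "1.7.0_79"
/usr/local/java/jdk1.7.0_79/bin/java -version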
2. Install Scala
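A minimal install sketch, assuming the scala-2.10.5.tgz tarball has been downloaded from scala-lang.org; the path matches the SCALA_HOME set in step 5:

# assumption: scala-2.10.5.tgz downloaded from scala-lang.org
sudo mkdir -p /usr/local/scala
sudo tar -zxf scala-2.10.5.tgz -C /usr/local/scala
# sanity check: should print version 2.10.5
/usr/local/scala/scala-2.10.5/bin/scala -version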
3. Install Hadoop
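A minimal install sketch; the exact 2.6.x tarball is an assumption (the Spark package below is prebuilt for Hadoop 2.6), and the final path matches the HADOOP_HOME set in step 5:

# assumption: hadoop-2.6.0.tar.gz downloaded from an Apache mirror
sudo tar -zxf hadoop-2.6.0.tar.gz -C /usr/local
sudo mv /usr/local/hadoop-2.6.0 /usr/local/hadoop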
4. Install Spark
Spark runs on Java 6+, Python 2.6+ and R 3.1+. For the Scala API, Spark 1.4.1 uses Scala 2.10. You will need to use a compatible Scala version (2.10.x).
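A minimal install sketch, assuming the prebuilt package spark-1.4.1-bin-hadoop2.6.tgz has been downloaded from spark.apache.org; the path matches the SPARK_HOME set in step 5:

# assumption: spark-1.4.1-bin-hadoop2.6.tgz downloaded from spark.apache.org
sudo mkdir -p /usr/local/spark
sudo tar -zxf spark-1.4.1-bin-hadoop2.6.tgz -C /usr/local/spark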
5. Add environment variables
# config /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
export SCALA_HOME=/usr/local/scala/scala-2.10.5
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/usr/local/hadoop
export SPARK_HOME=/usr/local/spark/spark-1.4.1-bin-hadoop2.6
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin
# reload the profile so the variables take effect in the current shell
source /etc/profile
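Once the profile is sourced, each tool should resolve from PATH; a quick sanity check:

java -version     # expects 1.7.0_79
scala -version    # expects 2.10.5
hadoop version    # expects the 2.6.x build installed above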
6. Create the hadoop user and distribute the installs from master to the slaves
# on master, in /usr/local: give the hadoop user ownership of each install
sudo chown -R hadoop:hadoop hadoop
sudo chown -R hadoop:hadoop spark
sudo chown -R hadoop:hadoop scala
# on master: copy each install to a slave (repeat for slave2 and slave3)
sudo scp -r /usr/local/hadoop hadoop@slave1:~/
sudo scp -r /usr/local/scala hadoop@slave1:~/
sudo scp -r /usr/local/spark hadoop@slave1:~/
# on each slave: move the copies into place
sudo mv ~/hadoop /usr/local/
sudo mv ~/scala /usr/local/
sudo mv ~/spark /usr/local/
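The commands above assume the hadoop user already exists on every node and that the master can SSH to each slave (and to itself, since it also runs a Spark worker below) without a password, which the start scripts require. A sketch of that prerequisite; the hostnames slave1..slave3 match the ones used above:

# on every node (master and all slaves): create the hadoop user
sudo adduser hadoop
# on master, logged in as hadoop: generate a key and push it to every node
ssh-keygen -t rsa
ssh-copy-id hadoop@master
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2
ssh-copy-id hadoop@slave3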
7. Configure Hadoop
<!-- /usr/local/hadoop/etc/hadoop/core-site.xml -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master_ip:9000</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/hdfs-site.xml -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/datalog1</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/data1</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/mapred-site.xml -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
<!-- /usr/local/hadoop/etc/hadoop/hadoop-env.sh -->
# The java implementation to use.
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
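The datanode list is not part of the XML above; the HDFS start script reads it from the slaves file. A sketch of that file plus the usual first run, under the same hostname assumptions:

# /usr/local/hadoop/etc/hadoop/slaves
slave1
slave2
slave3

# on master: format HDFS once, then start the daemons
/usr/local/hadoop/bin/hdfs namenode -format
/usr/local/hadoop/sbin/start-dfs.sh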
8. Configure Spark
# /usr/local/spark/spark-1.4.1-bin-hadoop2.6/conf/spark-env.sh
#jdk
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
#scala
export SCALA_HOME=/usr/local/scala/scala-2.10.5
#spark master ip
export SPARK_MASTER_IP=192.168.1.1
export SPARK_WORKER_MEMORY=2g
#hadoop config folder
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
# /usr/local/spark/spark-1.4.1-bin-hadoop2.6/conf/slaves
master
slave1
slave2
slave3
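In a fresh Spark package both files above exist only as .template copies, so they are assumed to have been copied into place first; once edited, the standalone cluster starts from the master:

cd /usr/local/spark/spark-1.4.1-bin-hadoop2.6
cp conf/spark-env.sh.template conf/spark-env.sh   # then edit as above
cp conf/slaves.template conf/slaves               # then list the workers
sbin/start-all.sh
# web UIs: http://master:8080 (Spark master), http://master:50070 (HDFS namenode)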