1. Environment
The Hadoop version installed in this experiment is the stable release 2.7.3, downloaded from:
http://www-eu.apache.org/dist/hadoop/common/
The experiment uses three machines in total:
[hadoop@hadoop1 hadoop]$ cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.21 hadoop1
192.168.56.22 hadoop2
192.168.56.23 hadoop3
Of these:
hadoop1 runs the NameNode, SecondaryNameNode and ResourceManager
hadoop2/3 run the DataNode and NodeManager
The /hadoop directory on each node will be used to install Hadoop and to store its data.
2. Create the hadoop user
Create a hadoop user to perform the Hadoop installation:
useradd hadoop
chown -R hadoop:hadoop /hadoop
3. Set up passwordless SSH for the hadoop user
Run the following on each of hadoop1/2/3:
su - hadoop
ssh-keygen -t rsa
ssh-keygen -t dsa
cd /home/hadoop/.ssh
cat *.pub >authorized_keys
On hadoop2, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop2_keys
On hadoop3, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop3_keys
On hadoop1, run:
su - hadoop
cd /home/hadoop/.ssh
cat hadoop2_keys >> authorized_keys
cat hadoop3_keys >> authorized_keys
Then copy the merged authorized_keys file back to the other machines:
scp ./authorized_keys hadoop2:/home/hadoop/.ssh/
scp ./authorized_keys hadoop3:/home/hadoop/.ssh/
Note: authorized_keys must have 644 permissions; if it does not, change them with chmod, otherwise passwordless login will fail! (A quick check is sketched below.)
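A minimal sketch of fixing the permissions and verifying the passwordless login on each node; the 700 mode on ~/.ssh and the date test command are common practice rather than part of the original steps:
$ chmod 644 ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh
$ ssh hadoop2 date
$ ssh hadoop3 date
If no password prompt appears and a date is printed, the key setup is working.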
4. Add Java environment variables
Java download address: http://www.oracle.com/technetwork/java/javase/downloads/
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
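These exports are typically appended to the hadoop user's ~/.bash_profile on all three nodes (the choice of ~/.bash_profile is an assumption, not stated in the original) and then verified:
$ source ~/.bash_profile
$ echo $JAVA_HOME
$ java -version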
5. Modify the Hadoop configuration files
Extract the Hadoop installation archive into /hadoop (a sketch of the extraction commands follows); the configuration files to modify are listed next.
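A minimal extraction sketch, assuming the downloaded archive is named hadoop-2.7.3.tar.gz and that the unpacked directory is renamed to /hadoop/hadoop to match the paths used later in this guide:
$ cd /hadoop
$ tar -xzf hadoop-2.7.3.tar.gz
$ mv hadoop-2.7.3 hadoop
The configuration files to be modified are: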
~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
Create working directories under /hadoop for HDFS data, temporary files and NameNode metadata:
$ cd /hadoop
$ mkdir data tmp name
1. Modify hadoop-env.sh and yarn-env.sh
cd /hadoop/hadoop/etc/hadoop
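The original does not show the edit itself; the change usually needed in both hadoop-env.sh and yarn-env.sh is to hard-code JAVA_HOME, using the JDK path from section 4:
# in hadoop-env.sh and yarn-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_131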
2. Modify the slaves file
$ cat slaves
hadoop2
hadoop3
3. Modify core-site.xml
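The original listing for core-site.xml did not survive; a minimal sketch consistent with the rest of this setup (the 9000 port and the /hadoop/tmp path are assumptions) would be:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
</configuration>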
4. Modify hdfs-site.xml
$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:9001</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
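The listing above does not show a NameNode metadata directory, but since a name directory was created under /hadoop earlier, a property along these lines was presumably also set (an assumption, not recoverable from the original):
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/name</value>
  </property>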
5. Modify mapred-site.xml
$ mv mapred-site.xml.template mapred-site.xml
$ cat mapred-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
  </property>
</configuration>
6. Modify yarn-site.xml
$ cat yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop1:8088</value>
  </property>
</configuration>
6. Format Hadoop
The NameNode must be formatted before starting the NameNode and YARN:
$ bin/hdfs namenode -format htest
17/05/14 23:50:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1/192.168.56.21
STARTUP_MSG:   args = [-format, htest]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /hadoop/hadoop/etc/hadoop:... (jar list omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
... INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
... INFO namenode.NameNode: createNameNode [-format, htest]
Formatting using clusterid: CID-3cf41172-e75f-4bfb-9f8d-32877047a551
... (FSNamesystem, blockmanagement and util.GSet INFO output omitted)
... INFO common.Storage: Storage directory /hadoop/name has been successfully formatted.
... INFO namenode.FSImageFormatProtobuf: Saving image file .../current/fsimage.ckpt_0000000000000000000 using no compression
... Image file fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
... INFO util.ExitUtil: Exiting with status 0
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.56.21
************************************************************/
7. Start Hadoop
$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /hadoop/hadoop/logs/...out
hadoop2: starting datanode, logging to /hadoop/hadoop/logs/...out
hadoop3: starting datanode, logging to /hadoop/hadoop/logs/...out
Starting secondary namenodes [hadoop1]
hadoop1: starting secondarynamenode, logging to /hadoop/hadoop/logs/...secondarynamenode...out
Check the processes on hadoop1:
$ jps
8568 NameNode
8873 Jps
8764 SecondaryNameNode
Start YARN:
$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop/logs/...out
starting nodemanager, logging to /hadoop/hadoop/logs/...out
$ jps
8930 ResourceManager
9187 SecondaryNameNode
...
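mapred-site.xml above configures a JobHistory server on ports 10020/19888, but the original steps never start it. If it is wanted, it can be started on hadoop1 with the daemon script shipped in 2.7.3 (an addition to the original procedure):
$ ./sbin/mr-jobhistory-daemon.sh start historyserver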
Check the processes on the DataNodes:
[hadoop@hadoop2 hadoop]$ jps
7909 DataNode
8039 NodeManager
8139 Jps
Stop Hadoop:
$ ./sbin/stop-yarn.sh
$ ./sbin/stop-dfs.sh
8. Web access interfaces
Web Interfaces:
Once the Hadoop cluster is up and running, check the web UI of each component as described below (a quick connectivity check with curl is sketched after the table):
Daemon                         Web Interface            Notes
NameNode                       http://nn_host:port/     Default HTTP port is 50070.
ResourceManager                http://rm_host:port/     Default HTTP port is 8088.
MapReduce JobHistory Server    http://jhs_host:port/    Default HTTP port is 19888.
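For example, from any machine that can resolve the cluster hostnames, the NameNode and ResourceManager UIs can be checked with curl, using the hostnames from this setup and the default ports in the table (an illustrative check, not part of the original steps):
$ curl -I http://hadoop1:50070/
$ curl -I http://hadoop1:8088/
An HTTP 200 response indicates the corresponding web UI is reachable.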