Hadoop 1.0.0 installation notes (reposted from Qiang)

zwjcyz 2012-02-13

A friend's company was building a PC-based Hadoop cluster, so I followed along and reproduced the setup; it passed testing.

==========================================

The operating system is CentOS 5.4 (passwordless SSH trust between the nodes has already been set up).

Part 1: Install Java

1. Download Java (the steps below are run in the /work directory)

wget http://download.oracle.com/otn-pub/java/jdk/7u2-b13/jdk-7u2-linux-i586.tar.gz

2. Extract the download and rename the directory

tar -zxvf jdk-7u2-linux-i586.tar.gz

mv jdk1.7.0_02 java

rm jdk-7u2-linux-i586.tar.gz

3. Add the following lines to /etc/profile:

export JAVA_HOME=/work/java

export JRE_HOME=$JAVA_HOME/jre

export PATH=$PATH:$JAVA_HOME/bin
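To confirm the Java environment variables take effect (a quick check, assuming a bash login shell), the profile can be sourced and the JDK queried:

source /etc/profile

java -version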

Part 2: Install Hadoop

1. Download the Hadoop tarball (into /work)

wget http://mirror.bit.edu.cn/apache//hadoop/common/hadoop-1.0.0/hadoop-1.0.0.tar.gz

2. Extract the tarball and rename the directory

tar -zxvf hadoop-1.0.0.tar.gz

mv hadoop-1.0.0 hadoop

rm hadoop-1.0.0.tar.gz

3. Update /etc/profile to read:

export JAVA_HOME=/work/java

export JRE_HOME=$JAVA_HOME/jre

export HADOOP_HOME=/work/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
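As before, sourcing the profile lets the new variables be checked right away; hadoop version should report 1.0.0 once the tarball is in place:

source /etc/profile

hadoop version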

Part 3: Configure Hadoop

1. Configure conf/hadoop-env.sh

export JAVA_HOME=/work/java

export HADOOP_HEAPSIZE=2000
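One way to apply these two settings (a sketch; they can just as well be edited into the file by hand) is to append them to the stock conf/hadoop-env.sh. Note that HADOOP_HEAPSIZE is given in MB, so 2000 allows roughly 2 GB of heap per daemon:

echo 'export JAVA_HOME=/work/java' >> /work/hadoop/conf/hadoop-env.sh

echo 'export HADOOP_HEAPSIZE=2000' >> /work/hadoop/conf/hadoop-env.sh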

2. Configure conf/core-site.xml

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://da-free-test1:9000</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/work/hadoopneed/tmp</value>

</property>

<property>

<name>dfs.hosts.exclude</name>

<value>/work/hadoop/conf/dfs.hosts.exclude</value>

</property>

</configuration>
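The paths referenced above do not exist yet. Assuming the values in this config, they can be created up front on the master (the exclude file can simply be left empty for now):

mkdir -p /work/hadoopneed/tmp

touch /work/hadoop/conf/dfs.hosts.exclude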

3. Configure conf/hdfs-site.xml

<configuration>

<property>

<name>dfs.name.dir</name>

<value>/work/hadoopneed/name</value>

</property>

<property>

<name>dfs.data.dir</name>

<value>/work/hadoopneed/data/data</value>

</property>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

<property>

<name>dfs.namenode.handler.count</name>

<value>30</value>

</property>

<property>

<name>dfs.datanode.handler.count</name>

<value>5</value>

</property>

<property>

<name>dfs.datanode.du.reserved</name>

<value>10737418240</value>

</property>

<property>

<name>dfs.block.size</name>

<value>134217728</value>

</property>

</configuration>
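For reference, dfs.block.size = 134217728 bytes is a 128 MB block size, and dfs.datanode.du.reserved = 10737418240 bytes keeps 10 GB per data disk free for non-HDFS use. The local directories can also be created in advance (the NameNode format and the DataNodes will otherwise create them, provided the parent paths are writable):

mkdir -p /work/hadoopneed/name

mkdir -p /work/hadoopneed/data/data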

4. Configure conf/mapred-site.xml

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>da-free-test1:9001</value>

</property>

<property>

<name>mapred.local.dir</name>

<value>/work/hadoopneed/mapred/local</value>

</property>

<property>

<name>mapred.system.dir</name>

<value>/tmp/hadoop/mapred/system</value>

</property>

<property>

<name>mapred.child.java.opts</name>

<value>-Xmx512m</value>

<final>true</final>

</property>

<property>

<name>mapred.job.tracker.handler.count</name>

<value>30</value>

</property>

<property>

<name>mapred.map.tasks</name>

<value>100</value>

</property>

<property>

<name>mapred.tasktracker.map.tasks.maximum</name>

<value>12</value>

</property>

<property>

<name>mapred.reduce.tasks</name>

<value>63</value>

</property>

<property>

<name>mapred.tasktracker.reduce.tasks.maximum</name>

<value>6</value>

</property>

</configuration>
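mapred.local.dir points at local disk on every TaskTracker, so the directory should exist on each node; mapred.system.dir lives in HDFS and is created automatically:

mkdir -p /work/hadoopneed/mapred/local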

5. Configure conf/masters

da-free-test1

6. Configure conf/slaves

da-free-test2

da-free-test3

da-free-test4

Part 4: Install on the other nodes

1. Copy the hadoop and java directories to the corresponding location on the other three nodes (a loop version is sketched after these commands)

scp -r hadoop da-free-test2:/work

scp -r hadoop da-free-test3:/work

scp -r hadoop da-free-test4:/work

scp -r java da-free-test2:/work

scp -r java da-free-test3:/work

scp -r java da-free-test4:/work
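The same copy can be written as a loop (a sketch assuming the hostnames above and the SSH trust already in place):

for host in da-free-test2 da-free-test3 da-free-test4; do
  scp -r /work/hadoop /work/java ${host}:/work/
done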

2. On the other three nodes, add the following lines to /etc/profile and source it once.

export JAVA_HOME=/work/java

export JRE_HOME=$JAVA_HOME/jre

export HADOOP_HOME=/work/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
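A quick way to confirm all three nodes see the new environment (a sketch; it assumes /etc/profile has already been updated on each node):

for host in da-free-test2 da-free-test3 da-free-test4; do
  ssh ${host} 'source /etc/profile; java -version; hadoop version'
done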

At this point the installation is essentially complete; everything that comes up afterwards is treated as troubleshooting.

Part 5: Format the filesystem

1. A problem encountered:

[root@da-free-test1 bin]# ./hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
Error: dl failure on line 875
Error: failed /work/java/jre/lib/i386/server/libjvm.so, because /work/java/jre/lib/i386/server/libjvm.so: cannot restore segment prot after reloc: Permission denied
Error: dl failure on line 875
Error: failed /work/java/jre/lib/i386/server/libjvm.so, because /work/java/jre/lib/i386/server/libjvm.so: cannot restore segment prot after reloc: Permission denied

Solution: disable SELinux.

Edit /etc/selinux/config and set:

SELINUX=disabled

Make the same change on the other three nodes and reboot them.
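If a reboot is inconvenient, SELinux can also be switched to permissive mode immediately on each node (the config change above still makes it permanent after the next boot):

setenforce 0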

Resolving the warning:

Warning: $HADOOP_HOME is deprecated.

Remove the $HADOOP_HOME export that was just added to /etc/profile and log in again.
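An alternative that keeps $HADOOP_HOME in /etc/profile is to suppress the warning; the Hadoop 1.0 launch scripts are reported to check this variable before printing it (add it next to the other exports):

export HADOOP_HOME_WARN_SUPPRESS=1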

2. Successful format

[root@da-free-test1 ~]# hadoop namenode -format
12/02/08 12:01:21 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = da-free-test1/172.16.18.202
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1214675; compiled by 'hortonfo' on Thu Dec 15 16:36:35 UTC 2011
************************************************************/
12/02/08 12:01:21 INFO util.GSet: VM type = 32-bit
12/02/08 12:01:21 INFO util.GSet: 2% max memory = 35.55625 MB
12/02/08 12:01:21 INFO util.GSet: capacity = 2^23 = 8388608 entries
12/02/08 12:01:21 INFO util.GSet: recommended=8388608, actual=8388608
12/02/08 12:01:21 INFO namenode.FSNamesystem: fsOwner=root
12/02/08 12:01:21 INFO namenode.FSNamesystem: supergroup=supergroup
12/02/08 12:01:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/02/08 12:01:21 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/02/08 12:01:21 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/08 12:01:21 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/02/08 12:01:22 INFO common.Storage: Image file of size 110 saved in 0 seconds.
12/02/08 12:01:22 INFO common.Storage: Storage directory /work/hadoopneed/name has been successfully formatted.
12/02/08 12:01:22 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at da-free-test1/172.16.18.202
************************************************************/

Part 6: Start Hadoop

1. Errors in the log after starting Hadoop

./start-all.sh

WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

Solution: take HDFS out of safe mode: hadoop dfsadmin -safemode leave (Hadoop itself was not shut down at this point).

After waiting a while, Hadoop recovered on its own.
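Before retrying, DataNode registration can also be checked from the NameNode; once live nodes show up, the replication error should stop:

hadoop dfsadmin -report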

Looking at the log hadoop-root-jobtracker-da-free-test1.log, you can see:

2012-02-08 12:14:07,804 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2012-02-08 12:14:07,804 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2012-02-08 12:14:07,805 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
... (IPC Server handlers 1 through 29 on 9001 start in the same way) ...
2012-02-08 12:14:07,808 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
2012-02-08 12:14:12,623 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/da-free-test4
2012-02-08 12:14:12,625 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_da-free-test4:da-free-test1/127.0.0.1:42182 to host da-free-test4
2012-02-08 12:14:12,743 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/da-free-test3
2012-02-08 12:14:12,744 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_da-free-test3:da-free-test1/127.0.0.1:53695 to host da-free-test3
2012-02-08 12:14:12,802 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/da-free-test2
2012-02-08 12:14:12,802 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_da-free-test2:da-free-test1/127.0.0.1:47259 to host da-free-test2

Open a browser at http://172.16.18.202:50030; the page shows 3 nodes.

Upload a file: hadoop dfs -put hadoop-root-namenode-da-free-test1.log /usr/testfile

List the uploaded file: hadoop dfs -ls /usr/

Found 1 items
-rw-r--r--   3 root supergroup      84533 2012-02-08 12:20 /usr/testfile

Checking http://172.16.18.202:50070 shows 396 KB of DFS space used.
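To also exercise MapReduce end to end, the bundled examples jar can be run (assuming the stock hadoop-examples-1.0.0.jar that ships with this release):

hadoop jar $HADOOP_HOME/hadoop-examples-1.0.0.jar pi 2 10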

Verification passes for now.

===========================

http://i752.photobucket.com/albums/xx166/ntudou/dev/hadoop01.png

http://i752.photobucket.com/albums/xx166/ntudou/dev/02.png

http://i752.photobucket.com/albums/xx166/ntudou/dev/03.png
