Installing Hadoop via Cloudera Manager

hadoop 2013-12-19


I'm a bit slow, and I hit plenty of snags during the install; the articles reposted on other sites don't spell the steps out clearly either, so below is a summary of what actually worked for me in practice.

1. SSH into the machine that will host the management console, make sure the firewall and SELinux are turned off, then run cloudera-manager-installer.bin
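For reference, on RHEL/CentOS 5 (the platform the CDH4 RPMs below target), turning off the firewall and SELinux looks roughly like the sketch below; run it as root on each node. The helper function name is mine, not from any Cloudera tooling:

```shell
# Flip the SELINUX= line to "disabled" in a given config file.
# (hypothetical helper; the real file is normally /etc/selinux/config)
disable_selinux_config() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# As root, on RHEL/CentOS 5:
#   service iptables stop && chkconfig iptables off   # firewall off now and on boot
#   setenforce 0                                      # SELinux off immediately
#   disable_selinux_config /etc/selinux/config        # keeps it off after reboot
```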

2. Edit the hosts file and copy it to all nodes

vim /etc/hosts

## contents ————————————————

172.16.1.1x node1

172.16.1.2x node2

172.16.1.3x node3

127.0.0.1 localhost # this must map to localhost, and it must be the first 127.0.0.1 entry
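Since this cluster uses key-based root login (see step 3 below), the edited hosts file can be fanned out to the other nodes with a small scp loop; the function name is my own sketch:

```shell
# Copy a hosts file to /etc/hosts on every named node over scp.
# (assumes key-based root ssh to each node, as this cluster uses)
push_hosts() {
  local file="$1"; shift
  local node
  for node in "$@"; do
    scp "$file" "root@$node:/etc/hosts"
  done
}

# push_hosts /etc/hosts node2 node3
```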

3. Open the management console at http://{{host}}:7180/

1) The account used to install the Hadoop components must have SSH access and root privileges

2) Servers like ours use key-based login, so before installing, the chosen account must be given sudo rights with no password prompt. Do the following on every node machine:

a. As root, make the file writable: chmod +w /etc/sudoers

b. vim /etc/sudoers and add a line like: nic ALL=(ALL) NOPASSWD: ALL

c. Remove the write permission again: chmod -w /etc/sudoers
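A small helper can generate the sudoers line for whatever account you pick ('nic' is just the example user from step b); running visudo -c afterwards to check the file still parses is a safer habit than editing /etc/sudoers blind:

```shell
# Build the passwordless-sudo line for a given account name.
# (hypothetical helper; 'nic' is the example user from the article)
nopasswd_line() {
  echo "$1 ALL=(ALL) NOPASSWD: ALL"
}

# nopasswd_line nic >> /etc/sudoers   # after chmod +w, as in step b
# visudo -c                           # verify the file still parses
```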

3) Give the account that will install Hadoop read and execute permission on these files:

chmod +r /bin/mktemp

chmod +x /bin/mktemp

chmod +r /usr/bin/tee

chmod +x /usr/bin/tee

chmod +r /usr/bin/tr

chmod +x /usr/bin/tr
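The six chmod calls above collapse into one loop; a+rx grants read and execute to all users, which covers whatever install account you use. The helper name is mine:

```shell
# Grant read+execute to everyone on each tool the installer shells out to.
grant_rx() {
  local f
  for f in "$@"; do
    chmod a+rx "$f"
  done
}

# grant_rx /bin/mktemp /usr/bin/tee /usr/bin/tr
```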

4) cd into any directory and wget the various Hadoop component packages (if the cloudera-manager machine will not run any Hadoop components itself, it needs neither the downloads nor the installs)

wget http://archive.cloudera.com/cm4/RedHat/5/x86_64/cm/4.1.1/RPMS/x86_64/jdk-6u31-linux-amd64.rpm

wget http://archive.cloudera.com/cm4/redhat/5/x86_64/cm/4.1.2/RPMS/x86_64/cloudera-manager-agent-4.1.2-1.cm412.p0.428.x86_64.rpm

wget http://archive.cloudera.com/cm4/redhat/5/x86_64/cm/4.1.2/RPMS/x86_64/cloudera-manager-daemons-4.1.2-1.cm412.p0.428.x86_64.rpm

wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/bigtop-utils-0.4+359-1.cdh4.1.2.p0.34.el5.noarch.rpm

wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/x86_64/bigtop-jsvc-0.4+359-1.cdh4.1.2.p0.43.el5.x86_64.rpm

wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/bigtop-tomcat-0.4+359-1.cdh4.1.2.p0.38.el5.noarch.rpm

wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/flume-ng-1.2.0+122-1.cdh4.1.2.p0.7.el5.noarch.rpm

wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/oozie-3.2.0+126-1.cdh4.1.2.p0.10.el5.noarch.rpm -O oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
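Rather than pasting each wget by hand, the URLs can be kept one per line in a plain text file and fetched in a loop; wget -c resumes a partial download, which helps when the connection to archive.cloudera.com is slow or flaky. A sketch (the file name is a placeholder):

```shell
# Fetch every non-empty URL listed (one per line) in a file.
# -c resumes partial downloads if the connection drops mid-transfer.
fetch_list() {
  local url
  while read -r url; do
    [ -n "$url" ] && wget -c "$url"
  done < "$1"
}

# fetch_list cdh-urls.txt
```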

### Note this is only part of the list, and the versions may differ. The required files can be found under the paths above, or you can let cloudera-manager run its automatic install, copy the whole process log, and dig out the paths of the *.rpm packages it downloads ###

### If there is an automatic download-and-install feature, why download by hand? Because as soon as anything in the cloudera-manager install fails (a permissions problem, a download timeout, etc.), everything is rolled back, including the files it has already downloaded and installed.

In other words, every install attempt through cloudera-manager has to re-download and re-install from scratch. On top of that, some of the RPMs are large, and unlike servers abroad, our domestic servers download these packages very slowly and often fail outright; it is very easy to spend half a day installing and end up right back where you started.

To fix this once and for all, download the packages locally (I used Thunder), then scp them up; I averaged about 90 KB/s that way, faster than downloading on the server directly (tested myself; your network may differ). Then scp them from one node to the rest ###

### The required package names are listed below; find and download each one yourself ###

###

hadoop-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-hdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-httpfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-yarn-2.0.0.1.cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-mapreduce-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-0.20-mapreduce-0.20.21.cdh4.1.2.p0.24.el5.x86_64.rpm

hadoop-libhdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-client-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

hadoop-hdfs-fuse-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

zookeeper-3.4.31.cdh4.1.2.p0.34.el5.noarch.rpm

hbase-0.92.1-cdh4.1.2.p0.24.el5.noarch.rpm

hive-0.9.0-cdh4.1.2.p0.21.el5.noarch.rpm

oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm

oozie-client-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm

pig-0.10.01.cdh4.1.2.p0.24.el5.noarch.rpm

hue-common-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-about-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-help-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-filebrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-jobbrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-jobsub-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-beeswax-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-plugins-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-proxy-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-shell-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

hue-useradmin-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

sqoop-1.4.11.cdh4.1.2.p0.21.el5.noarch.rpm

###
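Once all the RPMs are on one node, fanning them out to the others is another short scp loop (key-based ssh assumed; the destination directory and function name are placeholders of mine):

```shell
# Copy all *.rpm files in the current directory to each named host.
push_rpms() {
  local dest="$1"; shift
  local host
  for host in "$@"; do
    scp ./*.rpm "$host:$dest/"
  done
}

# push_rpms /root/rpms node2 node3
```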

5) Run the installation

yum install cyrus-sasl-gssapi

rpm -ivh jdk-6u31-linux-amd64.rpm # Strangely it insists on this one. I had already installed JDK 1.7 on the machine and set the environment variables, yet cloudera still said no JDK was found and downloaded and installed its own; maybe I just didn't configure it correctly?

rpm -ivh cloudera-manager-agent-4.1.2-1.cm412.p0.428.x86_64.rpm

rpm -ivh cloudera-manager-daemons-4.1.2-1.cm412.p0.428.x86_64.rpm

rpm -ivh bigtop-utils-0.4+359-1.cdh4.1.2.p0.34.el5.noarch.rpm

rpm -ivh bigtop-jsvc-0.4-cdh4.1.2.p0.43.el5.x86_64.rpm

rpm -ivh bigtop-tomcat-0.4-cdh4.1.2.p0.38.el5.noarch.rpm

rpm -ivh flume-ng-1.2.0-cdh4.1.2.p0.7.el5.noarch.rpm

rpm -ivh hadoop-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-hdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-httpfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-yarn-2.0.0.1.cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-mapreduce-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-0.20-mapreduce-0.20.21.cdh4.1.2.p0.24.el5.x86_64.rpm

rpm -ivh hadoop-libhdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-client-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh hadoop-hdfs-fuse-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm

rpm -ivh zookeeper-3.4.31.cdh4.1.2.p0.34.el5.noarch.rpm

rpm -ivh hbase-0.92.1-cdh4.1.2.p0.24.el5.noarch.rpm

rpm -ivh hive-0.9.0-cdh4.1.2.p0.21.el5.noarch.rpm

rpm -ivh oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm

rpm -ivh oozie-client-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm

rpm -ivh pig-0.10.01.cdh4.1.2.p0.24.el5.noarch.rpm

rpm -ivh hue-common-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-about-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-help-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-filebrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-jobbrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-jobsub-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-beeswax-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-oozie-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-plugins-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-proxy-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-shell-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh hue-useradmin-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm

rpm -ivh sqoop-1.4.11.cdh4.1.2.p0.21.el5.noarch.rpm

6) Ordering problems during the install can leave some packages only partially installed; repeat the commands above 4-5 times until every package reports that it is already installed
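The "repeat 4-5 times" advice can be wrapped in a small pass loop that reruns a command a fixed number of times, ignoring per-pass failures; the helper and the script name in the usage comment are hypothetical:

```shell
# Run the given command N times regardless of failures; rerunning the
# whole rpm block like this lets out-of-order dependencies sort
# themselves out across passes.
repeat_passes() {
  local n="$1"; shift
  local i
  for i in $(seq 1 "$n"); do
    "$@" || true
  done
}

# repeat_passes 5 sh install-all-rpms.sh   # hypothetical script holding the rpm -ivh lines above
```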

7) Use cloudera-manager to run the "install". This step basically just writes out the config files and starts cloudera-scm-agent (this part matters; it may report errors, reasons to be filled in later), so it finishes quickly.

8) When it finally reports success, the machine is under cloudera-scm-agent management and the base install and configuration are essentially done. What remains is to follow the cloudera-manager UI to assign Hadoop components and roles to nodes as needed, e.g. choosing the namenode, datanodes, and so on.

That part is simple and convenient. But if, after everything is configured and started, you go back and change settings on some nodes (mapreduce options, the various Tracker roles, and the like), things can break so badly that they never start again. I ran into such problems but didn't dig into them,

and some I couldn't solve at all short of swapping in a different machine (besides Hadoop's many install dependencies and settings, the server's own environment is complex too, which is why the official docs recommend using clean machines).
