Installing ZooKeeper, Hadoop 2.6, and HBase 1.0 in VirtualBox on a Mac (installing Hadoop)

All of Hadoop 2's configuration files live under $HADOOP_HOME/etc/hadoop.
1 Edit hadoop-env.sh; only the JAVA_HOME variable needs to be set:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home
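
Before moving on it is worth confirming that this path actually resolves to a JDK; a minimal check (the JDK location is the one from this setup, adjust to your machine):

/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/bin/java -version
# should print the 1.7.0_45 version banner if the path in hadoop-env.sh is correct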

**************************************************************************************
2 Edit core-site.xml

<configuration>
  <!-- default filesystem: the HA nameservice defined in hdfs-site.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- base directory for HDFS metadata and other temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
  <!-- ZooKeeper quorum used for automatic NameNode failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>vm1:2181,vm2:2181</value>
  </property>
</configuration>
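
Hadoop will create the directories it needs under hadoop.tmp.dir, but it does no harm to create the base directory up front on every node (a small convenience, not strictly required):

mkdir -p /usr/local/hadoop/tmp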

**************************************************************************************
3 Edit hdfs-site.xml

<configuration>
  <!-- logical name of the HA nameservice; must match fs.defaultFS -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>

  <!-- the two NameNodes that make up the nameservice -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>

  <!-- RPC and HTTP addresses of each NameNode -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>vm1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>vm1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>vm2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>vm2:50070</value>
  </property>

  <!-- where the NameNodes read/write the shared edit log (JournalNode on vm1) -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://vm1:8485/ns1</value>
  </property>

  <!-- local directory where the JournalNode stores the edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop/journal</value>
  </property>

  <!-- let ZKFC fail over automatically instead of requiring manual haadmin calls -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- how HDFS clients locate the currently active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- fencing: try sshfence first; shell(/bin/true) always succeeds, so
       failover can still proceed if the old active machine is unreachable -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>

  <!-- private key sshfence uses to log into the other NameNode -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/xyu/.ssh/id_rsa</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
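
sshfence only works if each NameNode can reach the other over passwordless SSH with the key configured above. A quick sanity check from vm1 (and the mirror-image check from vm2):

ssh -i /home/xyu/.ssh/id_rsa -o ConnectTimeout=30 vm2 hostname
# should print "vm2" without prompting for a password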

**************************************************************************************
4 Edit mapred-site.xml

<configuration>
  <!-- run MapReduce jobs on YARN rather than the classic JobTracker -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
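
In the Hadoop 2.6 tarball this file ships only as a template, so create it first if it does not already exist:

cd $HADOOP_HOME/etc/hadoop
cp mapred-site.xml.template mapred-site.xml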

**************************************************************************************
5 Edit yarn-site.xml

<configuration>
  <!-- enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <!-- identifies this cluster in ZooKeeper -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>

  <!-- logical IDs of the two ResourceManagers -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <!-- which host runs which ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>vm1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>vm2</value>
  </property>

  <!-- ZooKeeper ensemble used for ResourceManager leader election -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>mac:2181,vm1:2181,vm2:2181</value>
  </property>
  <!-- shuffle service required by MapReduce on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
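
Both NameNode failover (ha.zookeeper.quorum) and ResourceManager failover (yarn.resourcemanager.zk-address) depend on ZooKeeper, so make sure the ensemble is up before starting anything. On each ZooKeeper host:

zkServer.sh status
# across the ensemble, expect one "leader" and the rest "follower"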

**************************************************************************************
6 Edit slaves
mac
vm1
vm2

Copy the configured Hadoop directory to the other nodes:
scp -r hadoop vm1:/usr/local/hadoop
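
That covers vm1 only; slaves also lists vm2, so the same copy has to reach every node. A small loop handles both VMs (assuming Hadoop lives at /usr/local/hadoop on each machine, as the config paths above suggest):

for h in vm1 vm2; do
  scp -r /usr/local/hadoop $h:/usr/local/
done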

Start the JournalNode on every host listed in dfs.namenode.shared.edits.dir (here only vm1); it must be running before the NameNode is formatted:
hadoop-daemon.sh start journalnode
Run jps to verify; one extra process should appear:
JournalNode

Format the NameNode:
hdfs namenode -format
Copy the freshly formatted metadata to the other NameNode host so both start from the same namespace (the destination must match hadoop.tmp.dir from core-site.xml):
scp -r tmp vm1:/usr/local/hadoop/
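
Hand-copying works, but Hadoop 2.x also ships a command for this step. As a sketch of the alternative: start the formatted NameNode first (hadoop-daemon.sh start namenode), then on the standby run:

hdfs namenode -bootstrapStandby
# pulls the current namespace from the running active NameNode,
# so no manual scp of the tmp directory is needed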

Format the HA state in ZooKeeper (run once, on one NameNode):
hdfs zkfc -formatZK
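
To confirm it worked: the command creates a znode for the nameservice under /hadoop-ha, which the ZooKeeper CLI can show:

zkCli.sh -server vm1:2181
# then, at the zk prompt:
ls /hadoop-ha
# expected output: [ns1]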

Start HDFS:
sbin/start-dfs.sh
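
Once the daemons are up, you can ask which NameNode won the election:

hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
hdfs haadmin -getServiceState nn2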

Start YARN:
start-yarn.sh
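
A quirk of the Hadoop 2.x scripts: start-yarn.sh only starts the ResourceManager on the machine it is run from, so the standby RM has to be started by hand; its state can then be checked:

yarn-daemon.sh start resourcemanager   # run this on vm2
yarn rmadmin -getServiceState rm1      # prints "active" or "standby"
yarn rmadmin -getServiceState rm2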

Configuration is done; the HDFS web UI can now be opened in a browser:
http://192.168.56.8:50070

The YARN web UI is at:
http://192.168.56.8:8088

Depending on its role in the configuration, each machine will show a different set of daemons in jps.
