| Linux host IP address | Original hostname | New hostname |
| 10.137.169.148 | bigdata-01 | Master (namenode) |
| 10.137.169.149 | bigdata-02 | Slave1 (datanode) |
| 10.137.169.150 | bigdata-03 | Slave2 (datanode) |
bigdata-01:~ # hostname master
bigdata-02:~ # hostname slave1
bigdata-03:~ # hostname slave2
master:~ # chmod a=rwx /tmp
master:~ # chmod a=rwx /tmp/*
master:~ # vi /etc/hosts
## hosts         This file describes a number of hostname-to-address
#                mappings for the TCP/IP subsystem.  It is mostly
#                used at boot time, when no name servers are running.
#                On small systems, this file can be used instead of a
#                "named" name server.
# Syntax:
#
# IP-Address   Full-Qualified-Hostname   Short-Hostname
127.0.0.1       localhost
10.137.169.148  master
10.137.169.149  slave1
10.137.169.150  slave2
# special IPv6 addresses
::1             localhost ipv6-localhost ipv6-loopback
fe00::0         ipv6-localnet
ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts
slave1:~ # vi /etc/hosts
slave2:~ # vi /etc/hosts
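The same three entries can be appended without opening vi, which helps when repeating the edit on all three nodes. A minimal sketch; the file name `hosts.cluster` is a stand-in for `/etc/hosts` so the snippet can be tried on a staging copy first:

```shell
# Stand-in file name; on a real node write to /etc/hosts instead.
HOSTS_FILE=hosts.cluster
cat >> "$HOSTS_FILE" <<'EOF'
10.137.169.148 master
10.137.169.149 slave1
10.137.169.150 slave2
EOF
```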
master:~ # useradd -m -d /home/hadoop -s /bin/bash hadoop
master:~ # chmod -R a+rwx /home/hadoop
master:~ # passwd hadoop
Changing password for hadoop.
New Password:
Password changed.
master:~ # su hadoop
hadoop@master:/> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):   // press Enter
Enter passphrase (empty for no passphrase):                       // press Enter
Enter same passphrase again:                                      // press Enter
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
a9:4d:a6:2b:bf:09:8c:b2:30:aa:c1:05:be:0a:27:09 hadoop@bigdata-01
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|       .         |
|.   . .          |
|E. . S           |
|o.oo *           |
|Ooo o o .        |
|=B .. o          |
|* o=.            |
+-----------------+
// append id_dsa.pub to authorized_keys
hadoop@master:/> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh-keygen -t dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
// Append each slave node's public key to the master's authorized_keys.
// Run the command once per slave node; slave1 and slave2 are the node hostnames.
hadoop@master:/> ssh slave1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'slave1 (10.137.169.149)' can't be established.
RSA key fingerprint is 0f:5d:31:ba:dc:7a:84:15:6a:aa:20:a1:85:ec:c8:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,10.137.169.149' (RSA) to the list of known hosts.
Password:   // enter the hadoop user's password set earlier
hadoop@master:/> ssh slave2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'slave2 (10.137.169.150)' can't be established.
RSA key fingerprint is 0f:5d:31:ba:dc:7a:84:15:6a:aa:20:a1:85:ec:c8:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,10.137.169.150' (RSA) to the list of known hosts.
Password:   // enter the hadoop user's password set earlier
// Copy the combined authorized_keys file back to every node; slave1 and slave2 are the node names.
hadoop@master:/> scp ~/.ssh/authorized_keys slave1:~/.ssh/authorized_keys
hadoop@master:/> scp ~/.ssh/authorized_keys slave2:~/.ssh/authorized_keys
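With more slaves, the per-node commands above get repetitive. A hedged sketch that writes the same sequence into a small helper script; the `SLAVES` list and the `sync_keys.sh` name are assumptions, not part of the original setup:

```shell
# Generate a helper script that repeats the key exchange for every slave.
# SLAVES and the script name are assumptions for this sketch.
SLAVES="slave1 slave2"
{
  echo '#!/bin/sh'
  for s in $SLAVES; do
    # ~ stays literal inside the double quotes, so it expands when the
    # generated script runs, not here.
    echo "ssh $s cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys"
  done
  for s in $SLAVES; do
    echo "scp ~/.ssh/authorized_keys $s:~/.ssh/authorized_keys"
  done
} > sync_keys.sh
chmod +x sync_keys.sh
```

Running `./sync_keys.sh` on the master then reproduces the steps above; each ssh/scp still prompts for the hadoop password until the keys are in place everywhere.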
hadoop@master:/> cd /home
hadoop@master:/home> chmod 755 hadoop
hadoop@master:/> ssh slave1
Last login: Wed Jul 31 00:13:58 2013 from bigdata-01
hadoop@slave1:~>
sftp> lcd
sftp> put D:/jdk-6u31-linux-x64.bin
// Keep the local path short and free of Chinese characters; the root of drive D: is recommended.
Uploading jdk-6u31-linux-x64.bin to /root/jdk-6u31-linux-x64.bin
100% 83576KB 6964KB/s 00:00:12
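The interactive sftp session can also be driven from a batch file, a standard OpenSSH `sftp -b` feature. A sketch; the batch file name `put_jdk.sftp` is an assumption:

```shell
# Write the sftp commands to a batch file instead of typing them interactively.
cat > put_jdk.sftp <<'EOF'
put D:/jdk-6u31-linux-x64.bin
EOF
# On the client you would then run:
#   sftp -b put_jdk.sftp root@10.137.169.148
```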
master:~ # mv jdk-6u31-linux-x64.bin /home/hadoop
master:/home/hadoop # chmod u+x jdk-6u31-linux-x64.bin
master:/home/hadoop # ./jdk-6u31-linux-x64.bin
# creates the directory /home/hadoop/jdk1.6.0_31
hadoop@master:~> chmod u+x hadoop-2.0.1.tar.gz
hadoop@master:~> tar -xvf hadoop-2.0.1.tar.gz
# creates the directory /home/hadoop/hadoop-2.0.1
hadoop@master:~> vi /home/hadoop/.profile
# append the following lines to the end of .profile
export JAVA_HOME=/home/hadoop/jdk1.6.0_31/
export HADOOP_HOME=/home/hadoop/hadoop-2.0.1
export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin
# (keep the PATH line as a single line when pasting it into .profile; do not let it wrap)
hadoop@master:~> source /home/hadoop/.profile   // make the environment variables take effect
master:/ # vi /etc/profile
# append the following lines to the end of /etc/profile
export JAVA_HOME=/home/hadoop/jdk1.6.0_31/
export HADOOP_HOME=/home/hadoop/hadoop-2.0.1
export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin
# (keep the PATH line as a single line when pasting it; do not let it wrap)
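After sourcing the profile it is worth confirming that the Hadoop binaries really landed on PATH. A small sanity-check sketch using the tutorial's own paths; the check itself is an addition, not one of the original steps:

```shell
# The same exports as in .profile / /etc/profile.
JAVA_HOME=/home/hadoop/jdk1.6.0_31/
HADOOP_HOME=/home/hadoop/hadoop-2.0.1
PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin

# Verify the Hadoop bin directory is on PATH.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing from PATH" ;;
esac
# prints "hadoop bin on PATH"
```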
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.0.1/hadoop_tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.137.169.148:9001</value>
  </property>
</configuration>
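The `hadoop.tmp.dir` named above should exist and be writable by the hadoop user; Hadoop can usually create it itself, but creating it up front avoids permission surprises. A sketch, with `hadoop_tmp_demo` as a local stand-in for the real `/home/hadoop/hadoop-2.0.1/hadoop_tmp`:

```shell
# Stand-in for /home/hadoop/hadoop-2.0.1/hadoop_tmp from core-site.xml above.
HDP_TMP=hadoop_tmp_demo
mkdir -p "$HDP_TMP"
chmod 755 "$HDP_TMP"
```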
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.shuffle.port</name>
    <value>8082</value>
  </property>
</configuration>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>10.137.169.148:8050</value>
  </property>
  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>10.137.169.148:8051</value>
  </property>
  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>10.137.169.148:8052</value>
  </property>
  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>10.137.169.148:8053</value>
  </property>
  <property>
    <description>Address where the localizer IPC is.</description>
    <name>yarn.nodemanager.localizer.address</name>
    <value>0.0.0.0:8054</value>
  </property>
  <!-- needed to run MapReduce jobs -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- the default directory /tmp/logs is not writable -->
  <property>
    <description>
      Where to store container logs. An application's localized log directory
      will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
      Individual containers' log directories will be below this, in directories
      named container_{$contid}. Each container directory will contain the
      files stderr, stdin, and syslog generated by that container.
    </description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/hadoop-2.0.1/logs</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/home/hadoop/hadoop-2.0.1/logs</value>
  </property>
  <property>
    <description>
      List of directories to store localized files in. An application's
      localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid},
      will be subdirectories of this.
    </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/hadoop-2.0.1/logs</value>
  </property>
</configuration>
hadoop@master:/> cd /home/hadoop
hadoop@master:~> vi hadoop-2.0.1/etc/hadoop/hadoop-env.sh
# add the following lines to the file
export JAVA_HOME=/home/hadoop/jdk1.6.0_31/
export HADOOP_HOME=/home/hadoop/hadoop-2.0.1
hadoop@master:/> cd /home/hadoop
hadoop@master:~> vi hadoop-2.0.1/etc/hadoop/slaves
# replace the file contents with the following
10.137.169.149
10.137.169.150
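The slaves file can also be written non-interactively, which is convenient when scripting the whole setup; `slaves` here is a local stand-in for `hadoop-2.0.1/etc/hadoop/slaves`:

```shell
# Stand-in path; the real file is hadoop-2.0.1/etc/hadoop/slaves.
cat > slaves <<'EOF'
10.137.169.149
10.137.169.150
EOF
```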
hadoop@master:/> cd /home/hadoop
// pack the master's hadoop-2.0.1 directory
hadoop@master:~> tar -zcvf hadoop.tar.gz hadoop-2.0.1
// copy the packed hadoop.tar.gz to each slave node
hadoop@master:~> scp /home/hadoop/hadoop.tar.gz hadoop@slave1:/home/hadoop
hadoop@master:~> scp /home/hadoop/hadoop.tar.gz hadoop@slave2:/home/hadoop
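Before unpacking on each slave it can be worth verifying that the archive survived the copy. A hedged sketch of the pack-and-checksum idea, using a tiny stand-in directory (`hadoop-2.0.1-demo`) instead of the real installation; `md5sum` is assumed to be available:

```shell
# Stand-in directory so the sketch is self-contained; on the cluster this
# would be /home/hadoop/hadoop-2.0.1.
mkdir -p hadoop-2.0.1-demo/etc/hadoop
echo 10.137.169.149 > hadoop-2.0.1-demo/etc/hadoop/slaves
tar -zcf hadoop-demo.tar.gz hadoop-2.0.1-demo

# Record a checksum next to the archive; copy both files to each slave and
# run `md5sum -c` there before extracting.
md5sum hadoop-demo.tar.gz > hadoop-demo.tar.gz.md5
md5sum -c hadoop-demo.tar.gz.md5
```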
hadoop@master:/> ssh hadoop@slave1
hadoop@slave1:/> cd /home/hadoop
hadoop@slave1:~> tar -zxvf hadoop.tar.gz
# creates the hadoop-2.0.1 directory under /home/hadoop; repeat on slave2.
# Installation is now complete.
hadoop@master:/> cd /home/hadoop/hadoop-2.0.1/bin
hadoop@master:~/hadoop-2.0.1/bin> ./hadoop namenode -format   // remember the leading ./
hadoop@master:/> start-all.sh
hadoop@master:/> jps
1443 ResourceManager
21112 NameNode
8569 Jps
hadoop@slave1:/> jps
4709 DataNode
4851 NodeManager
24923 Jps
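The jps checks above can be automated with a small function that scans the output for the expected daemon names; `check_daemons` and the daemon lists are assumptions for this sketch, not part of the stock tooling:

```shell
# Succeed only if every expected daemon name appears in the jps output.
check_daemons() {
  out=$1; shift
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# On the master you would call: check_daemons "$(jps)" ResourceManager NameNode
# On a slave:                   check_daemons "$(jps)" DataNode NodeManager
check_daemons "1443 ResourceManager
21112 NameNode
8569 Jps" ResourceManager NameNode
# prints "all daemons running"
```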