
Installing Hadoop

 BIGDATA云 2018-07-13
Hadoop version: hadoop-2.6.4.tar.gz
1. Extract the tarball:
]# tar -zxvf /opt/soft/hadoop-2.6.4.tar.gz -C /opt/
2. Rename the directory:
opt]# mv hadoop-2.6.4/ hadoop
3. Add the Hadoop commands to the environment variables:
vim /etc/profile.d/hadoop-eco.sh
Add the following lines:
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
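The step above can be sketched as a single snippet (assuming a root shell, matching the `]#` prompts used in this guide, and the /opt/hadoop install path from step 2):

```shell
# Write the environment file for Hadoop
cat > /etc/profile.d/hadoop-eco.sh <<'EOF'
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

# Load it into the current shell; new login shells pick it up automatically
source /etc/profile.d/hadoop-eco.sh
echo "HADOOP_HOME is $HADOOP_HOME"
```

Putting the snippet in /etc/profile.d/ rather than editing /etc/profile directly keeps the Hadoop settings in one removable file.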
4. Create the data storage directories:
1) NameNode data directory: /opt/hadoop-repo/name
2) SecondaryNameNode data directory: /opt/hadoop-repo/secondary
3) DataNode data directory: /opt/hadoop-repo/data
4) Temporary data directory: /opt/hadoop-repo/tmp
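The four directories above can be created in one go (again assuming a root shell):

```shell
# Create the NameNode, SecondaryNameNode, DataNode and temp directories
# referenced by the hdfs-site.xml / core-site.xml settings below
mkdir -p /opt/hadoop-repo/name \
         /opt/hadoop-repo/secondary \
         /opt/hadoop-repo/data \
         /opt/hadoop-repo/tmp
```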
5. Configure hadoop-env.sh, yarn-env.sh, hdfs-site.xml, core-site.xml, mapred-site.xml, and yarn-site.xml
1) Configure hadoop-env.sh:
export JAVA_HOME=/opt/jdk
2) Configure yarn-env.sh:
export JAVA_HOME=/opt/jdk
3) Configure hdfs-site.xml:
<configuration>
<property>  
<name>dfs.namenode.name.dir</name>  
<value>file:///opt/hadoop-repo/name</value>  
</property>
<property> 
<name>dfs.datanode.data.dir</name>  
<value>file:///opt/hadoop-repo/data</value>  
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///opt/hadoop-repo/secondary</value>
</property>
<!-- SecondaryNameNode HTTP address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>uplooking01:9001</value>
</property>
<!-- Number of block replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Allow HDFS access over WebHDFS -->
<property> 
<name>dfs.webhdfs.enabled</name>  
<value>true</value>  
</property>
<!-- Disable permission checking -->
<property>
<name>dfs.permissions</name>
<value>false</value>
</property> 
</configuration>
4) Configure core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://uplooking01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:///opt/hadoop-repo/tmp</value>
</property>
</configuration>
5) Configure mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property> 
<!-- Job history server address -->
<property>  
<name>mapreduce.jobhistory.address</name>  
<value>uplooking01:10020</value>  
</property>
<!-- Job history server web UI address -->
<property>  
<name>mapreduce.jobhistory.webapp.address</name>  
<value>uplooking01:19888</value>  
</property>
<property>
<name>mapreduce.map.log.level</name>
<value>INFO</value>
</property>
<property>
<name>mapreduce.reduce.log.level</name>
<value>INFO</value>
</property>
</configuration>
6) Configure yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>uplooking01</value>
</property> 
<property>  
<name>yarn.resourcemanager.address</name>  
<value>uplooking01:8032</value>  
</property>  
<property>  
<name>yarn.resourcemanager.scheduler.address</name>  
<value>uplooking01:8030</value>  
</property>  
<property>  
<name>yarn.resourcemanager.resource-tracker.address</name>  
<value>uplooking01:8031</value>  
</property>  
<property>  
<name>yarn.resourcemanager.admin.address</name>  
<value>uplooking01:8033</value>  
</property>
<property> 
<name>yarn.resourcemanager.webapp.address</name>  
<value>uplooking01:8088</value>  
</property>
<property>
<name>yarn.log-aggregation-enable</name>  
<value>true</value>  
</property>
</configuration>
Format the Hadoop file system:
hdfs namenode -format
The message "Storage directory /opt/hadoop-repo/name has been successfully formatted." indicates success;
otherwise formatting failed. If it failed, check the configuration files and format again. Before formatting
a second time, the data under the directory configured as dfs.namenode.name.dir must be cleared.
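As a concrete sketch, clearing out the old data before a second format looks like this (paths taken from the configuration above; clearing the DataNode and tmp directories as well avoids clusterID mismatches after a reformat):

```shell
# Remove old metadata so 'hdfs namenode -format' can run cleanly again;
# deleting only the contents keeps the directories themselves in place
rm -rf /opt/hadoop-repo/name/* \
       /opt/hadoop-repo/data/* \
       /opt/hadoop-repo/secondary/* \
       /opt/hadoop-repo/tmp/*
```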
Start Hadoop:
start-all.sh
which is equivalent to running:
start-dfs.sh
start-yarn.sh
After a successful start, the jps command (java process status) should list five processes:
NameNode
SecondaryNameNode
DataNode
ResourceManager
NodeManager
If you are prompted for a password during startup, passwordless SSH login has not been configured. To configure it:
ssh-keygen -t rsa
Press Enter at every prompt.
ssh-copy-id -i root@uplooking01
Enter the current machine's password when prompted.
Verify: ssh root@uplooking01 should no longer ask for a password.
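For scripted setups, the key can also be generated non-interactively. This sketch assumes OpenSSH is installed and, since the source and target are the same machine in this single-node guide, it appends the public key to authorized_keys directly, which is essentially what ssh-copy-id automates:

```shell
# Generate an RSA key pair with an empty passphrase, without prompting
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q

# Authorize the key for logins to this same host
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```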
Verification:
1. Run the following command:
hdfs dfs -ls /
2. Open http://uplooking01:50070 in a browser.
3. Verify MapReduce:
In the /opt/hadoop/share/hadoop/mapreduce directory, run (this assumes an input file /hello already exists in HDFS):
yarn jar hadoop-mapreduce-examples-2.6.4.jar wordcount /hello /out
Note:
If you need to format more than once, the directories created earlier under /opt/hadoop-repo/ must be
deleted and recreated before the second format will succeed.
