Changing the Hadoop/HDFS log level

Published: 2020-06-19 04:26:20  Author: DearArvin

Description:

If a large directory is deleted and the namenode is immediately restarted, there are a lot of blocks that do not belong to any file. This results in log lines like:

2014-11-08 03:11:45,584 INFO BlockStateChange (BlockManager.java:processReport(1901)) - BLOCK* processReport: blk_1074250282_509532 on 172.31.44.17:1019 size 6 does not belong to any file.

This log is printed while the FSNamesystem lock is held, which can cause the namenode to take a long time to come out of safe mode.

One solution is to downgrade the logging level.
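If the level should stay lowered across namenode restarts, the logger can also be set in the Hadoop log4j configuration. A minimal sketch, assuming the stock log4j.properties under $HADOOP_CONF_DIR (path assumed; adjust for your distribution):

# Suppress INFO-level BlockStateChange messages such as the block report lines above
log4j.logger.BlockStateChange=WARN

The steps below instead change the level on a running namenode, without editing any file or restarting the process.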

Solution

tail -f /var/log/hadoop/hdfs/hdfs-namenode.log
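To gauge how badly the log is being flooded, the offending messages can be counted with grep (a sketch; the log path is the one assumed above and may differ between distributions):

# Count how many "does not belong to any file" lines are already in the current namenode log
grep -c "does not belong to any file" /var/log/hadoop/hdfs/hdfs-namenode.log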

Open http://<namenode>:50070/logLevel in a browser.

Input "BlockStateChange" and Level is "WARN" and then click "Set Log Level" button

Wait 2-3 minutes; the change takes effect and namenode performance recovers.
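The same change can also be made from the command line with the hadoop daemonlog tool, which talks to the same /logLevel servlet. A sketch, assuming the namenode web UI address used above:

# Check the current level of the BlockStateChange logger (namenode HTTP port assumed to be 50070)
hadoop daemonlog -getlevel <namenode>:50070 BlockStateChange

# Lower it to WARN; this takes effect immediately on the running namenode
hadoop daemonlog -setlevel <namenode>:50070 BlockStateChange WARN

Either way, the new level lives only in memory. To keep it after a namenode restart, use the log4j.properties entry shown earlier.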

