In this post I'd like to share how to collect logs from a specified network port and deliver them to both console output and HDFS. I hope you get something out of it; let's dig in!
Requirement 1:
Collect logs from a specified network port and deliver them to both console output and HDFS.
Load-distribution strategies for sink groups
Failover: each sink can be assigned a priority; the higher the number, the higher the priority, so all events go to the highest-priority sink that is still alive.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000
Load balancing: round-robin across all sinks
a1.sinkgroups.g1.processor.type = load_balance
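The load_balance processor also takes an optional selector and backoff setting (both documented in the Flume user guide). A minimal sketch with illustrative values:

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
# selector can be round_robin (the default) or random
a1.sinkgroups.g1.processor.selector = round_robin
# temporarily blacklist a failing sink instead of retrying it immediately
a1.sinkgroups.g1.processor.backoff = true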
# Collect logs from a specified network port to console output and HDFS
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444

# Describe the sinks
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinks.k1.type = logger
a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://192.168.0.129:9000/user/hadoop/flume
a1.sinks.k2.hdfs.batchSize = 10
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sinks to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
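To try this out, save the configuration to a file and start the agent with flume-ng; the file name load_balance.conf and its location below are assumptions for this example. Events can then be fed to the netcat source with telnet (this is how the sample events in the logs below were produced):

# start the agent; the -D flag makes the logger sink print to the console
flume-ng agent \
  --name a1 \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/load_balance.conf \
  -Dflume.root.logger=INFO,console

# from another terminal, connect and type test lines, e.g. "zourc ok"
telnet 192.168.0.129 44444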
Check the logger output:
2018-08-10 18:58:39,659 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:169)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:44444]
2018-08-10 18:59:17,723 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 7A 6F 75 72 63 20 6F 6B 0D zourc ok. }
2018-08-10 19:00:35,744 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 61 73 64 66 0D asdf. }
2018-08-10 19:00:35,774 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2018-08-10 19:00:36,086 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://192.168.0.129:9000/user/hadoop/flume/FlumeData.1533942035775.tmp
Check the HDFS output:
[hadoop@hadoop001 flume]$ hdfs dfs -text hdfs://192.168.0.129:9000/user/hadoop/flume/*
18/08/10 19:14:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
zourc1
2
3
4
5
6
7
8
9
10
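A note on the FlumeData.<timestamp>.tmp file seen in the logger output: the HDFS sink writes into a .tmp file and renames it when the file rolls. With the default roll settings (rollCount = 10 events, rollSize = 1024 bytes, rollInterval = 30 s) this tends to produce many small files on HDFS. The standard properties below, from the Flume user guide, tune that behaviour; the values are illustrative, not a recommendation:

# roll after 60 seconds of writing; 0 would disable time-based rolling
a1.sinks.k2.hdfs.rollInterval = 60
# roll when the file reaches ~128 MB; 0 would disable size-based rolling
a1.sinks.k2.hdfs.rollSize = 134217728
# 0 disables event-count-based rolling (the default is 10 events)
a1.sinks.k2.hdfs.rollCount = 0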
Having read this article, you should now have a working picture of how to collect logs from a specified network port and send them to both console output and HDFS. To learn more, follow the Yisu Cloud industry news channel. Thanks for reading!