flume-1.6.0 High-Availability Test && Sending Data into Kafka

Published: 2020-07-31 06:32:09  Author: xiaobin0303

Machine list:

192.168.137.115  slave0     (agent) 
192.168.137.116  slave1     (agent) 
192.168.137.117  slave2     (agent) 
192.168.137.118  slave3     (collector) 
192.168.137.119  slave4     (collector)
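
The configs below address machines by hostname (slave3, slave4, kafkahost), so every node needs these names resolvable, e.g. via /etc/hosts. A sketch based on the machine list above (the kafkahost entry is left as a comment because its address is not given in this article):

```
192.168.137.115  slave0
192.168.137.116  slave1
192.168.137.117  slave2
192.168.137.118  slave3
192.168.137.119  slave4
# plus an entry mapping kafkahost to your Kafka broker's address
```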


Create the working directories on every machine:

mkdir -p /home/qun/data/flume/logs

mkdir -p /home/qun/data/flume/data

mkdir -p /home/qun/data/flume/checkpoint


Download the Flume 1.6.0 release and unpack it:

wget 
tar -zxvf apache-flume-1.6.0-bin.tar.gz
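
The commands that follow reference $FLUME_HOME. A minimal sketch of the environment setup, assuming the tarball was unpacked under /home/qun (the path the start commands later in this article use):

```shell
# Point FLUME_HOME at the unpacked release and put flume-ng on the PATH.
export FLUME_HOME=/home/qun/apache-flume-1.6.0-bin
export PATH="$FLUME_HOME/bin:$PATH"
```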


Configure the collectors on slave3 and slave4

touch $FLUME_HOME/conf/server.conf

The contents are as follows (shown for slave3; on slave4, change a1.sources.r1.bind to slave4 and the interceptor value to SLAVE4):

a1.sources = r1
a1.channels = c1
a1.sinks = k1
#set channel
a1.channels.c1.type = file
a1.channels.c1.checkpointDir=/home/qun/data/flume/checkpoint
a1.channels.c1.dataDirs=/home/qun/data/flume/data
# avro source: receive events from the agents
a1.sources.r1.type = avro
a1.sources.r1.bind = slave3
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = SLAVE3
a1.sources.r1.channels = c1
#set sink to kafka
a1.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic=mytopic
a1.sinks.k1.brokerList=kafkahost:9092
a1.sinks.k1.requiredAcks=1
a1.sinks.k1.batchSize=100
a1.sinks.k1.channel=c1

 


Configure the agents on slave0, slave1, and slave2

touch $FLUME_HOME/conf/client.conf

The contents are as follows:

agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2
#set group
agent1.sinkgroups = g1 
#set channel
agent1.channels.c1.type = file
agent1.channels.c1.checkpointDir=/home/qun/data/flume/checkpoint
agent1.channels.c1.dataDirs=/home/qun/data/flume/data
agent1.sources.r1.channels = c1
agent1.sources.r1.type = spooldir
agent1.sources.r1.spoolDir=/home/qun/data/flume/logs
agent1.sources.r1.fileHeader = false
agent1.sources.r1.interceptors = i1 i2
agent1.sources.r1.interceptors.i1.type = static
agent1.sources.r1.interceptors.i1.key = Type
agent1.sources.r1.interceptors.i1.value = LOGIN
agent1.sources.r1.interceptors.i2.type = timestamp
# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = slave3
agent1.sinks.k1.port = 52020
# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = slave4
agent1.sinks.k2.port = 52020
#set sink group
agent1.sinkgroups.g1.sinks = k1 k2
#set failover
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000


Start the collectors on slave3 and slave4:

flume-ng agent -n a1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/server.conf -Dflume.root.logger=DEBUG,console


Start the agents on slave0, slave1, and slave2:

flume-ng agent -n agent1 -c conf -f /home/qun/apache-flume-1.6.0-bin/conf/client.conf -Dflume.root.logger=DEBUG,console


Functional test

Drop a test line into the spooling directory on one of the agents:


echo "hello flume">>/home/qun/data/flume/logs/test.txt
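
To confirm the event actually reached Kafka, you can tail the topic with the console consumer that ships with 0.8-era Kafka (the ZooKeeper address below is an assumption, and with default broker settings mytopic is auto-created on the first write):

```shell
# Guarded so this only runs where the Kafka CLI is installed.
if command -v kafka-console-consumer.sh >/dev/null 2>&1; then
    # The 0.8-era console consumer tracks offsets via ZooKeeper.
    kafka-console-consumer.sh --zookeeper kafkahost:2181 \
        --topic mytopic --from-beginning
else
    echo "kafka-console-consumer.sh not on PATH; run this on a Kafka host"
fi
```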

Collector slave3 receives the event from the agent; its log shows:

16/05/26 12:44:24 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:44:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464235734894, queueSize: 0, queueHead: 0
16/05/26 12:44:24 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 786 logWriteOrderID: 1464235734894
16/05/26 12:44:24 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1
16/05/26 12:44:24 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1.meta
16/05/26 12:44:54 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:44:54 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464235734901, queueSize: 0, queueHead: 0
16/05/26 12:44:54 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 1179 logWriteOrderID: 1464235734901



Testing collector failover

Because k1 (slave3) has priority 10 and k2 (slave4) priority 1, the agents normally deliver everything to slave3; killing it should shift traffic to slave4, while the dead sink is retried with a backoff capped at maxpenalty (10000 ms). Kill the Flume process on slave3 with kill -9 pid.
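
A sketch of finding and killing the collector's PID (the match pattern is an assumption: every Flume JVM runs the org.apache.flume.node.Application main class, so on a box running more than one Flume process you would need to match more narrowly):

```shell
# Locate the Flume JVM's PID and kill it hard to simulate a crash.
PID=$(ps -ef | awk '/org\.apache\.flume\.node\.Application/ && !/awk/ {print $2; exit}')
if [ -n "$PID" ]; then
    kill -9 "$PID"
    echo "killed flume pid $PID"
else
    echo "no flume process found"
fi
```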


echo "hello flume">>/home/qun/data/flume/logs/test.txt
Collector slave4 takes over and receives the event; its log shows the event plus the Kafka producer activity:
16/05/26 12:08:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:08:27 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464234987484, queueSize: 0, queueHead: 0
16/05/26 12:08:27 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 393 logWriteOrderID: 1464234987484
16/05/26 12:08:27 INFO file.LogFile: Closing RandomReader /home/qun/data/flume/data/log-1
16/05/26 12:54:38 INFO client.ClientUtils$: Fetching metadata from broker id:0,host:xiaobin,port:9092 with correlation id 4 for 1 topic(s) Set(mytopic)
16/05/26 12:54:38 INFO producer.SyncProducer: Connected to xiaobin:9092 for producing
16/05/26 12:54:38 INFO producer.SyncProducer: Disconnecting from xiaobin:9092
16/05/26 12:54:38 INFO producer.SyncProducer: Disconnecting from xiaobin:9092
16/05/26 12:54:38 INFO producer.SyncProducer: Connected to xiaobin:9092 for producing
16/05/26 12:54:57 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/qun/data/flume/checkpoint/checkpoint, elements to sync = 2
16/05/26 12:54:57 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1464234987491, queueSize: 0, queueHead: 0
16/05/26 12:54:57 INFO file.Log: Updated checkpoint for file: /home/qun/data/flume/data/log-3 position: 786 logWriteOrderID: 1464234987491
16/05/26 12:54:57 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1
16/05/26 12:54:57 INFO file.Log: Removing old file: /home/qun/data/flume/data/log-1.meta


More to come...

