2020-05-15
The main idea is to use the envsubst command to substitute variables and generate a new configuration file, combined with passing the variables in on the docker command line.
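As a quick, self-contained illustration of what envsubst does (the file names greeting.template and greeting.txt are made up for this example): it reads text from stdin, replaces each ${VAR} reference with the value of that environment variable, and writes the result to stdout:

echo 'hello ${NAME}' > greeting.template    # single quotes keep the shell from expanding ${NAME} here
NAME=world envsubst < greeting.template > greeting.txt
cat greeting.txt                            # prints: hello world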
In this experiment we build a flume image whose configuration file is rewritten dynamically when the container starts (docker run); data is then sent to flume, which forwards it to kafka.
Prerequisites for this experiment:
being able to write a Dockerfile and being familiar with the flume and kafka services.
Steps:
1. The Dockerfile is as follows:
FROM centos
WORKDIR /opt
COPY jdk-8u241-linux-x64.rpm /root
RUN rpm -i /root/jdk-8u241-linux-x64.rpm   # install the JDK
RUN yum install -y gettext                 # gettext provides the envsubst command
COPY flume ./flume/
EXPOSE 5140/udp
VOLUME /tmp/logs/
CMD envsubst < /opt/flume/conf/flume.conf.template > /opt/flume/conf/flume.conf && ./flume/bin/flume-ng agent -c ./flume/conf/ -f ./flume/conf/flume.conf --name agent -Dflume.root.logger=INFO,console

In the CMD, envsubst substitutes the variables in flume.conf.template and writes the result to flume.conf before flume-ng starts.
The flume configuration template (flume.conf.template) is as follows:
agent.sources = s1
agent.sinks = k1
agent.channels = c1

agent.sources.s1.type = syslogudp
agent.sources.s1.port = 5140
agent.sources.s1.host = ${BIND_IP}

agent.channels.c1.type = memory
agent.channels.c1.capacity = 100000
agent.channels.c1.transactionCapacity = 5000

agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.kafka.topic = log
agent.sinks.k1.kafka.bootstrap.servers = ${KAFKA_IP}:${KAFKA_PORT}
agent.sinks.k1.flumeBatchSize = 200
agent.sinks.k1.kafka.producer.acks = 1
agent.sinks.k1.kafka.producer.linger.ms = 1
agent.sinks.k1.kafka.producer.compression.type = snappy

agent.sources.s1.channels = c1
agent.sinks.k1.channel = c1
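For reference, if the container is started with the environment variables used in step 3 below (KAFKA_IP=192.168.174.128, KAFKA_PORT=9092, BIND_IP=0.0.0.0), the placeholder lines in the generated flume.conf should come out as:

agent.sources.s1.host = 0.0.0.0
agent.sinks.k1.kafka.bootstrap.servers = 192.168.174.128:9092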
2. Build the image:
docker build /root/flume/dockerfile/ -t flume_sink_file:v1
Note: the /root/flume/dockerfile/ directory must contain the flume directory, the jdk-8u241-linux-x64.rpm package, and the Dockerfile; a sketch of the expected layout follows.
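Roughly, the build context would look like this (assuming flume is an unpacked Apache Flume binary distribution; the exact contents depend on your flume version):

/root/flume/dockerfile/
├── Dockerfile                     # the Dockerfile from step 1
├── jdk-8u241-linux-x64.rpm        # JDK package copied into the image
└── flume/                         # unpacked Apache Flume distribution
    ├── bin/flume-ng
    └── conf/flume.conf.template   # the template shown above

If the file is actually named dockerfile in lower case, point the build at it explicitly, e.g. docker build -f /root/flume/dockerfile/dockerfile -t flume_sink_file:v1 /root/flume/dockerfile/, since docker build looks for a file named Dockerfile by default.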
3. Start the container:
docker run -d --rm -P -e KAFKA_IP="192.168.174.128" -e KAFKA_PORT="9092" -e BIND_IP="0.0.0.0" --name=abcd flume_sink_file:v1
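Before sending data it is worth confirming that the substitution worked and finding out which host port -P mapped 5140/udp to; for example (abcd is the container name from the command above, and the port shown is only an example value):

docker exec abcd cat /opt/flume/conf/flume.conf   # the ${...} placeholders should now hold real values
docker logs abcd                                  # flume startup log; the kafka sink should start without errors
docker port abcd 5140/udp                         # prints the host port that -P assigned, e.g. 0.0.0.0:32768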
4. Finally, send data to flume and check whether it shows up on the kafka topic log, as sketched below.
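A minimal sketch of such a test, run on the docker host (the host port 32768 is just the example value reported by docker port above, and the kafka-console-consumer.sh path assumes you run it from your Kafka installation directory):

# send one syslog-style message over UDP to the port mapped to the container's 5140/udp
echo '<13>test: hello flume' | nc -u -w1 127.0.0.1 32768

# consume the log topic on the Kafka broker to verify the message arrived
bin/kafka-console-consumer.sh --bootstrap-server 192.168.174.128:9092 --topic log --from-beginning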