liulin0 2019-06-28
ELK stands for Elasticsearch, Logstash, and Kibana. Together they form a complete log-management solution.
yum install wget -y
yum install zip unzip
Download the JDK from the official site:
jdk-8u181-linux-x64.rpm
or run:
wget http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm?AuthParam=1532483945_1ce1c40fee9c74cdfb2c8c33ba817e88
After the download finishes, rename the rpm package:
mv jdk-8u181-linux-x64.rpm?AuthParam=1532483945_1ce1c40fee9c74cdfb2c8c33ba817e88 jdk-8u181-linux-x64.rpm
rpm -ivh jdk-8u181-linux-x64.rpm
Once all the steps above are done, verify that the installation succeeded with the following commands:
java -version
echo $JAVA_HOME
wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.4.6/elasticsearch-2.4.6.zip
unzip elasticsearch-2.4.6.zip
To run Elasticsearch as root, add this flag at startup:
bin/elasticsearch -Des.insecure.allow.root=true
or edit bin/elasticsearch and add the ES_JAVA_OPTS property:
ES_JAVA_OPTS="-Des.insecure.allow.root=true"
# stop the firewall temporarily
systemctl stop firewalld
# disable it at boot
systemctl disable firewalld
cluster.name: mntx-cluster
node.name: node-1
network.host: 192.168.29.129
http.port: 9200
./elasticsearch-2.4.6/bin/elasticsearch
Run in the background:
./elasticsearch-2.4.6/bin/elasticsearch &
elasticsearch/bin/plugin install mobz/elasticsearch-head
Using the head plugin:
http://192.168.29.129:9200/_plugin/head/
wget https://download.elastic.co/logstash/logstash/logstash-2.4.1.zip
unzip logstash-2.4.1.zip
touch ./logstash-2.4.1/config/logstash-es.conf
input {
  tcp {
    port => 9601
    codec => json_lines
  }
}
output {
  elasticsearch {
    # address of the Elasticsearch instance to connect to
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}
Note that the indentation must follow the expected format.
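The json_lines codec above expects one JSON object per line, terminated by a newline, on TCP port 9601. As a rough sketch of what a client such as LogstashTcpSocketAppender sends over the wire (the field names follow the usual Logstash event convention; the socket send is left disabled here because it needs a running Logstash instance):

```java
import java.io.PrintWriter;
import java.net.Socket;
import java.time.Instant;

public class JsonLinesDemo {

    // Build one json_lines event: a single-line JSON object ending in '\n'.
    static String buildEvent(String message) {
        return "{\"@timestamp\":\"" + Instant.now() + "\","
             + "\"message\":\"" + message + "\","
             + "\"level\":\"INFO\"}\n";
    }

    public static void main(String[] args) throws Exception {
        String event = buildEvent("---test---0");
        System.out.print(event);

        // With Logstash listening on tcp port 9601 (see logstash-es.conf),
        // the same line could be delivered like this:
        boolean logstashRunning = false; // flip to true against a live setup
        if (logstashRunning) {
            try (Socket s = new Socket("localhost", 9601);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.print(event);
                out.flush();
            }
        }
    }
}
```

Any line sent this way shows up in the rubydebug output of the stdout plugin, which is a quick way to confirm the pipeline works before wiring up the real application.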
./bin/logstash-plugin install logstash-codec-json_lines
./bin/logstash -f ./config/logstash-es.conf      # run in the foreground
./bin/logstash -f ./config/logstash-es.conf &    # run in the background
ps -ef | grep logstash    # find the background process
kill -9 4617              # 4617 is the pid found above
wget https://download.elastic.co/kibana/kibana/kibana-4.6.6-linux-x86_64.tar.gz
tar -xzvf kibana-4.6.6-linux-x86_64.tar.gz
vi ./config/kibana.yml
server.port: 5601                            # service port
server.host: "localhost"                     # server ip (this machine)
elasticsearch.url: "http://localhost:9200"   # Elasticsearch address; must match the Elasticsearch instance
./bin/kibana      # run in the foreground
./bin/kibana &    # run in the background
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- logstash ip:listen-port; the tcpAppender could be replaced with e.g. a kafka transport -->
        <destination>192.168.253.6:9601</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <!-- pull in Spring Boot's default logback configuration -->
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <root level="INFO">
        <!-- ship log events to logstash over tcp via the appender above -->
        <appender-ref ref="LOGSTASH" />
        <!-- CONSOLE comes from the Spring Boot defaults; keeps console output for debugging -->
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
Test with SpringbootLogbackApplication.java, which writes 100 log entries in a loop:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringbootLogbackApplication {

    private final static Logger logger = LoggerFactory.getLogger(SpringbootLogbackApplication.class);

    public static void main(String[] args) {
        new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                logger.info("---test---" + i);
            }
        }).start();
        SpringApplication.run(SpringbootLogbackApplication.class, args);
    }
}
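After the application has run, the events should be searchable in Elasticsearch (and visible in Kibana). A hypothetical sketch of checking this from Java, assuming the default logstash-* index pattern and the local setup described above; the HTTP call is guarded since it needs a live cluster:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class VerifyLogs {

    // Query-string form of a search for the test messages.
    static String searchUrl(String host) {
        return "http://" + host + "/logstash-*/_search?q=message:test";
    }

    public static void main(String[] args) throws Exception {
        String url = searchUrl("localhost:9200");
        System.out.println(url);

        boolean clusterRunning = false; // flip to true against a live setup
        if (clusterRunning) {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON; hits.total should reach 100
                }
            }
        }
    }
}
```

The same check can be done from the shell with curl against the printed URL, or by pointing the Kibana Discover view at the logstash-* index pattern.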