Logstash Basics 12


Multi-line Log Events

Logs such as the MySQL slow log are not emitted one line at a time; a single entry spans multiple lines.

Logstash can handle these as well, although this capability is still fairly limited at the moment.

The configuration is as follows:

[root@h102 etc]# cat logstash-multiline.conf
input {
  stdin {
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => previous
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
[root@h102 etc]# time /opt/logstash/bin/logstash -f logstash-multiline.conf -t 
Configuration OK

real	0m18.807s
user	0m30.841s
sys	0m2.290s
[root@h102 etc]# 
  • pattern is the regular expression to match
  • negate inverts the match; it can only be true or false, and defaults to false, meaning no inversion
  • what controls how a matching line is handled; it can only be previous or next. With previous, a line matching the pattern belongs to the preceding event; with next, it belongs to the following event

With the configuration above, any line that does not start with # User@Host: is treated as part of the preceding event.
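
In practice the slow log would normally be read from disk rather than pasted into stdin. Below is a minimal sketch of the same multiline codec attached to a file input; the path /var/log/mysql/mysql-slow.log is an assumption and should be replaced with the actual slow-log location:

input {
  file {
    # assumed path -- adjust to wherever the MySQL slow log actually lives
    path => "/var/log/mysql/mysql-slow.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => "previous"
    }
  }
}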

Start Logstash and run a test:

[root@h102 etc]# time /opt/logstash/bin/logstash -f logstash-multiline.conf 
Settings: Default filter workers: 1
Logstash startup completed
# Time: 150710 16:37:53
# User@Host: root[root] @ localhost []
{
    "@timestamp" => "2016-01-05T14:01:57.953Z",
       "message" => "# Time: 150710 16:37:53",
      "@version" => "1",
          "host" => "h102.temp"
}
# Thread_id: 113  Schema: mysqlslap  Last_errno: 0  Killed: 0
# Query_time: 1.134132  Lock_time: 0.000029  Rows_sent: 1  Rows_examined: 1  Rows_affected: 0  Rows_read: 1
# Bytes_sent: 2168
SET timestamp=1436517473;
SELECT intcol1,intcol2,intcol3,intcol4,intcol5,charcol1,charcol2,charcol3,charcol4,charcol5,charcol6,charcol7,charcol8,charcol9,charco
l10 FROM t1 WHERE id =  '31';
# User@Host: root[root] @ localhost []
{
    "@timestamp" => "2016-01-05T14:02:03.773Z",
       "message" => "# User@Host: root[root] @ localhost []n# Thread_id: 113  Schema: mysqlslap  Last_errno: 0  Killed: 0n# Query_time: 1.134132  Lock_time: 0.000029  Rows_sent: 1  Rows_examined: 1  Rows_affected: 0  Rows_read: 1n# Bytes_sent: 2168nSET timestamp=1436517473;nSELECT intcol1,intcol2,intcol3,intcol4,intcol5,charcol1,charcol2,charcol3,charcol4,charcol5,charcol6,charcol7,charcol8,charcol9,charconl10 FROM t1 WHERE id =  '31';",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "h102.temp"
}
# Thread_id: 110  Schema: mysqlslap  Last_errno: 0  Killed: 0
# Query_time: 1.385901  Lock_time: 0.000037  Rows_sent: 1  Rows_examined: 1  Rows_affected: 0  Rows_read: 1
# Bytes_sent: 2167
SET timestamp=1436517473;
SELECT intcol1,intcol2,intcol3,intcol4,intcol5,charcol1,charcol2,charcol3,charcol4,charcol5,charcol6,charcol7,charcol8,charcol9,charco
l10 FROM t1 WHERE id =  '43';
# User@Host: root[root] @ localhost []
{
    "@timestamp" => "2016-01-05T14:02:51.114Z",
       "message" => "# User@Host: root[root] @ localhost []n# Thread_id: 110  Schema: mysqlslap  Last_errno: 0  Killed: 0n# Query_time: 1.385901  Lock_time: 0.000037  Rows_sent: 1  Rows_examined: 1  Rows_affected: 0  Rows_read: 1n# Bytes_sent: 2167nSET timestamp=1436517473;nSELECT intcol1,intcol2,intcol3,intcol4,intcol5,charcol1,charcol2,charcol3,charcol4,charcol5,charcol6,charcol7,charcol8,charcol9,charconl10 FROM t1 WHERE id =  '43';",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "h102.temp"
}

Notice that until a # User@Host: line is entered, every line is pushed onto a buffer; once that line arrives, the buffered lines are flushed as one complete event, and Logstash again waits for new input until the next # User@Host: shows up.
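
The assembled multi-line event is still just one unstructured message field. A minimal sketch of a grok filter that could be added to the filter section of the config above to pull out a few numeric fields is shown below; the field names query_time, lock_time, rows_sent and rows_examined are illustrative, not part of the original setup:

filter {
  grok {
    # matches the slow-log line
    # "# Query_time: ...  Lock_time: ...  Rows_sent: ...  Rows_examined: ..."
    match => { "message" => "Query_time: %{NUMBER:query_time:float}  Lock_time: %{NUMBER:lock_time:float}  Rows_sent: %{NUMBER:rows_sent:int}  Rows_examined: %{NUMBER:rows_examined:int}" }
  }
}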

Tip: There is currently no good way to deal with lines such as # Time: 150710 16:37:53; they end up counted as part of the previous event.
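
One possible workaround, not covered in the original setup and untested here, is to let the codec treat # Time: lines as event boundaries as well; they then come out as small standalone events that a conditional can drop:

input {
  stdin {
    codec => multiline {
      # also break on "# Time:" so it no longer sticks to the previous event
      pattern => "^# (Time|User@Host):"
      negate => true
      what => "previous"
    }
  }
}

filter {
  # drop the standalone "# Time: ..." events produced by the extra boundary
  if [message] =~ /^# Time:/ {
    drop { }
  }
}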


Command Summary

  • java -version
  • /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
  • cat first-pipeline.conf
  • /opt/logstash/bin/logstash -f first-pipeline.conf -t
  • /opt/logstash/bin/logstash -f first-pipeline.conf
  • curl -XGET 'localhost:9200/logstash-2015.12.23/_search?q=response=404'
  • curl -XGET 'localhost:9200/logstash-2015.12.23/_search?q=response=304&pretty'
  • curl -XGET 'localhost:9200/logstash-2015.12.23/_search?q=geoip.city_name=Buffalo&pretty'
  • grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
  • /etc/init.d/filebeat start
  • /etc/init.d/filebeat status
  • cat logstash-filebeat-es-simple.conf
  • /opt/logstash/bin/logstash -f logstash-filebeat-es-simple.conf
  • curl localhost:9200/_cat/indices?v
  • curl -XGET 'localhost:9200/filebeat-2016.01.05/_search?q=message=2935&pretty'
  • cat logstash-syslog.conf
  • /opt/logstash/bin/logstash -f logstash-syslog.conf
  • telnet localhost 5000
  • curl -XGET 'localhost:9200/logstash-2016.12.23/_search?q=message=louis&pretty'
  • cat logstash-multiline.conf
  • time /opt/logstash/bin/logstash -f logstash-multiline.conf
