Analyzing and visualizing CSV logs with Elastic Stack instead of Excel (docker-compose) -- Receiving input from multiple Beats with Logstash Pipeline-to-Pipeline

Introduction

Hello! I'm an engineer in charge of the product inspection process in the production engineering department. This article is a continuation of the Analyzing and visualizing CSV logs with Elastic Stack instead of Excel (docker-compose) series.

Target audience

This article is intended for readers who are new to the Elastic Stack and are thinking about trying it out.

Content of this article

Logstash accepts input from multiple Beats on a single port (5044) and forwards each event to its own pipeline using the Pipeline-to-Pipeline feature.

I have put the complete set of configuration files in GitLab, so please refer to them. The repository is here: elastic-stack

Pipeline-to-Pipeline settings

Prepare a beats-server pipeline that listens on port 5044. Events are branched on the [source] field (filebeat or metricbeat) and forwarded to the corresponding pipeline. The official documentation is here. I branch on a custom source field, but branching on the type field, as the official example does, would also work; a sketch of that variant follows the config below.

logstash/config/pipelines.yml


- pipeline.id: beats-server
  config.string: |
    # Accept all Beats traffic on one port and route each event to the
    # matching pipeline based on the custom [source] field set by the Beat
    input { beats { port => 5044 } }
    output {
        if [source] == 'filebeat' {
          pipeline { send_to => filebeatlog }
        } else if [source] == 'metricbeat' {
          pipeline { send_to => metricbeatlog }
        }
    }

- pipeline.id: filebeat-processing
  path.config: "/usr/share/logstash/pipeline/{input/filebeat_in,filter/filebeat_filter,output/filebeat_out}.cfg"
  pipeline.batch.size: 50
  pipeline.batch.delay: 50

- pipeline.id: metricbeat-processing
  path.config: "/usr/share/logstash/pipeline/{input/metricbeat_in,filter/metricbeat_filter,output/metricbeat_out}.cfg"
  pipeline.batch.size: 50
  pipeline.batch.delay: 50
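
As mentioned above, the official documentation branches on the [type] field rather than on a custom source field. A minimal sketch of what the beats-server output would look like in that style (assuming each Beat sets a top-level type field; this variant is not in my repository):

    # Same distributor pattern, but branching on [type] as in the official docs
    output {
        if [type] == 'filebeat' {
          pipeline { send_to => filebeatlog }
        } else if [type] == 'metricbeat' {
          pipeline { send_to => metricbeatlog }
        }
    }

Note that events matching neither condition are simply discarded at the output stage; the official example adds an else branch that sends them to a fallback pipeline.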

Input settings

Change the input that previously listened on port 5044 so that it receives events on a pipeline address instead.

logstash/pipeline/input/filebeat_in.cfg


input {
#  beats {
#    port => 5044
#  }

  # Receive events forwarded from the beats-server pipeline instead
  # of listening on port 5044 directly
  pipeline {
    address => filebeatlog
  }
}
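
The metricbeat side is configured the same way. A minimal sketch of the matching input, assuming it simply mirrors the filebeat one (see the repository for the actual file):

logstash/pipeline/input/metricbeat_in.cfg


input {
  # Receive events forwarded from the beats-server pipeline
  pipeline {
    address => metricbeatlog
  }
}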

Beats settings (different from the official method)

Set a custom source field so that Logstash can identify which Beat each event came from. (The official example sets a type field instead.) Setting fields_under_root: true makes Filebeat store the custom field at the top level of the event when it ships its output. Without this setting, the field ends up under [fields] and the routing conditional in the beats-server pipeline will not match.

beats/filebeat/config/filebeat.yml


  # Custom field that the beats-server pipeline uses to route events
  fields:
    source: 'filebeat'
  # Store the field at the top level of the event (as [source]) rather
  # than under [fields], so the Logstash conditional can match it
  fields_under_root: true
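
Metricbeat needs the equivalent setting so that its events arrive with source: 'metricbeat' at the top level. A minimal sketch, assuming it mirrors the Filebeat configuration:

beats/metricbeat/config/metricbeat.yml


  # Custom field used by the beats-server pipeline for routing
  fields:
    source: 'metricbeat'
  fields_under_root: true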

Finally

Now you can build more complex pipelines as well. In future articles, I would like to cover Metricbeat and other Beats.
