This is the article for day 23 of the Elasticsearch Advent Calendar 2020. I didn't want to keep doing the ~~primitive~~ manual work of pasting IIS logs into Excel, building a pivot table, and turning it into a pivot chart, so I set up the Elastic Stack instead. Analysis became much easier, and this article describes how I did it.
- Overview of each software
- Processing flow
- Building the Elastic Stack with Docker Compose
- Configuration
- Elastic Stack is the general term for the group of products consisting of Elasticsearch, Kibana, Beats, and Logstash.
- Beats are called data shippers and are used as data transfer tools.
  - They automatically detect file updates and transfer only the differences.
  - This time we will use Filebeat.
- Logstash is called a data processing pipeline: it ingests data, transforms it, and stores it in Elasticsearch.
- Elasticsearch is the well-known full-text search engine. Because it builds an inverted index internally when data is ingested, it can search a large number of documents at high speed.
- Kibana is used as a tool to visualize the data in Elasticsearch.
- The processing flow is [Filebeat -> Logstash -> Elasticsearch -> Kibana].
- Filebeat monitors the IIS logs and forwards any updates it detects to Logstash.
- Logstash converts the data to JSON and sends it to Elasticsearch.
- Kibana visualizes the data stored in Elasticsearch.
Build the stack with Docker Compose. Start the containers by running docker-compose up -d in the directory that contains docker-compose.yml.
> docker-compose up -d
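To confirm that everything came up, a quick check (not part of the original steps) is to list the containers and hit Elasticsearch on port 9200:

> docker-compose ps
> curl http://localhost:9200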
- Store the IIS logs in ./filebeat/log (an example copy command is shown below).
- While the containers are running, the logs are automatically ingested into Elasticsearch.
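For example, assuming the default IIS log location C:\inetpub\logs\LogFiles\W3SVC1 (adjust the site ID and path to your environment), copying a log file into the monitored directory looks like this:

> copy C:\inetpub\logs\LogFiles\W3SVC1\u_exyyyymmdd.log .\filebeat\log\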
.
├─docker-compose.yml
├─.env
├─elasticsearch
│ └─data
├─filebeat
│ ├─conf
│ │ └─filebeat.yml
│ └─log
│ └─u_exyyyymmdd.log
└─logstash
└─pipeline
└─logstash.conf
docker-compose.yml
- Build Elasticsearch, Kibana, Logstash, and Filebeat.
- Elasticsearch runs as a single node.
- Mount a local volume to persist the Elasticsearch data.
  - Graphs and dashboards created in Kibana are also stored here.
- Logstash reads its config file from the local directory.
- Filebeat reads its config file from the local directory.
- Mount a volume so that Filebeat can see the local logs.
- Filebeat apparently refers to the Docker socket, so mount that as well.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
environment:
- discovery.type=single-node
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
kibana:
image: docker.elastic.co/kibana/kibana:7.2.0
ports:
- 5601:5601
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
ports:
- 5044:5044
environment:
- "LS_JAVA_OPTS=-Xms4096m -Xmx4096m"
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline
filebeat:
image: docker.elastic.co/beats/filebeat:7.2.0
volumes:
- ./filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml
- ./filebeat/log:/usr/share/filebeat/log
- /var/run/docker.sock:/var/run/docker.sock
user: root
.env
Allows Docker for Windows to mount /var/run/docker.sock.
COMPOSE_CONVERT_WINDOWS_PATHS=1
logstash.conf
- The input section accepts transfers from Filebeat.
- The filter section parses the IIS log lines.
- The output section sends the events to Elasticsearch.
input {
  # input from Filebeat
  beats {
    port => 5044
  }
}

filter {
  dissect {
    # IIS W3C logs are space-delimited; split the line into named fields
    mapping => {
      "message" => "%{ts} %{+ts} %{s-ip} %{cs-method} %{cs-uri-stem} %{cs-uri-query} %{s-port} %{cs-username} %{c-ip} %{cs(User-Agent)} %{cs(Referer)} %{sc-status} %{sc-substatus} %{sc-win32-status} %{time-taken}"
    }
  }
  date {
    # parse the combined date and time into @timestamp
    match => ["ts", "YYYY-MM-dd HH:mm:ss"]
    timezone => "UTC"
  }
  ruby {
    # keep the local date in metadata for use in the index name
    code => "event.set('[@metadata][local_time]',event.get('[@timestamp]').time.localtime.strftime('%Y-%m-%d'))"
  }
  mutate {
    # convert numeric fields (sc-bytes / cs-bytes are only present if they are logged)
    convert => {
      "sc-bytes" => "integer"
      "cs-bytes" => "integer"
      "time-taken" => "integer"
    }
    remove_field => "message"
  }
}

output {
  elasticsearch {
    hosts => [ 'elasticsearch' ]
    index => "iislog-%{[@metadata][local_time]}"
  }
}
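If events do not show up in Elasticsearch, the Logstash container log is the first place to look; parsing problems such as dissect failures are reported there (a general troubleshooting tip, not from the original steps):

> docker-compose logs -f logstash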
filebeat.yml
- The input section reads log files from /usr/share/filebeat/log.
- Since ./filebeat/log is mounted at /usr/share/filebeat/log, any IIS log stored in ./filebeat/log is picked up by Filebeat automatically.
- The output section forwards the events to Logstash.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/log/*.log
  exclude_lines: ['^#','HealthChecker']

output.logstash:
  hosts: ["logstash:5044"]
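To check that Filebeat can read its configuration and reach Logstash, the filebeat test subcommands can be run inside the container (a sketch assuming the filebeat service name from the compose file above; depending on the image you may need to invoke ./filebeat from /usr/share/filebeat):

> docker-compose exec filebeat filebeat test config
> docker-compose exec filebeat filebeat test output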
Place an IIS log file in ./filebeat/log and Filebeat will detect it and send it to Logstash. Logstash processes the data and indexes it into Elasticsearch.
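Before moving on to Kibana, you can also confirm that the index was created by querying the Elasticsearch REST API directly (the iislog-* name comes from the index setting in logstash.conf above):

> curl "http://localhost:9200/_cat/indices/iislog-*?v"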
Go to http://localhost:5601.
Click the gear icon, then click Elasticsearch/Index Management.
Make sure that the IIS log is indexed.
Click Kibana/Index Patterns, then click Create Index pattern.
Enter the Index pattern and click Next step.
Select @timestamp for the Time Filter field name and click Create index pattern.
Select the Index pattern created here and create a graph.
Specify the Index pattern you created earlier.
Narrow down the display period on the upper right.
Specify the X axis. Set Aggregation to Date Histogram, Field to @timestamp, Minimum interval to Minute, and click ▷.
The graph now shows the number of requests per minute. To show the requests per minute for a specific feature, click Add filter and specify Field (the item to filter on), Operator, and Value.
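Equivalently, you can narrow the graph with a query in the Kibana search bar. The field names below come from the dissect mapping in logstash.conf; the path and status values are just hypothetical examples:

cs-uri-stem : "/api/orders" and sc-status : 200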
You can arrange the created graphs on the dashboard.