Three powerful tools for migrating Java logs to the cloud: Log4J, LogBack, Producer Lib

In this article, I will introduce three powerful tools for migrating **Java** logs to the cloud: **Log4J**, **LogBack**, and **Producer Lib**.

Road to centralized logs

In recent years, the advent of stateless programming, containers, and serverless programming has greatly improved the efficiency of software delivery and deployment. Two changes can be seen in the evolution of architecture:


- Application architecture is changing from a single system to microservices, and business logic turns into calls and requests between microservices.
- In terms of resources, traditional physical servers have faded out and turned into invisible virtual resources.

The two changes mentioned above show that behind the elastic, standardized architecture, operations and maintenance (O&M) and diagnostic requirements are becoming more complex. Ten years ago, you could log on to a server and fetch logs quickly, but that way of working no longer exists. What we face today is a standardized black box.


In response to these changes, diagnostic and analysis tools specialized for DevOps are appearing one after another, including centralized monitoring, centralized logging systems, and various SaaS-based monitoring services.

Centralizing logs solves the problems described above: after the application generates a log, it is sent in real time (or near real time) to a central node server. Syslog, Kafka, ELK, and HBase are often used as the centralized storage.

Benefits of centralization

- Easy to use: querying the logs of stateless applications with grep is a hassle; with centralized storage, that long process is replaced by a single search command.
- Separation of storage and compute: you no longer need to worry about log storage space when sizing a machine's hardware.
- Lower cost: centralized log storage allows load shifting so that more resources can be reserved.
- Security: important data is retained as evidence in the event of a hacker intrusion or disaster.


Collector (Java series)

The Log Service supports more than 30 data collection methods, covering servers, mobile terminals, embedded devices, and a variety of development languages, and provides a comprehensive access solution. For Java developers, the most familiar entry points are the logging-framework appenders: Log4j, Log4j2, and Logback.

Java applications currently have two mainstream log collection solutions.

- The Java program flushes logs to disk, and Logtail collects them in real time.
- The Java program directly configures the Appender provided by the Log Service, so logs are sent to the Log Service in real time while the program runs.

The main difference is that the first solution writes logs to disk and collects them with an agent, while the second sends logs over the network directly from the application.


With an Appender, you can complete real-time log collection through configuration alone, without changing your code. The Java appenders provided by the Log Service have the following advantages:

- Configuration changes take effect without modifying the program.
- Asynchronous + breakpoint transfer: I/O does not block the main thread, and certain network or service failures can be tolerated.
- High-concurrency design: meets the requirements of large-scale log writes.
- Context queries: supports accurately restoring a log's context in the original process (the N logs before and after it).

Overview and usage of appenders: the provided appenders are listed below. All of them use aliyun-log-producer-java under the hood to write data.

- aliyun-log-log4j-appender
- aliyun-log-log4j2-appender
- aliyun-log-logback-appender

They differ mainly in which logging framework they target (Log4j 1.x, Log4j 2.x, and Logback, respectively).


Appender integration

You can integrate the Appender by following the setup steps for aliyun-log-log4j-appender.

The contents of the configuration file log4j.properties are as follows.

log4j.rootLogger=WARN,loghub

log4j.appender.loghub=com.aliyun.openservices.log.log4j.LoghubAppender

# Log Service project name (required parameter)
log4j.appender.loghub.projectName=[your project]
# Log Service LogStore name (required parameter)
log4j.appender.loghub.logstore=[your logstore]
# Log Service HTTP address (required parameter)
log4j.appender.loghub.endpoint=[your project endpoint]
# User identity (required parameter)
log4j.appender.loghub.accessKeyId=[your accesskey id]
log4j.appender.loghub.accessKey=[your accesskey]
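
With this configuration in place, any standard Log4j logging call goes through the loghub appender and is shipped to the Log Service; no application code needs to change. A minimal sketch (the class name and messages here are illustrative only):

import org.apache.log4j.Logger;

public class LoghubQuickStart {
    // Log4j loads log4j.properties from the classpath, so this logger
    // already writes through the loghub appender configured above.
    private static final Logger logger = Logger.getLogger(LoghubQuickStart.class);

    public static void main(String[] args) {
        logger.info("Hello from the loghub appender");
        logger.warn("This message reaches the Log Service in near real time");
    }
}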

Query and analysis

When you configure the appender as described in the previous step, the logs generated by your Java application are automatically sent to the Log Service. You can use LogSearch/Analytics to query and analyze these logs in real time. The log format used in this example is shown below.

Log that records logon behavior:

level: INFO
location:  com.aliyun.log4jappendertest.Log4jAppenderBizDemo.login(Log4jAppenderBizDemo.java:38)
message: User login successfully. requestID=id4 userID=user8 
thread: main
time:  2018-01-26T15:31+0000  

Logs that record buying behavior:

level: INFO
location:  com.aliyun.log4jappendertest.Log4jAppenderBizDemo.order(Log4jAppenderBizDemo.java:46)
message: Place an order successfully. requestID=id44 userID=user8 itemID=item3 amount=9
thread: main
time: 2018-01-26T15:31+0000 
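
For reference, logs like the two samples above could be produced by business code along the following lines. This is only a sketch: the class and method names mirror the location fields in the samples, and the key=value pairs in the messages are what the regexp_extract queries in the analysis section below parse out.

package com.aliyun.log4jappendertest;

import org.apache.log4j.Logger;

public class Log4jAppenderBizDemo {
    private static final Logger logger = Logger.getLogger(Log4jAppenderBizDemo.class);

    // Produces a log like the login sample above.
    public void login(String requestId, String userId) {
        logger.info("User login successfully. requestID=" + requestId + " userID=" + userId);
    }

    // Produces a log like the order sample above.
    public void order(String requestId, String userId, String itemId, int amount) {
        logger.info("Place an order successfully. requestID=" + requestId
                + " userID=" + userId + " itemID=" + itemId + " amount=" + amount);
    }
}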

Enable query and analysis

You must enable the query and analysis features before you can query and analyze the data. Follow the steps below to enable the feature.

- Log on to the [Log Service Console](https://account.aliyun.com/login/login.htm?oauth_callback=https%3A%2F%2Fsls.console.aliyun.com%2F%3Fspm%3Da2c65.11461447.0.0.2b7a65cbO0qZgq&lang=ja#/).
- On the project list page, click the project name, or click Manage on the right.
- Select the Logstore and click Search in the LogSearch column.
- Select LogSearch & Analytics > Settings.
- On the settings page, enable queries for the log fields (for example, level, location, message, thread, and time).


Log analysis

Let's look at five analysis examples.

  1. Count the top three places with the most errors in the last hour. Syntax example:
level: ERROR | select location, count(*) as count GROUP BY location ORDER BY count DESC LIMIT 3
  2. Count the number of logs generated for each log level in the last 15 minutes. Syntax example:
| select level ,count(*) as count GROUP BY level ORDER BY count DESC 
  3. Query the log context.

For any log, you can accurately reconstruct the log context information in the original log file.

For more information, see [Context Query](https://www.alibabacloud.com/help/ja/doc-detail/48148.html).

  4. Count the top 3 users who logged on most frequently in the last hour.

Syntax example:

login | SELECT regexp_extract(message, 'userID=(?<userID>[a-zA-Z\d]+)', 1) AS userID, count(*) AS count GROUP BY userID ORDER BY count DESC LIMIT 3
  5. Compile the total payment statistics for each user over the last 15 minutes.

Syntax example:

order | SELECT regexp_extract(message, 'userID=(?<userID>[a-zA-Z\d]+)', 1) AS userID, sum(cast(regexp_extract(message, 'amount=(?<amount>[a-zA-Z\d]+)', 1) AS double)) AS amount GROUP BY userID
