Name: logback-s3-rolling-policy
Owner: AOE
Description: Logback RollingPolicy to store logs in S3
Forked from: SiteFlo/logback-s3-rolling-policy
Created: 2017-06-01 09:25:04.0
Updated: 2017-06-01 09:25:06.0
Pushed: 2017-06-01 09:51:02.0
Size: 117
Language: Shell
logback-s3-rolling-policy automatically uploads rolled log files to S3.

There are two rolling policies which can be used:

- `S3FixedWindowRollingPolicy`
- `S3TimeBasedRollingPolicy`

logback-s3-rolling-policy was forked from logback-s3 (https://github.com/shuwada/logback-s3) but transferred into a new project because the changes were getting too big.
Add the linkID repositories to your pom file:

```xml
<!-- REMOTE ARTIFACT REPOSITORIES -->
<repositories>
    <repository>
        <id>repo.linkid.be.release</id>
        <name>LinkID Public Repository</name>
        <url>http://repo.linkid.be/releases</url>
        <snapshots>
            <enabled>false</enabled>
            <updatePolicy>never</updatePolicy>
        </snapshots>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
    </repository>
    <repository>
        <id>repo.linkid.be.snapshot</id>
        <name>LinkID Public Repository</name>
        <url>http://repo.linkid.be/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>always</updatePolicy>
        </snapshots>
        <releases>
            <enabled>false</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
    </repository>
</repositories>
```
Add the logback-s3-rolling-policy dependency to your pom file:

```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-s3-rolling-policy</artifactId>
    <version>1.5</version>
</dependency>
```
Whichever of the available S3 policies you implement, the following extra variables (on top of Logback's own) are mandatory:

- `awsAccessKey`: Your AWS access key.
- `awsSecretKey`: Your AWS secret key.
- `s3BucketName`: The S3 bucket name to upload your log files to.

There are a few optional variables:

- `s3FolderName`: The S3 folder name in your S3 bucket to put the log files in. This variable supports dates; just put your pattern between `%d{}`. Example: `%d{yyyy/MM/dd}`.
- `s3Endpoint`: The endpoint to use. If you want to store the logs somewhere other than AWS S3, you need to provide this. Example value: `http://minio-host:9000`.
- `shutdownHookType`: Defines which type of shutdown hook you want to use. This variable is mandatory when you use `rolloverOnExit`. Defaults to `NONE`. Possible values are:
  - `NONE`: No shutdown hook is added. Please note that your most up-to-date log file won't be uploaded to S3!
  - `JVM_SHUTDOWN_HOOK`: Adds a runtime shutdown hook. If you're running a web application, please use `SERVLET_CONTEXT` instead, as the JVM shutdown hook is not really safe to use there.
  - `SERVLET_CONTEXT`: Registers a shutdown hook with the context-destroyed method of `RollingPolicyContextListener`. Don't forget to actually add the context listener to your `web.xml` (see below).
- `rolloverOnExit`: Whether to roll over when your application is being shut down. Boolean value, defaults to `false`. If this is set to `false` and you have defined a `shutdownHookType`, then the log file will be uploaded as is.
- `prefixTimestamp`: Whether to prefix the uploaded filename with a timestamp formatted as `yyyyMMdd_HHmmss`. Boolean value, defaults to `false`.
- `prefixIdentifier`: Whether to prefix the uploaded filename with an identifier. Boolean value, defaults to `false`. If running on an AWS EC2 instance, the instance ID is used. If not running on an EC2 instance, the hostname address is used; if the hostname address can't be determined, a UUID is used.

If you're using the shutdown hook `SERVLET_CONTEXT` as defined above, you'll need to add the context listener class to your `web.xml`:
```xml
<listener>
    <listener-class>ch.qos.logback.core.rolling.shutdown.RollingPolicyContextListener</listener-class>
</listener>
```
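The `%d{}` date support in `s3FolderName` described above can be illustrated with a small standalone sketch. This is only an illustration of the pattern expansion the documentation describes, not the library's actual implementation; the class and method names here are hypothetical.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class S3FolderPatternDemo {

    // Hypothetical helper: expands every %d{...} occurrence in an
    // s3FolderName value using the enclosed SimpleDateFormat pattern.
    static String expand(String folderName, Date now) {
        Matcher m = Pattern.compile("%d\\{([^}]+)\\}").matcher(folderName);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String formatted = new SimpleDateFormat(m.group(1)).format(now);
            m.appendReplacement(sb, Matcher.quoteReplacement(formatted));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // With s3FolderName = "logs/%d{yyyy/MM/dd}", logs uploaded on
        // 2015-08-18 would land under "logs/2015/08/18".
        System.out.println(expand("logs/%d{yyyy/MM/dd}", new Date()));
    }
}
```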
As of version `1.3` you can set run-time variables. For now, the only one is an extra S3 folder: call `CustomData.extraS3Folder.set( "extra_folder_name" );` somewhere in your code before the upload occurs. You can change this value at run time and it will be picked up on the next upload; set it to `null` to ignore it.
An example `logback.xml` appender for each available policy, using `RollingFileAppender`.

`ch.qos.logback.core.rolling.S3FixedWindowRollingPolicy`:
```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/myapp.log</file>
    <encoder>
        <pattern>[%d] %-8relative %22c{0} [%-5level] %msg%xEx{3}%n</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.S3FixedWindowRollingPolicy">
        <fileNamePattern>logs/myapp.%i.log.gz</fileNamePattern>
        <awsAccessKey>ACCESS_KEY</awsAccessKey>
        <awsSecretKey>SECRET_KEY</awsSecretKey>
        <s3BucketName>myapp-logging</s3BucketName>
        <s3FolderName>logs/%d{yyyy/MM/dd}</s3FolderName>
        <rolloverOnExit>true</rolloverOnExit>
        <shutdownHookType>SERVLET_CONTEXT</shutdownHookType>
        <prefixTimestamp>true</prefixTimestamp>
        <prefixIdentifier>true</prefixIdentifier>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>10MB</maxFileSize>
    </triggeringPolicy>
</appender>
```
In this example you'll find the logs at `myapp-logging/logs/2015/08/18/`.
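If you want to store the logs somewhere other than AWS S3, the optional `s3Endpoint` variable described earlier fits into the same policy block. A minimal sketch, assuming an S3-compatible store such as MinIO at the example address from the variable description:

```xml
<rollingPolicy class="ch.qos.logback.core.rolling.S3FixedWindowRollingPolicy">
    <fileNamePattern>logs/myapp.%i.log.gz</fileNamePattern>
    <awsAccessKey>ACCESS_KEY</awsAccessKey>
    <awsSecretKey>SECRET_KEY</awsSecretKey>
    <s3BucketName>myapp-logging</s3BucketName>
    <!-- Point the policy at a non-AWS, S3-compatible endpoint -->
    <s3Endpoint>http://minio-host:9000</s3Endpoint>
</rollingPolicy>
```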
`ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy`:
```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/myapp.log</file>
    <encoder>
        <pattern>[%d] %-8relative %22c{0} [%-5level] %msg%xEx{3}%n</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy">
        <!-- Rollover every minute -->
        <fileNamePattern>logs/myapp.%d{yyyy-MM-dd_HH-mm}.%i.log.gz</fileNamePattern>
        <awsAccessKey>ACCESS_KEY</awsAccessKey>
        <awsSecretKey>SECRET_KEY</awsSecretKey>
        <s3BucketName>myapp-logging</s3BucketName>
        <s3FolderName>log</s3FolderName>
        <rolloverOnExit>true</rolloverOnExit>
        <shutdownHookType>SERVLET_CONTEXT</shutdownHookType>
        <prefixTimestamp>false</prefixTimestamp>
        <prefixIdentifier>true</prefixIdentifier>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>10MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
</appender>
```
In this example you'll find the logs at `myapp-logging/log/`.
It is a good idea to create an IAM user that is only allowed to upload objects to a specific S3 bucket. This improves control and reduces the risk of unauthorized access to your bucket.

The following is an example IAM policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1378251801000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::myapp-logging/log/*"
            ]
        }
    ]
}
```
This project uses the following libraries:

- `com.amazonaws:aws-java-sdk:1.11.7`
- `ch.qos.logback:logback-classic:1.1.3`
- `com.google.guava:guava:18.0`
- `javax.servlet:servlet-api:2.4` (scope provided)
- `org.jetbrains:annotations:7.0.2` (scope provided)

For now we only have a manual way to publish this to Artifactory:

1. Update the `<version>` number in the `pom.xml`.
2. Run `mvn package` (this adds a jar file to the `/target` folder).
3. Select "Generate Default POM / Deploy Jar's Internal POM" when uploading.