
Auditing WSO2 Micro Integrator with audit logs

WSO2 Micro Integrator is an integration product that is widely used to integrate services in microservices environments, where integration is an essential part of inter-service communication. It provides a rich set of features for solving integration requirements.

Micro Integrator is available as a Docker container: you can pull the Docker image to the target platform and start the Micro Integrator service directly. It also provides observability covering all three main pillars, logs, traces, and metrics, so engineers can check both the health of the Micro Integrator and the status of the overall system.

The audit log is a feature introduced with WSO2 Micro Integrator 4.1.0 that logs the changes applied to the Micro Integrator via the Management API. The Management API lets you perform changes on the Micro Integrator, such as updating log levels, checking artifact status, and so on.
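As a sketch of what such a Management API change looks like, the following curl call updates a logger's level. Port 9164 is the default Management API port; the endpoint path, payload fields, and logger name here are my best reading of the API, so verify them against the Management API reference for your version:

```shell
# Sketch: update a logger's level through the MI Management API.
# Assumes a local MI instance on the default management port (9164).
TOKEN="<access-token>"   # placeholder for a token obtained from /management/login
PAYLOAD='{"loggerName": "org.apache.synapse", "loggingLevel": "DEBUG"}'

curl -k -X PATCH "https://localhost:9164/management/logging" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "${PAYLOAD}" \
  || true   # ignore connection errors when no local MI instance is running
```

The `-k` flag skips TLS certificate verification, which is convenient against the self-signed certificate a default local MI instance ships with, but should not be used against production endpoints.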

Why are audit logs important?

When you are running a Micro Integrator in a microservices environment, administrators with admin access can change its configuration via the Management API. When someone needs to debug the system and find out who made which change, the Micro Integrator must keep a record of the activities performed on it. Audit logs are simply a set of logs that let you find out what changes were performed on the Micro Integrator instance.

Audit logs are enabled by default in Micro Integrator. However, you can adjust the logging configuration by editing the log4j2.properties file located in the product home "conf" directory. By default, audit entries are appended to the audit.log file in the product home "repository/logs/" directory. The default audit log configuration is as follows:
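The default audit appender in log4j2.properties looks roughly like the snippet below. This is a sketch of the 4.1.0 defaults, so verify the exact property names and values against the file shipped with your distribution:

```
appender.AUDIT_LOG.type = RollingFile
appender.AUDIT_LOG.name = AUDIT_LOG
appender.AUDIT_LOG.fileName = ${sys:carbon.home}/repository/logs/audit.log
appender.AUDIT_LOG.filePattern = ${sys:carbon.home}/repository/logs/audit-%d{MM-dd-yyyy}.log
appender.AUDIT_LOG.layout.type = PatternLayout
appender.AUDIT_LOG.layout.pattern = [%d] %5p {%c} - %m%ex%n
appender.AUDIT_LOG.policies.type = Policies
appender.AUDIT_LOG.policies.time.type = TimeBasedTriggeringPolicy
appender.AUDIT_LOG.policies.time.interval = 1
appender.AUDIT_LOG.policies.time.modulate = true
appender.AUDIT_LOG.policies.size.type = SizeBasedTriggeringPolicy
appender.AUDIT_LOG.policies.size.size = 10MB
appender.AUDIT_LOG.strategy.type = DefaultRolloverStrategy
appender.AUDIT_LOG.strategy.max = 20

logger.AUDIT_LOG.name = AUDIT_LOG
logger.AUDIT_LOG.level = INFO
logger.AUDIT_LOG.additivity = false
logger.AUDIT_LOG.appenderRef.AUDIT_LOG.ref = AUDIT_LOG
```

The RollingFile appender rotates audit.log daily and whenever it exceeds 10 MB, keeping up to 20 rolled files, so audit history survives restarts without growing unbounded.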

As mentioned earlier, the audit log records administration operations. The following is the list of operations that are tracked, along with the details printed for each:

logging in to the management services

You can execute the following command to log in to the MI Management API:
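A login call can be sketched as follows; 9164 is the default Management API port and admin:admin are the default credentials, so adjust both for your deployment:

```shell
# Sketch: obtain an access token from the MI Management API login endpoint.
# Assumes a local MI instance on the default management port (9164).
CREDENTIALS=$(printf 'admin:admin' | base64)   # default admin:admin credentials

curl -k -X GET "https://localhost:9164/management/login" \
  -H "Authorization: Basic ${CREDENTIALS}" \
  || true   # ignore connection errors when no local MI instance is running
```

The response body is JSON carrying an access token; that token is then passed as a Bearer token on subsequent Management API calls.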

Make sure you set the correct Authorization credentials to log in to the MI instance. Here we have used the default admin:admin credentials. The response contains a token that you can use to access the MI Management API, and the login itself is recorded in the audit log. With the default log4j2.properties configuration, you can find the corresponding log line in the <MI_HOME>/repository/logs/audit.log file:
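The entry looks roughly like this. The timestamp and exact field set are illustrative, but the JSON structure, which names the user, the action, and the target, follows the audit log format documented for MI 4.1.0:

```
[2022-09-07 10:15:43,423]  INFO {AUDIT_LOG} - {"performedBy":"admin","action":"logged in","type":"management api","info":""}
```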

Now you can perform different operations on the MI instance with that token. For example, if you need to deactivate a proxy service, you can use the following curl command with the token you received in the previous step.
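A deactivation call can be sketched like this. The endpoint path and payload fields are my best reading of the Management API, and SampleProxy is a hypothetical service name, so verify both against the Management API reference:

```shell
# Sketch: deactivate a proxy service through the MI Management API.
# TOKEN is a placeholder for the token returned by /management/login;
# SampleProxy is a hypothetical proxy service name.
TOKEN="<access-token>"
PAYLOAD='{"name": "SampleProxy", "status": "inactive"}'

curl -k -X POST "https://localhost:9164/management/proxy-services" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "${PAYLOAD}" \
  || true   # ignore connection errors when no local MI instance is running
```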

This will generate the following logline in the audit.log file:
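Again, the exact fields and timestamp are illustrative, but the entry follows the same JSON shape, recording who performed the change, what the action was, and which artifact it affected:

```
[2022-09-07 10:18:02,118]  INFO {AUDIT_LOG} - {"performedBy":"admin","action":"updated","type":"proxy service","info":"{\"proxyName\":\"SampleProxy\",\"status\":\"inactive\"}"}
```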

In the same way, you can perform other operations on the MI instance and check what happened to it through the audit log files. You can also use a separate log collection stack, such as ELK, to collect and analyze those logs. In short, the audit log lets you identify what changed on the MI instance, who changed it, and when it changed.

For more information, check the following WSO2 document:

https://apim.docs.wso2.com/en/4.1.0/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs/
