

Database Internal Architecture: SQLite

Introduction

A database is an essential part of a software system; it is used to store and read data efficiently. Here, we are going to discuss some architectural details of database implementation by looking at an early version of SQLite. SQLite is a small database engine used in millions of applications and devices. It was created by D. Richard Hipp in August 2000 and is a high-performance, lightweight relational database. If you want to learn the internals of a database at the code level, SQLite is the best open-source database out there, with highly readable source code and plenty of documentation. Reading later versions of SQLite is a little harder since they contain lots of new features. To understand the basic implementation of database internals, you should have a good knowledge of data structures, some knowledge of the theory of computation, and an understanding of how an operating system works. Here we are looking into SQLite version 2.5.0.
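To get a first feel for those internals, it helps to look at the bytecode program that SQLite's virtual machine (the VDBE) executes for a query; the EXPLAIN statement prints exactly that. Below is a minimal sketch of dumping those opcodes from Java through the org.xerial sqlite-jdbc driver, which bundles a modern SQLite 3 build rather than the 2.5.0 sources discussed here, so treat it only as a convenient way to poke at the same idea.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

// Minimal sketch: dump the VDBE opcodes SQLite generates for a simple query.
// Assumes the org.xerial:sqlite-jdbc driver is on the classpath (a SQLite 3.x
// build, not the 2.5.0 sources this post walks through).
public class ExplainDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)");
            stmt.executeUpdate("INSERT INTO t(name) VALUES ('a'), ('b')");

            // EXPLAIN returns one row per virtual-machine instruction.
            try (ResultSet rs = stmt.executeQuery("EXPLAIN SELECT name FROM t WHERE id = 1")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnLabel(i)).append('=').append(rs.getString(i)).append(' ');
                    }
                    System.out.println(row);
                }
            }
        }
    }
}

Each printed row is one virtual-machine instruction (an opcode plus its operands), which is the layer sitting between the SQL front end and the B-tree storage code.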
Recent posts

Auditing WSO2 Micro-integrator with audit logs

  WSO2 Micro-integrator is an integration product that is widely used to integrate services in microservices environments. Integration is an essential part of inter-service communication in microservices, and WSO2 Micro-integrator provides a rich set of features to solve integration requirements. Micro-integrator is available as a Docker container, so you can pull the Docker image directly to the target platform and start the Micro-integrator service. Micro-integrator also provides observability features covering all three main pillars of observability: logs, traces, and metrics, so engineers can see the health status of the Micro-integrator as well as the overall system status. The audit log is a recent feature that comes with WSO2 Micro-integrator 4.1.0 to log the changes applied to the Micro-integrator via the management API. The management API lets you perform changes on the Micro-integrator, such as changing log levels, getting artifact status, etc.
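To make that concrete, here is a rough sketch of the kind of management API call whose effect you would expect to see in the audit log: changing a logger's level. The port, endpoint paths, credentials, and JSON field names below are assumptions based on typical MI 4.x setups rather than anything taken from this post, and the sketch ignores the TLS trust setup a real call against MI's self-signed certificate would need, so check the management API documentation for your version before relying on it.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hedged sketch: a management API call that should show up in the audit log.
// Port, endpoint paths, credentials, and payload field names are assumptions;
// TLS trust for MI's self-signed certificate is not handled here.
public class ChangeLogLevel {
    public static void main(String[] args) throws Exception {
        String base = "https://localhost:9164/management"; // assumed default management API address
        HttpClient client = HttpClient.newHttpClient();

        // 1. Log in with basic auth to obtain an access token (assumed /login resource).
        String basic = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest login = HttpRequest.newBuilder(URI.create(base + "/login"))
                .header("Authorization", "Basic " + basic)
                .GET()
                .build();
        String loginBody = client.send(login, HttpResponse.BodyHandlers.ofString()).body();

        // Naive token extraction; the "AccessToken" field name is an assumption.
        String token = loginBody.replaceAll(".*\"AccessToken\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // 2. Change a logger's level (assumed /logging resource and field names);
        //    this is the kind of change the audit log records.
        String payload = "{\"loggerName\":\"org.apache.synapse\",\"loggingLevel\":\"DEBUG\"}";
        HttpRequest patch = HttpRequest.newBuilder(URI.create(base + "/logging"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(payload))
                .build();
        System.out.println(client.send(patch, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}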

Don’t let anyone sneak peek into your Micro Integrator: Security guide for WSO2 MI

  WSO2 Micro Integrator is an integration solution that is widely used in enterprise integration. You can use MI (for short, let's use Micro Integrator as MI) to implement mediation policies, message transformation, security, and much more. In this article, we are going to focus on how you can place MI securely in your deployment. First things first: WSO2 MI comes with a default keystore that is used in many cryptography-related features, including SSL, mutual SSL, password encryption, and so on. Since this default keystore is publicly available, you should make sure that you generate a new keystore and truststore for MI. If you are looking for the steps, refer to the following document: https://apim.docs.wso2.com/en/latest/install-and-setup/setup/mi-setup/security/configuring_keystores Avoid keeping the default H2 database and use a proper one: by default, WSO2 MI ships with an embedded H2 database to store data for purposes such as cluster coordination and the RDBMS user store.
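After generating the new keystore and truststore, it is worth sanity-checking that the files actually contain the key entries you expect before pointing MI at them. Below is a minimal sketch using the standard java.security.KeyStore API; the file path, password, and alias names are placeholders, not values from this post.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

// Minimal sketch: load a JKS keystore and list its aliases to confirm the new
// key pair is in place. File path and password are placeholders.
public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        String path = "repository/resources/security/newkeystore.jks"; // placeholder path
        char[] password = "changeit".toCharArray();                    // placeholder password

        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }

        for (String alias : Collections.list(ks.aliases())) {
            System.out.printf("alias=%s keyEntry=%b%n", alias, ks.isKeyEntry(alias));
        }
    }
}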

[Short Blog] Apache Kafka consumer with WSO2 Micro integrator

  This document contains the steps I followed to set up an inbound endpoint with Kafka Avro messages. Setting up the Kafka server and UI: first, download Kafka kafka_2.11-2.2.1, which provides both Zookeeper and the Kafka server:  https://archive.apache.org/dist/kafka/2.2.1/kafka_2.11-2.2.1.tgz Start Zookeeper by going to the Kafka home folder and executing: bin/zookeeper-server-start.sh config/zookeeper.properties In the same way, start Kafka with: bin/kafka-server-start.sh config/server.properties Download confluent-5.2.1 for the Schema Registry:  https://packages.confluent.io/archive/5.2/confluent-community-5.2.1-2.11.tar.gz Start the Schema Registry with the following command from the Confluent home folder: bin/schema-registry-start ./etc/schema-registry/schema-registry.properties As the UI for Kafka I used  https://github.com/provectus/kafka-ui . Optionally, you can use it to confirm that messages are being published correctly. You can start the UI from the prebuilt jar file.
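Before wiring the inbound endpoint into the Micro Integrator, it can be handy to confirm that Avro messages can actually be consumed from the broker and decoded against the Schema Registry. The following is a minimal sketch using the plain KafkaConsumer together with Confluent's KafkaAvroDeserializer (kafka-clients and kafka-avro-serializer on the classpath); the topic name, group id, and addresses are placeholders matching the local setup above.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Minimal sketch: consume Avro messages directly, to confirm the broker and
// Schema Registry are working before pointing the MI inbound endpoint at them.
// Topic name, group id, and addresses are placeholders for the local setup.
public class AvroConsumerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mi-avro-check");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic
            ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, GenericRecord> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}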