Sunday, December 9, 2018

MongoDB Walkthrough

INTRODUCTION

MongoDB is a quite popular NoSQL database system. In this article, I am going to write something about it. Yes, NoSQL databases are meant for performance and scalability; unlike relational databases, there are no relations between the tables or collections in the database.

MongoDB is super easy, can run on a single host, and data is stored in document format (as JSON objects), which makes it very easy to analyze or export. It provides a nice GUI client tool called "MongoDB Compass". The storage of data is transparent in the sense that we can copy the database and reuse it. There is also flexibility regarding the log files, data files, and config files.
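Since data is stored as JSON objects, a single record in a collection might look like this (the field names and values here are made up for illustration):

```json
{
  "_id": "5c0e8d3f9a1b2c3d4e5f6a7b",
  "name": "alice",
  "age": 30,
  "roles": ["admin", "user"]
}
```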

I have used an alternative NoSQL database called Cassandra, which is also powerful and popular. Cassandra, however, requires more resources than MongoDB, and I could not find a comparably good GUI client tool for it. Its configuration is also a bit different. Cassandra is Java-based; MongoDB is C++-based.

INSTALLATION:

There are three methods to install:

1) Using Debian file available from

https://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/4.0/multiverse/binary-amd64/mongodb-org-server_4.0.4_amd64.deb

2) Using Terminal

sudo apt install mongodb-server
sudo apt install mongodb-clients

3) Using installation files from

http://downloads.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-v4.0-latest.tgz

CONFIGURATION

The first two methods mentioned above are quite straightforward and basically do not need any configuration.
Here I am writing the configuration for installation method 3 mentioned above.

# After downloading, extract the file and place the contents into /opt/mongodb. Create a symbolic link of the original extracted folder into /etc/mongodb.

# Add /opt/mongodb/bin to the PATH variable so that we can run MongoDB commands directly.

# Create a folder /data/db
sudo mkdir -p /data/db

# Start Server

sudo mongod --dbpath /data/db --logpath /var/log/mongodb/mongod.log --fork

or create a config file /etc/mongod.conf with the following contents:

# mongod.conf
systemLog:
 destination: file
 path: "/var/log/mongodb/mongod.log"
 logAppend: true
storage:
 journal:
  enabled: true
processManagement:
 fork: true 
net:
 bindIp: 0.0.0.0
 port: 27017

And run the following command:

sudo mongod -f /etc/mongod.conf


# Stop Server

sudo mongod --shutdown

# Change bind address

By default, only localhost can connect to the server, and any connections from outside are blocked. So, we need to change the bind address.

We edit the file /etc/mongod.conf created in the step above.

Default is:

net:
  bindIp: 127.0.0.1

Change it to

net:
  bindIp: 0.0.0.0 # allow connections from anywhere

# Configure autostart

Every time the system restarts, the server should also start. We define a cronjob for that (/etc/cron.d/mongodb) with the following content:

@reboot root sleep 60 && /opt/mongodb/bin/mongod -f /etc/mongod.conf

We have defined a one-minute (60 seconds) sleep before starting the MongoDB server.

An alternative would be to create a service, which we can enable to start when the system starts.
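A minimal sketch of such a service unit (say, /lib/systemd/system/mongod.service; the binary and config paths are assumptions based on the installation steps above):

```ini
[Unit]
Description=MongoDB Service
After=network.target

[Service]
Type=forking
ExecStart=/opt/mongodb/bin/mongod -f /etc/mongod.conf
ExecStop=/opt/mongodb/bin/mongod --shutdown --dbpath /data/db
Restart=always

[Install]
WantedBy=multi-user.target
```

After copying the file there and reloading the daemon, sudo systemctl enable mongod makes it start at boot.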

CLIENTS:

1. Command Line Tool: mongo

We can connect using mongo. By default, mongo connects to the server on localhost using the default port. There are several options for this command.

After running the command, we enter an environment where we can run MongoDB commands to access databases and carry out several database-related operations.
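For example, after connecting we might run a few commands like these in that environment (the database, collection, and field names here are made up for illustration; this requires a running server):

```
> use testdb
> db.people.insertOne({ name: "alice", age: 30 })
> db.people.find({ age: { $gt: 20 } })
> show dbs
```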

2. GUI Administration: MongoDB Compass

MongoDB Compass is a nice GUI tool with which we can carry out operations on collections and documents.

3. Create user

Using the mongo command line tool, we create a user:
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)

We have to provide the option --auth in the mongod command to enable authorization.

Or add the following line in the /etc/mongod.conf file:

security:
 authorization: enabled

Saturday, December 8, 2018

Path Variable in Ubuntu System


The paths defined in Ubuntu, or any operating system, are searched to find the commands given in the command line terminal. If a program is installed from a Debian (.deb) or Windows installer (.exe), then the installer defines the path automatically.

If we do not have a packaged installer, we get all files in a folder with a bin directory inside it containing all the executables. The executables in the bin folder should be available from the terminal; otherwise, we have to go into the executable's location to run the binary files.

There are two methods to define the path variable in a Linux system. (Actually, I tested it with Ubuntu 18.04 and 16.04; other Linux systems should also work.)

Method 1: User-specific path variable. 

Each user can define their own path variable. The file in which to define the path variable is
~/.bashrc

export PATH=/path/to/bin:$PATH

Then the following command makes it effective. 

$ source ~/.bashrc 

Or we can restart the terminal for the change to take effect.


Method 2: System path variable 

This method is used to set the path variable globally; that means these path variables are defined system-wide. The file in which to define the path variable is

/etc/environment

Just open it as a root user, add the path variable as follows:

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/cassandra/bin:/path/to/bin"

Then run the following command to make it take effect:

$ . /etc/environment
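To see the effect of Method 1, here is a minimal sketch that creates a throwaway command and puts its directory on the path (the /tmp/demo-bin directory and demo-cmd name are made up for this illustration):

```shell
# create a small executable in a scratch directory
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello-from-demo\n' > /tmp/demo-bin/demo-cmd
chmod +x /tmp/demo-bin/demo-cmd

# the same kind of line you would add to ~/.bashrc
export PATH=/tmp/demo-bin:$PATH

# now the shell finds the command without a full path
demo-cmd    # prints hello-from-demo
```

The same idea applies to Method 2, except the directory is appended to the PATH line in /etc/environment instead.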

I hope this is helpful to you guys!

Friday, May 11, 2018

Tweak Ubuntu 18.04 desktop

Ubuntu 18.04 is another long-term release with a lot of changes. I have freshly installed it on my system and it is running very smoothly. To let you know, I have replaced Linux Mint 18, which is also a great operating system.

Ubuntu has brought the GNOME desktop back; that means it looks like Unity, but it is a modified GNOME, which I really appreciate. (Note: I only install LTS versions, so I have no experience with the short-term versions.)

It looks really cool with the GNOME desktop and I have no complaints. Still, for Windows or Mint users, the left vertical bar looks a bit confusing. So, my first attempt is to move this vertical bar to the bottom.


Dock Positioning

What I basically did: I went to Settings -> Dock, where we can define the dock position on the screen. There are three possibilities: LEFT, BOTTOM, RIGHT, as shown below:

 
Yes, we can also define the icon size by dragging the range selector. So, basically, I selected the dock position as Bottom, and my window looks like this:


Ok, so far so good. It looks like a Windows or Mint system, which is really comfortable for me. Did you notice the applications icon is in the bottom right corner? That is okay, but I prefer it in the bottom left corner, like the conventional Start icon in Windows and the applications icon in the Mint system.

So, now I wanted to move this. Is it possible? Yes, because Google knows everything. I found this article:

https://medium.com/@amritanshu16/move-show-applications-button-to-top-of-the-dock-in-ubuntu-17-10-5530beeaeef2

We need to run just the following command:

gsettings set org.gnome.shell.extensions.dash-to-dock show-apps-at-top true

If you don't like commands, just install dconf-tools as follows:

sudo apt install dconf-tools

and then run the program dconf-editor.

Now, everything is as expected.  Look below:



Enjoy GNOME! 

Monday, April 30, 2018

Docker Installation & Configuration

Introduction

Docker is nowadays a buzzword; I hear it everywhere in the software development sector. I went through it to learn what it actually is and why we need it. First of all, before we go into the need for Docker, we have to know about virtual machines. Yes, Docker is a kind of virtual machine, but virtual machines are bloated and need more resources. That means we can run many more Docker instances than virtual machines on the same system.

The lightweight nature of Docker instances has several advantages, such as more customized configuration and application portability. The application can be deployed into Docker and can be packed and shipped anywhere. Because of this, developers prefer Docker to deploy their applications in a cloud.

Installation

The installation on a Linux system is quite easy. I have just installed it on my Ubuntu 16.04 system using the following commands:

  • Add public key into your system
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

  • Add repository
     sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

  • Update packages
        sudo apt update

  • Install Docker community edition (I am afraid to install the enterprise edition; I hope it is also free, but I like the word community)
    sudo apt install docker-ce

  • Test installation
    sudo docker info
    sudo docker run hello-world

Do Interesting Stuff

After a successful installation of Docker, it is a good idea to do something interesting. Docker comes with the default image hello-world, which does nothing but print a message. Images are blueprints of an application which form the basis of containers. When we run an image, Docker creates a container which runs the actual application.

So, we need to download (or pull) the image first before we create an instance of it. We do that by using the "docker pull" command.

sudo docker pull busybox 

Then, after the pull completes, we can see the image using the following command:

sudo docker images

Now, we run the container 

sudo docker run busybox 

This creates a container instance; since no command is given, it does nothing, and the instance terminates. If we do something like this:

sudo docker run busybox echo "hello world"

This will print out hello world in the console.

Now, if we run the image using the -it parameters (-i stands for interactive, -t allocates a pseudo-terminal), then the instance does not terminate.

sudo docker run -it busybox

We can verify the running of this using the following command:

sudo docker container ls

This shows the running instances.

Remove instance

sudo docker rm container_id1 container_id2

or to remove all exited instances:

sudo docker rm $(sudo docker ps -a -q -f status=exited)


Remove image

sudo docker rmi image_id1 image_id2

or to remove all images:

sudo docker rmi $(sudo docker images -a -q)


Ubuntu 18.04

Now my interest is to pull the Ubuntu 18.04 image and create a container from it.

sudo docker pull ubuntu:18.04 (pull)
sudo docker images (check)
sudo docker run -it ubuntu:18.04 (run)
sudo docker container ls (verify container)

So, after running the container, you are in the bash terminal, where you have the possibility to install commands and tools from scratch.

The basic image does not come with all the necessary commands or tools, so we have to install and configure them ourselves.

Sunday, April 8, 2018

Run programs as services in Ubuntu System

BRIEF INTRODUCTION

Running jobs from a bash terminal is really easy. But if we want to reduce user interaction and implement automation, we define services which run on their own. We do not need to run or click anything to start the program. Once we define the service and enable it, the program runs when the operating system boots. The service programs run in the background, and they do not terminate when the user logs out. That means these service programs are running in the background and users do not notice them. And they are quite handy to start and stop from a remote system, or from the terminal. After the job is started, we can safely disconnect from the remote system or close the terminal.


PROCEDURE

Now, we start creating a service that runs in the background. The perfect example would be running Tomcat as a service, because we need it always running, and at the same time, we need to start or restart it from time to time. Also, we need it to auto-start when the system is rebooted.

So, we create a service that starts and stops the Tomcat server. To implement that, we first of all install the Tomcat server. I will not talk about Tomcat installation here; it is really straightforward. Just download the packaged Tomcat installer and extract the files into /opt/tomcat.

We could manually start and stop it using the commands in the bin directory of the Tomcat folder. But that is not what we want. Basically, we create a service and configure it so that it starts automatically when the system reboots. So, the first task is to create a tomcat.service file.

The file looks something like this:

File: tomcat.service

[Unit]
Description=Tomcat Service
After=network.target

[Service]
Type=forking
ExecStart=/opt/tomcat/bin/catalina.sh start
ExecStop=/opt/tomcat/bin/catalina.sh stop
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target
               

So, after this file is created, we copy it into the /lib/systemd/system/ directory and reload the daemon:

sudo systemctl daemon-reload

TESTING

So, to manage the service, we use the following commands:

sudo systemctl start tomcat (starts the service)
sudo systemctl status tomcat (gets the status)
sudo systemctl is-active tomcat
sudo systemctl is-enabled tomcat
sudo systemctl enable tomcat (enables the service)

And we can simply start or stop the service in the traditional way too, as follows:

sudo service tomcat start
sudo service tomcat stop
sudo service tomcat status


In case I need the service to start after the system boots, I enable the service. Otherwise, I just use the start and stop commands to start and stop the service.

The file uses a basic configuration; we can extend it and add more configuration.



References:

https://wiki.ubuntu.com/SystemdForUpstartUsers

Friday, April 6, 2018

Spring with Vaadin

Spring and Vaadin integration and sample code 

Friday, March 2, 2018

Execute local bash scripts in remote system

Why do we need this? 

We normally log in to the remote system and execute scripts located there. But what if we could execute local scripts on the remote system, so that we need not copy the scripts to the remote system? This also makes them easier to test. The use case for this is when we have a job with local tasks and remote tasks that depend on each other. When backing up remote files locally, first we need to create the backup files and save them on the remote system. After backup creation is successful, we can go ahead and copy those files to the local system.

So, what do we need? I have successfully solved this problem and implemented it in my company, so I am gonna write about my achievement here.

1) Secure Login

This is the most important part, because we are connecting from a local system to a remote system which can be located anywhere in the world. That means your traffic goes outside of your company, and a proper security mechanism should be implemented. We never send plain text! We always encrypt the traffic by using secure shell communication, so no one can read it.

Because we automate this backup job, it must not need any human interaction. A normal ssh login needs a username and password, but we instead create SSH keys and install them on both systems so that we can carry out secure communication between the two systems.

Creating ssh keys:

$ ssh-keygen -t rsa -b 2048

It creates two files (the private and public keys). If they have already been created, we do not need to create them again.

Copy the public key to the remote host.

$ ssh-copy-id root@remote.frietec.com


Now we can connect (ssh) without the need for a password.

$ ssh root@remote.frietec.com

2) Executing scripts on the remote system

$ ssh root@remote.frietec.com 'bash -s' < SCRIPT_TO_RUN

The script runs on the remote system. We can pass parameters after the script if we need to.

Until now, we executed a local script on the remote system. In our case, the script creates backup files, and these files are on the remote system.
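As an illustration of what SCRIPT_TO_RUN could contain, here is a hypothetical minimal backup script. The post stores backups in /opt/backups on the remote host, but /tmp/backups is used here so the sketch runs without root, and /etc stands in for whatever data the real job would back up:

```shell
#!/bin/sh
# hypothetical backup script: archive a directory into a dated .bkp file
BACKUP_DIR=/tmp/backups
STAMP=$(date +%Y-%m-%d)

mkdir -p "$BACKUP_DIR"
# permission errors on unreadable files are ignored for this sketch
tar -czf "$BACKUP_DIR/etc-$STAMP.bkp" -C / etc 2>/dev/null || true
echo "created $BACKUP_DIR/etc-$STAMP.bkp"
```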



3) Transfer backup files to your local system

This is important because we need to securely transfer the files to our local system. So, we use the secure copy tool (scp) to securely download the files.

$ scp root@remote.frietec.com:/opt/backups/*.bkp /backups/

That's it. We have to implement the combined script as a cronjob; then we get periodic remote backups on our local system. We can implement a notification system also.
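Such a cron entry could look like this (the schedule and the script path /opt/scripts/remote_backup.sh are assumptions), for example in /etc/cron.d/remote-backup:

```
# fetch remote backups every night at 02:30
30 2 * * * root /opt/scripts/remote_backup.sh
```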

Notes

1) There are several possibilities to carry out this task. Here you have more control over your work, but there is another promising tool called rsync which can replace the scp tool we have used here.

2) Since we have implemented auto-login to the remote system, you have to be sure that the local system is secure enough; otherwise, anybody can reach the remote system.

3) If you have Jenkins, then it is quite easy to define the task and run it periodically. And we can do more using Jenkins.