Wednesday, October 25, 2017

hashCode and equals: why are they needed in Java?

All objects in Java inherit from the Object class, which provides two methods: hashCode() and equals(Object obj). We normally do not think about why these methods exist and what significance they have for derived classes.

To begin with, suppose we have Java objects and want to compare whether they are the same or not. Primitive values can be compared for equality directly with the == operator. If we use the same operator to check the equality of Java objects, however, we get into trouble, because == compares the reference values of the objects: two objects are equal only if they refer to the same memory address. To clarify this, let's take an example:

int a=10;
int b=10;

a==b  //true

Integer a=new Integer(10);
Integer b=new Integer(10);

a==b //false

The first example gives true because we are comparing primitive values and they are the same. In the second example, a and b are objects, so their references are compared: although the values are the same, the objects are NOT equal!


In many scenarios, we need a mechanism for evaluating objects based on their properties. In the above example, comparing a and b should report them as equal because they represent the same value, although they are different objects. For that we use the equals() method of Object.


public boolean equals(Object object)

So, we override this method in our class so that objects can be evaluated for equality based on their contents. Two objects are considered equal if this method returns true.
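For instance, the Integer class from the earlier snippet already overrides equals() to compare the wrapped values rather than the references:

Integer a = new Integer(10);
Integer b = new Integer(10);

a == b        // false: different references
a.equals(b)   // true: Integer overrides equals() to compare the wrapped values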

public native int hashCode()

When we override the equals method, we MUST also override the hashCode method. Why?

[Because otherwise the general contract of Object.hashCode is violated, which can have unexpected repercussions when your class is used with hash-based collections such as HashMap or HashSet.]

So, the rule is: if equals() returns true for two objects, their hashCode values must ALWAYS be equal. The other way around is not required; that is, if equals() returns false, the two objects may still have the same hashCode.
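To see why this contract matters, here is a minimal sketch (the Point class is hypothetical, not from this article) of what typically goes wrong in a HashSet when equals() is overridden but hashCode() is not:

import java.util.HashSet;
import java.util.Set;

class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Point)) {
            return false;
        }
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }
    // hashCode() is NOT overridden, so two equal points usually get different hash codes
}

public class HashCodeContractDemo {
    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        points.add(new Point(1, 2));
        // Usually prints false: the lookup searches in a different bucket
        System.out.println(points.contains(new Point(1, 2)));
    }
}

Overriding hashCode() consistently with equals(), as sketched in the next two sections, fixes this.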


How to create hash code?
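A common approach, sketched here for a hypothetical Person class with the fields name and age, is to combine exactly the fields that equals() compares, for example with java.util.Objects.hash (available since Java 7):

import java.util.Objects;

public class Person {
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public int hashCode() {
        // Combine the same fields that equals() compares
        return Objects.hash(name, age);
    }
}

A manual variant of the same idea is to start from a constant (e.g. 17) and, for each field, multiply the running result by 31 and add the field's hash code.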


How to implement equals method?
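For the same hypothetical Person class, a typical equals() implementation first checks the reference, then the type, and then the individual fields, and keeps hashCode() consistent with it:

import java.util.Objects;

public class Person {
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {              // same reference, trivially equal
            return true;
        }
        if (!(obj instanceof Person)) { // also rejects null
            return false;
        }
        Person other = (Person) obj;
        return age == other.age && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // Consistent with equals(), as required by the contract
        return Objects.hash(name, age);
    }
}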



Wednesday, September 27, 2017

Enabling SSL in Tomcat


Installing and starting a Tomcat server is really straightforward, but running it securely needs some extra configuration. In this article I am going to describe the steps needed to enable encryption in the Tomcat server so that the traffic between client and server is encrypted and nobody in between can read the information.

Creation of KeyStore

The first and foremost requirement to implement SSL is the creation of a keystore file. The documentation says only three formats are supported (JKS, PKCS11 or PKCS12), and I am gonna use the JKS format because it is the Java standard keystore and can be created using the keytool command that comes with the Java installation.

So, let's create the keystore. Just execute the following command; it creates a JKS file with a private key and a self-signed certificate.

keytool -genkey -alias tomcat -keyalg RSA -keystore tomcat.jks -storepass ***** -validity 3650

Please note the keystore password used during creation; it is needed in the Tomcat configuration. Also, tomcat.jks should be placed in a well-secured location on the server.


Configuration

After creating the keystore file, the next step is to copy it to the server. It is common practice to copy it into the conf folder of the Tomcat installation directory.

So, we go to the Tomcat installation directory. In the conf folder there, we open the server.xml file, where we can enable SSL and provide the keystore file location along with the keystore password.


So, basically, we add the following Connector element inside the Service element:
<Service name="Catalina">
.
.
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
        maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="conf/tomcat.jks" keystorePass="****" /> 
.
.
.
</Service>


Limiting SSL Usage

Obviously, we want to disable plaintext communication after enabling SSL. What we have configured so far supports both encrypted and plain communication, so now we turn off the plain variant.

Now, we add the following lines near the end of the web.xml file, inside the <web-app> element.

   
<security-constraint>
    <web-resource-collection>
        <web-resource-name>secure-tomcat-app</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

   


Restart Tomcat server and now the connection to the tomcat server is always secure. 



Monday, September 18, 2017

Ubuntu Basics

In this article, I am going to write down some basic Ubuntu operations that we need day to day. I am gonna describe everything as a list, and this list keeps getting updated.

Font Installation

One of the problems we frequently face is that required fonts are not installed and we have to install them ourselves. There are several ways to install fonts in an Ubuntu system. First of all, we have to know where the fonts are located and what they are used for.

Let's start with user-defined fonts, i.e. every user keeps their own fonts in their home directory:

~/.fonts

The fonts in this directory are only for the specific user and not available globally.

If we have to make fonts globally available, then we have to copy them into other locations. The valid locations are defined in

/etc/fonts/fonts.conf 

The default directories are
/usr/share/fonts, 
/usr/local/share/fonts 
and
~/.fonts

So, we copy the fonts directly into /usr/share/fonts or /usr/local/share/fonts to make them available for all users. Of course, you have to be an administrator to copy fonts into the above-mentioned directories.

Here is the sample fonts to test. 

Sample Fonts


After copying the fonts into the corresponding directory, we have to run the following command:

sudo fc-cache -fv

If the system is rebooted, we do not need to execute the above command; the fonts are loaded automatically.

After installation is complete, we check if the fonts have been successfully installed.

sudo fc-list |grep verdana

If the font is successfully installed, then it shows the newly installed font.

Note: we need to restart the application that uses the font before it picks up the newly installed fonts.

Localization

This is one of the common problems I have faced. Basically, when using German letters with umlauts, they are not displayed properly because of Unicode-related problems.

Here, I will try to explain as simply as possible how to work around that:

  • Check the current locale settings:
      $ locale

  • See the available locales
      $ locale -a  

  • If the required locale is not in the list, it should be generated (installed):
     $ locale-gen fr_FR.UTF-8


  • To regenerate all enabled locales:
     $ locale-gen  
  • The default settings are stored in the /etc/default/locale file. 
      We can directly change the contents of this file, or we can use the update-locale command.
      $ update-locale LANG=de_DE.UTF-8

Note: the supported locales are located in the file /usr/share/i18n/SUPPORTED.

Shortcut Method:  

From a terminal, run the following command and select the required locales. That does everything we need!

$ sudo dpkg-reconfigure locales


Yes, it is recommended to restart the system to properly load the locales.

Quickly Test USB Boot

  • Install qemu

            sudo apt install qemu

  • Test ISO
      qemu-system-x86_64 -cdrom filename.iso
  • Test USB
      qemu-system-x86_64 -hda /dev/sdx



Date Time Settings


This section describes how we can set the date and time in an Ubuntu system from the terminal. The automatic date-time update is carried out through an NTP server; configuring time synchronization is a different topic. Here we simply set the date and time manually.

First of all, there are two clocks: 1) System clock, 2) Hardware clock

The date and time set in the hardware clock is what we see in the BIOS, and if the BIOS time is not correct, then the system time can also be incorrect, because when the system boots it takes its time from the hardware clock.

So, if the hardware clock is wrong for any reason, the system time also ends up wrong.

1) See system date and time
$ date
2) Set system date and time
$ sudo date -s '2017-10-04 16:31:32'
3) See hardware clock time
$ sudo hwclock
4) Set hardware clock time from system time
$ sudo hwclock -w
5) Set system time from hardware clock time
$ sudo hwclock -s

In the above commands, -w can be replaced with --systohc and -s can be replaced with --hctosys.

Enable Remote Desktop in Ubuntu Server  

If we install a standalone Ubuntu server and want it to be accessible via remote desktop, we have to do some extra work, since ubuntu-server comes without any desktop environment, i.e. no GUI, only the terminal. That's fine if you are comfortable with the command line. If you still want to make your server reachable via remote desktop, you have to install a desktop environment on the server. The program we need for remote desktop itself is xrdp.

So, we install it using the terminal as follows:

sudo apt update
sudo apt upgrade
sudo apt install xrdp
sudo apt install ubuntu-mate-core ubuntu-mate-desktop 
echo mate-session >~/.xsession
sudo service xrdp restart


Then we are ready to connect using rdesktop from Linux and remote desktop from windows based systems.



Saturday, August 19, 2017

More on Software Testing

In this article, I am going to write more about software testing. Quality control of a software product is carried out through different software testing methodologies. Testing is a phase that should not be neglected, because a defect found at the time of software delivery costs about 10 times more to fix, and one found in the maintenance phase about 20 times more. So, it is recommended to start testing as soon as software development begins.

Software Phases

There are 5 main phases of software development:

1) Documentation: Requirement analysis, design document, test documents
2) Coding/Execution: The development phase
3) Testing: Different testing methodologies
4) Deployment : The software is delivered to the customer
5) Maintenance: After deployment, if any failure is detected.

Software Testing

The software testing can be broadly categorized into two categories:

A. Blackbox Testing: tests the overall functionality of the software without knowing the details of the implementation or design.
B. Whitebox Testing: also considers the implementation details, software design, and database design.


Defect & Failure: if a defect reaches the end user, it is a failure. About 60% of defects originate in the design phase and 40% in the development phase.

Testing Process

The testing process

Unit Testing => Integration Testing => System Testing => Acceptance Testing

Acceptance testing is carried out as alpha testing (performed at the development site) and beta testing (performed by actual customers).


When a bug is found or some module is changed/added, we have to carry out:
1) Confirmation Testing: confirm that the bug/defect is fixed.
2) Regression Testing: test whether all other parts of the software (or module) still work.

More...

We must be also concerned with

1) Defect cascading: a defect can propagate to other modules and affect them too; this is called cascading.
2) Cohabiting Software: when the software is installed on the actual end-user machine, other installed software might use the same shared libraries or resources. In this scenario, testing must also cover the cohabiting software.







Monday, July 31, 2017

Create a Reactjs Application


Upgrade Nodejs

The default version of Nodejs installed on Ubuntu 16.04 is 4.x, and I could not upgrade it to the newer 6.x with a normal apt update. So, we need to add a dedicated apt source for nodejs to upgrade to a newer version. We need a recent version so that create-react-app works properly.

So, to upgrade, we need to create a new sources list file for nodejs.

  1. Create a new file /etc/apt/sources.list.d/nodesource.list with the following contents: deb https://deb.nodesource.com/node_6.x xenial main
    deb-src https://deb.nodesource.com/node_6.x xenial main
  2. Add public key which is needed:
    curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
  3. Update Repository
    sudo apt update
  4. Check
    sudo apt-cache policy nodejs
    You will see which version is gonna be installed
  5. Install
    sudo apt install nodejs   

Alternative 1 to the above-mentioned method is NVM (Node Version Manager). The installation of nvm is carried out as follows:


1) cd /tmp
2) wget https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh
3) chmod +x install.sh
4) sudo bash install.sh
5) Restart terminal
6) Now, you can check the available versions:
   nvm ls-remote
7) Finally, install the version wanted (I prefer LTS one!)
    sudo nvm install v6.11.2

[
In case nvm ls-remote returns an N/A message, you have to export the following two lines as environment variables.

export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

So far so good. If multiple versions are installed, then we have to select one as the default:
(* Lists the installed versions *)
nvm ls
(* Select one as the default *)
nvm use [Version]

]

Alternative 2
Another simple approach is to install npm package manager.
$sudo apt install npm

Then install node
$sudo npm install -g n
$sudo n latest 
or
$sudo n 8.9.0
 
Create Reactjs Project

After the installation of nodejs, we now create a reactjs application. The easiest method is to install the create-react-app package and start with a fresh project.

npm install -g create-react-app
create-react-app my-app
cd my-app
npm start

That's it! Now you can see the running react application at localhost:3000

After the application is ready for deployment, we use the following command to build.

npm run build

This process creates a build directory with all files we need to deploy.

Sunday, June 18, 2017

Make your own git push repository

Git, the most popular source control system, is one of the important concepts every programmer should know nowadays. Early in my career I used Visual SourceSafe, which is now outdated; later I started using Subversion, and currently I am learning Git, which sounds very promising and powerful to me. In this article, I am going to write about configuring your own Git push server so that you do not need a public Git server anymore (you can also keep private projects whose source code you do not want to make publicly visible). Another advantage is automation: if you have a dedicated Git server and a running build service (e.g. Jenkins), you can automate deployment with a build automation tool.

I won't explore everything here; I will only show how you can create a push server, create a new project, and commit and push this project to the newly created push server. Normally, there are two repositories: a local one, located on your own machine, where you commit. Basically, all day-to-day source control operations (commits, diffs, etc.) are carried out against this local repository. So a local Git repository can perform all the tasks a Subversion server does; the difference is that a Subversion server is usually located externally. In addition, Git provides the facility of pushing the project, which means that the work you have done so far is complete and you want other users to get your changes.

So far we have covered the necessary theoretical background. Now, I am gonna describe how we prepare a local Git repository and a remote Git push server.

REMOTE SERVER

1) Prerequisites

The first and foremost requirement is a server that is running and accessible from your development computer. I prefer Ubuntu Server, which makes these tasks easier compared to Windows.

If git is not installed, please install it. 

sudo apt install git-core

2) Create user git

sudo useradd git

Create password for git:

sudo passwd git


[An alternative is to create SSH keys for the users so that they can connect to the git server without even needing passwords]

3) Create folder to store project repositories

Now, I define a folder where I want all git based projects. For that, I have created a folder /opt/git where all project repositories are stored.

su git
mkdir /opt/git


Important: change the owner of /opt/git to git, if this folder was created by other users.

sudo chown -R git:git /opt/git

4) Create repository
su git
cd /opt/git
mkdir project.git
cd project.git
git init --bare


So far, we have created an empty repository where we can push our changes from our development computers.

Please note that the location of this git repository is represented by:

git@server:/opt/git/project.git

So other users can simply clone this project using

git clone git@server:/opt/git/project.git



LOCAL REPOSITORY

For any git project, git prepares a local repository where it stores the project's commits. We can compare changes and restore previous versions, all without needing a remote server. A developer can manage all changes locally without other developers even noticing. The remote push server mentioned above is needed to push your changes once everything on your side is ready and other developers should receive them.

This section focuses on how can we prepare git client so that we can go on developing.

1) Simple Method:

This is simple method:

cd /home/krishna/workspace
git clone git@server:/opt/git/project.git

It creates an empty project with the git repository prepared. Now we can make any change in this project (e.g. create a new file), commit it, and then push it to the server we prepared above.

2) Lengthy Method:

This method is a bit lengthy; in case some people are interested, they are free to use it. We create the repository ourselves and add the remote server to push to.

cd /home/krishna/workspace/project
git init
git add .
git commit -m "first commit" .

So far so good, everything is done locally. 

Now we define the remote server

git remote add origin git@server:/opt/git/project.git
git push origin master 


More Tasks:

1) Disable remote shell access for the git user

We have created a new git user, and this user can also log in to a shell on the remote computer, which is undesirable. So, we now restrict the git user to push and pull over SSH only, which is achieved by assigning git-shell as the login shell of the git user.
a) Add git-shell to /etc/shells
cat /etc/shells
If git-shell is not listed there, we have to find its location:
which git-shell
Copy this location to the end of /etc/shells. The file should look something like this:

/bin/sh
/bin/dash
/bin/bash
/bin/rbash
/usr/bin/tmux
/usr/bin/screen
/usr/bin/git-shell

Now we enable only git-shell for git user :

sudo chsh git -s $(which git-shell)

Here chsh means change shell. 

That's it; now it is no longer possible to log in to the remote server over SSH as the git user, only git operations are allowed.

2) Enable password-less login for git users

It is not always convenient to provide a password to authenticate against the remote git server, so we create SSH key pairs for users, and the specified user can then log in without needing a password.


Now lets create a private and a public key for myself.

















Tuesday, June 6, 2017

Users Management in Cassandra

In my previous article, I introduced how we can begin with installation and configuration; this article is more focused on further Cassandra management. I will also share my experiences along the way.

Change Password of Super User

This article mainly focuses on user management. In the previous article I discussed everything up to the network connection, where we just used the command

cqlsh [SERVER-ADDRESS]

Magically it got connected, and I did not have any idea about users. For security, the database comes with users with different roles, and only a defined user has access to the database. It was my fault that I did not notice it at first: Cassandra comes with the default superuser cassandra with password cassandra. The first step is to change this password as soon as possible.

root user: cassandra
default password:cassandra

So, let's first change the password. I logged in without a password and executed the following commands:

cqlsh [SERVER-ADDRESS]
alter user cassandra with password '**********';

I got an error saying "..CassandraRoleManager does not support PASSWORD". Initially that sounded weird, but later I noticed that we have to modify the configuration in the conf/cassandra.yaml file.

Find the line with authenticator, and modify it to

authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

Restart the Cassandra service to activate this configuration; now we need a username and password to connect to the database. And the good news is that we can now alter the password of the default superuser (cassandra).

alter user cassandra with password '*****************';

Create Custom User

We first have to note that there are two types of users in Cassandra: superusers and non-superusers. The difference is clear: a superuser has elevated privileges compared to a non-superuser. Superusers can create new users, delete users, and change the passwords of other users, while normal users can only change their own password.

So, to create a custom user you have to be a superuser. First we login with superuser:

cqlsh [SERVER-ADDRESS] -u cassandra

Provide password for cassandra. Then, after successful login,  we provide the following command:

create user if not exists frietec with password '********' superuser;

That's it! Now we have created a user and defined its role.

Thursday, May 25, 2017

Camera Module Raspberry PI

One of the top uses of the Raspberry Pi is as a motion detection device using the camera module. It is not complicated; it is actually really easy.

I simplify the process into steps to make it easier to understand.

1) Update and upgrade package management system.

sudo apt-get update
sudo apt-get upgrade

2) Activate and enable camera module
 sudo raspi-config

It shows a selection window, where we select Interfaces and then Camera. After confirmation, the camera module is enabled. Now we can capture a still image or a video if the camera is properly connected.


3) Installation of camera



The camera is connected as shown above. The camera interface is located beside the network port; note that the shiny part of the ribbon should always point away from the network port (see the picture above).

raspistill -o image.jpg
raspivid -o video.h264 -t 10000 

The time is given in milliseconds: the first command captures an image, while the second captures a 10-second video.

4) Streaming and Motion Detection

Now we go further and turn our Raspberry Pi into a motion detection device. There is a very nice package called motion which is highly recommended.

sudo apt-get install motion
sudo systemctl status motion.service
sudo systemctl start motion.service 


After the installation completes, we have to modify some settings according to our requirements. We also need to load the video kernel module for the video device to work.

sudo modprobe bcm2835-v4l2

To load the video module automatically at every boot, we put this command in /etc/rc.local (sudo is not needed there).

We also need to enable the motion daemon in the /etc/default/motion file:

start_motion_daemon=yes

Now, we carry out settings in /etc/motion/motion.conf. The file is really big, but we have to focus on a few things:

width 1200
height 800
framerate 20
target_dir /var/lib/motion
picture_filename %Y_%m_%d/%v-%Y%m%d%H%M%S-%q
movie_filename %Y_%m_%d/%v-%Y%m%d%H%M%S
stream_port 8081
stream_localhost off # stream connections to other systems
webcontrol_localhost off

The above-mentioned parameters are quite easy to understand. After changing them, we need to restart motion.


Now we can see the motion-detection pictures stored in the directory /var/lib/motion with the defined directory and file-name structure. We can also access the live stream from the link:
http://IP_ADDRESS_OF_PI:8081

Thats it!

It is interesting, is not it?

Raspberry Pi Settings


Hello guys, today I am gonna share something different from programming, but still interesting. Raspberry Pi is a computer; some would rather say a "mini-computer". It is a tiny but really powerful machine that consumes little power and few resources, so its best application is to run 24/7 as an embedded device. I have looked around the online markets, and the latest version, the Pi 3 (2015), costs around 38.00 €. This is powerful enough to carry out general tasks. It is suggested to also buy a memory card and a case for a better setup, plus other accessories depending on your requirements.

1) Operating System Installation

I have installed Raspbian Jessie Lite, which is a minimal operating system and needs less space to install. I prefer the minimal version because we can later install whatever we want, rather than having unwanted applications preinstalled. The downloads are located here:

https://www.raspberrypi.org/downloads/raspbian/

Extracting the zip file gives an image which we have to write to our memory card (the Raspberry Pi 3 needs a micro SD card).

Raspberry Pi recommends the Etcher tool for writing images to the SD card; it is a really nice tool and is also available for Linux.

https://etcher.io/

After installation, we run Etcher, select the image, select the drive, and press the Flash button. Just three steps; then wait a couple of minutes until flashing finishes.



2) Configuration

After the image writing completes, we are ready to run Raspbian on our Raspberry Pi. Please note that the installed operating system has no GUI, so we need to access it remotely over a secure connection (SSH). Here we have two challenges: first, recent Raspbian images have SSH disabled by default; second, for a WiFi connection we also need to set the WiFi credentials. So, let's begin:

a) Set WIFI

Via a wired network connection, everything is automatic and we need to do nothing. There are nice tools available (I used the one from http://angryip.org/) to find out the IP address of the device. The default hostname is raspberrypi.

For WLAN, we have to provide the WiFi information. We create a file wpa_supplicant.conf (placed in the boot partition of the SD card) with the following information:


network={
    ssid="YOUR_SSID"
    psk="YOUR_PASSWORD"
    key_mgmt=WPA-PSK
}

b) Enable SSH

There is a boot partition in the written image, and we just create an empty file named "ssh" (WITHOUT extension) in this folder. At boot time, this enables SSH so that the user pi can access the device via Secure Shell.

Default user name: pi
Default password: raspberry

For security reasons, it is advisable to change the password after you logged in.


Some Questions:

1) What is the power supply for Raspberry PI?
The Raspberry Pi is really cool here, because the power supply is easy to find anywhere: a smartphone charging cable fits perfectly. We can use a computer USB port or a separate adapter (like the one that comes with a mobile charger) to provide power. That is enough!

2) Is there any chances of overheating CPU?
The short answer is no, because the Raspberry Pi protects itself from overheating and burning out. It automatically detects overheating and throttles the CPU frequency to reduce the temperature. So, basically, the temperature of the Pi should not go above 65 degrees, and you need not worry about temperatures somewhat above normal. The normal temperature is around 40-45 degrees. My Pi is always around 50 degrees in normal operation and reaches about 60 degrees when I run the camera streaming application (motion). Still, some people prefer a cooling mechanism (heat sink or fan). A heat sink is okay, though not strictly necessary, but a fan makes the simple Raspberry Pi complicated and even a bit noisy.


Disclaimer: I have tested with a Raspberry Pi 3 version 1.3 (2015); the process may not be exactly the same for other Pi devices.

Saturday, May 13, 2017

Unit Testing in Java

Why Testing? 

Testing is necessary to deliver a good-quality product. Nobody is perfect, and the development phase of software is prone to errors. If we find bugs at an early stage, we reduce the maintenance cost. So we need proper testing while we program, and the developer can carry out tests himself to check whether his functions work as desired. Manually testing all possibilities is tedious, and the developer tends to test only best-case scenarios. But the product is used by users who know nothing about its internals, so a third-party user is much more likely to discover bugs. We do not get that feedback until we deliver the product, and the customer does not like buggy software either, so the final delivered product should be as bug-free as possible.

JUnit Testing

In this article, I am sharing the steps for using the JUnit framework to test Java classes. To use JUnit, we need to include the library file in the classpath. It can be downloaded from www.junit.org and included as a referenced library.

How?

1) The unit test files should normally be kept separate. Why? These files are needed only by the developer, not the user, so it is good practice not to include them in the production build. Thus we create a separate folder to store test files.

2) Packaging makes everything clear. So, we use the same package for the test classes and name them with the suffix Test.

Suppose, we create a class

package com.krishna.math;

public class MathFunction {
    private int A;
    private int B;

    public MathFunction(int A, int B) {
        this.A = A;
        this.B = B;
    }

    public int getSum() {
        return this.A + this.B;
    }

    public int getDiff() {
        return this.A - this.B;
    }
}




Now we create a test class. The above-mentioned class MathFunction resides in the src folder, while we create another folder, test, to store the test files. The default structure of Gradle and Maven projects is:

source: /src/main/java
test: /src/test/java

So, under test folder, we define a class to test the above-mentioned class functions:

package com.krishna.math;

import static org.junit.Assert.*;

import org.junit.Test;

public class MathFunctionTest {
    // Write test methods here
    @Test
    public void testSum() {
        MathFunction mathFunction = new MathFunction(12, 34);
        assertEquals(46, mathFunction.getSum());
    }

    @Test
    public void testDiff() {
        MathFunction mathFunction = new MathFunction(13, 12);
        assertEquals(1, mathFunction.getDiff());
    }
}


So far so good, we can run it directly from eclipse, just right click and run as JUnit. That shows the test result. 

Running Multiple Tests (TestSuite Class )

The above-mentioned method runs one test class at a time. But we are interested in running all test cases as a whole, so the need for a test suite class arises.

We define a test suite class as follows:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({ LinkedListTest.class, MyStackTest.class })
public class AllTests {

}

Afterwards, we run it from the Eclipse IDE and it works like a charm. You see the results with proper coloring too; Eclipse helps you!

Test Automation 
So far, we have created test cases and a test suite which are run by Eclipse. What if you wanna run the tests yourself, get the result, and do some further processing? For that you need a test runner. It is an independent runner, and you have control over the result of the tests. You may create a separate class for the test runner or just add it to the TestSuite class.


I create a separate class as follows:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(AllTests.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        System.out.println(result.wasSuccessful());
    }
}


The examples presented are just sample, the real test cases tend to have many methods. 





Wednesday, April 12, 2017

MySQL Server Installation

1. INTRODUCTION

Installation of MySQL Server on a Linux system is not that difficult. We just need to run a couple of commands and that's it!

2. INSTALLATION
 
Here are the required commands to install and prepare mysql server:

sudo apt-get update
sudo apt-get install mysql-server
sudo mysql_secure_installation 

Check the status of mysql server

sudo systemctl status mysql.service

The above command should show Active: active (running) in its output, which means the server is running.

The administration of mysql is carried out with mysqladmin command which can be run from terminal as follows:

mysqladmin -p -u root version

This will show the version information along with other information. 

3. LOCAL CONNECTION

We can use mysql to connect to the server as follows:

mysql -u root -p -h localhost

After connection, we need some basic commands as follows:

show databases;
use db_name;  
show tables; 
select * from [TABLE_NAME];
exit

We can see all the command by typing \h. 

 4. REMOTE CONNECTION

A local connection is not a problem; we can use the local server directly. But for security reasons, we cannot connect if the server is accessed remotely. We have to change some configuration settings to allow remote connections.

cd /etc/mysql/mysql.conf.d
sudo vi mysqld.cnf

Check the [mysqld] section and set the bind-address to the IP address of the server as follows:
bind-address = 10.8.102.62

After restarting the server, we can now connect remotely.

Note: mysql uses port 3306, so this port should not be blocked from firewall. 

5. REMOTE USER CONFIGURATION 

Yes, the default root user is configured so that it can only connect locally. So, the next task is to configure this user so that it can connect remotely. We do that after making a local connection.

mysql -u root -p -h localhost
 
SELECT host FROM mysql.user WHERE User='root';

It shows localhost or 127.0.0.1 which means this user can not connect remotely. 

CREATE USER 'root'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';

Reload permissions:

FLUSH PRIVILEGES;


Now we can connect remotely as the root user. The % sign means connections are accepted from any external host.



Tuesday, February 28, 2017

Cassandra Walkthrough

INTRODUCTION

Today I am gonna write something about the Cassandra database system. The first question that comes to mind is: why on earth do we need this database while there are already other, more popular databases? That is a fair question as long as our relational database holds a small amount of data. As the company grows, so does the database, and conventional relational databases no longer sufficiently fulfill our requirements for holding such big data. We have to think about alternative databases that overcome this problem, i.e. NoSQL databases that can store and access huge amounts of data efficiently.

WHY CASSANDRA?

Cassandra is a NoSQL database which is used by large companies to store big data. The storage structure of Cassandra is different from that of relational databases and provides faster data access with support for data replication across clusters. So, basically, Cassandra can store huge amounts of data and access it quickly.

INSTALLATION

The installation of cassandra is very simple. We need to download cassandra from here

http://cassandra.apache.org/

in *.tar.gz format. We have to extract it somewhere (e.g. /opt/cassandra) and this is the root directory of cassandra.

So, to directly access the Cassandra-related commands, we have to add /opt/cassandra/bin to the PATH. Just add it to the /etc/environment file and restart the system.


STARTING THE SERVER

The cassandra/bin directory contains all the executables needed. To start the server:

cassandra -f  (starts the server in the foreground, which makes it easy to stop with CTRL+C)
cassandra     (starts the server in the background; to stop it, we have to kill the Cassandra process)

pgrep -f CassandraDaemon  (gets the pid of Cassandra)
kill [pid]  or  pkill -f CassandraDaemon

After server is started we can use the cassandra client to connect. Cassandra comes with a very handy client tool called cqlsh which connects to the cassandra server and we can execute queries to carry out database operations.

nodetool status
(Checks if cassandra is running )

Connection and running commands:
cqlsh localhost
cqlsh localhost -u cassandra -p cassandra 
describe keyspaces
use [KEYSPACE]
describe tables
exit  



NETWORK CONNECTION

By default, Cassandra accepts only local connections. That means we have to make some changes in the conf/cassandra.yaml file to allow connections from the network.
We also have to note that, for connections from the network to work, port 9042 must not be blocked by a firewall.

To carry out the changes, we open the file:

conf/cassandra.yaml

and find the listen_address line and set it to the IP address of the server, e.g.

listen_address: 10.8.102.62

Again, we change the rpc_address also,

rpc_address: 0.0.0.0

and finally, we change the broadcast address 

broadcast_rpc_address: 10.8.102.255

If rpc_address is defined as a fixed address, then we can leave broadcast_rpc_address blank or commented. 

And, finally, we add seed provider also, here we have to add the ip-address as a seed provider. 

Just find the seed_provider section and go to its parameters; one seed is already defined there, and you have to add the IP address of the Cassandra server.

- seeds: "127.0.0.1,10.8.102.62"

We save this configuration and restart server. Now, we can connect to the server from another computer in the network:


cqlsh 10.8.102.62