
April 3, 2018

Using Shell and Command modules in Ansible

by 4hathacker  |  in Redhat Enterprise Linux at  8:06 AM
Hi folks...

Welcome to MyAnsibleQuest !!!



After a long break, I am writing on my blog again, and this post continues the series of Ansible posts. In this post, we will look at the usage of the shell and command modules with simple examples, particularly to find out the differences between them.

Before starting, I would like to share some information about the environment setup for this post. 

1. I have an inventory file at /etc/ansible/hosts which consists of 3 servers (node218 and node227 are in the webservers group, while node222 is in the dbservers group).



2. I am using PyCharm as a code editor for writing the YAML files, and I run them from PyCharm's terminal.

3. Ansible 2.5 was released at the beginning of this year. We will use it inside a Python 2.7 virtual environment named Ansible_Shell_Command_Script.



So, let's start with the shell module. First, let us check which shell is present on the RHEL7.2 OS. To check this, type "echo $0", "file -h /bin/bash", or "file -h /bin/sh" in the shell.



For me it came out as '/bin/bash'. It may be different if you are using some other OS.

Now, the shell module, much like the command module, accepts a command name followed by a list of space-separated arguments. The key difference is that whatever command we pass is run through a shell on the remote node. The shell defaults to /bin/sh and can be changed with the 'executable' argument, e.g. 'executable: /bin/bash'.

Case 1: We will run a simple cat command on all the log files present in the /tmp directory.

Playbook:

Output:
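The playbook and its output were shown as screenshots in the original post. As a rough sketch (the /tmp/*.log path and the webservers group are assumptions on my part), the playbook could look like this:

---
# case1_shell_vs_command.yml (hypothetical file name)
- hosts: webservers
  tasks:
    - name: cat all log files using the shell module (the wildcard is expanded by the shell)
      shell: cat /tmp/*.log
      register: shell_result

    - name: try the same with the command module (the * is passed literally, so this fails)
      command: cat /tmp/*.log
      register: command_result
      ignore_errors: yes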


Case 2: We will print some environment variables like $HOME and $JAVA_HOME into a text file.

Playbook:

Output:
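Again, the playbook and output were screenshots. A minimal sketch, assuming the result is written to a file in the remote user's home directory (the file names are mine):

---
# case2_shell_vs_command.yml (hypothetical file name)
- hosts: webservers
  tasks:
    - name: write environment variables using the shell module (variables and redirection work)
      shell: echo "$HOME $JAVA_HOME" > shell_env.txt

    - name: try the same with the command module (no shell, so nothing is expanded or redirected)
      command: echo "$HOME $JAVA_HOME" > command_env.txt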


Conclusion: 

1. The command module fails to do this because the * wildcard is not expanded; it likewise does not work with operators like “<”, “>”, “|”, ”;” and “&”.
2. The command module also remains unaware of environment variables, yet there is no error and the playbook seems to run well. If you look at the output, the state is reported as changed for all the tasks, even though in the second task the command module did nothing useful.

This can be verified by looking for the .txt files on both node218 and node227.




So, it's important to use the command and shell modules carefully. The way we can access environment variables is shown below. In my view, it is often better to check the Ansible docs for a purpose-built module to perform a task, rather than relying on the command or shell module.

To access environment variables we can use either gather_facts (for the remote node) or the env lookup (on the control node).
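As a rough illustration (the HOME variable is just an example), both approaches could look like this:

- hosts: webservers
  gather_facts: yes
  tasks:
    - name: read HOME on the remote node from the gathered facts
      debug:
        msg: "{{ ansible_env.HOME }}"

    - name: read HOME on the control node using the env lookup
      debug:
        msg: "{{ lookup('env', 'HOME') }}"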



That's all for this post.

January 21, 2018

Ansible Vault - Let's encrypt sensitive data during automation

by 4hathacker  |  in Redhat Enterprise Linux at  11:17 AM
Hello Everyone !

This is MyAnsibleQuest!!!

In the previous posts, I discussed a lot of information and the practical usage of Ansible automation and its workflow, using simple examples to explain concepts in Ansible. While working on the automated MySQL server installation in one of the previous posts, I mentioned the database password and other datacenter vars in the "/etc/ansible/hosts" file.



I would like to make it clear that for experiments in your lab/test environment, this is not a critical issue. But when managing a large cluster involving many different departments, hard-coded passwords in a file are bad practice. It is dangerous to put secret passwords and critical information in plain files. One solution is to use good encryption to hide the information so that no other person can read it without your permission. This extra layer of security can be added to our Ansible playbooks using Ansible-Vault. Ansible-Vault is a command line tool used to encrypt sensitive content; during automation it intelligently decrypts that content using a vault password provided by the user.

In this post, I will cover some basic usage of Ansible-Vault commands by creating a playbook that fetches the key content of an AWS S3 bucket. It also demonstrates Ansible roles and the file structure for Ansible automation.

Scenario:

We have access to an AWS account. Being an S3 admin, I would like to fetch the bucket key content using bucket names that will be provided by some other team in my company. I will write a small Ansible playbook for this.

A brief introduction to AWS S3:

Amazon Web Services is one of the most popular on-demand cloud providers, and S3 stands for Simple Storage Service, the AWS service for object storage. The key content we would like to access is nothing but the files inside a bucket. I have already installed "awscli" and configured it with the "aws configure" command. This is a mandatory step in order to access S3 content on the AWS Cloud.

1. The file structure for our Ansible playbook lives in a directory named vault_example. Within this structure, I have defined a main .yml file, my.yml. Roles define the distribution of control so that tasks remain easily manageable. I have one role, s3_admin, which has a task to fetch the data of a particular bucket. The vars folder contains all the variables required to complete the task; in it, aws_creds.yml holds my aws_access_key_id and aws_secret_access_key along with the bucket name.
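Reconstructed from the description above (the original screenshot showed the actual tree), the layout is roughly:

vault_example/
    my.yml
    vault-password.txt
    roles/
        s3_admin/
            tasks/
                main.yml
            vars/
                aws_creds.yml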


Note: Although I have mentioned the AWS credentials in aws_creds.yml, the connection to the S3 service here relies on "~/.aws.cfg", which is generated automatically by running the "aws configure" command. The credentials in the vars file may be required for accessing EC2 and other services.

2. Let's have a look at the main.yml file in the tasks folder. In main.yml, I have included the aws_creds file and used the bucket_name variable from aws_creds.yml to list the bucket keys.
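The actual file was shown as a screenshot; a minimal sketch of roles/s3_admin/tasks/main.yml might look like the following (the aws_s3 module usage and the registered variable names are my assumptions):

---
# roles/s3_admin/tasks/main.yml
- name: load the vault-encrypted AWS credentials and bucket name
  include_vars: aws_creds.yml

- name: list the keys of the given bucket
  aws_s3:
    bucket: "{{ bucket_name }}"
    mode: list
  register: bucket_contents

- name: show the bucket keys
  debug:
    var: bucket_contents.s3_keys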


3. Now we come to the main part: using Ansible-Vault to encrypt aws_creds.yml. For that, create a vault-password.txt file and put some random password of your choice in it. This password will be used for encryption and decryption of our aws_creds.yml file. Use the "ansible-vault encrypt" command with the location of the file to be encrypted and the vault password file passed via the "--vault-password-file" option.
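For reference, the commands would look something like this (the paths and the sample password are mine):

echo 'SomeRandomVaultPassword' > vault-password.txt
ansible-vault encrypt roles/s3_admin/vars/aws_creds.yml --vault-password-file vault-password.txt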


4. Check whether the file has been encrypted successfully. The header of the file shows the encryption standard used, e.g. AES-256.
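For example, the first line of the file should now show the vault header instead of plain YAML:

head -1 roles/s3_admin/vars/aws_creds.yml
$ANSIBLE_VAULT;1.1;AES256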




5. Let's test our my.yml file, which contains only the role entry:

---
# file: my.yml
- hosts: localhost

  roles:
    - { role: s3_admin }

I ran the usual command for playing an Ansible playbook.
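That is, simply:

ansible-playbook my.yml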



Oops!!! I got an error. It is looking for a vault secret to decrypt the file. Let me try this again, this time with our vault-password.txt file.
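This time the command points at the vault password file:

ansible-playbook my.yml --vault-password-file vault-password.txt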


Bingo!!! Now both the encryption and the playbook are working fine. Let us look at some other things we can do with Ansible-Vault.

6. Suppose I want to change my vault password. We can do this with the "ansible-vault rekey" command.
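A sketch of the rekey invocation (the prompt wording may differ slightly between Ansible versions):

ansible-vault rekey roles/s3_admin/vars/aws_creds.yml --vault-password-file vault-password.txt
New Vault password:
Confirm New Vault password:
Rekey successful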



We can see that it asks for the New Vault Password twice to confirm. If the passwords do not match, it shows an error and keeps the previous password. If the passwords match, it shows a message confirming a successful rekey.

7. To run my.yml with the new password, we have to enter it manually, because we haven't saved it in any password file.
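For example:

ansible-playbook my.yml --ask-vault-pass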


With the "--ask-vault-pass" option, it prompts for a vault password. If entered correctly, we can see the bucket keys "hdfs-site.xml" and "logo.png".

8. Finally, we will look at how to decrypt our aws_creds.yml file with the "ansible-vault decrypt" command.
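A minimal example; with no password option, ansible-vault simply prompts for the vault password:

ansible-vault decrypt roles/s3_admin/vars/aws_creds.yml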


This was a short practical introduction to Ansible-Vault on RHEL7 with Ansible v2.4 installed. There are some important best practices I would like to mention:

1. Vault variable names should start with "vault_". This helps to easily differentiate vault variables from normal variables.

2. Do not put all variables under vault encryption, otherwise it will be difficult to review them when errors occur.

3. Ansible-Vault should only be used for encrypting sensitive information. Encrypting lots of .yml files unnecessarily will only create more problems.

4. Following a proper directory structure for Ansible variables, vaults, and main tasks within properly assigned roles makes the project easier to understand and saves time.


January 6, 2018

Configuring MongoDb High Availability Replication Cluster in RHEL7

by 4hathacker  |  in Redhat Enterprise Linux at  6:58 PM
Hi everyone...

This is a slightly off-beat post for MyAnsibleQuest.

MongoDB is an open-source document-based database that provides high performance, high availability, and automatic scaling. We all know the power of NoSQL, where a rigid schema is given less importance than a high rate of transactions and reliability, and MongoDB is one of its kind. Its document-like data structure is based on key-value pairs, which makes it easy to understand and elegant. Replication in MongoDB provides high availability and data redundancy, which simply means that we do not have to rely on a single Mongo instance/server in a large cluster for reading and writing data.



In this post, we will go through the steps for installing MongoDB 3.4 on RHEL7 and then configure a MongoDB HA cluster with 4 Mongo instances. Let's start with installing MongoDB 3.4 on RHEL7.

Step 1 - Add the Mongo repository: After checking the RHEL7/CentOS repositories, I found Mongo version 2.6, but in this post I will use Mongo 3.4. To install Mongo 3.4, create a repo file on your machine as described below.

vi  /etc/yum.repos.d/mongo.repo

In this repo file, add the following lines:

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

Step 2 - Install MongoDB: To install MongoDB, run the usual yum install command:

yum -y install mongodb-org

This will install the following, 
mongodb-org-server – The server daemon (mongod) with init scripts and configurations.
mongodb-org-mongos – The MongoDB Shard daemon
mongodb-org-shell – The MongoDB shell, a command line interface.
mongodb-org-tools – Contains MongoDB tools for import, export, restore, dump, and other functions.
Step 3 - Check whether the installation is fine by running the version command for mongod (the server daemon) as well as for mongo (the client).
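For example:

mongod --version
mongo --version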

Step 4 - Create the admin and siteRootAdmin users with appropriate passwords, and create a mongo_ha database with a mongo_ha_admin user.
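The actual commands were shown as a screenshot; roughly, the user creation in the mongo shell looks like this (the user names follow the post, while the passwords and roles are my own placeholders):

mongo
> use admin
> db.createUser({user: "admin", pwd: "AdminPass123", roles: [{role: "userAdminAnyDatabase", db: "admin"}]})
> db.createUser({user: "siteRootAdmin", pwd: "RootPass123", roles: [{role: "root", db: "admin"}]})
> use mongo_ha
> db.createUser({user: "mongo_ha_admin", pwd: "HaPass123", roles: [{role: "dbOwner", db: "mongo_ha"}]})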


Since we are building a Mongo HA cluster, we need the first three steps to be done on all the Mongo instances/servers. But before that, it's time to learn about replication in MongoDB. I have a Mongo cluster diagram which makes it easy to understand.
MongoDB replication is based on the replicaSet configuration of the cluster. The replicaSet includes several members, each with a definite role to play in the HA cluster.
According to the structure, there are four instances/servers of MongoDB:
1. Primary instance (node218): The primary instance is the default access point for transactions. It is the only member that can accept write operations. The primary's operation log (oplog) is then replicated to the secondaries' datasets.
2. Secondary instances (node227 and node228): There can be multiple secondary instances in a cluster. They replay the changes from the primary's oplog. A secondary instance becomes primary if the primary crashes or appears to be unavailable; this decision is based on a failure of communication between the primary and the secondaries for more than 10-30 seconds.
3. Arbiter (node229): The arbiter only matters when a failover occurs and a new primary has to be elected. With an even number of data-bearing members, it plays the deciding role in the election of the new primary. No dedicated hardware is required for the arbiter; although it is part of the replicaSet, no data is ever replicated to it.
4. The blue arrows in the structure represent the replicaSet instances involved in data replication.
5. The black arrows represent the continuous communication (heartbeat) taking place between all the members while the cluster is running.
After understanding this, let's move on and install MongoDB on all four instances/servers following steps 1 to 3, and then come back to our primary server.

Step 5 - Create a keyFile for authentication among the Mongo instances in the cluster. This can easily be done with OpenSSL, as described in the images.


I created a long key by base64-encoding a random value, gave it the necessary permissions, and then securely copied it to all the members of the cluster.
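A sketch of the key creation and distribution (the key path matches the mongod.conf shown below, and the host names follow the post):

openssl rand -base64 741 > /etc/mongo-key
chmod 600 /etc/mongo-key
chown mongod:mongod /etc/mongo-key
scp /etc/mongo-key node227:/etc/
scp /etc/mongo-key node228:/etc/
scp /etc/mongo-key node229:/etc/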

Step 6 - Create a directory for the dbpath on every Mongo instance.

mkdir /etc/mongodata

Step 7 - Now edit the configuration file at /etc/mongod.conf to enable replication, set up authentication, and provide the dbpath. Note that we still have to provide the same information when starting mongod.

vim /etc/mongod.conf

And the file will look like this:
 
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  quiet: true

# Where and how to store data.
storage:
  dbPath: /etc/mongodata
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
  bindIp: [127.0.0.1, 10.0.0.227, 10.0.0.228, 10.0.0.229]  # interfaces to listen on; comment out to listen on all interfaces.


security:
  keyFile: /etc/mongo-key

#operationProfiling:

replication:
  replSetName: mongo-HA

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Step 8 - Start mongod on each Mongo instance of the cluster with the command:

mongod --dbpath /etc/mongodata --port 27017 --replSet mongo-HA

Step 9 - Start mongo on the primary instance (node218) and authenticate as siteRootAdmin with its password. Use the command rs.initiate() to add the first member of the HA cluster. If you run rs.conf(), you can see in the members list that this host is given "_id" 0.
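Roughly, on node218 (the password placeholder follows the earlier user-creation sketch):

mongo
> use admin
> db.auth("siteRootAdmin", "RootPass123")
> rs.initiate()
> rs.conf()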


Step 10 - Add the other members to the cluster and check the status:

rs.add("10.0.0.227:27017");
rs.add("10.0.0.228:27017");
rs.addArb("10.0.0.229:27017");
rs.status()

To confirm that replication is working fine, create an s3 database on the primary instance; you will see it appear on the secondary instances automatically.

Important Notes:
1. In my case, I have DNS configured properly. If you do not have DNS, your member servers cannot resolve each other; make proper entries for all the member servers in the /etc/hosts file of each member.
2. If an error occurs in the replicaSet commands, check whether mongod was started with the --replSet option and whether the configuration file entries are present.
3. If you are getting warning messages like "/sys/kernel/mm/transparent_hugepage/defrag is 'always'", visit this link.

Finally, I would like to say that this setup can easily be achieved with the help of Ansible or other configuration management tools. I will definitely cover the same with Ansible in one of the upcoming blog posts.

March 23, 2016

Practice questions for basic linux

by 4hathacker  |  in System Administration at  12:15 AM

Your IP is 192.168.x.y; you may use SSH.
All passwords should be 4hathacker.


Q.1 -  Make a swap of 512 MiB; make sure it is permanent.

Q.2 -  Make an LVM partition following the instructions below.
   
         - the volume group name should be 4hathackervg
         - the LVM name should be conglv1, size 1 GiB, formatted with the ext3 file system.
         - mount it at /mnt/4hathacker; make sure it is permanent.

Q.3 -  Make a thin-provisioned LVM partition following the instructions below.
   
         - the volume group name should be 4hathackervg
         - the size of the volume group should be 1 GiB.
         - the LVM name should be 4hathackerthinlv1, size 3 GiB
         - formatted with the ext4 file system.
         - mount it at /mnt/hathacker; make sure it is permanent.

Q.4 - Add a user hacker whose default group is manager.

Q.5 - Find and copy all the files generated for user hacker into /root/yadav.txt
        - it should be in proper order.

Q.6 -  Find all the lines containing keystone in /root/Desktop/uu.txt
          - and put these lines in proper order in /root/Desktop/yadav.txt.

Q.7  - Follow the instructions:
        - make a directory /project with group named company
        - use the sticky bit on it.
        - files or directories under /project should have the group company.
        - make three users named harry, nitin, and yadav in this group.
        - harry should have only read & write permission.
        - the group company should have full permission on it.
        - yadav should have read-only permission.
Q.8 -  Disable root access to your machine via SSH but keep it enabled through telnet. Make it permanent.
Q.9 -  Link two text files for security reasons. Data written to one text
          file should automatically be copied to the second text file.
Q.10 - Reset your Linux password at boot time, assuming you don't know the
          password. Make it 4hathacker.
Q.12 - Find out how many IPs are logged in to your machine.

September 13, 2015

How to make partitions using "fdisk" command in linux

by 4hathacker  |  in System Administration at  3:33 AM
Step 1 : Check the existing partitions with this command:

[root@4hathacker Desktop]# fdisk -cul

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xed10b684

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    31789055    15893504    c  W95 FAT32 (LBA)
/dev/sda2   *    31791104    32507903      358400    7  HPFS/NTFS
/dev/sda3        32507904   242228069   104860083    7  HPFS/NTFS
/dev/sda4       242228073   976766975   367269451+   f  W95 Ext'd (LBA)
Partition 4 does not start on physical sector boundary.
/dev/sda5       242228136   705349889   231560877    7  HPFS/NTFS
/dev/sda6       705353728   706377727      512000   83  Linux
/dev/sda7       706379776   962379775   128000000   83  Linux
/dev/sda8       962381824   968689663     3153920   82  Linux swap / Solaris
Step 2 : Note the name of your device, e.g. /dev/sda, /dev/vda, or /dev/sdb.
Step 3 : See other information with the "df" command; it will show where your devices are mounted.

[root@4hathacker Desktop]# df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda7      ext4   121G   74G   41G  65% /
tmpfs          tmpfs  1.9G   84K  1.9G   1% /dev/shm
/dev/sda6      ext4   477M   35M  417M   8% /boot
Step 4 : My device name is /dev/sda, so I will use it.
 Note : when you run the "fdisk /dev/sda" command, you enter fdisk's own command line.
 
[root@4hathacker Desktop]# fdisk /dev/sda

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help):

Note : these are the fdisk commands which we will use.

Command (m for help): help
h: unknown command
Command action
   a   toggle a bootable flag
   b   edit bsd disk label
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disk label
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):

Step 5 : Now I am making a partition; use these commands inside fdisk:
1)       p   print the partition table
2)       n   add a new partition

See the added partition by using "p". If it looks right, save and exit with "w"; otherwise use "q" to exit without saving. These two commands close fdisk:
1)       q   quit without saving changes
2)       w   write table to disk and exit (quit with saving)

 Now we are done with making partitions.

***Note*** : These partitions will be usable after a reboot. If you want to use them immediately (live), run this command:
for RHEL6 use "partx -a /dev/sda"
for RHEL7 use "partprobe"
Then check with "cat /proc/partitions".


To use this partition: format the partition and then mount it, either temporarily or permanently.

[root@4hathacker Desktop]# mkfs.ext4 /dev/sda8

***Note*** : To make it permanent, make an entry in "/etc/fstab".
[root@4hathacker Desktop]# mkdir /media/xyz
[root@4hathacker Desktop]# gedit /etc/fstab

Add this line to /etc/fstab (here /dev/sda8 is my partition name and /media/xyz is my mount point):

/dev/sda8 /media/xyz  ext4  defaults  0  0 

Then always run "mount -a" to check; if there is no error, everything is right.

Troubleshooting

1) In the fdisk command line, use "d" to delete a partition.
2) In the fdisk command line, use "l" to list known partition types.

*****NOTE*****
For any further queries, please comment below.

September 2, 2015

Finding files and directories according to linux permissions

by 4hathacker  |  in Server Hardening at  1:54 AM
Here are some tricks to find files and directories according to the Linux DAC permissions.

1. Find all files and directories with 0777 permissions

[root@4hathacker mail]# find  /  -perm  0777 -print

2. Find only files with 0777 permissions

[root@4hathacker mail]# find  /  -type f -perm  0777 -print

Note: for directories, use -type d
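For example, to list only directories with 0777 permissions:

[root@4hathacker mail]# find  /  -type d -perm  0777 -print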

3. Files and directories without 0777 permissions

[root@4hathacker mail]# find  /   ! -perm  0777 

4. Files and directories with any special permission bit set (setuid, setgid, or sticky)

[root@4hathacker mail]# find  /   -perm  /7000

Importance of /dev in Linux / RHEL7

by 4hathacker  |  in System Administration at  1:21 AM
As you all know, Linux treats everything as a file, so today we will discuss device management in Linux-based operating systems.

Modern Linux manages devices dynamically, and all distributions are capable of detecting devices at runtime.



Hotplugging is achieved in Linux distributions using three popular components:

1.   udev 
2.   Hal
3.   dbus    

Important: 

Udev: creates and deletes device nodes dynamically under the /dev directory whenever you plug or unplug a device.

Dbus: a system bus which is used for interprocess communication.

Hal: HAL gets information from the udev service; when a device is connected, it creates an XML representation of the device. It then notifies the current desktop application (such as Nautilus) with the help of D-Bus, and Nautilus then opens the mounted device for the user.

==============
More about Udev:
==============

Udev is the device manager for the Linux kernel; it creates and removes device nodes in the /dev directory dynamically. It is the successor of devfs and hotplug. It runs in user space, and users can change device names using udev rules.

Note: Udev depends on the proc and sys file systems, and they must be mounted on /proc and /sys.

You can check in the /etc/fstab file that they are mounted persistently.
You can find the udev rules at /etc/udev/rules.d.

===============
More about /dev  
 ===============

1.  /dev/autofs :  Normally used with the autofs service, where this file is used to mount remote directories locally. This is done automatically when a user tries to log in, by mounting the remote directory. The mounting is done using this device file; without it we cannot do automounting in a Linux distro.

2.  /dev/console, /dev/tty, /dev/tty1 to /dev/tty63, /dev/ttyS0 to /dev/ttyS31, and /dev/pts

These device files are called terminals or consoles; they are generally used in runlevel 1 and also for pseudo-terminals.

3.  /dev/loop* :  loop devices, used for mounting image files (for example CD/DVD ISOs and KVM/VMware OS image files) as block devices.

4.  /dev/random  and /dev/urandom
    used for generating random data for the kernel and applications.

5.  /dev/null and /dev/zero

     /dev/null is used to discard unwanted output, and /dev/zero is used to generate zero-filled data.

Note:  /dev/zero is used to create files with no real data but of a given size (a file filled with zeros), e.g.:

dd if=/dev/zero of=/root/abc.txt bs=4096 count=1000

OR 

[root@4hathacker rules.d]# strace  cat  /dev/zero  
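And /dev/null can be used to silently discard unwanted output, for example:

[root@4hathacker rules.d]# ls /nonexistent > /dev/null 2>&1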

6. /dev/ppp

This file is used to connect mobile or GPRS/3G-enabled devices so that they can communicate with a Linux-based system.

March 31, 2015

Hacking root password of Redhat7 Linux

by Anonymous  |  in System Administration at  6:07 PM
The root password is everything in any Linux system. Once you obtain or reset the root user's password, you have full power over that system: you can change any user's password or log in as any other user without knowing their password. The UID (user ID) of the root user is 0, and anybody with UID 0 has all powers on that system. Breaking the root password is an easy step-by-step process, but if you make a mistake, please start again from the beginning; otherwise you will waste your time. With this method we do not need the old password to set a new one.

Steps:-

1) Just start your system. It will show you two lines on the welcome (GRUB) screen. Use the up arrow key and go to the first line.
2) Now press e.
3) Then go down and find the line which starts with "linux16".
4) Go to the end of this line, add a space, and write rd.break

5) Now press Ctrl-x; the system continues booting and drops into an emergency shell.
6) It will then ask for commands.
7) Write these commands in this order:

       mount  -o remount,rw  /sysroot
       chroot   /sysroot/
       passwd   root

Set the root password as you want, then run these commands to continue the boot:

      exit
      exit  

Now the system is rebooting. When you see the welcome screen with those two lines, go to the first line and press e to edit again.

8) Go down and find the line which starts with "linux16".
9) Again go to the end of this line, add a space, and write enforcing=0
10) Then press Ctrl-x
11) The system will show you the login page. Give the username as root and enter your new password, and now you are logged in.
12) Once you are logged in, open the terminal and run this command:

      restorecon    /etc/shadow

NOW IT'S DONE, YOU HAVE HACKED RHEL7!!!


March 14, 2015

Increase RAM in real time by adding SWAP from the HARD DISK in Red Hat Linux 7

by Anonymous  |  in System Administration at  5:38 PM

SWAP in real time:

It is very easy to increase the memory of your system in real time when you need it, by adding swap. Red Hat Linux 7 gives you the opportunity to add swap from your hard disk, a pen drive, or a partition of the hard disk. You can also change its priority; you can think of it as additional RAM. If we don't give any priority, it defaults to the lowest.

Requirement:

A partition of the hard disk / a pen drive / any hard disk


Steps :

1) Go to the partition table and make a partition if you don't have one.

fdisk -l          - to check the name of the hard disk; it can be /dev/sda or /dev/vda
if it is /dev/sda then
fdisk /dev/sda
Now you are inside fdisk, so your normal shell commands will not work.
Press n to make a new partition,
then accept the default first sector and type your partition size with a + sign for the last sector.
Now your partition is added.
You can see it by pressing p.
Press w to save the partition and exit.
If you press q, fdisk will close without saving the partition.
Now your normal commands will work again.
Run the command partprobe  - to update the kernel's view of the hard disk in real time.
Now your partition is ready. For example, your partition is /dev/sda3.

2) Make the swap and turn it on.

mkswap /dev/sda3
swapon /dev/sda3
Now you can check it with these commands:
swapon -s 
free -m

3) Make it permanent by adding an entry in /etc/fstab.

Run the command to open the file for editing, then press i to insert:
vim /etc/fstab
Go to the last line and write:
/dev/sda3   swap   swap   defaults   0    0
Now save and close by pressing Esc and then :wq
To check, run the command:
mount -a

NOTE :- If you already have a partition or a pen drive, then you don't need step 1.
 Just find its name by running fdisk -l and apply steps 2 and 3.
The name of a pen drive can be like /dev/sdb or /dev/sdc.
The name of a partition can be like /dev/sda5 or /dev/vda5.

March 12, 2015

How to configure an NTP (Network Time Protocol) client in Red Hat 7 Linux / RHEL7 for clock synchronization

by Anonymous  |  in servers and security at  12:28 AM

Configuration of NTP in rhel7:

 Requirements:

1) You need an active NTP server for the client configuration.
2) Confirm that you have prepared your YUM repository for installation.
3) Ping the server to check whether it is active or not. (For example, your server is: station.rhel7.lab.com)

Commands and descriptions in Red Hat 7 for NTP:

1) Install NTP : yum install ntp
2) Configure NTP : Open the ntp configuration file - vim /etc/ntp.conf
Now edit this file: comment out all four default server lines and add your own server line like this -
   #server 0.rhel.pool.ntp.org iburst
   #server 1.rhel.pool.ntp.org iburst
   #server 2.rhel.pool.ntp.org iburst
   #server 3.rhel.pool.ntp.org iburst
   server       station.rhel7.lab.com 
3) Stop the ntpd service and synchronize your time from the server -
   systemctl stop ntpd
   ntpdate -b  station.rhel7.lab.com
   systemctl restart ntpd
   systemctl enable ntpd
4) Write this restart command in /etc/rc.d/rc.local
   vim /etc/rc.d/rc.local  
and write in the last line
   systemctl restart ntpd
5) Make this file executable and restart the rc-local service
   chmod +x /etc/rc.d/rc.local
   systemctl restart rc-local

About NTP :

Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use.
