
April 3, 2018

Using Shell and Command modules in Ansible

by 4hathacker | in Redhat Enterprise Linux at 8:06 AM
Hi folks...

Welcome to MyAnsibleQuest !!!



After a long break, I am writing on my blog again, and this post continues the series on Ansible. In this post, we will look at the usage of the shell and command modules with simple examples, particularly to find out the differences between them.

Before starting, I would like to share some information about the environment setup for this post. 

1. I have an inventory file in /etc/ansible/hosts which consists of 3 servers (node 218 and node 227 are in webservers group while node 222 is in dbservers group).



2. I am using PyCharm as a code view editor for defining yaml files and running them in the terminal of PyCharm only.

3. Ansible 2.5 was released at the beginning of this year. We will be using it in a virtual environment (Ansible_Shell_Command_Script) set up with Python 2.7.



So, let's start with the shell module. First, let us check which shell the RHEL 7.2 OS uses. To check this, type "echo $0", "file -h /bin/bash" or "file -h /bin/sh" in the terminal.



For me it came out as '/bin/bash'. It may be different if you are using some other OS.

Now, the shell module. Like the command module, it accepts a command name followed by a list of space-separated arguments. The prime difference is that whatever command we pass runs through a shell on the remote node. The shell defaults to /bin/sh, and can be changed with the executable argument, e.g. 'executable: /bin/bash'.

Case 1: We will run a simple cat command on all the log files present in the /tmp directory.

Playbook:
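A rough sketch of such a playbook (the host group and register variable names here are my own illustrations, not the exact ones from the screenshot):

```yaml
---
# Sketch: cat every .log file under /tmp using the shell module.
# The /tmp/*.log wildcard expands because the command runs through a shell;
# the same task with the command module fails on the wildcard.
- hosts: webservers
  tasks:
    - name: cat all log files in /tmp
      shell: cat /tmp/*.log
      register: log_output

    - name: show the collected output
      debug:
        var: log_output.stdout_lines
```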

Output:


Case 2: We will print some environment variables like $HOME and $JAVA_HOME into a text file.

Playbook:
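A sketch of a playbook for this case (the file names are illustrative). The shell task expands the variables; the command task performs no expansion and no redirection, so it writes nothing:

```yaml
---
# Sketch: dump environment variables into text files on the remote nodes.
- hosts: webservers
  tasks:
    - name: print env vars via the shell module (variables are expanded)
      shell: echo "$HOME $JAVA_HOME" > /tmp/shell_env.txt

    - name: print env vars via the command module (no expansion, no redirection)
      command: echo "$HOME $JAVA_HOME" > /tmp/command_env.txt
```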

Output:


Conclusion: 

1. The command module fails here because of the * wildcard. It will likewise not work with shell operators like “<”, “>”, “|”, “;” and “&”.
2. The command module remains unaware of environment variables, yet there is no error and the playbook seems to run well. If you look at the output, the state is shown as changed for all the tasks, even though in the second task the command module did nothing.

This can be verified by looking for the .txt files on both node218 and node227.




So, it's important to use the command and shell modules carefully. The way we can access environment variables is given below. In my view, it is often better to consult the Ansible docs and look for a purpose-built module for the task, rather than relying on the command or shell module.

To access local environment variables we can use either gather_facts or the env lookup.
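For example (a minimal sketch; ansible_env comes from the gathered facts of the remote node, while the env lookup reads the controller's environment):

```yaml
---
- hosts: webservers
  gather_facts: yes
  tasks:
    - name: remote $HOME from gathered facts
      debug:
        var: ansible_env.HOME

    - name: local $HOME on the control machine via the env lookup
      debug:
        msg: "{{ lookup('env', 'HOME') }}"
```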



That's all for this post.

January 21, 2018

Ansible Vault - Let's encrypt sensitive data during automation

by 4hathacker | in Redhat Enterprise Linux at 11:17 AM
Hello Everyone !

This is MyAnsibleQuest!!!

In the previous posts, I have discussed a lot of information about the practical usage of Ansible automation and its workflow, using simple examples to explain the concepts. While covering the automated installation of MySQL server in one of those posts, I mentioned the database password and other datacenter vars in the "/etc/ansible/hosts" file.



I would like to make it clear that for experiments in your lab/test environment, this is not a critical issue. But when managing a large cluster that involves many different departments, hard-coded passwords in a file are bad practice. It's dangerous to keep secret passwords and critical information in plain-text files. One solution is to use good-quality encryption to hide the information, so that nobody else can read it without your permission. This extra layer of security can be added to our Ansible playbooks using Ansible-Vault, a command-line tool that encrypts sensitive content and, during automation, intelligently decrypts it using a vault password provided by the user.

In this post, I will be covering some basic usage of Ansible-Vault commands by creating a playbook to fetch the key content of an AWS S3 bucket. It also demonstrates the Ansible roles and file structure for Ansible automation.

Scenario:

We have access to an AWS account and, being an S3 admin, I would like to fetch the bucket key content using bucket names that will be provided by another team in my company. I will write a small Ansible playbook for this.

A brief introduction to AWS S3:

Amazon Web Services is one of the most popular on-demand cloud services, and S3 stands for Simple Storage Service, the AWS service for object storage. The key content we would like to access is nothing but the files inside a bucket. I have already installed "awscli" and configured it with the "aws configure" command. This is a mandatory step in order to access S3 content on the AWS cloud.

1. The file structure for our Ansible playbook lives in a directory named vault_example. In this structure, I have defined a main .yml file, my.yml. Roles distribute control so that tasks stay easily manageable; I have one role, s3_admin, one of whose tasks fetches the data of a particular bucket. The vars folder contains all the variables required to complete the task: aws_creds.yml holds my aws_access_key_id and aws_secret_access_key along with the bucket name.
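Based on the description above, the layout looks roughly like this (the exact tree in the screenshot may differ slightly):

```text
vault_example/
├── my.yml                      # main playbook, applies the s3_admin role
└── roles/
    └── s3_admin/
        ├── tasks/
        │   └── main.yml        # fetches the bucket key content
        └── vars/
            └── aws_creds.yml   # aws_access_key_id, aws_secret_access_key, bucket_name
```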


Note: Although I have mentioned the AWS credentials in aws_creds.yml, the connection to the S3 service relies on ~/.aws/credentials, which is generated automatically by running the "aws configure" command. For accessing EC2 and other services, the vars file may still be required.

2. Let's have a look at the main.yml file in the tasks folder. In it, I have included the aws_creds.yml file and accessed its bucket_name variable to list the bucket keys.
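A sketch of what such a tasks/main.yml could contain (I am assuming the aws_s3 module, which was named s3 in older Ansible releases, and variable names matching the description):

```yaml
---
# roles/s3_admin/tasks/main.yml (sketch)
- name: include the (vault-encrypted) AWS credentials and bucket name
  include_vars: aws_creds.yml

- name: list the keys of the bucket
  aws_s3:
    bucket: "{{ bucket_name }}"
    mode: list
  register: bucket_keys

- name: show the bucket keys
  debug:
    var: bucket_keys.s3_keys
```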


3. Now for the major part: using Ansible-Vault to encrypt aws_creds.yml. For that, create a vault-password.txt file and put some random password of your choice in it. This password will be used for encryption and decryption of our aws_creds.yml file. Use the "ansible-vault encrypt" command with the location of the file to be encrypted and the vault password file passed via the "--vault-password-file" option.
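On the command line this amounts to something like the following (the password and paths are placeholders):

```
# put some random password of your choice in a file
echo 'S0me_Rand0m_Pass' > vault-password.txt

# encrypt the vars file, pointing ansible-vault at that password file
ansible-vault encrypt roles/s3_admin/vars/aws_creds.yml \
    --vault-password-file vault-password.txt
```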


4. Check whether the file was encrypted successfully. In the file header you can see the encryption standard used, e.g. AES-256.




5. Let's test our my.yml file, which contains only the role entry:

---
# file: my.yml
- hosts: localhost

  roles:
    - { role: s3_admin }

I ran the usual command for playing the Ansible playbook.



Oops!!! I got an error. It is looking for a vault secret to decrypt the file. Let me try this again, this time with our vault-password.txt file.


Bingo!!! Now both the encryption and the playbook are working fine. Let us look at some other things we can do with Ansible-Vault.

6. I want to change my vault password. We can do this with the "ansible-vault rekey" command.



We can see that it asks for the new vault password twice to confirm. If the passwords fail to match, it shows an error and keeps the previous password. If they match, it shows a message confirming the successful rekey.

7. To run my.yml with the new password, we have to enter it manually, because we haven't saved it in any password file.


With the "--ask-vault-pass" option, it asks for a vault password. If entered correctly, we can check the bucket keys as "hdfs-site.xml" and "logo.png".

8. Finally, we will look at how to decrypt our aws_creds.yml file with the "ansible-vault decrypt" command.
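Summarizing the remaining vault operations as commands (paths as above, illustrative):

```
# change the vault password (prompts for the new password twice)
ansible-vault rekey roles/s3_admin/vars/aws_creds.yml \
    --vault-password-file vault-password.txt

# run the playbook, entering the new vault password interactively
ansible-playbook my.yml --ask-vault-pass

# turn the encrypted file back into plain text
ansible-vault decrypt roles/s3_admin/vars/aws_creds.yml --ask-vault-pass
```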


This was a short practical introduction to Ansible-Vault on RHEL7 with Ansible v2.4 installed. There are some important best practices I would like to mention:

1. Vault variable names should start with "vault_". This helps to easily differentiate vault variables from normal variables.

2. Do not put all your variables under vault encryption, otherwise reviewing them when errors occur becomes difficult.

3. Ansible-Vault should only be used for encrypting sensitive information. Encrypting lots of .yml files unnecessarily will only create more problems.

4. Following a proper directory structure for variables, vaults and main tasks, within properly assigned roles, makes playbooks easier to understand and saves time.


January 6, 2018

Configuring MongoDb High Availability Replication Cluster in RHEL7

by 4hathacker | in Redhat Enterprise Linux at 6:58 PM
Hi everyone...

This is one off-the-beat post for MyAnsibleQuest.

MongoDb is an open-source document-based database that provides high performance, high availability, and automatic scaling. We all know the power of NoSQL, in which schema is given less importance than a high rate of transactions and reliability, and MongoDb is one of its kind. Its document-like data structure is based on key-value pairs, which makes it easy to understand and elegant. Replication in mongo provides high availability and data redundancy, which simply means we do not have to rely on a single mongo instance/server in a large cluster for reads and writes.



In this post, we will go through the steps for installing mongodb 3.4 on RHEL7, and then configure a mongodb HA cluster with 4 mongo instances. Let's start with installing mongodb 3.4 on RHEL7.

Step 1 - Add the mongo repository: The RHEL7/CentOS repositories only carry mongo version 2.6, but this post covers mongo 3.4. To install mongo 3.4, create a repo on your machine as described below.

vi  /etc/yum.repos.d/mongo.repo

In this repo, add the following lines,

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

Step 2 - Install mongodb: To install mongodb type the usual yum install command as provided,

yum -y install mongodb-org

This will install the following, 
mongodb-org-server – The server daemon (mongod) with init scripts and configurations.
mongodb-org-mongos – The MongoDB Shard daemon
mongodb-org-shell – The MongoDB shell, a command line interface.
mongodb-org-tools – Contains MongoDB tools for import, export, restore, dump, and other functions.

Step 3 - Check whether the installation is fine by running the version command for mongod (the server daemon) as well as for mongo (the client).
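The version check amounts to:

```
mongod --version   # server daemon
mongo --version    # client shell
```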

Step 4 - Create admin and siteRootAdmin users with appropriate passwords. Also create a mongo_ha database with a mongo_ha_admin user.
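In the mongo shell this looks something like the following (the role choices and passwords are placeholders, not the exact ones from the screenshots):

```
// run in the mongo shell on the instance
use admin
db.createUser({ user: "siteRootAdmin", pwd: "StrongPass1",
                roles: [ { role: "root", db: "admin" } ] })

use mongo_ha
db.createUser({ user: "mongo_ha_admin", pwd: "StrongPass2",
                roles: [ { role: "dbOwner", db: "mongo_ha" } ] })
```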


Since we are defining a mongo HA cluster, the above three steps need to be done on all the mongo instances/servers. But before that, it's time to learn about replication in mongodb. I have a mongo cluster diagram which will make it easy to understand.
MongoDB replication is based on a replicaSet configuration. The replicaSet includes several members, each with a definite role to play in the HA cluster.
According to the structure, there are four instances/servers of mongodb, viz.,
1. Primary Instance (node218): The primary instance is the default access point for transactions. It is the only member that can accept write operations. The primary's operation log (oplog) is then copied to each secondary's dataset.
2. Secondary Instances (node227 and node228): A cluster can have multiple secondary instances. They reproduce the changes from the primary's oplog. A secondary instance becomes primary if the primary crashes or becomes unavailable; this decision is based on failure of communication between primary and secondary for more than 10-30 seconds.
3. Arbiter (node229): The arbiter is only required when a failover occurs and a new primary is to be elected. With an even number of secondaries, it plays the deciding role in the election of the new primary. No dedicated hardware is required for the arbiter; although it is part of the replicaSet, no data is ever replicated to it.
4. The blue arrows in the structure represent the replicaSet instances involved in data replication.
5. The black arrows represent the continuous communication (heartbeat) taking place between all the members since the cluster started.
After understanding this, let's move on and install mongodb on all four instances/servers following steps 1 to 3, then come back to our primary server.

Step 5 - Create a keyFile for authentication among the mongo instances in the cluster. This can easily be done with OpenSSL, as described in the images.


I have created a long key by base64-encoding a random value. I gave it the necessary permissions and then securely copied it to all members of the cluster.
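The key generation can be sketched as below (run from a scratch directory; in the post the file finally lives at /etc/mongo-key on every member, matching the keyFile setting in mongod.conf):

```shell
# generate a long random key, base64-encoded (the usual way to build a
# MongoDB keyFile), and restrict its permissions to the owner only
openssl rand -base64 756 > mongo-key
chmod 400 mongo-key

# then place it at /etc/mongo-key on every member, e.g. with scp
```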

Step 6 - Create a directory for the dbpath on every mongo instance.

mkdir /etc/mongodata

Step 7 - Now edit the configuration file at /etc/mongod.conf to enable replication, set up authentication and provide the dbpath. We still have to provide the same info when starting mongod.

vim /etc/mongod.conf

And the file will look like,
 
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  quiet: true

# Where and how to store data.
storage:
  dbPath: /etc/mongodata
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
  bindIp: [127.0.0.1, 10.0.0.227, 10.0.0.228, 10.0.0.229]  # interfaces to listen on; comment out to listen on all interfaces.


security:
  keyFile: /etc/mongo-key

#operationProfiling:

replication:
  replSetName: mongo-HA

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Step 8 - Start mongod on each mongo instance of the cluster with the command,

mongod --dbpath /etc/mongodata --port 27017 --replSet mongo-HA

Step 9 - Start mongo on the primary instance (node218) and authenticate as siteRootAdmin with its password. Use the command rs.initiate() to add the first member of the HA cluster. If you run rs.conf(), you can see in members that this host is given "_id" 0.


Step 10 - Add the other members to the cluster and check the status,

rs.add("10.0.0.227:27017");
rs.add("10.0.0.228:27017");
rs.addArb("10.0.0.229:27017");
rs.status()

To confirm that replication is working fine, create an s3 database on the primary instance; you will see it on the secondary instances automatically.

Important Notes:
1. In my case, I have DNS configured properly. If you do not have DNS, your member servers can't resolve each other; make proper entries about the member servers in the /etc/hosts file of each member.
2. If errors occur in replicaSet commands, check whether mongod was started with the --replSet option and whether the configuration file entries are present.
3. If you are getting warning messages like this, /sys/kernel/mm/transparent_hugepage/defrag is 'always', visit this link.

Finally, I would like to say that this setup could easily be achieved with Ansible or other configuration management tools. I will definitely cover it with Ansible in one of the upcoming blog posts.

December 25, 2017

Interacting with Scripts using Ansible

by 4hathacker | in Python at 10:56 PM
Hello Everyone !

This is MyAnsibleQuest!!!

Sorry for the late post. In the previous post, we built a custom Ansible module in Python. This post is even more interesting, because it deals with a different way of using Ansible.

We have seen a lot of programs which need human intervention for specific result-oriented tasks. This intervention makes automating such tasks very difficult; difficult, but not impossible. If we know the questions a program is going to ask, we can automatically arrange the set of answers. It's like you know the questions, you know the answers, and you want everything to take place automatically. Here I am giving a glimpse of such automation as a small project using Python and Ansible.

During my college days, I used a Python script for scanning the ports of a Linux system, given an ip/hostname and the number of ports to be scanned. In network programming, a communication end point is created which allows a server to listen for requests. Once a communication end point has been established, our listening server can enter its infinite loop, waiting for clients to connect and responding to requests. Sockets are those "communication end points".

My complete python code 'myfirstpexp.py' looks like this:

#!/usr/bin/python

import sys, time, subprocess, os
from socket import *
from datetime import datetime

host = ''
max_port = 5000  # default max port; either way you must enter a value
min_port = 1     # default min port; either way you must enter a value


def scan_host(host, port, returnval=1):
    ''' Check whether a given port is open on the host. '''
    try:
        s = socket(AF_INET, SOCK_STREAM)
        code = s.connect_ex((host, port))
        if code == 0:
            returnval = code
        s.close()
    except Exception:
        pass
    return returnval


def host_check(host):
    ''' Check whether the host is alive by pinging it once.
        The ping output is sent to /dev/null; only up/down is printed. '''
    devnull = open(os.devnull, 'w')
    res = subprocess.call(["ping", "-c", "1", host], stdout=devnull, stderr=devnull)

    if res == 0:
        print host, 'is up!'
    else:
        print host, 'is down!'
        sys.exit(1)


def main():
    ''' Main function which asks for three values:
        host: IP address/hostname of the target
        max port: the highest port to scan
        min port: the port to start scanning from '''
    try:
        host = raw_input("(*) Enter Host Address: ")
        max_port = int(raw_input("(*) Enter Max Port: "))
        min_port = int(raw_input("(*) Enter Min Port: "))
    except KeyboardInterrupt:
        print "\n\n(*) Interruption by User Occurred."
        print "(*) Shutting down the Application."
        sys.exit(1)

    host_check(host)

    hostip = gethostbyname(host)
    print "\n(*) Host: %s IP: %s" % (host, hostip)
    print "\n\n(*) Scanning started at %s...\n" % (time.strftime("%H:%M:%S"))
    start_time = datetime.now()

    for port in range(min_port, max_port):
        try:
            response = scan_host(host, port)
            if response == 0:
                print("(*) Port %d: Open" % (port))
        except Exception:
            pass

    stop_time = datetime.now()
    duration = stop_time - start_time
    print "\n(*) Scanning done at %s ..." % (time.strftime("%H:%M:%S"))
    print "(*) Scanning Duration: %s ..." % (duration)
    print "(*) Have a nice day !!! ... 4hathacker_Ansible_Case"


if __name__ == "__main__":
    main()

It's a very simple port-scanning script with three functions, viz. host_check(), scan_host() and main(). All functions are explained in their docstrings.

Lets see how it looks when you run the code.


Now the actual task is to automate the above script using Ansible. To achieve this, I have used a Python module, Pexpect. It's a pure-Python module which watches a program's output for a pattern and then responds as if a human were typing. We can install Pexpect with pip, and you can seek help from this link

To use Pexpect with Ansible, we have to follow the Ansible documentation strictly; otherwise, I have seen a lot of problems while dealing with it. Ansible has an expect module for exactly this, and it uses Pexpect behind the scenes. I have created a 'firstpexp.yml' playbook which automates the above Python script.

 

1. In this playbook, I have used three variables, viz. nmap_ip, max_port_number and min_port_number, defined as vars.

2. In the expect module, the command parameter runs the myfirstpexp.py script.

3. In responses, I have provided the already-known prompt patterns in .yml format, with their respective answers filled in at runtime via the vars.

4. The echo is optional, just for checking whether the script runs fine. I have also confirmed this with the debug module.
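Putting those pieces together, firstpexp.yml can be sketched as follows (the IPs and port values are examples; the prompt patterns must match the script's raw_input strings, with special characters escaped):

```yaml
---
# firstpexp.yml (sketch)
- hosts: localhost
  vars:
    nmap_ip: 10.0.0.218
    max_port_number: 100
    min_port_number: 1

  tasks:
    - name: run the port scanner and answer its prompts
      expect:
        command: python myfirstpexp.py
        responses:
          '\(\*\) Enter Host Address: ': "{{ nmap_ip }}"
          '\(\*\) Enter Max Port: ': "{{ max_port_number }}"
          '\(\*\) Enter Min Port: ': "{{ min_port_number }}"
      register: result

    - name: show what the script printed
      debug:
        var: result.stdout_lines
```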



Note: Pexpect only works if the pattern matches the prompt; we must escape special characters. For automating interactive server setups like mysql_secure_installation, ambari_setup, etc., this works very effectively.

This is how you can make use of the expect module in Ansible and interact with scripts written in bash, python, php, etc.

Merry Christmas !!!



December 5, 2017

Extending Ansible using Python

by 4hathacker | in Python at 11:39 PM
Hi everyone!

This is MyAnsibleQuest!!!



In the previous post, I discussed Ansible playbooks for provisioning database and web servers. In this post, we will unleash the power of Ansible on Linux by writing our own module. We can write Ansible modules in many languages, but I would like to use Python here. The reasons for selecting Python are:

1. All the modules of Ansible are written in Python.
2. Easy and direct integration with Ansible is possible.
3. Python reduces the amount of code required, as we can use boilerplate code.
4. Handling JSON output is easy.

Most importantly, I love to code in Python.

Let's start by creating a setup for module development. It's very simple: we need a directory in which we place our playbook file and, inside it, a 'library' folder, so that our playbook automatically looks there for the Ansible module.

[root@server hands_on_ansible]# mkdir custom_module
[root@server hands_on_ansible]# cd custom_module
[root@server custom_module]# mkdir library
[root@server custom_module]# touch custom.yml
[root@server custom_module]# touch library/custom2.py


Now we will define our playbook.
There should be a reason to define a custom module: perhaps you have a custom function/task to accomplish, or no existing module quite fits your need.

For this post, I would like to create a trivial module for monitoring the CPU usage of my Linux servers for the 'n' peak processes. There are several Linux commands for monitoring CPU and RAM usage, e.g. top, htop, ps, free, etc. Here I will use the 'ps' command with a set of arguments to display the process id, parent process id, command, etc., sorted meaningfully, for every server in my inventory.

As we know, Ansible playbooks are written with a .yml extension, so we will write that first.

1. My custom.yml file is defined to operate on 'hosts' as 'dbservers'.
2. I have included an entry of 'gather_facts' as 'no' because I don't want any kind of delay in output.
3. The name of my task is 'Get top cpu consuming process'
4. To my module, named 'custom2', I have passed 7 parameters, viz. pid (process id), ppid (parent process id), cmd (command), mem (memory info), cpu (cpu info), sort (sorting basis) and num (number of peak processes to show in the output).
5. At last, I have used a 'result' variable with the 'register' keyword to save the output, and displayed the result with the 'debug' module.


[root@server Desktop]# cat hands_on_ansible/custom_module/custom.yml
---

- hosts: dbservers
  gather_facts: no

  tasks:
    - name: Get top cpu consuming process
      custom2: 
        pid: pid
        ppid: ppid
        cmd: cmd
        mem: mem
        cpu: cpu
        sort: mem
        num: '17'    
      register: result

    - debug:
         var: result


Secondly, we will focus on the Ansible module itself, 'custom2.py'. An Ansible module should contain some basic information like metadata, documentation, examples, return values, etc. For more details on writing Ansible modules, follow the documentation link. I have included only the documentation, for understanding the module.

#!/usr/bin/python

DOCUMENTATION = '''
---
module: my_monitoring_module
short_description: This is my server cpu-memory monitoring module.
version_added: "2.4"
description:
    - "This is my cpu-memory monitoring module to show 'n' peak processes at the time of module call."
options:
    pid:
        description:
            - This is the value same as pid denoting process id.
        required: true
    ppid:
        description:
            - This is the value same as ppid denoting parent process id.
        required: true
    cmd:
        description:
            - This is the value same as cmd denoting the command in process.
        required: false
    mem:
        description:
            - This is the value same as mem denoting the memory in percent for a process.
        required: true
        aliases: [ memory ]
    cpu:
        description:
            - This is the value same as cpu denoting the cpu usage in percent for a process.
        required: true
    sort:
        description:
            - This is the value as either cpu or mem to sort by the order of cpu usage or memory usage.
        required: true
    num:
        description:
            - This is the value to output the number of peak processes.
        required: true
author:
    - Nitin (@4hathacker)
'''

from ansible.module_utils.basic import AnsibleModule
import subprocess

def main():
  # defining the available arguments/parameters
  # the user must pass to module
  module = AnsibleModule(
      argument_spec = dict(
          pid  = dict(required=True, type='str'),
          ppid = dict(required=True, type='str'),
          cmd  = dict(required=False, type='str'),
          mem  = dict(aliases=['memory'], required=True, type='str'),
          cpu  = dict(required=True, type='str'),
          sort = dict(required=True, type='str'), 
          num  = dict(required=True, type='str')
      ),
  # module supports check_mode,
  # but the values at exit remain unchanged
  # as it's for monitoring purposes only
      supports_check_mode=True
  )

  if module.check_mode:
    module.exit_json(changed=False)
 
  params = module.params
 
  # passing the params to a shell command
  # command = 'ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n num'
  # passing the command in subprocess module


  if params['cmd'] is None:
      process = subprocess.Popen("ps -eo " + params['pid'] + "," + params['ppid'] + ",%" + params['mem'] + ",%" + params['cpu'] + " --sort=-%" + params['sort'] + " | head -n " + params['num'], shell=True, stdout=subprocess.PIPE, close_fds=True)
  else:
      process = subprocess.Popen("ps -eo " + params['pid'] + "," + params['cmd'] + "," + params['ppid'] + ",%" + params['mem'] + ",%" + params['cpu'] + " --sort=-%" + params['sort'] + " | head -n " + params['num'], shell=True, stdout=subprocess.PIPE, close_fds=True)


  exists = process.communicate()[0]
 
  # getting result if process is not None
  if exists:
        result = exists.split('\n')
        module.exit_json(changed=True, meminfo=result)
  else:
        err_info = "Error Occurred: Not able to get peak cpu info"
        module.fail_json(msg=err_info)

if __name__ == '__main__':
        main()

With respect to the above module,

1. It's clear from the module that I have used the 'ps' command to accomplish the task.
2. The user can use 'memory' as an alias for 'mem' in the custom.yml file.
3. Python's subprocess module is used to run the ps command.
4. The result is displayed after splitting the output on '\n'.



It's confession time...

There is no need to write a module to find the top 'n' peak processes by CPU and memory usage. We can accomplish the same task by passing the ps command, with the same arguments, to Ansible's shell module. Such a check.yml file looks like this:

[root@server Desktop]# cat check.yml
---
- hosts: dbservers
  tasks:
    - name: check memory and cpu usage in dbservers
      shell: "ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n 17"
      register: result

    - debug: var=result


Just have a look at the result of this file. You can observe a few more things like rc, stdout, stdout_lines, etc.; explore the Ansible docs to add them to your own module.


 

November 26, 2017

Database Server provisioning using Ansible

by 4hathacker | in Python at 2:27 PM
Hello everyone in MyAnsibleQuest!

I went through webserver provisioning in the previous post. In this post, I am going to provision my database server on my dbservers machine [10.0.0.228], as defined in my /etc/ansible/hosts file.



I used modules like 'yum', 'apt', 'block', etc. during the webserver installation. In addition to them, I will use some more modules to install MySQL server on the dbservers and then secure it, using an Ansible playbook, by deleting the default test databases, accounts with blank passwords, etc.

[root@server Desktop]# vim hands_on_ansible/mysql.yml

---
- hosts: dbservers

  tasks:
     - name: To install the mysql community repo rpm
       action: yum name=http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm

     - name: To install mysql
       action: yum name={{ item }}
       with_items:
           - MySQL-python
           - mysql
           - mysql-server

     - name: Start the MySQL service
       action: service name=mysqld state=started

     - name: Changing root password for all root accounts
       mysql_user: name=root host={{ item }} password={{ mysql_root_password }}
       with_items:
           - $ansible_hostname
           - 127.0.0.1
           - ::1
           - localhost

     - name: copy config file of mysql (.my.cnf) with root credentials
       template: src=templates/my.cnf.j2 dest=/root/.my.cnf owner=root mode=0600

     - name: delete anonymous MySQL server user for $server_hostname
       action: mysql_user user="" host=$server_hostname  state="absent"

     - name: delete anonymous MySQL server user for localhost
       action: mysql_user user="" state="absent"

In the above playbook, there are several tasks to accomplish on the dbservers hosts.

1. Here I have used the legacy format, action: module options. According to the Ansible docs it is not recommended, but it still prevails in many playbooks; I find it more readable, though it is an individual choice. My first task installs the MySQL repo rpm, which is the same as:

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh  http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm

After that, it installs the three packages mysql (client), mysql-server and MySQL-python using the same 'yum' module. 'with_items' is used to repeat a task over a defined list; my list contains three packages and the 'yum' module installs them one by one.

2. In the second task, it will start the mysql service using 'service' module. 

3. The next tasks do some security work to harden MySQL. We know that MySQL server installs with a default login_user of 'root' and no password. To secure this user as part of an idempotent playbook, we must create at least two tasks: the first changes the root user's password, without providing any login_user/login_password details; the second drops a ~/.my.cnf file containing the new root credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from that file. So, I have created a variable mysql_root_password in the /etc/ansible/hosts file. The password is set for $ansible_hostname, 127.0.0.1, ::1 and localhost using the 'mysql_user' module. This ensures that whoever wants to interact with MySQL must enter the password defined by mysql_root_password.

[root@server Desktop]# vim /etc/ansible/hosts

node218 ansible_ssh_host=10.0.0.218
node227 ansible_ssh_host=10.0.0.227
node228 ansible_ssh_host=10.0.0.228
node229 ansible_ssh_host=10.0.0.229

[webservers]
node218
node227

[dbservers]
node228

[lbservers]
node229

[datacenter:children]
webservers
dbservers
lbservers

[datacenter:vars]
ansible_ssh_user=root
ansible_ssh_pass=redhat123
mysql_root_password=redhat123

4. I have created a jinja template, my.cnf.j2, to set the client credentials, and copied that config file to the nodes.

[root@server Desktop]# vim hands_on_ansible/templates/my.cnf.j2

[client]
user=root
password={{ mysql_root_password }}

5. In the final two tasks, I have removed the anonymous MySQL user accounts.

Finally, we can just check whether mysql is configured properly or not.

