
April 3, 2018

Using Shell and Command modules in Ansible

by 4hathacker  |  in Redhat Enterprise Linux at  8:06 AM
Hi folks...

Welcome to MyAnsibleQuest !!!



After a long break, I am writing on my blog again, and this post continues the series of Ansible posts. In this post, we will look at the shell and command modules with simple examples, focusing on the differences between them.

Before starting, I would like to share some information about the environment setup for this post. 

1. I have an inventory file in /etc/ansible/hosts which consists of 3 servers (node218 and node227 are in the webservers group, while node222 is in the dbservers group).



2. I am using PyCharm as the editor for writing the YAML files, and I run them from PyCharm's built-in terminal.

3. Ansible 2.5 was released at the beginning of this year. We will be using it here in a virtual environment (Ansible_Shell_Command_Script) set up with Python 2.7.



So, let's start with the shell module. First, let us check which shell the RHEL 7.2 system uses. To check this, type "echo $0", "file -h /bin/bash" or "file -h /bin/sh" in a terminal.



For me it came out as '/bin/bash'. It may be different if you are using some other OS.

Now, the shell module, much like the command module, accepts a command name followed by a list of space-separated arguments. The key point is that whatever command we pass is run through a shell (/bin/bash) on the remote node. The shell defaults to /bin/sh, and this can be changed with the 'executable: /bin/bash' argument.

Case 1: We will run a simple cat command on all the log files present in the /tmp directory.

Playbook:
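Since the original screenshot is not reproduced here, a minimal sketch of what such a playbook could look like (the host group, file pattern and task names are assumptions):

---
- hosts: webservers

  tasks:
    - name: cat all log files in /tmp using the shell module
      shell: cat /tmp/*.log
      register: shell_out

    - name: cat all log files in /tmp using the command module
      command: cat /tmp/*.log
      register: command_out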

Output:


Case 2: We will print some environment variables like $HOME and $JAVA_HOME into a text file.

Playbook:
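The screenshot is again omitted; a rough sketch of the playbook, assuming we write into hypothetical files named shell_env.txt and command_env.txt:

---
- hosts: webservers

  tasks:
    - name: write environment variables to a file using the shell module
      shell: echo "$HOME $JAVA_HOME" > /tmp/shell_env.txt

    - name: write environment variables to a file using the command module
      command: echo "$HOME $JAVA_HOME" > /tmp/command_env.txt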

Output:


Conclusion: 

1. The command module fails in Case 1 because of the * wildcard. It likewise does not work with shell operators such as "<", ">", "|", ";" and "&".
2. In Case 2, the command module remains unaware of the environment variables, yet there is no error and the playbook appears to run well. If you look at the output, the state is reported as changed for all the tasks, even though the command module task actually did nothing.

This can be verified by looking for the .txt files on both node218 and node227.




So, it is important to use the command and shell modules carefully. The way we can access environment variables is given below. In my view, it is often better to consult the Ansible docs and look for a dedicated module for the task, rather than relying on the command or shell module.

To access environment variables we can use either gathered facts (for the remote host) or the env lookup (for the local control machine).
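As an illustration, here is a small sketch showing both approaches (the variable choices are only examples); note that the env lookup is evaluated on the control machine, while ansible_env comes from the facts gathered on the remote host:

---
- hosts: webservers
  gather_facts: yes

  tasks:
    - name: print the remote user's HOME from gathered facts
      debug:
        var: ansible_env.HOME

    - name: print HOME on the control machine using the env lookup
      debug:
        msg: "{{ lookup('env', 'HOME') }}"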



That's all for this post.

January 21, 2018

Ansible Vault - Let's encrypt sensitive data during automation

by 4hathacker  |  in Redhat Enterprise Linux at  11:17 AM
Hello Everyone !

This is MyAnsibleQuest!!!

In the previous posts, I have covered a lot of practical usage of Ansible automation and its workflow, using simple examples to explain the concepts. While automating the MySQL server installation in one of the previous posts, I put the database password and other datacenter vars directly in the "/etc/ansible/hosts" file.



I would like to make it clear that for experiments in your lab test environment, this is not a critical issue. But when managing a large cluster that involves many different departments, hard-coded passwords in a file are bad practice. It is dangerous to keep secret passwords and critical information in plain files. One solution is to use good encryption to hide the information so that no one else can read it without your permission. This extra layer of security can be added to our Ansible playbooks using Ansible-Vault. Ansible-Vault is a command line tool used to encrypt sensitive content; during automation it transparently decrypts that content using a vault password provided by the user.

In this post, I will cover basic usage of the Ansible-Vault commands by creating a playbook that fetches the key contents of an AWS S3 bucket. It also demonstrates Ansible roles and the file structure used for Ansible automation.

Scenario:

We have access to an AWS account and, as an S3 admin, I would like to fetch a bucket's key contents using bucket names that will be provided by another team in my company. I will write a small Ansible playbook for this.

A brief introduction to AWS S3:

Amazon Web Services is one of the most popular on-demand cloud providers, and S3 stands for Simple Storage Service, the AWS service for object storage. Here the key content we would like to access is simply the files inside a bucket. I have already installed "awscli" and configured it with the "aws configure" command. This is a mandatory step in order to access the S3 content in the AWS cloud.

1. The file structure for our Ansible playbook lives in a directory named vault_example. In this structure, I have defined a main playbook file, my.yml. Roles distribute control and keep tasks easily manageable, so I have one role, s3_admin, one of whose tasks is fetching the data of a particular bucket. The vars folder contains all the variables required to complete the task; in it, aws_creds.yml holds my aws_access_key_id and aws_secret_access_key along with the bucket name.
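The screenshot of this layout is not reproduced here. As a rough sketch, roles/s3_admin/vars/aws_creds.yml might contain something like the following (all values are placeholders, not real credentials):

---
# roles/s3_admin/vars/aws_creds.yml (placeholder values)
aws_access_key_id: AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
bucket_name: my-example-bucket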


Note: Although I have put the AWS credentials in aws_creds.yml, the connection to the S3 service here relies entirely on the configuration under "~/.aws/", which is generated automatically by running the "aws configure" command. For accessing EC2 and other services, passing the credentials explicitly may be required.

2. Let's have a look at the main.yml file in the tasks folder. In main.yml, I have included the aws_creds file and accessed the bucket_name variable from aws_creds.yml to list the bucket keys.
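The screenshot is again omitted; a minimal sketch of roles/s3_admin/tasks/main.yml, assuming the aws_s3 module is used for listing the keys (in older Ansible releases this module was called simply s3):

---
# roles/s3_admin/tasks/main.yml (sketch)
- include_vars: aws_creds.yml

- name: list the keys of the given bucket
  aws_s3:
    bucket: "{{ bucket_name }}"
    mode: list
  register: bucket_keys

- debug:
    var: bucket_keys.s3_keys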


3. Now comes the major part, i.e., using Ansible-Vault to encrypt aws_creds.yml. For that, create a vault-password.txt file and put some random password of your choice in it. This password will be used for encryption and decryption of our aws_creds.yml file. Use the "ansible-vault encrypt" command with the location of the file to be encrypted and the vault-password file passed via the "--vault-password-file" option.


4. Check whether the file was encrypted successfully. In the file header you can see the encryption standard used, e.g. AES256.




5. Let's test our my.yml file, which contains only the role entry:

---
# file: my.yml
- hosts: localhost

  roles:
    - { role: s3_admin }

I ran the usual ansible-playbook command.



Oops!!! I got an error. It is asking for a vault secret to decrypt the file. Let me try this again, this time with our vault-password.txt file.


Bingo!!! Now both the encryption and the playbook are working fine. Let us look at some other things we can do with Ansible-Vault.

6. Suppose I want to change my vault password. We can do this with the "ansible-vault rekey" command.



We can see that it asks for the new vault password twice to confirm. If the passwords do not match, it shows an error and keeps the previous password. If they match, it reports a successful rekey.

7. To run my.yml with the new password, we have to enter it manually, because we haven't saved it in any password file.


With the "--ask-vault-pass" option, it asks for a vault password. If entered correctly, we can check the bucket keys as "hdfs-site.xml" and "logo.png".

8. Finally, we will look at how to decrypt our aws_creds.yml file with the "ansible-vault decrypt" command.


This has been a short practical introduction to Ansible-Vault on RHEL 7 with Ansible 2.4 installed. There are some important best practices I would like to mention:

1. Vault variable names should start with "vault_". This makes it easy to differentiate vault variables from normal variables (see the sketch after this list).

2. Do not put all of your variables into the vault; otherwise it becomes difficult to review them when errors occur.

3. Ansible-Vault should only be used for encrypting sensitive information. Encrypting a whole lot of .yml files unnecessarily will create more problems than it solves.

4. Following a proper directory structure for Ansible variables, vault files and main tasks, within well-defined roles, makes playbooks easier to understand and saves time.
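As an illustration of the first point, a common pattern (the file names and variable here are only an example) is to keep the encrypted value in a vault file and reference it from a plain vars file:

# roles/s3_admin/vars/vault.yml (encrypted with ansible-vault)
vault_aws_secret_access_key: xxxxxxxxxxxxxxxxxxxx

# roles/s3_admin/vars/main.yml (plain text)
aws_secret_access_key: "{{ vault_aws_secret_access_key }}"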


December 25, 2017

Interacting with Scripts using Ansible

by 4hathacker  |  in Python at  10:56 PM
Hello Everyone !

This is MyAnsibleQuest!!!

Sorry for the late post. In the previous post, we built a custom Ansible module in Python. This post is even more interesting because it looks at Ansible from a different perspective.

Many programs need human intervention to complete specific tasks, and this intervention makes automating them difficult. Difficult, but not impossible. If we know the questions a program is going to ask, we can arrange the set of answers in advance: you know the questions, you know the answers, and you want everything to happen automatically. Here I give a glimpse of such automation as a small project using Python and Ansible.

During my college days, I used a Python script for scanning the ports of a Linux system, given the IP/hostname and the range of ports to be scanned. In network programming, a communication end point is created which allows a server to listen for requests. Once a communication end point has been established, our listening server can enter its infinite loop, waiting for clients to connect and responding to requests. Sockets are these "communication end points".

My complete python code 'myfirstpexp.py' looks like this:

#!/usr/bin/python
 
import sys, time, subprocess, re, os
from socket import *
from datetime import datetime

host = ''
max_port = 5000  # default max port; a value is read in main() anyway
min_port = 1     # default min port; a value is read in main() anyway


def scan_host(host, port, returnval = 1):
    ''' This function is used for checking whether the port is open or not. '''
    try:
        s = socket(AF_INET, SOCK_STREAM)
        code = s.connect_ex((host, port))
        if code == 0:
            returnval = code
        s.close()
    except Exception, e:
        pass
    return returnval


def host_check(host):
        ''' This function is used to check whether the host is alive or not.
            The output of the ping command is sent to /dev/null; we only report whether the host is up. '''
        devnull = open(os.devnull, 'w')
        res = subprocess.call(["ping", "-c", "1", host], stdout=devnull, stderr=devnull)
       
        if res == 0:
                print host, 'is up!'
        else:
                print host, 'is down!'
                sys.exit(1)
 
 
def main():
        ''' The main function asks for three values:
            host: IP address of the host
            Maximum Port: the highest port to be scanned
            Minimum Port: the lowest port to start the scan from '''
        try:
                host = raw_input("(*) Enter Host Address: ")
                max_port = int(raw_input("(*) Enter Max Port: "))
                min_port = int(raw_input("(*) Enter Min Port: "))
        except KeyboardInterrupt:
                print "\n\n(*) Interruption by User Occured."
                print "(*) Shutting down the Application."
                sys.exit(1)
        
        host_check(host)
        
        hostip = gethostbyname(host)
        print "\n(*) Host: %s IP: %s" % (host, hostip)
        print "\n\n(*) Scanning started at %s...\n" %(time.strftime("%H:%M:%S"))   
        start_time = datetime.now()
        
        for port in range(min_port, max_port):
            try:
                response = scan_host(host, port)
                if response == 0:
                    print("(*) Port %d: Open" % (port))
            except Exception, e:
                pass
       
        stop_time = datetime.now()
        duration = stop_time - start_time
        print "\n(*) Scanning done at %s ..." % (time.strftime("%H:%M:%S"))
        print "(*) Scanning Duration: %s ..." % (duration)
        print "(*) Have a nice day !!! ... 4hathacker_Ansible_Case"    


if __name__ == "__main__":
    main()

It is a very simple port-scanning script with three functions, viz. host_check(), scan_host() and main(). Each function is explained in its docstring.

Let's see how it looks when you run the code.


Now our actual task is to automate the above script using Ansible. To achieve this, I have used the Python module Pexpect. It is a pure Python module which watches a program's output for a pattern and then responds as if a human were typing the answers. Pexpect can be installed with pip, and the Pexpect documentation is a good place to look for help.

To use Pexpect in Ansible, we have to follow the Ansible documentation strictly, otherwise it is easy to run into problems. Ansible ships an expect module for exactly this, and it uses Pexpect behind the scenes. I have created a 'firstpexp.yml' playbook which automates the above Python script; a sketch of it is shown below.
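The playbook screenshot is not reproduced here; based on the description that follows, firstpexp.yml might look roughly like this (the variable values, host and script path are assumptions):

---
- hosts: localhost
  vars:
    nmap_ip: 127.0.0.1
    max_port_number: 100
    min_port_number: 1

  tasks:
    - name: run the interactive port scanner non-interactively
      expect:
        command: python /root/myfirstpexp.py
        responses:
          '\(\*\) Enter Host Address: ': "{{ nmap_ip }}"
          '\(\*\) Enter Max Port: ': "{{ max_port_number }}"
          '\(\*\) Enter Min Port: ': "{{ min_port_number }}"
        echo: yes
        timeout: 120    # allow time for the scan to finish
      register: scan_result

    - debug:
        var: scan_result.stdout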

 

1. In this playbook, I have used three variables, viz. nmap_ip, max_port_number and min_port_number, defined as vars.

2. In the expect module, the command option runs the myfirstpexp.py script.

3. In responses, I have provided the already-known output patterns in YAML format, with their respective answers filled in at runtime from the vars.

4. The echo option is there only to check whether the script is running fine. I have also confirmed this with the debug module.



Note: Pexpect works only if the pattern matches the prompt that is asked, so we must escape special characters. This approach works very effectively for automating interactive server setups such as mysql_secure_installation, ambari_setup, etc.

This is how you can make use of the expect module in Ansible and interact with scripts written in Bash, Python, PHP, etc.

Merry Christmas !!!



December 5, 2017

Extending Ansible using Python

by 4hathacker  |  in Python at  11:39 PM
Hi everyone!

This is MyAnsibleQuest!!!



In the previous post, I discussed Ansible playbooks for provisioning database and web servers. In this post, we will unleash the power of Ansible on Linux by writing our own module. We can write Ansible modules in many languages, but I will use Python here. The reasons for selecting Python are:

1. All the modules of Ansible are written in Python.
2. Easy and direct integration with Ansible is possible.
3. Python reduces the amount of code required, as we can reuse Ansible's boilerplate code.
4. Handling JSON output is easy.

Most importantly, I love to code in Python.

Let's start by creating a setup for module development. It is very simple: we need a directory in which we place our playbook file and, inside it, a 'library' folder, so that our playbook automatically looks there for the custom module.

[root@server hands_on_ansible]# mkdir custom_module
[root@server hands_on_ansible]# cd custom_module
[root@server custom_module]# mkdir library
[root@server custom_module]# touch custom.yml
[root@server custom_module]# touch library/custom2.py


Now we will define our playbook.
There must be a reason to define a custom module. It may be that you want to accomplish a custom function or task, or that no existing module quite fits what you need.

For this post, I would like to create a simple module for monitoring the CPU usage of my Linux servers for the 'n' peak processes. There are several commands in Linux for monitoring CPU and RAM usage, e.g. top, htop, ps, free, etc. Here I will use the 'ps' command with a set of arguments to display the process id, parent process id, command, etc. in a meaningfully sorted manner for every server in my inventory.

Since we already know how to write an Ansible playbook with the .yml extension, we will start with that.

1. My custom.yml file is defined to operate on 'hosts' as 'dbservers'.
2. I have set 'gather_facts' to 'no' because I don't want the extra delay of fact gathering in the output.
3. The name of my task is 'Get top cpu consuming process'
4. In my 'custom2' named module, I have passed 7 parameters viz. pid (process id), ppid (parent process id), cmd (command), mem(memory info), cpu (cpu info), sort (to define sorting basis), num (number of peak processes to show in output).
5. Finally, I have used the 'register' keyword to save the output in a 'result' variable and displayed it with the 'debug' module.


[root@server Desktop]# cat hands_on_ansible/custom_module/custom.yml
---

- hosts: dbservers
  gather_facts: no

  tasks:
    - name: Get top cpu consuming process
      custom2: 
        pid: pid
        ppid: ppid
        cmd: cmd
        mem: mem
        cpu: cpu
        sort: mem
        num: '17'    
      register: result

    - debug:
         var: result


Secondly, we will focus on the Ansible module itself, 'custom2.py'. An Ansible module should contain some basic information like metadata, documentation, examples, return values, etc. For more details on writing Ansible modules, follow the documentation. I have included only the documentation block, to make the module understandable.

#!/usr/bin/python

DOCUMENTATION = '''
---
module: my_monitoring_module
short_description: This is my server cpu-memory monitoring module.
version_added: "2.4"
description:
    - "This is my cpu-memory monitoring module to show 'n' peak processes at the time of module call."
options:
    pid:
        description:
            - This is the value same as pid denoting process id.
        required: true
    ppid:
        description:
            - This is the value same as ppid denoting parent process id.
        required: true
    cmd:
        description:
            - This is the value same as cmd denoting the command in process.
        required: false
    mem:
        description:
            - This is the value same as mem denoting the memory in percent for a process.
        required: true
        alias: memory
    cpu:
        description:
            - This is the value same as cpu denoting the cpu usage in percent for a process.
        required: true
    sort:
        description:
            - This is the value as either cpu or mem to sort by the order of cpu usage or memory usage.
        required: true
    num:
        description:
            - This is the value to output the number of peak processes.
        required: true
author:
    - Nitin (@4hathacker)
'''

from ansible.module_utils.basic import *
import subprocess

def main():
  # defining the available arguments/parameters
  # the user must pass to module
  module = AnsibleModule(
      argument_spec = dict(
          pid  = dict(required=True, type='str'),
          ppid = dict(required=True, type='str'),
          cmd  = dict(required=False, type='str'),
          mem  = dict(aliases=['memory'], required=True, type='str'),
          cpu  = dict(required=True, type='str'),
          sort = dict(required=True, type='str'), 
          num  = dict(required=True, type='str')
      ),
  # module supports check_mode
  # but values at exit remain unchanged
  # as it is for monitoring purposes only
      supports_check_mode=True
  )

  if module.check_mode:
    module.exit_json(changed=False)
 
  params = module.params
 
  # passing the params to a shell command
  # command = 'ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n num'
  # passing the command in subprocess module


  if params['cmd'] is None:
      process = subprocess.Popen("ps -eo" + params['pid'] + "," + params['ppid'] + ",%" + params['mem'] + ",%" + params['cpu'] + " --sort=-%" + params['sort'] + " | head -n " + params['num'],shell=True, stdout=subprocess.PIPE, close_fds=True)
  else:
    process = subprocess.Popen("ps -eo" + params['pid'] + "," + params['cmd'] + "," + params['ppid'] + ",%" + params['mem'] + ",%" + params['cpu'] + " --sort=-%" + params['sort'] + " | head -n " + params['num'],shell=True, stdout=subprocess.PIPE, close_fds=True)


  exists = process.communicate()[0]
 
  # getting result if process is not None
  if exists:
        result = exists.split('\n')
        module.exit_json(changed=True, meminfo=result)
  else:
        err_info = "Error Occured: Not able to get peak cpu info"
        module.fail_json(msg=err_info)

if __name__ == '__main__':
        main()

With respect to the above mentioned module,

1. It is clear from the module that I have used the 'ps' command to accomplish the task.
2. User can use 'memory' as an alias for 'mem' in custom.yml file.
3. Python's subprocess module is used to run the ps command.
4. Result is displayed after splitting lines by '\n'.



It's confession time...

There is no need to write a module to find the top 'n' peak processes by CPU and memory usage. We can accomplish the same task by passing the ps command with the same arguments to Ansible's shell module. Such a check.yml file would look as given below.

[root@server Desktop]# cat check.yml
---
- hosts: dbservers
  tasks:
    - name: check memory and cpu usage in dbservers
      shell: "ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n 17"
      register: result

    - debug: var=result


Just have a look at the result of running this file. You can observe a few more fields like rc, stdout, stdout_lines, etc.; explore the Ansible docs if you would like to add these to your own module.


 

November 26, 2017

Database Server provisioning using Ansible

by 4hathacker  |  in Python at  2:27 PM
Hello everyone in MyAnsibleQuest!

I went through webserver provisioning in the previous post. In this post, I am going to provision a database server on my dbservers machine [10.0.0.228], as defined in my /etc/ansible/hosts file.



I used modules like 'yum', 'apt', 'block', etc. during the webserver installation. In addition to them, I will use a few more modules to install the MySQL server on dbservers and then secure it by deleting the default test databases, accounts with blank passwords, etc., all from an Ansible playbook.

[root@server Desktop]# vim hands_on_ansible/mysql.yml

---
- hosts: dbservers

  tasks:
     - name: Add the MySQL community repository
       action: yum name=http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm

     - name: To install mysql packages
       action: yum name={{ item }}
       with_items:
           - MySQL-python
           - mysql
           - mysql-server

     - name: Start the MySQL service
       action: service name=mysqld state=started

     - name: Changing root password for all root accounts
       mysql_user: name=root host={{ item }} password={{ mysql_root_password }}
       with_items:
           - "{{ ansible_hostname }}"
           - 127.0.0.1
           - ::1
           - localhost

     - name: copy config file of mysql (.my.cnf) with root credentials
       template: src=templates/my.cnf.j2 dest=/root/.my.cnf owner=root mode=0600

     - name: delete anonymous MySQL server user for {{ ansible_hostname }}
       action: mysql_user user="" host={{ ansible_hostname }} state="absent"

     - name: delete anonymous MySQL server user for localhost
       action: mysql_user user="" state="absent"

In the above playbook, there are seven tasks to accomplish on the dbservers hosts.

1. Here I have used the old legacy format, action: module options. According to the Ansible docs it is not recommended, but it is still common in playbooks; I find it more readable, though that is an individual choice. My first two tasks install MySQL: the first adds the MySQL community repository rpm, which is the same as:

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh  http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm

The second task then installs the three packages mysql (client), mysql-server and MySQL-python using the same 'yum' module. 'with_items' is used to repeat a task over a defined list; my list contains three packages and the 'yum' module installs them one by one.

2. The next task starts the MySQL service using the 'service' module.

3. The following four tasks do some security work to harden MySQL. We know that the MySQL server installs with a default login_user of 'root' and no password. To secure this user as part of an idempotent playbook, we must create at least two tasks: the first changes the root user's password without providing any login_user/login_password details, and the second drops a ~/.my.cnf file containing the new root credentials. Subsequent runs of the playbook then succeed by reading the new credentials from that file. So, I have created a variable mysql_root_password in the /etc/ansible/hosts file. The password is set for {{ ansible_hostname }}, 127.0.0.1, ::1 and localhost using the 'mysql_user' module. This ensures that whoever wants to interact with MySQL must use the password defined by mysql_root_password.

[root@server Desktop]# vim /etc/ansible/hosts

node218 ansible_ssh_host=10.0.0.218
node227 ansible_ssh_host=10.0.0.227
node228 ansible_ssh_host=10.0.0.228
node229 ansible_ssh_host=10.0.0.229

[webservers]
node218
node227

[dbservers]
node228

[lbservers]
node229

[datacenter:children]
webservers
dbservers
lbservers

[datacenter:vars]
ansible_ssh_user=root
ansible_ssh_pass=redhat123
mysql_root_password=redhat123

4. I have created a Jinja2 template, my.cnf.j2, to set the client credentials and copy that config file to the nodes.

[root@server Desktop]# vim hands_on_ansible/templates/my.cnf.j2

[client]
user=root
password={{ mysql_root_password }}

5. In the final two tasks, I have removed the anonymous user accounts for MySQL.

Finally, we can just check whether mysql is configured properly or not.


WebServer Provisioning using Ansible

by 4hathacker  |  in Python at  2:26 PM
Hi folks!

This blog post continues the previous one, in which we discussed Ansible ad-hoc commands. In this post, we will cover playbook writing for Ansible automation.



Plays or playbooks are simply a list of instructions describing the steps needed to bring a server to a certain configuration state. For example, if we want to host a website on a system, we need apache to be installed. The initial state of the system is one in which apache is not present. We can write a play/playbook to install apache on it. This is a use case of change management and provisioning using Ansible.

In the previous post, I have described a default hosts file for my development scenario.

[root@server Desktop]# cat /etc/ansible/hosts
node218 ansible_ssh_host=10.0.0.218
node227 ansible_ssh_host=10.0.0.227
node228 ansible_ssh_host=10.0.0.228
node229 ansible_ssh_host=10.0.0.229

[webservers]
node218
node227

[dbservers]
node228

[lbservers]
node229

[datacenter:children]
webservers
dbservers
lbservers

[datacenter:vars]
ansible_ssh_user=q
ansible_ssh_pass=q

Playbooks are written in YAML format with the .yaml or .yml file extension. YAML is a human-readable data serialization language; it offers both a block style and an "in-line" (flow) style for denoting associative arrays and lists. An Ansible playbook usually starts with three hyphens, "---". The most important thing to keep in mind while writing an Ansible playbook is indentation and spacing.
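For example, the same data can be written in either style (the values are only illustrative):

# block style
packages:
  - httpd
  - mariadb-server
owner:
  name: nitin
  role: admin

# in-line (flow) style
packages: [httpd, mariadb-server]
owner: {name: nitin, role: admin}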

Here I am going to accomplish the task of installing apache server in webservers list.

A. Check whether SELinux is enforcing or not, and install the apache server on the webservers machines.

 [root@server hands_on_ansible]# vim apache.yml

---
- name: check SELinux then install and start apache
  hosts: webservers

  tasks:
  - name: Check to see if SELinux is working
    command: getenforce
    register: sestatus
    changed_when: false

  - name: install and start webserver
    block:
     - yum: name=httpd state=present
     - service: name=httpd state=started enabled=yes
    when: ansible_distribution == "RedHat"

  - name: install and start webserver
    block:
     - apt: name=apache2 state=present
     - service: name=apache2 state=started enabled=yes
    when: ansible_distribution == "Debian"

1. As we can see, the very first line is '---', which marks the start of the apache.yml file.

2. The next line starts with '- ' (a dash and a space). A YAML file consists of dictionaries, i.e. key and value pairs (key: value). Here I have defined my hosts as webservers, so apache will be installed only on the webserver IPs.

3. The 'name' line gives the name of the play, i.e. what is to be accomplished; in our case, checking SELinux and then installing and starting apache. Please note that 'name' is not a module here; it is just there to improve readability.

4. A list of dictionaries is defined under tasks; it contains the names of the individual tasks and the modules used to accomplish them.

5. The first task checks whether SELinux is working or not. For this, the command module is used with the raw command 'getenforce'. The result is saved with 'register' in a variable called sestatus. The command module always reports a change, so 'changed_when' is set to false to override that. Note that for Ansible modules that manage SELinux, the Python bindings 'libsemanage-python' and 'libselinux-python' need to be installed on the managed nodes.

6. In the next tasks, I have separated the apache installation for RedHat and Debian machines. For RedHat, I use the 'yum' module to install httpd and then start the httpd service with the 'service' module. Both steps are wrapped in a 'block', which runs only when the node's ansible_distribution is 'RedHat'. In a similar fashion, the next task checks for the 'Debian' distribution and installs 'apache2' using the 'apt' module.

Now to run the same, I will write in the terminal:

[root@server Desktop]# ansible-playbook hands_on_ansible/apache.yml



We can observe clearly that Ansible first gathers information about the nodes, and then works through the tasks as explained above. The Debian tasks are shown as skipped. Finally, it provides a summary of the tasks, to account for the change management.








November 16, 2017

Getting Started with Ansible and Ad-hoc commands

by 4hathacker  |  in Python at  8:28 PM
Hi folks!

In the previous blog post, we had a glimpse of Ansible and its use cases. Here, we will continue our quest of learning Ansible with the RedHat Developer version of Enterprise Linux.


Ansible has an agentless architecture, which means no extra utility needs to be installed on the managed machines for Ansible to automate them. It relies on Python, which already exists in most stable Linux distributions, and uses OpenSSH and WinRM for communication with remote machines. This decreases the chances of exploitation and provides an efficient and secure way to automate.

I have explained the installation of Ansible in the previous post. Ansible only needs to be installed on one master machine, which acts as our host controller. In my lab setup, I have:

1. host-controller

IP - 10.0.0.1              Hostname - server.sharma.com

2. remote machines

IP - 10.0.0.218          Hostname - node218.sharma.com

IP - 10.0.0.227          Hostname - node227.sharma.com

IP - 10.0.0.228          Hostname - node228.sharma.com

IP - 10.0.0.229          Hostname - node229.sharma.com

Ansible has a lot of tools in its toolkit. These tools are called modules, and each module extends Ansible's capability to perform a particular task. There are more than 450 modules to work with, but we will explore only some of them in this post. Using Ansible commands is very simple.

[root@server Desktop]# ansible localhost -m setup

The Ansible command starts with 'ansible', then 'localhost' for our host machine. After that comes '-m', which gives a module name (setup). The 'setup' module gathers facts about the remote machine and returns a large lump of information. We can also filter out particular information by using '-a' to pass arguments to the module.

[root@server Desktop]# ansible 10.0.0.218 -m setup -a filter='ansible_distribution_*'
10.0.0.218 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/redhat-release",
        "ansible_distribution_file_variety": "RedHat",
        "ansible_distribution_major_version": "7",
        "ansible_distribution_release": "Maipo",
        "ansible_distribution_version": "7.2"
    },
    "changed": false,
    "failed": false
}

Similarly, we can ping every remote machine to check whether it is ready or not.

[root@server Desktop]# ansible 10.0.0.218 -m ping -k
SSH password:
10.0.0.218 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

As per the output, my ping is successful and it has returned a 'pong'. It asked for the SSH password because of '-k'. But this is not a scalable way to do it: if there are plenty of remote machines to automate, we cannot list all of them on the command line and type a password each time. To handle this, there is an 'inventory' file. The inventory file is the collection of hosts (nodes) that Ansible can work against. So, on my host controller, I will create one file, 'inventory', and write the remote IPs with the username and password.

[root@server Desktop]# vim inventory
[root@server Desktop]# cat inventory

10.0.0.218 ansible_ssh_user=q ansible_ssh_pass=q
10.0.0.229 ansible_ssh_user=q ansible_ssh_pass=q
10.0.0.228 ansible_ssh_user=q ansible_ssh_pass=q
10.0.0.227 ansible_ssh_user=q ansible_ssh_pass=q

After this, I can ping all the machines at once without even entering the password.

[root@server Desktop]# ansible all -i inventory -m ping
10.0.0.218 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
10.0.0.228 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
10.0.0.227 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
10.0.0.229 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

But I would still like to organize my servers according to their purpose in the company: some are database servers, some are web servers, and so on. And I don't want to repeat the password for every machine; it is the same thing written over and over. What I can do is create groups for the different servers and use variables to store the user credentials. I will define a default inventory at '/etc/ansible/hosts'.

[root@server Desktop]# vim /etc/ansible/hosts
[root@server Desktop]# cat /etc/ansible/hosts

node218 ansible_ssh_host=10.0.0.218
node227 ansible_ssh_host=10.0.0.227
node228 ansible_ssh_host=10.0.0.228
node229 ansible_ssh_host=10.0.0.229

[webservers]
node218
node227

[dbservers]
node228

[lbservers]
node229

[datacenter:children]
webservers
dbservers
lbservers

[datacenter:vars]
ansible_ssh_user=q
ansible_ssh_pass=q

In the '/etc/ansible/hosts' file, I have declared the individual hosts as nodes. Then I created three groups: webservers, dbservers and lbservers. After that, I made an entry for a larger group called datacenter, whose children are all of those groups. Since datacenter sits above all the groups, I defined variables for it under 'vars' and provided the user credentials there. Now there is no need to give the credentials or the inventory location on the ansible command line.

[root@server Desktop]# ansible all -m ping
node227 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
node229 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
node218 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
node228 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

With this setup we can install packages very easily with a single, hassle-free command, provided the user whose credentials are given in the inventory file has the privileges to do so.

[root@server Desktop]# ansible webservers -m yum -a "name=httpd state=present"
node218 | SUCCESS => {
    "changed": true,
    "failed": false,
    "msg": "Repository 'hadoopmain' is missing name in configuration, using id\n",
    "rc": 0,
    "results": [
        "Loaded plugins: product-id, search-disabled-repos, subscription-manager\nThis system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.\nResolving Dependencies\n--> Running transaction check\n---> Package httpd.x86_64 0:2.4.6-40.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package        Arch            Version               Repository           Size\n================================================================================\nInstalling:\n httpd          x86_64          2.4.6-40.el7          hadoopmain          1.2 M\n\nTransaction Summary\n================================================================================\nInstall  1 Package\n\nTotal download size: 1.2 M\nInstalled size: 3.7 M\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n  Installing : httpd-2.4.6-40.el7.x86_64                                    1/1 \n  Verifying  : httpd-2.4.6-40.el7.x86_64                                    1/1 \n\nInstalled:\n  httpd.x86_64 0:2.4.6-40.el7                                                   \n\nComplete!\n"
    ]
}
node227 | SUCCESS => {
    "changed": true,
    "failed": false,
    "msg": "Repository 'hadoopmain' is missing name in configuration, using id\n",
    "rc": 0,
    "results": [
        "Loaded plugins: product-id, search-disabled-repos, subscription-manager\nThis system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.\nResolving Dependencies\n--> Running transaction check\n---> Package httpd.x86_64 0:2.4.6-40.el7 will be installed\n--> Processing Dependency: httpd-tools = 2.4.6-40.el7 for package: httpd-2.4.6-40.el7.x86_64\n--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-40.el7.x86_64\n--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-40.el7.x86_64\n--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-40.el7.x86_64\n--> Running transaction check\n---> Package apr.x86_64 0:1.4.8-3.el7 will be installed\n---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed\n---> Package httpd-tools.x86_64 0:2.4.6-40.el7 will be installed\n---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package            Arch          Version               Repository         Size\n================================================================================\nInstalling:\n httpd              x86_64        2.4.6-40.el7          hadoopmain        1.2 M\nInstalling for dependencies:\n apr                x86_64        1.4.8-3.el7           hadoopmain        103 k\n apr-util           x86_64        1.5.2-6.el7           hadoopmain         92 k\n httpd-tools        x86_64        2.4.6-40.el7          hadoopmain         82 k\n mailcap            noarch        2.1.41-2.el7          hadoopmain         31 k\n\nTransaction Summary\n================================================================================\nInstall  1 Package (+4 Dependent packages)\n\nTotal download size: 1.5 M\nInstalled size: 4.3 M\nDownloading packages:\n--------------------------------------------------------------------------------\nTotal                                              5.5 MB/s | 1.5 MB  00:00     \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n  Installing : apr-1.4.8-3.el7.x86_64                                       1/5 \n  Installing : apr-util-1.5.2-6.el7.x86_64                                  2/5 \n  Installing : httpd-tools-2.4.6-40.el7.x86_64                              3/5 \n  Installing : mailcap-2.1.41-2.el7.noarch                                  4/5 \n  Installing : httpd-2.4.6-40.el7.x86_64                                    5/5 \n  Verifying  : mailcap-2.1.41-2.el7.noarch                                  1/5 \n  Verifying  : httpd-tools-2.4.6-40.el7.x86_64                              2/5 \n  Verifying  : apr-1.4.8-3.el7.x86_64                                       3/5 \n  Verifying  : apr-util-1.5.2-6.el7.x86_64                                  4/5 \n  Verifying  : httpd-2.4.6-40.el7.x86_64                                    5/5 \n\nInstalled:\n  httpd.x86_64 0:2.4.6-40.el7                                                   \n\nDependency Installed:\n  apr.x86_64 0:1.4.8-3.el7                 apr-util.x86_64 0:1.5.2-6.el7       \n  httpd-tools.x86_64 0:2.4.6-40.el7        mailcap.noarch 0:2.1.41-2.el7       \n\nComplete!\n"
    ]
}

If the user cannot do this directly, let the ansible command ask for the password by appending '--ask-sudo-pass'.

[root@server Desktop]# ansible webservers -m yum -a "name=httpd state=present" --ask-sudo-pass
[DEPRECATION WARNING]: The sudo command line option has been deprecated in
favor of the "become" command line arguments. This feature will be removed in
version 2.6. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
SUDO password:
node227 | SUCCESS => {
    "changed": false,
    "failed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "httpd-2.4.6-40.el7.x86_64 providing httpd is already installed"
    ]
}
node218 | SUCCESS => {
    "changed": false,
    "failed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "httpd-2.4.6-40.el7.x86_64 providing httpd is already installed"
    ]
}



The last thing I would like to share is debugging Ansible ad-hoc commands. This is controlled by the verbosity level given in the command.

Example 1:
[root@server Desktop]# ansible lbservers -m ping -v
No config file found; using defaults
node229 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}


With Example 1, '-v' is passed, which only gives debugging information about the config file.

Example 2:
[root@server Desktop]# ansible lbservers -m ping -vvv
ansible 2.4.0.0
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 11 2015, 17:47:16) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
No config file found; using defaults
Parsed /etc/ansible/hosts inventory source with ini plugin
META: ran handlers
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/ping.py
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<10.0.0.229> (0, '/home/q\n', '')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239 `" && echo ansible-tmp-1510841273.0-271302847336239="` echo /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239 `" ) && sleep 0'"'"''
<10.0.0.229> (0, 'ansible-tmp-1510841273.0-271302847336239=/home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239\n', '')
<10.0.0.229> PUT /tmp/tmp_i5vky TO /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/ping.py
<10.0.0.229> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab '[10.0.0.229]'
<10.0.0.229> (0, 'sftp> put /tmp/tmp_i5vky /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/ping.py\n', '')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'chmod u+x /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/ /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/ping.py && sleep 0'"'"''
<10.0.0.229> (0, '', '')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab -tt 10.0.0.229 '/bin/sh -c '"'"'/usr/bin/python /home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/ping.py; rm -rf "/home/q/.ansible/tmp/ansible-tmp-1510841273.0-271302847336239/" > /dev/null 2>&1 && sleep 0'"'"''
<10.0.0.229> (0, '\r\n{"invocation": {"module_args": {"data": "pong"}}, "ping": "pong"}\r\n', 'Shared connection to 10.0.0.229 closed.\r\n')
node229 | SUCCESS => {
    "changed": false,
    "failed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}
META: ran handlers
META: ran handlers


With Example 2, '-vvv' is passed which gives the information about ansible version, module location, python version, connection profile, etc.

Example 3:
[root@server Desktop]# ansible lbservers -m ping -vvvv
ansible 2.4.0.0
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 11 2015, 17:47:16) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
No config file found; using defaults
setting up inventory plugins
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
META: ran handlers
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/ping.py
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<10.0.0.229> (0, '/home/q\n', 'OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4386\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476 `" && echo ansible-tmp-1510841304.79-155502950928476="` echo /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476 `" ) && sleep 0'"'"''
<10.0.0.229> (0, 'ansible-tmp-1510841304.79-155502950928476=/home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476\n', 'OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4386\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.0.0.229> PUT /tmp/tmpji9Qoj TO /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ping.py
<10.0.0.229> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab '[10.0.0.229]'
<10.0.0.229> (0, 'sftp> put /tmp/tmpji9Qoj /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ping.py\n', 'OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4386\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "posix-rename@openssh.com" revision 1\r\ndebug2: Server supports extension "statvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "fstatvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "hardlink@openssh.com" revision 1\r\ndebug2: Server supports extension "fsync@openssh.com" revision 1\r\ndebug3: Sent message fd 5 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/q size 0\r\ndebug3: Looking up /tmp/tmpji9Qoj\r\ndebug3: Sent message fd 5 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ping.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:31422\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 31422 bytes at 32768\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab 10.0.0.229 '/bin/sh -c '"'"'chmod u+x /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ping.py && sleep 0'"'"''
<10.0.0.229> (0, '', 'OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4386\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.0.0.229> ESTABLISH SSH CONNECTION FOR USER: q
<10.0.0.229> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=q -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/cc742780ab -tt 10.0.0.229 '/bin/sh -c '"'"'/usr/bin/python /home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/ping.py; rm -rf "/home/q/.ansible/tmp/ansible-tmp-1510841304.79-155502950928476/" > /dev/null 2>&1 && sleep 0'"'"''
<10.0.0.229> (0, '\r\n{"invocation": {"module_args": {"data": "pong"}}, "ping": "pong"}\r\n', 'OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4386\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 10.0.0.229 closed.\r\n')
node229 | SUCCESS => {
    "changed": false,
    "failed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}
META: ran handlers
META: ran handlers


With Example 3, '-vvvv' is passed, which gives the same information about the Ansible version, module location, Python version, connection profile, etc.

[root@server Desktop]# ansible lbservers -m ping -vvv | grep '<10.0.0.229>' -c
15

 
[root@server Desktop]# ansible lbservers -m ping -vvvv | grep '<10.0.0.229>' -c
15


While observing the output for Example 2 and Example 3, notice that <10.0.0.229> is returned on 15 lines in both cases, but the result of Example 3 includes additional SSH connection debugging information.



