Hello there!
I have heard the buzzword "smart work" many times... but how does one actually work smart? There are many perspectives on this, depending on the nature of the work; for now I am going to discuss Linux. For system admins and people working in a *nix environment, smart work matters a lot. They have to manage an enormous amount of work across different machines at the same time: continuously monitoring systems, changing user permissions, creating users, and so on.
I am glad to tell you that all of this kind of work can be done easily with the help of one python module called 'fabric'. 'Fabric' is a very simple python module with some powerful functions that use SSH for application deployment and sysadmin tasks. You can go through the fabric docs to learn more. To install fabric on Linux, follow the steps for your distribution.
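For example, Fabric can usually be installed with pip; the distribution package names below are the common ones but may differ on your system (this post uses the classic 'fabric.api' interface, i.e. the Fabric 1.x line):

```shell
# Install Fabric 1.x via pip (provides the fabric.api interface used below)
pip install 'fabric<2'

# Or via the distribution package manager (package names may vary):
sudo apt-get install fabric   # Debian/Ubuntu
sudo yum install fabric       # Fedora/RHEL/CentOS
```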
Here I am going to demonstrate how to use fabric module to automatically run a python script on remote machines, maintaining the logs and backups for the future use, and getting back the logs on the main machine.
Aim:
1. To set optimum file permissions ('/etc/group'-644, '/etc/passwd'-644, '/etc/shadow'-600) on remote machines given their ip addresses ('10.0.0.218', '10.0.0.222', '10.0.0.227', '10.0.0.228').
2. Create a backup of the current file permissions in the '/tmp' directory.
3. Create the log files in the format [filename_ip_date.log].
4. To keep track of the logs, save the log files from the remote machines on the main machine.
To achieve all of the above, I will proceed systematically and show you how to approach such problems. The first baby step is the main task itself: changing the file permissions. Our automation script will then repeat it on each of the given ip addresses.
Firstly, we will create a python script, 'test1.py', to change the file permissions. This can be done easily with the os or subprocess module by passing it the chmod commands:
import os

os.system("sudo chmod 644 /etc/passwd")
os.system("sudo chmod 600 /etc/shadow")
os.system("sudo chmod 644 /etc/group")
But we can't forget to take a backup of each file's permissions before changing anything, saved as an .acl (access control list) file. The commands to take such backups look like this:
os.system("getfacl -p /etc/group > /tmp/group.acl")
os.system("getfacl -p /etc/passwd > /tmp/passwd.acl")
os.system("getfacl -p /etc/shadow > /tmp/shadow.acl")
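A small aside: os.system discards exit statuses, so a failed chmod or getfacl would go unnoticed. A sketch of the same chmod step with subprocess instead (my own variation, not from the original script), demonstrated on a throwaway temp file so it runs without root:

```python
# The real script would target /etc/passwd, /etc/shadow and /etc/group;
# a temp file is used here so the sketch needs no root privileges.
import os
import stat
import subprocess
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# subprocess.call returns the command's exit status, so failures are visible
ret = subprocess.call(["chmod", "644", path])

# Read back the permission bits to confirm the change took effect
mode = stat.S_IMODE(os.stat(path).st_mode)

os.unlink(path)
```

The same pattern applies to the getfacl backups: check the return value and you know the backup really exists before touching the originals.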
For the log file name, we have to find out the ip address of each machine using python. This can be done with the 'socket' or 'netifaces' modules; here I have simply used the 'os' module to extract the ip of an interface. Each NIC interface has its own ip address; I will use the 'eth0' interface's ip in the file name.
ipv4 = os.popen('ip addr show eth0').read().split("inet ")[1].split("/")[0]
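The one-liner works, but it raises an IndexError if the interface has no "inet" line. A slightly more defensive version (my own sketch, not from the original post) wraps the parsing in a function; the sample output below is canned so it runs anywhere:

```python
def first_ipv4(ip_addr_output):
    # 'ip addr show eth0' prints a line like: "    inet 10.0.0.218/24 ..."
    for line in ip_addr_output.splitlines():
        line = line.strip()
        if line.startswith("inet "):
            # take the address field and drop the /prefix suffix
            return line.split()[1].split("/")[0]
    return None  # no IPv4 address on this interface

sample = """2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.218/24 brd 10.0.0.255 scope global eth0
"""
print(first_ipv4(sample))  # 10.0.0.218
```

In test1.py you would feed it os.popen('ip addr show eth0').read() instead of the canned sample.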
For the date, I will use two modules, 'datetime' and 'time' (either one alone would also do). The date for the file name goes into the variable 'odat' in DD-MM-YYYY format, while 'dat' holds the date and time of the script run for printing inside the log file.
import datetime, time
odat = (time.strftime("%d-%m-%Y"))
dat = datetime.datetime.now()
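As noted, one module is enough; for example both values can come from 'datetime' alone (a sketch of the alternative):

```python
import datetime

# Same two values using only the datetime module
now = datetime.datetime.now()
odat = now.strftime("%d-%m-%Y")   # date for the log file name, e.g. 07-03-2016
dat = now                         # full timestamp printed inside the log
```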
The penultimate task is to decide what to write in the log file. In the current scenario, the log should describe what the script is doing, which parameters are changing, where the backups are stored, the time of the script run, and the ip of the system it is running on. You can add more info if you want. This can be done with the code snippet given below.
print "***************Log Started for test1*******************"
print '\nOn Date: ' + str(dat) + '\n' + 'In System with ip: ' + ipv4 + '\n'
print 'Updated access permissions:\n'
p1 = os.popen("ls -l /etc/passwd")
print p1.readline()
p2 = os.popen("ls -l /etc/shadow")
print p2.readline()
p3 = os.popen("ls -l /etc/group")
print p3.readline()
print "***************Log Ended for test1*******************"
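The automation step below expects each remote machine to end up with a log file named /tmp/test1_<ip>_<date>.log, so test1.py has to send its print output there. One way to do that (a sketch of the missing plumbing; the path and values are illustrative) is to redirect sys.stdout at the top of the script:

```python
import sys
import time

# Illustrative values; in test1.py these come from the ip and date code above
ipv4 = "10.0.0.218"
odat = time.strftime("%d-%m-%Y")
log_path = "/tmp/test1_" + ipv4 + "_" + odat + ".log"

# Redirect all subsequent print output into the log file
log = open(log_path, "w")
sys.stdout = log

print("***************Log Started for test1*******************")
# ... the rest of the script's prints go here ...

# Restore stdout and close the log before the script exits
sys.stdout = sys.__stdout__
log.close()
```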
The very final task is to automate all of this and fetch the logs back to the main system. For that I will use the 'fabric.api' utility functions and pass the appropriate parameters to them. The final 'automate.py' script looks like this:
import os
import time
from fabric.api import env, sudo, put, get

def main():
    odat = time.strftime("%d-%m-%Y")
    # directory on the main machine to collect all the logs
    os.system('mkdir /test1-Related')
    host_list = ["10.0.0.218", "10.0.0.222", "10.0.0.227", "10.0.0.228"]
    for item in host_list:
        env.host_string = item
        env.user = "root"
        env.password = "redhat123"
        # make a directory /tmp/test1 on the remote server
        sudo("mkdir /tmp/test1")
        # put the script in /tmp/test1
        put("/root/Desktop/test1.py", "/tmp/test1/test1.py")
        # run the script on the remote machine
        sudo("python /tmp/test1/test1.py")
        # get the log file and save it in /test1-Related
        log_remote_path = '/tmp/test1_' + str(item) + '_' + str(odat) + '.log'
        log_local_path = '/test1-Related'
        get(log_remote_path, log_local_path)

main()
After running the 'automate.py' script, the results are shown on the terminal for each ip address, and you can check '/test1-Related' for the log files, the same ones created in '/tmp' on the remote machines. In a similar fashion, we can run n tasks/scripts on n remote machines easily. To find the complete code, please visit my Github account.
Checking node218 |
Checking node222 |
Checking node227 |
Checking node228 |