find name of nfs server with python [duplicate]

How do I iterate through the mount points of a Linux system using Python? I know I can do it with the df command, but is there a built-in Python function for this?
Also, I'm writing a Python script to monitor mount point usage and send email notifications. Would it be better / faster to do this as a normal shell script rather than a Python script?
Thanks.

The Python and cross-platform way:
pip install psutil # or add it to your setup.py's install_requires
And then:
import psutil
partitions = psutil.disk_partitions()
for p in partitions:
    print p.mountpoint, psutil.disk_usage(p.mountpoint).percent
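Since the end goal is monitoring, here is a minimal sketch of a threshold check built on the same psutil calls (the 90% threshold and the alert wording are assumptions, not part of the original answer); the resulting text can be fed straight into the smtplib example below:
import psutil

THRESHOLD = 90  # percent full; adjust to taste

alerts = []
for p in psutil.disk_partitions():
    usage = psutil.disk_usage(p.mountpoint)
    if usage.percent >= THRESHOLD:
        alerts.append('%s is %.1f%% full' % (p.mountpoint, usage.percent))

if alerts:
    print('\n'.join(alerts))  # or hand this string to the smtplib snippet below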

Running the mount command from within Python is not the most efficient way to solve the problem. You can apply Khalid's answer and implement it in pure Python:
with open('/proc/mounts', 'r') as f:
    mounts = [line.split()[1] for line in f.readlines()]
import smtplib
import email.mime.text
msg = email.mime.text.MIMEText('\n'.join(mounts))
msg['Subject'] = <subject>
msg['From'] = <sender>
msg['To'] = <recipient>
s = smtplib.SMTP('localhost') # replace 'localhost' with the mail exchange host if necessary
s.sendmail(<sender>, <recipient>, msg.as_string())
s.quit()
where <subject>, <sender> and <recipient> should be replaced by appropriate strings.

The bash way to do it, just for fun:
df -h $(awk '{print $2}' /proc/mounts) | mail -s "$(date +%Y-%m-%d)" you@me.com

I don't know of any library that does it but you could simply launch mount and return all the mount points in a list with something like:
import commands
mount = commands.getoutput('mount -v')
mntlines = mount.split('\n')
mntpoints = map(lambda line: line.split()[2], mntlines)
The code retrieves all the text from the mount -v command, splits the output into a list of lines and then parses each line for the third field which represents the mount point path.
If you wanted to use df then you can do that too but you need to remove the first line which contains the column names:
import commands
mount = commands.getoutput('df')
mntlines = mount.split('\n')[1::] # [1::] trims the first line (column names)
mntpoints = map(lambda line: line.split()[5], mntlines)
Once you have the mount points (the mntpoints list) you can use a for loop to process each one, with code like this:
for mount in mntpoints:
    # process each mount here; for this example we just print it
    print(mount)
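Note that the commands module only exists in Python 2; it was removed in Python 3. A roughly equivalent sketch for Python 3 using subprocess (same parsing, with a list comprehension instead of map):
import subprocess

# Python 3 replacement for commands.getoutput('mount -v')
mount = subprocess.check_output(['mount', '-v']).decode()
mntlines = mount.strip().split('\n')
mntpoints = [line.split()[2] for line in mntlines]
print(mntpoints)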
Python has a mail processing module called smtplib, and one can find information in the Python docs

Related

Create a python script and use grep command?

I'm creating a script in which I want to grep for a list of specific addresses. Before, I would usually run grep one address at a time, e.g. grep "192.168.1.1" *.
Now I'm turning that into a script.
Example output of print(i):
192.168.1.0
192.168.1.1
192.168.1.2
192.168.1.3
But how do I loop over the list and pass each address to os.system so I can grep for the whole list?
Thanks
import ipaddress
import os
# Ask for the IP address in CIDR format
ip = input("Enter the IP/CIDR: ")
os.chdir("/rs/configs")
print("pwd=%s" % os.getcwd())
for i in ipaddress.IPv4Network(ip):
    print(i)
    os.system("grep $i '*'")  # <-- grep each address from the list across all files in the directory
The basic answer is "grep {} '*'".format(ip) but there are a number of problems with your script.
To improve usability, I would suggest you change the script so it accepts a list of IP addresses as command-line arguments instead.
You want to avoid os.system() in favor of subprocess.run() (see the sketch at the end of this answer).
There is no need to cd to the directory which contains the files you want to examine.
Finally, there is no need really to run grep, as Python itself is quite capable of searching a set of files.
import ipaddress
import glob
import sys

# IPv4Address is used here only to validate the command-line arguments;
# the comparison below works on the string form.
ips = set(str(ipaddress.IPv4Address(ip)) for ip in sys.argv[1:])
for file in glob.glob('/rs/configs/*'):
    with open(file) as lines:
        for line in lines:
            if any(x in line for x in ips):
                print("{0}:{1}".format(file, line.rstrip()))
This should also be significantly more efficient, since it examines each file only once.
It's not entirely clear what you hope to gain by using ipaddress here if you are grepping for individual IP addresses anyway.
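For completeness, here is a minimal sketch of the subprocess.run() route mentioned above, in case you really do want to keep shelling out to grep; the recursive flag and the directory are assumptions on my part:
import subprocess
import sys

# One grep per address; passing the command as a list means no shell is involved.
for ip in sys.argv[1:]:
    subprocess.run(['grep', '-r', ip, '/rs/configs'])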

Python OS Commands Not Working Due To New App Execution

I need to check the lag of GoldenGate processes. In order to do this, I launch GoldenGate and then try to run GoldenGate's own command "info all".
import subprocess as sub
import re
import os
location = str(sub.check_output(['ps -ef | grep mgr'], shell = True)).split()
pattern = re.compile(r'mgr\.prm$')
print(type(location))
for index in location:
    if pattern.search(index) != None:
        gg_location = index[:-14] + "ggsci"
        exec_ggate = sub.call(str(gg_location))
        os.system('info all')
Yet, when I execute GoldenGate it opens GoldenGate's own shell, so I think that is why Python cannot run the "info all" command. How can I solve this problem? If any information is missing, please let me know.
Thank you in advance,
For command automation on Golden Gate you have the following information in the Oracle docs: https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_gettingstarted.htm#GWUAD1096
To input a script
Use the following syntax from the command line of the operating system.
ggsci < input_file
Where:
The angle bracket (<) character pipes the file into the GGSCI program.
input_file is a text file, known as an OBEY file, containing the commands that you want to issue, in the order they are to be issued.
Taking your script (keep in mind I don't know how to code in Python), you can simply execute a shell command in Python in the following way:
import os
os.system("command")
So try doing this:
import os
os.system("ggsci < input_file")
Changing the input_file as indicated by the docs.
I think you will have an easier time doing it this way.
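If you would rather stay in Python instead of an OBEY file, here is a sketch (untested, Python 3.7+, with a placeholder ggsci path) that feeds the command to GGSCI on stdin:
import subprocess

# Placeholder: substitute the ggsci path your script discovers via ps/grep.
gg_location = "/u01/app/oracle/goldengate/ggsci"

# Pipe the command into GGSCI's stdin rather than typing it in the interactive shell.
result = subprocess.run([gg_location], input="info all\n",
                        capture_output=True, text=True)
print(result.stdout)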

send sms with variable in gammu [python]

I'm trying to send a variable in an SMS using gammu. I'm using the gammu smsd RunOnReceive option to run a Python script when I send a message to my Raspberry Pi from my phone. This is what the script looks like.
#!/usr/bin/python
import os
os.system("sh .webgps.sh > coordinates.text")
file = "/home/pi/coordinates.text"
with open(file) as f:
    (lat, long) = f.read().splitlines()
os.system("echo lat | sudo gammu-smsd-inject TEXT 07xxxxxxxxx")
What this script does is run a shell script which gets the latitude and longitude from my GPS module and puts them in a text file. It then reads the values from the text file and puts the latitude in the lat variable and the longitude in the long variable. I can verify that this works because when I print the variables I can see the latitude and longitude, and they are the same values as the ones in the text file.
Now the bit that I'm having problems with is sending the values to my phone. If I run the Python script as it currently is, I get a message on my phone which just says "lat". What I want is to be sent the actual values for latitude and longitude, and I don't know how to put the variables into the gammu inject text line.
You receive "lat" in your phone, because the python var "lat" is not parsed that easy in the os.system echo call.
Sending a python variable to the shell is a bit weird story.
One solution that has worked for me in a similar situation is one like this:
with open(file) as f:
    (lat, long) = f.read().splitlines()
cmd = "echo " + lat + " | sudo gammu-smsd-inject TEXT 07xxxxxxxxx"
os.system(cmd)
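A slightly safer variant of the same idea, as a sketch only: avoid the shell entirely and pass the message on stdin, which is effectively what the echo pipeline does. The sudo call and phone number mirror the question:
import subprocess

# Read the coordinates exactly as the original script does.
with open("/home/pi/coordinates.text") as f:
    lat, long = f.read().splitlines()

message = "lat: {} long: {}".format(lat, long)

# gammu-smsd-inject reads the message text from stdin when none is given on the command line.
subprocess.run(["sudo", "gammu-smsd-inject", "TEXT", "07xxxxxxxxx"],
               input=message.encode())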
Better to use the python-gammu library itself: it gives you straightforward access to the phone and better error handling. Many examples are available in the examples/ directory of the python-gammu sources.
On Ubuntu it is advisable to install from the distribution's repository, so install python-gammu via the apt package manager:
apt-get install python-gammu
Here is an example script for sending a message:
#!/usr/bin/env python
# Sample script to show how to send an SMS
from __future__ import print_function
import gammu
import sys

# Create object for talking with the phone
state_machine = gammu.StateMachine()

# Optionally load config file as defined by first parameter
if len(sys.argv) > 2:
    # Read the configuration from the given file
    state_machine.ReadConfig(Filename=sys.argv[1])
    # Remove the file name from the args list
    del sys.argv[1]
else:
    # Read the configuration (~/.gammurc)
    state_machine.ReadConfig()

# Check parameters
if len(sys.argv) != 2:
    print('Usage: sendsms.py [configfile] RECIPIENT_NUMBER')
    sys.exit(1)

# Connect to the phone
state_machine.Init()

# Prepare message data
# We tell it to use the first SMSC number stored in the phone
message = {
    'Text': 'python-gammu testing message',
    'SMSC': {'Location': 1},
    'Number': sys.argv[1],
}

# Actually send the message
state_machine.SendSMS(message)
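To tie this back to the original question: the 'Text' field is just a Python string, so once lat and long have been read from the coordinates file you can format them straight into the message. A hypothetical adaptation of the block above (state_machine set up exactly as in the sample script):
# Read the coordinates the same way the question's script does.
with open("/home/pi/coordinates.text") as f:
    lat, long = f.read().splitlines()

message = {
    'Text': 'lat: {} long: {}'.format(lat, long),
    'SMSC': {'Location': 1},
    'Number': '07xxxxxxxxx',  # recipient placeholder, as in the question
}
state_machine.SendSMS(message)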

Serial Numbers from a Storage Controller over SSH

Background
I'm working on a bash script to pull serial numbers and part numbers from all the devices in a server rack. My goal is to be able to run a single script (inventory.sh) and walk away while it generates text files containing the information I need. I'm using bash for maximum compatibility; the RHEL 6.7 systems do have Perl and Python installed, but only with minimal libraries. So far I haven't had to use anything other than bash, but I'm not against calling a Perl or Python script from my bash script.
My Problem
I need to retrieve the Serial Numbers and Part numbers from the drives in a Dot Hill Systems AssuredSAN 3824, as well as the Serial numbers from the equipment inside. The only way I have found to get all the information I need is to connect over SSH and run the following three commands dumping the output to a local file:
show controllers
show frus
show disks
Limitations:
I don't have "sshpass" installed, and would prefer not to install it.
The Controller is not capable of storing SSH keys ( no option in custom shell).
The Controller also cannot write or transfer local files.
The Rack does NOT have access to the Internet.
I looked at paramiko, but while Python is installed I do not have pip.
I also cannot use CPAN.
For what it's worth, the output comes back in XML format. (I've already written the code to parse it in bash.)
Right now I think my best option would be to drop a library for Python or Perl into the folder with my other scripts, and write a script that dumps the commands' output to files that I can parse with my bash script. For which language is it easier to just provide a library as a file? I'm looking for a library that is as small and simple to use as possible. I just need a way to get the output of those commands into XML files. Right now I am just running ssh 3 times in my script and having to enter the password each time.
Have a look at SNMP. There is a reasonable chance that you can use SNMP tools to remotely extract the information you need. The manufacturer should be able to provide you with the MIBs.
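As a rough sketch of what that check could look like (the community string, host, and the choice of the generic system subtree are placeholders; anything Dot Hill specific would need the vendor MIBs):
import subprocess

# Walk the standard 'system' subtree to see whether the controller answers at all.
out = subprocess.check_output(
    ['snmpwalk', '-v', '2c', '-c', 'public', '<controller-ip>', 'system'])
print(out.decode())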
I ended up contacting the manufacturer and asking my question. They said that the system isn't set up for connecting without a password, and their SNMP is very basic and won't provide the information I need. They said to connect to the system with FTP and use "get logs " to download an archive of the configuration and logs. Not exactly ideal, as it takes 4 minutes just to run that one command, but it seems to be my only option. Below is the script I wrote to retrieve the file automatically by adding the login credentials to the .netrc file. This works on RHEL 6.7:
#!/bin/bash
# Retrieve the logs and configuration from a Dot Hill Systems AssuredSAN 3824 automatically.
# Modify "LINE" and "HOST" to fit your configuration.
LINE='machine <IP> login manage password <password>'
HOST='<IP>'
AUTOLOGIN="/root/.netrc"
FILE='logfiles.zip'

# Check for and verify the auto-login file
if [ -f $AUTOLOGIN ]; then
    printf "Found auto-login file, checking for proper entry... \r"
    READLINE=`cat $AUTOLOGIN | grep "$LINE"`
    # Append the line to the end of .netrc if the file exists but the line does not.
    if [ "$LINE" != "$READLINE" ]; then
        printf "Proper entry not found, creating it... \r"
        echo "$LINE" >> "$AUTOLOGIN"
    else
        printf "Proper entry found... \r"
    fi
# Create the auto-login file if it doesn't exist
else
    printf "Auto-login file does not exist, creating it and setting permissions...\r"
    echo "$LINE" > "$AUTOLOGIN"
    chmod 600 "$AUTOLOGIN"
fi

# Start getting the information from the controller. (This takes a VERY long time.)
printf "Retrieving Storage Controller data, this will take awhile... \r"
ftp $HOST << SCRIPT
get logs $FILE
SCRIPT
exit 0
This gave me a bunch of files in the zip, but all I needed was the "store_....logs" file. It was about 500,000 lines long: the first portion is the entire configuration in XML format, then the configuration in text format, followed by the logs from the system. I parsed the file and stripped off the logs at the end, which cut the file down to 15,000 lines. From there I divided it into two files (config.xml and config.txt). I then pulled the XML output of the 3 commands that I needed and saved it to the 3 files my previously written script searches for. Now my inventory script pulls in everything it needs, albeit pretty slowly due to waiting 4 minutes for the system to generate the zip file. I hope this helps someone in the future.
Edit:
Waiting 4 minutes for the system to compile was taking too long. So I ended up using paramiko and python scripts to dump output from the commands to files that my other code can parse. It accepts the IP of the Controller as a parameter. Here is the script for those interested. Thank you again for all the help.
#!/usr/bin/env python
# Saves the output of "show disks" from the storage controller to an XML file.
import paramiko
import sys
import re
import xmltodict

IP = sys.argv[1]
USERNAME = "manage"
PASSWORD = "password"
FILENAME = "./logfiles/disks.xml"
cmd = "show disks"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(IP, username=USERNAME, password=PASSWORD)
    stdin, stdout, stderr = client.exec_command(cmd)
except Exception as e:
    sys.exit(1)

# Skip the CLI prompt lines containing '#' and keep the rest of the output.
data = ""
for line in stdout:
    if re.search('#', line):
        pass
    else:
        data += line

client.close()

f = open(FILENAME, 'w+')
f.write(data)
f.close()
sys.exit(0)

Using console commands in python

I am using console commands in python, however, it is not outputting the value I want.
The command is:
#ifconfig -a | grep "HWaddr"
From this command I get:
eth0 Link encap:Ethernet HWaddr 30:9E:D5:C7:1z:EF
eth1 Link encap:Ethernet HWaddr 30:0E:95:97:0A:F0
I need to use console commands to retrieve that value, so this is what I have so far for code:
import subprocess

def getmac():
    mac = subprocess.check_output('ifconfig -a | grep "HWaddr"')
    print "%s" % (mac)
I basically only want to retrieve the hardware address which is 30:0E:D5:C7:1A:F0. My code above doesn't retrieve that. My question is how do I use console commands to get the value I want.
Thanks in advance.
The most robust and easy way in Linux to get the MAC address is to get it from sysfs, mounted on /sys.
For interface eth0, the location would be /sys/class/net/eth0/address; similarly for eth1, it would be /sys/class/net/eth1/address.
% cat /sys/class/net/eth0/address
74:d4:35:XX:XX:XX
So, you could just read the file in python too:
with open('/sys/class/net/eth0/address') as f:
    mac_eth0 = f.read().rstrip()
Quoting from here.
Python 2.5 includes a uuid implementation which (in at least one version) needs the MAC address. You can import the MAC-finding function into your own code easily:
from uuid import getnode as get_mac
mac = get_mac()
The return value is the mac address as 48 bit integer.
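Since the return value is a plain 48-bit integer, you may want it back in the familiar colon-separated form; a small sketch (the formatting step is an addition, not part of the quoted answer):
from uuid import getnode

mac = getnode()
# Take each byte of the 48-bit integer, most significant first, and hex-format it.
mac_str = ':'.join('%02x' % ((mac >> shift) & 0xff) for shift in range(40, -8, -8))
print(mac_str)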
import subprocess

def getmac(command):
    return subprocess.check_output(command, shell=True)

command = "ifconfig -a | grep HWaddr"
print "%s" % (getmac(command).split()[9])
# or print out the entire list to see which index your HWaddr corresponds to
# print "%s" % (getmac(command).split())
Or as per user heemayl,
command = "cat /sys/class/net/eth1/address"
print "%s" %(getmac(command))
Note:
1. Using shell=True isn't recommended, as per the Python docs.
2. This isn't as efficient as reading the file directly in Python.
You could also have returned
subprocess.check_output(command)
However, in the above case you might get an OSError or a CalledProcessError(retcode, cmd, output=output), depending on whether you passed your command as a list, which can be solved if you explicitly specify your Python path as per this.
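For illustration, the list form that note refers to looks like this (reading the sysfs file is just an example; eth0 is an assumption):
import subprocess

# No shell=True: the command and its argument are passed as a list.
mac = subprocess.check_output(['cat', '/sys/class/net/eth0/address'])
print(mac.decode().strip())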
