I've got a webserver that I'm presently benchmarking for CPU usage. What I'm doing is essentially running one process to slam the server with requests, then running the following bash script to determine the CPU usage:
#!/bin/bash
for (( ;; ))
do
    echo "`python -c 'import time; print time.time()'`, `ps -p $1 -o '%cpu' | grep -vi '%CPU'`"
    sleep 5
done
It would be nice to be able to do this in Python so I can run one script instead of two. I can't seem to find any platform-independent way (or at least one portable between Linux and OS X) to get the ps output in Python without actually launching another process to run the command. I can do that, but it would be really nice if there were an API for doing this.
Is there a way to do this, or am I going to have to launch the external script?
You could check out this question about parsing ps output using Python.
One of the answers suggests using the PSI Python module. It's an extension, though, so I don't really know how suitable that is for you.
The question also shows how you can call a ps subprocess from Python :)
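If launching ps as a subprocess is acceptable, a minimal sketch (the header handling and 5-second cadence mirror the question's script):
import subprocess

def cpu_percent(pid):
    # ps prints a "%CPU" header line followed by the numeric value.
    out = subprocess.check_output(["ps", "-p", str(pid), "-o", "%cpu"])
    return float(out.decode().splitlines()[1])

# Mirroring the bash loop (1234 is a placeholder PID):
#   import time
#   while True:
#       print("%s, %s" % (time.time(), cpu_percent(1234)))
#       time.sleep(5)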
My preference is to do something like this.
collection.sh
#!/bin/bash
for (( ;; ))
do
    date; ps -p $1 -o '%cpu'
    sleep 5
done
Then run collection.sh >someFile while you "slam the server with requests".
Then kill this collection.sh operation after the server has been slammed.
At the end, you'll have a file with your log of date stamps and CPU values.
analysis.py
import datetime
with( "someFile", "r" ) as source:
for line in source:
if line.strip() == "%CPU": continue
try:
date= datetime.datetime.strptime( line, "%a %b %d %H:%M:%S %Z %Y" )
except ValueError:
cpu= float(line)
print date, cpu # or whatever else you want to do with this data.
You could query the CPU usage with PySNMP. This has the added benefit of being able to take measurements from a remote computer. For that matter, you could install a VM of Zenoss or its kin, and let it do the monitoring for you.
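A minimal sketch with pysnmp's high-level API, assuming an agent that exposes the UCD-SNMP MIB; the host address, community string, and OID are placeholders:
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# UCD-SNMP-MIB::ssCpuIdle.0: percentage of idle CPU time on the agent.
errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                  # v2c community (placeholder)
    UdpTransportTarget(('192.0.2.10', 161)),  # agent address (placeholder)
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.4.1.2021.11.11.0'))))

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))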
If you don't want to invoke ps, try the /proc file system. You can write your Python program to read files from /proc and extract the data you want. I did this with Perl, by writing inlined C code in a Perl script; I think you can find a similar way in Python as well. It's doable, but you need to go through the /proc file system and figure out what you want and how you can get it.
http://www.faqs.org/docs/kernel/x716.html
The URL above might give you an initial push.
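For example, a minimal sketch of reading a process's CPU tick counters straight from /proc (Linux only; field positions follow the proc(5) man page):
def proc_cpu_ticks(pid):
    """Return utime + stime for a PID, in clock ticks (Linux only)."""
    with open("/proc/%d/stat" % pid) as f:
        data = f.read()
    # Split after the ')' that closes the comm field, since comm may contain
    # spaces; the first field after it is then field 3 (state) of proc(5).
    rest = data[data.rindex(")") + 2:].split()
    utime, stime = int(rest[11]), int(rest[12])  # fields 14 and 15: utime, stime
    return utime + stime

# Ticks convert to seconds via os.sysconf("SC_CLK_TCK"); sampling twice and
# differencing gives the CPU time consumed over the interval.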
Related
I need to check GoldenGate processes' lag. In order to do this, I start GoldenGate, then I try to run GoldenGate's own command "info all".
import subprocess as sub
import re
import os

location = str(sub.check_output(['ps -ef | grep mgr'], shell=True)).split()
pattern = re.compile(r'mgr\.prm$')
print(type(location))
for index in location:
    if pattern.search(index) is not None:
        gg_location = index[:-14] + "ggsci"
        exec_ggate = sub.call(str(gg_location))
        os.system('info all')
Yet when I execute GoldenGate, it opens GoldenGate's own shell. I think that is why Python cannot run the "info all" command. How can I solve this problem? If there is missing information, please let me know.
Thank you in advance,
For command automation on Golden Gate you have the following information in the Oracle docs: https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_gettingstarted.htm#GWUAD1096
To input a script
Use the following syntax from the command line of the operating system.
ggsci < input_file
Where:
The angle bracket (<) character pipes the file into the GGSCI program.
input_file is a text file, known as an OBEY file, containing the commands that you want to issue, in the order they are to be issued.
Taking your script (keep in mind I don't know how to code in Python), you can simply execute a shell command in Python in the following way:
import os
os.system("command")
So try doing this:
import os
os.system("ggsci < input_file")
Changing the input_file as indicated by the docs.
I think you will have an easier time doing it this way.
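For example, a minimal sketch in Python, assuming a hypothetical OBEY file obey.txt and a hypothetical /path/to/ggsci (adjust both to your installation):
import subprocess

# Write the OBEY file containing the GGSCI commands, in order.
with open("obey.txt", "w") as f:
    f.write("info all\n")

# Pipe it into GGSCI, the Python equivalent of: ggsci < obey.txt
with open("obey.txt") as f:
    output = subprocess.check_output(["/path/to/ggsci"], stdin=f)
print(output.decode())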
Background
I'm working on a bash script to pull serial numbers and part numbers from all the devices in a server rack. My goal is to be able to run a single script (inventory.sh) and walk away while it generates text files containing the information I need. I'm using bash for maximum compatibility; the RHEL 6.7 systems do have Perl and Python installed, but with minimal libraries. So far I haven't had to use anything other than bash, but I'm not against calling a Perl or Python script from my bash script.
My Problem
I need to retrieve the serial numbers and part numbers from the drives in a Dot Hill Systems AssuredSAN 3824, as well as the serial numbers of the equipment inside. The only way I have found to get all the information I need is to connect over SSH and run the following three commands, dumping the output to a local file:
show controllers
show frus
show disks
Limitations:
I don't have "sshpass" installed, and would prefer not to install it.
The Controller is not capable of storing SSH keys ( no option in custom shell).
The Controller also cannot write or transfer local files.
The Rack does NOT have access to the Internet.
I looked at paramiko, but while Python is installed I do not have pip.
I also cannot use CPAN.
For what it's worth, the output comes back in XML format. (I've already written the code to parse it in bash.)
Right now I think my best option would be to have a library for Python or Perl in the folder with my other scripts, and write a script to dump the commands' output to files that I can parse with my bash script. Which language is easier to just provide a library in a file? I'm looking for a library that is as small and simple as possible to use. I just need a way to get the output of those commands to XML files. Right now I am just using ssh 3 times in my script and having to enter the password each time.
Have a look at SNMP. There is a reasonable chance that you can use SNMP tools to remotely extract the information you need. The manufacturer should be able to provide you with the MIBs.
I ended up contacting the manufacturer and asking my question. They said that the system isn't set up for connecting without a password, and their SNMP is very basic and won't provide the information I need. They said to connect to the system with FTP and use "get logs " to download an archive of the configuration and logs. Not exactly ideal, as it takes 4 minutes just to run that one command, but it seems to be my only option. Below is the script I wrote to retrieve the file automatically by adding the login credentials to the .netrc file. This works on RHEL 6.7:
#!/bin/bash
#Retrieve the logs and configuration from a Dot Hill Systems AssuredSAN 3824 automatically.
#Modify "LINE" and "HOST" to fit your configuration.
LINE='machine <IP> login manage password <password>'
HOST='<IP>'
AUTOLOGIN="/root/.netrc"
FILE='logfiles.zip'
#Check for and verify the autologin file
if [ -f "$AUTOLOGIN" ]; then
    printf "Found auto-login file, checking for proper entry... \r"
    READLINE=$(grep "$LINE" "$AUTOLOGIN")
    #Append the line to the end of .netrc if the file exists but not the line.
    if [ "$LINE" != "$READLINE" ]; then
        printf "Proper entry not found, creating it... \r"
        echo "$LINE" >> "$AUTOLOGIN"
    else
        printf "Proper entry found... \r"
    fi
#Create the auto-login file if it doesn't exist
else
    printf "Auto-Login file does not exist, creating it and setting permissions...\r"
    echo "$LINE" > "$AUTOLOGIN"
    chmod 600 "$AUTOLOGIN"
fi
#Start getting the information from the controller. (This takes a VERY long time)
printf "Retrieving Storage Controller data, this will take awhile... \r"
ftp $HOST << SCRIPT
get logs $FILE
SCRIPT
exit 0
This gave me a bunch of files in the zip, but all I needed was the "store_....logs" file. It was about 500,000 lines long; the first portion is the entire configuration in XML format, then the configuration in text format, followed by the logs from the system. I parsed the file and stripped off the logs at the end, which cut the file down to 15,000 lines. From there I divided it into two files (config.xml and config.txt). I then pulled the XML output of the 3 commands that I needed and saved it to the 3 files my previously written script searches for. Now my inventory script pulls in everything it needs, albeit pretty slowly due to waiting 4 minutes for the system to generate the zip file. I hope this helps someone in the future.
Edit:
Waiting 4 minutes for the system to compile the archive was taking too long. So I ended up using paramiko and Python scripts to dump output from the commands to files that my other code can parse. It accepts the IP of the Controller as a parameter. Here is the script for those interested. Thank you again for all the help.
#!/usr/bin/env python
#Saves output of "show disks" from the storage Controller to an XML file.
import paramiko
import sys
import re

IP = sys.argv[1]
USERNAME = "manage"
PASSWORD = "password"
FILENAME = "./logfiles/disks.xml"
cmd = "show disks"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(IP, username=USERNAME, password=PASSWORD)
    stdin, stdout, stderr = client.exec_command(cmd)
except Exception:
    sys.exit(1)

# Keep every line of output except those containing '#'.
data = ""
for line in stdout:
    if not re.search('#', line):
        data += line

client.close()

with open(FILENAME, 'w+') as f:
    f.write(data)
sys.exit(0)
I am trying to get some stats for a directory in HDFS: the number of files/subdirectories and the size of each. I started out thinking that I could do this in bash.
#!/bin/bash
OP=$(hadoop fs -ls hdfs://mydirectory)
echo $(wc -l < "$OP")
I only have this much so far, and I quickly realised that Python might be a better option for this. However, I am not able to figure out how to execute Hadoop commands like hadoop fs -ls from Python.
Try the following snippet:
import subprocess

output = subprocess.Popen(["hadoop", "fs", "-ls", "/user"],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for line in output.stdout:
    print(line)
Additionally, you can refer to this sub-process example, where you can get return status, output and error message separately.
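For example, a small sketch of collecting those separately with Popen.communicate (reusing the /user path from the snippet above):
import subprocess

proc = subprocess.Popen(["hadoop", "fs", "-ls", "/user"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(proc.returncode)  # 0 on success
print(out)              # the listing
print(err)              # any error text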
See https://docs.python.org/2/library/commands.html for your options, including how to get the return status (in case of an error). The basic code you're missing is
import commands
hdir_list = commands.getoutput('hadoop fs -ls hdfs://mydirectory')
Yes: deprecated in 2.6, still useful in 2.7, but removed from Python 3. If that bothers you, switch to
os.system(<code string>)
... or better yet use subprocess.call (introduced in 2.4).
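For example, a sketch with subprocess.check_output (available since 2.7), which captures the listing much like commands.getoutput does:
import subprocess

try:
    listing = subprocess.check_output(
        ["hadoop", "fs", "-ls", "hdfs://mydirectory"])
    print(len(listing.splitlines()))  # line count, like wc -l
except subprocess.CalledProcessError as e:
    print("hadoop fs -ls failed with status %d" % e.returncode)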
I need some help with this...
I have a program installed on my computer that I want to call to calculate some things and give me an output file.
In MATLAB, the command dos() does the job, also giving me the cmd screen output in MATLAB.
I need this to work in Python, but I am doing something wrong.
data = 'file.csv -v'
db = ' -d D:\directory\bla\something.db'
anw = '"D:\Program Files\bla\path\to\anw.exe"' + db + ' -i ' + data
"anw" output is this one:
>>> anw
'"D:\\Program Files\\bla\\path\\to\\anw.exe" -d D:\\directory\\bla\\something.db -i file.csv -v'
## without the "" it does not work either
import subprocess as sb
p = sb.Popen('cmd', '/K', anw)  ## '/C' does not work either
I get the following error message from cmd.exe inside the Python shell:
Windows cannot find "\"D:\Program Files\bla\path\to\anw.exe"" Make sure you typed the name correctly, and then try again.
This line runs when I make a .bat file out of it.
It runs in MATLAB via dos(anw), so what is wrong here?
PS: I have blanks in my command... could this be the problem? I do not know where the first "\" comes from in the cmd.exe error message.
For now, I created a .bat file with all the stuff cmx.de should do in the specific directory where the input file lies.
I just had to tell Python to change directory with:
import os
os.chdir(r"D:\working\directory")
os.system(r'D:\working\directory\commands.bat')
It works well and gives me the output of cmd directly in the Python shell.
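Alternatively, a sketch of calling the program directly with subprocess, passing the arguments as a list so the spaces in "Program Files" need no manual quoting (paths and flags are taken from the question):
import subprocess

cmd = [r"D:\Program Files\bla\path\to\anw.exe",
       "-d", r"D:\directory\bla\something.db",
       "-i", "file.csv", "-v"]
ret = subprocess.call(cmd)  # console output appears directly; ret is the exit code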
I tried fabric with a '>' in the command string; it always gives an error code 2. I'm currently dabbling with subprocess.call and subprocess.check_output, keeping stdout="filesocket". It's not working: the only thing that gets written to the file is the USAGE text for mysqldump. I'm using shlex to parse 'mysqldump -uroot -ppassword database table1 table2'.
All this is because I don't know shell scripting with string variables from the 'date' utility. How do I take the current date and use it to name the backup file in a shell script? Or how do I get this done in Python?
Thanks in advance.
regards.
You can get a custom date out of date using the following syntax.
CUSTOM_DATE=$(date "+%Y-%m-%d_%H_%M_%S")
The easiest way to accomplish this is to put a script on the remote end that does 'everything'
#!/bin/bash
CUSTOM_DATE=$(date "+%Y-%m-%d_%H_%M_%S")
mysqldump -u admin -ppassword database table1 table2 >/path/to/backups/mysqldump.${CUSTOM_DATE}.db
"How do I take the current date and use it to name the backup file in shell script. OR how do I get this thing done in python?"
from datetime import datetime
filename = 'mysql_backup_{0:%Y%m%d_%H%M}.sql'.format(datetime.now())
# filename == 'mysql_backup_20120227_0952.sql'
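To tie that filename back to the dump, pass an open file as stdout; a '>' in the argument list does nothing because redirection is shell syntax. A minimal sketch, using the question's placeholder credentials and table names:
import subprocess
from datetime import datetime

filename = 'mysql_backup_{0:%Y%m%d_%H%M}.sql'.format(datetime.now())
with open(filename, 'wb') as out:
    subprocess.check_call(
        ['mysqldump', '-uroot', '-ppassword', 'database', 'table1', 'table2'],
        stdout=out)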
My answer from a related Stack Overflow post:
In Microsoft Windows, run the command below in CMD:
mysqldump -u USERNAME -pYOURPASSWORD --all-databases > "C:/mysql_backup_%date:~-10,2%-%date:~-7,2%-%date:~-4,4%-%time:~0,2%_%time:~3,2%_%time:~6,2%.sql"
The output file name will look like:
mysql_backup_21-02-2015-13_07_18.sql
If you want to automate the backup process, you can use Windows Task Scheduler and put the above command in a .bat file; Task Scheduler will run the .bat file at the specified interval.