I have a demo running on a Linux server that consumes a significant amount of CPU and memory. These figures keep changing with the load on the demo. I want to extract the CPU and memory usage periodically, i.e. every 3-4 seconds, and create a plot of the extracted results.
Taking the process name to be "Running Demo", on the terminal I typed:
ps aux |grep Running Demo | awk '{print $3 $4}'
This gives me the CPU and memory usage of Running Demo. But I want two more things:
1) Output this result every 3-4 seconds.
2) Plot the generated results.
Any help or suggestion will be highly appreciated. I am new to this community.
Thanks
What you are trying to do is well known and already exists as a project:
See Munin
Note:
it's supported by the open-source community, so it will be more robust
don't run awkward commands like ps aux | grep "Running Demo" | awk '{print $3 $4}'; use ps auxw | awk '/Running Demo/ {print $3, $4}' instead
many plugins exist and cover the basics: CPU, RAM, firewall, Apache, and many more
if you really need gnuplot, see one of the top results of a Google search: http://blah.token.ro/post/249956031/using-gnuplot-to-graph-process-cpu-usage
The following Python script accepts an output filename (png) and one or more PIDs. When you press Ctrl-C, it stops and uses gnuplot to generate a nice graph.
#!/usr/bin/env python
import os
import tempfile
import time
import sys

def total(pids):
    return [sum(map(int, file('/proc/%s/stat' % pid).read().split()[13:17])) for pid in pids]

def main():
    if len(sys.argv) == 1 or sys.argv[1] == '-h':
        print 'log.py output.png pid1 pid2..'
        return
    pids = sys.argv[2:]
    results = []
    prev = total(pids)
    try:
        while True:
            new = total(pids)
            result = [(new[i] - prev[i]) / 0.1 for i, pid in enumerate(pids)]
            results.append(result)
            time.sleep(0.1)
            prev = new
    except KeyboardInterrupt:
        pass
    t1, t2 = tempfile.mkstemp()[1], tempfile.mkstemp()[1]
    f1, f2 = file(t1, 'w'), file(t2, 'w')
    print
    print 'data: %s' % t1
    print 'plot: %s' % t2
    for result in results:
        print >>f1, ' '.join(map(str, result))
    print >>f2, 'set terminal png size %d,480' % (len(results) * 5)
    print >>f2, "set out '%s'" % sys.argv[1]
    print >>f2, 'plot ' + ', '.join([("'%s' using ($0/10):%d with linespoints title '%s'" % (t1, i + 1, pid)) for i, pid in enumerate(pids)])
    f1.close()
    f2.close()
    os.system('gnuplot %s' % t2)

if __name__ == '__main__':
    main()
Here is example code for a slow application (imagine a Linux boot). This is the DUT (device under test), which should be controlled.
#linux.py
import time
print('Loading kernel ......')
time.sleep(0.5)
print('Loading foo [ OK ]')
time.sleep(0.5)
print('Loading bar [ OK ]')
time.sleep(0.5)
input('login> ')
I want to control it via a pexpect Python script like the following.
# controller.py
import pexpect
import sys
pybin = sys.executable
command = pybin + ' linux.py'
p = pexpect.spawn(command)
p.expect('> ', timeout = 5)
print(p.before.decode(), end='')
print(p.match.group(0).decode())
p.sendline('')
This works, but I cannot see linux.py's console output before boot-up completes; I get no feedback until the login prompt. Imagine there is an error during boot-up: the script above would just fail with a timeout exception.
My GOAL
Monitor the child process and print its output while waiting for the prompt. How can this be done?
Solution 1
I found an easy way based on this question
# controller.py
import pexpect
import sys
pybin = sys.executable
command = pybin + ' linux.py'
p = pexpect.spawn(command)
while True:
    # The return value is the index of the matched pattern
    i = p.expect(['> ', '\n'], timeout=5)
    print(p.before.decode(), end='')
    print(p.match.group(0).decode(), end='')
    if i == 0:
        break
print()
p.sendline('')
The key is to wait on multiple expected patterns, then decide whether the match is the real prompt or just a line ending. This solution works as long as the error output is terminated with a newline character.
Solution 2
Another way is to use a small timeout and print the proper chunk of the before string:
# controller.py
import pexpect
import sys
pybin = sys.executable
command = pybin + ' linux.py'
p = pexpect.spawn(command)
timeout_cnt = 0
print_idx = 0
while True:
    try:
        i = p.expect('> ', timeout=1)
        # the prompt has arrived
        break
    except pexpect.TIMEOUT:
        timeout_cnt += 1
        if timeout_cnt > 30:
            # a real timeout has occurred
            raise
    finally:
        print(p.before.decode()[print_idx:], end='')
        print_idx = len(p.before.decode())
print(p.match.group(0).decode(), end='')
print()
p.sendline('')
I'm creating a command that takes a partial or whole name match of a process and kills its lowest PID, and thus the rest of the processes spawned from it. My code returns min(list_of_process_ids) == 0, yet no PID in the list should be 0. Please enlighten me as to why this is happening. Thank you.
#!/usr/bin/env python
"""Kill processes by partial name matching"""
import os, sys

def usage():
    return ("pskill.py process_name")

def pids(proc):
    """Find the processes"""
    procs = []
    procs = os.system("ps -ef|grep -i " + proc + "|grep -v grep|grep -v pfind|awk '{print $2}'")
    procs = [int(x) for x in str(procs)]
    return procs

def kill(procs):
    ppid = min(procs)
    os.system("kill " + str(ppid))
    return ("Processes Killed...")

def main():
    if len(sys.argv) != 2:
        print (usage())
    else:
        proc = sys.argv[1]
        pids(proc)
        kill(pids(proc))

main()
You aren't grabbing the stdout, so you aren't actually getting anything other than the exit status of the command. Which, you can be glad, is 0 :-)
Try using the subprocess module, specifically piping stdout so you can read the command's result in Python...
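For illustration, a minimal sketch of that suggestion, mirroring the asker's ps -ef pipeline (subprocess.check_output captures stdout, whereas os.system only returns the exit status; the grep exclusion follows the original command):

```python
import subprocess

def pids(proc):
    """Return PIDs of processes whose "ps -ef" line contains proc."""
    out = subprocess.check_output(["ps", "-ef"]).decode()
    result = []
    for line in out.splitlines()[1:]:            # skip the header row
        if proc in line and "grep" not in line:  # same exclusion as the original
            result.append(int(line.split()[1]))  # PID is the second column
    return result
```

With this, min(pids(name)) would give the lowest matching PID instead of the exit status 0.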
I had code that ran successfully but took too long, so I decided to try to parallelize it.
Here is a simplified version of the code:
import multiprocessing as mp
import os
import sys
import time

import numpy as np

output = mp.Queue()

def calcSum(Nstart, Nstop, output):
    pid = os.getpid()
    density = 0
    for s in range(Nstart, Nstop):
        file_name = 'model' + str(s) + '.pdb'
        file = 'modelMap' + str(pid) + '.dat'
        # does something with the contents of the pdb file
        # creates another file by using some other library:
        someVar.someFunc(file_name=file)
        # uses a function to read the file
        density += readFile(file)
        os.remove(file)
        print pid, s
    output.put(density)

if __name__ == '__main__':
    t0 = time.time()
    snapshots = int(sys.argv[1])
    cpuNum = int(sys.argv[2])
    rangeSet = np.zeros((cpuNum)) + snapshots // cpuNum
    for i in range(snapshots % cpuNum):
        rangeSet[i] += 1
    processes = []
    for c in range(cpuNum):
        na, nb = (np.sum(rangeSet[:c]) + 1, np.sum(rangeSet[:c + 1]))
        processes.append(mp.Process(target=calcSum, args=(int(na), int(nb), output)))
    for p in processes:
        p.start()
    print 'now i''m here'
    results = [output.get() for p in processes]
    print 'now i''m there'
    for p in processes:
        p.join()
    print 'think i''l stay around'
    t1 = time.time()
    print len(results)
    print (t1 - t0)
I run this code with the command python run.py 10 4.
This code prints pid and s successfully in the outer loop of calcSum, and I can see two CPUs at 100% in the terminal. Finally pid 5 and pid 10 are printed, then the CPU usage drops to zero, and nothing happens. None of the subsequent print statements execute, and the script still appears to be running in the terminal. I'm guessing the processes are not exiting. Is that the case? How can I fix it?
Here's the complete output:
$ python run.py 10 4
now im here
9600
9601
9602
9603
9602 7
9603 9
9601 4
9600 1
now im there
9602 8
9600 2
9601 5
9603 10
9600 3
9601 6
At that point I have to terminate the script with Ctrl+C.
A few other notes:
if I comment os.remove(file) out, I can see the created files in the directory
unfortunately, I cannot bypass the part in which a file is created and then read, within calcSum
EDIT: At first, swapping output.get() and p.join() worked, but after some other edits to the code it no longer does. I have updated the code above.
At the end of a script, I would like to return the peak memory usage. After reading other questions, here is my script:
#!/usr/bin/env python
import sys, os, resource, platform
print platform.platform(), platform.python_version()
os.system("grep 'VmRSS' /proc/%s/status" % os.getpid())
print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
dat = [x for x in xrange(10000000)]
os.system("grep 'VmRSS' /proc/%s/status" % os.getpid())
print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
and here is what I get:
$ test.py
Linux-2.6.18-194.26.1.el5-x86_64-with-redhat-5.5-Final 2.7.2
VmRSS: 4472 kB
0
VmRSS: 322684 kB
0
Why is resource.getrusage always returning 0?
The same thing happens interactively in a terminal. Could this be due to the way Python was installed on my machine? (It's a computer cluster I share with others, managed by admins.)
Edit: the same thing happens when I use subprocess; executing this script
#!/usr/bin/env python
import sys, os, resource, platform
from subprocess import Popen, PIPE
print platform.platform(), platform.python_version()
p = Popen(["grep", "VmRSS", "/proc/%s/status" % os.getpid()], shell=False, stdout=PIPE)
print p.communicate()
print "resource:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
dat = [x for x in xrange(10000000)]
p = Popen(["grep", "VmRSS", "/proc/%s/status" % os.getpid()], shell=False, stdout=PIPE)
print p.communicate()
print "resource:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
gives this:
$ test.py
Linux-2.6.18-194.26.1.el5-x86_64-with-redhat-5.5-Final 2.7.2
('VmRSS:\t 4940 kB\n', None)
resource: 0
('VmRSS:\t 323152 kB\n', None)
resource: 0
Here's a way to replace the os.system call:
In [131]: from subprocess import Popen, PIPE
In [132]: p = Popen(["grep", "VmRSS", "/proc/%s/status" % os.getpid()], shell=False, stdout=PIPE)
In [133]: p.communicate()
Out[133]: ('VmRSS:\t 340832 kB\n', None)
I also have no issue running the line you were having problems with:
In [134]: print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
340840
Edit
The ru_maxrss issue could well be kernel-dependent and simply not available on your Red Hat distribution: http://bytes.com/topic/python/answers/22489-getrusage
You could, of course, run a separate thread in your code that samples the current usage throughout execution and stores the highest value observed.
Edit 2
Here's a full solution that skips resource and monitors usage via Popen. The checking frequency must, of course, be high enough to be relevant, but not so high that it eats all the CPU.
#!/usr/bin/env python
import threading
import time
import re
import os
from subprocess import Popen, PIPE

maxUsage = 0
keepThreadRunning = True

def memWatch(freq=20):
    global maxUsage
    global keepThreadRunning
    while keepThreadRunning:
        p = Popen(["grep", "VmRSS", "/proc/%s/status" % os.getpid()],
                  shell=False, stdout=PIPE)
        curUsage = int(re.search(r'\d+', p.communicate()[0]).group())
        if curUsage > maxUsage:
            maxUsage = curUsage
        time.sleep(1.0 / freq)

if __name__ == "__main__":
    t = threading.Thread(target=memWatch)
    t.start()
    print maxUsage
    [p for p in range(1000000)]
    print maxUsage
    [str(p) for p in range(1000000)]
    print maxUsage
    keepThreadRunning = False
    t.join()
The memWatch function can be optimized by computing the sleep time once, formatting the path to the process once instead of on every loop iteration, and compiling the regular expression before entering the while loop. But all in all, I hope this is the functionality you sought.
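Those optimizations might look like the following sketch (names are illustrative; the behavior of the original memWatch is unchanged):

```python
import os
import re
import time
from subprocess import Popen, PIPE

maxUsage = 0
keepThreadRunning = True

# Hoisted out of the loop: path formatted once, regex compiled once.
STATUS_PATH = "/proc/%d/status" % os.getpid()
VMRSS = re.compile(r"\d+")

def currentVmRSS():
    """Resident set size of this process in kB, via grep as in memWatch."""
    p = Popen(["grep", "VmRSS", STATUS_PATH], shell=False, stdout=PIPE)
    return int(VMRSS.search(p.communicate()[0].decode()).group())

def memWatch(freq=20):
    global maxUsage
    delay = 1.0 / freq  # sleep interval computed once
    while keepThreadRunning:
        cur = currentVmRSS()
        if cur > maxUsage:
            maxUsage = cur
        time.sleep(delay)
```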
I am writing a Python script to keep a buggy program open, and I need to figure out when the program is not responding and close it on Windows. I can't quite figure out how to do this.
On Windows you can do this:
import os

def isresponding(name):
    os.system('tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running" > tmp.txt' % name)
    tmp = open('tmp.txt', 'r')
    a = tmp.readlines()
    tmp.close()
    if a[-1].split()[0] == name:
        return True
    else:
        return False
It is more robust to use the PID though:
def isrespondingPID(PID):
    os.system('tasklist /FI "PID eq %d" /FI "STATUS eq running" > tmp.txt' % PID)
    tmp = open('tmp.txt', 'r')
    a = tmp.readlines()
    tmp.close()
    if int(a[-1].split()[1]) == PID:
        return True
    else:
        return False
From tasklist you can get more information than that. To get the "NOT RESPONDING" processes directly, just change "running" to "not responding" in the functions given. See more info here.
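As a sketch of that substitution (the filter string is the only change; the helper names are made up for illustration, and tasklist itself only exists on Windows, so the command builder is separated out):

```python
import os

def tasklist_cmd(name, status):
    """Build a tasklist command filtering by image name and status.

    status is "running" or "not responding", as described above.
    """
    return 'tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq %s"' % (name, status)

def isnotresponding(name):
    """Windows-only: True if an instance of name is listed as not responding."""
    os.system(tasklist_cmd(name, "not responding") + " > tmp.txt")
    with open("tmp.txt") as tmp:
        a = tmp.readlines()
    return bool(a) and a[-1].split()[0] == name
```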
Building on the awesome answer from @Saullo GP Castro, this is a version using subprocess.Popen instead of os.system to avoid creating a temporary file.
import subprocess

def isresponding(name):
    """Check if a program (based on its name) is responding"""
    cmd = 'tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running"' % name
    status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read()
    return name in str(status)
The corresponding PID version is:
def isresponding_PID(pid):
    """Check if a program (based on its PID) is responding"""
    cmd = 'tasklist /FI "PID eq %d" /FI "STATUS eq running"' % pid
    status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read()
    return str(pid) in str(status)
The usage of timeit showed that the usage of subprocess.Popen is twice as fast (mainly because we don't need to go through a file):
+-----------------------------+---------------------------+
| Function | Time in s (10 iterations) |
+-----------------------------+---------------------------+
| isresponding_os | 8.902 |
+-----------------------------+---------------------------+
| isrespondingPID_os | 8.318 |
+-----------------------------+---------------------------+
| isresponding_subprocess | 4.852 |
+-----------------------------+---------------------------+
| isresponding_PID_subprocess | 4.868 |
+-----------------------------+---------------------------+
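The timing harness behind this table is not shown; it was presumably something along these lines (a sketch only: the function name benchmark and the iteration count are assumptions, and the isresponding calls themselves require Windows):

```python
import timeit

def benchmark(func, iterations=10):
    """Total wall-clock seconds for `iterations` calls of func."""
    return timeit.timeit(func, number=iterations)
```

For example, benchmark(lambda: isresponding('explorer.exe')) would produce one row of the table.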
Surprisingly, with os.system the PID version is slightly faster, while with subprocess.Popen the two versions are nearly identical.
Hope this helps.