CPU percent of running process - python

I'm trying to get the percent of CPU usage for an external process in Python. I've seen some other posts on this topic, but they haven't helped me much. When I run the following function I get values that aren't consistent with what I see in Task Manager. For example, if I'm monitoring a Chrome process I get values that oscillate between 1 and 2, but Task Manager shows values oscillating between 25 and 30. Any suggestions? Thanks.
import time
import psutil

def monitor(pid):
    cpu_table = []
    p = psutil.Process(pid)
    while p.is_running():
        # cpu_percent() (get_cpu_percent() in old psutil releases) compares
        # CPU times elapsed since the previous call
        cpu_table.append(p.cpu_percent())
        time.sleep(1)
    return cpu_table

There are several Chrome processes, and you might be monitoring the wrong one.
cpu_percent() "compares system CPU times elapsed since last call or module import". Pass the same interval that Task Manager uses (in case it is not 1 second), and make sure to start both your monitor() function and Task Manager at the same time.

Related

Why do two CPUs work when I am only working with one process?

I'm running a script like
from time import time

def test0():
    start = time()
    for i in range(int(1e8)):
        i += 1
    print(time() - start)
When I run this on my machine, which has 4 CPUs, I get the following trace for CPU usage from the Ubuntu 20.04 system monitor.
In the trace (not shown) you can see I ran two experiments separated by some time, but in each experiment the activity of 2 of the CPUs peaks. Why?
This seems normal to me. The process that is running your Python code is not, at least by default, pinned to a specific core. This means that the process can be switched between different cores, which is what is happening here. Since those spikes are not simultaneous, they indicate that the process was switched from one core to the other.
On Linux, you can observe this using
watch -tdn0.5 ps -mo pid,tid,%cpu,psr -p 172810
where 172810 is the PID of the Python process (which you can get, for example, from the output of top).
If you want to pin the process to a particular core, you can use psutil in your code:
import psutil
p = psutil.Process()
p.cpu_affinity([0]) # pinning the process to cpu 0
Now you should see only one core spiking (but avoid doing this if you don't have a good reason for it).

Can reusing the same process name in a loop generate zombie processes?

My script has to run for over a day, and its core cycle runs 2-3 times per minute. I used multiprocessing to issue commands simultaneously, and each process should be terminated/joined within one cycle.
But in reality the software ends up out of swap memory or the computer freezes, which I guess is caused by accumulated processes. In another session, while the program is running, I can see the number of Python PIDs abnormally increasing over time, so I assume this must be a process issue. What I don't understand is how this happens, since I made sure each cycle's process has to finish in that cycle before proceeding to the next one.
So I am guessing that the actual computation needs more time than the terminate()/join() step allows, and that I should not "reuse" the same object name. Is this a proper guess, or is there another possibility?
import multiprocessing

def function(a, b):
    try:
        pass  # do stuff: audio / serial things
    except Exception:
        return

flag_for_2nd_cycle = 0
for i in range(1500):  # main loop for running a long time
    # do something
    if flag_for_2nd_cycle == 1:
        while my_process.is_alive():
            if (timecondition) < 30:  # kill the process if it is still alive
                my_process.terminate()
        my_process.join()
    flag_for_2nd_cycle = 1
    my_process = multiprocessing.Process(target=function, args=[c, d])
    my_process.start()
    # do something; other process jobs go on, for example:
    my_process2 = multiprocessing.Process()  # *stuff
    my_process2.terminate()
    my_process2.join()
Based on your comment, you are controlling three projectors over serial ports.
The simplest way to do that would be to simply open three serial connections (using pySerial). Then run a loop where you check each connection for available data and, if there is any, read and process it. Then you send commands to each of the projectors in turn, as in the sketch below.
Depending on the speed of the serial link, you might not need more than this.
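A minimal sketch of that single-loop approach, assuming hypothetical port names, a 9600 baud link, and a placeholder command (adjust all three to your hardware):

import time
import serial  # pySerial

# Hypothetical device paths; replace with your actual ports.
ports = [serial.Serial(dev, baudrate=9600, timeout=0)
         for dev in ("/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2")]

while True:
    for conn in ports:
        # in_waiting is the number of bytes already received on this port.
        if conn.in_waiting:
            data = conn.read(conn.in_waiting)
            # ... process this projector's response here ...
        # Send the next command to this projector in turn.
        conn.write(b"STATUS?\r\n")  # placeholder command
    time.sleep(0.1)  # keep the loop from spinning at 100% CPU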

How can I limit one iteration of a loop to a fixed time in seconds?

I'm trying to program two devices: the first by calling an application and manually clicking Program, and the second by calling a batch file and waiting for it to finish. I need each iteration of this loop to take 30 s so both devices can be programmed.
I've tried recording the time when the iteration starts and the time at the end of programming the second device, then calling time.sleep(30 - total time taken). This results in an execution time slightly longer than 30 s per iteration.
import time

for i in range(48):
    t1 = time.time()
    # Program the 1st board by calling an app from Python and clicking it
    # using Python; wait a static number of seconds as there is no feedback
    # from this app.
    # Program the 2nd board by calling a batch file. This gives feedback,
    # as the code does not move to the next line until the batch file has
    # finished.
    t2 = time.time()
    time.sleep(30 - (t2 - t1))
    # some other code
Actual results: a little over 30 seconds.
Expected results: exactly 30 seconds.
Is this because of scheduling in Python?
This is a result of scheduling in your operating system. When a process relinquishes the processor by calling sleep, there is no guarantee that it will wake up after the elapsed time requested in the call to sleep. Depending on how busy the system is, it could be delayed by a little, or it could be delayed by a lot.
If you have hard timing requirements, you need a real-time operating system.
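That said, you can keep the error from accumulating by sleeping until an absolute deadline instead of sleeping for a computed remainder. A minimal sketch, assuming each iteration's work takes less than 30 s:

import time

PERIOD = 30.0
deadline = time.monotonic() + PERIOD

for i in range(48):
    # ... program both boards here ...

    # Sleep until the absolute deadline; any oversleep on this iteration
    # shortens the next sleep instead of pushing every deadline back.
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    deadline += PERIOD

Each wake-up can still be a few milliseconds late, but iteration N always targets start + N * 30 s, so the lateness does not compound.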

How to schedule a periodic task that is immune to system time change using Python

I am using Python's sched module to run a task periodically, and I think I have come across a bug.
I find that it relies on the system time of the machine the script runs on. For example, let's say I want to run a task every 5 seconds. If I move the system time forward, the scheduled task runs as expected. However, if I rewind the system time by, say, 1 day, then the next scheduled task runs in 5 seconds + 1 day.
If you run the script below and then set your system time a few days back, you can reproduce the issue. The problem can be reproduced on both Linux and Windows.
import sched
import threading
import time

period = 5
scheduler = sched.scheduler(time.time, time.sleep)

def check_scheduler():
    print(time.time())
    scheduler.enter(period, 1, check_scheduler, ())

if __name__ == '__main__':
    print(time.time())
    scheduler.enter(period, 1, check_scheduler, ())
    thread = threading.Thread(target=scheduler.run)
    thread.start()
    thread.join()
    exit(0)
Anyone has any python solution around this problem?
From the sched documentation:

class sched.scheduler(timefunc, delayfunc)
The scheduler class defines a generic interface to scheduling events. It needs two functions to actually deal with the "outside world" — timefunc should be callable without arguments, and return a number (the "time", in any units whatsoever). The delayfunc function should be callable with one argument, compatible with the output of timefunc, and should delay that many time units. delayfunc will also be called with the argument 0 after each event is run to allow other threads an opportunity to run in multi-threaded applications.
The problem you have is that your code uses time.time() as timefunc; its return value (when called without arguments) is the current system time, and it is thus affected by rewinding the system clock.
To make your code immune to system time changes, you need to provide a timefunc that doesn't depend on the system time.
You can write your own function, for example one returning the number of seconds since your process started, which you'd have to actually count in your code (i.e. don't compute it from system-timestamp deltas). The time.clock() function might help, if it's based on CPU time counters, but I'm not sure whether that's true.
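On Python 3.3+ one such clock already exists: time.monotonic() can never go backwards and is unaffected by system clock updates. A sketch of the same scheduler built on it might look like this:

import sched
import threading
import time

period = 5
# time.monotonic() is immune to system clock changes, unlike time.time().
scheduler = sched.scheduler(time.monotonic, time.sleep)

def check_scheduler():
    print(time.time())  # printing wall-clock time is still fine
    scheduler.enter(period, 1, check_scheduler, ())

if __name__ == '__main__':
    scheduler.enter(period, 1, check_scheduler, ())
    thread = threading.Thread(target=scheduler.run)
    thread.start()
    thread.join()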

Python Cron job on Ubuntu

I have a Python program that I want to run every 10 seconds, just like a cron job. I cannot simply sleep in a loop because the time interval would become uncertain. The way I am doing it now is like this:
import time
from threading import Thread

interval = 10.0
next = time.time()
while True:
    now = time.time()
    if now < next:
        time.sleep(next - now)
    t = Thread(target=control_lights)
    t.start()  # start a thread
    next += interval
It generates a new thread each interval that executes the control_lights function. The problem is that, as time goes on, the number of Python processes grows and eats memory/CPU. Is there any good way to do this? Thanks a lot.
Maybe try supervisord or god for this script? They are very simple to use and let you control a number of your processes on a UNIX-like operating system.
Take a look at a program called The Fat Controller, which is a scheduler similar to cron but with many more options. The interval can be measured from the end of the previous run (like a loop) or regularly every x seconds, which I think is what you want. Particularly useful in this case is that you can tell The Fat Controller what to do if one of the processes takes longer than x seconds:
run a new instance anyway (increase parallel processes up to a specified maximum)
wait for the previous one to finish
kill the previous one and start a new one
There should be plenty of information in the documentation on how to get it set up.
You can run a cron-style job every 10 seconds with APScheduler's CronTrigger: just set the second parameter to '0/10'. It will fire at 0, 10, 20 s, and so on:
#run every 10 seconds from mon-fri, between 8-17
CronTrigger(day_of_week='mon-fri', hour='8-17', second='0/10')
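A self-contained sketch of wiring that trigger to a scheduler, assuming APScheduler 3.x is installed and using a placeholder control_lights function (named after the job in the question):

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger

def control_lights():
    print("controlling lights")  # placeholder for the real work

scheduler = BlockingScheduler()
# Fire every 10 seconds, Mon-Fri, between 08:00 and 17:59.
scheduler.add_job(control_lights,
                  CronTrigger(day_of_week='mon-fri', hour='8-17', second='0/10'))
scheduler.start()  # blocks; each run executes in a worker thread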
