I was wondering if there is a module that allows the program to see what tasks are running. For example, if I am running Google Chrome, Python IDLE, and the program, it should see all 3. (It is most important that it can see itself.)
psutil
psutil is a module providing an interface for retrieving information on all running processes and system utilization (CPU, disk, memory, network) in a portable way by using Python.
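A minimal sketch, assuming psutil is installed (pip install psutil): the program can enumerate every visible process and confirm that it can see itself.

```python
import os
import psutil

# Enumerate every process the OS exposes to us (PID and name).
for proc in psutil.process_iter(['pid', 'name']):
    print(proc.info['pid'], proc.info['name'])

# psutil.Process() with no argument wraps the current process,
# so the program can always see itself.
me = psutil.Process()
print("this program:", me.pid, me.name())
assert me.pid == os.getpid()
```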
Related
I am running some Python scripts in my Linux terminal that happen to be pretty resource intensive, but when I do, my system becomes pretty unresponsive until the process has completed. I know there are commands like nice and cpulimit, but I haven't found a good way to just open a terminal that is somehow resource limited (with a chosen percentage of resources devoted to it) and can be used to run any scripts during that particular session.
So is there a good way to do this?
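No answer is recorded here, but as a hedged sketch of the nice approach the question already mentions (Unix only; heavy_script.py is a hypothetical placeholder), a small Python wrapper can at least launch a script at the lowest CPU priority:

```python
import os
import subprocess

# Launch a script at the lowest scheduling priority so it yields the
# CPU to interactive processes. preexec_fn runs os.nice(19) in the
# child before exec, and is only available on Unix.
subprocess.run(
    ["python3", "heavy_script.py"],
    preexec_fn=lambda: os.nice(19),
)
```

This only de-prioritizes CPU scheduling; a hard percentage cap still needs an external tool such as cpulimit or cgroups.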
As I was writing a Python script using a third-party module, the workload was so big that the OS (Linux with 32 GB of memory) killed it every time before it could complete. We learned from syslog that it ran out of physical memory, so the OS killed it via the OOM killer.
Many current performance-analysis tools, e.g. profile, require the script to run to completion and cannot look inside the modules that the script used. So I reckon this must be a common case where the script never completes, yet performance analysis is desperately needed under these circumstances. Any advice?
From the original question:
profile is an amazing tool for performance analysis; it does not require the script to run to completion, and it can go into the modules that the script uses. I think for this question, the best answer is to use profile.
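As a sketch of how profile/cProfile can be driven from inside the script itself, so partial stats can be dumped even if the run never finishes cleanly (run_workload is a placeholder):

```python
import cProfile
import pstats

def run_workload():
    # Placeholder for the real, memory-hungry work.
    return sum(i * i for i in range(10**6))

profiler = cProfile.Profile()
profiler.enable()
try:
    run_workload()
finally:
    # finally runs on ordinary exceptions, but a SIGKILL from the OOM
    # killer cannot be caught, so dump stats periodically if that matters.
    profiler.disable()
    profiler.dump_stats("partial.prof")
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```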
I am running a multithreaded Python (3.3) application that has been compiled using cx_freeze. I need to monitor CPU usage, memory usage, thread info, object info, and process status.
I know there is the built-in Python profiler (cProfile), and there are yappi and others, but they don't seem to serve my purpose because I want to run the profiler within my application. That way I will be able to view the profiler results and take the necessary action (e.g. stopping the application whenever CPU usage goes above a certain threshold).
My application is designed to run on Linux as a background process.
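No answer is recorded for this question here, but since psutil comes up elsewhere on this page, here is a hedged sketch of an in-process monitor thread; CPU_LIMIT and the shutdown action are assumptions, not part of the question:

```python
import threading
import time

import psutil

CPU_LIMIT = 80.0  # assumed threshold, in percent


def monitor(stop_event, interval=5.0):
    """Background thread: sample this process's CPU, memory and threads."""
    proc = psutil.Process()   # the current process
    proc.cpu_percent(None)    # prime the CPU counter
    while not stop_event.is_set():
        time.sleep(interval)
        cpu = proc.cpu_percent(None)        # percent since the last call
        rss = proc.memory_info().rss / 1e6  # resident memory, MB
        nthreads = proc.num_threads()
        print(f"cpu={cpu:.1f}% rss={rss:.1f}MB threads={nthreads}")
        if cpu > CPU_LIMIT:
            # Take whatever action the application needs, e.g. shut down.
            stop_event.set()


stop = threading.Event()
threading.Thread(target=monitor, args=(stop,), daemon=True).start()
```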
I need a cross-platform module that allows me to enumerate the processes on the machine. It needs to work on Windows and Unix and get things like the PID and process name.
Is there such a module?
psutil should work nicely for this.
"psutil is a module providing an interface for retrieving information on all running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by command line tools like ps, top, kill, lsof and netstat."
I'm developing a long-running multi-threaded Python application for Windows, and I want the process to know the CPU time that each of its threads has taken. I can get the overall times for the entire process with os.times() but I need to know the per-thread times.
I know that there are external tools such as the Sysinternals Process Explorer, but my program itself needs to have this information. If I were on Linux, I would look in the /proc filesystem, as described here. If I were writing C code, I'd use the GetThreadTimes call, as described here.
So how can I accomplish this on Windows using Python?
win32process.GetThreadTimes
You want the Python for Windows Extensions (pywin32) to do hairy Windows things.
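A rough sketch using pywin32 for the calling thread; the dictionary keys and the 100-nanosecond units follow the Win32 GetThreadTimes documentation, so verify them against your pywin32 version:

```python
import win32api
import win32process

def current_thread_cpu_times():
    """Return (user_seconds, kernel_seconds) for the calling thread."""
    # GetCurrentThread() returns a pseudo-handle valid only in this thread.
    handle = win32api.GetCurrentThread()
    times = win32process.GetThreadTimes(handle)
    # KernelTime/UserTime are reported in 100-nanosecond units.
    return times["UserTime"] / 1e7, times["KernelTime"] / 1e7

user, kernel = current_thread_cpu_times()
print(f"user={user:.3f}s kernel={kernel:.3f}s")
```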
Or you can simply use yappi (https://code.google.com/p/yappi/). It transparently uses GetThreadTimes() if the CPU clock type is selected for profiling.
See here also for an example: https://code.google.com/p/yappi/wiki/YThreadStats_v082
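A brief sketch of that yappi usage (the threaded workload is a placeholder; check the API against the yappi version you install):

```python
import threading
import yappi

def do_work():
    # Placeholder workload running in a couple of threads.
    sum(i * i for i in range(10**6))

yappi.set_clock_type("cpu")   # measure CPU time (GetThreadTimes on Windows)
yappi.start()

threads = [threading.Thread(target=do_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

yappi.stop()
yappi.get_func_stats().print_all()     # per-function stats
yappi.get_thread_stats().print_all()   # per-thread CPU times
```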