How can I read system information in Python on OS X?

Following from this OS-agnostic question, specifically this response, and similar to the data available from the likes of /proc/meminfo on Linux, how can I read system information from OS X using Python (including, but not limited to, memory usage)?

You can get a large amount of system information from the command-line utilities sysctl and vm_stat (as well as ps, as in this question).
If you don't find a better way, you could always call these using subprocess.
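For example, a minimal sketch that reads the total physical memory through sysctl with subprocess (hw.memsize is the OS X sysctl key for that value):

import subprocess

# Query total physical memory (in bytes) via the `sysctl` utility on OS X.
output = subprocess.check_output(['sysctl', '-n', 'hw.memsize'])
print(int(output))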

The only stuff that's really nicely accessible is available from the platform module, but it's extremely limited (CPU, OS version, architecture, etc.). For CPU usage and uptime, I think you will have to wrap the command-line utilities uptime and vm_stat.
I built you one for vm_stat, the other one is up to you ;-)
import os

def memoryUsage():
    result = {}
    # Lines 2-8 of `vm_stat` output look like 'Pages free:    12345.';
    # note that the values are page counts, not bytes.
    for line in os.popen('vm_stat').readlines()[1:8]:
        key, value = line.split(':')
        result[key.strip(' "').replace(' ', '_').lower()] = int(value.strip('.\n '))
    return result

print(memoryUsage())

I did some more googling (looking for "OS X /proc") -- it looks like the sysctl command might be what you want, although I'm not sure if it will give you all the information you need. Here's the manpage: http://developer.apple.com/DOCUMENTATION/Darwin/Reference/ManPages/man8/sysctl.8.html
Wikipedia also has an article on sysctl.

I was searching for this same thing and noticed there was no accepted answer for this question. In the time since the question was originally asked, a Python module called psutil was released:
https://github.com/giampaolo/psutil
For memory utilization, you can use the following:
>>> psutil.virtual_memory()
svmem(total=8374149120L, available=2081050624L, percent=75.1, used=8074080256L, free=300068864L, active=3294920704, inactive=1361616896, buffers=529895424L, cached=1251086336)
>>> psutil.swap_memory()
sswap(total=2097147904L, used=296128512L, free=1801019392L, percent=14.1, sin=304193536, sout=677842944)
>>>
There are functions for CPU utilization, process management, disk, and network as well. The only omission from the module is a function for retrieving the load average, but the Python stdlib has os.getloadavg() if you are on a UNIX-like system.
psutil claims to support Linux, Windows, OS X, FreeBSD and Sun Solaris, but I have only tried OS X Mavericks and Fedora 20.
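For example, a quick sketch of some of those (exact return types vary slightly between psutil versions):

import os
import psutil

print(psutil.cpu_percent(interval=1))  # CPU utilization over one second, in percent
print(psutil.disk_usage('/'))          # total/used/free bytes for the root disk
print(psutil.net_io_counters())        # bytes and packets sent/received
print(os.getloadavg())                 # 1/5/15-minute load averages (UNIX only)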

Here's a MacFUSE-based /proc fs:
http://www.osxbook.com/book/bonus/chapter11/procfs
If you have control of the boxes you're running your Python program on, it might be a reasonable solution. At any rate, it's nice to have a /proc to look at!

Related

Python multiprocessing copy-on-write behaving differently between OSX and Ubuntu

I'm trying to share objects between the parent and child process in Python. To play around with the idea, I've created a simple Python script:
from multiprocessing import Process
from os import getpid
import psutil
shared = list(range(20000000))
def shared_printer():
    mem = psutil.Process(getpid()).memory_info().rss / (1024 ** 2)
    print(getpid(), len(shared), '{}MB'.format(mem))

if __name__ == '__main__':
    p = Process(target=shared_printer)
    p.start()
    shared_printer()
    p.join()
The code snippet uses the excellent psutil library to print the RSS (Resident Set Size). When I run this on OSX with Python 2.7.15, I get the following output:
(33101, 20000000, '1MB')
(33100, 20000000, '626MB')
When I run the exact same snippet on Ubuntu (Linux 4.15.0-1029-aws #30-Ubuntu SMP x86_64 GNU/Linux), I get the following output:
(4077, 20000000, '632MB')
(4078, 20000000, '629MB')
Notice that the child process's RSS is basically 0MB on OS X and about the same size as the parent process's RSS on Linux. I had assumed that copy-on-write behavior would work the same way on Linux and allow the child process to refer to the parent process's memory for most pages (perhaps except the one storing the head of the object).
So I'm guessing that there's some difference in the copy-on-write behavior in the 2 systems. My question is: is there anything I can do in Linux to get that OSX-like copy-on-write behavior?
The answer is no. The value behind psutil.Process(getpid()).memory_info().rss / (1024 ** 2) corresponds to the RES field of the UNIX command top, which contains the non-swapped physical memory a task has used, in kB; i.e. RES = CODE + DATA.
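As a rough cross-check on Linux, you can compare psutil's figure against the VmRSS line of /proc/self/status (a minimal sketch; /proc is Linux-only):

import os
import psutil

def rss_from_proc():
    # /proc/[pid]/status reports VmRSS in kB on Linux.
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1]) * 1024  # kB -> bytes

print(psutil.Process(os.getpid()).memory_info().rss, rss_from_proc())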
IMHO, this means that the two OSes use different memory managers, so it's almost impossible to constrain how much memory a process uses or needs; this is internal to the OS.
On Linux, the child process has the same size as the parent process: they copy the same stack, code and data, but have different PCBs (Process Control Blocks). Therefore it's impossible to get close to 0 the way OS X does. It looks like OS X does not literally copy the code and data; when the code is the same, it just keeps pointers into the parent process's data.
PS: I hope that helps!

Python platform independent way to get system information

I was wondering if there is a platform-independent way of getting system information for Linux, Windows and Mac. I know you can use the platform module to get some basic information. I am looking for more detailed information like:
- CPU information: number of logical cores, number of physical cores, number of sockets, frequency, capabilities
- Total amount of physical memory
- Disk space: total and free for each disk
- Network interfaces: MAC address, IP address (IPv4/IPv6), speed, hostname
- OS information
I recommend using the psutil library for this. Not everything you require is available, but it's a good place to start. For example, to get the CPU counts you can use the following code.
>>> import psutil
>>> psutil.cpu_count() # Logical core
4
>>> psutil.cpu_count(logical=False) # Physical core
2
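Most of the other items on your list are covered too; a rough sketch (attribute names are taken from recent psutil versions and may differ in older ones):

import platform
import psutil

print(psutil.virtual_memory().total)  # total physical memory, in bytes
print(psutil.disk_partitions())       # mounted disks
print(psutil.disk_usage('/'))         # total/used/free for one disk
print(psutil.net_if_addrs())          # per-interface MAC and IPv4/IPv6 addresses
print(platform.platform())            # OS information from the stdlib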

Using resource in windows

I've got a script that uses the resource module from Python (see http://docs.python.org/library/resource.html for information). Now I want to port this script to Windows. Is there any alternative version of this (the Python docs label it as "Unix only")?
If there isn't, is there any other workaround?
I'm using the following method/constant:
resource.getrusage(resource.RUSAGE_CHILDREN)
resource.RLIMIT_CPU
Thank you
PS: I'm using Python 2.7 / 3.2
There's no good way of doing this generically for all "resources", which is why it's a Unix-only module. For CPU alone, you can either use registry keys to set the process ID limit:
http://technet.microsoft.com/en-us/library/ff384148%28WS.10%29.aspx
As done here:
http://code.activestate.com/recipes/286159/
IMPORTANT: Back up your registry before trying anything with the registry.
Or you could set the thread priority:
http://msdn.microsoft.com/en-us/library/ms685100%28VS.85%29.aspx
As done here:
http://nullege.com/codes/search/win32process.SetThreadPriority
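A minimal pywin32 sketch of the thread-priority approach (note this only deprioritizes the thread; it is not a hard cap like RLIMIT_CPU):

import win32api
import win32process

# Lower the current thread's scheduling priority so other work runs first.
handle = win32api.GetCurrentThread()
win32process.SetThreadPriority(handle, win32process.THREAD_PRIORITY_BELOW_NORMAL)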
For other resources you'll have to scrape together similar DLL access APIs to achieve the desired effect. You should first ask yourself if you need this behavior. Oftentimes you can limit CPU time by sleeping the thread in operation at convenient times to allow the OS to swap processes, and memory controls can be done programmatically by checking data structure sizes.

How can I watch for a filesystem change without drastically affecting system performance?

Time for another newbie question, I fear. I'm attempting to use Python 3.2.2 (the version is important, in this case) to monitor a particular Windows path for changes. The simplest method, and the method I'm using, is:
import datetime
import os
import time

original_state = os.listdir(path_string)
while os.listdir(path_string) == original_state:
    time.sleep(1)  # poll once per second
change_time = datetime.datetime.now()
I'm writing this code to do some timing tests of another application. With that goal in mind, the Python script needs to (a) not adversely affect system performance, and (b) be relatively precise -- a margin of error of +/- 1 second is the absolute maximum I can justify. Unfortunately, this method doesn't meet the first criterion: When running this particular bit of code, the virtual environment is hammered, drastically slowing down the operations whose performance I'm trying to accurately measure.
I've read how to watch a File System for change, How do I watch a file for changes?, and http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html (an article recommended as a solution to that second SO question.) Unfortunately, Tim Golden's code appears to be Python 2.x code -- as near as I can tell, the pywin32 module isn't supported in Python 3.
What can I do in Python 3 to monitor this particular path without running into the same performance problems?
According to the ActivePython 3.2 Documentation, their pywin32 now supports Python 3.x
On Linux there are inotify and pyinotify. A similar asynchronous notification mechanism on Windows is the FindFirstChangeNotification function, which is the basis of the .NET FileSystemWatcher class.
Please look at the solutions on Tim Golden's page:
http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html
http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html#use_findfirstchange
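Condensed from the FindFirstChangeNotification approach on that page, a sketch (assumes pywin32; the watched path and the 500 ms timeout are placeholders):

import os
import win32con
import win32event
import win32file

path_to_watch = '.'  # placeholder: directory to monitor

# Ask Windows to signal an event when a file is created, deleted or renamed.
change_handle = win32file.FindFirstChangeNotification(
    path_to_watch, 0, win32con.FILE_NOTIFY_CHANGE_FILE_NAME)
try:
    old_contents = set(os.listdir(path_to_watch))
    while True:
        # Block for up to 500 ms rather than burning CPU in a tight loop.
        if win32event.WaitForSingleObject(change_handle, 500) == win32con.WAIT_OBJECT_0:
            new_contents = set(os.listdir(path_to_watch))
            print('added:', new_contents - old_contents)
            print('removed:', old_contents - new_contents)
            old_contents = new_contents
            win32file.FindNextChangeNotification(change_handle)
finally:
    win32file.FindCloseChangeNotification(change_handle)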
It is also possible to monitor a file or directory using GFileMonitor, with Gio taking care of the underlying operating-system details, although granted you likely won't be using GTK if this is a Windows program. For posterity:
from gi.repository import Gio

def callback_func(monitor, changed_file, other_file, event_type):
    print(changed_file.get_path(), event_type)  # placeholder handler

gfile = Gio.file_new_for_path('/home/user/Downloads')
gfilemonitor = gfile.monitor(Gio.FileMonitorFlags.NONE, None)
gfilemonitor.connect('changed', callback_func)

Monitor Process in Python?

I think this is a pretty basic question, but here it is anyway.
I need to write a Python script that checks to make sure a process, say notepad.exe, is running. If the process is running, do nothing. If it is not, start it. How would this be done?
I am using Python 2.6 on Windows XP.
The process creation functions of the os module are apparently deprecated in Python 2.6 and later, with the subprocess module being the module of choice now, so...
import subprocess

if 'notepad.exe' not in subprocess.Popen('tasklist', stdout=subprocess.PIPE).communicate()[0]:
    subprocess.Popen('notepad.exe')
Note that in Python 3, the string being checked will need to be a bytes object, so it'd be
if b'notepad.exe' not in [blah]:
    subprocess.Popen('notepad.exe')
(The name of the file/process to start does not need to be a bytes object.)
There are a couple of options:
1: The cruder but more obvious one is to do some text processing against:
os.popen('tasklist').read()
2: A more involved option would be to use pywin32 and research the Win32 APIs to figure out what processes are running.
3: WMI (I found this just now); here is a VBScript example of how to query the machine for processes through WMI.
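If you'd rather drive WMI from Python than VBScript, the third-party wmi package wraps it; a minimal sketch:

import subprocess
import wmi  # third-party package (Windows only): pip install wmi

c = wmi.WMI()
running = [p.Name.lower() for p in c.Win32_Process()]
if 'notepad.exe' not in running:
    subprocess.Popen('notepad.exe')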
Python library for Linux process management
