I am aware of the psutil package which provides (along with many other things) a way to access information about system memory, including the amount of available memory (psutil.virtual_memory().available). How would I go about querying available memory in a pure Python implementation? The solution would need to work for UNIX-like systems, including Linux, OS X, and so on.
Already sort of answered here, although that method relies on reading the /proc/meminfo file, which is present on Linux systems (it is part of procfs, so it will not exist on macOS or most BSDs). A minimal sketch of the approach follows.
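Something along these lines, assuming a reasonably modern Linux kernel (the MemAvailable field only exists on 3.14+, so the sketch falls back to MemFree; field names and the kB unit are as they appear in /proc/meminfo):

    def available_memory_bytes():
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                # Lines look like "MemAvailable:    8052508 kB"
                key, value = line.split(":", 1)
                meminfo[key] = int(value.split()[0]) * 1024
        # Prefer MemAvailable (kernel >= 3.14), otherwise fall back to MemFree.
        return meminfo.get("MemAvailable", meminfo.get("MemFree"))

    print(available_memory_bytes())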
As for other operating systems, it looks like psutil is your only option, unless Windows has something similar.
Update:
For OS X/macOS, something similar may be possible using vm_stat, like the Python script in this answer.
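A rough sketch along those lines, calling vm_stat via subprocess; note that counting free + inactive pages as "available" is only an approximation, not exactly what psutil reports:

    import re
    import subprocess

    def available_memory_bytes_macos():
        # vm_stat prints a header with the page size, then "Pages xxx: N." lines.
        out = subprocess.check_output(["vm_stat"]).decode()
        page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))
        pages = {}
        for line in out.splitlines()[1:]:
            key, _, value = line.partition(":")
            pages[key.strip()] = int(value.strip().rstrip("."))
        # Approximation: treat free + inactive pages as "available".
        return (pages["Pages free"] + pages["Pages inactive"]) * page_size

    print(available_memory_bytes_macos())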
Related
I'm looking for a python module that will allow me to monitor and log the system statistics of 4 computers. I need this module to be able to:
Work on at least Windows and Debian Linux
Monitor disk usage, memory usage, network usage, cpu load, and core temperature (if available)
Unfortunately, I haven't been able to find a module that satisfies both requirements, and I want to avoid wrapping Python around another language to accomplish this.
If anyone has anything remotely close to what I'm describing, I'd greatly appreciate it.
Thanks
Have you looked at Munin? You can build all sorts of plugins in Python as well, plus many already exist.
Maybe WMI is what you need. See link below for reference.
Python_WMI
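As a rough illustration of what polling via the third-party wmi package could look like on Windows (the Win32_* classes and properties used here are standard WMI, but treat this as an untested sketch rather than a ready-made monitor; core temperature is generally not exposed this way):

    import wmi

    c = wmi.WMI()

    # Memory usage (values reported by WMI in kilobytes)
    for os_info in c.Win32_OperatingSystem():
        print("Free physical memory (KB):", os_info.FreePhysicalMemory)

    # Disk usage on local fixed disks (DriveType=3)
    for disk in c.Win32_LogicalDisk(DriveType=3):
        print(disk.DeviceID, "free bytes:", disk.FreeSpace, "of", disk.Size)

    # CPU load
    for cpu in c.Win32_Processor():
        print(cpu.Name, "load %:", cpu.LoadPercentage)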
CPython's multiprocessing package is implemented fairly differently on Windows and on Linux, because the Windows implementation cannot rely on fork(2). However, it seems to me that the Windows implementation of multiprocessing (spawning a separate process and sending it the required state by serializing it) should also work on Linux (or am I wrong?).
While I work on Linux, I would like to make sure that the code I write also works on Windows (e.g., not accidentally have unpicklable arguments, etc.). Is there a way I can force CPython to use the Windows implementation of multiprocessing on Linux?
Thanks.
Hum, in fact this has just become possible very recently: http://bugs.python.org/issue8713.
Now I just have to run 3.4alpha2 :)
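For anyone landing here later, the API that issue added in 3.4 looks roughly like this small sketch, forcing the Windows-style "spawn" start method on Linux:

    import multiprocessing as mp

    def work(x):
        return x * x

    if __name__ == "__main__":
        # Available in Python 3.4+: use the Windows-style start method everywhere,
        # which surfaces pickling problems even when developing on Linux.
        mp.set_start_method("spawn")
        with mp.Pool(4) as pool:
            print(pool.map(work, range(10)))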
I recently asked this question and got a wonderful answer to it involving the os.walk command. My script is using this to search through an entire drive for a specific folder using for root, dirs, files in os.walk(drive):. Unfortunately, on a 600 GB drive, this takes about 10 minutes.
Is there a better way to invoke this or a more efficient command to be using? Thanks!
If you're just looking for a small constant improvement, there are ways to do better than os.walk on most platforms.
In particular, walk ends up having to stat many regular files just to make sure they're not directories, even though the information is (Windows) or could be (most *nix systems) already available from the lower-level APIs. Unfortunately, that information isn't available at the Python level… but you can get to it via ctypes or by building a C extension library, or by using third-party modules like scandir.
This may cut your time to somewhere from 10% to 90%, depending on your platform and the details of your directory layout. But it's still going to be a linear search that has to check every directory on your system. The only way to do better than that is to access some kind of index. Your platform may have such an index (e.g., Windows Desktop Search or Spotlight); your filesystem may as well (but that will require low-level calls, and may require root/admin access), or you can build one on your own.
Use subprocess.Popen to start a native 'find' process.
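For example, something like this (the drive path and folder name are just placeholders):

    import subprocess

    drive = "/"             # hypothetical starting point
    target = "target_name"  # hypothetical folder name to search for

    proc = subprocess.Popen(
        ["find", drive, "-type", "d", "-name", target],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,  # ignore permission-denied noise
    )
    for line in proc.stdout:
        print(line.decode().rstrip())
    proc.wait()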
scandir.walk(path) gives results 2-20 times faster than os.walk(path).
You can install the module with pip install scandir.
The docs for scandir are here.
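A minimal sketch of using it as a drop-in replacement for os.walk (on Python 3.5+ the same speedup is built into os.walk itself; the drive path and folder name below are placeholders):

    import os
    import scandir

    drive = "/"              # hypothetical starting point
    target = "target_name"   # hypothetical folder name to find

    for root, dirs, files in scandir.walk(drive):
        if target in dirs:
            print("found:", os.path.join(root, target))
            break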
I have a script in python which uses a resource which can not be used by more than a certain amount of concurrent scripts running.
Classically, this would be solved by named semaphores, but I cannot find those in the documentation of the multiprocessing or threading modules.
Am I missing something, or are named semaphores not implemented/exposed by Python? And more importantly, if they are not, what is the best way to emulate one?
Thanks,
Boaz
PS. For reasons which are not so relevant to this question, I cannot aggregate the task into a continuously running process/daemon or work with spawned processes - both of which, it seems, would have worked with the Python API.
I suggest a third-party extension like these, ideally the posix_ipc one -- see in particular the semaphore section in the docs.
These modules are mostly about exposing System V and POSIX IPC (including named semaphores) in a Unix-y way, but at least one of them (posix_ipc specifically) is claimed to work with Cygwin on Windows (I haven't verified that claim). There are some documented limitations on FreeBSD 7.2 and Mac OS X 10.5, so take care if those platforms are important to you.
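A short sketch of limiting concurrency with posix_ipc (the semaphore name and the limit are placeholders; the leading "/" in the name and the O_CREAT flag follow the documented posix_ipc API, and initial_value only takes effect when the semaphore is first created):

    import time
    import posix_ipc

    MAX_CONCURRENT = 4  # hypothetical limit on concurrent scripts

    sem = posix_ipc.Semaphore(
        "/my_script_slots", posix_ipc.O_CREAT, initial_value=MAX_CONCURRENT
    )
    sem.acquire()            # blocks until one of the slots frees up
    try:
        time.sleep(1)        # placeholder for the resource-limited work
    finally:
        sem.release()
        sem.close()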
You can emulate them by using the filesystem instead of a kernel object (named semaphores are implemented this way on some platforms anyhow). You'll have to implement sem_[open|wait|post|unlink] yourself, but it ought to be relatively trivial to do so; a rough sketch is below. Your synchronization overhead might be significant (depending on how often you have to fiddle with the semaphore in your app), so you might want to initialize a ramdisk when you launch your process in which to store named semaphores.
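A very rough sketch of that idea: a counting semaphore emulated with N slot files, each claimed atomically via O_CREAT | O_EXCL. The directory and class name are illustrative, and real code would also need to clean up stale slots if a holder crashes:

    import errno
    import os
    import time

    class FileSemaphore:
        def __init__(self, directory, count):
            self.directory = directory
            self.count = count
            self.held = None
            os.makedirs(directory, exist_ok=True)

        def acquire(self, poll_interval=0.1):
            while True:
                for i in range(self.count):
                    slot = os.path.join(self.directory, "slot-%d" % i)
                    try:
                        # O_CREAT | O_EXCL makes the claim atomic.
                        fd = os.open(slot, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                        os.write(fd, str(os.getpid()).encode())
                        os.close(fd)
                        self.held = slot
                        return
                    except OSError as e:
                        if e.errno != errno.EEXIST:
                            raise
                time.sleep(poll_interval)

        def release(self):
            if self.held:
                os.unlink(self.held)
                self.held = None

A call site would then look like sem = FileSemaphore("/dev/shm/myscript-sem", 4); sem.acquire(); ...; sem.release(), with /dev/shm (a tmpfs on Linux) serving as the ramdisk mentioned above.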
Alternatively if you're not comfortable rolling your own, you could probably wrap boost::interprocess::named_semaphore (docs here) in a simple extension module.
I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled into the user's local ~/bin, with a timestamp comparison against the C source so it is recompiled only when the code is updated rather than on every run (a rough sketch of this is shown after the clarification below). Another was to just compile it for each platform and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share an ABI (e.g. OS X, various Linuxes, BSD, etc.). I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
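For reference, a sketch of the first idea: a Python wrapper that rebuilds the shared C source into the user's ~/bin only when the source is newer than the local binary, then execs it. The paths, tool name, and gcc flags are placeholders:

    import os
    import subprocess
    import sys

    SRC = "/shared/tools/mytool.c"            # hypothetical shared source
    BIN = os.path.expanduser("~/bin/mytool")  # per-user, per-host binary

    def ensure_built():
        # Rebuild only if no binary exists yet or the source is newer.
        if (not os.path.exists(BIN)
                or os.path.getmtime(SRC) > os.path.getmtime(BIN)):
            os.makedirs(os.path.dirname(BIN), exist_ok=True)
            subprocess.check_call(["gcc", "-O2", "-o", BIN, SRC])

    if __name__ == "__main__":
        ensure_built()
        # Replace the wrapper process with the freshly built tool.
        os.execv(BIN, [BIN] + sys.argv[1:])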
Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.
In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.
This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.
I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.
Also, you could use autoconf and distribute your application in source form only. :)
You know, you should look at static linking.
These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore.
You could also try running your applications in chroot() jails and distributing those.
Depending on your mix of OSes, you might be better off creating packages for each class of system.
Alternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.