Python module to monitor system statistics cross-platform? - python

I'm looking for a python module that will allow me to monitor and log the system statistics of 4 computers. I need this module to be able to:
Work on at least Windows and Debian Linux
Monitor disk usage, memory usage, network usage, CPU load, and core temperature (if available)
Unfortunately, I haven't been able to find a module that satisfies both requirements, and I want to avoid wrapping Python around another language to accomplish this.
If anyone has anything remotely close to what I'm describing, I'd greatly appreciate it.
Thanks

Have you looked at Munin? You can build all sorts of plugins in Python as well, plus many already exist.

Maybe WMI is what you need, at least for the Windows side (WMI is Windows-only). See the link below for reference.
Python_WMI
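For what it's worth, psutil (which also comes up in a later question on this page) covers most of that list on both Windows and Linux; core temperature is only exposed on some platforms. A minimal sketch, assuming psutil is installed:

import psutil

# Disk, memory, network and CPU are available cross-platform.
print(psutil.disk_usage('/').percent)        # disk usage in percent (use 'C:\\' on Windows if needed)
print(psutil.virtual_memory().percent)       # memory usage in percent
print(psutil.net_io_counters().bytes_sent)   # total bytes sent since boot
print(psutil.cpu_percent(interval=1))        # CPU load sampled over one second

# Temperatures are only available on some platforms (and newer psutil versions).
if hasattr(psutil, 'sensors_temperatures'):
    print(psutil.sensors_temperatures())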

Related

How do I run python without an OS

I read somewhere you could run python without an OS. How would I do this? Would I need to compile it? Can I run it raw? And if I did need to compile it, what tool would I use and what format would I compile it to?
As far as I know there's not really any way to do this easily, but I could be wrong. There are "portable" versions of Python, but these are still operating-system dependent. I think what you're referencing is that some guys at PyCon managed to run Python from the GRUB bootloader. Your best bet would be installing some minimalist Linux distribution, with essentially only Python and the core packages required to run it. The problem is that there's a lot of hardware out there, all with its own drivers and assembly language. Python can work as a low-level language when you need it to, but it seems like configuration would be a nightmare. I haven't looked into it super thoroughly, but it seems difficult and impractical. Having an OS underneath Python gives you access to the package managers, IDEs, and compiler options that make Python worth using.
Yeah, that's one of the options; pretty much all the "light" distros will be similar. If you want more to try out, look here. Not sure why you're worried about speed, though: if you're having speed issues, it's far more likely to be the IDE you're using or your code bogging down the computer, not any sort of compiler issue.

python system idle/inactive without using ctypes - platform independent as well

I was wondering if it is possible to detect whether the system is idle using psutil (https://code.google.com/p/psutil/). I want to be able to trigger something if there is no system-wide activity for a few seconds. I have had some *trouble using the ctypes/Python wrapper method described in some other posts.
*trouble: My Python setup is a bit tricky. psutil seems to work smoothly for other things in my application, so I would prefer it, if possible.
Thanks,
M&M
I managed to get ctypes to work (well... without crashing Python, that is :-) ), and so now I am able to use this solution from SO.
And thanks to FogleBird for the above solution.
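For reference, the ctypes approach FogleBird's answer describes is, as far as I recall, roughly the following; it asks Windows for the timestamp of the last user input and subtracts it from the current tick count (Windows only):

from ctypes import Structure, windll, c_uint, sizeof, byref

class LASTINPUTINFO(Structure):
    _fields_ = [('cbSize', c_uint), ('dwTime', c_uint)]

def get_idle_duration():
    # Seconds since the last keyboard/mouse input.
    info = LASTINPUTINFO()
    info.cbSize = sizeof(info)
    windll.user32.GetLastInputInfo(byref(info))
    millis = windll.kernel32.GetTickCount() - info.dwTime
    return millis / 1000.0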
In the process of tackling this problem I learnt about psutil, which I think can benefit those who are looking for system monitoring, profiling, limiting process resources, and management of running processes. And it's cross-platform... yay!
Cheers,
M&M

deploying python applications

Is it possible to deploy python applications such that you don't release the source code and you don't have to be sure the customer has python installed?
I'm thinking maybe there is some installation process that can run a python app from just the .pyc files and a shared library containing the interpreter or something like that?
Basically I'm keen to get the development benefits of a language like Python (high productivity, etc.), but can't quite see how you could deploy it professionally to a customer where you don't know how their machine is set up and you definitely can't deliver the source.
How do professional software houses developing in python do it (or maybe the answer is that they don't) ?
You protect your source code legally, not technologically. Distributing .py files really isn't a big deal. The only technological solution here is not to ship your program at all (which is becoming more popular these days, as software is increasingly provided over the internet rather than installed locally).
If you don't want the user to have to have Python installed but want to run Python programs, you'll have to bundle Python. Your resistance to doing so seems quite odd to me. Java programs have to either bundle or anticipate the JVM's presence. C programs have to either bundle or anticipate libc's presence (usually the latter), etc. There's nothing hacky about using what you need.
Professional Python desktop software bundles Python, either through something like py2exe/cx_Freeze/some in-house thing that does the same thing or through embedding Python (in which case Python comes along as a library rather than an executable). The former approach is usually a lot more powerful and robust.
Yes, it is possible to make installation packages. Look at py2exe, cx_Freeze, and others.
No, it is not possible to keep the source code completely safe. There are always ways to decompile.
Original source code can trivially be obtained from .pyc files if someone wants to do it. Code obfuscation would make it more difficult to do something with the code.
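To make the py2exe/cx_Freeze suggestion concrete, a minimal cx_Freeze setup script looks something like this (the names and entry-point script are placeholders):

# setup.py -- build with: python setup.py build
from cx_Freeze import setup, Executable

setup(
    name='myapp',                        # placeholder
    version='1.0',
    description='Example frozen app',
    executables=[Executable('main.py')], # your entry-point script
)

The build directory it produces contains the executable plus a bundled Python runtime, which is exactly the "customer need not have Python installed" scenario from the question.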
I am surprised no one mentioned this before now, but Cython seems like a viable solution to this problem. It will take your Python code and transpile it into CPython-compatible C code. You also get a small speed boost (~25% last I checked), since it will be compiled to native machine code instead of just Python bytecode. You still need to be sure the user has Python installed (either by making it a prerequisite pushed off onto the user to deal with, or by bundling it as part of the installer process). Also, you do need to have at least one small part of your application in pure Python: the hook into the main function.
So you would need something basic like this:
import cython_compiled_module

if __name__ == '__main__':
    cython_compiled_module.main()
But this effectively leaks no implementation details. I think using Cython should meet the criteria in the question, but it also introduces the added complexity of compiling in C, which loses some of Python's easy cross-platform nature. Whether that is worth it or not is up to you.
As others stated, even the resulting compiled C code could be decompiled with a little effort, but it is likely much closer to the type of obfuscation you were initially hoping for.
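For completeness, the compile step for the module above would look something like this, assuming the implementation lives in cython_compiled_module.pyx:

# setup.py -- build with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize('cython_compiled_module.pyx'))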
Well, it depends what you want to do. If by "not releasing the source code" you mean "the customer should not be able to access the source code in any way", well, you're fighting a losing battle. Even programs written in C can be reverse engineered, after all. If you're afraid someone will steal from you, make them sign a contract and sue them if there's trouble.
But if you mean "the customer should not care about python files, and not be able to casually access them", you can use a solution like cx_Freeze to turn your Python application into an executable.
Build a web application in python. Then the world can use it via a browser with zero install.

Named semaphores in Python?

I have a script in Python which uses a resource that cannot be used by more than a certain number of concurrently running scripts.
Classically, this would be solved by a named semaphore, but I cannot find those in the documentation of the multiprocessing or threading modules.
Am I missing something, or are named semaphores not implemented/exposed by Python? And more importantly, if they are not, what is the best way to emulate one?
Thanks,
Boaz
PS. For reasons which are not so relevant to this question, I can not aggregate the task to a continuously running process/daemon or work with spawned processes - both of which, it seems, would have worked with the python API.
I suggest a third-party extension like these, ideally the posix_ipc one -- see in particular the semaphore section in the docs.
These modules are mostly about exposing System V or POSIX IPC (including semaphores) in a unixy way, but at least one of them (posix_ipc specifically) is claimed to work with Cygwin on Windows (I haven't verified that claim). There are some documented limitations on FreeBSD 7.2 and Mac OS X 10.5, so take care if those platforms are important to you.
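A minimal sketch with posix_ipc, assuming a limit of 4 concurrent scripts (the semaphore name is arbitrary, but must start with a slash):

import posix_ipc

# Created with initial_value=4 the first time; later runs just open the existing one.
sem = posix_ipc.Semaphore('/my_script_limit', flags=posix_ipc.O_CREAT,
                          initial_value=4)
sem.acquire()      # blocks while 4 other scripts hold the semaphore
try:
    pass           # ... do the work that uses the limited resource ...
finally:
    sem.release()
    sem.close()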
You can emulate them by using the filesystem instead of a kernel path (named semaphores are implemented this way on some platforms anyhow). You'll have to implement sem_[open|wait|post|unlink] yourself, but it ought to be relatively trivial to do so. Your synchronization overhead might be significant (depending on how often you have to fiddle with the semaphore in your app), so you might want to initialize a ramdisk when you launch your process in which to store named semaphores.
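A rough sketch of that idea: represent the semaphore as N "slot" files in a shared directory and rely on the atomicity of O_CREAT|O_EXCL (the directory and slot count are placeholders, and a real version would also reap slots left behind by crashed processes):

import os, time

SLOT_DIR, SLOTS = '/tmp/my_sem', 4   # placeholders

def sem_wait(retry=0.5):
    # Try to grab any free slot; file creation is atomic thanks to O_EXCL.
    if not os.path.isdir(SLOT_DIR):
        os.makedirs(SLOT_DIR)
    while True:
        for i in range(SLOTS):
            path = os.path.join(SLOT_DIR, 'slot-%d' % i)
            try:
                fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return path          # we own this slot now
            except OSError:
                continue             # slot taken, try the next one
        time.sleep(retry)

def sem_post(path):
    os.unlink(path)                  # free the slot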
Alternatively if you're not comfortable rolling your own, you could probably wrap boost::interprocess::named_semaphore (docs here) in a simple extension module.

Using C in a shared multi-platform POSIX environment

I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled into the user's local ~/bin, with a timestamp comparison against the C source so it is recompiled only when the code is updated (see the sketch below). Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OSes are involved that do not share an ABI: OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
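A sketch of the compile-on-demand idea from the question (paths and compiler flags are placeholders): the wrapper rebuilds into the user's local ~/bin only when the shared source is newer than the local binary, then execs the result.

import os, subprocess, sys

SRC = '/shared/tools/mytool.c'             # placeholder path to the shared source
BIN = os.path.expanduser('~/bin/mytool')   # per-user, per-machine binary

def ensure_built():
    # Rebuild only if the binary is missing or older than the source.
    if not os.path.exists(BIN) or os.path.getmtime(SRC) > os.path.getmtime(BIN):
        if not os.path.isdir(os.path.dirname(BIN)):
            os.makedirs(os.path.dirname(BIN))
        subprocess.check_call(['gcc', '-O2', '-o', BIN, SRC])

if __name__ == '__main__':
    ensure_built()
    os.execv(BIN, [BIN] + sys.argv[1:])    # replace this wrapper with the tool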
Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.
In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.
This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.
I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.
Also, you could use autoconf and distribute your application in source form only. :)
You know, you should look at static linking.
These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore.
You could also try running your applications in chroot() jails and distributing those.
Depending on your mix of OSes, you might be better off creating packages for each class of system.
Alternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.
