CPython's multiprocessing package is implemented quite differently on Windows and on Linux, since a Windows implementation cannot rely on fork(2). However, it seems to me that the Windows implementation of multiprocessing (spawning a separate process and sending it the required state by serializing it) should also work on Linux (or am I wrong?).
While I work on Linux, I would like to make sure that the code I write also works on Windows (e.g., that I don't accidentally pass unpicklable arguments, etc.). Is there a way I can force CPython to use the Windows implementation of multiprocessing on Linux?
Thanks.
Hmm, in fact this has just become possible very recently: http://bugs.python.org/issue8713.
Now I just have to run 3.4alpha2 :)
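For the record, since Python 3.4 (the change tracked in that issue) this is a one-liner. A minimal sketch; the worker function and pool size are just placeholders:

import multiprocessing as mp

def worker(x):
    return x * x

if __name__ == "__main__":
    # Ask for the Windows-style start method: children are spawned fresh,
    # so all state sent to them must be picklable, just like on Windows.
    mp.set_start_method("spawn")
    with mp.Pool(2) as pool:
        print(pool.map(worker, range(5)))

Note that the if __name__ == "__main__" guard is mandatory under "spawn", exactly as it is on Windows.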
I am aware of the psutil package which provides (along with many other things) a way to access information about system memory, including the amount of available memory (psutil.virtual_memory().available). How would I go about querying available memory in a pure Python implementation? The solution would need to work for UNIX-like systems, including Linux, OS X, and so on.
This has already been sort of answered here, although that method involves reading the /proc/meminfo file, which is available on Linux (and some other Unix-like systems).
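For illustration, a minimal pure-Python sketch of that approach (Linux only; the MemAvailable field was added in kernel 3.14, so this falls back to MemFree on older kernels):

def available_memory_bytes():
    # /proc/meminfo lines look like "MemAvailable:   8012345 kB"
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0]) * 1024  # values are in kB
    return fields.get("MemAvailable", fields.get("MemFree"))

print(available_memory_bytes())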
As for other operating systems, it looks like psutil is your only option, unless Windows has something similar.
Update:
For OS X/macOS, something similar may be possible using vm_stat, like the Python script in this answer.
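Along those lines, a rough sketch of what parsing vm_stat output might look like (counting free plus inactive pages as "available" is a heuristic, not an exact match for psutil's number):

import subprocess

def available_memory_bytes_macos():
    lines = subprocess.check_output(["vm_stat"]).decode().splitlines()
    # Header: "Mach Virtual Memory Statistics: (page size of 4096 bytes)"
    page_size = int(lines[0].split()[-2])
    pages = {}
    for line in lines[1:]:
        key, _, value = line.partition(":")
        pages[key.strip()] = int(value.strip().rstrip("."))
    return (pages["Pages free"] + pages["Pages inactive"]) * page_size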
I'm writing a program that is supposed to synchronize several different parts, including hardware. This is done by using a Python script that communicates with other programs.
I've found out that something I need for synchronization is for the main script to be able to tell if another particular program is running, or if it stops.
I imagine it would look something like:
# Checking if a program runs
if is_running(program):
    statements

# Waiting for a program to stop
while is_running(program):
    pass
Does anyone know? I'm using Python 2.7 on Windows 7.
This question is pretty similar to your situation, and suggests using WMI, which runs on Python 2.4 to 3.2 and on Windows 7, or using the built-in wmic to get the list of processes.
If you care about making the code cross platform, you could also use psutil, which works on "Linux, Windows, OSX, FreeBSD and Sun Solaris, both 32-bit and 64-bit architectures, with Python versions from 2.4 to 3.4."
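For example, a possible is_running() on top of psutil (matching by executable name here is an assumption about how you identify the other program):

import psutil

def is_running(program):
    for proc in psutil.process_iter():
        try:
            if proc.name() == program:  # e.g. "notepad.exe"
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process ended or is off-limits; skip it
    return False

while is_running("notepad.exe"):
    pass  # busy-wait; consider a time.sleep() in here

One caveat: name() is a method in psutil 2.0 and later; in older versions it was a plain attribute.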
I am using py2exe to compile Python scripts into executable files on Windows XP/7/2000.
I am wondering whether such executables could freeze the operating system, forcing me to reboot Windows.
I suppose such problems could occur if I try to work with driver libraries.
What do you think?
Theoretically, yes. Windows is not the most stable OS out there, and programs sometimes "freeze" it even without mucking with drivers and kernel-mode code. Python programs aren't any different in this respect, whether packed with py2exe or not, since Python programs on Windows have easy access to the same Windows APIs any other program can access.
However, I have a feeling you're not "just asking": if you have a specific application that freezes the system, it should be addressed as a specific case in hand. Unless the application does something really crazy, it's probably a bug in it that can be solved.
A Python program - regardless of whether it is interpreted by the Python executable or packaged in py2exe form - can do the same as any other program. That means that it should not be able to freeze a modern operating system unless it is run with superuser rights. However, programs (especially malicious and badly written ones) can significantly degrade user experience, for example by going fullscreen and refusing to show the desktop, or by starting lots of threads and processes.
I'm trying to find a way for a Python program to detect whether a file on the file system has been modified. I know that I could have something run every 5 seconds to check the last-modification date on the file, but I was curious whether there's an easier method, without requiring my program to check repeatedly.
Does anyone know of such a method?
watchdog
Excellent cross-platform library for watching directories.
From the website:
Supported Platforms
Linux 2.6 (inotify)
Mac OS X (FSEvents, kqueue)
FreeBSD/BSD (kqueue)
Windows (ReadDirectoryChangesW with I/O completion ports; ReadDirectoryChangesW worker threads)
OS-independent (polling the disk for directory snapshots and comparing them periodically; slow and not recommended)
I've used it on a couple of projects and it seems to work wonderfully.
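For reference, a minimal sketch of watchdog usage (the watched path and the handler's behavior are just placeholders):

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        print("Modified:", event.src_path)

observer = Observer()
observer.schedule(MyHandler(), path=".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)  # the observer works on a background thread
except KeyboardInterrupt:
    observer.stop()
observer.join()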
For Linux, there is pyinotify.
From the homepage:
Pyinotify is a Python module for monitoring filesystems changes. Pyinotify relies on a Linux Kernel feature (merged in kernel 2.6.13) called inotify. inotify is an event-driven notifier, its notifications are exported from kernel space to user space through three system calls. pyinotify binds these system calls and provides an implementation on top of them offering a generic and abstract way to manipulate those functionalities.
Thus it is obviously not cross-platform, and it relies on a new enough kernel version. However, as far as I can see, requiring kernel support would be true of any non-polling mechanism.
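If you go this route, a minimal sketch might look like this (the watched path and event mask are placeholders):

import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        print("Modified:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch("/tmp", pyinotify.IN_MODIFY)
notifier = pyinotify.Notifier(wm, Handler())
notifier.loop()  # blocks, dispatching events to the handler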
On Windows there is:
watcher, which is a nice python port of the .NET FileSystemWatcher API.
Also there's (the one I wrote) dirwatch.
Both rely on the Windows ReadDirectoryChangesW function. Though for real work, I'd use watcher (proper C extension, good API, Python 2 & 3 support).
Mine is mostly an experiment in calling the relevant APIs on Windows, so it's only interesting if you want an example of calling these things from Python.
You should also see inotifyx which is very similar to the previously mentioned pyinotify, but is said to have an API which changes less.
I have a script in Python which uses a resource that cannot be used by more than a certain number of concurrently running scripts.
Classically, this would be solved by a named semaphore, but I cannot find one in the documentation of the multiprocessing or threading modules.
Am I missing something, or are named semaphores not implemented / exposed by Python? And more importantly, if they are not, what is the best way to emulate one?
Thanks,
Boaz
PS. For reasons which are not so relevant to this question, I cannot aggregate the task to a continuously running process/daemon or work with spawned processes - both of which, it seems, would have worked with the Python API.
I suggest a third-party extension like these, ideally the posix_ipc one -- see in particular the semaphore section in the docs.
These modules are mostly about exposing System V and POSIX IPC (including semaphores) in a Unixy way, but at least one of them (posix_ipc specifically) is claimed to work with Cygwin on Windows (I haven't verified that claim). There are some documented limitations on FreeBSD 7.2 and Mac OS X 10.5, so take care if those platforms are important to you.
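To make it concrete, a minimal sketch with posix_ipc (the semaphore name and the limit of 3 concurrent holders are made up):

import posix_ipc

# O_CREAT creates the semaphore on first use; later openers get the
# existing one, and initial_value is then ignored.
sem = posix_ipc.Semaphore("/my_script_slots", posix_ipc.O_CREAT, initial_value=3)
sem.acquire()   # blocks until one of the 3 slots is free
try:
    pass  # ... use the contended resource here ...
finally:
    sem.release()
    sem.close()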
You can emulate them by using the filesystem instead of a kernel path (named semaphores are implemented this way on some platforms anyhow). You'll have to implement sem_[open|wait|post|unlink] yourself, but it ought to be relatively trivial to do so. Your synchronization overhead might be significant (depending on how often you have to fiddle with the semaphore in your app), so you might want to initialize a ramdisk when you launch your process in which to store named semaphores.
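As a rough sketch of that idea, using atomic exclusive file creation for each of N slots (the directory path, slot count, and polling interval are all arbitrary, and a crashed holder leaves a stale slot file behind, which you'd need to handle):

import errno
import os
import time

SEM_DIR = "/dev/shm/my_sem"   # a ramdisk path, per the suggestion above
SLOTS = 3

if not os.path.isdir(SEM_DIR):
    os.makedirs(SEM_DIR)

def sem_wait():
    # Try to grab one of SLOTS lock files; O_CREAT|O_EXCL is atomic.
    while True:
        for i in range(SLOTS):
            path = os.path.join(SEM_DIR, "slot%d" % i)
            try:
                os.close(os.open(path, os.O_CREAT | os.O_EXCL))
                return path
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
        time.sleep(0.1)  # all slots taken; poll again

def sem_post(path):
    os.unlink(path)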
Alternatively if you're not comfortable rolling your own, you could probably wrap boost::interprocess::named_semaphore (docs here) in a simple extension module.