Inter-process communication primitives (semaphores, shared memory) in Python on Windows? posix_ipc works great on Linux; is there anything similar for Windows?
You can use most (if not all) of the Win32 IPC primitives once you install pywin32.
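For instance, here's a minimal sketch of a named semaphore via pywin32 plus a named shared-memory block via the stdlib mmap module; the object names are placeholders:

```python
# Minimal sketch: Win32 IPC from Python on Windows. Requires pywin32
# (pip install pywin32); the object names below are placeholders.
import mmap
import win32event

# Named semaphore: initial count 1, maximum count 1. Any process that
# creates/opens the same name shares the same kernel object.
sem = win32event.CreateSemaphore(None, 1, 1, "Local\\my-semaphore")

# Acquire (decrement) with a 5-second timeout, then release (increment).
if win32event.WaitForSingleObject(sem, 5000) == win32event.WAIT_OBJECT_0:
    try:
        pass  # ... critical section ...
    finally:
        win32event.ReleaseSemaphore(sem, 1)

# Named shared memory: on Windows, mmap with a tagname creates/opens a
# shared section that other processes can map with the same tag.
shm = mmap.mmap(-1, 4096, tagname="my-shared-block")
shm.write(b"hello from this process")
```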
For semaphores on Windows, I've created https://pypi.org/project/semaphore-win-ctypes/.
It provides low-level access to the Windows semaphore APIs from Python. This enables some interesting use cases, such as letting Python processes and non-Python processes acquire the same semaphore object.
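I won't reproduce the library's own API from memory, but as an illustration of the mechanism it wraps, here is a hedged ctypes sketch of the raw Win32 semaphore calls; the semaphore name and counts are made up for the example:

```python
# Hedged sketch: calling the Win32 semaphore API directly with ctypes,
# the same underlying calls a wrapper library builds on. Windows-only.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.CreateSemaphoreW.restype = wintypes.HANDLE
kernel32.WaitForSingleObject.argtypes = [wintypes.HANDLE, wintypes.DWORD]
kernel32.ReleaseSemaphore.argtypes = [wintypes.HANDLE, wintypes.LONG,
                                      ctypes.POINTER(wintypes.LONG)]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

WAIT_OBJECT_0 = 0
INFINITE = 0xFFFFFFFF

# CreateSemaphoreW(securityAttributes, initialCount, maximumCount, name)
handle = kernel32.CreateSemaphoreW(None, 1, 1, "Local\\demo-semaphore")
if not handle:
    raise ctypes.WinError()

# Acquire: WaitForSingleObject decrements the count (or blocks at zero).
if kernel32.WaitForSingleObject(handle, INFINITE) == WAIT_OBJECT_0:
    try:
        pass  # ... critical section shared with non-Python processes ...
    finally:
        # Release: increment the count by 1; previous count not needed.
        kernel32.ReleaseSemaphore(handle, 1, None)

kernel32.CloseHandle(handle)
```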
Related
I have to write a Python application that must run under Python 2.4 on Unix and 2.7 on Windows.
The application must run parallel tasks that synchronize and share messages with one another.
What would be the simplest, most reliable, and most lightweight solution for this?
I found a library that uses os.fork(), but unfortunately os.fork() is not Windows-compatible.
The multiprocessing package is not Python 2.4-compatible.
I think the only option left is subprocess, but I was wondering if there is another way to solve my problem.
If you use threads, it may be easier. There's a practical guide here.
The guide also has examples of using Queues with threads; with queues you can pass messages between the threads.
Threads are lighter-weight than creating a new process, they are cross-platform, and the threading module is available in Python 2.4. I believe the guide itself was written for 2.4.
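A minimal sketch of that pattern, which should work on Python 2.4 (note the module is named Queue on Python 2 and queue on Python 3):

```python
# Message passing between threads with a Queue; compatible with Python 2.4.
import threading
import Queue  # "import queue" on Python 3

def worker(inbox):
    while True:
        msg = inbox.get()   # blocks until a message arrives
        if msg is None:     # sentinel value: time to shut down
            break
        print("worker got: %s" % msg)

inbox = Queue.Queue()
t = threading.Thread(target=worker, args=(inbox,))
t.start()

for i in range(3):
    inbox.put("message %d" % i)

inbox.put(None)             # tell the worker to stop
t.join()
```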
I was wondering if there is a module that allows a program to see what tasks are running. For example, if I am running Google Chrome, Python IDLE, and the program, it should see all three. (It is most important that it can see itself.)
psutil
psutil is a module providing an interface for retrieving information on all running processes and system utilization (CPU, disk, memory, network) in a portable way by using Python.
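For example, a short sketch that lists every process and flags the script's own entry; the attrs form of process_iter used here is available in recent psutil releases:

```python
# List all running processes with psutil (pip install psutil),
# including this program itself.
import os
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    marker = " <- this program" if proc.info["pid"] == os.getpid() else ""
    print("%6d  %s%s" % (proc.info["pid"], proc.info["name"], marker))
```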
I need a cross-platform module which allows me to enumerate processes on the machine. It needs to work on Windows and Unix, and get things like PID and Process Names.
Is there such module?
psutil should work nicely for this.
"psutil is a module providing an interface for retrieving information on all running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by command line tools like ps, top, kill, lsof and netstat."
I have a few data loggers in the field. The manufacturer set them up as dial-up FTP servers. I'm writing a Python program that automagically downloads all the latest files from the server into a specified folder on my computer.
Which OS-independent library do you recommend for dial-up?
Do you have any suggestions, comments, or concerns that you can share?
Thanks
Why not use Python's built-in ftplib? It looks pretty straightforward, unless I'm missing something.
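Something along these lines would mirror "download all the latest files into a folder"; the host, credentials, and paths are placeholders for the logger's real settings:

```python
# Minimal sketch using the stdlib ftplib; host, credentials, and paths
# are placeholders for the data logger's actual configuration.
import os
from ftplib import FTP

ftp = FTP("192.0.2.10")            # placeholder logger address
ftp.login("user", "password")      # placeholder credentials
ftp.cwd("/logs")

dest = "downloads"
os.makedirs(dest, exist_ok=True)

for name in ftp.nlst():
    local = os.path.join(dest, name)
    if not os.path.exists(local):  # only fetch files we don't have yet
        with open(local, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)

ftp.quit()
```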
For using a modem with Python, this thread talks about using the pyserial module.
I've never used pySerial with a modem, but I have with a USB port and an Arduino. It was pretty straightforward, so with some research into modem communication you could do it pretty easily. pySerial doesn't come with Python by default, but from their site:
[PySerial] provides backends for Python running on Windows, Linux, BSD (possibly any POSIX compliant system), Jython and IronPython (.NET and Mono).
and earlier versions exist for MacOS and others.
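A hedged sketch of what driving a modem with pySerial might look like; the port name, baud rate, and phone number are placeholders, and a real program would need more careful response parsing than this:

```python
# Talk to a modem over a serial port with pySerial (pip install pyserial).
import serial

port = serial.Serial("COM3", 9600, timeout=5)  # e.g. "/dev/ttyS0" on Unix

port.write(b"AT\r")            # basic attention command
print(port.readline())         # a healthy modem answers with b"OK"

port.write(b"ATDT5551234\r")   # dial a placeholder number (Hayes command)
print(port.readline())         # e.g. b"CONNECT ..." on success

port.close()
```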
While developing a Django app deployed on Apache with mod_wsgi, I found that with multithreading (Python threads; mod_wsgi processes=1 threads=8) Python won't use all available processors. With the multiprocessing approach (mod_wsgi processes=8 threads=1) all is fine and I can load my machine fully.
So the question: is this Python behavior normal? I doubt it, because using one process with a few threads is the default mod_wsgi approach.
The system is:
2× Intel Xeon 5XXX-series CPUs (8 cores, 16 with Hyper-Threading) on FreeBSD 7.2 amd64, running Python 2.6.4
Thanks, all, for the answers.
We found that this behavior is normal because of the GIL. Here is a good explanation:
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/
or the Stack Overflow GIL discussion: What is a global interpreter lock (GIL)?
Will Python use all processors in thread mode? No.
Python won't use all available processors; is this Python behavior normal? Yes, it's normal because of the GIL.
For a discussion see http://mail.python.org/pipermail/python-3000/2007-May/007414.html.
You may find that having a couple (or four) threads per core/process still improves performance if there is some blocking; for example, waiting on a database response would otherwise cause that process to block other connections.
Will Python use all processors in thread mode? No.
Is this normal? Yes, this is normal. Python makes no effort to find and use all your cores.
"1 process with a few threads is the default mod_wsgi approach." But that's not optimal or even desirable; it's just a default. Don't read anything into it.
If you want to use all your computer's resources, make the OS handle it. Use processes.
For the most part, the performance difference between multi-processing and multi-threading is hard to measure. Whether you use processes or threads barely matters; it's usually simpler to use processes, since the OS supports them trivially.
Bottom Line
Use multiple processes, so the OS (and Apache) can make as much use of the system as possible.
Threads share a limited set of I/O resources within the process they belong to, and serving web pages is I/O-bound. Processes have independent I/O resources and will more easily max out your processors.
There is still hope. The GIL is only an implementation artifact of CPython, the implementation you download from python.org. Jython and IronPython are two other implementations of Python, and they have no GIL, so you may get better threading results with one of them.
Yes. Python is not really multi-threaded. Instead, there is a global lock, and each thread gets to execute a few operations in turn. This makes it much simpler to write multi-threaded applications in Python, since there can't be any problems with stale caches, etc.
So one Python process can only ever occupy a single CPU. To fully utilize a multi-core system, you must run several Python processes.
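A minimal sketch of that, using the stdlib multiprocessing module (available since Python 2.6) to spread a CPU-bound task across all cores:

```python
# CPU-bound work across all cores; separate processes each have their
# own GIL, so they can run truly in parallel.
from multiprocessing import Pool

def burn(n):
    # With threads, the GIL would serialize these pure-Python loops.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":   # guard required for Windows process spawning
    pool = Pool()            # defaults to one worker per CPU core
    results = pool.map(burn, [10 ** 7] * 8)
    pool.close()
    pool.join()
    print(results)
```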
I don't know if it is still the case, but there is a global lock in the Python interpreter (the GIL) that prevents a single interpreter from using all processor cores, even with multithreading. IIRC, the GIL is released around blocking I/O, which is why I/O-bound threads can still help.
It seems you are seeing the effect of this lock, so personally I would use multiple processes with a single thread each.