I am using Python 3 to run PyQt code, and at the same time I need to call Python 2.7 code for operations that I cannot perform in Python 3.
I implemented the 2.7 code execution via Popen, but it takes a considerable amount of time to run the 2.7 code when it is called through Popen. The same operation runs much faster if I execute it directly in Python 2.7.
Would it be possible to use multiprocessing instead of subprocess.Popen for the same purpose, to speed up the execution of the 2.7 code?
And if that is appropriate, what would be the correct way to call Python 2.7 code from a multiprocessing.Process? Or is it a waste to use multiprocessing, since I am executing only one operation?
multiprocessing is similar to subprocess only on non-POSIX systems that cannot fork processes, so you could, in theory, hack multiprocessing into using a different interpreter. It would be more trouble than it's worth, though, because at that point you wouldn't see any performance difference between spawning a subprocess and spawning a multiprocessing.Process (in fact, it would probably end up slower due to the communication overhead that multiprocessing.Process adds).
So, if we're talking about only a single task that has to execute in a different interpreter, this is as fast as you're going to get. If there are multiple tasks to be executed in the other interpreter, you may still benefit from multiprocessing.Process: spawn a single subprocess to run the other interpreter and then use multiprocessing within it to distribute the tasks over your cores.
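For the single-task case, a plain subprocess call is the straightforward route. Here's a minimal sketch, assuming a hypothetical worker script tasks_py27.py and a python2.7 executable on the PATH:

    import subprocess

    # Spawn the 2.7 interpreter once; tasks_py27.py is a hypothetical
    # worker script that could itself use multiprocessing to spread
    # multiple tasks over the cores.
    proc = subprocess.Popen(
        ["python2.7", "tasks_py27.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate()  # block until the 2.7 code finishes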
Related
Fairly new 'programmer' here, trying to understand how Python interacts with Windows when multiple unrelated scripts are run simultaneously, for example from Task Manager or just by starting them manually from IDLE. The scripts just make HTTP calls and write files to disk, and the environment is Python 3.6.
Is the interpreter able to draw resources from the OS (processor/memory/disk) independently, such that the time to complete each script is more or less the same as it would be if it were the only script running (assuming the scripts cumulatively get nowhere near using up all the CPU or memory)? If so, what are the limitations (number of scripts, etc.)?
Pardon mistakes in terminology. Note the quotes on 'programmer'.
how Python interacts with Windows
Python is an executable, a program. When a program is executed, a new process is created.
python myscript.py starts a new python.exe process where the first argument is your script.
when multiple unrelated scripts are run simultaneously
They are multiple processes.
Is the interpreter able to draw resources from the OS (processor/memory/disk) independently?
Yes. Each process may access the OS API however it wishes, to the extent that the OS allows.
What are the limitations?
Most likely RAM. The same limitations as any other process might encounter.
These are difficult questions to answer, in part because they depend on:
Your operating system: Your OS gets to schedule and run tasks when it wants, which the Python programmer often does not have control over.
What your scripts are actually doing: If your scripts are all trying to write to the same drive, their execution may be halted more often than if no device were being written to. Or a script might even run faster when it is the only one writing to the drive, since the CPU can let one script compute while another writes. (It's hard to tell without benchmark testing.)
How many CPUs you're using: More Central Processing Units can improve parallel processing of programs -- but not necessarily. If your programs are constantly reading from and writing to the same disk, more CPUs may not be a benefit.
Your Python version: (I'm just adding this for completeness.)
Ultimately, the only way you're going to get any real information on this is if you do your own benchmarking -- and even then, you should remember that those figures you find are only applicable to your current setup. That is, if you go to another computer elsewhere, you may find you get different results.
If you aren't familiar with Python's timeit module, I recommend you look into it. (It's part of the standard library, so you already have it.) It'll help you do benchmark testing and get some definitive answers for your platform.
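As a minimal sketch of how timeit can be used (the function being timed is just a stand-in for your actual work):

    import timeit

    def work():
        # Stand-in for whatever your script actually does.
        return sum(i * i for i in range(10000))

    # Execute work() 100 times and report the total elapsed time.
    elapsed = timeit.timeit(work, number=100)
    print("100 runs took %.3f seconds" % elapsed)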
By asking questions like yours, you may soon hear about Python's GIL (Global Interpreter Lock). It has to do with Python threads, and some people think it's a blessing, and some think it's a curse. Either way, this page:
https://realpython.com/python-gil/
has a good high-level explanation of it, including when it can work well and when it might not.
I want to use multiprocessing to spread work across a system's multiple cores. As part of the work, the worker processes will run subprocess.call(..., shell=True). What happens when they do that? Does the forked subprocess stay on that core?
If the main work is done in the child processes created using subprocess module then you don't need multiprocessing to spread the work across multiple CPU cores. See Python threading multiple bash subprocesses?
What happens when they do that?
subprocess.call() runs an external command and waits for it to complete. It doesn't matter whether it is started inside a worker process created by multiprocessing module or not.
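For illustration, a minimal sketch of the call itself (the command is just an example):

    import subprocess

    # Runs the command in a shell and blocks until it exits;
    # the return value is the command's exit status.
    status = subprocess.call("echo hello", shell=True)
    print(status)  # 0 on success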
Does the subprocess fork stay on that core?
If you need that, you should set CPU affinity explicitly. psutil provides a portable way to set/get the CPU affinity of a process.
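A small sketch with psutil (a third-party package; CPU affinity is supported on Linux and Windows, but not on macOS):

    import psutil

    p = psutil.Process()       # the current process
    print(p.cpu_affinity())    # e.g. [0, 1, 2, 3]
    p.cpu_affinity([0])        # pin this process to CPU 0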
If you use numpy, it may affect the CPU affinity. See Why does multiprocessing use only a single core after I import numpy?
AFAIK, Python (using import thread) and C# don't do 'real' multithreading, meaning all threads run on one CPU core.
But in C, using pthreads on Linux, you get real multithreading.
Is this true ?
Assuming that is true, is there any difference between them when you have only one CPU core (which is what I have in a VM)?
Python uses something called a Global Interpreter Lock, which means that even though Python threads are native threads, only one of them can execute Python bytecode at a time.
There is more documentation in the Python wiki here: https://wiki.python.org/moin/GlobalInterpreterLock
There shouldn't be a real performance difference on single-core systems. On multicore systems the difference will vary based on what you do (I/O is, for the most part, not affected by the GIL).
I'm not aware of how C# works internally, but for CPython (the "official" Python interpreter) it is true: threads are not really parallel due to the GIL.
Other implementations of the Python interpreter do not suffer from this problem and can run threads in parallel, the way C's pthreads library does.
However, if you only have one CPU, you won't notice any difference.
As a side note: if you need real parallelism in CPython, you can use the multiprocessing module, which uses processes instead of threads.
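A minimal sketch of process-based parallelism with multiprocessing (Python 3 syntax; the squaring function is just a placeholder workload):

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        # Each worker is a separate process with its own interpreter
        # and its own GIL, so the calls really run in parallel.
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))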
EDIT:
Also, the thread module is deprecated; you should consider using threading instead.
I have a problem when I run a script with Python. I haven't done any parallelization in Python and don't invoke any MPI to run the script. I just execute "python myscript.py", and it should use only one CPU.
However, when I look at the output of the command "top", I see that Python is using almost 390% of my CPUs. I have a quad core, so 8 hardware threads. I don't think this is helping my script run faster, so I would like to understand why Python is using more than one CPU, and how to stop it from doing so.
The interesting thing is that when I run a second script, it also takes up 390%. If I run a third script, the CPU usage of each drops to 250%. I had a similar problem with MATLAB a while ago, which I solved by launching MATLAB with -singlecompthread, but I don't know what to do with Python.
If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.
UPDATE:
My friend ran the code on his own computer and it takes only 100% CPU. I don't use BLAS, MKL, or anything like that. I still don't know what causes the 400% CPU usage.
There's a piece of Fortran code from the SLATEC library, which solves the Ax=b system. I think that part is using a lot of CPU.
Your code might be calling functions that use C/C++/etc. underneath. In that case, multiple threads may be used.
Are you calling any libraries that are just Python bindings to functions implemented more efficiently elsewhere?
You can always set your process affinity so that it runs on only one CPU: use the "taskset" command on Linux (e.g. taskset -c 0 python myscript.py), or Process Explorer on Windows.
This way, you can check whether your script performs the same on one CPU as on several.
Could it be that your code uses SciPy or another numeric library for Python that is linked against Intel MKL or another vendor-provided library that uses OpenMP? If the underlying C/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1:
OMP_NUM_THREADS=1 python myscript.py
Intel MKL is certainly parallel in many places (LAPACK, BLAS, and FFT functions) if linked with the corresponding parallel driver (the default link behaviour), and by default it starts as many compute threads as there are available CPU cores.
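If you'd rather not set the variable on the command line, here's a sketch of doing it from inside the script instead, assuming the script imports a numeric library linked against OpenMP or MKL; it must happen before that import:

    import os

    # Must be set before numpy/scipy (and thus MKL/OpenMP) are
    # imported, or the thread pool is already sized to all cores.
    os.environ["OMP_NUM_THREADS"] = "1"
    os.environ["MKL_NUM_THREADS"] = "1"  # MKL-specific equivalent

    import numpy  # now limited to a single compute thread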
In order to do quality assurance on a critical multicore (8-core) workstation, I want to run the same code on different processors, but not in parallel or concurrently.
I need to run it 8 times, one run for each processor.
What I don't know is how to select the processor I want.
How can this be accomplished in Python?
On Linux with schedutils, I believe you'd use taskset -c X python foo.py to run that specific Python process on CPU X (exactly how you identify your CPUs may vary, but I believe numbers such as 1, 2, 3, ... should work anywhere). I'm sure Windows, the BSDs, etc., have similar commands to support direct processor assignment, but I don't know them.
Which process goes on which core is normally decided by your OS. On Linux there is taskset from the schedutils package to explicitly run a program on a specific processor.
Python 2.6 has a multiprocessing module that takes Python functions and runs them in separate processes, probably moving each new process to a different core.
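As a sketch of the sequential once-per-core approach on Linux, driving taskset from Python (assuming an 8-core box and the foo.py script from above):

    import subprocess

    # Run the same script eight times, one run per CPU, sequentially;
    # taskset pins each run to a single core (Linux only).
    for cpu in range(8):
        subprocess.check_call(["taskset", "-c", str(cpu), "python", "foo.py"])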