Running Python code on different processors

For quality assurance on a critical multicore (8-core) workstation, I want to run the same code on different processors, but not in parallel or concurrently.
I need to run it 8 times, one run for each processor.
What I don't know is how to select the processor I want.
How can this be accomplished in Python?

On Linux with schedutils, I believe you'd use taskset -c X python foo.py to run that specific Python process on CPU X (exactly how you identify your CPUs may vary, but numbers such as 0, 1, 2, ... should work anywhere). I'm sure Windows, the BSDs, etc. have similar commands to support direct processor assignment, but I don't know them.
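For example, a minimal sketch of how you might drive that from Python on Linux, assuming taskset is installed and the script under test is called foo.py (both names are placeholders):

import subprocess

# Run foo.py once per CPU, pinned to that CPU, one run after the other
# (not in parallel). CPU numbering is zero-based on Linux.
for cpu in range(8):
    subprocess.call(["taskset", "-c", str(cpu), "python", "foo.py"])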

Which process goes on which core is normally decided by your OS. On Linux there is taskset from the schedutils package to explicitly run a program on a processor.
Python 2.6 has a multiprocessing module that takes Python functions and runs them in separate processes, probably moving each new process to a different core.
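If you would rather pick the CPU from inside Python itself, on Linux with Python 3.3+ the standard library exposes os.sched_setaffinity; a minimal sketch (pinning to CPU 2 here is arbitrary):

import os

# Restrict the current process (pid 0 means "this process") to CPU 2 only.
# CPU numbering is zero-based; Linux only.
os.sched_setaffinity(0, {2})
print(os.sched_getaffinity(0))  # -> {2}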

Related

Control the number of CPUs used in a JupyterLab server

I'm using jupyterlab and I know that I have 12 cores available.
At the moment I use only 1 and I would like to use more.
I have tried to change the number I use by writing this in the terminal:
export JULIA_NUM_THREADS=7
but then when I print:
import threading
threading.activeCount()
>>>5
How can I make more CPUs available to my JupyterLab notebook?
This is really not my field, so I'm sorry if it is something really simple; I just don't understand what I am doing wrong or where to start.
TL;DR: No configuration needed. It is already available to you; you just need to code explicitly what you want to run in parallel.
JULIA_NUM_THREADS is a configuration option for the Julia kernel in Jupyter, not for the Python kernel (the process that runs your notebook code).
Unless you run Jupyter inside a container, you can use all the cores available in your system out of the box. If Jupyter is in a container or a virtual machine, it will use what you allocate to it and nothing more.
Just remember that by default you use 1 core when you run your Jupyter kernel.
When you run threading.active_count() and get 1, it means your code is using one running thread. Modern processors can run several threads on each available core. The bad news is that this is not a measure of how well you are using the CPU.
Python can act as an orchestrator for libraries that work in parallel behind the scenes (think numpy, pandas, tensorflow...).
If you want to write Python code that uses more than 1 thread and/or 1 CPU, take a look at the multiprocessing module.
The multiprocessing module is part of the standard library, and you can use it without trouble inside Jupyter. You will probably find the Process and Pool classes useful (if you work with deep learning, there is a torch.multiprocessing module with the same interface but with support for working with GPU tensors across processes).
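As a minimal sketch of the Pool approach (the square function and the input range are just placeholders; inside a notebook this works most reliably on Linux, where the default start method is fork):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # 4 worker processes
        print(pool.map(square, range(10)))   # work is distributed over the workers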
A few thoughts, too long for a comment. I am not familiar with Jupyter, only "normal" Python, so maybe this all goes in the wrong direction ;):
As far as I know, active_count (in my opinion you should not use the old camelCase name activeCount) only returns the number of active threads, not the number available. So try to add more threads. I have a quad-core machine and Jupyter starts with 5 threads, but I can add more.
Multithreading is not the same as multiprocessing (if you want to run on different cores you have to use multiprocessing; see python thread vs. multiprocess), so maybe you are looking for the wrong thing?

Speed comparison using multiprocessing.Process versus subprocess.Popen

I am using Python 3 to execute PyQt code, and at the same time I need to call Python 2.7 code for operations that I cannot perform via Python 3.
I implemented the 2.7 code execution via Popen, but it takes a considerable amount of time to run the 2.7 code when it is called that way. The same operation is performed much faster if I run it directly from Python 2.7.
Would it be possible to use multiprocessing instead of subprocess.Popen for the same purpose, to speed up the execution of the 2.7 code?
And if that is appropriate, what would be the correct way to call Python 2.7 code from a multiprocessing.Process? Or is it a waste to use multiprocessing, since I am executing only one operation?
multiprocessing is similar to subprocess only on non-POSIX systems that cannot fork processes, so you could, theoretically, hack multiprocessing into using a different interpreter. It would be more trouble than it's worth, though, because at that point you wouldn't gain any performance by switching from spawning a subprocess to using a multiprocessing.Process (in fact, it would probably end up slower due to the communication overhead added by multiprocessing.Process).
So, if we're talking only about a single task that has to execute in a different interpreter, this is as fast as you're going to get. If there are multiple tasks to be executed in a different interpreter, you may still benefit from multiprocessing.Process by spawning a single subprocess to run the different interpreter and then using multiprocessing within it to distribute the tasks over your cores.
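A rough sketch of the single long-lived interpreter idea, assuming a hypothetical worker.py written for Python 2.7 that reads one task per line from stdin and writes one result per line to stdout (so the 2.7 interpreter is started only once, rather than once per task):

import subprocess

# Start the Python 2.7 interpreter once and keep it alive for many tasks.
worker = subprocess.Popen(
    ["python2.7", "worker.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)

for task in ["task1", "task2", "task3"]:
    worker.stdin.write(task + "\n")
    worker.stdin.flush()
    print(worker.stdout.readline().strip())  # one result line per task

worker.stdin.close()
worker.wait()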

Python 3 on macOS: how to set process affinity

I am trying to restrict the number of CPUs used by Python (for benchmarking & to see if it speeds up my program).
I have found a few Python modules for achieving this ('os', 'affinity', 'psutil'), except that their methods for changing affinity only work on Linux (and sometimes Windows). There is also a suggestion to use the 'taskset' command (Why does multiprocessing use only a single core after I import numpy?), but that command is not available on macOS as far as I know.
Is there a (preferably clean & easy) way to change affinity while running Python / IPython on macOS? It seems that changing processor affinity on Mac is not as easy as on other platforms (http://softwareramblings.com/2008/04/thread-affinity-on-os-x.html).
Not possible. See Thread Affinity API Release Notes:
OS X does not export interfaces that identify processors or control thread placement—explicit thread to processor binding is not supported. Instead, the kernel manages all thread placement. Applications expect that the scheduler will, under most circumstances, run its threads using a good processor placement with respect to cache affinity.
Note that thread affinity is something you'd consider fairly late when optimizing a program; there are a million other things that have a larger impact on your program.
Also note that Python is particularly bad at multithreading to begin with.

Looking at the task manager

I'm writing a program that is supposed to synchronize several different parts,
including hardware. This is done by using a python script that communicates with
other programs.
I've found out that something I need for synchronization is for the main script
to be able to tell if another particular program is running, or if it stops.
I imagine it would look something like:
# Checking if a program runs
if is_running(program):
    statements

# Waiting for a program to stop
while is_running(program):
    pass
Does anyone know? I'm using Python 2.7 on Windows 7.
This question is pretty similar to your situation, and suggests using WMI, which will run on Python 2.4 to 3.2 and Windows 7, or using the built-in wmic tool to get the list of processes.
If you care about making the code cross-platform, you could also use psutil, which works on "Linux, Windows, OSX, FreeBSD and Sun Solaris, both 32-bit and 64-bit architectures, with Python versions from 2.4 to 3.4."
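For example, a minimal is_running built on psutil (the process name "notepad.exe" is just a placeholder):

import psutil

def is_running(name):
    # True if any running process has the given executable name.
    return any(p.info["name"] == name
               for p in psutil.process_iter(attrs=["name"]))

# Wait for a particular program to stop
while is_running("notepad.exe"):
    pass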

Stop Python from using more than one cpu

I have a problem when I run a script with Python. I haven't done any parallelization in Python and don't use MPI to run the script. I just execute "python myscript.py" and it should only use 1 CPU.
However, when I look at the output of the command "top", I see that Python is using almost 390% of my CPUs. I have a quad core, so 8 hardware threads. I don't think this is helping my script run faster, so I would like to understand why Python is using more than one CPU and stop it from doing so.
The interesting thing is that when I run a second script, that one also takes up 390%. If I run a third script, the CPU usage of each of them drops to 250%. I had a similar problem with MATLAB a while ago, and the way I solved it was to launch MATLAB with -singleCompThread, but I don't know what to do with Python.
If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.
UPDATE:
My friend ran the code on his own computer and it only takes 100% CPU. I don't use BLAS, MKL, or anything like that. I still don't know what the cause of the 400% CPU usage is.
There's a piece of Fortran code from the SLATEC library, which solves the Ax=b system. I think that part is using a lot of CPU.
Your code might be calling functions that use C/C++/etc. underneath. In that case, multiple threads may be used.
Are you calling any libraries that are just Python bindings to more efficiently implemented functions?
You can always set your process affinity so it runs on only one CPU. Use the "taskset" command on Linux, or Process Explorer on Windows.
This way, you should be able to tell whether your script has the same performance using one CPU or more.
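If you prefer to do it from inside the script, psutil exposes the same idea on Linux and Windows (a sketch; pinning to CPU 0 is arbitrary, and cpu_affinity is not available on macOS):

import psutil

p = psutil.Process()      # the current process
p.cpu_affinity([0])       # restrict it to CPU 0 only
print(p.cpu_affinity())   # -> [0]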
Could it be that your code uses SciPy or another numeric Python library that is linked against Intel MKL or another vendor-provided library that uses OpenMP? If the underlying C/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1:
OMP_NUM_THREADS=1 python myscript.py
Intel MKL is certainly parallel in many places (LAPACK, BLAS, and FFT functions) if linked with the corresponding parallel driver (the default link behaviour), and by default it starts as many compute threads as there are available CPU cores.
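You can also set the limit from inside the script, as long as it happens before the threaded library is imported or initialised; a minimal sketch (numpy here stands in for whatever library is linked against MKL/OpenMP):

import os

# Must be set before the OpenMP/MKL-backed library starts its thread pool.
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np  # imported only after the limit is in place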
