I have a program which can optionally use tqdm to display a progress bar for long calculations.
I've packaged the program using pyinstaller.
The packaged program runs to completion just fine if I don't enable tqdm.
If I enable tqdm, the packaged program works fine and displays progress bars until it gets to the very end; then the message below is displayed and the packaged program is relaunched (with the wrong args).
multiprocessing/resource_tracker.py:104: UserWarning: resource_tracker:
process died unexpectedly, relaunching. Some resources might leak.
There is nothing fancy about the way the program invokes tqdm.
There is a loop that iterates over the members of a list.
The program does not import the multiprocessing module.
def best_cost(self, guesses, steps=1, verbose=0) -> None:
    """ find the guess with the minimum cost """
    self.cost = float("inf")
    for guess in guesses if verbose < 3 else tqdm(guesses, mininterval=1.0):
        groups = dict()
        for (word, frequency) in self.words.items():
            result = calc_result(guess, word)
            groups.setdefault(result, {})[word] = frequency
        self.check_cost(guess, groups, steps=steps)
    return
The problem occurs whether pyinstaller creates the package with --onedir or --onefile.
I'm running macOS Monterey 12.4 on an Intel processor.
I'm using Python 3.10.4, but the problem also occurs with 3.9. Pip-installed packages include pyinstaller 5.1, pyinstaller-hooks-contrib 2022.7, and tqdm 4.64.0.
I package the program in a virtualenv.
I found this in the documentation for the multiprocessing module:
On Unix using the spawn or forkserver start methods will also start a resource tracker process which tracks the unlinked named system resources (such as named semaphores or SharedMemory objects) created by processes of the program. When all processes have exited the resource tracker unlinks any remaining tracked object. Usually there should be none, but if a process was killed by a signal there may be some “leaked” resources. (Neither leaked semaphores nor shared memory segments will be automatically unlinked until the next reboot. This is problematic for both objects because the system allows only a limited number of named semaphores, and shared memory segments occupy some space in the main memory.)
I can't figure out how to debug the problem since it only occurs with the pyinstaller-packaged program and my IDE (vscode) doesn't help.
My program doesn't use the multiprocessing module.
The tqdm documentation doesn't mention issues like this, but tqdm does import the multiprocessing module to create locks (semaphores) on the tqdm class, which are then used by its instances.
There aren't any tqdm methods I can find for deleting those semaphore resources; I assume that is just supposed to happen automatically when the program ends.
I think the pyinstaller-packaged program may also use the multiprocessing module.
Perhaps pyinstaller is interfering with deletion of the tqdm semaphores somehow?
The packaged program works fine till it starts to exit and then gets restarted.
Does anyone know how I can get the packaged program to exit cleanly?
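For what it's worth, one suggestion I've seen for frozen apps that pull in multiprocessing (even indirectly) is to call multiprocessing.freeze_support() as the very first thing in the entry script, so that a relaunched helper process (such as the resource tracker) takes its special code path instead of re-running the program. I haven't confirmed this fixes the tqdm case; a minimal sketch of what I would try, where main() is just a placeholder for my program's real entry point:

import multiprocessing

if __name__ == "__main__":
    # In a frozen executable this should run before anything else so that a
    # relaunched child process doesn't fall through into the normal program.
    multiprocessing.freeze_support()
    main()  # placeholder for the program's actual entry point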
So I am using joblib to parallelize some code and I noticed that I couldn't print things when using it inside a jupyter notebook.
I tried doing the same example in ipython and it worked perfectly.
Here is a minimal (not) working example to write in a jupyter notebook cell:
from joblib import Parallel, delayed
Parallel(n_jobs=8)(delayed(print)(i) for i in range(10))
So I am getting the output as [None, None, None, None, None, None, None, None, None, None] but nothing is printed.
What I expect to see (print order could be random in reality):
0
1
2
3
4
5
6
7
8
9
[None, None, None, None, None, None, None, None, None, None]
Note:
You can see the prints in the logs of the notebook process. But I would like the prints to happen in the notebook, not the logs of the notebook process.
EDIT
I have opened a GitHub issue, but it has received minimal attention so far.
I think this is caused in part by the way Parallel spawns its child workers and by how Jupyter Notebook handles IO for those workers. When started without specifying a value for backend, Parallel will default to loky, which utilizes a pooling strategy that directly uses a fork-exec model to create the subprocesses.
If you start Notebook from a terminal using
$ jupyter-notebook
the regular stderr and stdout streams appear to remain attached to that terminal, while the notebook session will start in a new browser window. Running the posted code snippet in the notebook does produce the expected output, but it seems to go to stdout and ends up in the terminal (as hinted in the Note in the question). This further supports the suspicion that this behavior is caused by the interaction between loky and notebook, and the way the standard IO streams are handled by notebook for child processes.
This led me to this discussion on github (active within the past 2 weeks as of this posting) where the authors of notebook appear to be aware of this, but it would seem that there is no obvious and quick fix for the issue at the moment.
If you don't mind switching the backend that Parallel uses to spawn children, you can do so like this:
from joblib import Parallel, delayed
Parallel(n_jobs=8, backend='multiprocessing')(delayed(print)(i) for i in range(10))
With the multiprocessing backend, things work as expected; threading looks to work fine too. This may not be the solution you were hoping for, but hopefully it is sufficient while the notebook authors work on a proper solution.
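If you would rather not pass backend= at every call site, joblib also provides the parallel_backend context manager, which switches the backend for everything inside the with block; a small sketch of the same example:

from joblib import Parallel, delayed, parallel_backend

# everything inside the block uses the multiprocessing backend
with parallel_backend('multiprocessing'):
    Parallel(n_jobs=8)(delayed(print)(i) for i in range(10))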
I'll cross-post this to GitHub in case anyone there cares to add to this answer (I don't want to misstate anyone's intent or put words in people's mouths!).
Test Environment:
MacOS - Mojave (10.14)
Python - 3.7.3
pip3 - 19.3.1
Tested in 2 configurations. Confirmed to produce the expected output when using both multiprocessing and threading for the backend parameter. Packages installed using pip3.
Setup 1:
ipykernel 5.1.1
ipython 7.5.0
jupyter 1.0.0
jupyter-client 5.2.4
jupyter-console 6.0.0
jupyter-core 4.4.0
notebook 5.7.8
Setup 2:
ipykernel 5.1.4
ipython 7.12.0
jupyter 1.0.0
jupyter-client 5.3.4
jupyter-console 6.1.0
jupyter-core 4.6.2
notebook 6.0.3
I also was successful using the same versions as 'Setup 2' but with the notebook package version downgraded to 6.0.2.
Note:
This approach works inconsistently on Windows. Different combinations of software versions yield different results. Doing the most intuitive thing (upgrading everything to the latest version) does not guarantee it will work.
In Z4-tier's git link, scottgigante's method works on Windows, but with the opposite of the mentioned results: in a Jupyter notebook, the "multiprocessing" backend hangs forever, but the default loky works well (Python 3.8.5 and notebook 6.1.1):
from joblib import Parallel, delayed
import sys

def g(x):
    stream = getattr(sys, "stdout")
    print("{}".format(x), file=stream)
    stream.flush()
    return x

Parallel(n_jobs=2)(delayed(g)(x**2) for x in range(5))
executed in 91ms, finished 11:17:25 2021-05-13
[0, 1, 4, 9, 16]
A simpler method is to use an identity function in delayed (with numpy imported as np):
import numpy as np

Parallel(n_jobs=2)(delayed(lambda y: y)([np.log(x), np.sin(x)]) for x in range(5))
executed in 151ms, finished 09:34:18 2021-05-17
[[-inf, 0.0],
[0.0, 0.8414709848078965],
[0.6931471805599453, 0.9092974268256817],
[1.0986122886681098, 0.1411200080598672],
[1.3862943611198906, -0.7568024953079282]]
Or like this:
Parallel(n_jobs=2)(delayed(lambda y: [np.log(y), np.sin(y)])(x) for x in range(5))
executed in 589ms, finished 09:44:57 2021-05-17
[[-inf, 0.0],
[0.0, 0.8414709848078965],
[0.6931471805599453, 0.9092974268256817],
[1.0986122886681098, 0.1411200080598672],
[1.3862943611198906, -0.7568024953079282]]
This problem is fixed in the latest version of ipykernel.
To solve the issue just do pip install --upgrade ipykernel.
From what I understand, any version above 6 will do, as mentioned in this Github comment by one of the maintainers.
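A quick way to confirm which version the running kernel actually picked up (restart the kernel after upgrading):

import ipykernel
print(ipykernel.__version__)  # should report 6.x or newer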
I have different datasets of different sizes and I want to measure the memory usage of sorting algorithms implemented in Python against these data sets. For each sorting algorithm, I want to know the memory usage on each data set so that I can plot a graph.
I tried using psutil.virtual_memory() but it is not giving the expected results.
I tried to install heapy but anaconda is telling me that no such library exists.
I would recommend checking out memory_profiler, a profiler that can output the line-by-line memory consumption of a Python function. It can be installed by running pip install -U memory_profiler and it is pretty simple to use: add a @profile decorator above the function you would like to monitor and run python -m memory_profiler YourScript.py to start monitoring. memory_profiler also comes with an auto-plotting executable, mprof. More information on mprof can be found here and here.
You can also write the memory_profiler output to a file using:
fp = open('memory_profiler.log', 'w+')
@profile(stream=fp)
And by importing the decorator in the script itself (from memory_profiler import profile), you can avoid calling python -m memory_profiler altogether.
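Putting those pieces together, a minimal sketch might look like this (sort_descending and the log file name are just placeholders for your own sorting function and data):

from memory_profiler import profile

fp = open('memory_profiler.log', 'w+')

@profile(stream=fp)
def sort_descending(data):
    # sort a copy so the input list is left untouched
    return sorted(data, reverse=True)

if __name__ == '__main__':
    sort_descending(list(range(1000000)))

Running the script with plain python then writes the line-by-line report to memory_profiler.log.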
If that doesn't quite fit the bill, I'd recommend checking out this helpful little blog.
Using mpi4py, I'm running a python program which launches multiple fortran processes in parallel, starting from a SLURM script using (for example):
mpirun -n 4 python myprog.py
but I have noticed that myprog.py takes longer to run the higher the number of tasks requested, e.g. running myprog.py (the following code shows only the MPI part of the program):
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

# the root rank holds the full parameter array; every rank receives 4 floats
data = None
if rank == 0:
    data = params

recvbuf = np.empty(4, dtype=np.float64)
comm.Scatter(data, recvbuf, root=0)
py_task(int(recvbuf[0]), recvbuf[1], recvbuf[2], int(recvbuf[3]))
Running with mpirun -n 1 ... on a single recvbuf array takes about 3 minutes, whilst running on four recvbuf arrays (expected to run in parallel) on four processors using mpirun -n 4 ... takes about 5 minutes. However, I would expect the run times to be approximately equal for the single- and four-processor cases.
py_task is effectively a python wrapper to launch a fortran program using:
subprocess.check_call(cmd)
There seems to be some interaction between subprocess.check_call(cmd) and the mpi4py package that is stopping the code from properly operating in parallel.
I've looked up this issue but can't seem to find anything that helps. Are there any fixes for this issue, detailed descriptions of what's going on here, or recommendations on how to isolate the cause of the bottleneck in this code?
Additional note:
This pipeline has been adapted to mpi4py from joblib's Parallel, where there were no previous issues with subprocess.check_call() running in parallel, which is why I suspect the issue is linked to the interaction between subprocess and mpi4py.
The slowdown was initially fixed by adding in:
export SLURM_CPU_BIND=none
to the slurm script that was launching the jobs.
Whilst the above did provide a temporary fix, the issue was actually much deeper and I will provide a very basic description of it here.
1) I uninstalled the mpi4py I had installed with conda, then reinstalled it with Intel MPI loaded (the recommended MPI version for our computing cluster). In the SLURM script, I then changed the launching of the python program to:
srun python my_prog.py .
and removed the export... line above, and the slowdown disappeared.
2) Another slowdown was found when launching more than 40 tasks at once. This was due to the following:
Each time the fortran-based subprocess gets launched, there is a filesystem cost for requesting the initial resources (e.g. supplying a file as an argument to the program). In my case a large number of tasks were being launched simultaneously and each file could be ~500 MB, which probably exceeded the IO capabilities of the cluster filesystem. This introduced a large overhead into the program from the slowdown of launching each subprocess.
The previous joblib implementation of the parallelisation only used a maximum of 24 cores at a time, so there was no significant bottleneck in the requests to the filesystem, which is why no performance issue was found previously.
For 2), I found the best solution was to significantly refactor my code to minimise the number of subprocesses launched. A very simple fix, but one I hadn't been aware of before finding out about the bottlenecks in resource requests on filesystems.
(Finally, I'll also add that using the subprocess module within mpi4py is generally not recommended in the discussions I found online, with the multiprocessing module preferred for single-node usage.)
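For completeness, a very rough single-node sketch of what that multiprocessing alternative could look like; my_fortran_prog and the input file names are just placeholders:

import subprocess
from multiprocessing import Pool

def run_fortran(input_file):
    # each worker launches one instance of the fortran executable
    subprocess.check_call(["./my_fortran_prog", input_file])
    return input_file

if __name__ == "__main__":
    inputs = ["input_{}.dat".format(i) for i in range(4)]
    with Pool(processes=4) as pool:
        pool.map(run_fortran, inputs)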
This issue may be related to https://github.com/ipython/ipyparallel/issues/207, which is also not marked as solved yet.
I also opened this issue here https://github.com/ipython/ipyparallel/issues/286
I want to execute multiple tasks in parallel using Python and ipyparallel in a jupyter notebook, with 4 local engines started by executing ipcluster start in a local console.
Although one can also use a DirectView, I use a LoadBalancedView to map a set of tasks. Each task takes around 0.2 seconds (this can vary though) and each task does a MySQL query where it loads some data and then processes it.
Working with ~45000 tasks is fine; however, memory usage grows very high. This is a problem because I want to run another experiment with over 660000 tasks, which I can no longer run because it exceeds my 16 GB memory limit and swapping to my local drive starts. When using the DirectView, memory usage stays relatively small and never fills up, but I actually need the LoadBalancedView.
Even when running a minimal working example without the database query, this happens (see below).
I am not very familiar with the ipyparallel library, but I've read something about logging and caching that the ipcontroller does which may cause this. I am still not sure if it is a bug or if I can change some settings to avoid my problem.
Running a MWE
For my Python 3.5.3 environment running on Windows 10 I use the following (recent) packages:
ipython 6.1.0
ipython_genutils 6.1.0
ipyparallel 6.0.2
jupyter 1.0.0
jupyter_client 4.4.0
jupyter_console 5.0.0
jupyter_core 4.2.0
I would like the following example to work for LoadBalancedView without the immense memory growth (if possible at all):
Run ipcluster start in a console
Run a jupyter notebook with the following three cells:
<1st cell>
import ipyparallel as ipp
rc = ipp.Client()
lview = rc.load_balanced_view()
<2nd cell>
%%px --local
import time
<3rd cell>
def sleep_here(i):
    time.sleep(0.2)
    return 42

amr = lview.map_async(sleep_here, range(660000))
amr.wait_interactive()
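For comparison, the DirectView variant I mentioned (which does not show the memory growth) only differs in the view object used:

dview = rc[:]  # DirectView over all engines instead of the load-balanced view
amr = dview.map_async(sleep_here, range(660000))
amr.wait_interactive()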