Exit code 139 Python [duplicate]

I'm trying to execute a Python script, but I am getting the following error:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
I'm using Python 3.5.2 on Linux Mint 18.1 Serena.
Can someone tell me why this happens, and how I can solve it?

The SIGSEGV signal indicates a "segmentation violation" or a "segfault". More or less, this equates to a read or write of a memory address that's not mapped in the process.
This indicates a bug in your program. In a Python program, this is either a bug in the interpreter or in an extension module being used (and the latter is the most common cause).
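If you have never seen one, a minimal sketch that crashes the interpreter this way uses ctypes to read unmapped memory (do not run this in a session you care about):

import ctypes

# Reading address 0 (the NULL page) touches memory that is not mapped,
# so the process dies with SIGSEGV instead of raising a Python exception.
ctypes.string_at(0)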
To fix the problem, you have several options. One option is to produce a minimal, self-contained, complete example which replicates the problem and then submit it as a bug report to the maintainers of the extension module it uses.
Another option is to try to track down the cause yourself. gdb is a valuable tool in such an endeavor, as is a debug build of Python and all of the extension modules in use.
After you have gdb installed, you can use it to run your Python program:
gdb --args python <more args if you want>
And then use gdb commands to track down the problem. If you use run then your program will run until it would have crashed and you will have a chance to inspect the state using other gdb commands.
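A typical session looks something like this (your_script.py is a placeholder):

gdb --args python your_script.py
(gdb) run          # runs until the segfault occurs
(gdb) bt           # then print the C-level backtrace of the crash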

Another possible cause (which I encountered today) is that you're trying to read from or write to a file that is already open. In this case, simply closing the file and rerunning the script solved the issue.

After some time I discovered that I was running a new TensorFlow version that fails on older computers. I solved the problem by downgrading TensorFlow to 1.4.
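Assuming a pip-managed install, the downgrade is a one-liner:

pip install tensorflow==1.4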

When I encountered this problem, I realized there were some memory issues. I rebooted my PC and that solved it.

This can also happen if your C extension (e.g. written in Cython) tries to access a variable out of bounds:

ctypedef struct ReturnRows:
    double[10] your_value

cdef ReturnRows s_ReturnRows  # allocate the struct
s_ReturnRows.your_value = [0] * 12  # writes 12 values into a 10-element array
will fail with
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

For me, I was using the OpenCV library to apply SIFT.
In my code, I replaced cv2.SIFT() with cv2.SIFT_create() and the problem was gone.
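In context, a minimal sketch (image.jpg is a placeholder; assumes OpenCV 4.4+, where SIFT is back in the main package):

import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
# sift = cv2.SIFT()              # old 2.x API -- crashes on modern builds
sift = cv2.SIFT_create()         # current factory function
keypoints, descriptors = sift.detectAndCompute(img, None)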

Deleting the Python interpreter and the 'venv' folder solved my error.

I got this error in PHP, while running PHPUnit. The reason was a circular dependency.

I received the same error when trying to connect to an Oracle DB using the pyodbc module:
connection = pyodbc.connect()
The error occurred on the following occasions:

- The DB connection had been opened multiple times in the same Python file
- A breakpoint was reached in debug mode while the connection to the DB was open

The error message could be avoided with the following approaches:

- Open the DB only once and reuse the connection at all needed places
- Properly close the DB connection after using it
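A minimal sketch of both points combined (the connection string is a placeholder):

import pyodbc

connection = pyodbc.connect("DSN=my_oracle_dsn;UID=user;PWD=secret")  # open once
try:
    cursor = connection.cursor()
    cursor.execute("SELECT 1 FROM dual")  # reuse the single connection everywhere
    print(cursor.fetchone())
finally:
    connection.close()  # always close the connection when done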
Hope that will help anyone!

11 : SIGSEGV - This signal arises when a memory segment is illegally accessed.
There is a module named signal in Python through which you can handle this kind of OS signal.
If you want to ignore this SIGSEGV signal, you can do this:
signal.signal(signal.SIGSEGV, signal.SIG_IGN)
However, ignoring the signal can cause some inappropriate behaviour in your code, so it is better to handle the SIGSEGV signal with your own handler, like this:

import signal

def SIGSEGV_signal_arises(signalNum, stack):
    print(f"{signalNum} : SIGSEGV arises")
    # Your code

signal.signal(signal.SIGSEGV, SIGSEGV_signal_arises)

I encountered this problem when I was trying to run my code on an external GPU which was disconnected. I had set os.environ['PYOPENCL_CTX'] = '2', where GPU 2 was not connected. So I just needed to change the code to os.environ['PYOPENCL_CTX'] = '1'.

For me these three lines of code already reproduced the error, no matter how much free memory was available:
import numpy as np
from sklearn.cluster import KMeans
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=1, random_state=0).fit(X)
I could solve the issue by removing and reinstalling the scikit-learn package. A very similar solution to this.
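The reinstall itself, assuming a pip-managed environment, was just:

pip uninstall scikit-learn
pip install scikit-learn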

This can also occur when compounding process pools using concurrent.futures, for example calling .map inside another .map call.
This can be solved by removing one of the .map calls, as in the sketch below.
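A sketch of the safe shape (names are illustrative, not from the original code):

from concurrent.futures import ProcessPoolExecutor

def inner(x):
    return x * x

def outer(chunk):
    # Nesting a second executor.map here is what triggered the crash;
    # doing the inner work sequentially avoids it.
    return [inner(x) for x in chunk]

if __name__ == "__main__":
    chunks = [[1, 2], [3, 4]]
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(outer, chunks)))  # [[1, 4], [9, 16]]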

I had the same issue working with kmeans from scikit-learn.
Upgrading from scikit-learn 1.0 to 1.0.2 solved it for me.

This issue is often caused by incompatible libraries in your environment. In my case, it was the pyspark library.

In my case, reverting my most recent conda installs fixed the situation.

I got this error when importing monai. It was solved after I created a new conda environment. Possible reasons I could imagine were either a conflict between different packages, or that my environment name was the same as the name of the package I wanted to import (monai).

Found on another page.
Interpreter: Python 3.8

cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

This solved the issue for me.
I was getting SIGSEGV with 2.7, upgraded my Python to 3.8, then got a different error with OpenCV, and found the answer on OpenCV 4.0.0 SystemError: <class 'cv2.CascadeClassifier'> returned a result with an error set.
But eventually one line of code fixed it.

Related

Can't import custom modules in python3.9 when running in wsl2

So I am trying to write some Python code that will do two things, which seem to be mutually exclusive on my machine. My PC's host operating system is Windows and I run Kali Linux in WSL2 when I need to test my code on Linux. My code's main function creates two separate multiprocessing.Process objects, assigns each a different target, starts them both one after the other, and then calls for them both to be joined. The plan is to allow each to run a simple server application simultaneously on different ports. This does not work when running python3 in PowerShell, as it seems to require access to os.fork(), which doesn't work in that environment.

When I found this out I pivoted to running in WSL2, which worked fantastically, for a time. After a while of experimenting with some ideas I decided to take some of my code and spin it off into its own file, which I placed in its own 'Libs' folder. WSL2, however, was unable to import this new file, instead giving me the exception ModuleNotFoundError: No module named 'NetStuff'. I originally had added:
sys.path.append('./Libs')
as has worked for me in the past. However, when I found that WSL2 was unable to find my module, I printed out sys.path, and it revealed that rather than appending my $current_working_directory/Libs as I intended, I was just appending the literal string, which wasn't useful. I then decided to try:
sys.path.append(str(pathlib.Path().resolve()) + '/Libs')
which at the bare minimum shows up as I would expect in sys.path. This, however, still didn't work; Python was unable to find my module and would unceremoniously crash every time. This led me to try something else: I ran my code in python3 under PowerShell again, which had no issue importing my module. It did still crash due to lacking os.fork(), but the import gave no issues. Confused and annoyed, I opened my code in IDLE 3.9 which, for some inexplicable reason, was able to import the file, and seemingly use os.fork(). The only major issue with running in IDLE is that it is seemingly incapable of understanding ANSI colour escape codes. Given that the goal is to run my code in bash, and ideally also PowerShell, I am not satisfied with this as a solution. I returned to trying to fix the issue in WSL2 by adding my module to /home/Noah/bin and appending this directory to sys.path, but this has still not so much as given me a new symptom.
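For reference, the variant that anchors the path to the script's own file rather than the working directory would be:

import sys
import pathlib

# resolve Libs relative to this script's location, not the CWD
sys.path.append(str(pathlib.Path(__file__).resolve().parent / 'Libs'))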
I am utterly at a loss at this point. None of the fixes I know offhand are working, and neither are the new ones I've found online. I can't tell if I'm just missing something fundamental about Python or if I'm running into a bug; if it's the latter, I can't seem to find other people with the same issue. As a result of my confusion and frustration I am appealing to you, kind users of Stack Overflow.
The following is the snippet that is causing me problems in WSL2:
path0 = ('/home/Noah/bin')
path1 = (str(pathlib.Path().resolve()) + '/Libs')
sys.path.append(path0)
sys.path.append(path1)
print(sys.path)
import NetStuff
The following is output of print(sys.path) in WSL2:
['/mnt/c/Users/Noah/VSCodeRepos/Python/BlackPack', '/usr/lib/python39.zip', '/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload', '/home/noah/.local/lib/python3.9/site-packages', '/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages', '/home/Noah/bin', '/mnt/c/Users/Noah/VSCodeRepos/Python/BlackPack/Libs']
The following is the error being thrown by WSL2:
Traceback (most recent call last):
  File "/mnt/c/Users/Noah/VSCodeRepos/Python/BlackPack/BlackPackServer.py", line 21, in <module>
    import NetStuff
ModuleNotFoundError: No module named 'NetStuff'
I am specifically hoping to fix the issue with WSL2 at the moment as I am fairly certain that getting the code to run on PowerShell is merely going to require rewriting my code so that it doesn't rely on os.fork(). Thank you for reading my problem, and if I left out any information that you would like to see just tell me and I'll add it in an edit!
Edit: I instantly realized that I should specify that my host machine is running Windows 10.

scanpy neighbors function: LLVM ERROR: Symbol not found: __svml_sqrtf8

Whenever I use sc.pp.neighbors(adata) I get this message (without any error):
I have:
scanpy==1.8.1
pynndescent==0.5.4
numba==0.54.0
umap-learn==0.5.1
anndata==0.7.6
My dataset contains only ~20,000 cells so it's quite weird that my kernel dies using this relatively small dataset.
I even tried to use scanpy's bbknn function as an alternative, and my kernel died as well.
I also found the same problem reported as an issue on GitHub: https://github.com/theislab/scanpy/issues/1567 but it has no solution yet.
I tried to run the code in cmd instead of jupyter-notebook and got the following error:
LLVM ERROR: Symbol not found: __svml_sqrtf8
What should I do in order to properly run this function?
The above comment by @Iguananaut worked for me:
If you can reproduce the problem outside the Jupyter Notebook, then it's not really a problem relative to the use of Jupyter, and that tag can be avoided. The problem is somewhere else. The issue is likely related to numba, and possibly an incompatibility between a pre-compiled numba and other libraries installed on your system. I wonder if it would help if you set the environment variable NUMBA_DISABLE_INTEL_SVML=1
I created a new environment variable as below:
variable name: NUMBA_DISABLE_INTEL_SVML
variable value: 1
This then allowed me to run umap. Before, I was seeing the same error in a terminal window:
LLVM ERROR: Symbol not found: __svml_sqrtf8
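If you prefer to set it from Python, a sketch (it must run before numba is imported, directly or via scanpy/umap):

import os
os.environ["NUMBA_DISABLE_INTEL_SVML"] = "1"  # numba reads this at import time

import scanpy as sc  # numba is pulled in transitively from here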

BrokenProcessPool message when using Parallel from Joblib 0.16.0

I've got a situation when using the parallel function of joblib (v0.16.0).
I have these lines in my code:

from joblib import Parallel, delayed, parallel_backend

with parallel_backend('loky', n_jobs=8):
    lineas = Parallel(verbose=10)(
        delayed(apply_prior_ind_def)(g)
        for g in df1_merge.groupby(['S1EMP', 'CONTRA1'])
    )
The problem here is that sometimes the execution fails under the following message:
BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
There is no apparent traceback to this issue: without making any changes to the code, just restarting the terminal can sometimes allow the execution to restart and finish correctly.
Hope someone has had a similar issue when using Parallel and can shed some light on it.
Many thanks in advance.
Using Spyder3 and Python 3.6.5
What has worked for me is to have two terminals open at the same time. The first one is the one that fails to launch, but the second one is able to run the whole parallel code.

Error in prediction using sknn.mlp

I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. I built a deep learning model using the sknn.mlp package and have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my Windows packages.
'NoneType' object is not callable
I verified the input data. It is a numpy.array and it does not have null values. Its dimensions are the same as in training and all attributes are the same. Not sure what it can be.
I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors due to one module still not being updated for the OS, frequently when another related module gets an update. I recall something happened to me with a django application which required somebody more familiar with Windows to fix it for me.
Maybe you could try with an environment using older versions of your modules until you find the culprit.
I finally solved the problem on Windows. Here is the solution in case you face it.
The Theano package was faulty. I installed the latest version from GitHub and then it threw another error, as below:
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment.
To solve this, I created a user environment variable named MKL_THREADING_LAYER and set its value to GNU. Restarted the kernel and it was working.
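Setting it from Python before the import should work as well; a sketch:

import os
os.environ["MKL_THREADING_LAYER"] = "GNU"  # must be set before theano is imported

import theano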
Hope it helps!

Pandas import error when debugging using PTVS

I am dealing with a very silly error, and wondering if any of you have the same problem. When I try to import pandas using import pandas as pd I get an error in copy.py. I debugged into the pandas imports, and I found that the copy error is thrown when pandas tries to import this: from pandas.io.html import read_html
The exception that is thrown is:
un(shallow)copyable object of type <type 'Element'>
I do not get this error if I run the code straight up without the PTVS debugger. I am using the Python 2.7 interpreter, pandas version 0.12 which came with the Python(x,y) 2.7.5.1 distro, and MS Visual Studio 2012.
Any help would be appreciated. Thanks!
This is a limitation of the way PTVS detects unhandled exceptions - it can't see the except-block that's going to catch this exception because it is in the code that is eval'd from a string. See the bug in the tracker for more details.
As a workaround, uncheck "Debug standard library" in Tools -> Options -> Python Tools -> Debugging - this should cause the exception to be ignored.
I had the same problem for a while, disabling "Debug standard library" didn't help, then I downloaded the latest version of Python (3.4), pip installed the libs (for example NLTK), and it worked!
I had a system crash while developing a PTVS app and then ran into this problem, re-running the Intellisense 'refresh DB' cleared it.
I faced the same issue, but just hitting 'Continue' will cause it to be ignored and the code execution will proceed in the usual way.
Or you could uncheck the "Break when this exception type is user-handled" option that comes up in the dialog box displaying the error.
