How can I stop numpy_1.8 from masking numpy_1.10? - python

I had thought I was using the most recent version of numpy (1.10). At least, when I run pip list I see numpy (1.10.4). However, when I get into the Python interpreter and type
import numpy
numpy.__version__
I see
'1.8.2'
I expect that the 1.8.2 got installed sometime after the 1.10 version, because I've recently installed some new packages, and now when I run some code that used to work, I get this error:
RuntimeError: module compiled against API version a but this version of numpy is 9
In OpenCV 2.4.8: module compiled against API version 9 the accepted answer mentioned that the numpy team refers to version 1.8 as numpy version 9. So, I think that I was originally using numpy_1.10, and somehow got numpy_1.8 installed.
My first question is How did this happen and how can I guard against it?
I also want to know how I can stop numpy_1.8 from masking 1.10. My initial thought was to remove numpy_1.8 using apt-get, but that would have removed many other packages that depend on 1.8.
I'd be tempted to just use rm to get rid of the 1.8 version, but am worried that those other packages would be affected.
My second thought is to change sys.path to make certain the 1.10 version is seen before the 1.8 version. So far, I'm not using PYTHONPATH. Is there a way to change sys.path without using PYTHONPATH? Is this a reasonable approach to take?

The simplest thing to do is to remove your NumPy 1.8 installation. Look at numpy.__file__ to find where your 1.8 installation is, then delete the directory.
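As a quick sanity check before deleting anything, you can ask the interpreter which copy of the package it would import, without actually importing it. A stdlib-only sketch ("numpy" here is just the package being diagnosed):

```python
import importlib.util

# Find which copy of a package the interpreter will actually import.
spec = importlib.util.find_spec("numpy")
if spec is not None:
    print(spec.origin)  # filesystem path of the module that wins on sys.path
else:
    print("numpy is not installed in this interpreter")
```

If the printed path points at the 1.8 install (e.g. a system dist-packages directory) while pip reports 1.10, you have confirmed the masking.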
Alternatively, as you mentioned, if you don't want to delete 1.8 you could change your path. Something like this ought to do it:
import sys
sys.path.insert(3, '<path_to_your_NumPy_1.10_install>')
I've inserted at position 3 so that you keep things like '' at the top, but you could modify this as needed.

Related

HDBSCAN: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

I am trying to initialize HDBSCAN for clustering in JupyterLab. I use Python 3.7.6.
import numpy as np
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import hdbscan
The same error always appears (see the headline), and so far I have not been able to work out exactly where it comes from.
I have looked at several posts for solutions, but none of them has helped yet.
For example:
uninstalled and reinstalled numpy.
installed numpy >= 1.20.0
tried lines like pip install package --no-cache-dir --no-binary :all:
tried following package version combination: hdbscan=0.8.19, matplotlib=3.2.2, numpy=1.15.4, pandas=0.23.4, scikit-learn=0.20.1, scipy=1.1.0, tensorflow=1.13.1.
I have also tried to install packages like tensorboard, but it did not help. Everything is installed via the terminal and with pip.
I am starting to think that the problem might run deeper, but maybe I have overlooked something important.
Can somebody help me to find the bug, please?
Best regards
Philipp
I guess you've probably seen this very long HDBSCAN GitHub issue where there still doesn't seem to be a clear solution. Unfortunately it seems to affect different systems in weird ways and there is a huge list of possible solutions and things that worked for other people (personally, just reinstalling numpy worked for me when I had a similar problem last week.)
The fact that you can try so many things and still have it not work seems suspicious. Maybe something else about your Python install or the way you're trying them is affecting the solutions? For instance, is JupyterLab definitely using the same Python environment that you're trying these solutions on? (You could test this by uninstalling HDBSCAN and seeing if the error changes instead to "package not found.")
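A minimal stdlib-only check for this: run the following both in a terminal python session and in a JupyterLab cell, and compare the output. If the two paths differ, pip and JupyterLab are using different environments.

```python
import sys

# Identify which interpreter and environment this session is running in.
print(sys.executable)  # path of the running Python binary
print(sys.prefix)      # root directory of the active environment
```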
Other than the many solutions in the GitHub issue (which it sounds like you've already tried), I really don't think there's much else you can try other than freshly reinstalling Python. Something about NumPy 1.20 and a change to the C API is causing this issue and it could be that something is lurking in your install every time you try these solutions.
Alternatively, you could make a new Python install/environment with a tool like pyenv or anaconda so that it doesn't break your existing install, and you can try and install just the bare minimum on this new install (i.e. just HDBSCAN.)
Upgrading the numpy library solved the problem.
My numpy version is 1.22.0 and sklearn version is 0.24.1.
you should also look here: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

How to get rid of 3rd collections.abc DeprecationWarning

My application is flooded with warnings from a third-party package:
transformers/modeling_deberta.py:18: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Sequence
How can I suppress those warnings?
I've tried:
export PYTHONWARNINGS="ignore::DeprecationWarning"
warnings.filterwarnings(action="ignore")
warnings.filterwarnings(action="ignore", category=DeprecationWarning)
warnings.filterwarnings(action="ignore", category=DeprecationWarning, module=r".*transformers.*")
warnings.filterwarnings(action="ignore", category=DeprecationWarning, module=r".*collections.*")
warnings.filterwarnings(action="ignore", message=r".*collections.abc.*")
Update
The following options are not feasible:
Remove the third-party package that generates those warnings. It is irreplaceable.
Downgrade to a Python version prior to 3.3.
Maybe I should just wait for the third-party package to be updated. I just wonder if there is any other option to suppress specific third-party warnings in Python.
The warning tells you that you are importing names from a location that was correct prior to Python 3.3 and will stop working entirely in Python 3.9. You are using a Python version between 3.3 and 3.9, so this still works for now, but the code needs to be refactored to import ABCs from collections.abc instead of from collections. Unless that import is fixed, you will be stuck on Python versions prior to 3.9, which limits your options, rules out features added in later versions, and will leave an increasing number of libraries incompatible with your project because they are too modern for it.
You can get rid of the warnings by downgrading your project to a Python version prior to 3.3, but that is a direction you should strive to avoid. The best solution is to refactor your project to comply with modern Python versions, and if you use packages that prevent you from doing so, you may want to upgrade those packages. If no upgrade solves the issue, it is worth weighing which costs more: the labor of reimplementing their functionality in a modern way, or the technological shortfall of staying stuck on old Python versions.
I found my answer from here
Solution: make sure the following code runs before the third-party package is imported.
If multiprocessing is used, the code has to run in each process.
import warnings

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    from collections import Sequence

The sklearn.* module is deprecated in version 0.22 and will be removed in version 0.24

I am migrating a piece of software from Python 2.7 to Python 3.
One problem that arises is:
The sklearn.neighbors.kde module is deprecated in version 0.22 and
will be removed in version 0.24. The corresponding classes / functions
should instead be imported from sklearn.neighbors. Anything that
cannot be imported from sklearn.neighbors is now part of the private
API.
I am not sure which line causes this, nor whether it is an error or a warning, or what the implications are.
On Python 2.7 everything works fine.
How do I get rid of this?
It will work until you update your scikit-learn version; then the Kernel Density module will no longer run. You still have time to search for similar modules before you update your version.
But you can also set up different environments with different versions. Thus, if you need this module just start an environment and don't upgrade your sklearn version in this environment.
It's just a warning, for now -- until you upgrade sklearn to version 0.24. Then your code will need to be modified before it works. It's giving you a heads-up about this, so you can fix your code ahead of time. The modifications described below should work with your current version; you don't need to wait to upgrade before changing your code (at least, that's how these deprecation warnings usually work).
The corresponding classes / functions should instead be imported from sklearn.neighbors.
If I read this message correctly, it's saying that if you're using a function like sklearn.neighbors.kde.some_function() in your code now, you need to change it to sklearn.neighbors.some_function().
Anything that cannot be imported from sklearn.neighbors is now part of the private API.
This seems to be saying that there may be some functions that will no longer be available to you, even using the modification above.
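If you want to pin down which line triggers the notice, one stdlib trick is to temporarily escalate the warning category to an error, so the traceback points at the offending import. A sketch (if your scikit-learn version emits the notice as a FutureWarning rather than a DeprecationWarning, pass that category instead; the warnings.warn call below just stands in for the library):

```python
import warnings

# Escalate DeprecationWarnings to exceptions so the traceback shows the
# exact line that triggers one (do this temporarily, at program start).
warnings.simplefilter("error", DeprecationWarning)

try:
    warnings.warn("demo deprecation", DeprecationWarning)  # stand-in trigger
    raised = False
except DeprecationWarning:
    raised = True
print(raised)
```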

Segmentation Fault: 11 on OSX python

I'm getting an intermittent segfault in Python, which really shouldn't happen. It's a heisenbug, so I haven't figured out exactly what's causing it.
I've searched and found that there was a known problem with an older version of Python, but I'm using 2.7.10 (in a virtualenv, in case that matters).
I'm using pandas (0.18.0), scipy (0.17.0) and numpy (1.11.0), in case the problem lies in one of them...
It looks like How to generate core dumps in Mac OS X? might be the best way to get a stack trace. The report appears in ~/Library/Logs/DiagnosticReports; I'm not sure if it's useful, and it's not a core file per se that can be loaded into a debugger, but it's something...
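Besides core dumps, the stdlib faulthandler module can print the Python-level traceback when a segfault happens, which is often enough to localize a crash in a C extension such as numpy, scipy, or pandas (it is built into Python 3.3+; for 2.7 there is a backport package of the same name on PyPI):

```python
import faulthandler

# Enable early in program startup so a hard crash (e.g. a segfault inside a
# C extension) dumps the Python traceback to stderr before the process dies.
faulthandler.enable()
print(faulthandler.is_enabled())
```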

Python reportlab pyRXP on OS X

So I am trying to install pyRXP on my os x machine (using anaconda).
If I use pip, import pyRXP doesn't work.
I have also tried to install it by cloning https://bitbucket.org/rptlab/pyrxp
and running the setup.py file.
It claims I need to compile something if I am not on Windows, but I can't work out what, or where that source is.
Sorry, I am kind of new to Python, and this has become very confusing; no amount of googling is helping.
Cheers for any help.
In newer versions of pyRXP, the module seems to have been renamed to pyRXPU, apparently to reflect Unicode support.
It seems that pyRXP (which supported 8 bit characters) has been removed and only pyRXPU (supporting 16 bit unicode characters) is in the package now, although the documentation still suggests you can import pyRXP! I've submitted an issue about this.
The only solution, if using the latest version of pyRXP (as suggested by the previous answer), is to instead do:
import pyRXPU
Update: The documentation, and README file in the repository, have now been fixed to only reference import pyRXPU and no longer use import pyRXP.
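If your code has to run against both old and new releases, a small compatibility shim is an option (a sketch; the fallback chain just tries both module names and records whether either is installed):

```python
# Prefer the Unicode build shipped by current releases; fall back to the
# old name on releases that still provide it.
try:
    import pyRXPU as pyRXP
except ImportError:
    try:
        import pyRXP  # older releases only
    except ImportError:
        pyRXP = None  # not installed at all
```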
