Any idea how to deal with mglearn import error? - python

I am a new data science bootcamp student. I recently bought the book "Introduction to Machine Learning with Python". However, the book relies heavily on the mglearn library, and when I try to import it I get an error (you can see it below), so I cannot run the examples from the book. Is there any way to tackle this issue?
Many thanks in advance!
ImportError Traceback (most recent call last)
Cell In [3], line 1
----> 1 import mglearn
File c:\Users\murad\AppData\Local\Programs\Python\Python310\lib\site-packages\mglearn\__init__.py:1
----> 1 from . import plots
2 from . import tools
3 from .plots import cm3, cm2
File c:\Users\murad\AppData\Local\Programs\Python\Python310\lib\site-packages\mglearn\plots.py:5
3 from .plot_animal_tree import plot_animal_tree
4 from .plot_rbf_svm_parameters import plot_svm
----> 5 from .plot_knn_regression import plot_knn_regression
6 from .plot_knn_classification import plot_knn_classification
7 from .plot_2d_separator import plot_2d_classification, plot_2d_separator
File c:\Users\murad\AppData\Local\Programs\Python\Python310\lib\site-packages\mglearn\plot_knn_regression.py:7
4 from sklearn.neighbors import KNeighborsRegressor
5 from sklearn.metrics import euclidean_distances
----> 7 from .datasets import make_wave
8 from .plot_helpers import cm3
11 def plot_knn_regression(n_neighbors=1):
File c:\Users\murad\AppData\Local\Programs\Python\Python310\lib\site-packages\mglearn\datasets.py:5
3 import os
4 from scipy import signal
----> 5 from sklearn.datasets import load_boston
6 from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
7 from .make_blobs import make_blobs
File c:\Users\murad\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\datasets\__init__.py:156, in __getattr__(name)
105 if name == "load_boston":
106 msg = textwrap.dedent(
107 """
108 `load_boston` has been removed from scikit-learn since version 1.2.
(...)
154 """
155 )
--> 156 raise ImportError(msg)
157 try:
158 return globals()[name]
ImportError:
`load_boston` has been removed from scikit-learn since version 1.2.
The Boston housing prices dataset has an ethical problem: as
investigated in [1], the authors of this dataset engineered a
non-invertible variable "B" assuming that racial self-segregation had a
positive impact on house prices [2]. Furthermore the goal of the
research that led to the creation of this dataset was to study the
impact of air quality but it did not give adequate demonstration of the
validity of this assumption.
The scikit-learn maintainers therefore strongly discourage the use of
...
[2] Harrison Jr, David, and Daniel L. Rubinfeld.
"Hedonic housing prices and the demand for clean air."
Journal of environmental economics and management 5.1 (1978): 81-102.
<https://www.researchgate.net/publication/4974606_Hedonic_housing_prices_and_the_demand_for_clean_air>
I tried to find an answer online but I could not find anything.

The mglearn package seems to rely on deprecated and removed features of its dependencies. Hopefully the authors will upgrade their package, or at least better specify its dependencies. You can read about the several problems users are having on the project's issues page.
To successfully import mglearn, here's what I did.
In a dedicated directory I created a virtual environment with:
python -m venv _venv
... and then activated it:
. _venv/bin/activate
I then upgraded/installed the packages that at least allow import mglearn to complete without throwing an error.
python -m pip install --upgrade --upgrade-strategy eager pip setuptools ipython mglearn "scikit-learn==1.0.2" "joblib<0.12"
If you need to install more packages, using --upgrade-strategy eager for those packages may not be the best choice. YMMV.
There is probably a conda equivalent of the above procedure, but I'm not familiar with it.
As you use the mglearn features, it's possible there may be further dependency problems that need to be resolved. Good luck!
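For reference, a quick sanity check inside the freshly created environment could look like this (the version in the comment simply reflects the pin in the install command above; adjust it if you installed something different):
import sklearn
import mglearn

print(sklearn.__version__)  # expect 1.0.2 given the pin above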

Related

NameError: name 'cos' is not defined, SageMath 9.4

I am writing a Python module using SageMath 9.4. Basically, I want to import this module into JupyterLab notebooks (running a SageMath 9.4 kernel) to do calculations etc.
Here is the start of it:
class Coxeter_System:
    '''This class defines the standard root system associated to an abstract Coxeter group.'''
    def __init__(self, coxeter_matrix):
        '''Sets up a Coxeter system and root system. At this stage, limited to up to rank 7.
        '''
        def set_up_coefficient_space(coxeter_matrix):
            '''Sets up a polynomial ring a Free module over a polynomial ring quotiented by the minimal polynomials of the
            non-rational cos(pi/m_ij) values.
            This is so roots can be compared using an abstract free module rather than over reals'''
            A = coxeter_matrix
            k = len(A.rows())
            # Get the cos(pi/m_ij) which are irrational
            non_rational_angles = [x for x in [cos(pi/x) for x in set(A[i,j] for i in range(0,k) for j in range(0,k))] if x not in QQ]
However, when I open another JupyterLab session, import the Python module and try to create an instance of the "Coxeter_System" object, I get the following error. I have tried doing from math import cos both in the notebook into which I want to import the module and in the module itself, but I still get the same error.
Any help would be greatly appreciated!
NameError Traceback (most recent call last)
<ipython-input-6-6b542b6cb042> in <module>
----> 1 W = c.Coxeter_System(Matrix([[Integer(1),Integer(3),Integer(4)],[Integer(3),Integer(1),Integer(3)],[Integer(4),Integer(3),Integer(1)]]))
~/coxeter_groups.py in __init__(self, coxeter_matrix)
60 return matrix(R,k,B)
61
---> 62 R = set_up_coefficient_space(coxeter_matrix)
63 A = coxeter_matrix
64 k = len(A.rows())
~/coxeter_groups.py in set_up_coefficient_space(coxeter_matrix)
17
18 # Get the cos(pi/m_ij) which are irrational
---> 19 non_rational_angles = [x for x in [cos(pi/x) for x in set(A[i,j] for i in range(0,k) for j in range(0,k))] if x not in QQ]
20
21 # sort the irrational values of cos(pi/m_ij) in ascending order
~/coxeter_groups.py in <listcomp>(.0)
17
18 # Get the cos(pi/m_ij) which are irrational
---> 19 non_rational_angles = [x for x in [cos(pi/x) for x in set(A[i,j] for i in range(0,k) for j in range(0,k))] if x not in QQ]
20
21 # sort the irrational values of cos(pi/m_ij) in ascending order
NameError: name 'cos' is not defined
Just use math.cos instead of a bare cos, i.e. import math rather than from math import cos, e.g. math.cos(pi/x):
import math
math.cos(pi/x)
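As a minimal standalone sketch of that substitution (the value of x here is just a placeholder):
import math

x = 3  # placeholder denominator
print(math.cos(math.pi / x))  # approximately 0.5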
In most IDEs you will have to import some of the most popular math packages explicitly,
# Example:
import numpy as np
import math as m
import sagemath as sm
With numpy and math you will have no problem: a simple import numpy works in Google Colab and most popular IDEs.
What is SageMath?
SageMath, previously known as Sage, is a computer algebra system (CAS) that stands out for being built on well-established mathematical packages such as NumPy, SymPy, PARI/GP, and Maxima.
Install the SageMath package in Jupyter Notebook:
In the case of sagemath, inside jupyter you will have to do some extra steps.
To install the sagemath package, we run pip:
!pip install sagemath
Using sagemath
# import and add the alias s
import sagemath as s
To use it we place the alias "s" in front of the function:
s.cos(s.pi/x)
Note that sagemath does not always work correctly.
We know the following: "The philosophy of SageMath is to use existing open-source libraries wherever they exist. Therefore, it uses many libraries from other projects."
For these mathematical functions you can use numpy:
import numpy as np
np.cos(np.pi/x)
It will not fail, and in Jupyter it needs no additional installation.
I hope this helps.

I have python error when import sensitivity

In Python, when I write
import sensitivity
I get this error:
ImportError Traceback (most recent call last)
in
----> 1 import sensitivity
~\anaconda3\envs\name_of_my_env\lib\site-packages\sensitivity\__init__.py in
3 visualizations including gradient DataFrames and hex-bin plots
4 """
----> 5 from sensitivity.main import SensitivityAnalyzer
~\anaconda3\envs\name_of_my_env\lib\site-packages\sensitivity\main.py in
9 from IPython.display import display, HTML
10
---> 11 from sensitivity.df import sensitivity_df, _style_sensitivity_df, _two_variable_sensitivity_display_df
12 from sensitivity.hexbin import _hex_figure_from_sensitivity_df
13
~\anaconda3\envs\name_of_my_env\lib\site-packages\sensitivity\df.py in
6
7 import pandas as pd
----> 8 import pd_utils
9 from pandas.io.formats.style import Styler
10 import numpy as np
~\anaconda3\envs\name_of_my_env\lib\site-packages\pd_utils\__init__.py in
37 join_col_strings
38 )
---> 39 from pd_utils.plot import plot_multi_axis
40
41
~\anaconda3\envs\name_of_my_env\lib\site-packages\pd_utils\plot.py in
2
3 import pandas as pd
----> 4 from pandas.plotting._matplotlib.style import _get_standard_colors
5 import matplotlib.pyplot as plt
6
ImportError: cannot import name '_get_standard_colors' from 'pandas.plotting._matplotlib.style' (C:\Users\DELL\anaconda3\envs\name_of_my_env\lib\site-packages\pandas\plotting\_matplotlib\style.py)
There is a mistake in the plot.py script of the pd_utils library (a dependency of sensitivity, as the traceback shows). You need to change the import from from pandas.plotting._matplotlib.style import _get_standard_colors to from pandas.plotting._matplotlib.style import get_standard_colors
i.e. just remove the leading underscore.
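If you patch the file manually, the edited line in pd_utils/plot.py would look something like this (the as-alias is only needed if the rest of the module still refers to the old underscore name; treat this as a sketch rather than the official fix):
# in ...\site-packages\pd_utils\plot.py
from pandas.plotting._matplotlib.style import get_standard_colors as _get_standard_colors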
I'm the creator of sensitivity and I just put a fix for this (see GitHub issue).
pip install --upgrade sensitivity should get you v0.2.6 or newer and fix the issue (see releases).
It was caused by newer versions of Pandas that moved things around and broke another one of my packages, pd_utils, which this depended on. It turned out the functionality I needed from pd_utils was very small and unrelated to the part that was breaking, so I refactored things a bit to remove pd_utils as a dependency (this should keep it more stable going forward as well).
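After upgrading, a quick import check confirms the fix; SensitivityAnalyzer is the class the package exposes in the traceback above:
import sensitivity

# should import cleanly now that pd_utils is no longer a dependency
print(sensitivity.SensitivityAnalyzer)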

CUML fit functions throwing cp.full TypeError

I've been trying to run RAPIDS on Google Colab Pro, and have successfully installed the cuml and cudf packages; however, I am unable to run even the example scripts.
TLDR;
Any time I try to run a cuml fit function on Google Colab I get the following error. It appears when using the demo examples, both the installation example and the cuml ones, and happens for a range of cuml examples (I first hit it trying to run UMAP).
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-c06fc2c31ca3> in <module>()
13 knn.fit(X_train, y_train)
14
---> 15 knn.predict(X_test)
5 frames
cuml/neighbors/kneighbors_regressor.pyx in cuml.neighbors.kneighbors_regressor.KNeighborsRegressor.predict()
cuml/neighbors/nearest_neighbors.pyx in cuml.neighbors.nearest_neighbors.NearestNeighbors.kneighbors()
cuml/neighbors/nearest_neighbors.pyx in cuml.neighbors.nearest_neighbors.NearestNeighbors._kneighbors()
cuml/neighbors/nearest_neighbors.pyx in cuml.neighbors.nearest_neighbors.NearestNeighbors._kneighbors_dense()
/usr/local/lib/python3.7/site-packages/cuml/common/array.py in full(cls, shape, value, dtype, order)
326 """
327
--> 328 return CumlArray(cp.full(shape, value, dtype, order))
329
330 @classmethod
TypeError: full() takes from 2 to 3 positional arguments but 4 were given
Steps taken on Google Colab Pro (to reproduce error)
Here's an example: I install the relevant packages using this example from RAPIDS (https://colab.research.google.com/drive/1rY7Ln6rEE1pOlfSHCYOVaqt8OvDO35J0#forceEdit=true&offline=true&sandboxMode=true):
# Install RAPIDS
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh stable
import sys, os, shutil
sys.path.append('/usr/local/lib/python3.7/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ["CONDA_PREFIX"] = "/usr/local"
for so in ['cudf', 'rmm', 'nccl', 'cuml', 'cugraph', 'xgboost', 'cuspatial']:
    fn = 'lib'+so+'.so'
    source_fn = '/usr/local/lib/'+fn
    dest_fn = '/usr/lib/'+fn
    if os.path.exists(source_fn):
        print(f'Copying {source_fn} to {dest_fn}')
        shutil.copyfile(source_fn, dest_fn)

# fix for BlazingSQL import issue
# ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /usr/local/lib/python3.7/site-packages/../../libblazingsql-engine.so)
if not os.path.exists('/usr/lib64'):
    os.makedirs('/usr/lib64')
for so_file in os.listdir('/usr/local/lib'):
    if 'libstdc' in so_file:
        shutil.copyfile('/usr/local/lib/'+so_file, '/usr/lib64/'+so_file)
        shutil.copyfile('/usr/local/lib/'+so_file, '/usr/lib/x86_64-linux-gnu/'+so_file)
Then I try and run the example below from cuML (https://docs.rapids.ai/api/cuml/stable/api.html#k-means-clustering)
from cuml.neighbors import KNeighborsRegressor
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
X, y = make_blobs(n_samples=100, centers=5,
                  n_features=10)
knn = KNeighborsRegressor(n_neighbors=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80)
knn.fit(X_train, y_train)
knn.predict(X_test)
This will result in the error at the start of the question.
Colab retains cupy==7.4.0 despite conda installing cupy==8.6.0 during the RAPIDS install (it is a custom install). I just had success by pip installing cupy-cuda110==8.6.0 BEFORE installing RAPIDS, with:
!pip install cupy-cuda110==8.6.0
I'll be updating the script soon so that you won't have to do it manually, but want to test a few more things out. Thanks again for letting us know!
EDIT: script updated.
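To verify which cupy is actually active in the Colab runtime after the install (a quick check, not part of the official script):
import cupy

print(cupy.__version__)  # should report 8.6.0 rather than 7.4.0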

ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\Users\polon\Anaconda3\lib\site-packages\tensorflow\__init__.py)

Good day,
Here is the error. Can somebody help me solve it?
ImportError Traceback (most recent call last)
<ipython-input-18-c29f17706012> in <module>
7 import numpy as np
8 import numpy.random as nr
----> 9 from tensorflow import set_random_seed
10 import matplotlib.pyplot as plt
11 get_ipython().run_line_magic('matplotlib', 'inline')
ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\Users\polon\Anaconda3\lib\site-packages\tensorflow\__init__.py)
I looked for similar problems on Stack Overflow, but nothing worked for me.
In TensorFlow 2 there is no need to run
from tensorflow import set_random_seed
in order to call
set_random_seed(x)
(as it was in the older version).
You only have to run
import tensorflow
tensorflow.random.set_seed(x)
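A minimal sketch (assuming TensorFlow 2.x) showing that the replacement API gives reproducible draws:
import tensorflow as tf

tf.random.set_seed(42)
a = tf.random.uniform([2])
tf.random.set_seed(42)
b = tf.random.uniform([2])
print(bool(tf.reduce_all(a == b)))  # expected: True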
Thanks to @David Buck
I too faced the same error, but instead of
from tensorflow import set_random_seed, I used
import tensorflow as tf
tf.random.set_seed(x)
and it worked. I think the first method is for version 1 and the snippet above is for version 2.
This code works for me:
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(1)
I get the same result from my neural network model every time.
The TensorFlow API has been updated from set_random_seed() to set_seed().
You can use the following code:
from tensorflow.random import set_seed
Reference link:
TensorFlow Random Seed
You can also try the following import statement:
from tensorflow.python.framework.random_seed import set_random_seed
If you want to set the random seed number, you can try this (note that tf.set_random_seed exists only in TensorFlow 1.x):
import tensorflow as tf
tf.set_random_seed(1234)
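For completeness, in TensorFlow 2.x the old name is still reachable through the compat module, which may help when running code written for 1.x:
import tensorflow as tf

tf.compat.v1.set_random_seed(1234)  # TF 1.x-style global seed via the compat shim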

Python: "ImportError: DLL load failed: The specified module could not be found." Problems when importing ffn (finance library for python)

Apologies if a thread already exists that has figured this out (I've spent a few hours attentively searching multiple sites and the GitHub repos of the dependencies that seem to cause the problems); however, each solution seemed fairly specific to the particular library the poster was trying to use.
I've been messing around with quantitative finance/algorithmic trading and have been trying to import a particular library, ffn. However, per the question title, I've been receiving a somewhat lengthy error message detailing an ImportError and how I'm supposedly missing certain, very specific dependencies that appear to be installed. Honestly this may just be a dependency-ception (I'm missing dependencies of dependencies of ffn), but I've done my best to rule out this possibility.
Here's the full error:
ImportError Traceback (most recent call last)
<ipython-input-2-01bc82d8cf41> in <module>()
2 import numpy as np
3 import pandas as pd
----> 4 import ffn
5 import math
~\PycharmProjects\buff\venv\lib\site-packages\ffn\__init__.py in <module>()
----> 1 from . import core
2 from . import data
3
4 from .data import get
5 #from .core import year_frac, PerformanceStats, GroupStats, merge
~\PycharmProjects\buff\venv\lib\site-packages\ffn\core.py in <module>()
8 from pandas.core.base import PandasObject
9 from tabulate import tabulate
---> 10 import sklearn.manifold
11 import sklearn.cluster
12 import sklearn.covariance
~\PycharmProjects\buff\venv\lib\site-packages\sklearn\__init__.py in <module>()
132 else:
133 from . import __check_build
--> 134 from .base import clone
135 __check_build # avoid flakes unused variable error
136
~\PycharmProjects\buff\venv\lib\site-packages\sklearn\base.py in <module>()
11 from scipy import sparse
12 from .externals import six
---> 13 from .utils.fixes import signature
14 from . import __version__
15
~\PycharmProjects\buff\venv\lib\site-packages\sklearn\utils\__init__.py in <module>()
9
10 from .murmurhash import murmurhash3_32
---> 11 from .validation import (as_float_array,
12 assert_all_finite,
13 check_random_state, column_or_1d, check_array,
~\PycharmProjects\buff\venv\lib\site-packages\sklearn\utils\validation.py in <module>()
16
17 from ..externals import six
---> 18 from ..utils.fixes import signature
19 from .. import get_config as _get_config
20 from ..exceptions import NonBLASDotWarning
~\PycharmProjects\buff\venv\lib\site-packages\sklearn\utils\fixes.py in <module>()
142 from ._scipy_sparse_lsqr_backport import lsqr as sparse_lsqr
143 else:
--> 144 from scipy.sparse.linalg import lsqr as sparse_lsqr # noqa
145
146
~\PycharmProjects\buff\venv\lib\site-packages\scipy\sparse\linalg\__init__.py in <module>()
112 from __future__ import division, print_function, absolute_import
113
--> 114 from .isolve import *
115 from .dsolve import *
116 from .interface import *
~\PycharmProjects\buff\venv\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py in <module>()
4
5 #from info import __doc__
----> 6 from .iterative import *
7 from .minres import minres
8 from .lgmres import lgmres
~\PycharmProjects\buff\venv\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py in <module>()
8 import numpy as np
9
---> 10 from . import _iterative
11
12 from scipy.sparse.linalg.interface import LinearOperator
ImportError: DLL load failed: The specified module could not be found.
This particular message was from a failed Jupyter notebook trial (IPython console), though I've tried running the same code through a "normal" Python 3 file, only to get the same message. As I said earlier, I have already downloaded and properly installed all the dependencies mentioned in the message (sklearn and scipy are the only problem children, outside of ffn itself, that the error mentions). The thing confusing me the most is that everything these import statements (within the dependencies and within ffn) reference is where it should be and, to my knowledge, is accessible.
Perhaps I should've researched this more thoroughly, but the only thing that really made sense to me was that I had the wrong version of these libraries (which are, for the most part, well maintained and somewhat frequently updated) and that certain features that ffn and its dependencies need were deprecated and no longer exist. However, this theory was disproven (at least in part) when I took 30 seconds to figure out if sklearn.manifold existed, and to my apparent surprise, it does. I also checked my IDE's library manager/ interpreter settings menu and everything is up to date (I'm using PyCharm CE).
In short: why I am receiving this message when I seem to have everything it's searching for/ what exactly does it mean, and how do I fix this so that I can use the libraries I wanted to use?
If this helps at all, here's a summary:
All libraries/dependencies are up to date (PyCharm tracks which version each one is currently on, although I have to go in manually to tell it to execute the update).
Again, I'm on PyCharm CE 2018 (most recent version).
Here's the entire cell from the Jupyter notebook which yields the error (which also happens to be everything that's in the notebook):
from pylab import *
import numpy as np
import pandas as pd
import ffn
import math
Here's all of the contents of the Python document that yields the same error (virtually the same code):
import ffn
import math
import pandas as pd, numpy as np
import datetime
data1 = ffn.get('agg, hyg, spy, eem, efa', start='2018-01-01', end='2018-02-02')
print(data1.head())
I'm running Windows 10 64 bit
Your code is not able to locate your modules; in a Jupyter Notebook, you can make it so that it can. PYTHONPATH is the environment variable which locates custom modules in Python. Your modules are in your project directory, so you need to make sure your interpreter can locate those files.
Basically, you need to set the path in your Jupyter Notebook so it can locate imported user-defined modules.
"To set an env variable in a jupyter notebook, just use a % magic command, either %env or %set_env, e.g., %env MY_VAR=MY_VALUE or %env MY_VAR MY_VALUE. (Use %env by itself to print out current environment variables.)"
See: How to set env variable in Jupyter notebook
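As a sketch of the same idea done from inside a notebook cell in plain Python (the project path here is hypothetical; replace it with wherever your own modules live):
import os
import sys

project_dir = r"C:\Users\you\PycharmProjects\buff"  # hypothetical path
os.environ["PYTHONPATH"] = project_dir  # visible to child processes
sys.path.append(project_dir)            # visible to the current kernel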
