Cannot import sklearn.linear_model - python

When I try to import sklearn.linear_model, I get this error:
In [1]: from sklearn import linear_model
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-a6ebbebad697> in <module>()
----> 1 from sklearn import linear_model
/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/__init__.py in <module>()
15 from .least_angle import (Lars, LassoLars, lars_path, LarsCV, LassoLarsCV,
16 LassoLarsIC)
---> 17 from .coordinate_descent import (Lasso, ElasticNet, LassoCV, ElasticNetCV,
18 lasso_path, enet_path, MultiTaskLasso,
19 MultiTaskElasticNet, MultiTaskElasticNetCV,
/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/coordinate_descent.py in <module>()
27 from ..exceptions import ConvergenceWarning
28
---> 29 from . import cd_fast
30
31
ImportError: /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/cd_fast.so: undefined symbol: ATL_dtger
I know it has something to do with ATLAS, but I have no idea what. This exact code used to run smoothly on this very machine, and I am not aware of any library having been modified or installed since.
Thank you.

I am not sure what the problem is, but why not uninstall and reinstall sklearn? I have had issues with some Python libraries, and this simple procedure sometimes works.

I solved it, but I'm not sure how. I reinstalled some things. I found out someone had installed Atlas via apt-get, so I removed it and recompiled pretty much everything.
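A minimal diagnostic sketch for anyone hitting the same symptom: ask numpy which BLAS/LAPACK build it was compiled against. If ATLAS shows up there but the system ATLAS has since been removed or replaced, compiled extensions such as sklearn's cd_fast can fail with undefined-symbol errors until they are rebuilt or reinstalled.
import numpy as np

print(np.__version__)
# Prints the BLAS/LAPACK libraries and include directories recorded when numpy
# was built; ATLAS entries that no longer match the installed system libraries
# are a hint that compiled extensions need to be rebuilt or reinstalled.
np.show_config()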

Related

Error with from pycaret.classification import *

When I tried to run
from pycaret.classification import *
I received this error:
ImportError Traceback (most recent call last)
<ipython-input-83-a8cb12878b37> in <module>()
----> 1 from pycaret.classification import *
8 frames
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/pairwise.py in <module>()
30 from ..utils._mask import _get_mask
31 from ..utils.validation import _deprecate_positional_args
---> 32 from ..utils.fixes import sp_version, parse_version
33
34 from ._pairwise_fast import _chi2_kernel_fast, _sparse_manhattan
ImportError: cannot import name 'parse_version' from 'sklearn.utils.fixes'
(/usr/local/lib/python3.7/dist-packages/sklearn/utils/fixes.py)
I uninstalled Python and reinstalled it, and now it works, but I don't know why.
Try running !pip install markupsafe==2.0.1 and let me know if this helps.
Please try:
import pycaret.classification
Let me know if this works
A possible resolution is to create a fresh environment. Sometimes conflicting packages cause issues, so working with a fresh environment can help you determine whether this is the problem.
Let me know if this helps.
I also ran into the same issue and found this solution helpful when working on Google Colab. It didn't work on Jupyter, though, so I created a virtual environment to work in, and everything worked out smoothly.
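Before rebuilding an environment, it can help to confirm which scikit-learn and pycaret versions are actually being imported, since parse_version only exists in newer scikit-learn releases and an older scikit-learn paired with a newer pycaret fails exactly like this. A minimal version check (sketch only; the exact minimum versions depend on your pycaret release):
import sklearn
import pkg_resources  # ships with setuptools

print(sklearn.__version__)                                # scikit-learn version actually imported
print(pkg_resources.get_distribution("pycaret").version)  # installed pycaret release

# Raises the same ImportError on scikit-learn releases that predate parse_version:
from sklearn.utils.fixes import parse_version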

Yellowbrick Module NotFoundError in Python

I am trying to use Yellowbrick to make an elbow plot (for k-means clustering).
I have installed Yellowbrick in a Jupyter notebook, but it keeps returning the error message shown below.
I would be very happy if you could help me.
from yellowbrick.cluster import KElbowVisualizer
from sklearn.cluster import KMeans  # KMeans comes from scikit-learn

model = KMeans()
visualizer = KElbowVisualizer(model, k=(1, 250))
visualizer.fit(x.reshape(-1, 1))  # x is the 1-D data array being clustered
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-84-390153c57930> in <module>
----> 1 from yellowbrick.cluster import KElbowVisualizer
2 model = KMeans()
3 visualizer = KElbowVisualizer(model, k=(1,250))
4 visualizer.fit(x.reshape(-1,1))
5
~/.local/lib/python3.7/site-packages/yellowbrick/__init__.py in <module>
37 from .anscombe import anscombe
38 from .datasaurus import datasaurus
---> 39 from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
40
41 # from .classifier import crplot, rocplot
~/.local/lib/python3.7/site-packages/yellowbrick/classifier/__init__.py in <module>
24 from ..base import ScoreVisualizer
25 from .base import ClassificationScoreVisualizer
---> 26 from .class_prediction_error import ClassPredictionError, class_prediction_error
27 from .classification_report import ClassificationReport, classification_report
28 from .confusion_matrix import ConfusionMatrix, confusion_matrix
~/.local/lib/python3.7/site-packages/yellowbrick/classifier/class_prediction_error.py in <module>
22
23 from sklearn.utils.multiclass import unique_labels
---> 24 from sklearn.metrics._classification import _check_targets
25
26 from yellowbrick.draw import bar_stack
ModuleNotFoundError: No module named 'sklearn.metrics._classification'
Hello and thanks for checking out Yellowbrick!
The sklearn.metrics.classification module was deprecated in sklearn v0.22, so we have updated our package to import from sklearn.metrics._classification instead.
Try updating your version of scikit-learn (e.g. pip install -U scikit-learn or conda update scikit-learn) and see if that helps!
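To confirm the fix took effect before re-importing Yellowbrick, a small check (sketch only) is to look at the scikit-learn version and whether the private module is importable:
import sklearn
from importlib.util import find_spec

print(sklearn.__version__)  # needs a release that provides sklearn.metrics._classification (0.22+)
print(find_spec("sklearn.metrics._classification") is not None)  # True once a suitable version is installed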
Looks like your yellowbrick has not been installed properly. Try upgrading it:
pip install -U yellowbrick
Try this:
conda install -c districtdatalabs yellowbrick
Source: https://anaconda.org/DistrictDataLabs/yellowbrick

ImportError: No module named request when importing BeakerX into Jupyter

I am trying to import beakerx into my jupyter environment like so:
from beakerx import *
However, I get the following error:
ImportError Traceback (most recent call last)
<ipython-input-19-4c368a35c7cf> in <module>()
----> 1 from beakerx import *
/Users/vivaksoni1/venv/lib/python2.7/site-packages/beakerx/__init__.py in <module>()
13 # limitations under the License.
14
---> 15 from .runtime import BeakerX
16 from .plot import *
17 from .easyform import *
/Users/vivaksoni1/venv/lib/python2.7/site-packages/beakerx/runtime.py in <module>()
16
17 import os, json, pandas, numpy
---> 18 import urllib.request, urllib.parse, urllib.error, urllib.request, urllib.error, urllib.parse, IPython, datetime, calendar, math, traceback, time
19 from traitlets import Unicode
20
ImportError: No module named request
I am not sure what this error means. Also, it seems to be looking into python2.7 directories even though this is a Python 3 script? I installed beakerx using pip3 install beakerx and can see the files in the right folder:
anaconda3/pkgs/beakerx-0.12.2-py36_2/lib/python3.6/site-packages/beakerx
This seems to be how every other module is stored but I cannot get it working for some reason. Can anyone help?
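A minimal diagnostic sketch: urllib.request only exists on Python 3, so check which interpreter the notebook kernel is actually running. The traceback paths suggest the kernel is the Python 2.7 virtualenv rather than the anaconda3 installation where beakerx was installed.
import sys

print(sys.executable)    # path of the interpreter the notebook kernel is using
print(sys.version_info)  # urllib.request requires Python 3; on 2.7 the import fails as above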

Datashader has snappy error

I was using Python's datashader 0.5.0 package to plot population density information, generally following the tutorial https://www.continuum.io/blog/developer-blog/analyzing-and-visualizing-big-data-interactively-your-laptop-datashading-2010-us . I installed datashader using conda install -c bokeh datashader=0.5.0.
All was fine. Though perhaps unrelated, things seemed to break as soon as I installed the holoviews and geoviews packages. After installing these additional packages, I can no longer import datashader and my once-working code no longer runs. When importing datashader, I get the following error:
AttributeError: module 'snappy' has no attribute 'compress'
I am running on windows 10, anaconda python 3.5.3.
Perhaps I'm going down the wrong rabbit hole, but I thought perhaps it was the snappy package. I ran conda install -c conda-forge snappy=1.1.4, and conda list shows that snappy is installed. Snappy does import, but the snappy.compress attribute is not found. My issue seems related to the following SO post, as I also had a fastparquet error when trying geoviews: error with snappy while importing fastparquet in python
After import snappy, running print(snappy.__file__) gives the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-b8565733b383> in <module>()
----> 1 import snappy; print(snappy.__file__)
AttributeError: module 'snappy' has no attribute '__file__'
I also tried uninstalling through both conda and pip just in case. Still no joy.
Running "pip install python-snappy" results in a "failed building wheel for python-snappy" error preceded with " error: Microsoft Visual C++ 14.0 is required..." So I went and got the "Microsoft Visual C++ Redistributable for Visual Studio 2017" and ran it, but had no change.
Any thoughts on how to resolve this? For reference, the full error on datashader import is as follows:
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-3d7b1ff9e530> in <module>()
----> 1 import datashader
C:\Python\lib\site-packages\datashader\__init__.py in <module>()
3 __version__ = '0.5.0'
4
----> 5 from .core import Canvas
6 from .reductions import (count, any, sum, min, max, mean, std, var, count_cat,
7 summary)
C:\Python\lib\site-packages\datashader\core.py in <module>()
3 import numpy as np
4 from datashape.predicates import istabular
----> 5 from odo import discover
6 from xarray import DataArray
7
C:\Python\lib\site-packages\odo\__init__.py in <module>()
63 from .backends.url import URL
64 with ignoring(ImportError):
---> 65 from .backends.dask import dask
66
67
C:\Python\lib\site-packages\odo\backends\dask.py in <module>()
8
9 from dask.array.core import Array, from_array
---> 10 from dask.bag.core import Bag
11 import dask.bag as db
12 from dask.compatibility import long
C:\Python\lib\site-packages\dask\bag\__init__.py in <module>()
1 from __future__ import absolute_import, division, print_function
2
----> 3 from .core import (Bag, Item, from_sequence, from_url, to_textfiles, concat,
4 from_delayed, map_partitions, bag_range as range,
5 bag_zip as zip, bag_map as map)
C:\Python\lib\site-packages\dask\bag\core.py in <module>()
30
31 from ..base import Base, normalize_token, tokenize
---> 32 from ..bytes.core import write_bytes
33 from ..compatibility import apply, urlopen
34 from ..context import _globals, defer_to_globals
C:\Python\lib\site-packages\dask\bytes\__init__.py in <module>()
2
3 from ..utils import ignoring
----> 4 from .core import read_bytes, open_files, open_text_files
5
6 from . import local
C:\Python\lib\site-packages\dask\bytes\core.py in <module>()
7 from warnings import warn
8
----> 9 from .compression import seekable_files, files as compress_files
10 from .utils import (SeekableFile, read_block, infer_compression,
11 infer_storage_options, build_name_function)
C:\Python\lib\site-packages\dask\bytes\compression.py in <module>()
30 with ignoring(ImportError):
31 import snappy
---> 32 compress['snappy'] = snappy.compress
33 decompress['snappy'] = snappy.decompress
34
AttributeError: module 'snappy' has no attribute 'compress'
It turns out that in adding packages, something messed up the snappy install. I followed this solution: How to install snappy C libraries on Windows 10 for use with python-snappy in Anaconda?
It was a snappy error, not a datashader issue, but I'll leave the post in case anyone has the same series of issues.
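After fixing the install, a quick way to verify that the python-snappy bindings are the ones actually being imported (a minimal sketch) is to round-trip a small payload:
import snappy

print(snappy.__file__)                   # should point at the python-snappy install
data = snappy.compress(b"hello snappy")  # AttributeError here means the wrong 'snappy' module is on the path
print(snappy.decompress(data))           # round-trips back to b'hello snappy'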

AttributeError: module 'numpy' has no attribute 'version'

I am learning how to use pandas in an IPython notebook:
import pandas as pd
But I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-17-c7ecb2b0a99d> in <module>()
----> 1 from pandas import *
D:\Anaconda\lib\site-packages\pandas\__init__.py in <module>()
20
21 # numpy compat
---> 22 from pandas.compat.numpy import *
23
24 try:
D:\Anaconda\lib\site-packages\pandas\compat\numpy\__init__.py in <module>()
8
9 # numpy versioning
---> 10 _np_version = np.version.short_version
11 _nlv = LooseVersion(_np_version)
12 _np_version_under1p8 = _nlv < '1.8'
AttributeError: module 'numpy' has no attribute 'version'
I have no idea how to fix it. What is the problem? My Python version is 3.6.
Numpy has dependencies, and Anaconda has a history of getting them wrong, leading to numpy failing to initialize properly. The AttributeError is most likely caused by a numpy initialization failure. This error usually happens when updating numpy or other dependencies that change the numpy version via conda (that's why you can get numpy failing after updating pandas...).
Example of such failure: https://github.com/ipython/ipyparallel/issues/326
The solution that always works for me is updating to a known working version of numpy. Currently, for me on Windows 10 x64, it is 1.15.1.
Please note it is a problem with Anaconda dependencies rather than numpy itself. I can't provide more specific guidance without details like the OS, package versions, etc.
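A quick way to check whether numpy initialized correctly (a minimal sketch) is to read the same attribute pandas reads at import time:
import numpy as np

print(np.__version__)            # version that pandas will see
print(np.version.short_version)  # the attribute pandas.compat.numpy reads; this line fails if numpy did not initialize properly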
