Install CleverHans on Windows - Python

I hope all of you are doing great.
I need to install cleverhans on Windows for a project, but after installing it I was unable to import any of its Python modules, and I got a lot of errors like:
<ipython-input-12-4fbd91cef426> in <module>
10 import numpy as np
11
---> 12 from cleverhans.attacks import FastGradientMethod
13 from cleverhans.compat import flags
14 from cleverhans.dataset import MNIST
~\src\cleverhans\cleverhans\attacks\__init__.py in <module>
10
11 from cleverhans import utils
---> 12 from cleverhans.attacks.attack import Attack
13 from cleverhans.attacks.basic_iterative_method import BasicIterativeMethod
14 from cleverhans.attacks.carlini_wagner_l2 import CarliniWagnerL2
~\src\cleverhans\cleverhans\attacks\attack.py in <module>
11
12 from cleverhans.compat import reduce_max
---> 13 from cleverhans.model import Model
14 from cleverhans import utils
15
~\src\cleverhans\cleverhans\model.py in <module>
7 import tensorflow as tf
8
----> 9 from cleverhans import utils_tf
10
11
~\src\cleverhans\cleverhans\utils_tf.py in <module>
343
344 def kl_with_logits(p_logits, q_logits, scope=None,
--> 345 loss_collection=tf.GraphKeys.REGULARIZATION_LOSSES):
346 """Helper function to compute kl-divergence KL(p || q)
347 """
AttributeError: module 'tensorflow' has no attribute 'GraphKeys'
I hope to get a lot of answers from you.
Have a nice day.

Looks like cleverhans is designed for an older version of tensorflow. To make it backward compatible, replace
import tensorflow as tf
with
import tensorflow.compat.v1 as tf
in the cleverhans source code. Alternatively, check whether an updated version of cleverhans is available, or uninstall tensorflow and install an older version (1.x) instead.
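As a rough sketch of what that change looks like (assuming TensorFlow 2.x is installed and the surrounding cleverhans code only uses TF1-style APIs), the top of each affected module would become:
import tensorflow.compat.v1 as tf  # TF1-compatible API shipped inside TensorFlow 2.x
tf.disable_v2_behavior()           # optional: run the rest of the module with TF1 semantics
# TF1-only attributes such as tf.GraphKeys are available again:
print(tf.GraphKeys.REGULARIZATION_LOSSES)  # 'regularization_losses'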
See:
Tensorflow 2.1.0 Error, module 'tensorflow' has no attribute 'GraphKeys'
From cleverhans GitHub repo:
"Currently supported setups
Although CleverHans is likely to work on many other machine configurations, we currently test it with Python 3.5 and TensorFlow {1.8, 1.12} on Ubuntu 14.04.5 LTS (Trusty Tahr)."

Related

ModuleNotFoundError: No module named 'thinc.neural'

I'm having trouble running my code; here is the error I got:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-35-d0706627a935> in <module>
1 # import the function to train using spaCy
----> 2 from train_spacy import train_spacy
~\TA\train_spacy.py in <module>
3 from spacy.util import minibatch, compounding
4 from spacy.util import decaying
----> 5 from thinc.neural.optimizers import Adam
6 import random
7 from matplotlib import pyplot as plt
ModuleNotFoundError: No module named 'thinc.neural'
The following are the related modules installed on my OS.
C:\Users\Anonymous>pip list | findstr spacy
spacy          3.0.5
spacy-legacy   3.0.1
C:\Users\Anonymous>pip list | findstr thinc
thinc          8.0.2
If there is anything else that I have not explained, please ask me. I'm asking for your help because I have tried reinstalling the module but am still facing the same error. I apologize if my language is wrong. Thank you ^ - ^
A few versions ago thinc was reorganized; classes and functions should now be imported from thinc.api.
As you can see, Adam is referenced in https://github.com/explosion/thinc/blob/master/thinc/api.py.
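A minimal sketch of the corresponding change in train_spacy.py (assuming nothing else in the file still relies on the old thinc.neural layout):
# before (thinc < 8): from thinc.neural.optimizers import Adam
from thinc.api import Adam  # thinc 8.x exposes Adam via thinc.api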

Yellowbrick Module NotFoundError in Python

I am trying to use Yellowbrick to make an elbow plot (for k-means clustering).
I have installed Yellowbrick in my Jupyter notebook, but it keeps returning the error below.
The full error message and traceback are included below.
I would be very happy if you could help me.
from yellowbrick.cluster import KElbowVisualizer
model = KMeans()
visualizer = KElbowVisualizer(model, k=(1,250))
visualizer.fit(x.reshape(-1,1))
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-84-390153c57930> in <module>
----> 1 from yellowbrick.cluster import KElbowVisualizer
2 model = KMeans()
3 visualizer = KElbowVisualizer(model, k=(1,250))
4 visualizer.fit(x.reshape(-1,1))
5
~/.local/lib/python3.7/site-packages/yellowbrick/__init__.py in <module>
37 from .anscombe import anscombe
38 from .datasaurus import datasaurus
---> 39 from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
40
41 # from .classifier import crplot, rocplot
~/.local/lib/python3.7/site-packages/yellowbrick/classifier/__init__.py in <module>
24 from ..base import ScoreVisualizer
25 from .base import ClassificationScoreVisualizer
---> 26 from .class_prediction_error import ClassPredictionError, class_prediction_error
27 from .classification_report import ClassificationReport, classification_report
28 from .confusion_matrix import ConfusionMatrix, confusion_matrix
~/.local/lib/python3.7/site-packages/yellowbrick/classifier/class_prediction_error.py in <module>
22
23 from sklearn.utils.multiclass import unique_labels
---> 24 from sklearn.metrics._classification import _check_targets
25
26 from yellowbrick.draw import bar_stack
ModuleNotFoundError: No module named 'sklearn.metrics._classification'
Hello and thanks for checking out Yellowbrick!
The sklearn.metrics.classification module was deprecated in sklearn v0.22, so we have updated our package to import from sklearn.metrics._classification instead.
Try updating your version of scikit-learn (e.g. pip install -U scikit-learn or conda update scikit-learn) and see if that helps!
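As a quick sanity check (a sketch, assuming scikit-learn itself imports), print the installed version; the private sklearn.metrics._classification module only exists from scikit-learn 0.22 onwards:
import sklearn
print(sklearn.__version__)  # anything below 0.22 will not have sklearn.metrics._classification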
Looks like your yellowbrick has not been installed properly. Try installing it only for your user:
pip install -U yellowbrick --user
Try this:
conda install -c districtdatalabs yellowbrick
Source: https://anaconda.org/DistrictDataLabs/yellowbrick

Importing wordcloud using Jupyter Notebook (Python)

I am using Jupyter Notebook (conda root). The Python version I am running is 2.7.
I am having a hard time getting wordcloud installed into my environment. Here's the code:
from wordcloud import WordCloud
But I got this error:
ImportError                               Traceback (most recent call last)
<ipython-input-21-8038e19af624> in <module>()
----> 1 from wordcloud import WordCloud
C:\Users\aneeq\Anaconda2\lib\site-packages\wordcloud\__init__.py in <module>()
----> 1 from .wordcloud import (WordCloud, STOPWORDS, random_color_func,
2 get_single_color_func)
3 from .color_from_image import ImageColorGenerator
4
5 __all__ = ['WordCloud', 'STOPWORDS', 'random_color_func',
C:\Users\aneeq\Anaconda2\lib\site-packages\wordcloud\wordcloud.py in <module>()
17 from operator import itemgetter
18
---> 19 from PIL import Image
20 from PIL import ImageColor
21 from PIL import ImageDraw
C:\Users\aneeq\Anaconda2\lib\site-packages\PIL\Image.py in <module>()
56 # Also note that Image.core is not a publicly documented interface,
57 # and should be considered private and subject to change.
---> 58 from . import _imaging as core
59 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
60 raise ImportError("The _imaging extension was built for another "
ImportError: DLL load failed: The specified module could not be found.
Can anyone explain what this error is?
I need to use wordcloud for my homework assignment.
You have a problem with part of the Python Pillow module: the compiled extension library built from _imaging.c. Try reinstalling the Pillow module from a prebuilt .exe package, not with pip.
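Since the question uses an Anaconda 2 environment, one possible way to get a prebuilt Pillow binary is through conda (a sketch of the idea, not the only option):
pip uninstall pillow
conda install pillow
Then restart the Jupyter kernel and try from wordcloud import WordCloud again.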

Datashader has snappy error

I was using python's datashader 0.5.0 package to plot population density information, generally following the tutorial https://www.continuum.io/blog/developer-blog/analyzing-and-visualizing-big-data-interactively-your-laptop-datashading-2010-us . I installed datashader using conda install -c bokeh datashader=0.5.0.
All was fine. Though perhaps unrelated, things seemed to break as soon as I installed the holoviews and geoviews packages. After installing these additional packages, I can no longer import datashader and my once-working code no longer runs. When importing datashader, I get the following error:
AttributeError: module 'snappy' has no attribute 'compress'
I am running on windows 10, anaconda python 3.5.3.
Perhaps I'm going down the wrong rabbit hole, but I thought perhaps it was the snappy package. I ran "conda install -c conda-forge snappy=1.1.4". conda list reveals that snappy is installed. Snappy does import. The snappy.compress object is not found. My issue seems related to the following SO post as I also had a fastparquet error when trying geoviews: error with snappy while importing fastparquet in python
When running import snappy; print(snappy.__file__), I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-b8565733b383> in <module>()
----> 1 import snappy; print(snappy.__file__)
AttributeError: module 'snappy' has no attribute '__file__'
I also tried uninstalling through both conda and pip just in case. Still no joy.
Running "pip install python-snappy" results in a "failed building wheel for python-snappy" error preceded with " error: Microsoft Visual C++ 14.0 is required..." So I went and got the "Microsoft Visual C++ Redistributable for Visual Studio 2017" and ran it, but had no change.
Any thoughts on how to resolve this? For reference, the full error on datashader import is as follows:
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-3d7b1ff9e530> in <module>()
----> 1 import datashader
C:\Python\lib\site-packages\datashader\__init__.py in <module>()
3 __version__ = '0.5.0'
4
----> 5 from .core import Canvas
6 from .reductions import (count, any, sum, min, max, mean, std, var, count_cat,
7 summary)
C:\Python\lib\site-packages\datashader\core.py in <module>()
3 import numpy as np
4 from datashape.predicates import istabular
----> 5 from odo import discover
6 from xarray import DataArray
7
C:\Python\lib\site-packages\odo\__init__.py in <module>()
63 from .backends.url import URL
64 with ignoring(ImportError):
---> 65 from .backends.dask import dask
66
67
C:\Python\lib\site-packages\odo\backends\dask.py in <module>()
8
9 from dask.array.core import Array, from_array
---> 10 from dask.bag.core import Bag
11 import dask.bag as db
12 from dask.compatibility import long
C:\Python\lib\site-packages\dask\bag\__init__.py in <module>()
1 from __future__ import absolute_import, division, print_function
2
----> 3 from .core import (Bag, Item, from_sequence, from_url, to_textfiles, concat,
4 from_delayed, map_partitions, bag_range as range,
5 bag_zip as zip, bag_map as map)
C:\Python\lib\site-packages\dask\bag\core.py in <module>()
30
31 from ..base import Base, normalize_token, tokenize
---> 32 from ..bytes.core import write_bytes
33 from ..compatibility import apply, urlopen
34 from ..context import _globals, defer_to_globals
C:\Python\lib\site-packages\dask\bytes\__init__.py in <module>()
2
3 from ..utils import ignoring
----> 4 from .core import read_bytes, open_files, open_text_files
5
6 from . import local
C:\Python\lib\site-packages\dask\bytes\core.py in <module>()
7 from warnings import warn
8
----> 9 from .compression import seekable_files, files as compress_files
10 from .utils import (SeekableFile, read_block, infer_compression,
11 infer_storage_options, build_name_function)
C:\Python\lib\site-packages\dask\bytes\compression.py in <module>()
30 with ignoring(ImportError):
31 import snappy
---> 32 compress['snappy'] = snappy.compress
33 decompress['snappy'] = snappy.decompress
34
AttributeError: module 'snappy' has no attribute 'compress'
It turns out that adding those packages messed up the snappy install somehow. I followed this solution: How to install snappy C libraries on Windows 10 for use with python-snappy in Anaconda?
It was a snappy error, not a datashader issue, but I'll leave the post up in case anyone runs into the same series of issues.
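For reference, a rough cleanup sketch along those lines (assuming the conda-forge build of python-snappy is what dask/fastparquet expect; the exact channels and steps in the linked answer may differ):
conda remove snappy python-snappy
conda install -c conda-forge python-snappy
Then restart the kernel; if snappy.compress exists again, the dask compression registry at the bottom of the traceback imports cleanly.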

Jupyter (IPython) notebook numpy/pandas/matplotlib error (FreeBSD)

I am trying to set up a Jupyter notebook server at home. It has taken me a long time, but I have built and installed Python 3.4 and all the required packages from FreeBSD ports successfully. The notebook server is up and running fine, except when I try to import numpy:
In[1]: import numpy
The following errors occur:
ImportError Traceback (most recent call last)
<ipython-input-1-5a0bd626bb1d> in <module>()
----> 1 import numpy
/usr/local/lib/python3.4/site-packages/numpy/__init__.py in <module>()
178 return loader(*packages, **options)
179
--> 180 from . import add_newdocs
181 __all__ = ['add_newdocs',
182 'ModuleDeprecationWarning',
/usr/local/lib/python3.4/site-packages/numpy/add_newdocs.py in <module>()
11 from __future__ import division, absolute_import, print_function
12
---> 13 from numpy.lib import add_newdoc
14
15 ###############################################################################
/usr/local/lib/python3.4/site-packages/numpy/lib/__init__.py in <module>()
6 from numpy.version import version as __version__
7
----> 8 from .type_check import *
9 from .index_tricks import *
10 from .function_base import *
/usr/local/lib/python3.4/site-packages/numpy/lib/type_check.py in <module>()
9 'common_type']
10
---> 11 import numpy.core.numeric as _nx
12 from numpy.core.numeric import asarray, asanyarray, array, isnan, \
13 obj2sctype, zeros
/usr/local/lib/python3.4/site-packages/numpy/core/__init__.py in <module>()
12 os.environ[envkey] = '1'
13 env_added.append(envkey)
---> 14 from . import multiarray
15 for envkey in env_added:
16 del os.environ[envkey]
ImportError: /lib/libgcc_s.so.1: version GCC_4.6.0 required by /usr/local/lib/gcc48/libgfortran.so.3 not found
The error messages for importing pandas and matplotlib are different, but I suspect that has something to do with this numpy import error.
Strangely, all 3 packages work fine in Python and IPython consoles with no problems at all!
I have googled and made the following attempts:
delete and reinstall numpy -> no change
append numpy directory to sys.path -> no change
install a lot of other external packages just to see if the problem is specific to numpy -> they all work fine in both consoles and the notebook, except scipy, which gives an error related to numpy
Thank you for your help!
My gcc is version 4.2.1.
I have fixed this by setting LD_LIBRARY_PATH to /usr/local/lib/gcc48. gcc48 is already installed on my system.
To avoid setting the path every time, I've added the following line to ~/.cshrc:
setenv LD_LIBRARY_PATH /usr/local/lib/gcc48
Edit:
This won't work if you want to start the notebook server automatically by adding it to crontab:
@reboot /usr/local/bin/jupyter-notebook
The same error appears when trying to import numpy and modules that depend on numpy.
I fixed this by making a copy of /usr/local/bin/jupyter-notebook and adding the following lines:
import sys
import re
# ----------------- add these 2 lines below --------------
import os
os.environ['LD_LIBRARY_PATH'] = '/usr/local/lib/gcc48'
....
Add the new file to crontab instead of jupyter-notebook.
The issue is not with your python modules. The error message at the bottom, where it says ImportError: /lib/libgcc_s.so.1: version GCC_4.6.0 required by /usr/local/lib/gcc48/libgfortran.so.3 not found indicates that it's a dependency error with the Fortran library. Apparently it wants gcc 4.6 or higher, and apparently you have a lower version installed. Not being familiar with Python libraries or your setup, my guess is that it could be an issue with /usr/ports/devel/py-fortran. I would recommend checking the gcc version on your machine with gcc -v and whatever fortran-related ports you have installed with pkg info and then take it from there.
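A short sketch of those checks on FreeBSD (the grep filters are only illustrative and not part of the original answer):
gcc -v                      # base-system compiler; 4.2.1 here, older than the required GCC 4.6
pkg info | grep gcc         # shows which gcc ports (e.g. gcc48) are installed
pkg info | grep -i fortran  # shows any fortran-related ports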
