I am importing keras_adversarial and have cloned its git repository too, but it cannot import 'AdversarialModel'.
I am working on GANs and have built the generator and discriminator. Now I want to combine them, but Colab gives an error while importing the modules of keras_adversarial:
import keras_adversarial
from keras_adversarial import AdversarialModel, simple_gan, gan_targets
from keras_adversarial import AdversarialOptimizerSimultaneous, normal_latent_sampling
      1 import keras_adversarial
----> 2 from keras_adversarial import AdversarialModel, simple_gan, gan_targets
      3 from keras_adversarial import AdversarialOptimizerSimultaneous, normal_latent_sampling

ImportError: cannot import name 'AdversarialModel'
The Colab environment only has a core set of packages installed. To add a third-party package such as keras_adversarial, you can execute the installation commands directly in Colab cells, prefixing each command with the ! symbol to indicate that it is command-line bash code, not Python.
In your case, you need to do:
!git clone https://github.com/bstriner/keras_adversarial.git
# each ! command runs in its own subshell, so chain cd with the install step
!cd keras_adversarial && python setup.py install
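Alternatively, pip can usually install straight from a GitHub URL, which avoids the separate clone and setup steps; this one-liner is a sketch assuming the repository's setup.py builds cleanly under Colab's Python version:
!pip install git+https://github.com/bstriner/keras_adversarial.git
If the import still fails afterwards, restart the runtime so the freshly installed package is picked up.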
I am following the tutorial from Microsoft ( https://learn.microsoft.com/nl-nl/azure/cognitive-services/Computer-vision/quickstarts-sdk/client-library?pivots=programming-language-python ) to use the Cognitive Services Computer Vision API. I am using Visual Studio Code for this and installed the Azure package with pip on the command line:
pip install azure-cognitiveservices-vision-customvision
I use the first piece of code (see below) and try to run it, but it returns the following error:
(myvenv) PS C:\Users\erikh\OneDrive\Documenten\Git\Python Testlab> & "c:/Users/erikh/OneDrive/Documenten/Git/Python Testlab/myvenv/Scripts/python.exe" "c:/Users/erikh/OneDrive/Documenten/Git/Python Testlab/readText.py"
Traceback (most recent call last):
File "c:/Users/erikh/OneDrive/Documenten/Git/Python Testlab/readText.py", line 1, in <module>
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
ModuleNotFoundError: No module named 'azure.cognitiveservices'
And here is the code that I try to execute:
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials
from array import array
import os
from PIL import Image
import sys
import time
# Add your Computer Vision subscription key to your environment variables.
if 'COMPUTER_VISION_SUBSCRIPTION_KEY' in os.environ:
    subscription_key = os.environ['COMPUTER_VISION_SUBSCRIPTION_KEY']
else:
    print("\nSet the COMPUTER_VISION_SUBSCRIPTION_KEY environment variable.\n**Restart your shell or IDE for changes to take effect.**")
    sys.exit()

# Add your Computer Vision endpoint to your environment variables.
if 'COMPUTER_VISION_ENDPOINT' in os.environ:
    endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
else:
    print("\nSet the COMPUTER_VISION_ENDPOINT environment variable.\n**Restart your shell or IDE for changes to take effect.**")
    sys.exit()
I can reproduce your issue: you installed the wrong package. It should be azure-cognitiveservices-vision-computervision instead of azure-cognitiveservices-vision-customvision.
Run the line below, and then it will work fine.
pip install azure-cognitiveservices-vision-computervision
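To confirm the fix, a quick check (assuming your myvenv virtual environment is the active interpreter) is to import the client class directly from the command line:
python -c "from azure.cognitiveservices.vision.computervision import ComputerVisionClient; print(ComputerVisionClient)"
If this prints the class instead of raising ModuleNotFoundError, the script above will get past the import.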
I am trying to use the dygraphs plot function from [here](https://github.com/dinkelk/PyDyGraphs) in my Jupyter notebook. The documentation says:
Installation
Simply clone this repository and include the dygraphs.graph module in your Jupyter Notebooks. Note: PyDyGraphs only supports Python 3.
The documentation says to use "import dygraphs.graph as dy".
I'm not sure where to put the repository once I've downloaded it.
I've downloaded and unzipped the package, and I've included "import dygraphs.graph" in my notebook.
This package requires pandas to be installed and imported, which I have done.
import numpy as np
import dygraphs.graph as dy
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-29a3c1e17595> in <module>
      4 import pandas as pd
      5 import time
----> 6 import dygraphs.graph as dy

ModuleNotFoundError: No module named 'dygraphs'
I'm wondering where I should have put the files downloaded from github and if there is anything else I should be doing in order to use this package.
You need to add the repository to your system path (or install the package into your default Python library location) before "import dygraphs.graph" will work.
Add something like this in your Jupyter notebook:
import sys
sys.path.append("../")
Then change "../" to the relative path of the directory that contains the dygraphs folder you downloaded (the root of the cloned repository).
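Putting it together, a minimal sketch (the clone location /home/user/PyDyGraphs is a hypothetical example path) looks like this:
import sys
sys.path.append("/home/user/PyDyGraphs")  # hypothetical path to the folder that contains dygraphs/
import dygraphs.graph as dy
Once the directory that holds the dygraphs package is on sys.path, the import resolves without installing anything.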
I want to use the open source person re-identification library in Python
on Ubuntu 19.04
with Anaconda
no CUDA
in a terminal or in PyCharm
Python version 3.7.3
PyTorch version 1.1.0
For that I have to follow the instructions on their GitHub repository:
git clone https://github.com/Cysu/open-reid.git
cd open-reid
python setup.py install
python examples/softmax_loss.py -d viper -b 64 -j 2 -a resnet50 --logs-dir logs/softmax-loss/viper-resnet50
I receive the following error:
from sklearn.utils.extmath import pinvh
ImportError: cannot import name 'pinvh'
I have tried to create virtual environments with previous versions of PyTorch (0.4.1, 0.4.0 and 1.0.1) but I always got:
File "examples/softmax_loss.py", line 12, in <module>
from reid import datasets
ModuleNotFoundError: No module named 'reid'
I do not know how to fix it.
EDIT :
Hi, thanks for the answer. The problem is that the imports look like this:
from reid import datasets
from reid import models
from reid.dist_metric import DistanceMetric
from reid.trainers import Trainer
from reid.evaluators import Evaluator
from reid.utils.data import transforms as T
from reid.utils.data.preprocessor import Preprocessor
from reid.utils.logging import Logger
from reid.utils.serialization import load_checkpoint, save_checkpoint
I tried :
from ../reid import datasets
But I got a
File "examples/softmax_loss.py", line 12
from ../reid import datasets
^
SyntaxError: invalid syntax
EDIT 2 :
After re-installing Python 3.7.3 and PyTorch 1.1.0 the problem persists with pinvh. I still get this message:
ImportError: cannot import name 'pinvh' from 'sklearn.utils.extmath'
Please tell me how to fix it, or let me know if it works for you.
Since the directory structure is as below:
(root)
|-- reid/               (contents inside reid)
|-- examples/
|   |-- softmax_loss.py
|-- (other contents in the root directory)
It can be observed that reid is not in the same directory as softmax_loss.py, but in its parent directory, so Python cannot find it when the script is run directly.
Python has no "from ../reid import ..." syntax; file-system paths are not valid in import statements, which is why the EDIT above ends in a SyntaxError. Instead, make the parent directory visible to the script: either run python setup.py install from the repository root so the reid package is installed into your environment, or add the repository root to sys.path at the top of softmax_loss.py, before the reid imports, as in the sketch below.
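A minimal sketch of the sys.path approach (it edits softmax_loss.py; the file layout is the one shown above, and nothing else is assumed):
import os
import sys

# make the repository root (the parent of examples/) importable
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))

from reid import datasets
from reid import models
With the repository root on sys.path, the remaining "from reid..." imports at line 12 and below resolve unchanged.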
utils.extmath.pinvh was deprecated in scikit-learn version 0.19 and removed in version 0.21. The easy fix is therefore to use an earlier version of scikit-learn.
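A hedged way to act on that: pin any release that still ships pinvh, for example with a version specifier (the exact bound simply follows the removal version mentioned above):
pip install "scikit-learn<0.21"
Alternatively, SciPy provides the same routine as scipy.linalg.pinvh, so editing the failing import inside the library to use it is another workaround, at the cost of patching the installed package.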
I'm new to Google Cloud Platform and have uploaded some machine learning code to a Jupyter notebook in Datalab.
My issue is that although I installed Google Cloud Storage (using the command pip install --upgrade google-cloud-storage), I'm unable to import it.
The following is how I'm importing this package:
import numpy
import pandas as pd
from google.cloud import storage
But I'm getting the following error:
ImportError                               Traceback (most recent call last)
in ()
----> 1 from google.cloud import storage

ImportError: cannot import name storage
Note:
This is the content of my JSON config file: {"TokenSources":["env"]}
I tried export GOOGLE_APPLICATION_CREDENTIALS="/path/to/file.json", but the error persists.
I verified that this package is indeed installed in my environment by typing pip freeze in the command shell:
google-cloud==0.34.0
google-cloud-datastore==1.7.0
google-cloud-spanner==1.4.0
google-cloud-storage==1.10.0
What am I missing here?
Have you installed the google-cloud-storage package in your DataLab environment, or on your local machine? You'll need to run the following command within DataLab:
!pip install google-cloud-storage
See https://cloud.google.com/datalab/docs/how-to/adding-libraries for more details
Also, the google-cloud package is deprecated; you shouldn't need to install it, see https://pypi.org/project/google-cloud/.
So I got it working by importing storage as follows:
import google.datalab.storage as storage
To make your notebooks resilient to both Datalab and non-Datalab environments, you can use one of the following methods for handling your import statements:
try:
    from google.cloud import storage
except ImportError:
    from google.datalab import storage
or
import sys

if 'google.datalab' in sys.modules:
    from google.datalab import storage
else:
    from google.cloud import storage
Alternatively, if you would like to switch Datalab to using from google.cloud import storage, run the following in a cell:
!pip install google-cloud-storage
Followed by this cell to reset the IPython kernel
# Reset the IPython kernel
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
Note: You need to restart the Python kernel after installation, otherwise you will get a ContextualVersionConflict error from naming conflicts.
I am very frustrated by this error. What I did was take the code from the TensorFlow tutorial to import MNIST:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
However, when I run it, Python shows:
File "/Users/kevinling/Desktop/Machine Learning/tensorflow.py", line 2, in
from tensorflow.examples.tutorials.mnist import input_data
ImportError: No module named examples.tutorials.mnist
When I check the directory, the file is perfectly there.
(Screenshots of the directory listing and of input_data.py were attached here.)
Just rename your script from "tensorflow.py" to anything else and it will work. Because the script's own directory comes first on the module search path, the interpreter is importing your file tensorflow.py instead of the installed tensorflow package, so tensorflow.examples cannot be found.
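A quick diagnostic sketch (run it from the folder that used to contain tensorflow.py) shows which file Python is actually importing:
import tensorflow
print(tensorflow.__file__)
If this prints the path of your own script rather than a path inside site-packages, the name shadowing described above is the culprit.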
Did you already install tensorflow? If not, follow their install instructions or simply install using pip:
pip install tensorflow
Now, make sure you are NOT currently in a folder that contains a file or directory named tensorflow (such as your own tensorflow.py), and try running your script.
python your_script.py
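As a final sanity check (your_script.py stands in for whatever you renamed your file to), confirm that the installed package, not a local file, is being imported:
python -c "import tensorflow as tf; print(tf.__version__)"
python your_script.py
If the first command prints a version number, the installation is fine and any remaining failure points back to a local file shadowing the tensorflow name.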