I want to try out spaCy in a Jupyter Notebook on Binder. When I try to load a model, like:
nlp = en_core_web_sm.load()
I get the following error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-8a5aa70d40b9> in <module>
----> 1 import en_core_web_sm
2 nlp = en_core_web_sm.load()
ModuleNotFoundError: No module named 'en_core_web_sm'
I tried downloading the model via requirements.txt, but either that didn't work or the model was downloaded somewhere I don't have access to; I'm not sure which.
Here's the GitHub repo. Thank you.
It looks like you are using both environment.yml and requirements.txt. When your needs go beyond what a requirements.txt configuration file can express for BinderHub-served sessions, you should move the contents of requirements.txt into environment.yml, following this example repo. In your case, one of your current requirements.txt lines is also redundant with (and conflicts with) the spacy line in environment.yml.
spaCy models are not installed through requirements.txt. You have to install them into your environment by running
python -m spacy download en_core_web_sm
For more information, see https://spacy.io/usage/models.
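As an alternative (my addition, not from the spaCy docs linked above): model wheels can also be pinned in a pip requirements file by their direct release URL, which pip-based Binder builds will pick up. The version tag below is illustrative and must match your installed spaCy version:

```
# requirements.txt — pin the model by its release URL
# (the 2.2.5 tag is an assumption; align it with your spacy version)
spacy>=2.2.0,<2.3.0
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
```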
Related
I am trying to perform sentiment analysis on an article from Wikipedia. I need to use the newspaper Python package and am having difficulty incorporating it into my code. I installed pip from the terminal, activated the venv virtual environment, and installed nltk, textblob, and newspaper3k. However, when I run the code, it says no newspaper module exists, even though I have installed it from the terminal multiple times. I have updated pip and rerun my code, but I am receiving the same error. What could I do to resolve the issue?
Thank you!
I have tried researching the issue: I created a virtual environment and installed the packages inside it. I also updated pip, reran the code, and tried installing newspaper3k several more times with the same result. I am using Google Colaboratory to run the code.
Here is the error I am receiving.
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-71f9b211be5a> in <module>
1 from textblob import TextBlob
----> 2 from newspaper import Article
3
4 url = 'https://en.wikipedia.org/wiki/Mathematics'
5
ModuleNotFoundError: No module named 'newspaper'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
But in the terminal, newspaper3k is already installed:
Requirement already satisfied: six in /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages (from feedfinder2>=0.0.4->newspaper3k) (1.15.0)
Please let me know what I could do to resolve the error I receive in Google Colaboratory - I am unclear what is causing it, since I have installed the packages. I suspect the error might be caused by a mismatch between the directory I installed the modules in and the directory of the Google Colaboratory file -- I created 2 virtual environments and installed the modules in those. I am not sure which directory my Google Colaboratory file is in, and whether it matters if it is in the same directory as the modules. If this is the case, how would I go about moving the Colaboratory file to the same directory?
Thank you so much!
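Colab's own note hints at the likely cause: the notebook runs on a hosted VM, not your local machine, so packages installed into a local venv are invisible to it and must be installed into the Colab runtime itself (e.g. `!pip install newspaper3k` in a cell). A minimal sketch (my diagnostic, not from the post) to confirm which interpreter a notebook is actually using:

```python
import importlib.util
import sys

# Which Python is this notebook running? On Colab this is the hosted
# VM's interpreter, not anything from a local virtual environment.
print(sys.executable)

# Is the package visible to *this* interpreter?
print(importlib.util.find_spec("newspaper") is not None)
```

If the second line prints False, the package is simply not installed in the environment the notebook runs in, regardless of what any local terminal says.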
from gensim.models import Word2Vec
results in the following error
ImportError: cannot import name 'Word2Vec' from 'gensim.models' (unknown location)
from gensim.models.word2vec import Word2Vec
results in the same error
After deleting all conda installations of this package, running pip uninstall gensim, pip install gensim, and pip install --upgrade gensim, I can finally do
import gensim
but when I try to use gensim.models.Word2Vec it results in the error:
AttributeError: module 'gensim.models' has no attribute 'Word2Vec'
Edit: updated Numpy and Scipy as well
Note: I am using a Jupyter notebook that I run from my local machine. I have not had this problem in PyCharm, where I was running gensim from a conda env (but I'm working on a group project in the notebook, so it would be nice if I didn't constantly have to copy-paste between these two workspaces...)
Any help would be appreciated
First, try restarting your kernel after the installation and see if it works. Then check that you are in the right virtual environment. Then check your gensim version. In a notebook cell, run
import gensim
gensim.__version__  # should be 4.1.2; if it's not, update via pip.
You can also manually inspect if gensim.models.word2vec is actually there. In a notebook cell run
gensim.__path__
and go to that folder. There you can see whether there is indeed a folder named models containing a module named word2vec.py. If not, there is something wrong with your installation. Hope this helps a bit.
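The version and presence checks above can be wrapped into a small helper (a sketch of mine; `check_package` is a hypothetical name, written with gensim in mind but usable for any package):

```python
import importlib.util
from importlib import metadata

def check_package(name):
    """Return (installed, version) without importing the package."""
    if importlib.util.find_spec(name) is None:
        return (False, None)
    try:
        return (True, metadata.version(name))
    except metadata.PackageNotFoundError:
        # Importable but not pip-installed (e.g. a stdlib module).
        return (True, None)

# In the notebook you would call check_package("gensim"); shown here
# with a stdlib module so the sketch runs anywhere.
print(check_package("json"))  # → (True, None)
```

If this reports an older gensim than pip installed, the kernel is resolving a different environment than the one pip wrote to.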
I am trying to run the TensorFlow Object Detection API on Google Colab to train an SSD-Mobilenet model on a custom dataset, but I am facing this ModuleNotFoundError: it is not finding the module 'nets'. I have found people facing a similar problem, although they are not running the training in Google Colab. Here are some of the links:
ImportError: No module named 'nets'
ModuleNotFoundError: No module named 'nets' (TensorFlow)
Everywhere above, the suggestion is to add the slim and research folders to PYTHONPATH, which I have done. These are the paths I have already added:
! echo $PYTHONPATH
import os
os.environ['PYTHONPATH'] += ":/models"
os.environ['PYTHONPATH'] += ":/models/research"
os.environ['PYTHONPATH'] += ":/models/research/slim"
# I copied the `nets` folder inside models folder and
# additionally here adding this folder to python path such that it becomes available to `faster_rcnn_inception_resnet_v2_feature_extractor.py` file for importing.
os.environ['PYTHONPATH'] += ":/models/nets"
! echo $PYTHONPATH
%cd '/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD/models/research/'
!python setup.py build
!python setup.py install
%cd '/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD'
But still getting this error. Following is the error I am getting on Colab:
Traceback (most recent call last):
File "training/train.py", line 26, in <module>
from object_detection import model_lib
File "/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD/training/object_detection/model_lib.py", line 28, in <module>
from object_detection import exporter as exporter_lib
File "/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD/training/object_detection/exporter.py", line 23, in <module>
from object_detection.builders import model_builder
File "/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD/training/object_detection/builders/model_builder.py", line 59, in <module>
from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
File "/content/gdrive/My Drive/Computer_vision_with_deep_learning/TFOD/training/object_detection/models/faster_rcnn_inception_resnet_v2_feature_extractor.py", line 30, in <module>
from nets import inception_resnet_v2
ModuleNotFoundError: No module named 'nets'
As I have noticed, the error-producing line is from nets import inception_resnet_v2 in the file faster_rcnn_inception_resnet_v2_feature_extractor.py. Hence, I additionally copied the nets folder inside its scope so that it can find the module. But I still get the same error, even though there should now be no reason for the module not to be found. What else could have gone wrong here?
I had the same error, but I found a probable solution.
You need to run the code below from the slim directory.
%cd drive/My\ Drive/<path to slim>/slim
!python setup.py build
!python setup.py install
This runs setup.py for slim, which installs all the modules it provides.
You may also need to add the path to slim to your PYTHONPATH environment variable:
os.environ['PYTHONPATH'] = '/env/python/drive/My Drive/slim'
Or
! export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Here are links that were useful for me.
https://github.com/tensorflow/models/issues/1842
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/issues/150
Hope this will help.
Alright! I managed to solve it in Colab the following way. If you think that all the required packages are already installed and ready to use, start from step 4:
1. Clone the models repository:
!git clone --depth 1 https://github.com/tensorflow/models
2. Install the following packages in the same directory:
!apt-get install -qq protobuf-compiler python-pil python-lxml python-tk
!pip install -q Cython contextlib2 pillow lxml matplotlib
!pip install -q pycocotools
3. Compile the .proto files. First go to the research folder:
%cd /content/models/research
And then compile them:
!protoc object_detection/protos/*.proto --python_out=.
4. Add the paths to PYTHONPATH:
import os
os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/'
5. If you face problems regarding tf-slim, also install the following package:
!pip install git+https://github.com/google-research/tf-slim
Done!
NB:
I found this notebook helpful for solving the problem.
I am working with TensorFlow 1.x, which is basically TensorFlow 1.15.2 as provided by Colab.
I just cloned the repository from GitHub and reran the code cell where the ModuleNotFoundError occurred.
Reason: it looks for the file within the specific package that I cloned and, if it is not found, throws the error.
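One caveat worth knowing (my addition, not part of the answer above): changing os.environ['PYTHONPATH'] only affects subprocesses such as !python train.py, because PYTHONPATH is read at interpreter startup; imports inside the current kernel need sys.path extended as well. A minimal sketch, assuming the /content/models layout used above:

```python
import os
import sys

# Paths from the Colab steps above (assumed layout of the cloned repo).
paths = ["/content/models/research", "/content/models/research/slim"]

for p in paths:
    # PYTHONPATH helps subprocesses like `!python train.py`...
    os.environ["PYTHONPATH"] = os.environ.get("PYTHONPATH", "") + ":" + p
    # ...while sys.path affects imports in the current kernel immediately.
    if p not in sys.path:
        sys.path.append(p)

print(all(p in sys.path for p in paths))  # → True
```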
In order to feed text strings stored in a .csv file into Google's named entity extractor, I would like to use the Google NLP API and pandas together.
I am currently using conda to manage my python environments.
These are the steps I have taken:
Part 1: Install google-cloud-language in conda environment
conda create -n myenv
conda activate myenv
pip install google-cloud-language
Run example program:
python language_entities_text.py
Output:
Representative name for the entity: California
Entity type: LOCATION
Salience score: 1.0
wikipedia_url: https://en.wikipedia.org/wiki/California
mid: /m/01n7q
Mention text: California
Mention type: PROPER
Mention text: state
Mention type: COMMON
Language of the text: en
So far so good: Google's example program works.
Part 2: Install pandas and run the same program
conda install pandas
or
pip install pandas
Run the same program:
python language_entities_text.py
After I install pandas using conda or pip and run the same program, I get an error.
Error message:
Traceback (most recent call last):
File "language_entities_text.py", line 28, in <module>
from google.cloud import language_v1
ModuleNotFoundError: No module named 'google'
What is going on? Is there a conflict between google-cloud-language and pandas? How can I get both libraries to work together? Ideally, I would like to keep using conda because that is what I am familiar with, but is conda the problem?
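A common cause of this symptom (a diagnostic sketch of mine, not from the post): pip and conda can end up targeting different environments, so the pandas install may have been served by a different interpreter than the one holding google-cloud-language. A quick way to check which pip the current interpreter resolves to:

```python
import subprocess
import sys

# The interpreter actually running this script/notebook.
print(sys.executable)

# The pip bound to that same interpreter; if a bare `pip install ...`
# on the command line reports a different location, the environments
# are mixed up.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

Running installs as `python -m pip install ...` inside the activated conda env keeps pip and the interpreter in sync.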
I installed openface using the link
But when I import openface in Python, it gives an error:
./util/align-dlib.py ./home/admin1/Dhoni/ align outerEyesAndNose ./aligned-images/ --size 96
Traceback (most recent call last):
File "./util/align-dlib.py", line 24, in <module>
import openface
ImportError: No module named openface
Please help me!
If you are on Linux,
Clone the project: https://github.com/cmusatyalab/openface
cd openface
python setup.py install
Then create an empty file named __init__.py inside the openface/ directory
Try running the demo command : ./demos/compare.py images/examples/{lennon*,clapton*}
Note: If the demo fails due to a dependency issue, try installing those dependencies. In my case, I had to install the following as well:
luarocks install torch
luarocks install nn
luarocks install dpnn
Do this in your terminal:
cd /home/your_username/openface
python setup.py install
This should help you.