Python: select one of multiple installed module versions

On my system, I have several modules installed multiple times. To give an example, numpy 1.6.1 is installed in the standard path at /usr/lib/python2.7/dist-packages, and I have an updated version of numpy 1.8.0 installed at /local/python/lib/python2.7/site-packages/.
The reason I cannot simply remove the old version is that I do not have permissions to change anything on my work computer. I however need to use the new numpy version.
I have added /local/python/lib/python2.7/site-packages/ to my PYTHONPATH. Unfortunately, this does not help, since /usr/lib/python2.7/dist-packages is inserted into the path first and therefore, numpy 1.6.1 will be loaded. Here's an example:
>>> import os
>>> print os.environ['PYTHONPATH']
/local/python/lib/python2.7/site-packages
>>> import pprint
>>> import sys
>>> pprint.pprint(sys.path)
['',
'/local/python/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg',
'/local/python/lib/python2.7/site-packages/pyparsing-2.0.1-py2.7.egg',
'~/.local/lib/python2.7/site-packages/setuptools-3.4.4-py2.7.egg',
'~/.local/lib/python2.7/site-packages/mpldatacursor-0.5_dev-py2.7.egg',
'/usr/lib/python2.7/dist-packages',
'/local/python/lib/python2.7/site-packages',
'/usr/lib/python2.7',
...,
'~/.local/lib/python2.7/dist-packages',
...]
So, it seems that the import order is:
1. current directory
2. eggs from PYTHONPATH
3. eggs from the local module path (~/.local/lib/python2.7/site-packages/*.egg)
4. system-wide module path (/usr/lib/python2.7/dist-packages/)
5. directories from PYTHONPATH
6. intermediate paths (omitted for brevity)
7. userbase directory (~/.local/lib/python2.7/site-packages/)
My problem is that I would need to put item 5. before items 3. and 4. for my code to work properly. Right now, if I import a module that was compiled against numpy 1.8.0 from the /local/* directory, and this module imports numpy, it will still take numpy from the /usr/* directory and fail.
I have circumvented this problem by placing something like this in my scripts:
import sys
sys.path.insert(0, '/local/python/lib/python2.7/site-packages/')
Thereby I can force Python to use the right import order, but of course this is not a solution, since I would have to do this in every single script.
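A quick way to confirm which copy of numpy actually got imported is to check its __file__ and version attributes and to see where the relevant directories sit on sys.path; this is just a diagnostic sketch and assumes numpy is importable at all:
import sys
import numpy

# Where the imported numpy actually lives, and its version
print(numpy.__file__)
print(numpy.version.version)

# Priority of the relevant directories on sys.path (lower index wins)
for i, p in enumerate(sys.path):
    if 'site-packages' in p or 'dist-packages' in p:
        print("{0}: {1}".format(i, p))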

Besides the suggestions already given in the comment section, have you thought about using virtualenv? This would give you fine-grained control over every module that you want to use. If you're not familiar with virtualenv you'll want to read the documentation to get a feel for how it works.
Purely for example, you could install and set it up, like so (virtualenv-1.11.6 looks to be the most recent version currently):
$ curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.11.6.tar.gz
$ tar xvfz virtualenv-1.11.6.tar.gz
$ cd virtualenv-1.11.6
$ python virtualenv.py ../numpyvenv
$ cd ../numpyvenv
$ source ./bin/activate
(numpyvenv) $ pip install numpy
# downloads, compiles, and installs numpy into the virtual environment
(numpyvenv) $ python
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.version.version
'1.9.1'
>>> quit()
(numpyvenv) $ deactivate
$ # the virtual environment has been deactivated
Above, we created a virtual environment named "numpyvenv", activated the environment, installed numpy, printed the numpy version (to show it works), quit python, and deactivated the environment. Next time you activate the environment, numpy will be there along with whatever other modules you install. You may run into hiccups while trying this, but it should get you started.
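If you are ever unsure whether a given Python process is really running inside the virtual environment, a small check along these lines can help; sys.real_prefix is set by the classic virtualenv tool, while environments created with the built-in venv module make sys.prefix differ from sys.base_prefix (a sketch, not an exhaustive test):
import sys

# True inside a classic virtualenv (real_prefix) or a venv (prefix != base_prefix)
in_virtualenv = hasattr(sys, 'real_prefix') or (
    getattr(sys, 'base_prefix', sys.prefix) != sys.prefix
)
print(sys.prefix)
print(in_virtualenv)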

I had this problem on a Mac I was using without administrator access. My solution was the following:
Find the directory of the numpy version you want to use. For me this was /Library/Python/2.7/site-packages
Create a file ~/.startup.py and point to it with PYTHONSTARTUP=~/.startup.py in your .bashrc file
In .startup.py:
import sys
sys.path.insert(0, '/Library/Python/2.7/site-packages/')  # search this directory BEFORE the standard locations
import numpy
print("Importing numpy version " + numpy.__version__)  # a reminder that we have changed the numpy version
This seems to work fine for me. I hope it helps.

While a virtualenv seems the way to go, as suggested in Force python to use an older version of module (than what I have installed now), you can also use a modification of:
import pkg_resources
pkg_resources.require("Twisted==8.2.0")
import twisted
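Adapted to the numpy case from the question, that would look roughly like the following; this assumes the newer numpy is visible to pkg_resources at all (for example via PYTHONPATH), and the exact version string is whatever your local install reports:
import pkg_resources

# Ask setuptools' runtime machinery to put this numpy version ahead of older copies
pkg_resources.require("numpy==1.8.0")

import numpy
print(numpy.version.version)  # should now report 1.8.0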

I had the same issue on Debian Wheezy after installing the latest numpy module with easy_install.
The new numpy module was installed in /usr/local/lib/python2.7/dist-packages/numpy while the old module was in /usr/lib/pymodules/python2.7/numpy. When I tried to import the numpy module, the older version was imported.
And as you say, adding the new module's path to PYTHONPATH does not help, because it is added to sys.path below the older entry.
The issue seems to be in easy_install, because it creates a file easy-install.pth that puts /usr/lib/pymodules/python2.7 ahead of any local module.
To fix the issue I just edited the file /usr/local/lib/python2.7/dist-packages/easy-install.pth and commented out the line /usr/lib/pymodules/python2.7, so that directory is now placed lower in sys.path.
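If you suspect a .pth file is responsible for the ordering, the standard site module can at least show you which directories Python treats as site-packages and whether the per-user site directory is enabled; this is only a diagnostic sketch (site.getsitepackages() is missing in some older virtualenv setups, hence the guard):
import site
import sys

# Every sys.path entry with its priority index (lower index wins)
for i, p in enumerate(sys.path):
    print("{0}: {1}".format(i, p))

# Directories treated as site-packages, plus the per-user site directory
if hasattr(site, 'getsitepackages'):
    print(site.getsitepackages())
print(site.getusersitepackages())
print(site.ENABLE_USER_SITE)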

No module named 'stix2' [duplicate]

After installing mechanize, I don't seem to be able to import it.
I have tried installing from pip, easy_install, and via python setup.py install from this repo: https://github.com/abielr/mechanize. All of this to no avail, as each time I enter my Python interactive I get:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named mechanize
>>>
The installations I ran previously reported that they had completed successfully, so I expect the import to work. What could be causing this error?
In my case, it was a permission problem. The package was somehow installed with read/write permission for root only, so other users simply could not read it!
I had the same problem: script with import colorama was throwing an ImportError, but sudo pip install colorama was telling me "package already installed".
My fix: run pip without sudo: pip install colorama. Then pip agreed it needed to be installed, installed it, and my script ran. Or even better, use python -m pip install <package>. The benefit of this is, since you are executing the specific version of python that you want the package in, pip will unequivocally install the package into the "right" python. Again, don't use sudo in this case... then you get the package in the right place, but possibly with (unwanted) root permissions.
My environment is Ubuntu 14.04 32-bit; I think I saw this before and after I activated my virtualenv.
I was able to correct this issue with a combined approach. First, I followed Chris' advice, opened a command line and typed 'pip show packagename'
This provided the location of the installed package.
Next, I opened python and typed 'import sys', then 'sys.path' to show where my python searches for any packages I import. Alas, the location shown in the first step was NOT in the list.
Final step: I typed sys.path.append('package_location_seen_in_step_1'). You can optionally repeat step two to confirm the location is now in the list.
Test step, try to import the package again... it works.
The downside? It is temporary, and you need to add it to the list each time.
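If you want to avoid retyping the append in every session, the same workflow can be scripted; this is only a sketch, and package_location below is a placeholder for the Location: value that pip show reported:
import sys

# Hypothetical value copied from the "Location:" field of `pip show <package>`
package_location = '/path/reported/by/pip/show'

if package_location not in sys.path:
    sys.path.append(package_location)

# the import of the package in question should succeed now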
It's a Python path problem.
In my case, I have Python installed in:
/Library/Frameworks/Python.framework/Versions/2.6/bin/python,
and there is no site-packages directory within that python2.6 tree.
The package (SOAPpy) I installed with pip is located at
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/
Since that site-packages directory is not on the Python path, all I did was add it to PYTHONPATH permanently:
Open up Terminal
Type open .bash_profile
In the text file that pops up, add this line at the end:
export PYTHONPATH=$PYTHONPATH:/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/
Save the file, restart the Terminal, and you're done
The Python import mechanism really does work, so one of the following applies (a quick diagnostic sketch follows this list):
Your PYTHONPATH is wrong,
Your library is not installed where you think it is
You have another library with the same name masking this one
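A quick way to tell which of those cases you are in is to ask the interpreter itself where it would find the module; this sketch uses importlib.util.find_spec, which exists on Python 3.4+ (mechanize is just the module name from the question):
import sys
import importlib.util

print(sys.executable)   # which interpreter is actually running
print(sys.path)         # where it will look for modules

spec = importlib.util.find_spec("mechanize")
if spec is None:
    print("mechanize is not importable from this interpreter")
else:
    print("mechanize would be loaded from: " + str(spec.origin))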
I have been banging my head against my monitor on this until a young-hip intern told me the secret is to "python setup.py install" inside the module directory.
For some reason, running the setup from there makes it just work.
To be clear, if your module's name is "foo":
[burnc7 (2016-06-21 15:28:49) git]# ls -l
total 1
drwxr-xr-x 7 root root 118 Jun 21 15:22 foo
[burnc7 (2016-06-21 15:28:51) git]# cd foo
[burnc7 (2016-06-21 15:28:53) foo]# ls -l
total 2
drwxr-xr-x 2 root root 93 Jun 21 15:23 foo
-rw-r--r-- 1 root root 416 May 31 12:26 setup.py
[burnc7 (2016-06-21 15:28:54) foo]# python setup.py install
<--snip-->
If you try to run setup.py from any other directory by calling out its path, you end up with a borked install.
DOES NOT WORK:
python /root/foo/setup.py install
DOES WORK:
cd /root/foo
python setup.py install
I encountered this while trying to use keyring which I installed via sudo pip install keyring. As mentioned in the other answers, it's a permissions issue in my case.
What worked for me:
Uninstalled keyring:
sudo pip uninstall keyring
I used sudo's -H option and reinstalled keyring:
sudo -H pip install keyring
In PyCharm, I fixed this issue by changing the project interpreter path.
File -> Settings -> Project -> Project Interpreter
File -> Invalidate Caches… may be required afterwards.
I couldn't get my PYTHONPATH to work properly. I realized adding export fixed the issue:
(did work)
export PYTHONPATH=$PYTHONPATH:~/test/site-packages
vs.
(did not work)
PYTHONPATH=$PYTHONPATH:~/test/site-packages
This problem can also occur with a relocated virtual environment (venv).
I had a project with a venv set up inside the root directory. Later I created a new user and decided to move the project to this user. Instead of moving only the source files and installing the dependencies freshly, I moved the entire project along with the venv folder to the new user.
After that, the dependencies that I installed were getting added to the global site-packages folder instead of the one inside the venv, so the code running inside this env was not able to access those dependencies.
To solve this problem, just remove the venv folder and recreate it again, like so:
$ deactivate
$ rm -rf venv
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
Something that worked for me was:
python -m pip install --user {package name}
The command does not require sudo. This was tested on OSX Mojave.
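After a --user install it can also be worth confirming that the per-user site-packages directory is actually on the interpreter's path; a small check, not specific to any one package:
import site
import sys

user_site = site.getusersitepackages()
print(user_site)
print(user_site in sys.path)    # False would explain a failing import
print(site.ENABLE_USER_SITE)    # the user site can be disabled, e.g. inside venvs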
In my case I had run pip install Django==1.11 and it would not import from the python interpreter.
Browsing through pip's commands I found pip show which looked like this:
> pip show Django
Name: Django
Version: 1.11
...
Location: /usr/lib/python3.4/site-packages
...
Notice the location says '3.4'. I found that the python command was linked to python2.7:
/usr/bin> ls -l python
lrwxrwxrwx 1 root root 9 Mar 14 15:48 python -> python2.7
Right next to that I found a link called python3 so I used that. You could also change the link to python3.4. That would fix it, too.
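To see this kind of mismatch without leaving Python, you can compare the interpreter you are running with the site-packages directory it installs into; a sketch using the standard sysconfig module:
import sys
import sysconfig

print(sys.version)                         # e.g. 2.7.x vs 3.4.x
print(sys.executable)                      # the binary actually running
print(sysconfig.get_paths()["purelib"])    # its site-packages directory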
In my case it was a problem with a missing __init__.py file in the module that I wanted to import in a Python 2.7 environment.
Python 3.3+ has implicit namespace packages, which allow creating packages without an __init__.py file.
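On Python 2.7 the difference is easy to reproduce; the sketch below builds a throwaway package with an __init__.py and imports it (all of the names here are made up for the demo):
import os
import sys
import tempfile

# Build a minimal package: mypackage/__init__.py plus mypackage/helpers.py
pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "mypackage")
os.mkdir(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()   # required on Python 2.7
with open(os.path.join(pkg_dir, "helpers.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, pkg_root)
from mypackage.helpers import VALUE
print(VALUE)   # 42; without the __init__.py this import fails on Python 2.7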
Had this problem too: the package was installed for Python 3.8.0, but VS Code was running my script with an older version (3.4).
fix in terminal:
py .py
Make sure you're installing the package on the right Python Version
I had colorama installed via pip and I was getting "ImportError: No module named colorama"
So I searched with "find", found the absolute path and added it in the script like this:
import sys
sys.path.append("/usr/local/lib/python3.8/dist-packages/")
import colorama
And it worked.
I had just the same problem, and updating setuptools helped:
python3 -m pip install --upgrade pip setuptools wheel
After that, reinstall the package, and it should work fine :)
The thing is, the package is built incorrectly if setuptools is old.
If the other answers mentioned do not work for you, try deleting your pip cache and reinstalling the package. My machine runs Ubuntu 14.04 and the cache was located under ~/.cache/pip. Deleting this folder did the trick for me.
Also, make sure that you do not confuse pip3 with pip. What I found was that a package installed with pip was not available to python3, and vice versa.
I had a similar problem (on Windows) and the root cause in my case was ANTIVIRUS software! It has an "Auto-Containment" feature that wraps the running process in some kind of virtual machine.
Symptoms are: pip install somemodule works fine in one cmd-line window and import somemodule fails when executed from another process with the error
ModuleNotFoundError: No module named 'somemodule'
In my case (an Ubuntu 20.04 VM on a Windows 10 host), I had a messy situation with several versions of Python installed and shared libraries installed with pip at various points in the file system. I'm referring to the 3.8.10 Python version.
After many tests, I found a suggestion while searching with Google (sorry, I no longer have the link). This is what I did to resolve the problem:
From a shell session on the Ubuntu 20.04 VM (inside my home directory, in my case /home/hduser), I started a Jupyter Notebook session with the command "jupyter notebook".
Then, with Jupyter running, I opened an .ipynb file to issue commands.
First, pip list gave me the list of installed packages, and sympy was not present (although I had installed it with the "sudo pip install sympy" command).
Finally, with the command !pip3 install sympy (inside the Jupyter Notebook session) I solved the problem.
Now, with !pip list, the package sympy is present and working.
In my case, I assumed a package was installed because it showed up in the output of pip freeze. However, just the site-packages/*.dist-info folder is enough for pip to list it as installed despite missing the actual package contents (perhaps from an accidental deletion). This happens even when all the path settings are correct, and if you try pip install <pkg> it will say "requirement already satisfied".
The solution is to manually remove the dist-info folder so that pip realizes the package contents are missing. Then, doing a fresh install should re-populate anything that was accidentally removed.
When you install via easy_install or pip, is it completing successfully? What is the full output? Which python installation are you using? You may need to use sudo before your installation command, if you are installing modules to a system directory (if you are using the system python installation, perhaps). There's not a lot of useful information in your question to go off of, but some tools that will probably help include:
echo $PYTHONPATH and/or echo $PATH: when importing modules, Python searches the directories listed in PYTHONPATH (they become part of sys.path), while PATH only determines which python executable gets run. Import problems are often due to the right directory being absent from these lists
which python, which pip, or which easy_install: these will tell you the location of each executable. It may help to know.
Use virtualenv, like @JesseBriggs suggests. It works very well with pip to help you isolate and manage the modules and environment for separate Python projects.
I had this exact problem, but none of the answers above worked. It drove me crazy until I noticed that sys.path was different after I had imported from the parent project. It turned out that I had used importlib to write a little function in order to import a file not in the project hierarchy. Bad idea: I forgot that I had done this. Even worse, the import process mucked with the sys.path--and left it that way. Very bad idea.
The solution was to stop that, and simply put the file I needed to import into the project. Another approach would have been to put the file into its own project, as it needs to be rebuilt from time to time, and the rebuild may or may not coincide with the rebuild of the main project.
I had this problem with 2.7 and 3.5 installed on my system trying to test a telegram bot with Python-Telegram-Bot.
I couldn't get it to work after installing with pip and pip3, with sudo or without. I always got:
Traceback (most recent call last):
File "telegram.py", line 2, in <module>
from telegram.ext import Updater
File "$USER/telegram.py", line 2, in <module>
from telegram.ext import Updater
ImportError: No module named 'telegram.ext'; 'telegram' is not a package
Reading the error message correctly tells me that python is looking in the current directory for a telegram.py. And right, I had a script lying there called telegram.py and this was loaded by python when I called import.
Conclusion: make sure you don't have a file named <package>.py in your current working directory when trying to import that package. (And read the error message thoroughly.)
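One way to spot this kind of shadowing is to import the name and look at where it was loaded from; if __file__ points into your working directory rather than into site-packages, a local file is masking the real package (telegram is just the example from this answer, and this assumes the import succeeds at all):
import os
import telegram   # or whatever package seems to be "missing" or broken

# A path inside the current directory means a local file shadows the installed package
print(telegram.__file__)
print(os.getcwd())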
I had a similar problem using Django. In my case, I could import the module from the Django shell, but not from a .py which imported the module.
The problem was that I was running the Django server (therefore, executing the .py) from a different virtualenv from which the module had been installed.
Instead, the shell instance was being run in the correct virtualenv. Hence, why it worked.
This works!
This often happens when a module is installed for an older version of Python or into another directory. No worries, the solution is simple: import the module from the directory in which it is installed.
You can do this by first importing the Python sys module and then appending the path in which the module is installed:
import sys
sys.path.append("directory in which module is installed")
import <module_name>
Most of the possible cases have been already covered in solutions, just sharing my case, it happened to me that I installed a package in one environment (e.g. X) and I was importing the package in another environment (e.g. Y). So, always make sure that you're importing the package from the environment in which you installed the package.
For me it was about ensuring the version of the module aligned with the version of Python I was using. I built the package on a box with Python 3.6 and then injected it into a Docker image that happened to have 3.7 installed, and was banging my head when Python told me the module wasn't installed. The ABI tag in the compiled extension's filename gives it away:
36m for Python 3.6: bsonnumpy.cpython-36m-x86_64-linux-gnu.so
37m for Python 3.7: bsonnumpy.cpython-37m-x86_64-linux-gnu.so
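You can also ask each interpreter which extension-module suffixes it is willing to load, which makes it obvious that a cpython-36m .so will never be picked up by 3.7; a small Python 3 sketch:
import importlib.machinery
import sysconfig

# Suffixes this interpreter accepts, e.g. ['.cpython-37m-x86_64-linux-gnu.so', ...]
print(importlib.machinery.EXTENSION_SUFFIXES)
print(sysconfig.get_config_var("EXT_SUFFIX"))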
I know this is a super old post but for me, I had an issue with a 32 bit python and 64 bit python installed. Once I uninstalled the 32 bit python, everything worked as it should.
I solved an issue where the same libraries were working fine in one project (A), but importing those same libraries in another project (B) caused errors. I am using PyCharm as my IDE on Windows.
So, after trying many potential solutions and failing to solve the issue, I did these two things (deleted the "venv" folder and reconfigured the interpreter):
1. In project B, there was a folder named "venv", located in External Libraries. I deleted that folder.
2. Deleting the "venv" folder causes an error in the Python interpreter configuration, and a message is shown at the top of the screen saying "Invalid python interpreter selected for the project" with a "configure python interpreter" link. Select that link and it opens a new window. There, in the "Project Interpreter" drop-down list, there is a red line showing the previous, now invalid interpreter. Open this list and select the Python interpreter (in my case, Python 3.7). Press "Apply" and "OK" at the bottom and you are good to go.
Note: The issue was probably that the virtual environment of my project B was not recognizing the already installed and working libraries.

Different numpy version in Anaconda and numpy.__version__ in IPython Shell

I used How do I check which version of NumPy I'm using? to learn how to get the version of numpy. However, when I run conda list | grep numpy, I get:
numpy 1.15.2 py36ha559c80_0
numpy-base 1.15.2 py36h8128ebf_0
numpydoc 0.8.0 py36_0
However, when I run version from IPython shell, I get:
import numpy as np
np.__version__
Out: '1.13.3'
np.version.version
Out: '1.13.3'
np.version.full_version
Out: '1.13.3'
Why are the two versions different? Which one should I trust? Thanks for any help.
Please note that I am not using venv (i.e. virtual environment). I am directly accessing Anaconda's packages. So, there is no issue about versioning.
Here's what PyCharm is showing me (screenshot of the interpreter's package list, not reproduced here).
As per Conda's version information on package doesn't correspond to __version__, I also checked numpy.__file__ and sys.path (output attached as a screenshot, not reproduced here); note that I have hidden my name for privacy reasons.
It seems that, besides your Python 3 environment in Anaconda, you have another Python with IPython and numpy installed.
It looks like PyCharm and Anaconda correctly see the same numpy version, while IPython, which I assume you did not start from within your Anaconda environment, sees another Python installation with the older numpy. In fact, your output shows that there is another python3.6 in C:\Users\... which does not belong to Anaconda.
To make numpy 1.15 available in IPython you can either start IPython from within your anaconda environment by typing in the terminal (easier solution)
C:\>activate <your_anaconda_environment_name>
(<your_anaconda_environment_name>) C:\>ipython
or you can make your local IPython load the modules from the Anaconda environment by having a look at this answer. This is not a recommended option in this case, given the resulting cross-linking of two Python installations.
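Whichever route you take, you can verify from inside IPython which interpreter and which numpy you are actually getting; a quick check, nothing Anaconda-specific:
import sys
import numpy

print(sys.executable)     # the Python that IPython is running on
print(numpy.__version__)
print(numpy.__file__)     # which installation numpy was loaded from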
The issue is that PyCharm reads an older Python version from a location under AppData\Roaming... What I did was add the following code to my start-up script:
print("Correcting sys paths now...")
paths = [
'C:\\Anaconda3\\python36.zip',
'C:\\Anaconda3\\DLLs',
'C:\\Anaconda3\\lib',
'C:\\Anaconda3',
'C:\\Anaconda3\\lib\\site-packages',
'C:\\Anaconda3\\lib\\site-packages\\win32',
'C:\\Anaconda3\\lib\\site-packages\\win32\\lib',
'C:\\Anaconda3\\lib\\site-packages\\Pythonwin',
'C:\\Anaconda3\\lib\\site-packages\\IPython\\extensions',
]
import sys
for path in reversed(paths):
    sys.path.insert(0, path)
print("Completed correcting sys paths now...")
del path
del paths
The above code forces Python to load modules from Anaconda first. However, if you are using a virtual environment, you would need to point to that environment instead.
If you want to know where Python is installed, you can run:
import os
import sys
os.path.dirname(sys.executable)
The above answer is inspired by conda python isn't using the numpy version I try install if I also specify that it should use python 2; that question doesn't provide the solution, so I have posted one above.

Python: No Module named xxx

I'm getting nowhere with the following error on my Raspberry Pi:
My own Python script calls a function from another module named BlackBean.py, which in turn imports other modules called "netaddr" and "configparser". The problem is that I just can't seem to get past the import error, which tells me "No module named netaddr", or, if I comment out that import, it errors with "No module named configparser". So I know it's a path issue, but I just can't seem to get it fixed!
The BlackBean.py script starts like this:
import broadlink
import ConfigParser
import sys, getopt
import time, binascii
import netaddr
import BlackBeanSettings
import re
from os import path
from Crypto.Cipher import AES
SettingsFile = ConfigParser.ConfigParser()
SettingsFile.optionxform = str
SettingsFile.read(BlackBeanSettings.BlackBeanControlSettings)
def execute_command(etc.........
The BlackBean.py file is in my project SkyHD folder at /home/pi/SkyHD.
The "netaddr" and "configparser" files & folders were installed by pip in /home/pi/.local/lib/python2.7(and python3.5)/site-package folders.
sys.path has the above folders in its list and Ive also edited .bashrc and added PYTHONPATH=${PYTHONPATH}:/home/pi/.local/lib/python2.7/site-package:/home/pi/.local/lib/python3.5/site-package:/home/pi/SkyHD:../
but none of this works. I guess it must be something basic but I just cant work it out! help!
Also, some more info, when I first install all the files and run my program everything works fine and it finds the files ok with no problems, its only when I reboot it fails to find the files.
It's fixed.
Python looks for imported modules in several places, the first being the folder the Python script was launched from; so for me the obvious answer was to install the modules I need directly into my own project folder (/home/pi/myproject). This works just fine, every time, even after a reboot, which was my main problem before. No need to create or alter PYTHONPATH, no need to mess around with entries in .bashrc or try to change the Python path entries. Here are the steps:
Upgrade PIP to version 9.0.3 (not ver 10) with
pip install --upgrade pip==9.0.3
then install the required modules with the following
pip install --target=/home/pi/your_project_folder module_name
so for me it was... pip install --target=/home/pi/SkyHD netaddr
I'm sure this is not best practice, but my Raspberry Pi only has this one project to run, and having modules installed into the project's folder just isn't an issue.
Hope this helps some others with the same problem.
You've provided insufficient information. Specifically, details about the python command being used to run your script such as its version (python -V) and its module search path if you do
env -u PYTHONPATH python -c 'import sys; print(sys.path);'
Similarly you can easily simplify the problem. What happens if you do python -m netaddr?
Obviously in the above commands substitute the actual python command being used to run your script.
And, as @BoarGules mentioned in his comments to your question, you should never, ever add directories to PYTHONPATH for different Python versions unless you know that the modules in those directories have been written to work with both python2 and python3.

Cannot import name random/multiarray in conda environment

I'm trying to run tensorflow in a conda environment. I started off by creating a python 2.7 environment with conda create --name py27 python=2.7 and then activated it. Within the environment, I ran conda install -c https://conda.anaconda.org/jjhelmus tensorflow, which has tensorflow and numpy in the package, so hypothetically there shouldn't be any issues running numpy.
When I open up the python console within the environment, however, I'm continually getting ImportError: No module named multiarray and ImportError: cannot import name Random (I can import random with no issues, but then I get the multiarray issue) no matter how many times I uninstall/reinstall numpy/matplotlib (at one point I even uninstalled/reinstalled python) and no matter what versions of these I try to use, I keep on getting the same issue. What should I do?
There is an answer here.
In short: that issue has something to do with the version of numpy, which gets upgraded by another package for whatever reason. Try to specify the version: conda create -n NAME numpy=1.9.3 other_package.
If that doesn't work, check whether you have files in your working directory whose names match the names of some packages. For example, I had a similar problem after renaming numpy.py.txt (which is a sort of handmade cheat sheet) to just numpy.py and trying to import numpy within the Python shell while in that directory.

After installing NumPy in Python - I still get error: "no module named numpy"

I need to install and use the Python NumPy module (and then later the Pandas module) in order to process heavy data in Python.
I downloaded and installed ENTHOUGHT, but it wasn't what I wanted: all that extra clutter of extra modules defeats the purpose of importing Python modules only as needed, and the uninstall did not work properly (i.e. it left garbage folders and ENTHOUGHT remnants all over my computer).
I have tried installing NumPy via EASY_INSTALL and PIP (two package managers if I understand correctly) - but with no success. Every time I try to run my program, I get the error: "no module named numpy".
I have searched the questions here and have tried to alter my ENVIRONMENT VARIABLE as per the following video, but again, no success:
https://www.youtube.com/watch?v=ddpYVA-7wq4
C:\Python34
...still the same error!
I downloaded Anaconda (with all its extra clutter) and installed it, but I don't like the development environment - I want my vanilla Python IDLE to run vanilla NumPy with no extra clutter modules... When I tried to install NumPy again, I received a message that it was already installed, with a path to:
C:\users\yoni\anaconda3\lib\site-packages
....so I ALSO added this PYTHONPATH to the ENVIRONMENT VARIABLE in hopes that it would now recognize where the NumPy installation was (currently with Anaconda3 - but I hoped to be able to import NumPy to my vanilla Python IDLE):
C:\Python34;C:\users\yoni\anaconda3\lib\site-packages
I don't find a clear answer - I see others have the same problem, and nothing is working for me. How can I finish this installation of NumPy so that it works for me when I do a simple import of module?
This is a temporary solution until you can resolve your path issue.
It will be environment specific.
import sys
sys.path.append(r'C:\users\yoni\anaconda3\lib\site-packages')  # raw string so the backslashes are not treated as escape sequences
import PackageName
