I would like to create a copy of a package from GitHub that I can edit and use in Spyder. I currently use the Anaconda package manager for my Python packages.
Here are the steps that I have taken so far:
fork repo
clone repo onto my local directory
The package is called 'nilearn'. I currently use Anaconda and have installed nilearn via conda install nilearn.
I would like to be able to use my own copy of nilearn inside Spyder alongside the installed version. I have tried renaming the repo to nilearn_copy, but this doesn't appear to work.
If this is not possible or not the ideal solution, then please suggest an alternative; I'm new to GitHub and Python.
Thanks a lot,
Joe
You need to open an IPython console, then run this command:
In [1]: %cd /path/to/nilearn/parent
By this I mean that you need to go, with the %cd magic, to the parent directory where nilearn is placed. After that you can run
In [2]: import nilearn
and that should import your local copy of nilearn.
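To check which copy was imported, you can print the module's path; it should point into your clone rather than Anaconda's site-packages:
print(nilearn.__file__)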
Note: If you are planning to make changes to nilearn and want your changes to be picked up in the same console, you need to run these commands before the previous ones:
In [3]: %load_ext autoreload
In [4]: %autoreload 2
I have absolutely no idea how to manage the installation of the epftoolbox in Python. I have tried the steps from https://epftoolbox.readthedocs.io/en/latest/modules/started.html in various ways and directions, but it still doesn't work and I get the following error when running the file:
ModuleNotFoundError: No module named 'epftoolbox.evaluation'
Can anyone suggest a step-by-step video or something like that, where the installation is shown for 'dummies'?
Any help would be very much appreciated!
PS: I'm working with PyCharm
Since you're working with PyCharm, first create a new project with a virtual environment, then open the terminal and type the following clone statement:
git clone https://github.com/jeslago/epftoolbox.git
then move to the cloned directory by typing this command:
cd epftoolbox
once you're inside this directory, run your pip install command:
pip install .
you should be able to work with the library there, since you already created the virtual environment.
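To confirm the install landed in the virtual environment, a quick check from the project's Python console (the submodule name comes from the error message above):
import epftoolbox.evaluation  # the import that previously failed
print(epftoolbox.evaluation.__file__)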
Hope it solved your problem.
I've been trying to add one of the folders where I keep my Python modules to the PYTHONPATH and, so far, I haven't been able to do it through AWS's terminal. The folder with the .py files is inside the main SageMaker folder, so I'm trying (I've also tried it with SageMaker/zds, which is the folder that holds the modules):
export PYTHONPATH="${PYTHONPATH}:SageMaker/"
After printing the directories of the PYTHONPATH through the terminal with python -c "import sys; print('\n'.join(sys.path))", I get that indeed my new path is included in the PYTHONPATH. However, when I try to import any module from any notebook (with from zds.module import * or from module import *), I get the error that the module doesn't exist. If I print the paths from the PYTHONPATH directly inside the notebook I no longer see the previously added path in the list.
Am I missing something basic here, or is it not possible to add paths to the PYTHONPATH inside AWS SageMaker? For now, I'm having to use
import sys, os
sys.path.insert(0, os.path.abspath('..'))
inside basically every notebook as a fix to the problem.
Adding this to the lifecycle script worked for me:
sudo -i <<'EOF'
touch /etc/profile.d/jupyter-env.sh
# single-quote the line so $PYTHONPATH is expanded at login time, not when this script runs
echo 'export PYTHONPATH="$PYTHONPATH:/home/ec2-user/SageMaker/repo-name/src"' >> /etc/profile.d/jupyter-env.sh
EOF
Thanks for using Amazon SageMaker!
Copying from https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html:
Amazon SageMaker notebook instances use conda environments to implement different kernels for Jupyter notebooks. If you want to install packages that are available to one or more notebook kernels, enclose the commands to install the packages with conda environment commands that activate the conda environment that contains the kernel where you want to install the package.
For example, if you want to install a package only in the python3 environment, use the following code:
# This will affect only the Jupyter kernel called "conda_python3".
source activate python3
# Replace myPackage with the name of the package you want to install.
pip install myPackage
# You can also perform "conda install" here as well.
source deactivate
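To confirm which environment a notebook kernel is actually running, a quick check from a notebook cell (assuming a Python kernel):
import sys
print(sys.executable)  # should point into the conda env backing the kernel, e.g. .../envs/python3/bin/python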
If you do the installation in the way suggested above, you should be able to import your package from the notebook kernel corresponding to that environment. Let us know if it doesn't help.
I would like to make changes (and possibly contribute, if it's any good) to a public project on GitHub. I've forked and cloned the module, but I'm unclear how to get my program to import the local library instead of the 'official' installed module.
I tried cloning it into my project folder, but when I imported it and tried to use it, things got weird: I ended up having to call calmap.calmap.plot().
I also tried doing sys.path.append with the folder location, but it seems to still import the official one instead of the fork.
I'm assuming that I could put my program inside the module folder so that module would be found first, but I can't imagine that's the 'correct' way to do it.
|
|-->My_Project_Folder/
|
|-->Forked_Module/
|    |-->docs/
|    |-->Forked_Module/
|    |    |-->__init__.py
If you're already using anaconda, then you can create a new environment just for the development of this feature.
First, create a new environment:
# develop_lib is the name of the environment.
# You can pick anything that is memorable instead.
# You can also use whatever python version you require ...
conda create -n develop_lib python=3.5
Once you have the environment, then you probably want to enter that environment in your current session:
source activate develop_lib
Ok, now that you have the environment set up, you'll probably need to install some requirements for whatever third-party library you're developing. I don't know what those dependencies are, but you can install them in your environment using conda install (if they're available) or using pip.
Now you're ready to start working with the library that you want to update. Assuming the package has a standard build process, run:
python setup.py develop
After you've run that, things should be good to go. You can make changes, run tests, etc.
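One thing to note: develop mode puts your working copy on sys.path, so new interpreter sessions pick up your source edits automatically. In an already-running session you can reload instead (the module name here is hypothetical):
import importlib
import calmap             # the package you linked with setup.py develop
importlib.reload(calmap)  # pick up source edits without reinstalling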
If you use sys.path.append(), the new path will only be used if none of the earlier entries contains the module you are importing. If you want the added path to take precedence over all the older ones, you have to use
sys.path.insert(0, "path")
This way, if you print sys.path you will see that the added path is at the beginning of the list, and the module you are importing will be loaded from the path you specified.
To import from the forked repo instead of the installed package, you should make a virtual environment for the cloned project and then activate it; that way the environment is isolated from the globally installed packages.
1- you need to fork your repo;
2- create a virtual env and activate it;
3- clone your repo.
Now if you print the imported module, you will see the path of the forked repo:
import any_module
print(any_module)
I have the following package (and working directory):
WorkingDirectory--
|--MyPackage--
| |--__init__.py
| |--module1.py
| |--module2.py
|
|--notebook.ipynb
In __init__.py I have:
import module1
import module2
If I try to import MyPackage into my notebook:
import MyPackage as mp
I will get ModuleNotFoundError: No module named 'module1'. But the import works fine if I execute the script outside a notebook: if I create test.py in the same directory and do the same as in the notebook, the import works properly. It also works inside the notebook if I use the fully qualified name in __init__.py (import MyPackage.module1).
What's the reason for different import behavior?
I have confirmed the working directory of the notebook is WorkingDirectory.
---Update---------
Exact error is:
C:\Users\Me\Documents\Working Directory\MyPackage\__init__.py in <module>()
---> 17 import module1
ModuleNotFoundError: No module named 'module1'
My problem differs from the possible duplicate:
The notebook was able to find the package but was unable to load the module. This was inferred from the fact that substituting module1 with MyPackage.module1 worked, which suggests it may not be a problem related to PATH.
I cd'ed into WorkingDirectory and started the server there, so the working directory should be the folder containing my package.
I'm pretty sure this issue is related and the answer there will help you: https://stackoverflow.com/a/15622021/7458681
tl;dr: the cwd of the notebook server is always the base path where you started the server, no matter what running import os; os.getcwd() says. Use import sys; sys.path.append("/path/to/your/module/folder").
I ran it with some dummy modules in the same structure you specified: before modifying sys.path it wouldn't run, and afterwards it did.
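For example, at the top of the notebook (the path is hypothetical, matching the layout in the question):
import sys
sys.path.append("/path/to/WorkingDirectory/MyPackage")  # the folder that holds module1.py
import MyPackage as mp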
Understand these two functions and your problem will be solved:
import os
# list the current working directory
os.getcwd()
# change the current working directory
os.chdir('/path/to/WorkingDirectory')  # the path here is illustrative
Change the working directory, then import your module, and have fun.
Sometimes that won't work. Try this:
import sys
# sys.path is a list of absolute path strings
sys.path.append('/path/to/application/app/folder')
import file  # i.e. the module that lives in that folder
If you face "module not found" in a Jupyter environment, you have to install the package in the Jupyter environment instead of installing it from the command prompt. You can do that with this command in a notebook cell:
!pip install module_name
After that you can easily import and use it. Whenever you want to tell Jupyter that this is a system command, you should put an exclamation mark (!) before the command.
The best way to tackle this issue is to create a virtual env and point your kernel to that virtual environment:
Steps:
python -m venv venv
source venv/bin/activate
ipython kernel install --user --name=venv
jupyter lab
In Jupyter Lab, go to Kernel -> Change Kernel and select the venv from the dropdown.
Now if your venv has the package installed, Jupyter Lab can also see it and will have no problem importing the package.
You can import one notebook from another by installing the import_ipynb package.
pip install import_ipynb
Suppose you want to import B.ipynb in A.ipynb, you can do as follows:
In A.ipynb:
import import_ipynb
import B as b
Then you may use all the functions of B.ipynb in A.
My problem was that I used the wrong conda environment when using VS Code.
Enter your conda environment:
conda activate environment_name
To check where a module is installed, you can enter Python interactive mode by running python or python3, then import cv2:
import cv2
Then to see where this module is installed
print(cv2.__file__)
You will see the installed path of the module. My problem was that my VS Code kernel was set to the wrong environment; this can be changed in the top right corner of VS Code.
Hope this helps.
This happened to me when I moved my notebook into a new directory while the Jupyter Lab server was running. The import broke for that notebook, but when I made a new notebook in the same directory I had just moved to and used the same import, it worked. To fix this I:
Went to the root dir of my project.
Searched for all folders named __pycache__.
Deleted all __pycache__ folders found in the root and subfolders.
Restarted the Jupyter Lab server.
Once Jupyter Lab restarts and recompiles your code, the __pycache__ folders will be regenerated.
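If there are many of them, a small Python sketch that clears them all from the project root (run it from that directory):
import pathlib
import shutil
# collect first, then delete every __pycache__ folder under the current directory
for p in list(pathlib.Path(".").rglob("__pycache__")):
    shutil.rmtree(p, ignore_errors=True)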
The best solution by far (for me) is to have a kernel for each environment you are working in. Then, with that kernel defined, all you have to do is to update this kernel's environment variables to look at your project folder where your modules are located.
Steps (using pip):
pip install ipykernel (if not installed already)
source activate <your environment name>
python -m ipykernel install --user --name <your environment name> --display-name "<a display name>" (where <your environment name> is the name you want to give to your kernel and <a display name> is just a name used for display by Jupyter).
Once you ran the command above, it will output the location of the kernel configuration files. E.g.: C:\Users\<your user name>\AppData\Roaming\jupyter\kernels\<selected environment name>. Go to this folder and open the kernel.json file.
Add the following entry to that file:
"env": {
    "PYTHONPATH": "${PYTHONPATH};<the path to your project with your modules>"
}
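To check that the kernel picked up the variable, a quick sanity check from a notebook cell:
import os
print(os.environ.get("PYTHONPATH"))  # should include the project path you added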
Good reference about the kernel install command here.
The reason is that your MyPackage/__init__.py is run from the current working directory, i.e. from WorkingDirectory in this case. This means the interpreter cannot find a module named module1, since it is not located in either the current or the global packages directory.
There are a few workarounds for this. For example, you can temporarily override the current working directory like this:
import os
cwd = os.getcwd()   # remember the caller's working directory
csd = __path__[0]   # the directory the package itself lives in
os.chdir(csd)
and then, after all the package initialization actions like import module1 are done, restore the "caller's" working directory with os.chdir(cwd).
To me this is quite a bad approach since, for example, if an exception is raised during the initialization actions, the working directory would not be restored. You'd need a try..finally statement to fix this.
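A minimal sketch of that guarded version, meant to live in MyPackage/__init__.py (it relies on the current directory being importable, i.e. '' on sys.path, as in the interactive sessions above):
import os
cwd = os.getcwd()      # remember the caller's working directory
os.chdir(__path__[0])  # switch to the package's own directory
try:
    import module1     # package initialization actions
    import module2
finally:
    os.chdir(cwd)      # always restore, even if an import fails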
Another approach would be using relative imports. Refer to the documentation for more details.
Here is an example of MyPackage/__init__.py that will work for your example:
from .module1 import *
But it has a few disadvantages that are found rather empirically than through the documentation. For example, you cannot write import .module1; you have to use from . import module1 instead.
Upd:
I've found this exception to be raised even if import MyPackage is run from a usual Python console, not just from IPython or Jupyter Notebook, so this does not seem to be an issue with IPython itself.
I'd like to start developing an existing Python module. It has a source folder and a setup.py script to build and install it. The build script just copies the source files, since they're all Python scripts.
Currently, I have put the source folder under version control, and whenever I make a change I re-build and re-install. This seems a little slow, and it doesn't sit well with me to "commit" my changes to my Python install each time I make a modification. How can I make my import statement point to my development directory instead?
Use a virtualenv and run python setup.py develop to link your module into the virtual Python environment. This will make your project's packages/modules show up on sys.path without having to run install after every change.
Example:
% virtualenv ~/virtenv
% . ~/virtenv/bin/activate
(virtenv)% cd ~/myproject
(virtenv)% python setup.py develop
Virtualenv was already mentioned. And since your files are already under version control, you could go one step further and use pip to install your repo (or a specific branch or tag) into your working environment.
See the docs for Pip's editable option:
-e VCS+REPOS_URL[#REV]#egg=PACKAGE, --editable=VCS+REPOS_URL[#REV]#egg=PACKAGE
Install a package directly from a checkout. Source
will be checked out into src/PACKAGE (lower-case) and
installed in-place (using setup.py develop).
Now you can work on the files that pip automatically checked out for you and when you feel like it, you commit your stuff and push it back to the originating repository.
To get a good, general overview concerning Pip and Virtualenv see this post: http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django
Install the distribute package, then use developer mode. Just run python setup.py develop --user and that will place path pointers in your user site directory that point to your workspace.
Change the PYTHONPATH to include your source directory. A good idea is to work with an IDE like Eclipse that overrides the default PYTHONPATH.