I do a lot of interactive work in IPython. Currently, I'm working with Jupyter QtConsole. Suppose I start with this:
from myFuncs import func1
Then I go out to myFuncs.py and add a new function, func2. If I try this:
from myFuncs import func2
It doesn't see it. Presumably myFuncs is somehow cached. I have read about reload, but it seems to only work with entire modules, not cherry-picked functions. autoreload also seems ineffective here. Is there a way around this, short of restarting the kernel?
Incidentally, IPython within Spyder is fine with files changing while interacting. It is also unusably slow, so maybe that's related?
As @jss367 mentioned here, you can achieve this with the importlib and sys modules:
import importlib
import sys
importlib.reload(sys.modules['myFuncs'])
from myFuncs import func2
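For completeness, a minimal sketch of the whole QtConsole session, using the myFuncs, func1 and func2 names from the question:

from myFuncs import func1                   # the first import caches myFuncs in sys.modules

# ... edit myFuncs.py on disk and add func2 ...

import importlib
import sys
importlib.reload(sys.modules['myFuncs'])    # refresh the cached module object
from myFuncs import func2                   # the new function is now visible

This works even though the original import was a from-import: a from-import still loads the whole module and registers it in sys.modules, so there is always a module object available to reload.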
I cannot properly access submodules in PyCharm. Let's say I have
import numpy as np
And then I tried accessing the testing submodule as np.testing.assert_allclose(). The code runs, but PyCharm will not recognise the testing submodule or do any kind of autocompletion. If I import like this, however, it works:
import numpy as np
import numpy.testing
Then it works as np.testing.... This did not happen before; maybe it is an interpreter issue? I am using Poetry for the environment:
https://plugins.jetbrains.com/plugin/14307-poetry
If I open an IPython console using this same environment, there is no issue; autocomplete works perfectly.
I'm trying to understand the best workflow for importing script files into a Jupyter notebook.
I have a notebook that does something like:
%load_ext autoreload
%autoreload 2
import functions as F
Inside functions.py, I further do imports such as
import numpy as np
import mymodule
It seems then that, for example, numpy will get reloaded every time I execute a cell, which makes things a bit slow. How could I automatically reload functions.py without reloading the imports there that I never change?
I don't quite understand your question. The main functionality of %autoreload is to automatically reload modules, which, by your own account, is exactly what it is doing. You can read about it here; I find it pretty well explained.
However, if you need more control over what gets reloaded, you should take a look at importlib, and especially importlib.reload():
import importlib
importlib.reload(my_module)
or
from importlib import reload
reload(my_module)
importlib.reload() is available starting from Python 3.4; on earlier Python 3 versions the equivalent is imp.reload().
This can also be achieved with the %autoreload 1 mode, which auto-reloads only selected modules.
However, the module must be marked for autoreloading with %aimport my_module.
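A minimal sketch of how the cell could look for the functions.py setup from the question (some_function is just a stand-in for whatever functions.py actually defines):

%load_ext autoreload
%autoreload 1              # only reload modules that were marked with %aimport
%aimport functions         # import functions.py and mark it for autoreloading
import numpy as np         # numpy is not marked, so autoreload never touches it

functions.some_function()  # edits to functions.py are picked up on the next cell run

If you prefer the alias from the question, you can still do import functions as F after the %aimport line; both names refer to the same module object, which autoreload updates in place.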
I have two files:
MyModule.py
MyNotebook.ipynb
I am using the latest Jupyter Notebook and the latest Python. I have two code cells in my notebook.
Cell #1
import some stuff
Run some code
(Keeps everything in the environment; it takes about five minutes to run this cell.)
Cell #2
import MyModule
Execute code from MyModule
I would like to make code changes in MyModule.py and rerun Cell #2 but without restarting the kernel (Cell #1 did a fair amount of work which I don't want to rerun each time).
If I simply rerun the second cell, the changes made to MyModule.py do not propagate through. I did some digging, and I tried using importlib.reload. The actual code for Cell #2 is now:
from Nash.IOEngineNash import *
import importlib
importlib.reload(Nash.IOEngineNash)
Unfortunately, this isn't quite working. How can I push those changes in MyModule.py (or Nash/IOEngineNash.py in actual fact) into my Notebook without restarting the kernel and running from scratch?
I faced a similar issue while importing a custom script into a Jupyter notebook.
Try importing the module under an alias and then reloading it:
import Nash as nash
from importlib import reload
reload(nash)
To make sure that all references to the old version of the module are updated, you might want to re-import it after reloading, e.g.
import mymodule
reload(mymodule)
import mymodule
Issues may arise due to implicit dependencies, because importlib.reload only reloads the (top-level) module that you ask it to reload, not its submodules or external modules.
If you want to reload a module recursively, you might have a look at this gist, which allows you to simply add a line like this at the top of a notebook cell:
%reload mymodule
import mymodule
This recursively reloads mymodule and all of its submodules.
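Applied to the layout from the question (Nash/IOEngineNash.py pulled in with a star import), the original attempt fails because from Nash.IOEngineNash import * never binds the name Nash, so the reload call has no module object to work on. A minimal sketch that avoids this:

import importlib
import Nash.IOEngineNash             # bind the module object itself, not just its contents
importlib.reload(Nash.IOEngineNash)  # re-execute the edited source
from Nash.IOEngineNash import *      # re-run the star import so the names point at the fresh code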
I was having issues getting the scikit-learn module to import into Python. It would import in the Python shell, but not from my IDE. After reading lots of things online, I got it to work by using:
import sys
sys.path.append(r"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
import sklearn
Does anyone have a suggestion so that I don't need to do the sys thing every time I want to use the module?
In your IDE, find where you can change the "Python interpreter" setting, and point it to /Library/Frameworks/Python.framework/Versions/2.7/bin/python
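A quick way to confirm which interpreter and search path each environment is actually using (the same check works in the shell, the IDE, and a notebook):

import sys
print(sys.executable)   # path of the Python binary that is currently running
print(sys.path)         # directories this interpreter searches when importing

If the shell and the IDE print different paths, the IDE is running a different interpreter, which is why sklearn is found in one place and not the other.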
First of all, let me say that I'm a new user and I'm just starting to learn Python in college, so my apologies if this question is answered in another topic, but I searched and can't seem to find it.
I received a file, work.pyc, from my teacher, and he says I have to import it in my Wing IDE using the command from work import *. The question is, I don't know where to put the file so that I can import it.
It just says ImportError: No module named work.
Thank you
There are several options for this.
The most straightforward is to place it in the same folder as the .py file that wants to import it.
You may also want to have a look at this
If you're using the interactive Python interpreter (the one that lets you type Python code directly and executes it), you'll have to do this:
import sys
sys.path.append('newpath')
from work import *
where newpath is the path on your filesystem containing your work.pyc file
If you're working on a script called main.py in the folder project, one option is to place it at project/work.pyc
This will make the module importable because it's in the same working directory as your code.
The way Python resolves import statements works like this (simplified):
The Python interpreter you're using (/usr/bin/python2.6 for example, there can be several on your system) has a list of search paths where it looks for importable code. This list is in sys.path and you can look at it by firing up your interpreter and printing it out like this:
>>> import sys
>>> from pprint import pprint
>>> pprint(sys.path)
sys.path usually contains the path to modules from the standard library, additional installed packages (usually in site-packages) and possibly other 3rd party modules.
When you do something like import foo, Python will first look for a module called foo.py in the directory your script lives in. If it doesn't find one, it will search sys.path and try to import it from there.
As I said, this explanation is a bit simplified. The details are explained in the section about the module search path.
Note 1:
The *.pyc file you were handed is compiled Python bytecode. That means its contents are binary: it contains instructions to be executed by the Python virtual machine, as opposed to the source code in *.py files that you will normally deal with.
Note 2:
The advice your teacher gave you to do from work import * is rather bad advice. It might be OK to do this for testing purposes in the interactive interpreter, but you should never do it in actual code. Instead, you should do something like from work import chop, hack.
Main reasons:
Namespace pollution. You're likely to import things you don't need but still pollute your global namespace.
Readability. If you ever read someone else's code and wonder where foo came from, just scroll up and look at the imports, and you'll see exactly where it's being imported from. If that person used import *, you can't do that.