how to make vscode detect / auto reload modules after editing them? - python

I've seen a few questions asking this, but none of the solutions worked for me.
I am developing a few functions/classes in different modules and have a main.py script that calls everything.
The problem is, when I make a change to a function in another module, e.g. module1.py, VSCode does not pick up the change: when I call the function from main.py after updating, it still runs the older version.
I can get around this by doing something like:
from importlib import reload
reload(module1)
but this gets old real quick especially when I'm importing specific functions or classes from a module.
Simply re-running the imports at the top of my main.py doesn't actually do anything. I can only pick up the changes if I kill the shell and reopen it from the beginning, which is not ideal when I am incrementally developing something.
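To make the behaviour concrete, here is a self-contained sketch of the situation (module1 and its greet() function are hypothetical stand-ins, created in a temp directory for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep stale .pyc files out of this demo

# Create a throwaway module1.py to stand in for the real module being edited.
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "module1.py"
mod_path.write_text("def greet():\n    return 'old version'\n")
sys.path.insert(0, tmp)

import module1
assert module1.greet() == "old version"

# Simulate editing the module on disk.
mod_path.write_text("def greet():\n    return 'new version'\n")
assert module1.greet() == "old version"   # the edit is NOT picked up...

importlib.reload(module1)
assert module1.greet() == "new version"   # ...until the module is reloaded

# Caveat: names bound with "from module1 import greet" still point at the
# old function object after reload() and must be re-imported.
```

The caveat in the last comment is exactly why this gets tedious when importing specific functions or classes.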
I've read on a few questions that I could include this:
"files.useExperimentalFileWatcher" : true
into my settings.json, but it does not seem to be a known configuration setting in my version, 1.45.1.
This is something Spyder handles by default, and it makes it very easy to code incrementally when calling functions and classes from multiple modules in the package you are developing.
How can I achieve this in VSCode? To be clear, I don't want to use IPython autoreload magic command.
Much appreciated
FYI, here are the other questions I saw but did not get a working solution from, among others with similar questions/answers:
link1
link2

There is no support for this in VS Code because Python's reload mechanism is not reliable enough to use outside of the REPL, and even there you should be careful. It isn't a perfect solution and can leave stale code lying around, which can easily trip you up (and I know this because I wrote importlib.reload() 😁).

Related

What does a good Python debugging workflow look like?

My current Python debugging workflow seems extremely slow to me, and not very satisfying. How can I improve it?
Setting: I work with some third-party python packages from github.
Workflow:
run into an error after entering some command in the terminal (Ubuntu WSL, Python 3.7)
read the terminal error output; most likely the first or the last message is helpful
from the last message I follow the code reference (Ctrl+left click in VSCode) and look at the code
I find some function call in the third-party module that looks very unrelated to the problem
I add import pdb to the module, and a pdb.set_trace() before that function call
I run the program again, and it stops at the breakpoint
using n, r, u and d I try to navigate closer to the source of the error
I eventually find some raise condition in some other module, where a property of a certain variable is checked; the variable itself is defined some levels up in the stack
re-running the program and stopping at the same breakpoint as before, I try to navigate to the point where the variable is set. I don't know at which level of the stack it is set, so I sometimes miss it. I set intermediate breakpoints to save myself some work when re-running
I finally find the actual cause of the error, can check out the workspace, and eventually fix it
I go through all the modules and remove the import pdb and pdb.set_trace() lines again
Thanks for any suggestions
Are you using an IDE? That's not fully clear from your question.
IDEs tend to have graphical ways of setting breakpoints and stepping,
and that saves the hassle of changing the source.
Not going into IDE opinions, but examples of IDEs with debuggers are Spyder, Thonny and others.
You can also run the debugger via the command line to avoid changing the source, but I don't think that's the way to go if you are looking to reduce the cognitive load.
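As an illustration of the command-line route, post-mortem debugging can drop you at the raise site without editing any source (a hedged sketch; buggy() is a hypothetical stand-in for the failing third-party call):

```python
import pdb
import sys

def buggy():
    # stand-in for the failing third-party function
    return 1 / 0

try:
    buggy()
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # pdb.post_mortem(tb)  # uncomment to inspect the stack at the raise site
    print("caught in:", tb.tb_next.tb_frame.f_code.co_name)
```

Equivalently, python -m pdb script.py (or python -m pdb -c continue script.py to run straight to the crash) avoids touching the modules at all.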
Yes, these are steps you have to take; in addition, you can add logging wherever applicable to pinpoint exactly where the error occurred.

document scripts not being part of modules in sphinx

I looked for several packages (sphinx-gallery, autoprogram,...), but found nothing on how to easily use a docstring from a python script for documentation. So I somehow want to do autodoc on a specific file.
Is somebody aware of the possibility to automatically generate sphinx documentation out of python scripts?
Like I have a docstring at the beginning of the script and maybe some functions with docstrings in there, and I just want to autogenerate some documentation like I can with the .. automodule:: directive, but unfortunately that won't work for relative paths / scripts.
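For reference, the usual automodule stanza looks like this (mypackage.myscript is a placeholder; as noted, it expects an importable module, not an arbitrary script path):

```rst
.. automodule:: mypackage.myscript
   :members:
   :undoc-members:
```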
EDIT:
The scripts I want to extract docstrings from are not CLI scripts; they are just Python scripts that are generally called by a cron job. So unfortunately autoprogram won't help here, as far as I can see.
EDIT2:
Okay, it got a little clearer to me after re-reading the documentation and experimenting. What I wanted to do is automatically take the docstring of a Python file and put it into the documentation without executing the whole file (because for some reasons I can't, or don't want to, hide everything behind a main routine). I got autodoc to document a specific file (there was a misconfiguration that explained why it didn't work before), but as stated in its documentation, it executes the file. That is my actual problem here. I'd be happy if someone has a solution to achieve this, but I would totally understand if this is not possible without considerable effort.
What I wanted to do is automatically taking the docstring of a python file and put that to documentation without executing the whole file.
You cannot do that. To get the docstring from a module, the module needs to be imported. Importing the module executes the whole file.
If you don't want code to be executed upon a simple import, you can guard it with an if __name__ == "__main__": block, or use setuptools entry_points-based automatic script generation.
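A minimal sketch of such a guard (the script body here is hypothetical):

```python
"""Docstring that autodoc can pick up when the module is imported."""

def job():
    # the actual work the cron job performs
    return "doing the cron work"

if __name__ == "__main__":
    # executed only when run as a script, not when imported by autodoc
    print(job())
```

With this layout, autodoc's import only defines job(); the actual work runs only when cron invokes the file directly.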

Pylint import check when directories are not the same as the import

I have an unusual question, and I can't find the answer because nobody seems to do it this way :p
I want to use PyLint to catch errors before running the script, especially calls to methods from other modules and inheritance.
The thing is, my Python scripts are not organized the same way as the imports.
For example, I have a module in src/project/module/file.py, but the import looks like progName.app.module.
That's because the scripts are precompiled (with operations such as macro replacement applied to them).
Because of that, PyLint is unable to find the right folder for the import.
I can't modify the directory hierarchy, so I have to find a way to tell PyLint to look for progName.app.module in src/project/module.
If someone ever had this issue...
Thanks !
There are possibly multiple solutions to this. First, you can try to write a failed-import hook using astroid's API, which might look like this: https://github.com/PyCQA/astroid/blob/master/astroid/brain/brain_six.py#L273. After defining it, you can use the same API for loading a plugin, as in: pylint your_project --load-plugins=qualified.name.for.your.plugin.
This might work most of the time, but if it does not, a normal transform could be used (you can take a look at most of the files in that directory, since that is exactly what they are for: modifying the AST before it is analysed).
Alternatively, you could try to play with --init-hook="...", in which you can modify the paths before the analysis, but it does not guarantee that your custom locations would be picked by pylint.
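As a rough illustration of the --init-hook route (the paths and layout here are hypothetical), the hook is just a Python snippet that runs before analysis:

```python
# What the --init-hook snippet does before pylint starts analysing:
import sys

sys.path.insert(0, "src/project")  # hypothetical checkout layout

# Invoked on the command line as:
#   pylint src --init-hook='import sys; sys.path.insert(0, "src/project")'
```

As noted above, this only adjusts the search path; pylint may still resolve some imports differently than your precompiled layout expects.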

Python twisted web server caching and executing outdated code

Background: Working on a web application that allows users to upload python scripts to a server (Twisted web server). The UI provides full CRUD functionality on these python scripts. After uploading a script the user can then select the script and run it on the server and get results back on the UI. Everything works fine...
Problem: ...except when the user edits the python code inline (via the UI) or updates a script by uploading a new script overwriting one which already exists. It seems that twisted caches the code (both old and new) and runs new code sometimes and sometimes runs the old code.
Example: I upload a script hello.py on the server which has a function called run() which does: print 'hello world'. Someone else comes along and uploads another script named hello.py which does: print 'goodbye world'. Then, I go back and execute the run() function on the script 10 times. Half of the times it will say 'hello world' and half of the times it will say 'goodbye world'.
Tried so far: Several different ways to reload the script into memory before executing it, including:
python's builtin reload():
module = __import__('hello')
reload(module)
module.run()
imp module reload():
import imp
module = __import__('hello')
imp.reload(module)
module.run()
twisted.python.rebuild()
from twisted.python.rebuild import rebuild
module = __import__('hello')
rebuild(module)
module.run()
figured that perhaps if we force python to not write bytecode, that would solve the issue: sys.dont_write_bytecode = True
restart twisted server
a number of other things which I can't remember
And the only way to make sure that the most up to date python code executes is to restart twisted server manually. I have been researching for quite some time and have not found any better way of doing it, which works 100% of the time. This leads me to believe that bouncing twisted is the only way.
Question: Is there a better way to accomplish this (i.e. always execute the most recent code) without having to bounce twisted? Perhaps by preventing twisted from caching scripts into memory, or by clearing twisted cache before importing/reloading modules.
I'm fairly new to the Twisted web server, so it's possible that I may have overlooked an obvious way to resolve this issue, or may be approaching this in completely the wrong way. Some insight into solving this issue would be greatly appreciated.
Thanks
T
Twisted doesn't cache Python code in memory. Python's module system works by evaluating source files once and then placing a module object into sys.modules. Future imports of the module do not re-evaluate the source files - they just pull the module object from sys.modules.
What parts of Twisted do, though, is keep references to objects that they are using. This is just how you write Python programs: if you don't have references to objects, you can't use them. The Twisted Web server can't call the run function unless it has a reference to the module that defines that function.
The trouble with reload is that it re-evaluates the source file defining the module but it can't track down and replace all of the references to the old version of the objects that module defined - for example, your run function. The imp.reload function is essentially the same.
twisted.python.rebuild tries to address this problem but using it correctly takes some care (and more likely than not there are edge cases that it still doesn't handle properly).
Whether any of these code reloading tools will work in your application or not is extremely sensitive to the minute, seemingly irrelevant details of how your application is written.
For example,
import somemodule
reload(somemodule)
somemodule.foo()
can be expected to run the newest version of somemodule.foo. But...
from somemodule import foo
import somemodule
reload(somemodule)
foo()
Can be expected not to run the newest version of somemodule.foo. There are even more subtle rules for using twisted.python.rebuild successfully.
Since your question doesn't include any of the actual code from your application, there's no way to know which of these cases you've run into (resulting in the inability to reliably update your objects to reflect the latest version of their source code).
There aren't any great solutions here. The solution that works the most reliably is to restart the process. This certainly clears out any old code/objects and lets things run with the newest version (though not 100% of the time - for example, timestamp problems on .py and .pyc files can result in an older .pyc file being used instead of a new .py file - but this is pretty rare).
Another approach is to use execfile (or exec) instead of import. This bypasses the entire module system (and therefore its layer of "caching"). It puts the entire burden of managing the lifetime of the objects defined by the source you're loading onto you. It's more work but it also means there are few surprises coming from other levels of the runtime.
And of course it is possible to do this with reload or twisted.python.rebuild if you're willing to go through all of your code for interacting with user modules and carefully audit it for left-over references to old objects. Oh, and any library code you're using that might have been able to get a reference to those objects, too.
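A minimal sketch of the execfile/exec approach in Python 3 (run_user_script is illustrative, not part of Twisted; the answer's execfile is its Python 2 equivalent):

```python
def run_user_script(path):
    """Re-evaluate the script's current on-disk source on every call,
    bypassing sys.modules entirely, and return the namespace it defines."""
    namespace = {"__name__": "user_script"}
    with open(path) as f:
        source = f.read()
    exec(compile(source, path, "exec"), namespace)
    return namespace

# Usage: every call sees whatever is on disk right now.
# ns = run_user_script("hello.py")
# ns["run"]()
```

The trade-off described above is real: nothing cleans up objects created by previous evaluations, so the caller must take care to drop its own references to old namespaces.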

Code completion for custom modules not working with PyDev

Let's say I make a module called mylib.py. In eclipse I type
import mylib
Then I type mylib. and hit CTRL+SPACE. This should suggest functions/variables in mylib, but it doesn't do anything. If I do something like import os and type os., suggestions immediately pop up, so I know code completion works in general, just not for my modules. Any reason why?
In order to offer completion for custom modules, PyDev has to index them (if possible) and introspect the classes, functions, variables and imports defined there. To do so, you should add your module to Eclipse's PYTHONPATH and then reindex your venv (the one configured in PyDev).
Most of the time this is done automatically by the IDE, but it doesn't always work well (at least it is not perfect).
I really suggest not relying 100% on the IDE's completion.
