I've installed a user module with the command pip install --ignore-installed --user requests[security] and realized that the Python interpreter embedded in a tool is ignoring it and loading the system-wide installed module first, from:
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/OpenSSL
So I went ahead, manually inserted my user path, and tried to reload every module in OpenSSL like this:
import sys
import OpenSSL  # already loaded from the system path at this point

sys.path.insert(0, '/Users/MYUSERNAME/Library/Python/2.7/lib/python/site-packages/')
reload(OpenSSL.SSL)
reload(OpenSSL._util)
reload(OpenSSL.crypto)
reload(OpenSSL.rand)
reload(OpenSSL.version)
However, I've realized that OpenSSL comes with .so files. Does reload reload .so files as well?
Let me know if more info is needed.
https://docs.python.org/3/library/imp.html?highlight=reload#imp.reload
There are a number of other caveats:
It is legal though generally not very useful to reload built-in or dynamically loaded modules, except for sys, __main__ and builtins. In many cases, however, extension modules are not designed to be initialized more than once, and may fail in arbitrary ways when reloaded.
(Emphasis mine — phd.)
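Since the compiled extension modules can't be reliably reloaded, a more robust approach is to put the user site-packages directory ahead of the system path before OpenSSL is imported for the first time. A minimal sketch, assuming the embedded interpreter lets you run this before the tool touches OpenSSL (the path below stands in for your user site-packages):

import os
import sys

# Put the user install ahead of the system framework path *before* the
# first import, so the .so extension modules are loaded from the user
# copy and no reload is needed.
user_site = os.path.expanduser('~/Library/Python/2.7/lib/python/site-packages')
sys.path.insert(0, user_site)

import OpenSSL
print(OpenSSL.__file__)  # should now point into the user site-packages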
Related
I followed this post to hide the vast majority of the files in my PyInstaller compilation, going from 108 files/folders to just 6. But one of those 6 is the PIL folder, since you have to do a from-import on it to access Image, and I would love to hide that folder as well.
I've experimented with adding it to sys.path in my hook, changing my imports to from <foldernamehere>.PIL import Image, and calling os.chdir immediately before and after the import, but nothing has worked. The error is always the same:
ImportError: cannot import name '_imaging' from 'PIL' (<pathtobasefolder>\PIL\__init__.pyc)
One important thing to note is that I do not import PIL immediately. It's only imported after launch when the user performs specific actions, since it serves no purpose otherwise. I'm not sure how much that affects things.
Is this possible? Maybe importlib can be used, or maybe editing PyInstaller's native hooks would work?
Use the following command: pip install -U Pillow
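After reinstalling, a quick sanity check that Pillow's C extension (_imaging) is importable, as a minimal sketch:

from PIL import Image  # raises the _imaging ImportError if the extension is still missing or shadowed

img = Image.new('RGB', (1, 1))  # exercises the compiled extension
print(img.size)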
In my company we decided to structure our own Python modules using this convention:
dsc.<package_name>
It works without any problem when the modules are used in another project that doesn't follow this convention. However, when in a development environment I try to develop a new module "dsc.new_module" that references another one, for example "dsc.other_module", the import raises a module-not-found exception. Is there any way to solve this?
If I package the module and install it, everything works, but not while I'm developing the module, which then isn't able to find the other one. The only way I've overcome this problem is doing this:
try:
    from dsc.other_module import send_message
except ImportError:
    def dummy(a, b):
        pass
    send_message = dummy
Because the function is not essential.
What you can do is install your packages in development mode: pip install -e . (run from the package's parent folder). After this the imports should work as you envision them, the same as in other packages that use them.
Development mode is not strictly required, but it has the benefit that changes made to your code take effect immediately.
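For the dsc.* layout to work across separately developed modules, each distribution also needs to declare dsc as a namespace package. A minimal setup.py sketch for one of them, assuming a setuptools version that provides find_namespace_packages (the names are illustrative):

# setup.py for the distribution that provides dsc.other_module
from setuptools import setup, find_namespace_packages

setup(
    name='dsc-other-module',  # illustrative distribution name
    version='0.1.0',
    packages=find_namespace_packages(include=['dsc.*']),
)

With each module installed via pip install -e ., from dsc.other_module import send_message resolves during development and the try/except fallback is no longer needed.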
My code depends on functions from a module external_module which is on my PYTHONPATH and which I import as
# global import
import external_module.sub_mod_one as smo
Now I want to share my code, but I don't want to force my collaborators to check out my other git repos and add them to their environment.
So I thought I could copy the files to the local directory and rewrite the import as
# local import
import sub_mod_one as smo
However, since development goes on, I don't want to do this manually.
Question: Is there a Python module, Vim plugin, or something else that does this for me? Namely, copying the included modules to the current folder and rewriting the import statements?
The "right" solution is to
properly package your "external_module" so it can be installed with pip,
add a pip requirements file to your project(s) referencing your package (see the sketch below),
then have everybody use virtualenvs.
This way the package will be cleanly installed (and at the right version), you don't have to mess with your exports, and you don't have out-of-sync copies of your package everywhere.
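A minimal sketch of what the requirements entry could look like, assuming external_module lives in a Git repository (the URL is a placeholder):

# requirements.txt
-e git+https://github.com/yourname/external_module.git#egg=external_module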
You could use conditional imports:
try:
    import external_module.sub_mod_one as smo
except ImportError:
    import sub_mod_one as smo
I am experiencing this annoying error message every time after I have updated my module and try to reload it.
I have a module mymodule in a package mypackage that has an __init__.py file in it.
When I do
from mypackage import mymodule
everything is ok.
After I update the module and reload it with
reload(mymodule)
Error pops up:
In [4]:
...: reload(mymodule)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-264a569b44f9> in <module>()
1
----> 2 reload(mymodule)
ImportError: No module named mymodule
To resolve this, I have to kill my interpreter and re-import everything when I want to reload one module, which is extremely time-consuming and annoying. How may I fix it?
PS:
I suspect something is wrong with PYTHONPATH, but since I am using Python Tools for Visual Studio, I cannot find the PYTHONPATH option.
Update
As far as I remember, it seems that things start going wrong immediately after I have this
import os
os.chdir(constants.PROJECT_PATH + '//data//')
in one of the modules. Yet does it really matter?
I don't think it matters, as the path in the brackets is exactly my project path.
Try this:
import os, sys
my_lib_path = os.path.abspath('../../../mypackage')
sys.path.append(my_lib_path)
from mypackage import mymodule
or add your package to PYTHONPATH. On Unix that would be:
$ export PYTHONPATH=/absolute/path/to/mypackage
Is your package in the present working directory?
When the interpreter comes across an import libraryname statement, it looks for libraryname in several locations: the present working directory, directories specified by the PYTHONPATH environment variable, and some installation dependent paths.
So as long as your module is in the present working directory, the interpreter is able to find it. However, once the pwd changes, the interpreter isn't able to find the module anymore, and the import fails. You really have two options:
Install your module in a location where Python can find it. Typically, Python packages are located under /usr/lib/pythonX.Y (for example in site-packages or dist-packages) on Linux systems (not sure about Windows, but you could easily find out). If you put your package there, then the interpreter will pick it up. I wouldn't recommend doing this by hand; write a simple setup.py script to handle the installation for you.
Tell Python where to explicitly look for the package. This is usually done through the PYTHONPATH environment variable. Set that at the command line before invoking the interpreter (or at the system-level, if you must).
If you can't change PYTHONPATH for some reason, then you can modify the path during runtime:
import sys
sys.path.append(your_directory_here)
This is a pretty ugly way to deal with the problem, so it should be a last resort.
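In this particular case the os.chdir call from the question is a likely culprit: if the package was only reachable through the working directory, changing the directory breaks later imports and reloads. A minimal sketch of a workaround, assuming you know where the package lives (the path is a placeholder):

import os
import sys

# Resolve the parent directory of mypackage to an absolute path once,
# so later os.chdir() calls cannot invalidate the sys.path entry.
pkg_parent = os.path.abspath('/path/to/parent/of/mypackage')  # placeholder
if pkg_parent not in sys.path:
    sys.path.insert(0, pkg_parent)

from mypackage import mymodule
reload(mymodule)  # Python 2; on Python 3 use importlib.reload(mymodule)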
Is there any way to get Python to use my ActiveTcl installation instead of having to copy the ActiveTcl libraries into the Python/tcl directory?
I'm not familiar with ActiveTcl, but in general here is how to get a package/module loaded when that name already exists in the standard library:
import sys

dir_name = "/usr/lib/mydir"
sys.path.insert(0, dir_name)
Substitute the value of dir_name with the path to the directory containing your package/module, and run the above code before anything is imported. This is often done through a 'sitecustomize.py' file so that it takes effect as soon as the interpreter starts up and you won't need to worry about import ordering.
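Once the path is in place, a quick way to check which Tcl build the interpreter actually picked up, as a minimal sketch (this assumes Python 2's Tkinter; on Python 3 the module is named tkinter):

import Tkinter

# Tkinter.Tcl() gives a Tcl interpreter without opening a window;
# 'info patchlevel' returns the full Tcl version string, e.g. '8.6.13'.
print(Tkinter.Tcl().eval('info patchlevel'))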