Python: module import error right after installing it at runtime - python

Why can't my script find a newly installed module after using a system command to install the package while the script is running?
The directory structure looks like this:
mymoduledir
|- target_module_dir
|- main.py
The main.py code looks like this:
import os

if __name__ == "__main__":
    try:
        import target_module
        print("module already exist")
        # to-do something
    except ImportError:
        print("has not target_module, start install")
        os.system("cd target_module_dir && python setup.py install")
        print("install finished")
        import target_module
        # to-do something
What I found:
If the Python environment does not have the target module, my script installs it successfully, but I then get an import error. The log shows:
has not target_module, start install
running install
.....
Finished processing dependencies for target_module
install finished
Traceback (most recent call last):
File ".\main.py", line 237, in
import target_module
ImportError: No module named target_module_name
This means the target module was installed successfully, but I hit an ImportError when trying to import it. To confirm this, I opened a Python shell and tried importing the target module, and it worked. When I rerun the script, the log shows:
module already exist
which means the script imported the target module successfully.
What I think is happening:
The script's view of the Python environment is fixed when it launches, so if I want to import a new module while the script is running, I need to tell the running interpreter that the environment has been updated.
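One likely reason, and a way to act on it: when setup.py install drops the package into site-packages as an egg, it records it in a .pth file, and .pth files are only processed by the site module at interpreter startup, so the already-running process never sees the new path entry. A minimal sketch of forcing a re-scan from inside the running script (the helper name is mine, not from the question), which also works on Python 2.6:

import site
from distutils.sysconfig import get_python_lib

def refresh_site_packages():
    # re-process site-packages so any .pth file written by the install
    # (e.g. an easy-install.pth pointing at a new egg) is added to sys.path
    site.addsitedir(get_python_lib())

Calling refresh_site_packages() right after the os.system(...) install, before the second import target_module, gives the running interpreter a chance to pick up the freshly installed files.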
What I have tried:
I have searched many related questions, but haven't found an effective solution.
For various reasons I must use Python 2.6 for this. I also tried the reload function, but it didn't work.
What should I do to solve this problem?

Using pip install works well. My solution:
import pip

if __name__ == "__main__":
    try:
        import target_module
        print("module already exist")
        # to-do something
    except ImportError:
        print("has not target_module, start install")
        pip.main(['install', './target_module_dir/'])
        print("install finished")
        import target_module
        # to-do something
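Note that pip.main is pip's internal entry point and was removed in pip 10, so this solution is tied to older pip releases. A more future-proof variant of the same idea (a sketch, not part of the original solution) shells out to pip with the interpreter that is already running:

import subprocess
import sys

try:
    import target_module
except ImportError:
    # run pip in a child process so we do not depend on pip's internal API
    subprocess.check_call([sys.executable, '-m', 'pip', 'install',
                           './target_module_dir/'])
    import target_module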

Related

Python subprocess inside a Docker container says a custom package is installed but can't import it

I'm trying to run a Python package as the ENTRYPOINT of a Docker container; it installs another package with setup.py and then runs something from it. I use subprocess to install the second package. The terminal output says the second package was installed by the subprocess, but I can't import it. Here's the code:
import os
import pkgutil
import subprocess
import sys
import tarfile

my_tar = tarfile.open('user_package.gz')
package_main_directory = os.path.commonprefix(my_tar.getnames())
my_tar.extractall('/')
my_tar.close()
os.chdir('/' + package_main_directory)
subprocess.check_call([sys.executable, 'setup.py', 'develop'])
for importer, modname, ispkg in pkgutil.iter_modules():
    if ispkg:
        print("Found submodule %s (is a package: %s)" % (modname, ispkg))
from module import function
I keep getting the error "no module named module" although the terminal output says the package is installed. pkgutil.iter_modules() isn't showing the second package's name either. I tried pip install as well and it gives the same error. Any ideas on why this is happening?
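setup.py develop normally installs by writing an egg-link/.pth entry into site-packages, and .pth files are only read when an interpreter starts, so the process that ran the subprocess never gains the new sys.path entry. One way to sidestep that entirely, sketched here with the question's placeholder names, is to do the work that needs the package in a fresh interpreter after the install:

import subprocess
import sys

# install in-place, then run the code that needs it in a new process,
# which rebuilds sys.path (including fresh .pth entries) from scratch
subprocess.check_call([sys.executable, 'setup.py', 'develop'])
subprocess.check_call([sys.executable, '-c',
                       'from module import function; function()'])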

Cannot install python external package with pip main

I have an external package and I want to install it from code inside my app. The code looks like this:
try:
    from pip import main as pipmain
except ImportError:
    from pip._internal import main as pipmain
pipmain(['install', module])
# NOTE: module is the package's name as a string
but I get an error: TypeError: 'module' object is not callable
After figuring out what happens, I found the answer in a GitHub issue comment:
in recent pip versions, from pip._internal import main imports a submodule, so the call should be pipmain.main(...) rather than pipmain(...).
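A small sketch that works with either layout (older pip exposes main as a callable, newer pip exposes a main module whose main() function does the work); the callable check is my addition, not from the linked comment:

try:
    from pip import main as pipmain            # pip < 10
except ImportError:
    from pip._internal import main as pipmain  # pip >= 10; a module in recent releases

def install(module):
    # 'module' is the package name as a string
    if callable(pipmain):
        pipmain(['install', module])
    else:
        pipmain.main(['install', module])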

Import python package after installing it with setup.py, without restarting?

I have a package that I would like to automatically install and use from within my own Python script.
Right now I have this:
>>> # ... code for downloading and un-targzing
>>> from subprocess import call
>>> call(['python', 'setup.py', 'install'])
>>> from <package> import <name>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named <package>
Then I can continue like this:
>>> exit()
$ python
>>> from <package> import <name>
And it works just fine. For some reason, Python is able to pick up the package just fine if I restart after running the setup.py file, but not if I don't. How can I make it work without having the restart step in the middle?
(Also, is there a superior alternative to using subprocess.call() to run setup.py within a python script? Seems silly to spawn a whole new Python interpreter from within one, but I don't know how else to pass that install argument.)
Depending on your Python version, you want to look into imp or importlib.
e.g. for Python 3, you can do:
from importlib.machinery import SourceFileLoader

# SourceFileLoader takes the module name and the path to its source file;
# __init__.py is the package's entry point
init_path = ...  # os.path to the package's __init__.py
s = SourceFileLoader('new_package_name', init_path).load_module()
or, if you trust that your Python path already knows about the directory:
__import__('new_package_name')
Hope this helps,
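On Python 3.3+ there is also a direct answer to "install, then import, without restarting": invalidate the import system's caches after the install so the path finders notice the files that just appeared. A minimal sketch, with 'new_package_name' as a placeholder:

import importlib
import subprocess
import sys

subprocess.check_call([sys.executable, 'setup.py', 'install'])
importlib.invalidate_caches()   # make the finders re-scan directories already on sys.path
pkg = importlib.import_module('new_package_name')

If the install adds a new directory to the path via a .pth file, this alone is not enough; re-running site.addsitedir() on site-packages covers that case.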
I downloaded seaborn from GitHub.
Through the command prompt, I cd'd into the downloads\seaborn folder and ran
python setup.py install
Then, using Spyder from Anaconda, I checked whether it was installed by running the following in a console:
import pip
sorted(["%s==%s" % (i.key, i.version)
        for i in pip.get_installed_distributions()])
Seeing that it was not there, I went to Tools and selected "Update module names list".
Trying the previous code again in a Python console, the library was still not showing.
Restarting Spyder and trying import seaborn worked.
Hope this helps.
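As a side note, pip.get_installed_distributions() is not a public API and is gone from newer pip releases; a hedged alternative that lists the same information through setuptools' pkg_resources:

import pkg_resources

# one "name==version" string per installed distribution
sorted("%s==%s" % (dist.project_name, dist.version)
       for dist in pkg_resources.working_set)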

ImportError dependency install resulting in NameError

I've been writing a little script to bootstrap an environment for me, but I ran into some confusion when attempting to handle module import errors. My intention was to catch any import error for the yaml module, then use apt to install it, and re-import it...
import datetime
import subprocess
import sys

# __version__ is defined earlier in the real script

def install_yaml():
    print "Attempting to install python-yaml"
    print "=============== Beginning of Apt Output ==============="
    if subprocess.call(["apt-get", "-y", "install", "python-yaml"]) != 0:
        print "Failure whilst installing python-yaml"
        sys.exit(1)
    print "================= End of Apt Output =================="
    # if all has gone to plan attempt to import yaml
    import yaml
    reload(yaml)

try:
    import yaml
except ImportError:
    print "Failure whilst importing yaml"
    install_yaml()

grains_config = {}
grains_config['bootstrap version'] = __version__
grains_config['bootstrap time'] = "{0}".format(datetime.datetime.now())
with open("/tmp/doc.yaml", 'w+') as grains_file:
    yaml.dump(grains_config, grains_file, default_flow_style=False)
Unfortunately, when run I get a NameError:
Traceback (most recent call last):
File "importtest-fail.py", line 32, in <module>
yaml.dump(grains_config, grains_file, default_flow_style=False)
NameError: name 'yaml' is not defined
After some research I discovered the reload builtin (Reload a previously imported module), which sounded like what I wanted, but it still results in a NameError on the yaml module's first use.
Does anyone have any suggestions that would allow me to handle the import exception, install the dependencies and "re-import" it?
I could obviously wrap the Python script in some bash to do the initial dependency install, but it's not a very clean solution.
Thanks
You imported yaml as a local in install_yaml(). You'd have to mark it as a global instead:
global yaml
inside the function, or better still, move the import out of the function and put it right after calling install_yaml().
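A minimal sketch of that second suggestion applied to the script above (behaviour otherwise unchanged):

try:
    import yaml
except ImportError:
    print "Failure whilst importing yaml"
    install_yaml()   # apt-get install python-yaml, as in the question
    import yaml      # re-import at module scope so the name is bound globally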
Personally, I'd never auto-install a dependency this way. Just fail and leave it to the administrator to install the dependency properly. They could be using other means (such as a virtualenv) to manage packages, for example.

Get the version from distutils setup.py

How can I import or read the VERSION from the setup.py file so that I can log the version at runtime?
This way I can make sure that the results obtained are from this particular version of my package.
The following is the content of my setup.py file (simplified to the necessary part):
import distutils.core

VERSION = '0.1.0'
LICENSE = 'GPLv2'

# KWARGS (name, version=VERSION, license=LICENSE, ...) is built here, omitted for brevity
distutils.core.setup(**KWARGS)
When I try to do:
import setup
I get the following error:
    distutils.core.setup(**KWARGS)
  /usr/lib/python2.6/distutils/core.pyc in setup(**attrs)
        ok = dist.parse_command_line()
    except DistutilsArgError, msg:
        raise SystemExit, gen_usage(dist.script_name) + "\nerror: %s" % msg
    if DEBUG:
SystemExit: error: no commands supplied
There is a way to get the version from your setup script:
python setup.py --version
But I'm not sure I understand what you mean by "log the version at runtime"; the setup script is normally not installed with your modules, so people use other ways to put a version number in their code, like a __version__ attribute in their module or __init__.py file.
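A common single-source-of-truth pattern along those lines, sketched with a placeholder package name: keep __version__ in the package and have setup.py read it back.

# mypackage/__init__.py
__version__ = '0.1.0'

# setup.py
import distutils.core
import mypackage

distutils.core.setup(name='mypackage', version=mypackage.__version__)

At runtime the application can then import mypackage and log mypackage.__version__ directly.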
In your example, setup is executed automatically on import; you have to replace:
distutils.core.setup(**KWARGS)
with:
if __name__ == '__main__':
    distutils.core.setup(**KWARGS)
This way, setup() is only executed if you actually run setup.py as a script.
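With the guard in place, importing the file no longer triggers the command-line machinery, so the constants can be read at runtime (a small usage sketch):

import setup

print(setup.VERSION)   # '0.1.0'
print(setup.LICENSE)   # 'GPLv2'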
