I am trying to make a custom module in which modules such as numpy, scipy, etc. are imported. I am facing a problem of "deep" sequential imports, and I see that it is a general behaviour:
import numpy
print(numpy.sys)
This does not raise an error (so I can access sys through numpy, which is quite weird in my opinion). Can I avoid this somehow in my own package?
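For my own package I could imagine something like the following (a sketch with a hypothetical mypackage/core.py; the underscore alias and the del are just conventions I have seen elsewhere), but I am not sure it is the right approach:
# mypackage/core.py -- hypothetical module
import numpy as _np          # underscore prefix marks the import as private by convention

def scaled(x, factor):
    # return x * factor as a numpy array
    return _np.asarray(x) * factor

import sys
MAXSIZE = sys.maxsize        # use the module at import time...
del sys                      # ...then drop it: mypackage.core.sys no longer exists
As far as I understand, the del actually removes the attribute, while the underscore alias only signals that it is private.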
Related
I would like to know if it is possible to parameterize the way a module imports one of the modules it relies on.
My problem is the following. I have a number of generic tensorflow functions, such as losses, that work with both versions (1 and 2) of the API.
If the module is used with TF2, or with an old version of TF1, tensorflow needs to be imported as
import tensorflow as tf
However, if I use TF 1.15, or if I want to use version 1 of the API with TF2, tensorflow needs to be imported as
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
So the way the import is done cannot be automatically deduced from the TF version, as TF2 can be used in TF1 "compatibility" mode.
Is there a way I can change the way the import is done in the module?
A hack that seems to work for modules that are directly imported:
import my_module
my_module.tf = tf
That forces the module's tf to be the same as the current one. However:
This could have invisible and hard-to-track side effects, since tensorflow is first imported with a potentially different API requirement, which could mess up any global settings.
This works for modules imported directly, but not for modules that are imported by other modules, unless the hack is propagated to all of them.
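The only cleaner alternative I can think of is to funnel the import through one shim module that the whole package shares; a rough sketch, with a hypothetical my_package/_tf.py and an environment variable I made up:
# my_package/_tf.py -- hypothetical shim; every other module in the package
# would do "from my_package._tf import tf" instead of importing tensorflow itself
import os

if os.environ.get("MY_PACKAGE_TF_V1") == "1":   # made-up switch
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
else:
    import tensorflow as tf
Since Python caches the shim in sys.modules, every module would share the same tf object, but the switch still has to be decided before anything from the package is imported.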
To check which version is installed, I use, as in https://stackoverflow.com/a/32965521/2069610:
>>> import pkg_resources
>>> pkg_resources.get_distribution("tensorflow").version
'XX.XX'
Another idea would be to grep:
$ pip freeze | grep tensorflow
tensorflow==XX.XX
Are you using pip or conda? That might also offer a couple of different options, based on the output of conda list or similar.
Based on that, you could use one or the other import...
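For example, a sketch of the switch at module level (assuming, for illustration, that 1.15 should get the compat.v1 API and everything else the native one; as you note, the version alone cannot tell you that TF2 is meant to run in v1 compatibility mode):
import pkg_resources

_tf_version = pkg_resources.get_distribution("tensorflow").version

if _tf_version.startswith("1.15"):
    import tensorflow.compat.v1 as tf   # v1 API through the compat layer
    tf.disable_v2_behavior()
else:
    import tensorflow as tf             # native API (TF2, or an older TF1)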
I am using pytorch, and pylint does not recognize a few functions, for example torch.stack. However, if I do import torch._C as torch, it seems to work fine.
If I do the above, the actual modules that exist inside the torch package, like torch.cuda or torch.nn, need to be imported individually, since torch.cuda would then point to torch._C.cuda and hence won't work.
Is there a way to tell pylint to look at both torch and torch._C when I do import torch or even whenever it sees torch? I don't think I would use torch to reference any other thing in my code.
A solution for now is to add torch to generated-members:
pylint --generated-members="torch.*" ...
or in pylintrc under the [TYPECHECK] section:
generated-members=torch.*
I found this solution in a reply in the GitHub discussion of the pytorch issue [Minor Bug] Pylint E1101 Module 'torch' has no 'from_numpy' member #701. It is less satisfying than whitelisting, because I guess it won't catch references to something that actually isn't a member, but it's the best solution I've come across so far.
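For reference, a minimal .pylintrc sketch containing just that option (merge it into your existing config rather than replacing it):
# .pylintrc
[TYPECHECK]
# don't flag members pylint cannot resolve statically on torch objects
generated-members=torch.*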
I'm currently trying to teach myself tensorflow. The new version has keras built in.
I can access the Dense function in the following way
import tensorflow as tf
tf.keras.layers.Dense
but this does not work:
from tensorflow.keras.layers import Dense
Why is that? I notice that:
from tensorflow.python.keras.layers import Dense
does work. When I import tensorflow, does it somehow know to add the .python to the module name?
In the GitHub repo for TensorFlow, if you look at the two __init__.py files inside tensorflow-master/tensorflow/python/keras/ and tensorflow-master/tensorflow/python/keras/layers/, you can see which modules are imported as part of the package structure. That determines what you, as a user of the package and its modules, can import and how.
David Beazley has a really good talk on the inner workings of this:
https://www.youtube.com/watch?v=0oTh1CXRaQ0
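As a toy illustration (a made-up package mypkg, not TensorFlow's real layout), the __init__.py of a package decides which names a plain import of the package exposes:
# mypkg/__init__.py
from mypkg.python import keras          # attribute re-export: mypkg.keras now resolves
# user code
import mypkg
mypkg.keras                             # works: plain attribute access on the package
from mypkg.python import keras          # works: this is the real module path
from mypkg.keras import Dense           # fails: mypkg.keras is not an importable module path
A name exposed this way is only an attribute of the package object, not a real dotted module path, which is why attribute access can succeed while importing through the aliased path fails.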
When attempting to import from an alias - which is common in Scala - I was surprised to see the following results:
Create an alias
import numpy as np
Use the alias to import modules it contains
from np import linalg
ImportError: No module named np.linalg
Is there any other syntax/equivalent in python useful for importing modules?
Using import module as name does not create an alias. You misunderstood the import system.
Importing does two things:
Load the module into memory and store the result in sys.modules. This is done once only; subsequent imports re-use the already loaded module object.
Bind one or more names in your current namespace.
The as name syntax lets you control the name in the last step.
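In other words, the as form only changes which name is bound in your namespace; import numpy as np is roughly equivalent to:
import numpy   # step 1: load (or re-use) the module, stored as sys.modules['numpy']
np = numpy     # step 2: bind the name you asked for
del numpy      # only the chosen name stays bound in your namespace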
For the from module import name syntax, you still need to name the full module, as module is looked up in sys.modules. If you really want to have an alias for this, you would have to add extra references there:
import numpy # loads sys.modules['numpy']
import sys
sys.modules['np'] = numpy # creates another reference
However, doing so can have side effects when you are also importing submodules. Generally speaking, you don't want to create aliases for packages by poking about in sys.modules without also creating aliases for all (possible) submodules, as not doing so can cause Python to re-import submodules as separate namespaces.
In this specific case, importing numpy also triggers the loading of numpy.linalg, so all you really have to do is:
import numpy as np
# np.linalg now is available
No module aliasing is needed. For packages that don't import submodules automatically, you'd have to use:
import package as alias
import package.submodule
and alias.submodule is then available anyway, because a submodule is always added as an attribute on the parent package.
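A quick standard-library illustration (urllib does not import its submodules automatically):
import urllib as alias
import urllib.request        # load the submodule under its real name
print(alias.request)         # available: the submodule was added as an attribute of urllib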
My understanding of your example is that since you had already imported numpy, you couldn't re-import it under an alias, as the linalg portion would already have been imported.
I thought that the code in python-inverse-of-a-matrix was extremely interesting, particularly since I have used numpy for several years in computations that involve matrices. I was disappointed that the two imports from numpy failed. Here are the imports:
from numpy import matrix
from numpy import linalg
Neither matrix nor linalg was found in the numpy package. Clearly I am missing something quite obvious (just not to me :) ).
I use Linux (Kubuntu) and installed numpy as a Debian package. Are there other packages for "matrix" and for "linalg"? If so, what are they?
Thank you in anticipation,
OldAl.
Most likely, you have a numpy.py or numpy.pyc file in your local directory... and python is finding it and importing it instead of the numpy package you expect.
Try this before importing.
import numpy
print(numpy.__file__)
You'll probably find that numpy.__file__ is pointing not to the numpy package, but to something you did not intend to import.
In general, it's a good idea to name your own modules with different names from known/popular packages.
SOLVED
The deb package numpy simply does not have the matrix and linalg sub-packages.
In Ubuntu or Kubuntu one needs to import scipy as well. SciPy expands the namespace of numpy and adds the matrix and linalg packages.
OldAl.