Most Python packages follow the convention of providing the version as a string at [package_name].version.version. Let's use NumPy as an example: say I want to import NumPy but ensure that the minimum version is 1.18.1. This is what I currently do:
import numpy as np
if tuple(map(int, np.version.version.split('.'))) < (1, 18, 1):
    raise ImportError('Numpy version too low! Must be >= 1.18.1')
While this seems to work, it requires me to import the package before the version can be checked. It would be nice to not have to import the package if the condition is not satisfied.
It also seems a bit "hacky" and it feels like there's probably a method using the Python standard library that does this. Something like version('numpy') > '1.18.1'. But I haven't been able to find one.
Is there a way to check the version of a package BEFORE importing it within the bounds of the Python standard library?
I am looking for a programmatic solution in Python code. Telling me to use a requirements.txt or pip install is not answering the question.
Edit to add context: Adding this package to my requirements.txt is not useful as the imported package is supposed to be an optional dependency. This code would go in a submodule that is optionally loaded in the __init__.py via a try statement. Essentially, some functionality of the package is only available if a package of minimum version is found and successfully imported.
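One standard-library route worth sketching here (not part of the original question) is importlib.metadata, available since Python 3.8: it reads an installed distribution's metadata without importing the package. A minimal sketch, assuming the distribution name matches the import name and using the same naive dotted-number comparison as above:
from importlib.metadata import version, PackageNotFoundError

def _as_tuple(version_string):
    # '1.18.1' -> (1, 18, 1); non-numeric parts such as 'rc1' are ignored
    return tuple(int(part) for part in version_string.split('.') if part.isdigit())

def meets_minimum(dist_name, minimum):
    # Reads the version from installed metadata only; the package is not imported.
    try:
        return _as_tuple(version(dist_name)) >= _as_tuple(minimum)
    except PackageNotFoundError:
        return False

if meets_minimum('numpy', '1.18.1'):
    import numpy as np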
Run pip show for a specific package using subprocess, then parse the result to compare the installed version to your requirement(s).
>>> import subprocess
>>> result = subprocess.run(['pip', 'show', 'numpy'], stdout=subprocess.PIPE)
>>> result.stdout
b'Name: numpy\r\nVersion: 1.17.4\r\nSummary: NumPy is the fundamental package for array computing with Python.\r\nHome-page: https://www.numpy.org\r\nAuthor: Travis E. Oliphant et al.\r\nAuthor-email: None\r\nLicense: BSD\r\nLocation: c:\\python38\\lib\\site-packages\r\nRequires: \r\nRequired-by: scipy, scikit-learn, perfplot, pandas, opencv-python, matplotlib\r\n'
>>> result = subprocess.run(['pip', 'show', 'pandas'], stdout=subprocess.PIPE)
>>> for thing in result.stdout.splitlines():
... print(thing)
b'Name: pandas'
b'Version: 0.25.3'
b'Summary: Powerful data structures for data analysis, time series, and statistics'
b'Home-page: http://pandas.pydata.org'
b'Author: None'
b'Author-email: None'
b'License: BSD'
b'Location: c:\\python38\\lib\\site-packages'
b'Requires: numpy, python-dateutil, pytz'
b'Required-by: '
>>>
>>> from email.header import Header
>>> result = subprocess.run(['pip', 'show', 'pandas'], stdout=subprocess.PIPE)
>>> h = Header(result.stdout)
>>> print(str(h))
Name: pandas
Version: 0.25.3
Summary: Powerful data structures for data analysis, time series, and statistics
Home-page: http://pandas.pydata.org
Author: None
Author-email: None
License: BSD
Location: c:\python38\lib\site-packages
Requires: python-dateutil, pytz, numpy
Required-by:
>>> d = {}
>>> for line in result.stdout.decode().splitlines():
... k,v = line.split(':',1)
... d[k] = v
>>> d['Version']
' 0.25.3'
>>>
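Putting the pieces together, here is a small helper (a sketch, not part of the original transcript) that runs pip show, parses the Version field and compares it to a required minimum:
import subprocess

def installed_version(package):
    # Returns the installed version as a tuple of ints, or None if pip
    # prints nothing (i.e. the package is not installed).
    result = subprocess.run(['pip', 'show', package], stdout=subprocess.PIPE)
    for line in result.stdout.decode().splitlines():
        key, _, value = line.partition(':')
        if key == 'Version':
            return tuple(int(p) for p in value.strip().split('.') if p.isdigit())
    return None

v = installed_version('numpy')
print(v is not None and v >= (1, 18, 1))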
Or look at everything:
>>> result = subprocess.run(['pip', 'list'], stdout=subprocess.PIPE)
>>> for thing in result.stdout.splitlines():
... print(thing)
b'Package Version '
b'---------------- ----------'
b'-illow 6.2.1 '
b'aiohttp 3.6.2 '
b'appdirs 1.4.3 '
...
Use containers to control all the dependencies and the runtime environment of your program. An easy way to do this would be to create a Docker image that holds the exact version of Python that you require. Then use a requirements.txt to install the Python modules you need at the exact versions.
Lastly, you can create a shell script or something similar to actually spin up the Docker container with one click.
Alternatively, if Docker seems overkill, check out venv.
I noticed that the performance of mpmath, as odd as it sounds, depends on whether sagemath is installed or not, regardless of whether the sage module is loaded in the current session. In particular, I experienced this for operations on multiple-precision floats.
Example:
from mpmath import mp
import time

mp.prec = 650
# x_mpmath and y_mpmath were not defined in the snippet as posted;
# any two multiprecision floats will do for the timing, e.g.:
x_mpmath = mp.mpf(1) / 3
y_mpmath = mp.mpf(2) / 3

t = time.time()
for i in range(1000000):
    x_mpmath + y_mpmath
w = time.time()
print('plus:\t', (w-t), 'μs')

t = time.time()
for i in range(1000000):
    x_mpmath * y_mpmath
w = time.time()
print('times:\t', (w-t), 'μs')
# If sagemath is installed:
# plus: 0.12919950485229492 μs
# times: 0.17601895332336426 μs
#
# If sagemath is *not* installed:
# plus: 0.6239776611328125 μs
# times: 0.6283771991729736 μs
while in both cases the mpmath module is exactly the same:
import mpmath
print(mpmath.__file__)
# /usr/lib/python3.9/site-packages/mpmath/__init__.py
I thought that mpmath's backend would depend on some sagemath dependency, and if that is missing it falls back to a less optimized one, but I cannot figure out what it is precisely. My goal is to be able to install only the required packages to speed up mpmath instead of installing all of sagemath.
Since this may very well be dependent on how things are packaged, you might need to have details on my system: I am using Arch Linux and all packages are updated to the most recent versions (sagemath 9.3, mpmath 1.2.1, python 3.9.5).
I found the explanation. In /usr/lib/python3.9/site-packages/mpmath/libmp/backend.py at line 82 there is
if 'MPMATH_NOSAGE' not in os.environ:
    try:
        import sage.all
        import sage.libs.mpmath.utils as _sage_utils
        sage = sage.all
        sage_utils = _sage_utils
        BACKEND = 'sage'
        MPZ = sage.Integer
    except:
        pass
This loads all of sage if sagemath is installed and also sets it as a backend. This means that the following library is loaded next:
import sage.libs.mpmath.ext_libmp as ext_lib
from /usr/lib/python3.9/site-packages/mpmath/libmp/libmpf.py at line 1407. Looking at the __file__ of that module shows that it is a .so object, i.e. compiled, and therefore faster.
This also means that exporting MPMATH_NOSAGE to any non-empty value forces the backend back to the default one (python or gmpy), and indeed I can confirm that the code I wrote in the question runs slower in that case, even with sagemath installed.
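To illustrate (a sketch based on the backend.py snippet quoted above, not code from the original answer), the selected backend can be inspected directly; note that the environment variable has to be set before mpmath is first imported:
import os
# Uncomment before the first mpmath import to force the non-sage backend:
# os.environ['MPMATH_NOSAGE'] = '1'

from mpmath.libmp.backend import BACKEND   # the module quoted above
print(BACKEND)   # 'sage' if sage.all imported successfully, otherwise 'gmpy' or 'python'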
I use a software library that has different function names in different versions of the library.
I try to use the following code:
some_variable = module.old_name_of_function()
But this code only works with the old version of the library.
I plan to use the code on different computers, with different installed versions of the software library.
Some computers have a newer version of the library installed, and there the following code should be used:
some_variable = module.new_name_of_function()
And if I use old_name_of_function() with the new version of the library, I get an error.
How can I solve this issue?
I suppose you could do
try:
    my_func = module.old_name_of_function
except AttributeError:
    my_func = module.new_name_of_function

some_variable = my_func()
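An equivalent sketch using getattr with a fallback (module and the two function names are the placeholders from the question, not a real API):
my_func = getattr(module, 'new_name_of_function',
                  getattr(module, 'old_name_of_function', None))
if my_func is None:
    raise AttributeError('neither function name is available in this version')
some_variable = my_func()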
You can use pkg_resources module for it (example for numpy):
import pkg_resources
pkg_resources.get_distribution("numpy").version
will return:
'1.15.2'
Then you can use if statements (or whatever branching you prefer) to run the function you need.
For example:
import pkg_resources
version = pkg_resources.get_distribution("numpy").version
v = version.split('.')
if int(v[0]) == 1 and int(v[1]) < 17:
    print('WAKA')
else:
    print('NEW WAKA')
will print 'WAKA' for every 1.X version of numpy, where X < 17.
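A slight variant of the same idea (a sketch): pkg_resources can also parse the version for you, which avoids manual splitting and copes with suffixes such as rc1:
import pkg_resources

installed = pkg_resources.parse_version(
    pkg_resources.get_distribution("numpy").version)
if installed < pkg_resources.parse_version("1.17"):
    print('WAKA')
else:
    print('NEW WAKA')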
With the following code,
# $ pip install pysha3
import sys
if sys.version_info < (3, 4):
    import sha3
import hashlib
s = hashlib.new("sha3_512")
s.update(b"")
print(s.hexdigest())
I am getting
0eab42de4c3ceb9235fc91acffe746b29c29a8c366b7c60e4e67c466f36a4304c00fa9caf9d87976ba469bcbe06713b435f091ef2769fb160cdab33d3670680e
instead of
a69f73cca23a9ac5c8b567dc185a756e97c982164fe25859e0d1dcc1475c80a615b2123af1f5f94c11e3e9402c3ac558f500199d95b6d3e301758586281dcd26
cf. https://en.wikipedia.org/wiki/SHA-3#Examples_of_SHA-3_variants
Could anyone advise me?
The pysha3 module you found was based on a draft of the SHA-3 specification, before it was standardised.
The module was created as a proof of concept for Python issue 16113, and the code has not been updated since 2012. The NIST standard (FIPS 202) wasn't finalised until August 2015. As such, the implementation can't be used if you expect it to follow the released standard.
That ticket links to an implementation that does claim to have been updated to the standard: https://github.com/bjornedstrom/python-sha3. That package doesn't appear to be listed on PyPI, but can be installed with pip directly from GitHub:
pip install git+https://github.com/bjornedstrom/python-sha3
and this package does produce the expected result:
>>> import hashlib
>>> import sha3
>>> sha3.sha3_512(b'').hexdigest()
b'a69f73cca23a9ac5c8b567dc185a756e97c982164fe25859e0d1dcc1475c80a615b2123af1f5f94c11e3e9402c3ac558f500199d95b6d3e301758586281dcd26'
This package doesn't patch the built-in hashlib.new() constructor, but that's easily done by plugging the constructor into the module's constructor cache:
>>> hashlib.__builtin_constructor_cache['sha3_512'] = sha3.sha3_512
>>> hashlib.new('sha3_512')
<sha3.SHA3512 object at 0x10b381a90>
SHA3 has been added to the built-in hashlib module in Python 3.6:
What’s New In Python 3.6
The SHA-3 hash functions sha3_224(), sha3_256(), sha3_384(),
sha3_512(), and SHAKE hash functions shake_128() and shake_256() were
added. (Contributed by Christian Heimes in issue 16113. Keccak Code
Package by Guido Bertoni, Joan Daemen, Michaël Peeters, Gilles Van
Assche, and Ronny Van Keer.)
It can be used in the following way:
>>> import sys
>>> import hashlib
>>> s = hashlib.new("sha3_512") # sha3_224, sha3_256 and sha3_384 are also available
>>> s.update(b"")
>>> print(s.hexdigest())
a69f73cca23a9ac5c8b567dc185a756e97c982164fe25859e0d1dcc1475c80a615b2123af1f5f94c11e3e9402c3ac558f500199d95b6d3e301758586281dcd26
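For completeness, the SHAKE variants mentioned in the same 3.6 changelog take an explicit output length (in bytes) when producing a digest:
import hashlib
print(hashlib.shake_128(b"").hexdigest(16))   # first 16 bytes of the XOF output, as hex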
I have been using the QGIS Python console to automate my needs. I've used a few processing algorithms (such as distance matrix) to work on my vector layers, which output CSV files. I need R to work on these files before bringing them back into my Python console as variables.
Is there a way I can run R directly from the Python console (maybe using a package such as rpy2)?
I guess you can easily interact with an R instance in the QGIS Python console using rpy2.
Try the following lines of code in the QGIS Python console:
>>> import rpy2.rinterface as rinterface
>>> rinterface.set_initoptions((b'rpy2', b'--no-save'))
>>> rinterface.initr()
0
>>> from rpy2.robjects.packages import importr
>>> import rpy2.robjects as robjects
You can now interact with R like this:
>>> robjects.r("""seq(1,12);""")
<IntVector - Python:0x7fa5f6e4abd8 / R:0x769f4a8>
[ 1, 2, 3, ..., 10, 11, 12]
Or import some libraries, for example:
>>> rutils = importr("utils")
>>> rgraphics = importr('graphics')
Take a look at the documentation of rpy2: I have successfully used these methods to run some personal scripts and some libraries installed from CRAN (running multiple statements in robjects.r("""...""") and grabbing the output in a Python variable to use in QGIS).
(If I remember correctly, on Windows I had to set some environment variables first, such as R_HOME or R_USER.)
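As a concrete sketch of that workflow (the file name distance_matrix.csv and the colMeans summary are hypothetical, and the rpy2 session is assumed to be initialised as above):
import rpy2.robjects as robjects

r_result = robjects.r("""
    m <- read.csv('distance_matrix.csv')   # a CSV written by a Processing algorithm
    colMeans(m[, -1])                      # summarise the numeric columns
""")
column_means = list(r_result)              # convert the R vector to a plain Python list
print(column_means)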
Also, if you haven't seen it, take a look at this page of the QGIS documentation: 17.31. Use R scripts in Processing. It offers a convenient way to use your existing R scripts with some slight additions.
Python 3.3 cannot run this code because it does not have pygraph. Is there a simple way to install pygraph, or can I amend the code in some way? As far as I can tell the rest is fine; it is just this one rather major issue.
# Import graphviz
import sys
# Import pygraph
from pygraph.classes.graph import graph
from pygraph.classes.digraph import digraph
from pygraph.algorithms.searching import breadth_first_search
from pygraph.readwrite.dot import write
# Graph creation
gr = graph()
# Add nodes and edges
gr.add_nodes(["Portugal","Spain","France","Germany","Belgium","Netherlands","Italy"])
gr.add_nodes(["Switzerland","Austria","Denmark","Poland","Czech Republic","Slovakia","Hungary"])
gr.add_nodes(["England","Ireland","Scotland","Wales"])
gr.add_edge(("Portugal", "Spain"))
gr.add_edge(("Spain","France"))
gr.add_edge(("France","Belgium"))
gr.add_edge(("France","Germany"))
gr.add_edge(("France","Italy"))
gr.add_edge(("Belgium","Netherlands"))
gr.add_edge(("Germany","Belgium"))
gr.add_edge(("Germany","Netherlands"))
gr.add_edge(("England","Wales"))
gr.add_edge(("England","Scotland"))
gr.add_edge(("Scotland","Wales"))
gr.add_edge(("Switzerland","Austria"))
gr.add_edge(("Switzerland","Germany"))
gr.add_edge(("Switzerland","France"))
gr.add_edge(("Switzerland","Italy"))
gr.add_edge(("Austria","Germany"))
gr.add_edge(("Austria","Italy"))
gr.add_edge(("Austria","Czech Republic"))
gr.add_edge(("Austria","Slovakia"))
gr.add_edge(("Austria","Hungary"))
gr.add_edge(("Denmark","Germany"))
gr.add_edge(("Poland","Czech Republic"))
gr.add_edge(("Poland","Slovakia"))
gr.add_edge(("Poland","Germany"))
gr.add_edge(("Czech Republic","Slovakia"))
gr.add_edge(("Czech Republic","Germany"))
gr.add_edge(("Slovakia","Hungary"))
# Draw as PNG
dot = write(gr)
f = open('europe.dot', 'a')
f.write(dot)
f.close()
import os
command = '"C:\\Program Files\\Graphviz 2.28\\bin\\dot.exe" -Tpng europe.dot > europe.png'
print(command)
os.system(command)
os.system('europe.png')
You should be able to use pip install python-graph-core or easy_install python-graph-core. If this doesn't work, you will need to download it from here, unpack it and, in the resulting directory, run:
python setup.py install
or
python3 setup.py install
You will need to do this for both the -core and -dot packages.
If you are doing it this way, you will need to ensure the dependencies (pydot and pyparsing) are met in your installation.
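If you want the script itself to fail with a clearer message when the package is still missing, a defensive import is a reasonable sketch (not part of the original answer):
try:
    from pygraph.classes.graph import graph
except ImportError as exc:
    raise SystemExit(
        "pygraph is not installed; try 'pip install python-graph-core'") from exc

gr = graph()   # proceed exactly as in the question's script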