Tool to determine the lowest version of Python required - python

Is there something similar to Pylint that will look at a Python script (or run it) and determine which version of Python each line (or function) requires?
For example, theoretical usage:
$ magic_tool <<EOF
with something:
    pass
EOF
1: 'with' statement requires Python 2.6 or greater
$ magic_tool <<EOF
class Something:
    @classmethod
    def blah(cls):
        pass
EOF
2: classmethod requires Python 2.2 or greater
$ magic_tool <<EOF
print """Test
"""
EOF
1: Triple-quote requires Python 1.5 or later
Is such a thing possible? I suppose the simplest way would be to have all Python versions on disk, run the script with each one and see what errors occur.
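For instance, a crude version of that brute-force check might look like the sketch below; it assumes the interpreters are all installed and on the PATH (the interpreter names are assumptions), and uses py_compile so the script is only compiled, never executed:
import subprocess

# Byte-compile the script with each interpreter; a non-zero exit status
# means that version rejects the syntax.
for interp in ['python2.4', 'python2.6', 'python3.1']:
    rc = subprocess.call([interp, '-m', 'py_compile', 'somescript.py'])
    print('%s: %s' % (interp, 'OK' if rc == 0 else 'syntax error'))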

Inspired by this excellent question, I recently put together a script that tries to do this. You can find it on github at pyqver.
It's reasonably complete but there are some aspects that are not yet handled (as mentioned in the README file). Feel free to fork and improve it!

Not an actually useful answer, but here it goes anyway.
I think this should be doable (though probably quite an exercise). For example, you could make sure you have all the official grammars for the versions you want to check, like this one.
Then parse the bit of code starting with the first grammar version.
Next you need a similar map of all the built-in module namespaces and parse the code again starting with the earliest version, though it might be tricky to differentiate between built-in modules and modules that are external or something in between like ElementTree.
The result should be an overview of versions that support the syntax of the code and an overview of the modules and which version (if at all) is needed to use it. With that result you could calculate the best lowest and highest version.
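As a rough illustration of the module-mapping step, here is a minimal sketch built on the ast module. The version table is hand-maintained and the entries are illustrative only; note also that ast.parse can only parse syntax that the running interpreter itself accepts:
import ast

# Hand-maintained map: top-level stdlib module -> first version shipping it.
# Entries are illustrative, not a complete or authoritative table.
STDLIB_ADDED_IN = {
    'json': (2, 6),
    'multiprocessing': (2, 6),
    'argparse': (2, 7),
}

def min_version_for_imports(source):
    """Return the lowest Python version implied by the imports alone."""
    required = (2, 0)
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            added = STDLIB_ADDED_IN.get(name.split('.')[0])
            if added and added > required:
                required = added
    return required

print(min_version_for_imports('import json\nimport argparse'))  # (2, 7)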

The tool pyqver from Greg Hewgill hasn't been updated in a while.
vermin is a similar utility which, in verbose mode (-vvv), shows which lines were considered in the decision.
% pip install vermin
% vermin -vvv somescript.py
Detecting python files..
Analyzing using 8 processes..
!2, 3.6 /path/to/somescript.py
L13: f-strings require 3.6+
L14: f-strings require 3.6+
L15: f-strings require 3.6+
L16: f-strings require 3.6+
print(expr) requires 2+ or 3+
Minimum required versions: 3.6
Incompatible versions: 2
Bonus: With the parameter -t=V you can define a target version V you want to be compatible with. If this version requirement is not met, the script will exit with exit code 1, making it easy to integrate into a test suite.
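For example, as a quick gate in a test run (using the exit-code behaviour described above):
% vermin -t=3.6 somescript.py || echo "requires something newer than 3.6"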

Related

Is there a clean way to add Python 3-only functionality to a class whose other functionality is used in 2.7?

I'm creating a project in Python that I intend to be able to run under both Python 2.7 and Python 3. I created a class where it became apparent that a nice piece of functionality was available under Python 3 using some Python 3-specific functionality. I don't believe I can replicate the same functionality in Python 2.7, and am not trying to do so. But I intend for the Python 3 app to perhaps have some additional functionality as a consequence.
Anyway, I was hoping that so long as the 2.7 app never called the functions that used the 3.x functionality I'd be okay. But, no, because the presence of the code generates a compile-time error in 2.7, so it spits the dummy despite the function never being called at runtime. And because of Python's lack of any compile-time guards I'm not entirely sure what the best solution is.
I guess I could create a subclass of MyClass, call it MyClass3, put it in another module and add the extra functions there. But that makes a lot of things substantially grubbier...many more split code paths based on sys.version_info, circular inclusion problems unless I do a lot of file-splitting and...(waves hand). It's a mess that way. But maybe it's the only option available?
EDIT:
The original question made reference to "yield from" which is why the answer below discusses it. But the original question was not actually looking for advice on how to get "yield from" working in 2.7, but the moderator seemed to THINK this was what the question was about and flagged it as a duplicate accordingly.
As it happened, just as I edited the question to focus it on the issue of organizing the project to avoid compile errors (and to remove references to "yield from"), an answer came in that referenced the yield from issue and turned out to be super-useful.
yield from was backported to Python 2.7 in the following module: yieldfrom.
There is also a SO question about implementing yield from functionality back to python 2 that you may find useful as well as a blog post on the same topic.
AFAIK, there is no official backport of the functionality so there is nothing like from __future__ import yieldfrom that one could expect (please correct if you know otherwise).
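As for the organizational half of the question, here is a minimal sketch of the subclass-in-a-separate-module idea (module and class names are hypothetical): the 3-only syntax lives in a module that Python 2.7 never imports, so its compiler never sees it.
# myclass.py -- imported on both 2.7 and 3.x
import sys

class MyClass(object):
    def common(self):
        return 'available on both versions'

if sys.version_info[0] >= 3:
    # myclass3.py holds the subclass with the Python-3-only methods;
    # 2.7 never imports it, so its syntax is never compiled there.
    from myclass3 import MyClass3 as MyClass
Callers import MyClass from this module and transparently get the richer class on Python 3.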

Is "backporting" Python 3's `range` to Python 2 a bad idea?

One of my classes requires assignments to be completed in Python, and as an exercise, I've been making sure my programs work in both Python 2 and Python 3, using a script like this:
#!/bin/bash
# Run some PyUnit tests
python2 test.py
python3 test.py
One thing I've been doing is making range work the same in both versions with this piece of code:
import sys
# Backport Python 3's range to Python 2 so that this program will run
# identically in both versions.
if sys.version_info < (3, 0):
    range = xrange
Is this a bad idea?
EDIT:
The reason for this is that xrange and range work differently in Python 2 and Python 3, and I want my code to do the same thing in both. I could do it the other way around, but making Python 3 work like Python 2 seems stupid, since Python 3 is "the future".
Here's an example of why just using range isn't good enough:
for i in range(1000000000):
    do_something_with(i)
I'm obviously not using the list, but in Python 2, this will use an insane amount of memory.
You can use the six package, a Python 2 and 3 compatibility library written by one of the Python core developers. Among its features is a set of standard definitions for renamed modules and functions, including xrange -> range. The use of six is one of many recommendations in the official Porting Python 2 Code to Python 3 HOWTO included in the Python documentation set.
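For the range case specifically, six reduces the backport to a single import; six.moves.range is xrange on Python 2 and the built-in range on Python 3:
from six.moves import range

for i in range(1000000000):  # lazy on both Python 2 and 3
    do_something_with(i)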
What you probably should be doing is making sure it works cleanly under 2.x, and then passing it through 2to3 and verifying that the result works cleanly in 3.x. That way you won't have to jump through hoops such as redefining range as you have already done.
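A typical round trip looks like this; -w rewrites the file in place, leaving a .bak backup of the original:
$ 2to3 -w test.py
$ python3 test.py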

Using cscope to browse Python code with VIM?

Has anyone managed successfully using cscope with Python code? I have VIM 7.2 and the latest version of cscope installed, however it doesn't get my code's tags correctly (always off by a couple of lines). I tried the pycscope script but its output isn't supported by the modern version of cscope.
Any ideas? Or an alternative for browsing Python code with VIM? (I'm specifically interested in the extra features cscope offers beyond the simple tags of ctags)
EDIT: I'm going to run through the process step by step:
Preparing the sources:
Exuberant ctags has an option: -x
Alternatively, ctags can generate a cross reference file which lists,
in human readable form, information about the various source objects
found in a set of language files.
This is the key to the problem:
ctags -x $(ls **/*.py); # replace with find if no zsh
will give you your database of source objects in a known format, described under
man ctags; # make sure you use exuberant ctags!
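For reference, each cross-reference line shows roughly the tag name, its kind, the line number, the file, and the source line itself; the entry below is made up for illustration (column widths vary between ctags versions):
parse_args       function     42 mytool.py        def parse_args():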
Gnu Global is not limited to only the "out of the box" type of files. Any regular file format will serve.
Also, you can use gtags-cscope, which comes with global as mentioned in section 3.7 of the manual, for a possible shortcut using gtags. You'll end up with an input of a ctags tabular file which Global/gtags can parse to get your objects, or you can use the source for pycscope together with your ctags file of known format to get an input for the vim cscope commands in
if_cscope.txt.
Either way it's quite doable.
Perhaps you'd prefer idutils?
Definitely possible, since
z3c.recipe.tags
on PyPI makes use of both ctags and idutils to create tag files for a buildout, which is a method I shall investigate in a short while.
Of course, you could always use the greputils script below; it has support for idutils, we know idutils works with Python, and if that fails, there is also something called vimentry from this year that also uses Python, idutils and vim.
Reference links (not complete list):
gtags vimscript, uses Gnu global. updated 2008
greputils vimscript, contains support for the *id idutils, 2005
lid vimscript, Ancient, but this guy is pretty good, his tag and buffer howtos are amazing 2002
An updated version of pycscope, 2010
Hopefully this helps you with your problem; it certainly helped me. I would have been quite sad tonight with a maggoty pycscope.
This seems to work for me:
Change to the top directory of your python code. Create a file called cscope.files:
find . -name '*.py' > cscope.files
cscope -R
You may need to perform a cscope -b first if the cross references don't get built properly.
From a correspondence with the maintainer of cscope, this tool isn't designed to work with Python, and there are no plans to implement that compatibility. Whatever works now, apparently works by mistake, and there is no promise whatsoever that it will keep working.
It appears I've been using an out-of-date version of pycscope. The latest version, 0.3, produces output that cscope's DB supports. The author of pycscope told me that he figured out the output format for the cscope DB from reading the source code of cscope. That format isn't documented on purpose, but nevertheless it currently works with pycscope 0.3, which is the solution I'll be using.
I'm going to accept this answer since unfortunately no other answer provided help even after bounty was declared. No answers are upvoted, so I honestly have no idea where the bounty will go.
There is a wonderful Python-mode-klen plugin. If you have it and rope (python refactoring library) installed, then going to the definition of a particular term is as simple as <C-c>g or <C-c>rag (first is filetype mapping, second is a global one). There are much more useful features, some useless for me. All of them are disableable. Features from list of questions found at cscope-intro:
Where is this symbol used? <C-c>f. Rather confusing though, as the results in the quickfix list show a - instead of the actual lines (though they point to the correct location). Maybe it will be fixed.
Where is it defined?, What is this global symbol's definition?, Where is this function in the source files? <C-c>g
What is <...> global symbol's definition? <C-c>raj
Not very much, but I am not a very experienced user of ropevim.
I had the same question. After browsing the internet, I found a way to fix this:
create a python script: cscope_scan.py
import os

codeRootDir = os.getcwd()

__revision__ = '0.1'
__author__ = 'lxd'

FILE_TYPE_LIST = ['py']

if __name__ == '__main__':
    f = open('cscope.files', 'w')
    for root, dirs, files in os.walk(codeRootDir):
        for file in files:
            for file_type in FILE_TYPE_LIST:
                if file.split('.')[-1] == file_type:
                    f.write('%s\n' % os.path.join(root, file))
    f.close()
    cmd = 'cscope -bk'
    os.system(cmd)
Execute this script under your code's root folder; this will generate cscope.files and then execute cscope -b. I don't know what happens on my computer, the last two lines aren't working well, but I think manually typing cscope -bk is acceptable :)
This hack also seems to force cscope to go through Python files:
cscope -Rb -s *
If you accept that cscope is apparently not designed to work with Python.
A superset (any language, any tool) question: How to find all occurrences of a variable in Vim?

Supporting different versions of Python

This subject has been disturbing me for some time.
For my Python project I wanted to be able to support Python versions 2.4 to 3.1. I thought a bit about how to do this, and eventually decided to have four separate forks of the source code for four different versions of Python: 2.4, 2.5, 2.6 and 3.1.
I have come to view that as a bad decision, mainly because of Python's distribution annoyances, which I now have to deal with four times instead of once.
The question is, what to do?
My project is in the scientific computing field. I got the impression that there are still many people who depend on Python 2.4.
Someone suggested I just write my entire project for 2.4, but that is unacceptable for me. That will mean I could not use context managers, and that is something I will not give up on.
How do ordinary Python projects support 2.4? Do they avoid using context managers?
Also, is there any choice but having a separate fork for Python 3.1? I know there are all kinds of hacks for making the same code run on 2.x and 3.x, but one of the reasons I like Python is because the code is beautiful, and I will not tolerate making it ugly with compatibility hacks.
Please, give me your opinion.
Yes, you need to write for Python 2.4 syntax to support all of 2.4 - 2.7 in the same codebase.
Some changes in Python 2.6 and 2.7 aim to make it a bit easier to write compatible code with 3.x, but you have to drop support for 2.5 and below to do that.
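Two typical examples of those 2.6+ forward-compatibility switches are the print function and unicode literals; both lines below require 2.6 or later and fail with a SyntaxError on 2.5:
# Makes print and string literals behave as in Python 3 (requires 2.6+)
from __future__ import print_function, unicode_literals

print('behaves the same on 2.6+ and 3.x')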
There seem to be different answers to your problem.
First, if you want to offer all functions for all Python versions then yes, you're probably stuck with using the smallest possible functionality subset, hence writing your code for Python 2.4. Or you could backport features from newer interpreters if they're pure Python (which is not the case for context managers or coroutines).
Or you could split version support into features: if you think there's one (optional) feature which would benefit greatly from, let's say, context managers, you can make it available in a separate module and just say that 2.4 users don't have that feature.
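A minimal sketch of that split, with all names made up: the feature module uses 2.5+ syntax, and the import is guarded so that on 2.4 the package still works, minus the feature.
# mypackage/__init__.py
try:
    # managed.py uses the with statement, so it only compiles on 2.5+
    from mypackage.managed import managed_session
    HAVE_MANAGED = True
except SyntaxError:
    HAVE_MANAGED = False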
In order to support Python 3 take a look at the 2to3 helper, if you write your code properly there's a fair chance you won't need to maintain two separate codebases.
If the differences between versions are not extreme, you can try isolating them into a separate package or module in which you write version-specific code to act as an adaptation layer.
In a trivial fashion, this can be done without the separate module in simple cases, such as when a new version of Python makes standard a package that used to be external, such as (for example) simplejson. We have something similar to this in some code:
try:
    import simplejson as json
except ImportError:
    import json
For non-trivial stuff, such as what you probably have, you wouldn't want such things scattered randomly throughout your code base, so you should collect it all together in one place, when possible, and make that the sole section of your code that is version-specific.
This can't work so well for things where the syntax is different, such as your comment about wanting to use context managers. Sure, you could put the context manager code in a separate module, but that will likely complicate the places where you'd be using it. In such cases, you might backport certain critical features (I think context managers could be simulated somewhat easily) to this adapter module.
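For instance, the file-handling case could be simulated in the adapter module with a 2.4-safe helper that takes the body as a callable (the helper name here is made up):
def with_open(path, mode, body):
    # Poor man's context manager for Python 2.4: guarantees the close,
    # at the cost of passing the body in as a function.
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# usage: read a file with guaranteed cleanup
data = with_open('foo', 'rb', lambda f: f.read())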
Definitely having separate codebases is about the worst thing you could do, so I'd certainly recommend working away from that. At the least, don't arbitrarily use features from newer versions of Python, since although it may look nice to have them in the code (simplifying a particular block of logic perhaps), the fact that you have to duplicate that logic by forking the codebase, even on a single module, is going to more than negate the benefits.
We stick with older versions for legacy code, tweaking as new releases come out to support them but maintaining support for the older ones, sometimes with small adapter layers. At some point, a major release of our code shows up on the schedule, and we consider whether it's time to drop support for an older Python. When that happens, we try to leapfrog several versions, going (for example) from 2.4 to 2.6 directly, and only then start really taking advantage of the new syntax and non-adaptable features.
First of all, you need to keep in mind that Python 2.x mostly shares the same backward-compatible syntax, new features and additions aside. There are other things to consider that aren't necessarily errors, such as DeprecationWarning messages that, while not detrimental, are ugly and can cause confusion.
Python 3.x is backward-INcompatible by design and intends to leave all of the old cruft behind. Python 2.6 introduced many changes that are also in Python 3.x to help ease the transition. To see all of them I would recommend reading up on the What's New in Python 2.6 document. For this reason, it is very possible to write code for Python 2.6 that will also run in Python 3.1, but that is not without its caveats.
Even still, there are many minor syntax changes even between 2.x versions that will require you to wrap a lot of your code in try/except blocks, so if this is what you're willing to do then having a 2.x and a 3.x branch is totally possible. I think you'll find that you'll be doing a lot of attribute and type tests on your objects to do what you want to do.
I would recommend you check out the code of major projects out there that support various Python versions. Twisted Matrix is the first one that comes to mind. Their code is a wonderful example of how Python code should be written.
In the end, what you're setting out to do will not be easy, so prepare yourself for a lot of work!
You could try virtualenv and distribute your application using a single Python version. This may or may not be practical in your case though.
We have a related problem: a large system that supports both Jython and CPython back to 2.4. Basically you need to isolate code that needs to be written differently into a (hopefully small) set of modules, and have things get imported conditionally.
# module svn.py
import sys

if sys.platform.startswith('java'):
    from jythonsvn import *
else:
    from nativesvn import *
In your example you would use tests against sys.version_info, presumably. You could define some simple things in a utility module, that you would use like: from util import *
# module util.py
import sys

if sys.version_info[0] == 2:
    if sys.version_info[1] == 4:
        from util_py4 import *
...
Then things in util_py4.py like:
def any(seq):  # define workaround functions where possible
    for a in seq:
        if a:
            return True
    return False
...
Although this is a different problem from porting (since you want to continue supporting older versions), this link gives some useful guidance: http://python3porting.com/preparing.html (as do a variety of other articles about porting Python 2.x).
Your comment that you just cannot live without context managers is a little confusing though.
While context managers are powerful and make the code more readable and minimize the risk of errors, you just won't be able to have them in the code of your 2.4 version.
### 2.5 (with appropriate future import) and later
with open('foo', 'rb') as myfile:
    pass  # do something with myfile

### 2.4 and earlier
myfile = None
try:
    myfile = open('foo', 'rb')
    # do something with myfile
finally:
    if myfile:
        myfile.close()
Since you want to support 2.4 you'll have a body of code that just has to have the second syntax. Will it really be more elegant to write it BOTH ways?

Vim Python omni-completion failing to work on system modules

I'm noticing that even for system modules, code completion doesn't work too well.
For example, if I have a simple file that does:
import re
p = re.compile(pattern)
m = p.search(line)
If I type p., I don't get completion for methods I'd expect to see (I don't see search() for example, but I do see others, such as func_closure(), func_code()).
If I type m., I don't get any completion whatsoever (I'd expect .groups() in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and knows how to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.
Completion for this kind of thing is tricky, because it would need to execute the actual code to work.
For example p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
I never got the builtin omnicomplete to work for any languages. I had the most success with pysmell (which seems to have been updated slightly more recently on github than in the official repo). I still didn't find it to be reliable enough to use consistently but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement.
