How can I know what Python versions can run my code?

I've read in a few places that, generally, Python doesn't provide backward compatibility, meaning that a newer version of Python may break code that worked fine on earlier versions. If so, how can I, as a developer, know which versions of Python can execute my code successfully? Is there any set of rules or guarantees regarding this? Or should I just tell my users: run this with Python 3.8 (for example), no more, no less?

99% of the time, if it works on Python 3.x, it'll work on 3.y where y >= x. Enabling warnings when running your code on the older version will surface DeprecationWarnings when you use a feature that's deprecated (and therefore likely to change or be removed in a later Python version). Aside from that, you can read the What's New docs for each version between the known-good version and the later versions, in particular the Deprecated and Removed sections of each.
Beyond that, the only solution is good unit and component tests (you are using those, right? 😉) that you rerun on newer releases to verify that things still work and behavior doesn't change.
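For example, deprecation warnings can be forced on from the command line or inside a test harness (the -W flag and the warnings module are standard CPython features; the script name is illustrative):

    # From the shell: turn DeprecationWarnings into errors for one run.
    #   python -W error::DeprecationWarning myscript.py

    # Or programmatically, e.g. at the top of a test suite:
    import warnings
    warnings.simplefilter("error", DeprecationWarning)  # fail fast on deprecated APIs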

According to PEP 387, section "Making Incompatible Changes", before an incompatible change is made, a deprecation warning should appear for at least two minor Python versions of the same major version, or one minor version in an older major version. After that, it's fair game, in principle. This made me cringe with regard to safety: who knows whether people run airplanes on Python, and whether they always read the python-dev list. So if you have something that passes unit tests with 100% coverage and no deprecation warnings, your code should be safe for the next two minor releases.
You can avoid this issue and many others by containerizing your deployments.

tox is great for running unit tests against multiple Python versions. That’s useful in at least two major cases:
You want to ensure compatibility for a certain set of Python versions, say 3.7+, and to be told if you make any breaking changes.
You don’t really know what versions your code supports, but want to establish a baseline of supported versions for future work.
I don’t use it for internal projects where I have control over the environment my code will run in. It’s lovely for people publishing apps or libraries to PyPI, though.
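For reference, a minimal tox.ini along these lines runs your test suite against each listed interpreter (a sketch; the version list and the pytest dependency are assumptions about your project):

    [tox]
    envlist = py37, py38, py39, py310, py311

    [testenv]
    deps = pytest
    commands = pytest

Running tox then builds one virtualenv per entry in envlist and reports which versions pass.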

Related

How to handle multiple major versions of a dependency

I'm wondering how to handle multiple major versions of a dependency library.
I have an open source library, Foo, at an early release stage. The library is a wrapper around another open source library, Bar. Bar has just launched a new major version. Foo currently only supports the previous version. As I'm guessing that a lot of people will be very slow to convert from the previous major version of Bar to the new major version, I'm reluctant to switch to the new version myself.
How is this best handled? As I see it I have these options
Switch to the new major version, potentially denying people on the old version.
Keep going with the old version, potentially denying people on the new version.
Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time?
Separate the repository into two parts. Don't really want to do this.
The ideal solution for me would be to have the same code base, with some sort of C/C++ macro-like mechanism: if the installed version is the new one, use new_bar_function, else use old_bar_function. When installing the library from PyPI, the already-installed major version of Bar would dictate which version is used. If no version is installed, install the newest.
Would much appreciate some pointers.
Normally a package's version information is available after import as package.__version__. You could parse that information from Bar and decide, based on it, what to do (choose the appropriate function calls, halt the program, raise an error, ...).
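A minimal sketch of that idea (bar, new_bar_function, old_bar_function and frobnicate are illustrative names standing in for the real APIs):

    # Dispatch on Bar's major version once, at import time.
    import bar

    BAR_MAJOR = int(bar.__version__.split(".")[0])

    if BAR_MAJOR >= 2:
        def frobnicate(x):
            return bar.new_bar_function(x)  # new major version's API
    else:
        def frobnicate(x):
            return bar.old_bar_function(x)  # legacy API

The rest of your code then calls frobnicate() without caring which Bar is installed.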
You might also gain some insight from https://www.python.org/dev/peps/pep-0518/ for ways to control dependency installation.
It seems that if someone already has Bar installed, installing Foo only updates Bar if Foo explicitly requires the new version. See https://github.com/pypa/pip/pull/4500 and this answer
Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time?
Yes, you could have a 1.x release (that supports the old version) and a 2.x release (that supports the new version) and release both simultaneously. This is a common pattern for packages that want to introduce a breaking change, but still want to continue maintaining the previous release as well.
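With that pattern, each release line pins the Bar major version it supports. A sketch of the 1.x line's setup.py (names and version numbers are illustrative):

    from setuptools import setup

    setup(
        name="foo",
        version="1.4.2",                 # the 1.x line supports old Bar
        install_requires=["bar>=1,<2"],  # the 2.x line would declare bar>=2,<3
    )

Users who must stay on the old Bar can then pin foo<2 to remain on the compatible line.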

Can we run legacy Python 2.7 code under Python 3.5?

I'd like to upgrade to Python 3.5, but I use legacy Python 2.7 packages. Is it easy to run legacy packages under Python 3.5? I have been under the impression that this isn't easy, but I did a few searches to see if I'm wrong and didn't come up with much.
I would expect there to be a multiprocessing package that allows standardized hand-offs between 3.5 code and 2.7 packages, allowing them to run independently under their own environments, but being somewhat seamless to the developer.
I'm not talking about converting my own code to 3.5, I'm talking about libraries that I use that won't be updated for or by me.
If you used the newer syntax supported by 2.7, e.g. around exceptions, and, better yet, worked with new features imported from __future__, you'll have a much easier time converting your code to Python 3 (possibly needing no changes at all). I'd suggest following this path first, since it can be taken gradually, without an abrupt jump to Python 3.
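A sketch of what that looks like in 2.7 code (these are standard __future__ imports):

    # Make Python 2.7 behave like Python 3 where the semantics differ.
    from __future__ import print_function, division, unicode_literals

    print("ratio:", 3 / 2)  # prints "ratio: 1.5" on both 2.7 and 3.x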
I suppose Python processes with different versions can interoperate, because the object pickling format is compatible, and you can explicitly use a specific pickling protocol version on both sides to ensure that. I don't think the multiprocessing packages on either side would be too useful, though. Consider using e.g. ZeroMQ as a more general solution.
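For the pickling part, pinning the protocol explicitly looks like this (protocol 2 is the highest one Python 2.7 can read):

    import pickle

    data = {"answer": 42}
    # Producing side (2.7 or 3.x): pin a protocol both interpreters understand.
    payload = pickle.dumps(data, protocol=2)
    # Consuming side: works identically on 2.7 and 3.x.
    assert pickle.loads(payload) == data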
Unfortunately, there is no "nice" or automatic way of running 2.7 code under 3.5 that works perfectly.
You mentioned that you are concerned about libraries, not your own code. Firstly, you'd hope that if they are under active development, they will be updated. If not, then, as you said, there's a chance they were written to be future-proof. I've found some good ones that are (e.g. google-api-python-client; see https://github.com/google/google-api-python-client/blob/master/setup.py).
Failing that, the only way to upgrade is to fix all the syntax changes yourself. The most common ones I deal with are around print and exception handling.
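Those two fixes look like this (the Python-2-only spellings are shown in comments):

    from __future__ import print_function

    # Python 2 only:  print "hello"
    # Python 2 only:  except ValueError, e:
    # The forms below run identically on 2.7 and 3.x:
    print("hello")
    try:
        int("not a number")
    except ValueError as e:
        print("caught:", e)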

How to be confident that Python 2.7.10 doesn't break my Python 2.7.6 code?

To simplify my work I want to migrate from Python 2.7.6 to Python 2.7.9/2.7.10.
I need to justify that Python 2.7.10 will not break my software that works with Python 2.7.6.
I followed the steps described in porting Python 2 to Python 3:
Increase my test coverage from 0 to 40%.
Run pylint (no critical bugs).
Learn the differences between Python 2.7.10 and 2.7.6 (I read the release notes).
I can't be 100% sure that my code will not break, but how can I be confident?
For example, should I look at all the Core and Builtins bugs fixed between 2.7.6 and 2.7.10 and search my code for uses of those methods?
Is there a better strategy?
100% code coverage would be a good solution, but it may be harder to achieve than, say, 50% coverage plus tests for all code that uses methods modified between 2.7.6 and 2.7.10.
It is a very minor Python update that almost certainly won't break anything, even without the above-mentioned steps (a Python 2 to Python 3 migration is a different matter entirely).
As for proving it: no amount of static checking or reading of release notes will help, since all it can tell you is that the update is almost certainly backward compatible (which is the initial guess anyway).
A possible approach is to reproduce your production environment with Python 2.7.10 in a virtual machine (valgrind etc. can help there) and check that everything runs as expected. There is no way around actually running it to be 100% sure.
Increasing coverage is a good idea. By itself, though, even full coverage run with Python 2.7.6 doesn't tell you whether the code will break with Python 2.7.10.
My answer does not apply only to Python, but to software development in general.
First of all, as someone already stated, Python 2.7.10 is "just" a bug-fix release: all regression tests pass and no backward-incompatible changes are included. This also guarantees that function signatures do not change, so your code is likely to keep working. Thanks to the high test coverage of the Python source code, it's also fair to say that fixed behavior is covered by regression tests; so if something does break, either the bug is genuinely new or it simply was not covered by the regression tests.
In addition, 100% coverage is technically not always possible; 90-95% is generally the way to go. If that's not enough, you can try different scenarios in a local environment, as suggested by rth.
However, do go through your imported libraries/modules and check that they all support Python 2.7.10. If one doesn't, that doesn't mean your project won't work, but low-level C libraries in particular might break, so be careful especially there.
In general, I suggest you go through the changes and through the imported libraries. Adding coverage is always good, not just when updating to a new version, so I join the other users in saying that you should definitely increase your coverage.
As stated in the dev-cycle documentation:
To clarify terminology, Python uses a major.minor.micro nomenclature for production-ready releases. So for Python 3.1.2 final, that is a major version of 3, a minor version of 1, and a micro version of 2.
New major versions are exceptional; they only come when strongly incompatible changes are deemed necessary, and are planned very long in advance.
New minor versions are feature releases; they get released roughly every 18 months, from the current in-development branch.
New micro versions are bugfix releases; they get released roughly every 6 months, although they can come more often if necessary; they are prepared in maintenance branches.
This means that updating from a micro version to another shouldn't (in theory) break anything.
It's the same for minor versions, which should only add features that are backward compatible.
Considering how widely used Python is, you can be sure that extensive testing is done to ensure this is respected.
There is no guarantee, however, but the whole point of micro versions is to fix bugs, not to introduce new ones.
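If you need to act on those components at runtime, sys.version_info exposes them (a small illustrative check):

    import sys

    major, minor, micro = sys.version_info[:3]
    print("running Python %d.%d.%d" % (major, minor, micro))

    # Gate a code path on a minimum (major, minor) pair:
    if sys.version_info >= (2, 7):
        pass  # safe to rely on 2.7+ behavior here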

Make install new Python minor release over previous one

I have built and installed Python 2.7.8 from source on CentOS 6 with the following commands:
./configure --prefix /opt/Python27 --exec-prefix=/opt/Python27
make
make install
Now 2.7.9 is out and I would like to update my installation. Is it reasonable to expect everything to keep working if I uncompress it in a different directory from the previous one and install it with exactly the same commands, i.e. over the previous installation?
In practice, you're probably OK, and the worst-case scenario isn't that bad.
I'm not sure whether Python 2.x ever guaranteed binary-API stability between micro versions. But, according to the dev guide:
The only changes allowed to occur in a maintenance branch without debate are bug fixes. Also, a general rule for maintenance branches is that compatibility must not be broken at any point between sibling minor releases (3.4.1, 3.4.2, etc.). For both rules, only rare exceptions are accepted and must be discussed first.
So, in theory, there could have been a compatibility-breaking release between 2.7.8 and 2.7.9, and the only way to know for sure is to dig through the bug tracker, the python-dev mailing list, and so on, to see where it was discussed and accepted. And of course they could always have screwed up and made a breaking change without realizing it. But in practice, the first has happened only a few times in history, and the second, as far as I know, has never happened.
Another thing that can cause problems is a major change, since your last build, to the required or optional dependencies that Python builds against. But this is pretty rare in practice. If you have, say, uninstalled zlib since the last build, then yes, that could break compatibility, but you're unlikely to have done anything like that.
So, what happens if either of those is true? It just means that any binary extensions, or embedding apps, that you've built need to be rebuilt.
Hopefully you've been using pip, in which case, if there is a problem, getting a list of all the extensions in your site-packages and force-reinstalling them is trivial (although it may take a while to run). If you're using a lot of virtual environments, you may need to do the same for all of them. As for embedding: if you don't know about it, you're not doing it (unless you've built "semi-standalone" executables with something like PyInstaller, which I doubt you have).
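Concretely, the force-reinstall step can be as simple as this (a sketch; the file name is arbitrary):

    pip freeze > installed.txt
    pip install --force-reinstall -r installed.txt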
So, not too terrible. And, remember, that's usually not a problem at all, it's just the worst-case scenario.

How do I find out all previous versions of Python with which my code is compatible?

I have created a medium-sized project in Python 2.7.3 containing around 100 modules. Before releasing the project into the public domain, I wish to find out which previous versions of Python (e.g. 2.6.x, 2.7.x) my code is compatible with. What is the easiest way to find out?
Solutions I know -
Install multiple versions of Python and check with every version. But I don't have test cases defined yet, so I would need to define those first.
Read and compare the changelogs of the various Python versions I wish to check compatibility for, and work it out from there.
Kindly provide better solutions.
I don't really know of a way to get around this without some test cases. Even if your code can run in an older version of Python, there is no guarantee that it works correctly without a suite of test cases that sufficiently exercises your code.
No, what you named is pretty much how it's done, though the What's New pages and the documentation proper may be more useful than the full changelog. Compatibility with such a huge, moving target is infeasible to automate even partially. It's just not as much work as it sounds, because:
Some people do have test suites ;-)
You don't (usually) need to consider bugfix releases (such as 2.7.x for various x). It's possible that your code requires a bug fix, but generally the .0 releases are quite reliable and code compatible with x.y.0 can run on any x.y.z version.
Thanks to the backwards compatibility policy, it is enough to establish a minimum supported version, all later releases (of the same major version) will stay compatible. This doesn't help in your case as 2.7 is the last 2.x release ever, but if you target, say, 2.5 then you usually don't have to check for 2.6 or 2.7 compatibility.
If you keep your eyes open while coding, and have a bit of experience as well as a good memory, you'll know you used some functionality that was introduced in a recent version. Even if you don't know what version specifically, you can look it up quickly in the documentation.
Some people embark with the intent to support a specific version, and always keep that in mind when developing. Even if it happens to work on other versions, they'd consider it unsupported and don't claim compatibility.
So, you could either limit yourself to 2.7 (it's been out for three years), or perform tests on older releases. If you just want to determine whether it's compatible, not which incompatibilities there are and how they can be fixed, you can:
Search the What's New pages for new features, most importantly new syntax, which you used.
Check the version constraints of third party libraries you used.
Search the documentation of standard library modules you use for newly added functionality.
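Once you've established the minimum version this way, you can declare it so installers enforce it (python_requires is a real setuptools/pip feature; the package name is illustrative):

    from setuptools import setup

    setup(
        name="myproject",
        version="1.0",
        python_requires=">=2.6",  # pip refuses to install on older interpreters
    )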
It's a lot easier with some test cases, but manual testing can give you a reasonable idea.
Take the furthest-back version that you would hope to support (I would suggest 2.5.x, but go further back if you must). Manually test with that version, keeping notes of what you did and especially where it fails, if anywhere. If it does fail, either address the issue or do a binary search to see at which version the failure point(s) disappear. This could work even better if you start from a version that you are quite sure will fail, maybe 2.0.
1) If you're going to maintain compatibility with previous versions, testing is the way to go. Even if your code happens to be compatible now, it can stop being so at any moment in the future if you don't pay attention.
2) If backwards compatibility is not an objective but just a "nice side-feature for those lucky enough", an easy way for OSS is to let users try it out, noting that "it was tested in <version> but may work in previous ones as well". If there's anyone in your user base interested in running your code in an earlier version (and maintain compatibility with it), they'll probably give you feedback. If there isn't, why bother?
