I have a problem regarding set in Python 2.2.
Currently I am just comparing between two lists like so:
temp3 = set(list1) - set(list2)
But it keeps telling me that set isn't defined (a NameError). I used 2.7 to create the program.
Updating software is not an option sadly.
Is there an alternative I can use?
You can try a third-party module that provides the missing set functionality, for example demset.
The simplest way to use this module is to keep it in the same directory as the program you are writing and import the desired contents like so:
from demset import set
The documentation as well as the home page mention a way to use Python's built-in set (and frozenset) when available, and only use the versions in the demset module when the built-ins are not available:
try:
    type(frozenset)
except NameError:
    from demset import set, frozenset
Aside from those imports, your program can stay exactly the same.
Note that I suggested keeping the demset module in the same directory as your program only because this doesn't require any installation; if you are stuck on Python 2.2, it sounds like you may not be allowed to install anything.
Unless you're doing something massive, you can probably just write your own function for this, like the one below.
tmp3 = []
for i in list1:
    # keep i if it is in list1 but not in list2, skipping duplicates
    if i not in list2 and i not in tmp3:
        tmp3.append(i)
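Note that the "i not in list2" test makes this approach quadratic for large inputs. Here is a sketch of a faster variant that still works on 2.2, using a plain dict for constant-time membership tests:
# index list2 once so each membership test is cheap
seen = {}
for i in list2:
    seen[i] = 1
temp3 = []
for i in list1:
    if i not in seen:      # "in" works on dicts as of Python 2.2
        seen[i] = 1        # also drops duplicates from list1
        temp3.append(i)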
I am interested in using doit to automate the build process of a python package.
If possible, I would like doit to re-execute a task if any of the user-created source files it depends on have changed.
From my understanding, the best way to accomplish this would be to use the file_dep key with a list of the dependent source files; however, I am having a lot of trouble generating this list.
I've tried using sys.modules and inspect.getmembers(), but these solutions can't deal with import statements that do not import a module, such as from x import Y, which is unfortunately a common occurrence in the package I am developing.
Another route I investigated was the snakefood tool, which initially looked like it would do exactly what I wanted: generate a list of file dependencies for every file in a given path.
Unfortunately, this tool seems to have limited Python 3 support, making it useless for my package.
Does anyone have any insight into how to get snakefood-like features in Python 3, or is the only option to change all of my source code to only import modules?
The doit tutorial itself is about creating a graph of Python module imports!
It uses the import_deps package, which is similar to snakefood.
Note that for your use-case you will need to modify file_dep itself during Task action's execution. To achieve that you need to pass the task parameter to your action (as described here).
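Here is a minimal sketch of that pattern (the find_imports helper and the mypkg paths are placeholders invented for illustration; in practice you would build the helper on import_deps):
# dodo.py
import pathlib
import re

def find_imports(py_file):
    # Placeholder: naively scan for "import x" / "from x import y" and
    # map top-level names to sibling .py files. The import_deps package
    # does this properly, including the "from x import Y" cases.
    deps = set()
    src = pathlib.Path(py_file).read_text()
    for name in re.findall(r'^\s*(?:from|import)\s+([\w.]+)', src, re.M):
        candidate = pathlib.Path(name.split('.')[0] + '.py')
        if candidate.exists():
            deps.add(str(candidate))
    return deps

def compute_deps(task):
    # doit injects the Task object when an action parameter is named
    # "task"; updating file_dep here records the extra dependencies.
    task.file_dep.update(find_imports('mypkg/main.py'))

def task_build():
    return {
        'actions': [compute_deps, 'python setup.py sdist'],  # stand-in build step
        'file_dep': ['mypkg/main.py'],
    }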
I'm trying to dynamically load modules as explained here.
I have written a script that requires some modules that may not be installed by default on some systems (such as requests). The script code assumes that a regular import has been done (it uses requests.get).
If I use the code in the link above, to import requests I would have to use:
requests = importlib.import_module('requests')
But this leads to a lot of code duplication since I have several modules. I can't use that in a loop since the variable name must change with the imported module.
I have found that I can use:
for m in list_of_modules:
    locals()[m] = importlib.import_module(m)
And everything happens as if I had done regular imports.
(of course the real code catches exceptions...).
So the question is how valid/risky this is? Too good to be true or not? (Python 2.7 if that makes a difference)
It is explicitly invalid. The documentation for Python 2.7.15 says of the locals() function:
The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the interpreter.
locals() is a way for the program to know the list of variables in a function block. It is not a way to create local variables.
If you really need something like that, you can either use a local map, rely on the sys.modules map (which import_module updates), or update the globals() map. Anyway, once a module is loaded, it exists (through the sys.modules map) for the whole program, so it does not really make sense to store its reference in a local symbol table.
So if you really need to import a dynamically built list of modules, I would do:
for m in list_of_modules:
    globals()[m] = importlib.import_module(m)
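For completeness, here is a sketch of the other two options mentioned above (the module names are just examples):
import importlib
import sys

list_of_modules = ['json', 'collections']  # example names

# Option 1: keep the modules in an explicit dict instead of touching
# locals() or globals().
mods = dict((name, importlib.import_module(name)) for name in list_of_modules)
print(mods['json'].dumps({'a': 1}))

# Option 2: import for the side effect, then fetch from sys.modules,
# which import_module keeps up to date.
for name in list_of_modules:
    importlib.import_module(name)
print(sys.modules['json'].dumps({'a': 1}))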
I'm looking for an Emacs extension that provides auto-completion for Python.
If I type the following code:
a = [4,5,6]
a.p
Then I expect it to suggest pop, as that is one of the methods of a Python list. Is this achievable in Emacs?
I tried installing Rope, Rope mode, Ropemacs and Pymacs. The auto-complete suggestions I get from those aren't list methods; it suggests things like print instead. Am I doing something wrong?
Try Jedi.el. AFAIK, it should support your example.
Because Python variables don't have declared types, it is difficult to know what class of object a variable will contain at a given point in your code. Imagine if your code later did a = 1. Emacs would have to know when the variable a refers to a list and when it refers to a number in order to offer the correct completion. Generally this is not possible without actually running the code.
Works for me with ropevim from https://github.com/python-rope/ropevim/ (now the official home of all rope projects)
How does one get (find the location of) the dynamically imported modules from a Python script?
From my understanding, Python can dynamically (at run time) load modules.
Be it using __import__(module_name), using exec "from x import y", or using imp.find_module("module_name") followed by imp.load_module(param1, param2, param3, param4).
Knowing that, I want to get all the dependencies for a Python file. This would include getting (or at least trying to get) the dynamically loaded modules, those loaded either using hard-coded string objects or strings returned by a function/method.
For a normal import module_name or from x import y you can either scan the code manually or use modulefinder.
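For those static cases, a minimal modulefinder run looks something like this (yourscript.py is a placeholder):
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('yourscript.py')   # statically traces the imports
for name, mod in finder.modules.items():
    print('%s -> %s' % (name, getattr(mod, '__file__', None)))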
So if I want to copy one Python script and all its dependencies (including the custom dynamically loaded modules), how should I do that?
You can't; the very nature of programming (in any language) means that you cannot predict what code will be executed without actually executing it. So you have no way of telling which modules could be included.
This is further complicated by user input; consider: __import__(sys.argv[1]).
There's a lot of theoretical information about the first problem, which is normally described as the Halting problem; the second obviously can't be done.
From a theoretical perspective, you can never know exactly what/where modules are being imported. From a practical perspective, if you simply want to know where the modules are, check the module.__file__ attribute or run the script under python -v to find files when modules are loaded. This won't give you every module that could possibly be loaded, but will get most modules with mostly sane code.
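For example (the printed path will differ per installation):
import json

# every loaded module records where it was loaded from
print(json.__file__)

# alternatively run:  python -v yourscript.py
# and stderr will show each file as it is imported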
See also: How do I find the location of Python module sources?
This is not possible to do 100% accurately. I answered a similar question here: Dependency Testing with Python
Just an idea and I'm not sure that it will work:
You could write a module that contains a wrapper for __builtin__.__import__. This wrapper would save a reference to the old __import__ and then assign a function to __builtin__.__import__ that does the following:
whenever called, get the current stack trace and work out the calling function. Maybe the information in the globals parameter to __import__ is enough.
get the module of that calling function and store the name of this module and what will get imported
redirect the call to the real __import__
After you have done this you can call your application with python -m magic_module yourapp.py. The magic module must store the information somewhere where you can retrieve it later.
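A rough sketch of that wrapper, using Python 2 naming (the module name magic_module and the in-memory log are assumptions taken from the idea above):
# magic_module.py
import __builtin__
import traceback

_real_import = __builtin__.__import__
seen = []  # (importing file, imported module name) pairs

def _logging_import(name, *args, **kwargs):
    # with limit=2, the first entry is the caller's frame
    caller_file = traceback.extract_stack(limit=2)[0][0]
    seen.append((caller_file, name))
    # redirect to the real __import__ so the program runs normally
    return _real_import(name, *args, **kwargs)

__builtin__.__import__ = _logging_import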
That's quite a question.
Static analysis is about predicting all possible run-time execution paths, including deciding whether the program even halts for a specific input.
That is equivalent to the Halting Problem, and unfortunately there is no general solution.
The only way to resolve dynamic dependencies is to run the code.
This subject has been bothering me for some time.
For my Python project I wanted to be able to support Python versions 2.4 to 3.1. I thought a bit about how to do this, and eventually decided to have four separate forks of the source code for four different versions of Python: 2.4, 2.5, 2.6 and 3.1.
I have come to view that as a bad decision, mainly because of Python's distribution annoyances, which I now have to deal with four times instead of once.
The question is, what to do?
My project is in the scientific computing field. I got the impression that there are still many people who depend on Python 2.4.
Someone suggested I just write my entire project for 2.4, but that is unacceptable to me. It would mean I could not use context managers, and that is something I am not willing to give up.
How do ordinary Python projects support 2.4? Do they avoid using context managers?
Also, is there any choice but having a separate fork for Python 3.1? I know there are all kinds of hacks for making the same code run on 2.x and 3.x, but one of the reasons I like Python is because the code is beautiful, and I will not tolerate making it ugly with compatibility hacks.
Please, give me your opinion.
Yes, you need to write for Python 2.4 syntax to support all of 2.4 - 2.7 in the same codebase.
Some changes in Python 2.6 and 2.7 aim to make it a bit easier to write code compatible with 3.x, but you have to drop support for 2.5 and below to do that.
There seem to be different answers to your problem.
First, if you want to offer all functions for all Python versions then yes, you're probably stuck with using the smallest possible functionality subset, hence writing your code for Python 2.4. Or you could backport features from newer interpreters if they're pure Python (which is not the case for context managers or coroutines).
Or you could split version support by feature: if you think there's one (optional) feature that would benefit greatly from, let's say, context managers, you can make it available in a separate module and just say that 2.4 users don't get that feature.
In order to support Python 3, take a look at the 2to3 helper; if you write your code properly, there's a fair chance you won't need to maintain two separate codebases.
If the differences between versions are not extreme, you can try isolating them into a separate package or module in which you write version-specific code to act as an adaptation layer.
In simple cases this can be done without a separate module, such as when a new version of Python makes standard a package that used to be external, as happened with simplejson. We have something similar to this in some code:
try:
    import simplejson as json  # external package, needed before 2.6
except ImportError:
    import json                # standard library as of Python 2.6
For non-trivial stuff, such as what you probably have, you wouldn't want such things scattered randomly throughout your code base, so you should collect it all together in one place, when possible, and make that the sole section of your code that is version-specific.
This can't work so well for things where the syntax is different, such as your comment about wanting to use context managers. Sure, you could put the context manager code in a separate module, but that will likely complicate the places where you'd be using it. In such cases, you might backport certain critical features (I think context managers could be simulated somewhat easily) to this adapter module.
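For instance, here is a sketch of such a simulation in the adapter module (with_file is an invented name; since the with syntax itself will not compile on 2.4, the adapter takes a callback instead):
# compat.py
def with_file(path, mode, body):
    # poor man's context manager: guarantees the file is closed,
    # like "with open(path, mode) as f: body(f)" on 2.5+
    f = open(path, mode)
    try:
        return body(f)
    finally:
        f.close()

# usage, valid on 2.4 and later:
# data = with_file('foo', 'rb', lambda f: f.read())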
Definitely having separate codebases is about the worst thing you could do, so I'd certainly recommend working away from that. At the least, don't arbitrarily use features from newer versions of Python, since although it may look nice to have them in the code (simplifying a particular block of logic perhaps), the fact that you have to duplicate that logic by forking the codebase, even on a single module, is going to more than negate the benefits.
We stick with older versions for legacy code, tweaking as new releases come out to support them but maintaining support for the older ones, sometimes with small adapter layers. At some point, a major release of our code shows up on the schedule, and we consider whether it's time to drop support for an older Python. When that happens, we try to leapfrog several versions, going (for example) from 2.4 to 2.6 directly, and only then start really taking advantage of the new syntax and non-adaptable features.
First of all, you need to keep in mind that Python 2.x mostly shares the same backward-compatible syntax, new features and additions aside. There are other things to consider that aren't necessarily errors, such as DeprecationWarning messages, which, while not detrimental, are ugly and can cause confusion.
Python 3.x is backward-INcompatible by design and intends to leave all of the old cruft behind. Python 2.6 introduced many changes that are also in Python 3.x to help ease the transition. To see all of them I would recommend reading up on the What's New in Python 2.6 document. For this reason, it is very possible to write code for Python 2.6 that will also run in Python 3.1, but that is not without its caveats.
Even so, there are many minor syntax changes even between 2.x versions that will require you to wrap a lot of your code in try/except blocks, so if this is what you're willing to do, then having a 2.x and a 3.x branch is totally possible. I think you'll find that you'll be doing a lot of attribute and type tests on your objects to do what you want to do.
I would recommend you check out the code of major projects out there that support various Python versions. Twisted Matrix is the first one that comes to mind. Their code is a wonderful example of how Python code should be written.
In the end, what you're setting out to do will not be easy, so prepare yourself for a lot of work!
You could try virtualenv and distribute your application using a single Python version. This may or may not be practical in your case though.
We have a related problem: a large system that supports both Jython and CPython back to 2.4. Basically you need to isolate the code that must be written differently into a (hopefully small) set of modules, and have things imported conditionally.
# module svn.py
import sys

if sys.platform.startswith('java'):
    from jythonsvn import *
else:
    from nativesvn import *
In your example you would use tests against sys.version_info, presumably. You could define some simple things in a utility module that you would use like: from util import *
# module util.py
import sys

# sys.version_info (not sys.exc_info) holds the interpreter version
if sys.version_info[0] == 2:
    if sys.version_info[1] == 4:
        from util_py4 import *
    ...
Then things in util_py4.py like:
def any(seq):
    # define workaround functions where possible; the any() builtin
    # only exists as of Python 2.5
    for a in seq:
        if a:
            return True
    return False
...
Although this is a different problem from porting (since you want to continue supporting both), this link gives some useful guidance: http://python3porting.com/preparing.html (as do a variety of other articles about porting Python 2.x).
Your comment that you just cannot live without context managers is a little confusing though.
While context managers are powerful, make the code more readable, and minimize the risk of errors, you just won't be able to have them in the 2.4 version of your code.
### 2.5 (with "from __future__ import with_statement") and later
with open('foo', 'rb') as myfile:
    pass  # do something with myfile

### 2.4 and earlier
myfile = None
try:
    myfile = open('foo', 'rb')
    # do something with myfile
finally:
    if myfile:
        myfile.close()
Since you want to support 2.4 you'll have a body of code that just has to have the second syntax. Will it really be more elegant to write it BOTH ways?