I am trying to use with open() with Python 2.6 and it gives a SyntaxError, while the same code works fine with Python 2.7.3.
Am I missing something, or some import, to make my program work?
Any help would be appreciated.
My code is here:
def compare_some_text_of_a_file(self, exportfileTransferFolder, exportfileCheckFilesFolder) :
    flag = 0
    error = ""
    with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1, open("transfer-out/"+exportfileTransferFolder) as f2:
        if f1.read().strip() in f2.read():
            print ""
        else:
            flag = 1
            error = exportfileCheckFilesFolder
            error = "Data of file " + error + " do not match with exported data\n"
    if flag == 1:
        raise AssertionError(error)
The with open() statement is supported in Python 2.6; you must be hitting a different error.
See PEP 343 and the python File Objects documentation for the details.
Quick demo:
Python 2.6.8 (unknown, Apr 19 2012, 01:24:00)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open('/tmp/test/a.txt') as f:
...     print f.readline()
...
foo
>>>
You are trying to use the with statement with multiple context managers though, which was only added in Python 2.7:
Changed in version 2.7: Support for multiple context expressions.
Use nested with statements instead in 2.6:
with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1:
with open("transfer-out/"+exportfileTransferFolder) as f2:
# f1 and f2 are now both open.
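Alternatively, Python 2.6 ships contextlib.nested, which bundles several context managers into a single with statement (it was deprecated in 2.7 and has known caveats if the second open() fails); a minimal sketch using the file names from the question:
from contextlib import nested

with nested(open("check_files/" + exportfileCheckFilesFolder + ".txt"),
            open("transfer-out/" + exportfileTransferFolder)) as (f1, f2):
    # f1 and f2 are both open here, just like in the nested version above
    pass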
It is the "extended" with statement with multiple context expressions which causes your trouble.
In 2.6, instead of
with open(...) as f1, open(...) as f2:
    do_stuff()
you should add a nesting level and write
with open(...) as f1:
    with open(...) as f2:
        do_stuff()
The documentation says:
Changed in version 2.7: Support for multiple context expressions.
The with open() syntax is supported by Python 2.6. On Python 2.4 it is not supported and gives a syntax error. If you need to support Python 2.4, I would suggest something like:
def readfile(filename, mode='r'):
    f = open(filename, mode)
    try:
        for line in f:
            yield line
    except Exception:
        # close the file on errors and re-raise (try/finally around a yield
        # is only allowed from Python 2.5 on, so 2.4 needs this shape)
        f.close()
        raise
    f.close()
for line in readfile(myfile):
    print line
I am generating a configuration file for a service that expects a list of double-quoted string options. I want to avoid installing additional packages via pip3 -r requirements.txt, as suggested in this answer, and instead use the yaml module that ships with Python 3.8.10 on Ubuntu 20.04. I would also like to solve this without searching for the lines and replacing them.
Python 3.8.10 (default, Sep 28 2021, 16:10:42)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yaml
>>> yaml.__file__
'/usr/lib/python3/dist-packages/yaml/__init__.py'
python3 test.yaml
import yaml
configDict = {}
configDict["OptionList"] = [
    "\"item1/Enable\"",
    "\"item2/Disable\""
]
with open('./testConfig.yaml', 'w') as f:
    yaml.dump(configDict, f)
testConfig.yaml
Current output:
OptionList:
- '"item1/Enable"'
- '"item2/Disable"'
Desired output:
OptionList:
- "item1/Enable"
- "item2/Disable"
You would need to dig into the documentation and source code to see whether it has an option to change this.
For now I would simply dump to a string and use replace():
import yaml
configDict = {
    "OptionList": [
        "\"item1/Enable\"",
        "\"item2/Disable\""
    ]
}
text = yaml.dump(configDict)
print(text)
# strip the single quotes that yaml.dump wraps around the already-quoted strings
text = text.replace("'\"", '"').replace("\"'", '"')
print(text)
with open('./testConfig.yaml', 'w') as f:
    f.write(text)
Result:
OptionList:
- '"item1/Enable"'
- '"item2/Disable"'
OptionList:
- "item1/Enable"
- "item2/Disable"
If I use default_style='"' then I get all values (and the key) wrapped in " ":
import yaml
configDict = {
    "OptionList": [
        'item1/Enable',
        'item2/Disable'
    ]
}
text = yaml.dump(configDict, default_style='"')
print(text)
Result:
"OptionList":
- "item1/Enable"
- "item2/Disable"
I have a class MyClass defined in my_module. MyClass has a method pickle_myself which pickles the instance of the class in question:
def pickle_myself(self, pkl_file_path):
    with open(pkl_file_path, 'w+') as f:
        pkl.dump(self, f, protocol=2)
I have made sure that my_module is in PYTHONPATH. In the interpreter, executing __import__('my_module') works fine:
>>> __import__('my_module')
<module 'my_module' from 'A:\my_stuff\my_module.pyc'>
However, when eventually loading the file, I get:
File "A:\Anaconda\lib\pickle.py", line 1128, in find_class
__import__(module)
ImportError: No module named my_module
Some things I have made sure of:
I have not changed the location of my_module.py (Python pickling after changing a module's directory)
I have tried to use dill instead, but still get the same error (More on python ImportError No module named)
EDIT -- A toy example that reproduces the error:
The example itself is spread over a bunch of files.
First, we have the module ball (stored in a file called ball.py):
class Ball():
    def __init__(self, ball_radius):
        self.ball_radius = ball_radius

    def say_hello(self):
        print "Hi, I'm a ball with radius {}!".format(self.ball_radius)
Then, we have the module test_environment:
import os
import ball
#import dill as pkl
import pickle as pkl
class Environment():
    def __init__(self, store_dir, num_balls, default_ball_radius):
        self.store_dir = store_dir
        self.balls_in_environment = [ball.Ball(default_ball_radius) for x in range(num_balls)]

    def persist(self):
        pkl_file_path = os.path.join(self.store_dir, "test_stored_env.pkl")
        with open(pkl_file_path, 'w+') as f:
            pkl.dump(self, f, protocol=2)
Then, we have a module that has functions to make environments, persist them, and load them, called make_persist_load:
import os
import test_environment
#import pickle as pkl
import dill as pkl
def make_env_and_persist():
    cwd = os.getcwd()
    my_env = test_environment.Environment(cwd, 5, 5)
    my_env.persist()

def load_env(store_path):
    stored_env = None
    with open(store_path, 'rb') as pkl_f:
        stored_env = pkl.load(pkl_f)
    return stored_env
Then we have a script to put it all together, in test_serialization.py:
import os
import make_persist_load
MAKE_AND_PERSIST = True
LOAD = (not MAKE_AND_PERSIST)
cwd = os.getcwd()
store_path = os.path.join(cwd, "test_stored_env.pkl")
if MAKE_AND_PERSIST == True:
    make_persist_load.make_env_and_persist()

if LOAD == True:
    loaded_env = make_persist_load.load_env(store_path)
In order to make it easy to use this toy example, I have put it all up in a GitHub repository that simply needs to be cloned into your directory of choice. Please see the README containing instructions, which I also reproduce here:
Instructions:
1) Clone repository into a directory.
2) Add repository directory to PYTHONPATH.
3) Open up test_serialization.py, and set the variable MAKE_AND_PERSIST to True. Run the script in an interpreter.
4) Close the previous interpreter instance, and start up a new one. In test_serialization.py, change MAKE_AND_PERSIST to False, and this will programmatically set LOAD to True. Run the script in an interpreter, causing ImportError: No module named test_environment.
5) By default, the test is set to use dill, instead of pickle. In order to change this, go into test_environment.py and make_persist_load.py, to change imports as required.
EDIT: after switching to dill '0.2.5.dev0', dill.detect.trace(True) output
C2: test_environment.Environment
# C2
D2: <dict object at 0x000000000A9BDAE8>
C2: ball.Ball
# C2
D2: <dict object at 0x000000000AA25048>
# D2
D2: <dict object at 0x000000000AA25268>
# D2
D2: <dict object at 0x000000000A9BD598>
# D2
D2: <dict object at 0x000000000A9BD9D8>
# D2
D2: <dict object at 0x000000000A9B0BF8>
# D2
# D2
EDIT: the toy example works perfectly well when run on Mac/Ubuntu (i.e. Unix-like systems?). It only fails on Windows.
I can tell from your question that you are probably doing something like this, with a class method that attempts to pickle the instance of the class. It's ill-advised to do that; it's much saner to use pkl.dump external to the class instead (where pkl is pickle or dill etc). However, it can still work with this design, see below:
>>> class Thing(object):
...     def pickle_myself(self, pkl_file_path):
...         with open(pkl_file_path, 'w+') as f:
...             pkl.dump(self, f, protocol=2)
...
>>> import dill as pkl
>>>
>>> t = Thing()
>>> t.pickle_myself('foo.pkl')
Then restarting...
Python 2.7.10 (default, Sep 2 2015, 17:36:25)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> f = open('foo.pkl', 'r')
>>> t = dill.load(f)
>>> t
<__main__.Thing object at 0x1060ff410>
If you have a much more complicated class, which I'm sure you do, then you are likely to run into trouble, especially if that class uses another file that is sitting in the same directory.
>>> import dill
>>> from bar import Zap
>>> print dill.source.getsource(Zap)
class Zap(object):
    x = 1
    def __init__(self, y):
        self.y = y
>>>
>>> class Thing2(Zap):
...     def pickle_myself(self, pkl_file_path):
...         with open(pkl_file_path, 'w+') as f:
...             dill.dump(self, f, protocol=2)
...
>>> t = Thing2(2)
>>> t.pickle_myself('foo2.pkl')
Then restarting…
Python 2.7.10 (default, Sep 2 2015, 17:36:25)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> f = open('foo2.pkl', 'r')
>>> t = dill.load(f)
>>> t
<__main__.Thing2 object at 0x10eca8090>
>>> t.y
2
>>>
Well… shoot, that works too. You'll have to post your code, so we can see what pattern you are using that dill (and pickle) fails for. I know that having one module import another that is not "installed" (i.e. sitting in some local directory) and expecting the serialization to "just work" doesn't hold in all cases.
See dill issues:
https://github.com/uqfoundation/dill/issues/128
https://github.com/uqfoundation/dill/issues/129
and this SO question:
Why dill dumps external classes by reference, no matter what?
for some examples of failure and potential workarounds.
EDIT with regard to updated question:
I don't see your issue. Running from the command line, importing from the interpreter (import test_serialization), and running the script in the interpreter (as below, and indicated in your steps 3-5) all work. That leads me to think you might be using an older version of dill?
>>> import os
>>> import make_persist_load
>>>
>>> MAKE_AND_PERSIST = False #True
>>> LOAD = (not MAKE_AND_PERSIST)
>>>
>>> cwd = os.getcwd()
>>> store_path = os.path.join(cwd, "test_stored_env.pkl")
>>>
>>> if MAKE_AND_PERSIST == True:
...     make_persist_load.make_env_and_persist()
...
>>> if LOAD == True:
...     loaded_env = make_persist_load.load_env(store_path)
...
>>>
EDIT based on discussion in comments:
Looks like it's probably an issue with Windows, as that seems to be the only OS the error appears.
EDIT after some work (see: https://github.com/uqfoundation/dill/issues/140):
Using this minimal example, I can reproduce the same error on Windows, while on MacOSX it still works…
# test.py
class Environment():
    def __init__(self):
        pass
and
# doit.py
import test
import dill
env = test.Environment()
path = "test.pkl"
with open(path, 'w+') as f:
    dill.dump(env, f)
with open(path, 'rb') as _f:
    _env = dill.load(_f)
print _env
However, if you use open(path, 'r') as _f, it works on both Windows and MacOSX. So it looks like the __import__ on Windows is more sensitive to file type than on non-Windows systems. Still, throwing an ImportError is weird… but this one small change should make it work.
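More generally, pickle streams are binary data, so writing and reading them with matching binary modes avoids Windows text-mode newline translation entirely; a sketch of doit.py with consistent modes (my suggestion, not something taken from the dill issue thread):
# doit.py, binary modes for both dump and load
import test
import dill

env = test.Environment()
path = "test.pkl"
with open(path, 'wb') as f:
    dill.dump(env, f)
with open(path, 'rb') as _f:
    _env = dill.load(_f)
print _env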
In case someone is having the same problem: I had the same issue running Python 2.7, and the cause was that the pickle file had been created on Windows while I was running Linux. What I had to do was run dos2unix, which has to be installed first using
sudo yum install dos2unix
Then you need to convert the pickle file, for example:
dos2unix data.p
I have some code written in Python 2.7 like so:
if (os.path.exists('/path/to/my/file/somefile.txt')):
    with open('/path/to/my/file/somefile.txt', 'r') as readfile:
        firstline = readfile.readline()
        return firstline
When I try to run this on a system that has Python 2.4, I get an invalid syntax error:
with open('/path/to/my/file/somefile.txt', 'r') as readfile:
^
SyntaxError: invalid syntax
What am I doing wrong here?
There is no 'with' statement aka context managers in Python 2.4.
Python 2.4 is more than 10 years old.
Upgrade to Python 2.7 or 3.3.
Since with doesn't exist, could you try this instead?
import os

if (os.path.exists('/root/testing/test123.txt')):
    readfile = open('/root/testing/test123.txt', 'r')
    teststr = readfile.readline()
    readfile.close()  # close the handle explicitly, since there is no with block
    print teststr  # or 'return' if you want that
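If the system can be moved to Python 2.5 rather than 2.7, the with statement is available there behind a future import, so the original code works with one extra line (a minimal sketch):
from __future__ import with_statement  # only needed on Python 2.5; built in from 2.6 on
import os

if os.path.exists('/path/to/my/file/somefile.txt'):
    with open('/path/to/my/file/somefile.txt', 'r') as readfile:
        firstline = readfile.readline()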
If I have a text file that contains a Python function definition, how can I call that function from another Python program? PS: The function will end up being defined in the Python program that does the call.
Ways in which this can be done:
Treat the Python function as a module and import it. The constraint here is that I would have to convert a bare Python function into a module, which could give errors.
Insert the code (the function's code) into the program that calls the function.
Which would be the better way to go about it?
Edit: Thank you for all the replies. They have shed a lot of light on the initial confusion I myself had. Another doubt would be: what if the person (obviously, not me) has written os.system("rm -rf") and I end up executing it? That would mean doomsday for me, right?
Edit2: As a lot of you have asked me to use exec, I would like to point to this thread and most particularly the namespace problem. It gives the user a lot of chances to "circumvent" Python. Don't y'all think?
You are looking for the exec keyword.
>>> mycode = 'print "hello world"'
>>> exec mycode
hello world
So if you read your text file as text (assuming that it only contains the function) like:
test.txt:
def a():
print "a()"
test.py:
mycode = open('test.txt').read()
exec mycode # this will execute the code in your textfile, thus define the a() function
a() # now you can call the function from your python file
Link to doc: http://docs.python.org/reference/simple_stmts.html#grammar-token-exec%5Fstmt
You may want to look at the compile statement too: here.
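Regarding the namespace concern raised in the question's Edit2, you can at least keep the file's code out of your own globals by executing it into a dedicated dictionary (a small sketch, Python 2 syntax):
mycode = open('test.txt').read()
namespace = {}
exec mycode in namespace  # names defined by the file land in 'namespace', not in globals()
namespace['a']()          # call the function through that dictionary
This does not make the code safe to run; it only keeps your own namespace clean.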
compile() and eval() can do the trick:
>>> code = compile('def foo(a): return a*2', '<string>', 'exec')
>>> eval(code)
>>> foo
52: <function foo at 0x01F65F70>
>>> foo(12)
24
or with file:
with open(filename) as source:
    eval(compile(source.read(), filename, 'exec'))
A way like Reflection in Java? If so, Python has a module named imp to provide it.
foo.py
def foo():
return "return from function foo in file foo.py"
some code anywhere
import imp

modes = imp.get_suffixes()  # got modes, explained in link below
mode = modes[-2]  # because I want to load a py file
with open("foo.py") as file:
    m = imp.load_module("name", file, "foo.py", mode)
print(m.foo())
The mode = modes[-2] above is because my imp.get_suffixes() gives:
>>> imp.get_suffixes()
[('.cpython-32m.so', 'rb', 3), ('module.cpython-32m.so', 'rb', 3), ('.abi3.so', 'rb', 3), ('module.abi3.so', 'rb', 3), ('.so', 'rb', 3), ('module.so', 'rb', 3), ('.py', 'U', 1), ('.pyc', 'rb', 2)]
here is my output:
Python 3.2.1 (default, Aug 11 2011, 01:27:29)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import imp
>>> with open("foo.py") as file:
...     m = imp.load_module("foo", file, "foo.py", ('.py', 'U', 1))
...
>>> m.foo()
'return from function foo in file foo.py'
Check it here: http://docs.python.org/py3k/library/imp.html
Both Python 2.7 and Python 3 work:
Python 2.7.1 (r271:86832, Jun 16 2011, 16:59:05)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import imp
>>> imp.get_suffixes()
[('.so', 'rb', 3), ('module.so', 'rb', 3), ('.py', 'U', 1), ('.pyc', 'rb', 2)]
>>> with open("foo.py") as file:
...     m = imp.load_module("foo", file, "foo.py", ('.py', 'U', 1))
...
>>> m.foo()
'return from function foo in file foo.py'
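On Python 3.5+ the imp module is deprecated; the rough importlib equivalent (a sketch, not part of the original answer) would be:
import importlib.util

spec = importlib.util.spec_from_file_location("foo", "foo.py")
m = importlib.util.module_from_spec(spec)
spec.loader.exec_module(m)  # executes foo.py and populates the module object
print(m.foo())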
You can use execfile:
execfile("path/example.py")
# example.py
# def example_func():
# return "Test"
#
print example_func()
# >Test
EDIT:
In case you want to execute some untrusted code, you can try to sandbox it this way,
although it is probably not very safe anyway:
def execfile_sandbox(filename):
    from copy import copy
    loc = globals()
    bi = loc["__builtins__"]
    if not isinstance(bi, dict): bi = bi.__dict__
    bi = copy(bi)
    # no files
    del bi["file"]
    # and definitely, no import
    del bi["__import__"]
    # you can delete other builtin functions you want to deny access to
    new_locals = dict()
    new_locals["__builtins__"] = bi
    execfile(filename, new_locals, new_locals)
Usage:
try:
    execfile_sandbox("path/example.py")
except:
    # handle exception and errors here (like import error)
    pass
I am not sure what your purpose is, but I suppose you have a function in one program and you want that function to run in another program. You can "marshal" the function from the first program to the second.
Example, first program:
# first program
def your_func():
return "your function"
import marshal
marshal.dump(your_func.func_code, file("path/function.bin","w"))
Second program:
# Second program
import marshal, types
code = marshal.load(file("path/function.bin"))
your_func = types.FunctionType(code, globals(), "your_func")
print your_func()
# >your function
I have this python code for opening a .cfg file, writing to it and saving it:
import ConfigParser
def get_lock_file():
    cf = ConfigParser.ConfigParser()
    cf.read("svn.lock")
    return cf

def save_lock_file(configurationParser):
    cf = configurationParser
    config_file = open('svn.lock', 'w')
    cf.write(config_file)
    config_file.close()
Does this seem normal or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files?
I ask because I have two methods that seem to do the same thing: they get the config file handle ('cf'), call cf.set('blah', 'foo', bar), then use the save_lock_file(cf) call above. For one method it works, and for the other the write never takes place; I'm unsure why at this point.
def used_like_this():
    cf = get_lock_file()
    cf.set('some_prop_section', 'some_prop', 'some_value')
    save_lock_file(cf)
Just to note that configuration file handling is simpler with ConfigObj.
To read and then write a config file:
from configobj import ConfigObj
config = ConfigObj(filename)
value = config['entry']
config['entry'] = newvalue
config.write()
Looks good to me.
If both places call get_lock_file, then cf.set(...), and then save_lock_file, and no exceptions are raised, this should work.
If you have different threads or processes accessing the same file you could have a race condition:
thread/process A reads the file
thread/process B reads the file
thread/process A updates the file
thread/process B updates the file
Now the file only contains B's updates, not A's.
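One way to guard against that on Unix-like systems is an advisory lock around the read-modify-write (a sketch assuming the fcntl module, which does not exist on Windows):
import fcntl

with open('svn.lock', 'r+') as f:
    fcntl.flock(f, fcntl.LOCK_EX)  # block until we hold the exclusive lock
    # read, modify and rewrite the file here while the lock is held
    fcntl.flock(f, fcntl.LOCK_UN)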
Also, for safe file writing, don't forget the with statement (Python 2.5 and up); it'll save you a try/finally (which you should be using if you're not using with). From ConfigParser's docs:
with open('example.cfg', 'wb') as configfile:
    config.write(configfile)
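Applied to the save_lock_file helper from the question, that would look roughly like this (a sketch; behaviour is unchanged, the handle just gets closed even if write() raises):
def save_lock_file(configurationParser):
    cf = configurationParser
    with open('svn.lock', 'w') as config_file:
        cf.write(config_file)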
Works for me.
C:\temp>type svn.lock
[some_prop_section]
Hello=World
C:\temp>python
ActivePython 2.6.2.2 (ActiveState Software Inc.) based on
Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigParser
>>> def get_lock_file():
... cf = ConfigParser.ConfigParser()
... cf.read("svn.lock")
... return cf
...
>>> def save_lock_file(configurationParser):
... cf = configurationParser
... config_file = open('svn.lock', 'w')
... cf.write(config_file)
... config_file.close()
...
>>> def used_like_this():
... cf = get_lock_file()
... cf.set('some_prop_section', 'some_prop', 'some_value')
... save_lock_file(cf)
...
>>> used_like_this()
>>> ^Z
C:\temp>type svn.lock
[some_prop_section]
hello = World
some_prop = some_value
C:\temp>