(Edited for even more clarity)
I'm reading the Python book (Python Essential Reference by Beazley) and he says:
The with statement allows a series of statements to execute inside a
runtime context that is controlled by an object that serves as a context manager.
Here is an example:
with open("debuglog","a") as f:
f.write("Debugging\n")
statements
f.write("Done\n")
He goes on to say:
The with obj statement accepts an optional as var specifier. If given, the value
returned by obj.__enter__() is placed into var. It is important to emphasize
that obj is not necessarily the value assigned to var.
I understand the mechanics of what the with keyword does: a file object is returned by open, and that object is accessible via f within the body of the block. I also understand that __enter__() and, eventually, __exit__() will be called.
But what exactly is a runtime context? A few low-level details would be nice, or an example in C. Could someone clarify what exactly a "context" is and how it might relate to other languages (C, C++)? My understanding of a context was the environment, e.g. a Bash shell executes ls in the context of all its (env-displayed) shell variables.
With the with keyword, yes, f is accessible to the body of the block, but isn't that just scoping? E.g. with for x in y:, x is not scoped within the block and retains its value outside the block. Is this what Beazley means when he talks about a 'runtime context': that f is scoped only within the block and loses all significance outside the with-block? Why does he say that the statements "execute inside a runtime context"? Is this like an "eval"?
I also read that open returns an object that is "not ... assigned to var". Why isn't it assigned to var? What does Beazley mean by making a statement like that?
The with statement was introduced in PEP 343. This PEP also introduced a new term, "context manager", and defined what that term means.
Briefly, a "context manager" is an object that has special method functions .__enter__() and .__exit__(). The with statement guarantees that the .__enter__() method will be called to set up the block of code indented under the with statement, and also guarantees that the .__exit__() method function will be called at the time of exit from the block of code (no matter how the block is exited; for example, if the code raises an exception, .__exit__() will still be called).
http://www.python.org/dev/peps/pep-0343/
http://docs.python.org/2/reference/datamodel.html?highlight=context%20manager#with-statement-context-managers
The with statement is now the preferred way to handle any task that has a well-defined setup and teardown. Working with a file, for example:
with open(file_name) as f:
    # do something with file
You know the file will be properly closed when you are done.
Another great example is a resource lock:
with acquire_lock(my_lock):
    # do something
You know the code won't run until you get the lock, and as soon as the code is done the lock will be released. I don't often do multithreaded coding in Python, but when I have, this statement has made sure that the lock was always released, even in the face of an exception.
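For a concrete sketch (using the standard library's threading.Lock, which already works as a context manager, rather than the made-up acquire_lock above):

import threading

my_lock = threading.Lock()

def update_shared_state():
    # __enter__ acquires the lock; __exit__ releases it,
    # even if the body raises an exception.
    with my_lock:
        pass  # mutate shared data here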
P.S. I did a Google search online for examples of context managers and I found this nifty one: a context manager that executes a Python block in a specific directory.
http://ralsina.me/weblog/posts/BB963.html
EDIT:
The runtime context is the environment that is set up by the call to .__enter__() and torn down by the call to .__exit__(). In my example of acquiring a lock, the block of code runs in the context of having a lock available. In the example of reading a file, the block of code runs in the context of the file being open.
There isn't any secret magic inside Python for this. There is no special scoping, no internal stack, and nothing special in the parser. You simply write two method functions, .__enter__() and .__exit__() and Python calls them at specific points for your with statement.
Look again at this section from the PEP:
Remember, PEP 310 proposes roughly this syntax (the "VAR =" part is optional):
with VAR = EXPR:
    BLOCK
which roughly translates into this:
VAR = EXPR
VAR.__enter__()
try:
    BLOCK
finally:
    VAR.__exit__()
In both examples, BLOCK is a block of code that runs in a specific runtime context that is set up by the call to VAR.__enter__() and torn down by VAR.__exit__().
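To make the call order concrete, here is a minimal, made-up context manager (the class name Verbose is just for illustration) that prints when each method runs:

class Verbose:
    """Minimal context manager that reports when it is entered and exited."""
    def __enter__(self):
        print("__enter__: setting up the runtime context")
        return self  # this is what an `as` target would receive
    def __exit__(self, exc_type, exc_value, traceback):
        print("__exit__: tearing down; exception type =", exc_type)
        return False  # do not suppress any exception

with Verbose():
    print("BLOCK runs inside the context")

You will see __enter__ printed first, then the block, then __exit__; the __exit__ line still appears if the block raises.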
There are two main benefits to the with statement and the way it is all set up.
The more concrete benefit is that it's "syntactic sugar". I would much rather write a two-line with statement than a six-line sequence of statements; it's easier to write the shorter one, it looks nicer and is easier to understand, and it is easier to get right. Six lines versus two means more chances to screw things up. (And before the with statement, I was usually sloppy about wrapping file I/O in a try block; I only did it sometimes. Now I always use with and always get the exception handling.)
The more abstract benefit is that this gives us a new way to think about designing our programs. Raymond Hettinger, in a talk at PyCon 2013, put it this way: when we are writing programs we look for common parts that we can factor out into functions. If we have code like this:
A
B
C
D
E
F
B
C
D
G
we can easily make a function:
def BCD():
    B
    C
    D
A
BCD()
E
F
BCD()
G
But we have never had a really clean way to do that with setup/teardown. Suppose we have a lot of code like this:
A
BCD()
E
A
XYZ()
E
A
PDQ()
E
Now we can define a context manager and rewrite the above:
with contextA:
    BCD()

with contextA:
    XYZ()

with contextA:
    PDQ()
So now we can think about our programs and look for setup/teardown that can be abstracted into a "context manager". Raymond Hettinger showed several new "context manager" recipes he had invented (and I'm racking my brain trying to remember an example or two for you).
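As a rough sketch of that factoring (using contextlib.contextmanager; A, E and BCD are just placeholder functions standing in for your real setup, teardown and body code):

from contextlib import contextmanager

def A(): print("setup A")
def E(): print("teardown E")
def BCD(): print("doing BCD")

@contextmanager
def contextA():
    A()          # setup
    try:
        yield    # the body of the with-block runs here
    finally:
        E()      # teardown runs even if the body raises

with contextA():
    BCD()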
EDIT: Okay, I just remembered one. Raymond Hettinger showed a recipe, built into Python 3.4, for using a with statement to ignore an exception within a block. See it here: https://stackoverflow.com/a/15566001/166949
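That recipe shipped as contextlib.suppress in Python 3.4; a short example:

import os
from contextlib import suppress  # Python 3.4+

# Remove a file if it exists; silently ignore the error if it doesn't.
with suppress(FileNotFoundError):
    os.remove("somefile.tmp")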
P.S. I've done my best to give the sense of what he was saying... if I have made any mistake or misstated anything, it's on me and not on him. (And he posts on StackOverflow sometimes so he might just see this and correct me if I've messed anything up.)
EDIT: You've updated the question with more text. I'll answer it specifically as well.
is this what Beazley means when he talks about 'runtime context', that f is scoped only within the block and loses all significance outside the with-block? Why does he say that the statements "execute inside a runtime context"? Is this like an "eval"?
Actually, f is not scoped only within the block. When you bind a name using the as keyword in a with statement, the name remains bound after the block.
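You can check that yourself with a plain file object:

with open("debuglog", "a") as f:
    f.write("Debugging\n")

# The name f is still bound after the block; only the file has been closed.
print(f.closed)  # True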
The "runtime context" is an informal concept and it means "the state set up by the .__enter__() method function call and torn down by the .__exit__() method function call." Again, I think the best example is the one about getting a lock before the code runs. The block of code runs in the "context" of having the lock.
I understand that open returns an object that is "not ... assigned to var"?? Why isn't it assigned to var? What does Beazley mean by making a statement like that?
Okay, suppose we have an object, let's call it k. k implements a "context manager", which means that it has method functions k.__enter__() and k.__exit__(). Now we do this:
with k as x:
    # do something
What David Beazley wants you to know is that x will not necessarily be bound to k. x will be bound to whatever k.__enter__() returns. k.__enter__() is free to return a reference to k itself, but is also free to return something else. In this case:
with open(some_file) as f:
    # do something
The call to open() returns an open file object, which works as a context manager, and its .__enter__() method function really does just return a reference to itself.
I think most context managers return a reference to self. Since it's an object it can have any number of member variables, so it can return any number of values in a convenient way. But it isn't required.
For example, there could be a context manager that starts a daemon running in the .__enter__() function, and returns the process ID number of the daemon from the .__enter__() function. Then the .__exit__() function would shut down the daemon. Usage:
with start_daemon("parrot") as pid:
print("Parrot daemon running as PID {}".format(pid))
daemon = lookup_daemon_by_pid(pid)
daemon.send_message("test")
But you could just as well return the context manager object itself with any values you need tucked inside:
with start_daemon("parrot") as daemon:
print("Parrot daemon running as PID {}".format(daemon.pid))
daemon.send_message("test")
If we need the PID of the daemon, we can just put it in a .pid member of the object. And later if we need something else we can just tuck that in there as well.
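A sketch of what such a start_daemon context manager could look like (purely hypothetical; spawn_daemon and stop_daemon are stand-ins for whatever actually launches and stops the process):

def spawn_daemon(name):
    print("starting daemon", name)   # stand-in for the real launch code
    return 12345                     # pretend PID

def stop_daemon(pid):
    print("stopping pid", pid)       # stand-in for the real shutdown code

class start_daemon:
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        self.pid = spawn_daemon(self.name)  # tuck the PID into .pid
        return self                         # or `return self.pid` for the first usage style
    def __exit__(self, exc_type, exc_value, traceback):
        stop_daemon(self.pid)               # shut down even if the block raised
        return False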
The with statement takes care that on entry the __enter__ method is called, and that the given var is set to whatever __enter__ returns.
In most cases that is the object that was worked on previously; in the file case it is. But with a database, for example, it is not the connection object but a cursor object that is returned.
The file example can be extended like this:
f1 = open("debuglog", "a")
with f1 as f2:
    print(f1 is f2)
which will print True because here the file object itself is returned by __enter__ (from its own point of view: self).
A database works like
d = connect(...)
with d as c:
    print(d is c)  # False
    print(d, c)
Here, d and c are completely different: d is the connection to the database, c is a cursor used for one transaction.
The with block is terminated by a call to __exit__(), which is given the state of execution of the block (success, or the exception that was raised), so the __exit__() method can act appropriately.
In the file example, the file is closed no matter if there was an error or not.
In the database example, normally the transaction is committed on success and rolled back on failure.
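With the standard library's sqlite3, for example, the connection object is its own context manager (its __enter__ happens to return the connection itself, not a cursor; the exact behaviour varies by adapter):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")

# The transaction is committed if the block succeeds
# and rolled back if the block raises.
with con:
    con.execute("INSERT INTO t VALUES (1)")

con.close()  # note: the with block does not close the connection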
The context manager is for easy initialisation and cleanup of things like exactly these - files, databases etc.
There is no direct correspondence in C or C++ that I am aware of.
C has no concept of exceptions, so none could be caught in an __exit__(). C++ has exceptions, and there are ways to achieve a similar effect there (most notably RAII, where a destructor runs when a scope is exited).
Related
I am reading only the first line of a file in Python using:
with open(file_path, 'r') as f:
    my_count = f.readline()
    print(my_count)
I am a bit confused over the scope of the variable my_count. Although the print works fine, would it be better to do something like my_count = 0 outside the with statement first (e.g. in C I used to do int my_count = 0)?
A with statement does not create a scope (like if, for and while do not create a scope either).
As a result, Python will analyze the code, see that you made an assignment in the with statement, and thus make the variable local to the enclosing scope (the function or module containing the with).
In Python variables do not need initialization in all code paths: as a programmer, you are responsible to make sure that a variable is assigned before it is used. This can result in shorter code: say for instance you know for sure that a list contains at least one element, then you can assign in a for loop. In Java assignment in a for loop is not considered safe (since it is possible that the body of the loop is never executed).
Initialization before the with scope can be safer in the sense that after the with statement we can safely assume that the variable exists. If on the other hand the variable should be assigned in the with statement, not initializing it before the with statement actually results in an additional check: Python will error if somehow the assignment was skipped in the with statement.
A with statement is only used for context management purposes. It forces (by syntax) that the context you open in the with is closed at the end of the indentation.
You should also go through PEP 343 and the Python documentation. They make clear that this is not about creating a scope; it is about using a context manager. I am quoting the Python documentation on context managers:
A context manager is an object that defines the runtime context to be established when executing a with statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code. Context managers are normally invoked using the with statement (described in section The with statement), but can also be used by directly invoking their methods.
Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.
One of the basic changes from Python 2 to Python 3 was making print a function - which, to me, makes perfect sense given its structure. Why aren't the raise and del statements also functions? Especially in the case of raise it seems like it is taking an argument and doing something with it, just like a function does.
raise and del are definitely distinct from functions, each for different reasons:
raise exits the current flow of execution; the normal flow of byte-code interpretation is interrupted and the stack is unwound until the next exception handler is found. Functions can't do this, they create a new stack frame instead.
del can't be a function, because you must specify a specific target; you can't use just any expression, and what is deleted depends on the syntax given: if you use subscription, then deletion takes place for a given element in a container, otherwise a name is removed from the current namespace. The right namespace to delete from also depends on the scope of the name being deleted. See the del statement grammar definition:
del_stmt ::= "del" target_list
A function can't remove items from a parent namespace, nor can it distinguish between the result of a subscription expression and a direct reference. You pass objects to a function, but to a del statement you pass a name and a context (the context being supplied implicitly by the interpreter when deleting a local or global name).
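A small illustration of that name-vs-subscription distinction, which a function could not see:

d = {"a": 1}
del d["a"]   # deletes one item inside the container
del d        # removes the name d from the current namespace
# print(d)   # would now raise NameError

def remove(obj):
    del obj  # only unbinds the local parameter; the caller's name survives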
print, on the other hand, requires no special relationship with the current namespace or stack frame, and needs no special syntax constraints to do its work. It is purely functionality at the application level. The global sys.stdout reference can be accessed by functions just as much as by the interpreter. As such it didn't need to be a statement, and by moving it to a function, additional benefits were made available, such as being able to override its behaviour and to innovate on it more quickly across Python releases.
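Because print is now an ordinary function, it can be redirected, aliased and passed around like any other object, for example:

import sys

print("error: something went wrong", file=sys.stderr)  # redirect the output stream

log = print                      # a function can be bound to another name
log("hello", "world", sep=", ")  # prints: hello, world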
Do note that part of the raise statement was moved to application-level code instead; in Python 2 you can attach a traceback to the raised exception with:
raise ExceptionClass, exception_value, traceback_object
In Python 3, attaching a traceback to an exception has been moved to the exception itself:
raise Exception("foo occurred").with_traceback(tracebackobj)
https://www.python.org/dev/peps/pep-3105/ has a list of rationales for why print was made a function. Of the five reasons, (IMO) the most relevant one is:
print is the only application-level functionality that has a statement dedicated to it.
As explained by Alex Martelli here https://stackoverflow.com/a/1054062:
Python statements are things the Python compiler must be specifically aware of -- they may alter the binding of names, may alter control flow, and/or may need to be entirely removed from the generated bytecode in certain conditions (the latter applies to assert). print was the only exception to this assertion in Python 2; by removing it from the roster of statements, Python 3 removes an exception, makes the general assertion "just hold", and therefore is a more regular language.
del and raise obviously alter the binding of names or alter the control flow, so they are both fine as statements.
I would like to handle a NameError exception by injecting the desired missing variable into the frame and then continue the execution from last attempted instruction.
The following pseudo-code should illustrate my needs.
def function():
    return missing_var

try:
    print function()
except NameError:
    frame = inspect.trace()[-1][0]
    # inject missing variable
    frame.f_globals["missing_var"] = ...
    # continue frame execution from last attempted instruction
    exec frame.f_code from frame.f_lasti
Read the whole unittest on repl.it
Notes
As pointed out by ivan_pozdeev in his answer, this is known as resumption.
After more research, I found Veedrac's answer to the question Resuming program at line number in the context before an exception using a custom sys.excepthook posted by lc2817 very interesting. It relies on Richie Hindle's work.
Background
The code runs in a slave process, which is controlled by a parent. Tasks (functions really) are written in the parent and later passed to the slave using dill. I expect some tasks (running in the slave process) to try to access variables from outer scopes in the parent, and I'd like the slave to request those variables from the parent on the fly.
p.s.: I don't expect this magic to run in a production environment.
On the contrary to what various commenters are saying, "resume-on-error" exception handling is possible in Python. The library fuckit.py implements said strategy. It steamrollers errors by rewriting the source code of your module at import time, inserting try...except blocks around every statement and swallowing all exceptions. So perhaps you could try a similar sort of tactic?
It goes without saying: that library is intended as a joke. Don't ever use it in production code.
You mentioned that your use case is to trap references to missing names. Have you thought about using metaprogramming to run your code in the context of a "smart" namespace such as a defaultdict? (This is perhaps only marginally less of a bad idea than fuckit.py.)
from collections import defaultdict
class NoMissingNamesMeta(type):
    @classmethod
    def __prepare__(meta, name, bases):
        # use a defaultdict as the class namespace so missing names
        # resolve to "foo" instead of raising NameError
        return defaultdict(lambda: "foo")

class MyClass(metaclass=NoMissingNamesMeta):
    x = y + "bar"  # y doesn't exist
>>> MyClass.x
'foobar'
NoMissingNamesMeta is a metaclass - a language construct for customising the behaviour of the class statement. Here we're using the __prepare__ method to customise the dictionary which will be used as the class's namespace during creation of the class. Thus, because we're using a defaultdict instead of a regular dictionary, a class whose metaclass is NoMissingNamesMeta will never get a NameError. Any names referred to during the creation of the class will be auto-initialised to "foo".
This approach is similar to @AndréFratelli's idea of manually requesting the lazily-initialised data from a Scope object. In production I'd do that, not this. The metaclass version requires less typing to write the client code, but at the expense of a lot more magic. (Imagine yourself debugging this code in two years, trying to understand why non-existent variables are dynamically being brought into scope!)
The "resumption" exception handling technique has proven to be problematic, that's why it's missing from C++ and later languages.
Your best bet is to use a while loop to not resume where the exception was thrown but rather repeat from a predetermined place:
while True:
    try:
        do_something()
    except NameError as e:
        handle_error()
    else:
        break
You really can't rewind the stack after an exception is thrown (it has already been unwound), so you'd have to deal with the issue beforehand. If your requirement is to generate these variables on the fly (which wouldn't be recommended, but you seem to understand that), then you'd have to actually request them. You can implement a mechanism for that (such as having a global custom Scope class instance and overriding __getitem__, or using something like the __dir__ function), but not in the way you are asking for.
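A minimal sketch of that "request them explicitly" idea: a hypothetical Scope object whose __getattr__ fetches unknown names on demand (fetch_from_parent is an assumed stand-in for whatever would actually ask the parent process):

def fetch_from_parent(name):
    print("fetching", name, "from parent")  # stand-in for the real IPC call
    return 42

class Scope:
    """Resolves unknown attributes lazily instead of raising NameError."""
    def __getattr__(self, name):            # only called when normal lookup fails
        value = fetch_from_parent(name)
        setattr(self, name, value)          # cache for later accesses
        return value

scope = Scope()

def function():
    return scope.missing_var  # fetched on first access, no NameError

print(function())  # 42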
I just realized there is something mysterious (at least for me) in the way you can add vertex instructions in Kivy with the with Python statement. For example, the way with is used goes something like this:
# ... some code
class MyWidget(Widget):
    # ... some code
    def some_method(self):
        with self.canvas:
            Rectangle(pos=self.pos, size=self.size)
At the beginning I thought that it was just the with Python statement that I have used occasionally. But suddenly I realize it is not. Usually it looks more like this (example taken from here):
with open('output.txt', 'w') as f:
    f.write('Hi there!')
There is usually an as after the instance and something like an alias to the object. In the Kivy example we don't define an alias, which is still OK. But the part that puzzles me is that the Rectangle instruction is still associated with self.canvas. After reading about the with statement, I am quite convinced that the Kivy code should be written like:
class MyWidget(Widget):
    # ... some code
    def some_method(self):
        with self.canvas as c:
            c.add(Rectangle(pos=self.pos, size=self.size))
I am assuming that internally the method add is the one being called. The assumption is based on the fact that we can simply add the rectangles with self.canvas.add(Rectangle(pos=self.pos, size=self.size)).
Am I missing something about the with Python statement? or is this somehow something Kivy implements?
I don't know Kivy, but I think I can guess how this specific construction works.
Instead of keeping a handle to the object you are interacting with (the canvas?), the with statement is programmed to store it in some global variable, hidden from you. Then, the statements you use inside the with block use that global variable to retrieve the object. At the end of the block, the global variable is cleared as part of cleanup.
The result is a trade-off: the code is less explicit (and explicitness is usually a desired feature in Python). However, the code is shorter, which might lead to easier understanding (with the assumption that the reader knows how Kivy works). This is actually one of the techniques for making embedded DSLs in Python.
There are some technicalities involved. For example, if you want to be able to nest such constructions (put one with inside another), instead of a simple global variable you would want to use a global variable that keeps a stack of such objects. Also, if you need to deal with threading, you would use a thread-local variable instead of a global one. But the generic mechanism is still the same: Kivy uses some state which is kept in a place outside your direct control.
There is nothing extra magical with the with statement, but perhaps you are unaware of how it works?
In order for any object to be used in a with statement it must implement two methods: __enter__ and __exit__. __enter__ is called when the with block is entered, and __exit__ is called when the block is exited for any reason.
What the object does in its __enter__ method is, of course, up to it. Since I don't have the Kivy code I can only guess that its canvas.__enter__ method sets a global variable somewhere, and that Rectangle checks that global to see where it should be drawing.
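To make that guess concrete, here is a toy sketch (not Kivy's actual code) of a canvas that pushes itself onto a hidden stack in __enter__, so that Rectangle can find the "current" canvas without an as name:

_canvas_stack = []   # hidden module-level state

class Canvas:
    def __init__(self):
        self.children = []
    def __enter__(self):
        _canvas_stack.append(self)   # make this the "current" canvas
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        _canvas_stack.pop()
        return False

class Rectangle:
    def __init__(self, **kwargs):
        # attach ourselves to whichever canvas is currently on top
        _canvas_stack[-1].children.append(self)

canvas = Canvas()
with canvas:
    Rectangle(pos=(0, 0), size=(10, 10))

print(len(canvas.children))  # 1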
I'm familiar with using python's with statement as a means of ensuring finalization of an object in the event of an exception being thrown. This usually looks like
with open('myfile.txt') as f:
    # do stuff...
which is short-hand for
f = open('myfile.txt')
try:
    # do stuff...
finally:
    f.close()
or whatever other finalization routine a class may present.
I recently came across a piece of code dealing with OpenGL that presented this:
with self.shader:
    # (many OpenGL commands)
Note the absence of any as keyword. Does this indicate that the __enter__ and __exit__ methods of the class are still called, but that the object is never explicitly used in the block (i.e., it works through globals or implicit references)? Or is there some other meaning that is eluding me?
The context manager can optionally return an object, to be assigned to the identifier named by as. And it is the object returned by the __enter__ method that is assigned by as, not necessarily the context manager itself.
Using as <identifier> helps when you create a new object, like the open() call does, but not all context managers are created just for the context. They can be reusable and have already been created, for example.
Take a database connection. You create the database connection just once, but many database adapters let you use the connection as a context manager; enter the context and a transaction is started, exit it and the transaction is either committed (on success), or rolled back (when there is an exception):
with db_connection:
    # do something to the database
No new objects need to be created here, the context is entered with db_connection.__enter__() and exited again with db_connection.__exit__(), but we already have a reference to the connection object.
Now, it could be that the connection object produces a cursor object when you enter. Now it makes sense to assign that cursor object in a local name:
with db_connection as cursor:
    # use cursor to make changes to the database
db_connection still wasn't created here; it already existed before, and we already have a reference to it. But whatever db_connection.__enter__() produced is now assigned to cursor and can be used from there on out.
This is what happens with file objects; open() returns a file object, and fileobject.__enter__() returns the file object itself, so you can use the open() call in a with statement and assign a reference to the newly created object in one step, rather than two. Without that little trick, you'd have to use:
f = open('myfile.txt')
with f:
    # use `f` in the block
Applying all this to your shader example; you already have a reference to self.shader. It is quite probable that self.shader.__enter__() returns a reference to self.shader again, but since you already have a perfectly serviceable reference, why create a new local for that?
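For illustration, a hypothetical shader wrapper along those lines might look like this (glUseProgram is the usual PyOpenGL call; the class itself is made up):

from OpenGL.GL import glUseProgram  # requires PyOpenGL

class Shader:
    def __init__(self, program_id):
        self.program_id = program_id
    def __enter__(self):
        glUseProgram(self.program_id)  # bind the program for the block
        return self                    # returning self is conventional, but unused here
    def __exit__(self, exc_type, exc_value, traceback):
        glUseProgram(0)                # unbind, even if the block raised
        return False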
The above answer is nicely put.
The only thing I kept asking myself while reading it is where to find confirmation of the following scenario: in the event there is an assignment in the body of the with statement, is anything on the right side of the assignment first "bound" to the context? So, in the following:

with db_connection():
    result = select(...)

is select here roughly ref_to_connection.select(...)?
I put this here for anyone like me who comes and goes between languages and might benefit by a quick reminder of how to read and track the refs here.