I want to make a single database object available across many python modules.
For a related example, I create globl.py:
DOCS_ROOT = r"c:\docs"  # raw string, just as an example
SOLR_BASE = "http://localhost:8636/solr/"
Any other module which needs it can do a
from globl import DOCS_ROOT
Now this example aside, I want to do the same thing with database connection objects, share them across many modules.
import MySQLdb

conn = MySQLdb.connect(host="localhost"...)
cursor = conn.cursor()
I tried this on the interpreter:
from globl import cursor
and it seems to work. But I suspect that this will cause the same module to be executed each time one imports from it. So is this the proper way?
Even if the import doesn't run the code multiple times, this is definitely not the correct way.
You should instead hide the process of obtaining a connection or cursor behind a function. You can then implement this function using either a Singleton or Object Pool design pattern.
So it would be something like this:
db.py:
import MySQLdb

_connection = None

def get_connection():
    global _connection
    if not _connection:
        _connection = MySQLdb.connect(host="localhost"...)
    return _connection

# Names accessible to importers of this module. Just in case.
__all__ = ['get_connection']
# Edit: actually you can still refer to db._connection if you know that's the
# name of the variable; it's just left out of the enumeration if you inspect
# the module.
someothermodule.py:
import db
conn = db.get_connection() # This will always return the same object
By the way, depending on what you are doing, it may not be such a good idea to share your connection object rather than create a new one every time you need one. But that is exactly why you'd want to write a get_connection() function: it abstracts these issues away from the rest of your code.
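For example, a sketch of how db.py could later be switched to handing out a fresh connection per call, without any caller noticing (the remaining connection parameters are elided, as in the question):

import MySQLdb

def get_connection():
    # Hand out a new connection each time instead of the shared singleton;
    # callers still just call db.get_connection() exactly as before.
    return MySQLdb.connect(host="localhost")  # plus whatever other parameters you need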
You suspect wrongly. The code will only be executed once - subsequent imports just refer to the module via sys.modules, and don't re-run it.
(Note that this is the case as long as you always use the same path to import the module - if you do from globl import cursor in one place, and from my.fullyqualified.project.globl import cursor in another, you will probably find the code is re-executed.)
Edit to add: as S.Lott says in the comments, this is a perfectly good way to handle a global object.
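If you want to convince yourself that the module body really runs only once, a quick interactive check (a sketch, assuming globl.py is importable) is to compare the module objects:

import sys

import globl                          # first import: globl.py is executed
again = __import__('globl')           # a later import is just a lookup in sys.modules
print(again is sys.modules['globl'])  # True: the very same module object, not re-run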
I think Daniel already answered the question; I'd just like to add a few comments about the cursor object you want to share.
It is generally not a good idea to share the cursor object that way. Certainly it depends on what your program does, but as a general solution I'd recommend hiding this cursor object behind a "factory" that produces cursors. Basically, you create a function cursor() or get_cursor() instead of making the cursor a global variable. The major benefit (but not the only one) is that you can hide more complex logic behind this "factory": pooling, automatic re-connection when the connection is dropped, and so on. Even if you don't need any of that right away, it will be very easy to add later if you start using this approach now; for the moment the function's implementation can stay as simple as return _cursor.
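A minimal sketch of such a factory, assuming the get_connection() from db.py above:

_cursor = None

def get_cursor():
    # Keep it trivial for now; pooling or automatic re-connection can later be
    # hidden behind this same function without changing any callers.
    global _cursor
    if _cursor is None:
        _cursor = get_connection().cursor()
    return _cursor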
And yes, still, the module itself will be imported once only.
Related
I just realized there is something mysterious (at least for me) in the way you can add vertex instructions in Kivy with the Python with statement. For example, the way with is used goes something like this:
# ... some code

class MyWidget(Widget):
    # ... some code
    def some_method(self):
        with self.canvas:
            Rectangle(pos=self.pos, size=self.size)
At the beginning I thought it was just the Python with statement that I have used occasionally, but suddenly I realized it is not. Usually it looks more like this (example taken from here):
with open('output.txt', 'w') as f:
    f.write('Hi there!')
There is usually an as after the instance, and something like an alias to the object. In the Kivy example we don't define an alias, which is still OK. But the part that puzzles me is that the Rectangle instruction is still associated with self.canvas. After reading about the with statement, I was quite convinced that the Kivy code should be written like this:
class MyWidget(Widget):
    # ... some code
    def some_method(self):
        with self.canvas as c:
            c.add(Rectangle(pos=self.pos, size=self.size))
I am assuming that internally the add method is the one being called. The assumption is based on the fact that we can simply add the rectangles with self.canvas.add(Rectangle(pos=self.pos, size=self.size)).
Am I missing something about the Python with statement, or is this something Kivy implements?
I don't know Kivy, but I think I can guess how this specific construction works.
Instead of keeping a handle to the object you are interacting with (the canvas?), the with statement is programmed to store it in some global variable, hidden from you. Then, the statements you use inside with use that global variable to retrieve the object. At the end of the block, the global variable is cleared as part of cleanup.
The result is a trade-off: the code is less explicit (and explicitness is usually a desired feature in Python), but it is shorter, which might make it easier to understand (assuming the reader knows how Kivy works). This is actually one of the techniques for making embedded DSLs in Python.
There are some technicalities involved. For example, if you want to be able to nest such constructions (put one with inside another), instead of a simple global variable you would want to use a global variable that keeps a stack of such objects. Also, if you need to deal with threading, you would use a thread-local variable instead of a global one. But the generic mechanism is still the same—Kivy uses some state which is kept in a place outside your direct control.
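To make that concrete, here is a small sketch of the kind of mechanism described above. This is not Kivy's actual implementation; all the names are made up for illustration:

_canvas_stack = []  # a stack rather than a single variable, so with blocks can nest

class Canvas(object):
    def __init__(self):
        self.children = []

    def __enter__(self):
        _canvas_stack.append(self)   # make this canvas the "active" one
        return self

    def __exit__(self, exc_type, exc_value, tb):
        _canvas_stack.pop()          # restore the previously active canvas

    def add(self, instruction):
        self.children.append(instruction)

class Rectangle(object):
    def __init__(self, pos=(0, 0), size=(1, 1)):
        self.pos, self.size = pos, size
        if _canvas_stack:                # inside a "with some_canvas:" block?
            _canvas_stack[-1].add(self)  # then register with the active canvas

canvas = Canvas()
with canvas:
    Rectangle(pos=(0, 0), size=(10, 10))
print(len(canvas.children))  # 1

The Rectangle never needs to be told which canvas it belongs to; it finds the active one through the hidden stack, which is exactly why the Kivy code works without an as clause.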
There is nothing extra magical about the with statement, but perhaps you are unaware of how it works?
In order for any object to be used in a with statement it must implement two methods: __enter__ and __exit__. __enter__ is called when the with block is entered, and __exit__ is called when the block is exited for any reason.
What the object does in its __enter__ method is, of course, up to it. Since I don't have the Kivy code I can only guess that its canvas.__enter__ method sets a global variable somewhere, and that Rectangle checks that global to see where it should be drawing.
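For reference, a minimal (non-Kivy) sketch of when the two methods get called:

class Demo(object):
    def __enter__(self):
        print("entering the block")
        return self

    def __exit__(self, exc_type, exc_value, tb):
        print("leaving the block (runs even if an exception was raised)")
        return False  # do not suppress exceptions

with Demo():
    print("inside the block")
# prints: entering the block / inside the block / leaving the block (...)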
I'm familiar with using python's with statement as a means of ensuring finalization of an object in the event of an exception being thrown. This usually looks like
with open('myfile.txt') as f:
    do stuff...
which is short-hand for
f = open('myfile.txt')
try:
    do stuff...
finally:
    f.close()
or whatever other finalization routine a class may present.
I recently came across a piece of code dealing with OpenGL that presented this:
with self.shader:
    (Many OpenGL commands)
Note the absence of any as keyword. Does this indicate that the __enter__ and __exit__ methods of the class are still called, but that the object is never explicitly used in the block (i.e., it works through globals or implicit references)? Or is there some other meaning that is eluding me?
The context manager can optionally return an object, to be assigned to the identifier named by as. And it is the object returned by the __enter__ method that is assigned by as, not necessarily the context manager itself.
Using as <identifier> helps when you create a new object, like the open() call does, but not all context managers are created just for the context. They can be reusable and have already been created, for example.
Take a database connection. You create the database connection just once, but many database adapters let you use the connection as a context manager; enter the context and a transaction is started, exit it and the transaction is either committed (on success), or rolled back (when there is an exception):
with db_connection:
    # do something to the database
No new objects need to be created here, the context is entered with db_connection.__enter__() and exited again with db_connection.__exit__(), but we already have a reference to the connection object.
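A bare-bones sketch of how such a connection object can implement that behaviour (this is not any real database adapter, just an illustration of the pattern):

class Connection(object):
    def __enter__(self):
        self.begin()         # start a transaction
        return self          # here __enter__ happens to return the connection itself

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is None:
            self.commit()    # the block finished normally
        else:
            self.rollback()  # an exception escaped the block
        return False         # do not suppress that exception

    # placeholder transaction methods, just for the sketch
    def begin(self):    print("BEGIN")
    def commit(self):   print("COMMIT")
    def rollback(self): print("ROLLBACK")

db_connection = Connection()
with db_connection:          # prints BEGIN, then COMMIT
    pass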
Now, it could be that the connection object produces a cursor object when you enter the context. In that case it makes sense to bind that cursor object to a local name:
with db_connection as cursor:
    # use cursor to make changes to the database
db_connection itself still wasn't created here; it already existed, and we already have a reference to it. But whatever db_connection.__enter__() produced is now assigned to cursor and can be used from there on out.
This is what happens with file objects; open() returns a file object, and fileobject.__enter__() returns the file object itself, so you can use the open() call in a with statement and assign a reference to the newly created object in one step, rather than two. Without that little trick, you'd have to use:
f = open('myfile.txt')
with f:
    # use `f` in the block
Applying all this to your shader example; you already have a reference to self.shader. It is quite probable that self.shader.__enter__() returns a reference to self.shader again, but since you already have a perfectly serviceable reference, why create a new local for that?
The above answer is nicely put.
The only thing I kept asking myself while reading it is where the confirmation of the following scenario is: if there is an assignment in the body of the with statement, is anything on the right side of the assignment first "bound" to the context? So, in the following:
with db_connection():
    result = select(...)
...is select roughly ref_to_connection.select(...)?
I put this here for anyone like me who comes and goes between languages and might benefit from a quick reminder of how to read and track the references here.
I am trying to compare two modules/classes/methods to find out if the class/method has changed. We allow users to change classes/methods, and after processing we make those changes persistent, without overwriting the older classes/methods. However, before we commit the new classes, we need to establish whether the code has changed and also whether the functionality of the methods has changed, e.g. the output differs or the performance differs on the same input data. I am OK with the performance change; my problem is changes in code and how to log what has changed. I wrote something like the below:
import unittest

class TestIfClassHasChanged(unittest.TestCase):
    def setUp(self):
        self.old = old_class()
        self.new = new_class()

    def test_if_code_has_changed(self):
        # simple case for one method
        old_codeobject = self.old.area.func_code.co_code
        new_codeobject = self.new.area.func_code.co_code
        self.assertEqual(old_codeobject, new_codeobject)
where area() is a method in both classes. However, if I have many methods, what I see here is looping over all methods. Is it possible to do this at class or module level?
Secondly, if I find that the code objects are not equal, I would like to log the changes. I used inspect.getsource(self.old.area) and inspect.getsource(self.new.area) and compared the two to get the difference; could there be a better way of doing this?
You should be using a version control program to help manage development. One of the specific features you get from a VC program is the ability to track changes. You can do diffs between the current source code and previous check-ins to test whether there were any changes.
if I have many methods, what I see here is looping over all methods. Possible to do this at class or module level?
I will not ask why you want to do such a thing, but yes, you can. Here is an example:
import inspect
import collections

# Here I will loop over all the functions in a module.
module = __import__('inspect')  # this is fun !!!

# Get all functions in the module.
list_functions = inspect.getmembers(module, inspect.isfunction)

# Get classes and their corresponding methods.
list_class = inspect.getmembers(module, inspect.isclass)
class_method = collections.defaultdict(list)
for class_name, class_obj in list_class:
    for method in inspect.getmembers(class_obj, inspect.ismethod):
        class_method[class_name].append(method)
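For the second part of the question (logging what changed), a minimal sketch that builds on the inspect.getsource() comparison already mentioned, using difflib from the standard library, could look like this (log_method_diff is just an illustrative name; old_class/new_class are the classes from the question):

import difflib
import inspect

def log_method_diff(old_method, new_method):
    # Return a unified diff of the two methods' source (empty string if identical).
    old_src = inspect.getsource(old_method).splitlines()
    new_src = inspect.getsource(new_method).splitlines()
    return '\n'.join(difflib.unified_diff(old_src, new_src,
                                          fromfile='old', tofile='new',
                                          lineterm=''))

# e.g. print(log_method_diff(old_class.area, new_class.area))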
Having just pulled my hair out because of a difference, I'd like to know what the difference really is in Python 2.5.
I had two blocks of code (dbao.getConnection() returns a MySQLdb connection).
conn = dbao.getConnection()
with conn:
    # Do stuff
And
with dbao.getConnection() as conn:
    # Do stuff
I thought these would have the same effect, but apparently not: the conn object in the latter version was a Cursor. Where did the cursor come from, and is there a way to combine the variable initialization and the with statement somehow?
It may be a little confusing at first glance, but
with babby() as b:
    ...
is not equivalent to
b = babby()
with b:
    ...
To see why, here's how the context manager would be implemented:
class babby(object):
    def __enter__(self):
        return 'frigth'

    def __exit__(self, type, value, tb):
        pass
In the first case, the name b will be bound to whatever is returned from the __enter__ method of the context manager. This is often the context manager itself (for example for file objects), but it doesn't have to be; in this case it's the string 'frigth', and in your case it's the database cursor.
In the second case, b is the context manager object itself.
In general terms, the value assigned by the as part of a with statement is going to be whatever gets returned by the __enter__ method of the context manager.
The with statement is there to allow, for example, making sure that a transaction is started and stopped correctly.
In the case of database connections in Python, I think the natural thing to do is to create a cursor at the beginning of the with block and then commit or roll back the transaction at the end of it.
The two blocks you gave are the same from the with statement's point of view. You can add as to the first one just as well and get the cursor.
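So, a sketch of your first block rewritten (still assuming dbao.getConnection() returns a MySQLdb connection which, as you observed, hands back a cursor from __enter__):

conn = dbao.getConnection()
with conn as cursor:            # conn.__enter__() returns a cursor here
    cursor.execute("SELECT 1")  # do stuff with the cursor; a normal exit commits
                                # the transaction, an exception rolls it back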
You need to check how the with support is implemented in the object you use it with.
See http://docs.python.org/whatsnew/2.5.html#pep-343-the-with-statement