Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object - python

I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects.
This is language-agnostic pseudocode from Wikipedia:
s ← s0; e ← E(s)                            // Initial state, energy.
sbest ← s; ebest ← e                        // Initial "best" solution
k ← 0                                       // Energy evaluation count.
while k < kmax and e > emax                 // While time left & not good enough:
    snew ← neighbour(s)                     // Pick some neighbour.
    enew ← E(snew)                          // Compute its energy.
    if enew < ebest then                    // Is this a new best?
        sbest ← snew; ebest ← enew          // Save 'new neighbour' to 'best found'.
    if P(e, enew, temp(k/kmax)) > random() then  // Should we move to it?
        s ← snew; e ← enew                  // Yes, change state.
    k ← k + 1                               // One more evaluation done.
return sbest                                // Return the best solution found.
The following is an adaptation of the technique. My supervisor said the idea is fine in theory.
First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it, and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight associated with it, called wOld. The weighting is described below.
The function does the following:
Let this allocation, aOld be the best_node
From all the students, pick a random number of students and stick in a list
Strip (DEALLOCATE) them of their projects and reflect the changes for projects (the allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated)
Randomise that list
Try assigning (REALLOCATE) everyone in that list projects again
Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101)
For this new allocation aNew, if its weight wNew is smaller than wOld (the weight of the allocation I picked up at the beginning), then this becomes the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
If wOld < wNew, then apply the algorithm to aOld again and continue.
The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict)
Right now I can only perform this algorithm once, but I'll need to run it N times (denoted by kmax in the Wikipedia snippet) and make sure I always keep hold of the previous node and the best_node.
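To make the control flow concrete, here is a hedged Python sketch of that outer loop over nodes, with a hypothetical neighbour() helper standing in for the deallocate/reallocate step; it is only meant to show where the current node and best_node live, not to be a drop-in implementation:

import math
import random

def anneal(start_node, kmax, emax):
    """Simulated annealing over allocation nodes.

    A node is (weight, allocation_dict, projects_dict, lecturers_dict), as
    described above. neighbour() is a hypothetical helper standing in for
    the deallocate/reallocate step; it must return a *new* node built from
    copies, so the current and best nodes are never mutated in place.
    """
    current = best = start_node
    e = ebest = start_node[0]
    for k in range(kmax):
        if e <= emax:
            break                                  # good enough, stop early
        candidate = neighbour(current)             # hypothetical: returns a fresh node
        enew = candidate[0]                        # weight = sum of ranks (101 if unallocated)
        if enew < ebest:
            best, ebest = candidate, enew          # remember the best node seen so far
        temp = 1.0 - k / float(kmax)               # simple linear cooling schedule
        if enew < e or math.exp((e - enew) / temp) > random.random():
            current, e = candidate, enew           # accept the move
    return best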
So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, that only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLAlchemy.
Questions:
I've been given some solutions to the problems I'm facing, but due to my über green-ness with Python they all sound rather cryptic to me.
Deepcopy isn't playing nicely with SQLAlchemy. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)". Can someone please explain what that means?
I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but which is necessary for the object to have. I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?
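If I understand the advice correctly, a hedged sketch of such a "copy constructor" might look like this. The Project class and its columns (title, allocated, rank) are invented here, since the real schema isn't shown; the point is only that the copy is built through __init__(), so SQLAlchemy attaches a fresh _sa_instance_state:

from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Project(Base):
    __tablename__ = 'projects'

    id = Column(Integer, primary_key=True)
    title = Column(String)
    allocated = Column(Boolean, default=False)
    rank = Column(Integer)

    def copy(self):
        # "Copy constructor": build the copy through __init__() so SQLAlchemy
        # sets up a fresh _sa_instance_state, and copy across only the
        # attributes the annealing step cares about (not the primary key).
        return Project(title=self.title, allocated=self.allocated, rank=self.rank)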
An alternate solution is that I could "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up; there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me, so could someone kindly explain what it means?
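The __deepcopy__() route is the same idea hooked into the copy module, so that copy.deepcopy() on the allocation dictionaries does the right thing automatically. Rather than manipulating _sa_instance_state directly with the helpers in sqlalchemy.orm.attributes (which I won't reproduce from memory), a simpler hedged sketch, continuing the invented Project class above, builds the copy through __init__():

import copy

def _project_deepcopy(self, memo):
    # Build the copy through __init__() so it gets its own, freshly
    # initialised _sa_instance_state, then record it in the memo dict so
    # shared references inside the copied structure resolve to the same
    # new object.
    new = Project(title=self.title, allocated=self.allocated, rank=self.rank)
    memo[id(self)] = new
    return new

Project.__deepcopy__ = _project_deepcopy

# With this in place, copy.deepcopy() on a whole allocation dictionary of
# Project objects produces independent, properly instrumented copies:
# a_new = copy.deepcopy(a_old)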
A more general question: given the above information, are there any suggestions on how I can maintain the information/state for the best_node (which must persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, and therefore by the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?

I have another possible solution: use transactions. This probably still isn't the best solution but implementing it should be faster.
Firstly create your session like this:
# transactional session
Session = sessionmaker(transactional=True)
sess = Session()
That way it will be transactional. The way transactions work is that sess.commit() will make your changes permanent while sess.rollback() will revert them.
In the case of simulated annealing, you want to commit when you find a new best solution. At any later point you can invoke rollback() to revert the state back to that point.
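A hedged sketch of how that could slot into the loop, assuming the sess from the snippet above plus two hypothetical helpers: mutate_allocation() (the deallocate/reallocate step applied to the mapped objects) and current_weight() (summing the ranks). Note that this simplified version only keeps strict improvements and drops the probabilistic acceptance step:

best_weight = current_weight(sess)    # hypothetical: sum the ranks of the current allocation

for k in range(kmax):                 # kmax iterations, as in the pseudocode
    mutate_allocation(sess)           # hypothetical deallocate/reallocate step
    new_weight = current_weight(sess)
    if new_weight < best_weight:
        sess.commit()                 # make this state the new "best" baseline
        best_weight = new_weight
    else:
        sess.rollback()               # throw the changes away, back to the best state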

You don't want to copy SQLAlchemy objects like that. You could implement your own methods to make the copies easily enough, but that is probably not what you want. You don't want copies of students and projects in your database, do you? So don't copy that data.
So you have a dictionary which holds your allocations. During the process you should never modify the SQLAlchemy objects. All information that can be modified should be stored in those dictionaries. If you need to modify the objects to take that into account, copy the data back at the end.
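A hedged sketch of that separation, with invented attribute names (id, project_id, allocated) and assuming students, projects and session already exist: keep the mutable annealing state in plain dictionaries keyed by primary key, and only write back to the ORM objects once at the end.

import copy

# Build plain-data snapshots of the mapped objects once, up front.
allocation = {s.id: s.project_id for s in students}           # student id -> project id
project_allocated = {p.id: p.allocated for p in projects}     # project id -> bool

# Plain dicts of ints/bools deep-copy safely, so snapshots are cheap and
# never touch the ORM instrumentation.
best_allocation = copy.deepcopy(allocation)

# ... run the annealing loop, mutating only `allocation` and
# `project_allocated`, and snapshotting `best_allocation` when improved ...

# Only at the very end, write the winning allocation back to the mapped objects.
for student in students:
    student.project_id = best_allocation[student.id]
session.commit()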


scala slower than python in constructing a set

I am learning Scala by converting some of my Python code to Scala. I just encountered an issue where the Python code significantly outperforms the Scala code. The code is supposed to construct a set of candidate pairs based on some conditions; Scala has had runtime performance comparable to Python for all the previous parts.
id_map is an array of maps from Long to sets of strings. The average number of key-value pairs per map is 1942.
The scala code snippet is below:
// id_map: Array[mutable.Map[Long, Set[String]]]
val candidate_pairs = id_map
.flatMap(hashmap => hashmap.values)
.filter(_.size >= 2)
.flatMap(strset => strset.toList.combinations(2))
.map(_.sorted)
.toSet
and the corresponding python code is
from itertools import combinations

candidate_pairs = set()
for hashmap in id_map.values():
    for strset in hashmap.values():
        if len(strset) >= 2:
            for pair in combinations(strset, 2):
                candidate_pairs.add(tuple(sorted(pair)))
The Scala snippet takes 80 seconds while the Python version takes 10 seconds.
I am wondering what I can do to optimise the above code and make it faster. What I have been trying is updating the set using a for loop:
var candidate_pairs = Set.empty[List[String]]
for (
  hashmap: mutable.Map[Long, Set[String]] <- id_map;
  setstr: Set[String] <- hashmap.values if setstr.size >= 2;
  pair <- setstr.toList.combinations(2)
)
  candidate_pairs += pair.sorted
Although candidate_pairs is updated many times and each update creates a new set, this is actually faster than the previous Scala version, taking about 50 seconds, though still worse than Python. I also tried a mutable set, but the result was about the same as the immutable version.
Any help would be appreciated! Thanks!
Being slower than python sounds ... surprising.
First of all, make sure you have adequate memory settings, and it is not spending half of those 80 seconds in GC.
Also, be sure to "warm up" the JVM (run your function a few times before doing actual measurement), use the same exact data for runs in python and scala (not just same statistics, exactly the same data), and do not include the time spent acquiring/generating data into measurement. Make several runs and compare average time, not how much a single run took.
Having said that, a few ways to make your code faster:
Adding .view (or .iterator) after id_map in your implementation cuts the execution time by about a factor of 4 in my experiments.
(.view makes your chained transformations apply "lazily" – essentially making a single pass through the single instance of the array instead of multiple passes over multiple copies.)
- Replacing .map(_.sorted) with
.map {
  case List(a, b) if a < b => (a, b)
  case List(a, b)          => (b, a)
}
shaves off about another 75% (sorting two-element lists is mostly overhead).
This changes the elements to tuples rather than lists (constructing lots of tiny lists also adds up), but that actually seems even more appropriate in this case.
- Removing .filter(_.size >= 2) (it is redundant anyway, and computing the size of a collection can get expensive) yields a further improvement, but one small enough that I did not bother to measure it exactly.
Additionally, it may be cheaper to get rid of the separate sort step altogether, and just add .sorted before .combinations. I have not tested it, because it would be futile without knowing more details about your data profile.
These are general improvements that should help your performance either way, though it is hard to be sure you'll see exactly the same effect as I do, since I don't know anything about your data beyond that average map size; the improvement you see might be even better than mine, or somewhat smaller, but you should see some.
I ran this version with some test Scala code I created. On a list of 1944 elements, it completed in about 15 ms on my laptop.
id_map
  .flatMap(hashmap => hashmap.values)
  .flatMap { strset =>
    if (strset.size >= 2) {
      strset.toIndexedSeq.combinations(2)
    } else IndexedSeq.empty
  }
  .map(_.sorted)
  .toSet
The main changes I made are to use an IndexedSeq instead of a List (which is a linked list), and to do the filtering on the fly.
I assume you didn't want to hyper-optimize, but if you did, you could still remove a lot of the intermediate collections created in the flatMap, combinations, conversion to IndexedSeq, and the toSet call.

Methods vs. Properties

I'm working on the Python API for our physics simulation package (Mechanica, https://mechanica.readthedocs.io), and mulling over whether to use properties or methods.
Python has an established convention that the objects in a dictionary are accessible via an items() method, i.e.
[i for i in d.items()]
I'm trying to adhere to this established convention for our objects, but it's sometimes awkward, for example, in our simulator, we have:
C.items()                   # get all the members of this type
n = a.neighbors()           # get all the neighbors of an object
c = find_some_cluster()     # some function to find a cluster
c.items()                   # get all the items in this list
b = m.bind(pot, a, b)       # create a bond between two objects
b.energy()                  # gets the current energy of the bond
b.half_life                 # gets / sets the half life
b.delete()                  # deletes the bond
b.items()[0], b.items()[1]  # gets the pair of objects that this bond acts on
b.dissociation_energy       # bond breaking threshold
To access one of the objects that a bond is between, you currently have to call
b.items()[0]
I think that's awkward, and perhaps items would be better as a property. I just don't know, though, because if I made it a property that would go against some of the established Python convention. Python itself is pretty inconsistent: some things are stand-alone functions, e.g. len(a) for the length of a list, but most other things are methods on objects.
On our bond object, some things are methods, like energy(), while others are properties, like half_life. I set these up this way because half_life is actually a stateful property of the object, whereas energy() is a computed value. I'm not sure this makes the most sense to the end user, though.
What do you think our items should be, a method or a property? And is there any good rule for deciding when something should be a method and when a property?
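For what it's worth, the distinction you are already drawing (stateful attribute vs. computed value) is the usual rule of thumb: cheap, argument-free access to stored state reads well as a property, while anything expensive, argument-taking or action-like reads better as a method. A hedged sketch of a bond-like class under that rule (this Bond and its internals are invented for illustration, not the real Mechanica API):

class Bond:
    """Illustrative only - not the real Mechanica Bond class."""

    def __init__(self, obj_a, obj_b, half_life=None):
        self._pair = (obj_a, obj_b)
        self.half_life = half_life        # plain stored state -> attribute/property access

    @property
    def parts(self):
        # Cheap accessor of stored state -> property: b.parts[0], b.parts[1]
        return self._pair

    def energy(self):
        # Potentially expensive computation -> keep the parentheses as a hint
        return self._compute_energy()     # hypothetical internal calculation

    def delete(self):
        # Action with side effects -> definitely a method
        ...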

Questions related to performance/efficiency in Python/Django

I have a few questions that have been bothering me for a few days. I'm a beginner Python/Django programmer, so I just want to clear a few things up before I dive into real product development (for Python 2.7.*).
1) Saving a value in a variable before using it in a function call
# option 1
for x in some_sequence:        # some_sequence is a list or tuple
    func(do_something(x))
# option 2
for x in some_sequence:
    y = do_something(x)
    func(y)
Which one is faster, and which one SHOULD I use?
2) Creating a new object of a model in Django
# option 1
def myview(request):
    u = User(username="xyz12", city="TA", name="xyz", ...)
    u.save()
# option 2
def myview(request):
    d = {'username': "xyz12", 'city': "TA", 'name': "xyz", ...}
    u = User(**d)
    u.save()
3) Creating a dictionary
var = dict(key1=val1, key2=val2, ...)
var = {'key1': val1, 'key2': val2, ...}
4) I know .append() is faster than +=, but what if I want to append one list's elements to another?
a = [1, 2, 3]
b = [4, 5, 6]
a += b
or
for i in b:
    a.append(i)
This is a very interesting question, but I think you're not asking it for the right reason. The performance gained by such optimisations is negligible, especially if you're working with a small number of elements.
On the other hand, what really matters is how easy the code is to read and how clear it is.
def myview(request):
    d = {'username': "xyz12", 'city': "TA", 'name': "xyz", ...}
    u = User(**d)
    u.save()
This code, for example, isn't "easy" to read and understand at first sight. It requires you to think about it before working out what it actually does. Unless you need the intermediate step, don't do it.
For the 4th point, I'd go for the first solution, which is much clearer (and it avoids the function-call overhead of calling the same method over and over in a loop). You could also use more specialised functions for better performance, such as reduce (see this answer: https://stackoverflow.com/a/11739570/3768672 and this thread as well: What is the fastest way to merge two lists in python?).
The 1st and 3rd points are mostly a matter of preference, as both forms are really similar and will probably be optimised similarly when compiled to bytecode anyway.
If you really want to optimise your code further, I advise you to check this out: https://wiki.python.org/moin/PythonSpeed/PerformanceTips
PS: Ultimately, you can still run your own tests. Write two functions doing exactly the same thing with the two different methods you want to compare, measure their execution times and compare them (and be careful to run the tests multiple times to reduce the uncertainty).
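For example, a hedged sketch of such a measurement with the standard timeit module, comparing the list-extension options from point 4 (the exact numbers will vary with your data and Python build):

import timeit

setup = "a = list(range(1000)); b = list(range(1000))"

# Each statement works on a fresh copy of `a` so the list doesn't grow
# between repetitions and skew the timings.
print(timeit.timeit("c = a[:]; c += b", setup=setup, number=10000))
print(timeit.timeit("c = a[:]\nfor i in b: c.append(i)", setup=setup, number=10000))
print(timeit.timeit("c = a[:]; c.extend(b)", setup=setup, number=10000))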

Managing Memory with Python Reading Objects of Varying Sizes from OODB's

I'm reading in a collection of objects (tables like sqlite3 tables or dataframes) from an object-oriented database, most of which are small enough that the Python garbage collector can handle them without incident. However, when they get larger in size (though still less than 10 MB) the GC doesn't seem to be able to keep up.
The pseudocode looks like this:
import gc

walk = walkgenerator('/path')
objs = objgenerator(walk)

with db.transaction(bundle=True, maxSize=10000, maxParts=10):
    oldobj = None
    oldtable = None
    for count, obj in enumerate(objs):    # count added via enumerate so the modulo check works
        currenttable = obj.table
        if oldtable and oldtable in currenttable:
            db.delete(oldobj.path)
        del oldtable
        oldtable = currenttable
        del oldobj
        oldobj = obj
        if not count % 100:
            gc.collect()
I'm looking for an elegant way to manage memory while allowing Python to handle it when possible.
Perhaps embarrassingly, I've tried using del to help clean up reference counts.
I've tried calling gc.collect() at varying modulo counts in my for loop:
100 (no difference),
1 (slows the loop quite a lot, and I still get a memory error of some type),
3 (the loop is still slow, but memory still blows up eventually).
Suggestions are appreciated!!!
In particular, I'd appreciate tools to assist with introspection. I've been using Windows Task Manager here, and memory usage seems to spring a leak more or less at random. I've limited the transaction size as much as I feel comfortable with, and that seems to help a little bit.
There's not enough info here to say much, but what I do have to say wouldn't fit in a comment so I'll post it here ;-)
First, and most importantly, in CPython garbage collection is mostly based on reference counting. gc.collect() won't do anything for you (except burn time) unless trash objects are involved in reference cycles (an object A can be reached from itself by following a chain of pointers transitively reachable from A). You create no reference cycles in the code you showed, but perhaps the database layer does.
So, after you run gc.collect(), does memory use go down at all? If not, running it is pointless.
I expect it's most likely that the database layer is holding references to objects longer than necessary, but digging into that requires digging into exact details of how the database layer is implemented.
One way to get clues is to print the result of sys.getrefcount() applied to various large objects:
>>> import sys
>>> bigobj = [1] * 1000000
>>> sys.getrefcount(bigobj)
2
As the docs say, the result is generally 1 larger than you might hope, because the refcount of getrefcount()'s argument is temporarily incremented by 1 simply because it is being used (temporarily) as an argument.
So if you see a refcount greater than 2, del won't free the object.
Another way to get clues is to pass the object to gc.get_referrers(). That returns a list of objects that directly refer to the argument (provided that a referrer participates in Python's cyclic gc).
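A hedged sketch of what that kind of introspection might look like in practice (objs_sample here is a hypothetical stand-in for wherever you keep a handle on one of the large objects):

import gc
import sys

# `suspect` stands in for one of your large table objects that you
# expected to have been freed already.
suspect = objs_sample[0]

# A count much larger than 2 means something besides your local name is
# still holding a reference (getrefcount() itself accounts for one).
print(sys.getrefcount(suspect))

# List who is referring to it directly, to track down the holder.
for referrer in gc.get_referrers(suspect):
    print(type(referrer), repr(referrer)[:80])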
BTW, you need to be clearer about what you mean by "doesn't seem to work" and "blows up eventually". I can't guess. What exactly goes wrong? For example, is MemoryError raised? Something else? Tracebacks often yield a world of useful clues.

Force matrix_world to be recalculated in Blender

I'm designing an add-on for Blender that changes the location of certain vertices of an object. Every object in Blender has a matrix_world attribute, which holds the matrix that transforms the coordinates of the vertices from the object frame to the world frame.
import math
import mathutils

print(object.matrix_world)  # unit matrix (as expected)
object.location += mathutils.Vector((5, 0, 0))
object.rotation_quaternion *= mathutils.Quaternion((0.0, 1.0, 0.0), math.radians(45))
print(object.matrix_world)  # Also unit matrix!?!
The above snippet shows that after the translation, you still have the same matrix_world. How can I force blender to recalculate the matrix_world?
You should call Scene.update after changing those values; otherwise Blender won't recalculate matrix_world until it's needed [somewhere else]. The reason, according to the "Gotchas" section in the API docs, is that this recalculation is an expensive operation, so it's not done right away:
Sometimes you want to modify values from python and immediately access the updated values, eg:
Once you change an object's bpy.types.Object.location you may want to access its transformation right after from bpy.types.Object.matrix_world, but this doesn't work as you might expect.
Consider the calculations that might go into working out the object's final transformation; this includes:
animation function curves.
drivers and their Python expressions.
constraints.
parent objects and all of their f-curves, constraints, etc.
To avoid expensive recalculations every time a property is modified, Blender defers making the actual calculations until they are needed.
However, while the script runs you may want to access the updated values.
This can be done by calling bpy.types.Scene.update after modifying values which recalculates all data that is tagged to be updated.
Calls to bpy.context.scene.update() can become expensive when called within a loop.
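Putting that together for the snippet in the question, a minimal sketch (using the pre-2.8 API this thread is about; in Blender 2.8+ the equivalent call is bpy.context.view_layer.update()):

import bpy
import mathutils

obj = bpy.context.object
obj.location += mathutils.Vector((5, 0, 0))

bpy.context.scene.update()   # force the deferred recalculation now
print(obj.matrix_world)      # reflects the new location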
If your objects have no complex constraints (e.g. they are plain or simply parented), the following can be used to recompute the world matrix after changing the object's .location, .rotation_euler/.rotation_quaternion, or .scale.
def update_matrices(obj):
    if obj.parent is None:
        obj.matrix_world = obj.matrix_basis
    else:
        obj.matrix_world = obj.parent.matrix_world * \
                           obj.matrix_parent_inverse * \
                           obj.matrix_basis
Some notes:
Immediately after setting object location/rotation/scale the object's matrix_basis is updated
But matrix_local (when parented) and matrix_world are only updated during scene.update()
When matrix_world is manually recomputed (using the code above), matrix_local is recomputed as well
If the object is parented, then its world matrix depends on the parent's world matrix as well as the parent's inverse matrix at the time of creation of the parenting relationship.
I needed to do this too, but I needed the value to be updated while I imported a large scene with tens of thousands of objects.
Calling scene.update() became exponentially slower, so I needed to find a way to do this without calling that function.
This is what I came up with:
from mathutils import Matrix

def BuildScaleMatrix(s):
    return (Matrix.Scale(s[0], 4, (1, 0, 0)) *
            Matrix.Scale(s[1], 4, (0, 1, 0)) *
            Matrix.Scale(s[2], 4, (0, 0, 1)))

def BuildRotationMatrixXYZ(r):
    return (Matrix.Rotation(r[2], 4, 'Z') *
            Matrix.Rotation(r[1], 4, 'Y') *
            Matrix.Rotation(r[0], 4, 'X'))

def BuildMatrix(t, r, s):
    return Matrix.Translation(t) * BuildRotationMatrixXYZ(r) * BuildScaleMatrix(s)

def UpdateObjectTransform(ob):
    ob.matrix_world = BuildMatrix(ob.location, ob.rotation_euler, ob.scale)
This isn't the most efficient way to build a matrix (if you know of a better way in Blender, please add it), and it only works for XYZ-order transforms, but it avoids the exponential slowdown when dealing with large data sets.
Accessing Object.matrix_world causes it to "freeze" even if you don't do anything to it, e.g.:
m = C.active_object.matrix_world
causes the matrix to be stuck. Whenever you want to read the matrix, use
Object.matrix_world.copy()
Only if you want to write the matrix should you use
C.active_object.matrix_world = m
