Linking container class properties to contained class properties - python

I'm working on a simulation of a cluster of solar panels (a system/container). The properties of this cluster are linked almost one-to-one to the properties of its elements -- the panels (subsystem/contained) -- via the number of elements per cluster. E.g. the energy production of the cluster is simply the number of panels in the cluster times the production of a single panel. The same holds for the cost, weight, etc. My question is how to link the container class to the contained class.
Let me illustrate with a naive example approach:
class PanelA(BasePanel):
    ...  # _x, _y, _area, _production, etc.

    @property
    def production(self):
        # J/panel/s
        return self._area * self._efficiency

    # ... and 20 similar properties

    @property
    def _technical_detail(self):
        ...

class PanelB(BasePanel):
    ...  # similar

class PanelCluster():
    def __init__(self):
        self.panel = PanelA()
        self.density = 100  # panels/ha

    @property
    def production(self):
        # J/ha/h
        uc = 60 * 60  # unit conversion
        rho = self.density
        production_single_panel = self.panel.production
        return uc * rho * production_single_panel

    # ... and e.g. 20 similar constructions
Note that in this naive approach one would write some 20 such methods, which does not seem in line with the DRY principle.
What would be a better alternative? (Ab)use getattr? For example:
class Panel():
    unit = {'production': 'J/panel/s'}

class PanelCluster():
    panel = Panel()

    def __getattr__(self, key):
        if self.panel.__hasattr__(key):
            panel_unit = self.panel.unit[key]
            if '/panel/s' in panel_unit:
                uc = 60 * 60  # unit conversion
                rho = self.density
                value_per_panel = getattr(self.panel, key)
                return uc * rho * value_per_panel
            else:
                return getattr(self.panel, key)
This already seems more 'programmatic' but might be naive -- again. So I wonder: what are the options, and what are their pros and cons?

There are a number of Python issues with your code, e.g.:
yield means something specific in Python, probably not a good identifier
and it's spelled:
hasattr(self.panel.unit, key)
getattr(self.panel, key)
That aside, you're probably looking for a solution involving inheritance. Perhaps both Panel and PanelCluster need to inherit from a PanelFunction class?
class PanelFunctions(object):
    @property
    def Yield(...):
        ...

class Panel(PanelFunctions):
    ...

class PanelCluster(PanelFunctions):
    ...
I would leave the properties as separate definitions, since you'll need to write unit tests for all of them (and it will be much easier to determine coverage that way).
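For illustration, here is a minimal sketch of what such a shared base class could look like; the attribute names (_area, _efficiency, density) and the idea of exposing the cluster's totals through the same formula are my own assumptions, not something from the question:

class PanelFunctions(object):
    """Shared formulas; subclasses provide the attributes the formulas use."""

    @property
    def production(self):
        # J/s for a single panel, J/s per hectare for a cluster (see _area below)
        return self._area * self._efficiency


class Panel(PanelFunctions):
    def __init__(self, area, efficiency):
        self._area = area              # m^2 of one panel
        self._efficiency = efficiency


class PanelCluster(PanelFunctions):
    def __init__(self, panel, density=100):
        self.panel = panel
        self.density = density         # panels/ha

    @property
    def _area(self):
        # total panel area per hectare
        return self.panel._area * self.density

    @property
    def _efficiency(self):
        return self.panel._efficiency

With this layout each formula is written once in PanelFunctions, and PanelCluster only has to state how its inputs scale with the number of panels.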

Pygmo2: migration between islands in an archipelago during evolution

I'm trying to use the Python library Pygmo2 (https://esa.github.io/pagmo2/index.html) to parallelize an optimization problem.
To my understanding, parallelization can be achieved with an archipelago of islands (in this case, mp_island).
As a minimal working example, one of the tutorials from the official site can serve: https://esa.github.io/pagmo2/docs/python/tutorials/using_archipelago.html
I extracted the code:
class toy_problem:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x), 1 - sum(x*x), - sum(x)]

    def gradient(self, x):
        return pg.estimate_gradient(lambda x: self.fitness(x), x)

    def get_nec(self):
        return 1

    def get_nic(self):
        return 1

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "A toy problem"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)

import pygmo as pg

a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
p_toy = pg.problem(toy_problem(50))
p_toy.c_tol = [1e-4, 1e-4]
archi = pg.archipelago(n=32, algo=a_cstrs_sa, prob=p_toy, pop_size=70)
print(archi)
archi.evolve()
print(archi)
Looking at the documentation of the old version of the library (http://esa.github.io/pygmo/documentation/migration.html), migration between islands seems to be an essential feature of the island parallelization model.
Also, to my understanding, optimization algorithms like evolutionary algorithms could not work without it.
However, nowhere in the Pygmo2 documentation can I find how to perform migration.
Is it happening automatically in an archipelago?
Does it depend on the selected algorithm?
Is it not yet implemented in Pygmo2?
Is the documentation on this yet missing or did I just not find it?
Can somebody enlighten me?
pagmo2 implements migration since v2.11; the PR has been completed and merged into master. Almost all capabilities present in pagmo 1.x are restored. We will still add more topologies in the future, but they can already be implemented manually. Refer to the docs here: https://esa.github.io/pagmo2/docs/cpp/cpp_docs.html
Tutorials and examples are missing and will be added in the near future (help is welcome).
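As a rough, untested sketch of what that looks like from Python (assuming a pygmo >= 2.11 install where the archipelago constructor accepts a topology via the t argument and pg.ring is one of the bundled topologies; check the linked docs for the exact signatures):

import pygmo as pg

a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
p_toy = pg.problem(toy_problem(50))
p_toy.c_tol = [1e-4, 1e-4]

topo = pg.topology(pg.ring())   # migration routes form a ring between islands
archi = pg.archipelago(n=32, t=topo,
                       algo=a_cstrs_sa, prob=p_toy, pop_size=70)
archi.evolve()
archi.wait_check()              # wait for the asynchronous evolutions to finish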
The migration framework has not been fully ported from pagmo1 to pagmo2 yet. There is a long-standing PR open here:
https://github.com/esa/pagmo2/pull/102
We will complete the implementation of the migration framework in the next few months, hopefully by the beginning of the summer.
IMHO, the PyGMO2/pagmo documentation confirms that the migration feature is present:
The archipelago class is the main parallelization engine of pygmo. It essentially is a container of island able to initiate evolution (optimization tasks) in each island asynchronously while keeping track of the results and of the information exchange (migration) between the tasks ...
With the exception of thread_island-s (where some automated inference may take place and enforce them for thread-safe UDI-s), all other island types ({ mp_island | ipyparallel_island }-s) do create a GIL-independent form of parallelism, yet the computing is performed via an async-operated .evolve() method.
In the original PyGMO, the archipelago class was auto-.__init__()-ed with the attribute topology = unconnected(), unless specified explicitly, as documented in PyGMO, having a tuple of call-interfaces for the archipelago.__init__() method (showing just the matching one):
__init__( <PyGMO.algorithm> algo,
          <PyGMO.problem>   prob,
          <int>             n_isl,
          <int>             n_ind [, topology            = unconnected(),
                                     distribution_type   = point_to_point,
                                     migration_direction = destination
                                     ]
          )
But, adding that, one may redefine the default, so as to meet one's PyGMO evolutionary process preferences:
topo = topology.erdos_renyi( nodes = 100,
                             p     = 0.03
                             )  # Erdos-Renyi ( random ) topology
or
set a Clustered Barabási-Albert, with ageing vertices graph topology:
topo = topology.clustered_ba( m0    = 3,
                              m     = 3,
                              p     = 0.5,
                              a     = 1000,
                              nodes = 0
                              )  # clustered Barabasi-Albert,
                                 # with ageing vertices topology
or:
topo = topology.watts_strogatz( nodes = 100,
                                p     = 0.1
                                )  # Watts-Strogatz ( circle + links ) topology
and finally, set it by assignment into the class-instance attribute:
archi = pg.archipelago( n        = 32,
                        algo     = a_cstrs_sa,
                        prob     = p_toy,
                        pop_size = 70
                        )        # constructs an archipelago
archi.topology = topo            # sets the topology to the above selected, pre-defined <topo>

Using BindConstraint in Clutter to constrain the size of an actor

I recently discovered constraints in Clutter, however, I can't find information on how to constrain the size of actor by proportion. For example, I want an actor to keep to 1/2 the width of another actor, say it's parent. It seems like it can only force the width to scale 100% with the source.
Clutter.BindConstraint matches the positional and/or dimensional attributes of the source actor. For fractional positioning you can use Clutter.AlignConstraint, but there is no Clutter.Constraint class that allows you to set a fractional dimensional attribute. You can implement your own ClutterConstraint that does so by subclassing Clutter.Constraint and overriding the Clutter.Constraint.do_update_allocation() virtual function, which gets passed the allocation of the actor that should be modified by the constraint. Something similar to this (untested) code should work:
class MyConstraint(Clutter.Constraint):
    def __init__(self, source, width_fraction=1.0, height_fraction=1.0):
        Clutter.Constraint.__init__(self)
        self._source = source
        self._widthf = width_fraction
        self._heightf = height_fraction

    def do_update_allocation(self, actor, allocation):
        source_alloc = self._source.get_allocation()
        width = source_alloc.get_width() * self._widthf
        height = source_alloc.get_height() * self._heightf
        allocation.x2 = allocation.x1 + width
        allocation.y2 = allocation.y1 + height
This should illustrate the mechanism used by Clutter.Constraint to modify the allocation of an actor.
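For instance (equally untested, and assuming you already have a parent and a child actor at hand), the constraint would be attached like any other ClutterConstraint:

# keep 'child' at half the width and height of 'parent'
child.add_constraint(MyConstraint(parent, width_fraction=0.5, height_fraction=0.5))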

How to set an instance variable within a decorator?

I have a class which calculates salary components as shown below.
def normalize(func):
    from functools import wraps

    @wraps(func)
    def wrapper(instance, *args, **kwargs):
        allowanceToCheck = func(instance)
        if instance.remainingAmount <= 0:
            allowanceToCheck = 0.0
        elif allowanceToCheck > instance.remainingAmount:
            allowanceToCheck = instance.remainingAmount
        instance.remainingAmount = instance.remainingAmount - allowanceToCheck
        return allowanceToCheck
    return wrapper

class SalaryBreakUpRule(object):
    grossPay = 0.0
    remainingAmount = 0.0

    @property
    def basic(self):
        # calculates the basic pay according to predefined salary slabs.
        basic = 6600  # Defaulting to 6600 for now.
        self.remainingAmount = self.grossPay - basic
        return basic

    @property
    @normalize
    def dearnessAllowance(self):
        return self.basic * 0.2

    @property
    @normalize
    def houseRentAllowance(self):
        return self.basic * 0.4

    def calculateBreakUps(self, value=0.0):
        self.grossPay = value
        return {
            'basic': self.basic,
            'da': self.dearnessAllowance,
            'hra': self.houseRentAllowance
        }
Before calculating each allowance, I need to check that the total of all allowances does not exceed the grossPay, i.e. my total salary. I have written a decorator which wraps each allowance-calculating method and performs the above check. For example:
* an employee having a salary of Rs.6700
* basic = 6,600 (according to slab)
* dearnessAllowance = 100 (cos 20% of basic is more than remaining amount)
* houseRentAllowance = 0.0 (cos 40% of basic is more than remaining amount)
But unfortunately it did not work. The first allowance is calculated correctly, but all the other allowances are given the same value as the first allowance, i.e. houseRentAllowance ends up with 100 instead of 0.0 as shown above.
The problem I have found is that the line of code
instance.remainingAmount = instance.remainingAmount - allowanceToCheck
in the decorator, where I am trying to set a variable on the class, does not work as expected.
Is there any way I can fix this issue?
You've made SalaryBreakUpRule.basic into a property, and its getter has side effects! So every time your other functions reference self.basic, it recalculates and resets self.remainingAmount to its original value, self.grossPay - basic.
Properties with this kind of side effect are bad design. I hope you see why now. Even after you fix this, accessing your other properties in a different order will give you different results. Property accessors should not have durable side effects. More generally: setters must set, getters must get. Properties look like simple variables, so they must behave accordingly, or you'll never be able to understand or debug your code again.
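One possible way out, sketched here only as an illustration (it reuses the question's normalize decorator and moves the slab calculation into calculateBreakUps so it runs exactly once per call):

class SalaryBreakUpRule(object):
    grossPay = 0.0
    remainingAmount = 0.0
    basic = 0.0

    @property
    @normalize
    def dearnessAllowance(self):
        return self.basic * 0.2

    @property
    @normalize
    def houseRentAllowance(self):
        return self.basic * 0.4

    def calculateBreakUps(self, value=0.0):
        self.grossPay = value
        self.basic = 6600  # slab lookup goes here, done exactly once per call
        self.remainingAmount = self.grossPay - self.basic
        return {
            'basic': self.basic,
            'da': self.dearnessAllowance,
            'hra': self.houseRentAllowance
        }

With basic now a plain attribute, reading it has no side effects, and the decorated allowances consume remainingAmount in the order they are evaluated inside the returned dict.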

MapReduce on more than one datastore kind in Google App Engine

I just watched Batch data processing with App Engine session of Google I/O 2010, read some parts of MapReduce article from Google Research and now I am thinking to use MapReduce on Google App Engine to implement a recommender system in Python.
I prefer using appengine-mapreduce instead of Task Queue API because the former offers easy iteration over all instances of some kind, automatic batching, automatic task chaining, etc. The problem is: my recommender system needs to calculate correlation between instances of two different Models, i.e., instances of two distinct kinds.
Example:
I have these two Models: User and Item. Each one has a list of tags as an attribute. Below are the functions to calculate correlation between users and items. Note that calculateCorrelation should be called for every combination of users and items:
def calculateCorrelation(user, item):
    return calculateCorrelationAverage(user.tags, item.tags)

def calculateCorrelationAverage(tags1, tags2):
    correlationSum = 0.0
    for (tag1, tag2) in allCombinations(tags1, tags2):
        correlationSum += correlation(tag1, tag2)
    return correlationSum / (len(tags1) + len(tags2))

def allCombinations(list1, list2):
    combinations = []
    for x in list1:
        for y in list2:
            combinations.append((x, y))
    return combinations
But that calculateCorrelation is not a valid mapper in appengine-mapreduce, and maybe this function is not even compatible with the MapReduce computation concept. Yet, I need to be sure... it would be really great to keep those appengine-mapreduce advantages like automatic batching and task chaining.
Is there any solution for that?
Should I define my own InputReader? Is a new InputReader that reads all instances of two different kinds compatible with the current appengine-mapreduce implementation?
Or should I try the following?
1. Combine all keys of all entities of these two kinds, two by two, into instances of a new Model (possibly using MapReduce).
2. Iterate using mappers over instances of this new Model.
3. For each instance, use the keys inside it to get the two entities of different kinds and calculate the correlation between them.
Following Nick Johnson's suggestion, I wrote my own InputReader. This reader fetches entities of two different kinds and yields tuples with all combinations of these entities. Here it is:
class TwoKindsInputReader(InputReader):
    _APP_PARAM = "_app"
    _KIND1_PARAM = "kind1"
    _KIND2_PARAM = "kind2"
    MAPPER_PARAMS = "mapper_params"

    def __init__(self, reader1, reader2):
        self._reader1 = reader1
        self._reader2 = reader2

    def __iter__(self):
        for u in self._reader1:
            for e in self._reader2:
                yield (u, e)

    @classmethod
    def from_json(cls, input_shard_state):
        reader1 = DatastoreInputReader.from_json(input_shard_state[cls._KIND1_PARAM])
        reader2 = DatastoreInputReader.from_json(input_shard_state[cls._KIND2_PARAM])
        return cls(reader1, reader2)

    def to_json(self):
        json_dict = {}
        json_dict[self._KIND1_PARAM] = self._reader1.to_json()
        json_dict[self._KIND2_PARAM] = self._reader2.to_json()
        return json_dict

    @classmethod
    def split_input(cls, mapper_spec):
        params = mapper_spec.params
        app = params.get(cls._APP_PARAM)
        kind1 = params.get(cls._KIND1_PARAM)
        kind2 = params.get(cls._KIND2_PARAM)
        shard_count = mapper_spec.shard_count
        shard_count_sqrt = int(math.sqrt(shard_count))
        splitted1 = DatastoreInputReader._split_input_from_params(app, kind1, params, shard_count_sqrt)
        splitted2 = DatastoreInputReader._split_input_from_params(app, kind2, params, shard_count_sqrt)
        inputs = []
        for u in splitted1:
            for e in splitted2:
                inputs.append(TwoKindsInputReader(u, e))
        # mapper_spec.shard_count = len(inputs)  # uncomment this in case of "Incorrect number of shard states" (at line 408 in handlers.py)
        return inputs

    @classmethod
    def validate(cls, mapper_spec):
        return True  # TODO
This code should be used when you need to process all combinations of entities of two kinds. You can also generalize this for more than two kinds.
Here is a valid mapreduce.yaml for TwoKindsInputReader:
mapreduce:
- name: recommendationMapReduce
  mapper:
    input_reader: customInputReaders.TwoKindsInputReader
    handler: recommendation.calculateCorrelationHandler
    params:
    - name: kind1
      default: kinds.User
    - name: kind2
      default: kinds.Item
    - name: shard_count
      default: 16
It's difficult to know what to recommend without more details of what you're actually calculating. One simple option is to simply fetch the related entity inside the map call - there's nothing preventing you from doing datastore operations there.
This will result in a lot of small calls, though. Writing a custom InputReader, as you suggest, will allow you to fetch both sets of entities in parallel, which will significantly improve performance.
If you give more details as to how you need to join these entities, we may be able to provide more concrete suggestions.
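Purely as an illustration of the first option (fetching inside the map call), a mapper along these lines might do the job; it assumes the classic db models User and Item from the question and the calculateCorrelation helper above, and it leaves out how the resulting scores are stored:

def calculate_correlations_map(user):
    # 'user' is a single User entity handed to the mapper by the DatastoreInputReader
    for item in Item.all():  # datastore query issued inside the map call
        score = calculateCorrelation(user, item)
        # persist or emit 'score' here, e.g. save a Correlation entity keyed on (user, item)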

Class or Metaclass Design for Astrodynamics Engine

Gurus out there:
The differential equations for modeling spacecraft motion can be described in terms of a collection of acceleration terms:
d2r/dt2 = a0 + a1 + a2 + ... + an
Normally a0 is the point mass acceleration due to a body (a0 = -mu * r/r^3); the "higher order" terms can be due to other planets, solar radiation pressure, thrust, etc.
I'm implementing a collection of algorithms meant to work on this sort of system. I will start with Python for design and prototyping, then I will move on to C++ or Fortran 95.
I want to design a class (or metaclass) which will allow me to specify the different acceleration terms for a given instance, something along the lines of:
# please notice this is meant as "pseudo-code"
def some_acceleration(t):
    return (1*t, 2*t, 3*t)

def some_other_acceleration(t):
    return (4*t, 5*t, 6*t)

S = Spacecraft()
S.Acceleration += some_acceleration + some_other_acceleration
In this case, the instance S would default to, say, two acceleration terms, and I would add the other two terms I want, some_acceleration and some_other_acceleration; they return a vector (here represented as a triplet). Notice that in my "implementation" I've overloaded the + operator.
This way the algorithms will be designed for an abstract "spacecraft" and all the actual force fields will be provided on a case-by-case basis, allowing me to work with simplified models, compare modeling methodologies, etc.
How would you implement a class or metaclass for handling this?
I apologize for the rather verbose and not clearly explained question, but it is a bit fuzzy in my brain.
Thanks.
PyDSTool lets you build up "components" that have spatial or physical meaning, and have mathematical expressions associated with them, into bigger components that know how to sum things up etc. The result is a way to specify differential equations in a modular fashion using symbolic tools, and then PyDSTool will create C code automatically to simulate the system using fast integrators. There's no need to see python only as a slow prototyping step before doing the "heavy lifting" in C and Fortran. PyDSTool already moves everything, including the resulting vector field you defined, down to the C level once you've fully specified the problem.
Your example for second-order DEs is very similar to the "current balance" first-order equation for the potential difference across a biological cell membrane that contains multiple types of ion channel. By Kirchhoff's current law, the rate of change of the p.d. is
dV/dt = 1/memb_capacitance * (sum of currents)
The example in PyDSTool/tests/ModelSpec_tutorial_HH.py is one of several bundled in the package that builds a model for a membrane from modular specification components (the ModelSpec class from which you inherit to create your own - such as "point_mass" in a physics environment), and uses a macro-like specification to define the final DE as the sum of whatever currents are present in the "ion channel" components added to the "membrane" component. The summing is defined in the makeSoma function, simply using a statement such as 'for(channel,current,+)/C', which you can just use directly in your code.
Hope this helps. If it does, feel free to ask me (the author of PyDSTool) through the help forum on sourceforge if you need more help getting it started.
For those who would like to avoid numpy and do this in pure Python, this may give you a few good ideas. I'm sure there are disadvantages and flaws to this little sketch as well. The "operator" module speeds up your math calculations because the operations are done with C functions:
from operator import sub, add, iadd, mul
import copy

class Acceleration(object):
    def __init__(self, x, y, z):
        super(Acceleration, self).__init__()
        self.accel = [x, y, z]
        self.dimensions = len(self.accel)

    @property
    def x(self):
        return self.accel[0]

    @x.setter
    def x(self, val):
        self.accel[0] = val

    @property
    def y(self):
        return self.accel[1]

    @y.setter
    def y(self, val):
        self.accel[1] = val

    @property
    def z(self):
        return self.accel[2]

    @z.setter
    def z(self, val):
        self.accel[2] = val

    def __iadd__(self, other):
        for x in xrange(self.dimensions):
            self.accel[x] = iadd(self.accel[x], other.accel[x])
        return self

    def __add__(self, other):
        newAccel = copy.deepcopy(self)
        newAccel += other
        return newAccel

    def __str__(self):
        return "Acceleration(%s, %s, %s)" % (self.accel[0], self.accel[1], self.accel[2])

    def getVelocity(self, deltaTime):
        return Velocity(mul(self.accel[0], deltaTime), mul(self.accel[1], deltaTime), mul(self.accel[2], deltaTime))


class Velocity(object):
    def __init__(self, x, y, z):
        super(Velocity, self).__init__()
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return "Velocity(%s, %s, %s)" % (self.x, self.y, self.z)


if __name__ == "__main__":
    accel = Acceleration(1.1234, 2.1234, 3.1234)
    accel += Acceleration(1, 1, 1)
    print accel

    accels = []
    for x in xrange(10):
        accel += Acceleration(1.1234, 2.1234, 3.1234)

    vel = accel.getVelocity(2)
    print "Velocity of object with acceleration %s after one second:" % (accel)
    print vel
prints the following:
Acceleration(2.1234, 3.1234, 4.1234)
Velocity of object with acceleration
Acceleration(13.3574, 24.3574, 35.3574) after one second:
Velocity(26.7148, 48.7148, 70.7148)
You can get fancy for faster calculations:
def getFancyVelocity(self, deltaTime):
    from itertools import repeat
    x, y, z = map(mul, self.accel, repeat(deltaTime, self.dimensions))
    return Velocity(x, y, z)
Are you asking how to store an arbitrary number of acceleration sources for a spacecraft class?
Can't you just use an array of functions? (Function pointers when you get to c++)
i.e:
# pseudo Python
class Spacecraft:
    def __init__(self):
        self.terms = []

    def accelerate(self, t):
        a = (0, 0, 0)
        for func in self.terms:
            # component-wise sum of this term's contribution
            a = tuple(ai + fi for ai, fi in zip(a, func(t)))
        return a

s = Spacecraft()
s.terms.append(some_acceleration)
s.terms.append(some_other_acceleration)
ac = s.accelerate(t)
I would instead use some library which can work with vectors (in Python, try numpy) and represent the acceleration as a vector. Then you are not reinventing the wheel and the + operator works as you wanted. Please correct me if I misunderstood your problem.
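A rough sketch of that idea (the Spacecraft class below and the way the terms are summed are my own assumptions for illustration, not part of the answer):

import numpy as np

def some_acceleration(t):
    return np.array([1.0, 2.0, 3.0]) * t

def some_other_acceleration(t):
    return np.array([4.0, 5.0, 6.0]) * t

class Spacecraft(object):
    def __init__(self, *terms):
        self.terms = list(terms)  # each term is a callable t -> 3-vector

    def acceleration(self, t):
        # numpy's + already adds vectors component-wise
        return sum((term(t) for term in self.terms), np.zeros(3))

s = Spacecraft(some_acceleration, some_other_acceleration)
print(s.acceleration(2.0))  # -> [10. 14. 18.]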
