Class or Metaclass Design for Astrodynamics Engine - python

Gurus out there:
The differential equations for modeling spacecraft motion can be described in terms of a collection of acceleration terms:
d2r/dt2 = a0 + a1 + a2 + ... + an
Normally a0 is the point mass acceleration due to a body (a0 = -mu * r/r^3); the "higher order" terms can be due to other planets, solar radiation pressure, thrust, etc.
I'm implementing a collection of algorithms meant to work on this sort of system. I will start with Python for design and prototyping, then I will move on to C++ or Fortran 95.
I want to design a class (or metaclass) which will allow me to specify the different acceleration terms for a given instance, something along the lines of:
# please notice this is meant as "pseudo-code"
def some_acceleration(t):
    return (1*t, 2*t, 3*t)

def some_other_acceleration(t):
    return (4*t, 5*t, 6*t)

S = Spacecraft()
S.Acceleration += some_acceleration + some_other_acceleration
In this case, the instance S would default to, say, two acceleration terms, and I would add the two other terms I want: some_acceleration and some_other_acceleration; each returns a vector (here represented as a triplet). Notice that in my "implementation" I've overloaded the + operator.
This way the algorithms will be designed for an abstract "spacecraft" and all the actual force fields will be provided on a case-by-case basis, allowing me to work with simplified models, compare modeling methodologies, etc.
How would you implement a class or metaclass for handling this?
I apologize for the rather verbose and not clearly explained question, but it is a bit fuzzy in my brain.
Thanks.

PyDSTool lets you build up "components" that have spatial or physical meaning, and have mathematical expressions associated with them, into bigger components that know how to sum things up etc. The result is a way to specify differential equations in a modular fashion using symbolic tools, and then PyDSTool will create C code automatically to simulate the system using fast integrators. There's no need to see python only as a slow prototyping step before doing the "heavy lifting" in C and Fortran. PyDSTool already moves everything, including the resulting vector field you defined, down to the C level once you've fully specified the problem.
Your example for second-order DEs is very similar to the "current balance" first-order equation for the potential difference across a biological cell membrane that contains multiple types of ion channel. By Kirchhoff's current law, the rate of change of the p.d. is
dV/dt = (1/memb_capacitance) * (sum of currents)
The example in PyDSTool/tests/ModelSpec_tutorial_HH.py is one of several bundled with the package that build a model for a membrane from modular specification components (using the ModelSpec class, from which you inherit to create your own, such as a "point_mass" in a physics environment). It uses a macro-like specification to define the final DE as the sum of whatever currents are present in the "ion channel" components added to the "membrane" component. The summing is defined in the makeSoma function, simply using a statement such as 'for(channel,current,+)/C', which you can use directly in your own code.
Hope this helps. If it does, feel free to ask me (the author of PyDSTool) through the help forum on sourceforge if you need more help getting it started.

For those who would like to avoid numpy and do this in pure Python, this may give you a few good ideas. I'm sure there are disadvantages and flaws to this little sketch as well. The operator module speeds up your math calculations, as they are done with C functions:
from operator import iadd, mul
import copy

class Acceleration(object):
    def __init__(self, x, y, z):
        super(Acceleration, self).__init__()
        self.accel = [x, y, z]
        self.dimensions = len(self.accel)

    @property
    def x(self):
        return self.accel[0]

    @x.setter
    def x(self, val):
        self.accel[0] = val

    @property
    def y(self):
        return self.accel[1]

    @y.setter
    def y(self, val):
        self.accel[1] = val

    @property
    def z(self):
        return self.accel[2]

    @z.setter
    def z(self, val):
        self.accel[2] = val

    def __iadd__(self, other):
        for i in range(self.dimensions):
            self.accel[i] = iadd(self.accel[i], other.accel[i])
        return self

    def __add__(self, other):
        newAccel = copy.deepcopy(self)
        newAccel += other
        return newAccel

    def __str__(self):
        return "Acceleration(%s, %s, %s)" % tuple(self.accel)

    def getVelocity(self, deltaTime):
        return Velocity(mul(self.accel[0], deltaTime),
                        mul(self.accel[1], deltaTime),
                        mul(self.accel[2], deltaTime))
class Velocity(object):
    def __init__(self, x, y, z):
        super(Velocity, self).__init__()
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return "Velocity(%s, %s, %s)" % (self.x, self.y, self.z)
if __name__ == "__main__":
    accel = Acceleration(1.1234, 2.1234, 3.1234)
    accel += Acceleration(1, 1, 1)
    print(accel)
    for _ in range(10):
        accel += Acceleration(1.1234, 2.1234, 3.1234)
    vel = accel.getVelocity(2)
    print("Velocity of object with acceleration %s after two seconds:" % accel)
    print(vel)
prints the following:
Acceleration(2.1234, 3.1234, 4.1234)
Velocity of object with acceleration Acceleration(13.3574, 24.3574, 35.3574) after two seconds:
Velocity(26.7148, 48.7148, 70.7148)
You can get fancy for faster calculations:
def getFancyVelocity(self, deltaTime):
    from itertools import repeat
    x, y, z = map(mul, self.accel, repeat(deltaTime, self.dimensions))
    return Velocity(x, y, z)

Are you asking how to store an arbitrary number of acceleration sources for a spacecraft class?
Can't you just use an array of functions? (Function pointers when you get to C++.)
i.e.:
# pseudo Python
class Spacecraft:
    terms = []

    def accelerate(self, t):
        a = (0, 0, 0)
        for func in self.terms:
            a += func(t)  # element-wise, with a real vector type
        return a

s = Spacecraft()
s.terms.append(some_acceleration)
s.terms.append(some_other_acceleration)
ac = s.accelerate(t)
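To make the idea above concrete, here is a runnable sketch (the class and function names are illustrative, not an established API). Note that + on plain tuples concatenates rather than adds, so the summing has to be done element-wise:

```python
class Spacecraft:
    def __init__(self, *terms):
        # each term is a function t -> (ax, ay, az)
        self.terms = list(terms)

    def accelerate(self, t):
        # sum all acceleration terms element-wise
        total = (0.0, 0.0, 0.0)
        for func in self.terms:
            total = tuple(a + b for a, b in zip(total, func(t)))
        return total

def some_acceleration(t):
    return (1*t, 2*t, 3*t)

def some_other_acceleration(t):
    return (4*t, 5*t, 6*t)

s = Spacecraft(some_acceleration)
s.terms.append(some_other_acceleration)
print(s.accelerate(2.0))  # (10.0, 14.0, 18.0)
```

The same list-of-terms structure maps directly onto an array of function pointers (or std::function objects) once you port to C++.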

I would instead use some library that can work with vectors (in Python, try numpy) and represent each acceleration as a vector. Then you are not reinventing the wheel, and the + operator works the way you wanted. Please correct me if I misunderstood your problem.
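For example, with numpy arrays the element-wise + comes for free (the term functions here are just the ones from the question):

```python
import numpy as np

def some_acceleration(t):
    return np.array([1*t, 2*t, 3*t])

def some_other_acceleration(t):
    return np.array([4*t, 5*t, 6*t])

terms = [some_acceleration, some_other_acceleration]

def total_acceleration(t):
    # numpy's + is already element-wise, so summing terms is direct
    return sum(f(t) for f in terms)

print(total_acceleration(2.0))  # [10. 14. 18.]
```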

Related

Read a CSV and interpolate its data

I have a code in which I need the solar radiance. To calculate it I use the Planck function, which I have defined as:
def planck_W(self, x, t):
    return (2*self.h*self.c**2/x**5) / (np.exp(self.h*self.c / (self.k*x*t)) - 1)
Where x is the wavelength and t is the temperature, and the output is the radiance value.
However I want to be able to also use values coming from a CSV with different solar spectra, which already provides radiance values for different wavelengths. The structure of the CSV is: name of the spectrum used, _xx (wavelength), _yy (radiance).
self.spectra_list = pd.read_csv('solar.csv', converters={'_xx': literal_eval, '_yy': literal_eval})

def planck_W(self):
    self.spectra_list = pd.read_csv('solar.csv', converters={'_xx': literal_eval, '_yy': literal_eval})
    return interp1d(np.array(self._xx)*1e-9,
                    np.array(self._yy),
                    kind='linear')
Later I need to use this curve for a calculation over a wavelength range given by another variable; it starts with:
n0 = simpson((planck_W(s.wavelength)...
and I get the error:
planck_W() takes 1 positional argument but 2 were given
I'm kinda new to programming and don't have much idea what I'm doing, how do I manage to make it take the CSV values?
def planck_W(self):
This is the function signature, which expects only self. But when the function is called, s.wavelength is supplied as well.
That causes the error takes 1 positional argument but 2 were given.
self is not an argument you supply explicitly. Since you have not pasted the whole code, I'm just guessing that the planck_W() function is part of a bigger picture that does indeed need a class.
If that is the case, your function call should look like this:
n0 = simpson(planck_W())
The wavelength value you put in your call is not used anywhere.
Otherwise, if you need that value, you have to add a wavelength parameter like this:
def planck_W(self, wavelength):
and then use it inside the function.
I hope this explains the situation.
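Putting the two fixes together, one way to structure this is to build the interpolator once and evaluate it at the requested wavelengths. This is a sketch: the class name and the synthetic numbers below stand in for the CSV's _xx and _yy columns, which the question does not show in full.

```python
import numpy as np
from scipy.interpolate import interp1d

class Spectrum:
    def __init__(self, wavelengths_nm, radiances):
        # build the interpolator once, not on every call
        self._interp = interp1d(np.asarray(wavelengths_nm) * 1e-9,
                                np.asarray(radiances),
                                kind='linear')

    def planck_W(self, wavelength_m):
        # evaluate the tabulated spectrum at the requested wavelength(s)
        return self._interp(wavelength_m)

# synthetic stand-in for the CSV data
spec = Spectrum([400, 500, 600], [1.0, 2.0, 3.0])
print(spec.planck_W(450e-9))  # 1.5, halfway between the tabulated points
```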
You should understand the difference between a function and a method.
Function
A function is a piece of code that may or may not take arguments/keywords and performs some actions.
The main reason for a function to exist is to avoid repeating the same code.
Example:
Calculate the sine of a given angle (in radians). See: https://en.wikipedia.org/wiki/Sine#Series_definition
angle = 3.1416
factorial3 = 1
for i in range(1, 4):
    factorial3 *= i
factorial5 = 1
for i in range(1, 6):
    factorial5 *= i
factorial7 = 1
for i in range(1, 8):
    factorial7 *= i
sine = angle - ((angle * angle * angle) / factorial3) + ((angle * angle * angle * angle * angle) / factorial5) - \
       ((angle * angle * angle * angle * angle * angle * angle) / factorial7)
We can see the repeating pattern here.
The factorial and power are used multiple times. Those can be functions:
def factorial(n):
    result = 1
    for i in range(1, n+1):
        result *= i
    return result

def power(base, pow):
    result = 1
    for _ in range(pow):
        result *= base
    return result

sine2 = angle - (power(angle, 3) / factorial(3)) + (power(angle, 5) / factorial(5)) - (power(angle, 7) / factorial(7))
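As a quick sanity check, the four-term series above lands within roughly 0.08 of math.sin at this angle (it is not exact, since the series is truncated after the x^7 term):

```python
import math

def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def power(base, exponent):
    result = 1
    for _ in range(exponent):
        result *= base
    return result

angle = 3.1416
sine2 = angle - (power(angle, 3) / factorial(3)) + (power(angle, 5) / factorial(5)) - (power(angle, 7) / factorial(7))
print(abs(sine2 - math.sin(angle)))  # truncation error of about 0.075 with only four terms
```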
Method
A method is a function of a class.
We can create an object from a class and ask questions to the object (using method and attributes) and get results.
When you are calling a method of a class you use name spacing as such:
object_from_class = TheClass()
object_from_class.the_method(*possible_args)
Now you are referring to the_method of object_from_class.
The question is what if you want to refer to a method of the object inside the object.
Let's say we have a class which does trigonometric operations. We can have a sin, cos, tan etc.
You see that when we are trying to calculate tan, we do not need a whole new method; we can use sin/cos. Now I need to refer to the methods inside the object.
class Trigonometry:
    def sin(self, angle):
        return some_thing

    def cos(self, angle):
        return some_thing_else

    def tan(self, angle):
        return self.sin(angle) / self.cos(angle)
self.sin is equivalent of object_from_class.the_method when you are inside the object.
Note: some languages, such as JavaScript, use this instead of self.
Now, in your case, you are calling a method of an object, def planck_W(self):, which takes no arguments (technically it takes one argument, but you should provide none) and passing some anyway. That's why Python is complaining about the number of arguments.

How to specify derivative for SymPy custom function?

I am using sympy to help automate the process of finding equations of motion for some systems using the Euler Lagrange method. What would really make this easy is if I could define a function q and specify its time derivative qd --> d/dt(q) = qd. Likewise I'd like to specify d/dt(qd) = qdd. This is helpful because as part of the process of finding the equations of motion, I need to take derivatives (with respect to time, q, and qd) of expressions that are functions of q and qd. In the end I'll end up with an equation in terms of q, qd, and qdd and I'd like to be able to either print this neatly or use lambdify to convert this to a neat numpy function for use in a simulation.
Currently I've accomplished this in a very roundabout and annoying way, by defining q as a function and qd as the derivative of that function:
q = sympy.Function('q', real=True)(t)
q_diff = diff(q,t)
This is fine for most of the process but then I end up with a messy expression filled with "Derivative (q, t)" and "Derivative (Derivative(q, t), t)" which is hard to wrangle into a neat printing format and difficult to turn into a numpy function using lambdify. My current solution is thus to use the subs function and replace q_diff and diff(q_diff, t) with sympy symbols qd and qdd respectively, which cleans things up and makes manipulating the expression much easier. This seems like a bad hack though and takes tons of time to do for more complicated equations with lots of state variables.
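The subs workaround described above can at least be scripted so it is less painful with many state variables. A small sketch (the symbol names qd and qdd are illustrative; the key point is substituting the highest derivative first):

```python
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q', real=True)(t)
qs, qd, qdd = sp.symbols('q qd qdd')

# a messy expression full of Derivative(q, t) terms
expr = sp.diff(q**2, t, 2)

# replace the highest derivative first, then the lower one, then the function itself
clean = (expr.subs(sp.Derivative(q, (t, 2)), qdd)
             .subs(sp.Derivative(q, t), qd)
             .subs(q, qs))
print(clean)  # 2*q*qdd + 2*qd**2 (up to term ordering)
```

After this substitution the expression contains only plain Symbols, so lambdify works on it directly.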
What I'd like is to define a function, q, with a specific value for the time derivative. I'd pass in that value when creating the function, and sympy could treat it like a generic Function object but would use whatever I'd given it for the time derivative instead of just saying "Derivative(q, t)". I'd like something like this:
qdd = sympy.symbols('qdd')
qd = my_func(name='qd', time_deriv=qdd)
q = my_func(name='q', time_deriv=qd)
diff(q, t)
>>> qd
diff(q**2, t)
>>> 2*q*qd
diff(diff(q**2, t))
>>> 2*q*qdd + 2*qd**2
expr = q*qd**2
expr.subs(q, 5)
>>> 5*qd**2
Something like that, where I could still use the subs command and lambdify command to substitute numeric values for q and qd, would be extremely helpful. I've been trying to do this but I don't understand enough of how the base sympy.Function class works to get this going. This is what I have right now:
class func(sp.Function):
    def __init__(self, name, deriv):
        self.deriv = deriv
        self.name = name

    def diff(self, *args, **kwargs):
        return self.deriv

    def fdiff(self, argindex=1):
        assert argindex == 1
        return self.deriv
This code so far does not really work; I don't know how to specify that the time derivative of q, specifically, is qd. Right now all derivatives of q are returning q.
I don't know if this is just a really bad solution, if I should be avoiding this issue entirely, or if there's already a clean way to solve this. Any advice would be very appreciated.

Optimize Variable From A Function In Python

I'm used to using Excel for this kind of problem but I'm trying my hand at Python for now.
Basically I have two sets of arrays, one constant, and the other's values come from a user-defined function.
This is the function, simple enough.
import scipy.stats as sp

def calculate_probability(spread, std_dev):
    return sp.norm.sf(0.5, spread, std_dev)
I have two arrays of data, one with entries that run through the calculate_probability function (these are the spreads), and the other a set of constants called expected_probabilities.
spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
The below function is what I am seeking to optimise.
import numpy as np

def calculate_mse(std_dev):
    spread_inputs = np.array(spreads)
    model_probabilities = calculate_probability(spread_inputs, std_dev)
    subtracted_vector = np.subtract(model_probabilities, expected_probabilities)
    vector_powered = np.power(subtracted_vector, 2)
    mse_sum = np.sum(vector_powered)
    return mse_sum / len(spreads)
I would like to find a value of std_dev such that function calculate_mse returns as close to zero as possible. This is very easy in Excel using solver but I am not sure how to do it in Python. What is the best way?
EDIT: I've changed my calculate_mse function so that it only takes a standard deviation as a parameter to be optimised. I've tried to return Andrew's answer in an API format using flask but I've run into some issues:
class Minimize(Resource):
    std_dev_guess = 12.0  # might have a better guess than zeros
    result = minimize(calculate_mse, std_dev_guess)

    def get(self):
        return {'data': result}, 200

api.add_resource(Minimize, '/minimize')
This is the error:
NameError: name 'result' is not defined
I guess something is wrong with the input?
I'd suggest using scipy's optimization library. From there you have a couple of options; the easiest from your current setup would be to just use the minimize function. minimize itself supports a large number of methods, from the default quasi-Newton BFGS to the Nelder-Mead simplex and COBYLA.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
from scipy.optimize import minimize

std_dev_guess = 12.0  # scalar initial guess; pick something physically plausible
result = minimize(calculate_mse, std_dev_guess)
Give it a shot and if you have extra questions I can edit the answer and elaborate as needed.
Here are a couple of suggestions to clean up your code.
class Minimize(Resource):
    def __init__(self, expected_probabilities, spreads, std_dev_guess):
        self.std_dev_guess = std_dev_guess
        self.spreads = spreads
        self.expected_probabilities = expected_probabilities
        self.result = None

    def _calculate_probability(self, spread, std_dev):
        return sp.norm.sf(0.5, spread, scale=std_dev)

    def _calculate_mse(self, std_dev):
        spread_inputs = np.array(self.spreads)
        model_probabilities = self._calculate_probability(spread_inputs, std_dev)
        mse = np.sum((model_probabilities - self.expected_probabilities)**2) / len(spread_inputs)
        print(mse)
        return mse

    def solve(self):
        self.result = minimize(self._calculate_mse, self.std_dev_guess, method='BFGS')

    def get(self):
        return {'data': self.result}, 200

# run something like
spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
minimizer = Minimize(expected_probabilities, spreads, 10.)
print(minimizer.get())  # result is None since solve() hasn't been run yet; up to you how to handle this
minimizer.solve()
print(minimizer.get())

Linking container class properties to contained class properties

I'm working on a simulation of a cluster of solar panels (a system/container). The properties of this cluster are linked almost one-to-one to the properties of its elements, the panels (subsystem/contained), via the number of elements per cluster. E.g. the energy production of the cluster is simply the number of panels in the cluster times the production of a single panel. Same for the cost, weight, etc. My question is how to link the container class to the contained class.
Let me illustrate with a naive example approach:
class PanelA(BasePanel):
    ... _x, _y, _area, _production, etc.

    @property
    def production(self):
        # J/panel/s
        return self._area * self._efficiency

    ... and 20 similar properties

    @property
    def _technical_detail

class PanelB(BasePanel):
    ... similar

class PanelCluster():
    ....
    self.panel = PanelA()
    self.density = 100  # panels/ha

    @property
    def production(self):
        # J/ha/h
        uc = 60*60  # unit conversion
        rho = self.density
        production_single_panel = self.panel.production
        return uc*rho*production_single_panel

    ... and e.g. 20 similar constructions
Note that in this naive approach one would write some 20 such methods, which does not seem in line with the DRY principle.
What would be a better alternative? (Ab)use __getattr__?
For example?
class Panel():
    unit = {'production': 'J/panel/s'}

class PanelCluster():
    panel = Panel()

    def __getattr__(self, key):
        if self.panel.__hasattr__(key):
            panel_unit = self.panel.unit[key]
            if '/panel/s' in panel_unit:
                uc = 60*60  # unit conversion
                rho = self.density
                value_per_panel = getattr(self.panel, key)
                return uc*rho*value_per_panel
            else:
                return getattr(self.panel, key)
This already seems more 'programmatic' but might be naive -- again. So I wonder what are the options and the pros/cons thereof?
There are a number of Python issues with your code, e.g.:
yield means something specific in Python, probably not a good identifier
and it's spelled:
hasattr(self.panel.unit, key)
getattr(self.panel, key)
That aside, you're probably looking for a solution involving inheritance. Perhaps both Panel and PanelCluster need to inherit from a PanelFunction class?
class PanelFunctions(object):
    @property
    def Yield(...):
        ...

class Panel(PanelFunctions):
    ..

class PanelCluster(PanelFunctions):
    ..
I would leave the properties as separate definitions, since you'll need to write unit tests for all of them (and it will be much easier to determine coverage that way).
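For illustration, here is a fleshed-out sketch of that mixin idea. The class names, attributes, and numbers are assumptions for the example, not the asker's real code: each per-panel quantity is defined once in the shared base, and the cluster only changes a single scale factor.

```python
class PanelFunctions:
    # quantities defined once; `scale` converts per-panel to per-cluster units
    @property
    def production(self):
        return self._area * self._efficiency * self.scale

class Panel(PanelFunctions):
    def __init__(self, area, efficiency):
        self._area = area
        self._efficiency = efficiency
        self.scale = 1.0  # a single panel, J/panel/s

class PanelCluster(PanelFunctions):
    def __init__(self, panel, density):
        self._area = panel._area
        self._efficiency = panel._efficiency
        self.scale = density * 60 * 60  # panels/ha, and s -> h conversion

panel = Panel(area=2.0, efficiency=0.2)     # 0.4 J/panel/s
cluster = PanelCluster(panel, density=100)  # 100 panels/ha
print(panel.production)    # 0.4
print(cluster.production)  # ~144000.0 J/ha/h
```

Each of the "20 similar properties" then needs only one definition in PanelFunctions, and each property can still be unit-tested individually.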

Can homogeneous functions be composed into a calculation network?

Inside a network, a piece of information (a packet) can be passed to different nodes (hosts); by modifying its content, it can carry different meanings. The final packet depends on the hosts' input along its given route through the network.
Now I want to implement a calculating-network model that can do small jobs when given different calculation paths.
Prototype:
def a(p): return p + 1
def b(p): return p + 2
def c(p): return p + 3
def d(p): return p + 4
def e(p): return p + 5

def link(p, r):
    p1 = p
    for x in r:
        p1 = x(p1)
    return p1

p = 100
route = [a, c, d]
result = link(p, route)
# ========
target_result = 108
if result == target_result:
    # route is OK
    pass
I think what I finally need is something like this:
p with [init_payload, expected_target, passed_path, actual_calculated_result]
        |
        \/
[CHAOS of possible function networks]
        |
        \/
px [a,a,b,c,e]  # OK, this path matches the target
Here are my questions; I hope you can help:
Can p carry (determine) the route(s) by inspecting the functions and the estimated result?
(1.1) For example, suppose there is a node x() on the route:
def x(p): return x / 0  # I suppose this would still compile
Can p somehow know that this path is not good and avoid selecting it?
(1.2) Another point of confusion: if p is a self-defined class type whose payload is essentially a string, and it carries a path [a,c,d], can p know that a() requires an int and avoid selecting that node?
Same as 1.2: when generating the path, can I avoid oopses like this?
def a(p): return p + 1
def b(p): return p + 2
def x(p): return p.append(1)
def y(p): return p.append(2)
full_node_list = [a, b, x, y]
path = random(2, full_node_list)  # oops: x, y will be trouble for an int-typed p, and a, b will be trouble for a list-typed p
Please also consider the case where the path is a list of lambdas.
PS: since the whole model is not very clear in my mind, any leads and direction will be appreciated.
THANKS!
You could test each function first with a set of sample data; any function which returns consistently unusable values might then be discarded.
def isGoodFn(f):
    testData = [1, 2, 3, 8, 38, 73, 159]  # random test input
    goodEnough = 0.8 * len(testData)      # need 80% pass rate
    try:
        good = 0
        for i in testData:
            if type(f(i)) is int:
                good += 1
        return good >= goodEnough
    except Exception:
        return False
If you know nothing about what the functions do, you will have to essentially do a full breadth-first tree search with error-checking at each node to discard bad results. If you have more than a few functions this will get very large very quickly. If you can guarantee some of the functions' behavior, you might be able to greatly reduce the search space - but this would be domain-specific, requiring more exact knowledge of the problem.
If you had a heuristic measure for how far each result is from your desired result, you could do a directed search to find good answers much more quickly - but such a heuristic would depend on knowing the overall form of the functions (a distance heuristic for multiplicative functions would be very different than one for additive functions, etc).
Your functions can raise TypeError if they are not satisfied with the data types they receive. You can then catch this exception and see whether you are passing an appropriate type. You can also catch any other exception type. But trying to call the functions and catching the exceptions can be quite slow.
You could also organize your functions into different sets depending on the argument type.
functions = {list: [some functions taking a list], int: [some functions taking an int]}
...
x = choose_function(functions[type(p)])
p = x(p)
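A runnable sketch of that type-dispatch idea (choose_function and the node functions are illustrative; append is replaced with list concatenation so every node returns a value):

```python
import random

def a(p): return p + 1
def b(p): return p + 2
def x(p): return p + [1]  # list version that returns a new list
def y(p): return p + [2]

# group candidate functions by the payload type they accept
functions = {int: [a, b], list: [x, y]}

def choose_function(candidates):
    return random.choice(candidates)

p = 100
f = choose_function(functions[type(p)])
p = f(p)
print(p)  # either 101 or 102

q = [0]
g = choose_function(functions[type(q)])
print(g(q))  # either [0, 1] or [0, 2]
```

This avoids ever handing an int to a list-only node (or vice versa) without having to call the function and catch the exception.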
I'm somewhat confused as to what you're trying to do, but: p cannot "know about" the functions until it is run through them. By design, Python functions don't specify what type of data they operate on: e.g. a*5 is valid whether a is a string, a list, an integer or a float.
If there are some functions that might not be able to operate on p, then you could catch exceptions, for example in your link function:
def link(p, r):
    try:
        for x in r:
            p = x(p)
    except (ZeroDivisionError, AttributeError):  # list whatever errors you want to catch
        return None
    return p
