Uncontrollable Simulation Oscillation - python

This is likely a math problem as much as it is a programming problem, but I seem to be encountering severe oscillations in temperature in my class method "update()" when warp is set to a high value (1000+) in the code below. All temperatures are in Kelvin for simplicity.
(I am not a programmer by profession. This formatting is likely unpleasant.)
import math

#The Stefan-Boltzmann constant, otherwise known as sigma.
BOLTZMANN_CONSTANT = 5.67e-8

class GeneratorObject(object):
    """Create a new object to run the thermal simulation on."""
    def __init__(self, mass, emissivity, surfaceArea, material, temp=0, power=5000, warp=1):
        self.tK = temp                             #Temperature of the object in K.
        self.mass = mass                           #Mass of the object in kg.
        self.emissivity = emissivity               #Emissivity of the object, always between 0 and 1.
        self.surfaceArea = surfaceArea             #Emissive surface area of the object in m^2.
        self.material = material                   #Store the material name for reference.
        self.specificHeat = (0.45*1000)*self.mass  #Heat capacity in J/K (iron: 450 J/(kg*K) specific heat times mass).
        self.power = power                         #Heating power input in J/s (W).
        self.warp = warp                           #Warp multiplier, analogous to KSP's time warp.

    def update(self):
        """Update the object's temperature according to its properties."""
        #Radiative (Stefan-Boltzmann) loss minus internal heating, scaled by the warp factor.
        self.tK -= (((self.emissivity * BOLTZMANN_CONSTANT * self.surfaceArea
                      * (math.pow(self.tK, 4) - math.pow(30 + 273.15, 4))) / self.specificHeat)
                    - (self.power / self.specificHeat)) * self.warp
The law used is the Stefan-Boltzmann law for calculating black-body heat losses:
Temp -= (Emissivity*Sigma*SurfaceArea*(Temp^4 - Amb^4)) / SpecificHeat
This was ported from a KSP plugin for quicker debugging. Object.update() is called 50 times per second.
Would there be a solution to preventing these extreme oscillations that doesn't involve executing the code multiple times per step?

Your integration scheme is bad, as already hinted by @Beta and @tom10. The integration timestep is self.warp units of time, i.e. self.warp seconds, since you work with physical units. This is not the way things are done. You should first convert the equation to a dimensionless form by expressing each term in some kind of computational units; for example, the Stefan-Boltzmann constant and self.power could be measured in units in which the constant equals 1. Then you should determine the characteristic time for the object, e.g. the time it takes the temperature to get reasonably close to the equilibrium value. If there are many such objects, find the smallest of all the characteristic times and use it as the unit of time. The integration timestep should then be about an order of magnitude smaller than this characteristic time, otherwise you completely miss the correct solution of the differential equation and end up with wild oscillations.
Example of what happens now: take a 1 kg iron sphere. With a surface area of 3.05×10⁻³ m², the radiative heating/cooling coefficient is about 1.73×10⁻¹⁰ W/K⁴. With self.power equal to 5 kW, the radiated power equals the internal one when the temperature reaches about 2319 K, which is the equilibrium temperature. At low temperatures the radiative heating/cooling is negligible, and with the internal heating alone you get a temperature rate of 11.1 K/s. If warp is 1000+, your first integration step results in a temperature of 11100 K or more, which overshoots the equilibrium value almost five times over. Now the radiated energy is orders of magnitude higher than the internal heating, which leads to a huge cool-down rate; multiply by 1000+ and you end up with a negative temperature. The cycle then repeats with higher and higher absolute temperatures until you run outside the range of floating-point arithmetic.
Here is a hint for you: if self.power is kept constant, then the equation has an analytical solution. Find it (or use a tool like Maple or Mathematica to find it for you) and then plot the solution. See how your timestep of 1000+ units compares to the timescale of the solution, i.e. the time it takes for the system to reach an almost equilibrium state.
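As an illustration of keeping the step below that characteristic time (this is not the answerer's code; the attribute names simply mirror the question's class, and the 0.1 safety factor is an arbitrary choice), one could estimate a thermal time constant from the linearized radiative term and sub-divide each warp-scaled update:

import math

SIGMA = 5.67e-8
T_AMBIENT = 30 + 273.15

def update_substepped(obj):
    #Linearizing the radiative term around the current temperature gives tau = C / (4*eps*sigma*A*T^3).
    T = max(obj.tK, T_AMBIENT)
    tau = obj.specificHeat / (4 * obj.emissivity * SIGMA * obj.surfaceArea * T**3)
    #Advance by obj.warp seconds in steps that stay well below tau.
    steps = max(1, int(math.ceil(obj.warp / (0.1 * tau))))
    dt = float(obj.warp) / steps
    for _ in range(steps):
        radiated = obj.emissivity * SIGMA * obj.surfaceArea * (obj.tK**4 - T_AMBIENT**4)
        obj.tK += (obj.power - radiated) / obj.specificHeat * dt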

I guess KSP = Kerbal Space Program, so I gather this is a problem in game physics. If so, maybe an approximation with the same qualitative behavior is sufficient. Perhaps an exponential curve which starts at the initial temperature and decays to the ambient temperature is enough. Pick the decay constant by matching the heat transfer rate at the initial time.
Sometimes an approximation is good enough. I don't know if this is one of those situations.
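A minimal sketch of that idea (not the original poster's code; the attribute names mirror the question's class, k is a hand-picked decay constant, and dt is whatever simulated time elapses per call, e.g. self.warp). This variant relaxes toward the power/radiation equilibrium rather than toward ambient, and because it is an exact exponential update it cannot overshoot, no matter how large the step is:

import math

SIGMA = 5.67e-8
T_AMBIENT = 30 + 273.15

def update_exponential(obj, dt, k=0.02):
    #Equilibrium temperature where the radiated power balances the input power.
    T_eq = (obj.power / (obj.emissivity * SIGMA * obj.surfaceArea) + T_AMBIENT**4) ** 0.25
    #Exact solution of dT/dt = -k*(T - T_eq); stable for any dt.
    obj.tK = T_eq + (obj.tK - T_eq) * math.exp(-k * dt)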

Related

How to make a low pass filter for a differential equation for PID control?

So I'm trying to model a correct PID system in Python, but if I add some noise to the signal (say, from an imperfect sensor) the D term starts overreacting. I know this is quite common and people fix it using a low-pass filter, yet I have no idea how to implement one.
So first I have a simple logistic growth function for the temperature:
import numpy as np

time = np.arange(0., maxtime, dt)
for t in time:
    temperature = temperature + temperature*50*dt*(1 - temperature/150)
Now I import random and I add some noise to the signal (let's say this is caused by an imperfect real life sensor):
temperature = temperature + random.randint(-1,1)/10
I won't post the complete PID code, but say our system can only cool the temperature down, not heat it up; then I have something like this:
if (P+I+D < 0):
    temperature = temperature + dt*(P+I+D)
The problem is that the system reacts a lot to the extra noise generated, and I need a way to fix this. The way I have my D term coded is something like this (where Kd is the derivative constant):
D = ((previousTemperature - temperature) / dt)*Kd
Note that I don't use the error in the D term, because then it reacts violently to a sudden change in the desired temperature value; because of this, the derivative constant may need to be negative so it doesn't counteract the system.
My question is if there's a way to stop the derivative from overreacting to the noise.
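One common remedy (not from this thread) is to low-pass filter the raw derivative with an exponential moving average before applying Kd; the class name and the smoothing factor alpha below are illustrative choices, not an established recipe:

class FilteredDerivative:
    #Exponentially smoothed D term: alpha near 0 = heavy smoothing, alpha = 1 = no filtering.
    def __init__(self, Kd, alpha=0.1):
        self.Kd = Kd
        self.alpha = alpha
        self.filtered = 0.0

    def update(self, previousTemperature, temperature, dt):
        raw = (previousTemperature - temperature) / dt
        self.filtered = self.alpha * raw + (1 - self.alpha) * self.filtered
        return self.Kd * self.filtered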

Beam radiation tilt factor calculation

I want to calculate total solar irradiance using isotropic sky model.
My problem is calculation of Rb (beam radiation tilt factor) in which I reach some negative values which is nonsense.
Formula:
Rb = cos(angle_of_incidence) / cos(solar_zenith)
Code in Python:
Rb = np.cos(pvlib.irradiance.aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth))/np.cos(solar_zenith)
Could you help me to solve negative values in Rb?
(My reference: Solar Energy Engineering: Processes and Systems, Soteris A. Kalogirou)
The only way your fraction could be negative is if either the numerator or denominator were negative. For cosines, that only happens if the argument is greater than π/2.
When dealing with trig function problems, the most likely culprit is conversion from degrees to radians (or lack thereof). And it's spelled out in the docs:
Returns: aoi : numeric
Angle of incidence in degrees.
To fix the immediate problem:
R_b = np.cos(pvlib.irradiance.aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth) * np.pi / 180.0)/np.cos(solar_zenith)
Since you don't show the origin of solar_zenith, I can't tell you if it needs to be converted too.
And don't forget:
Input all angles in degrees.
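A hedged sketch of the full conversion (the numeric angles are placeholders, and it assumes solar_zenith is also in degrees, which the question does not confirm):

import numpy as np
import pvlib

surface_tilt, surface_azimuth = 30.0, 180.0  #degrees (placeholder values)
solar_zenith, solar_azimuth = 45.0, 160.0    #degrees (placeholder values)

aoi_deg = pvlib.irradiance.aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth)
Rb = np.cos(np.radians(aoi_deg)) / np.cos(np.radians(solar_zenith))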

PAGMO/PYGMO: Anyone understand the options for Corana’s Simulated Annealing?

I'm using the PYGMO package to solve some nasty non-linear minimization problems, and I'm very interested in using their simulated_annealing algorithm; however, it has a lot of hyper-parameters for which I don't really have any good intuition. These include:
Ts (float) – starting temperature
Tf (float) – final temperature
n_T_adj (int) – number of temperature adjustments in the annealing schedule
n_range_adj (int) – number of adjustments of the search range performed at a constant temperature
bin_size (int) – number of mutations that are used to compute the acceptance rate
start_range (float) – starting range for mutating the decision vector
Let's say I have a 4 dimensional geometric registration (homography) problem with variables and search ranges:
x1: [-10,10] (a shift in x)
x2: [10,30] (a shift in y)
x3: [-45,0] (rotation angle)
x4: [0.5,2] (scaling/magnification factor)
And the cost function for a random (bad) choice of values is 50. A good value is around zero.
I understand that Ts and Tf are for the Metropolis acceptance criterion of new solutions. That means Ts should be about the expected size of the initial changes in the cost function, and Tf small enough that no more changes are expected.
In Corana's paper, there are many hyperparameters listed that make sense: N_s is the number of evaluation cycles before changing step sizes, N_T is the number of step-size changes before changing the temperature, and r_T is the factor by which the temperature is reduced each time. However, I can't figure out how these correspond to pygmo's parameters n_T_adj, n_range_adj, bin_size, and start_range.
I'm really curious if anyone can explain how pygmo's hyperparameters are used, and how they relate to the original paper by Corana et al?
Gonna answer my own question here. I climbed into the actual .cpp code and found the answers.
In Corana's method, you select how many total iterations N of annealing you want. Then the minimization is a nested series of loops where you vary the step sizes, number of step-size adjustments, and temperature values at user-defined intervals. In PAGMO, they changed this so you explicitly specify how many times you will do these. Those are the n_* parameters and bin_size. I don't think bin_size is a good name here, because it isn't actually a size; it is the number of steps taken within a bin range, such that N = n_T_adj * n_range_adj * bin_size. I think just calling it n_bins or n_bins_adj would make more sense. Every bin_size function evaluations, the stepsize is modified (see below for limits).
In Corana's method you specify the multiplicative factor to decrease the temperature each time it is needed; it could be that you reach the minimum temp before running out of iterations, or vice versa. In PAGMO, the algorithm automatically computes the temperature-change factor so that you reach Tf at the end of the iteration sequence: r_t=(Tf/Ts)**(1/n_T_adj).
The start_range is, I think, a bad name for this variable. The stepsize in the algorithm is a fraction between 0 and start_range which defines the width of the search bins between the upper and lower bounds for each variable. So if stepsize=0.5, width=0.5*(upper_bound-lower_bound). At each iteration, the step size is adjusted based on how many function calls were accepted. If the step size grows larger than start_range, it is reset to that value. I think I would call it step_limit instead. But there you go.
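For reference, a hedged sketch of how these hyper-parameters plug into pygmo (the problem and all numeric values are illustrative only, not tuned for the registration task above):

import pygmo as pg

prob = pg.problem(pg.rosenbrock(dim=4))  #stand-in for the registration cost function
algo = pg.algorithm(pg.simulated_annealing(
    Ts=10.0,           #roughly the size of early changes in the cost
    Tf=1e-3,           #low enough that late uphill moves are almost never accepted
    n_T_adj=10,        #temperature adjustments over the whole run
    n_range_adj=10,    #search-range adjustments per temperature
    bin_size=20,       #function evaluations between step-size updates
    start_range=1.0))  #upper limit on the normalized step size
pop = pg.population(prob, size=1)
pop = algo.evolve(pop)
print(pop.champion_f)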

Code to resemble a rowing stroke

I'm working on a small program displaying moving rowing boats. The following shows a simple sample code (Python 2.x):
import time

class Boat:
    def __init__(self, pace, spm):
        self.pace = pace      #velocity of the boat in m/s
        self.spm = spm        #strokes per minute
        self.distance = 0     #distance travelled

    def move(self, deltaT):
        self.distance = self.distance + (self.pace * deltaT)

boat1 = Boat(3.33, 20)
while True:
    boat1.move(0.1)
    print boat1.distance
    time.sleep(0.1)
As you can see, a boat has a pace and rows with a number of strokes per minute. Every time the method move(deltaT) is called, it moves a certain distance according to the pace.
The above boat just travels at a constant pace which is not realistic. A real rowing boat accelerates at the beginning of a stroke and then decelerates after the rowing blades left the water. There are many graphs online which show a typical rowing curve (force shown here, velocity looks similar):
Source: highperformancerowing.net
The average pace should stay constant over time, but the speed should vary within each stroke.
What is the best way to change the constant velocity into a curve which (at least basically) resembles a more realistic rowing stroke?
Note: Any ideas on how to tag this question better? Is it an algorithm-problem?
If your goal is to simply come up with something visually plausible and not to do a full physical simulation, you can simply add a sine wave to the position.
import math

class Boat:
    def __init__(self, pace, spm, var=0.5):
        self.pace = pace       #average velocity of the boat in m/s
        self.sps = spm/60.0    #strokes per second
        self.var = var         #variation in speed from 0-1
        self.totalT = 0        #total time
        self.distance = 0      #distance traveled

    def move(self, deltaT):
        self.totalT += deltaT
        self.distance = self.pace * (self.totalT
            + self.var * math.sin(self.totalT * self.sps * 2*math.pi))
You need to be careful with the variation var, if it gets too high the boat might go backwards and destroy the illusion.
You can convert a curve like this into a polynomial equation for velocity.
A description/example of how to do this can be found at:
python numpy/scipy curve fitting
This shows you how to take a set of x,y coordinates (which you can get by inspection of your existing plot or from actual data) and create a polynomial function.
If you are using the same curve for each Boat object, you could just hard code it into your program. But you could also have a separate polynomial equation for each Boat object as well, assuming each rower or boat has a different profile.
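A small sketch of that approach; the sample points are invented purely to stand in for values read off a published stroke curve:

import numpy as np

#Fraction of the stroke cycle (0..1) and relative boat speed, read off a curve.
phase = np.array([0.0, 0.1, 0.25, 0.4, 0.6, 0.8, 1.0])
speed = np.array([0.8, 1.3, 1.6, 1.4, 1.0, 0.85, 0.8])

coeffs = np.polyfit(phase, speed, 4)   #fit a 4th-degree polynomial
velocity_profile = np.poly1d(coeffs)

print(velocity_profile(0.3))           #relative speed 30% of the way into a stroke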
You can perform simple integration of the differential equation of motion. (This is what you are already doing to get space as a function of time, with constant speed, x' = x + V.dt.)
Assume a simple model with a constant force during the stroke and no force during the glide, and drag proportional to the speed.
So the acceleration is a = P - D.v during stroke, and - D.v during glide (deceleration).
The speed is approximated with v' = v + a.dt.
The space is approximated with x' = x + v.dt.
If dt is sufficiently small, this motion should look realistic. You can refine the model with a more accurate force law and better integration techniques like Runge-Kutta, but I am not sure it is worth it.
Below is an example plot of speed and position vs. time using this technique. It shows the speed oscillations quickly settling into a periodic regime, and quasi-linear displacement with undulations.
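A minimal sketch of that integration loop; the propulsion, drag, and timing constants are assumptions chosen only to show the structure:

P = 4.0            #propulsive acceleration during the stroke (m/s^2)
D = 1.0            #drag coefficient: deceleration = D * v
stroke_time = 0.8  #seconds of drive per stroke
cycle_time = 3.0   #seconds per full stroke cycle (20 strokes per minute)
dt = 0.01

v, x, t = 0.0, 0.0, 0.0
while t < 30.0:
    in_drive = (t % cycle_time) < stroke_time
    a = (P if in_drive else 0.0) - D * v
    v += a * dt    #v' = v + a*dt
    x += v * dt    #x' = x + v*dt
    t += dt
print("speed %.2f m/s, distance %.1f m" % (v, x))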

What is wrong with my gravity simulation?

As per advice given to me in this answer, I have implemented a Runge-Kutta integrator in my gravity simulator.
However, after I simulate one year of the solar system, the positions are still off by about 110 000 kilometers, which isn't acceptable.
My initial data was provided by NASA's HORIZONS system. Through it, I obtained position and velocity vectors of the planets, Pluto, the Moon, Deimos and Phobos at a specific point in time.
These vectors were 3D; however, some people told me that I could ignore the third dimension since the planets align themselves in a plane around the Sun, and so I did. I merely copied the x-y coordinates into my files.
This is the code of my improved update method:
"""
Measurement units:
[time] = s
[distance] = m
[mass] = kg
[velocity] = ms^-1
[acceleration] = ms^-2
"""
import math
from itertools import combinations

class Uni:
    def Fg(self, b1, b2):
        """Returns the gravitational force acting between two bodies as a Vector2."""
        a = abs(b1.position.x - b2.position.x)  #Distance on the x axis
        b = abs(b1.position.y - b2.position.y)  #Distance on the y axis
        r = math.sqrt(a*a + b*b)
        fg = (self.G * b1.m * b2.m) / pow(r, 2)
        return Vector2(a/r * fg, b/r * fg)

    #After this has run, all bodies have the correct accelerations:
    def updateAccel(self):
        #For every combination of two bodies (b1 and b2) out of all bodies:
        for b1, b2 in combinations(self.bodies.values(), 2):
            fg = self.Fg(b1, b2)  #Calculate the gravitational force between them
            #Add this force to the current force vector of the body:
            if b1.position.x > b2.position.x:
                b1.force.x -= fg.x
                b2.force.x += fg.x
            else:
                b1.force.x += fg.x
                b2.force.x -= fg.x
            if b1.position.y > b2.position.y:
                b1.force.y -= fg.y
                b2.force.y += fg.y
            else:
                b1.force.y += fg.y
                b2.force.y -= fg.y
        #For each body (b) in all bodies (self.bodies.itervalues()):
        for b in self.bodies.itervalues():
            b.acceleration.x = b.force.x/b.m
            b.acceleration.y = b.force.y/b.m
            b.force.null()  #Reset the force as it's not needed anymore.

    def RK4(self, dt, stage):
        #For each body (b) in all bodies (self.bodies.itervalues()):
        for b in self.bodies.itervalues():
            rd = b.rk4data  #rk4data is an object where the integrator stores its intermediate data
            if stage == 1:
                rd.px[0] = b.position.x
                rd.py[0] = b.position.y
                rd.vx[0] = b.velocity.x
                rd.vy[0] = b.velocity.y
                rd.ax[0] = b.acceleration.x
                rd.ay[0] = b.acceleration.y
            if stage == 2:
                rd.px[1] = rd.px[0] + 0.5*rd.vx[0]*dt
                rd.py[1] = rd.py[0] + 0.5*rd.vy[0]*dt
                rd.vx[1] = rd.vx[0] + 0.5*rd.ax[0]*dt
                rd.vy[1] = rd.vy[0] + 0.5*rd.ay[0]*dt
                rd.ax[1] = b.acceleration.x
                rd.ay[1] = b.acceleration.y
            if stage == 3:
                rd.px[2] = rd.px[0] + 0.5*rd.vx[1]*dt
                rd.py[2] = rd.py[0] + 0.5*rd.vy[1]*dt
                rd.vx[2] = rd.vx[0] + 0.5*rd.ax[1]*dt
                rd.vy[2] = rd.vy[0] + 0.5*rd.ay[1]*dt
                rd.ax[2] = b.acceleration.x
                rd.ay[2] = b.acceleration.y
            if stage == 4:
                rd.px[3] = rd.px[0] + rd.vx[2]*dt
                rd.py[3] = rd.py[0] + rd.vy[2]*dt
                rd.vx[3] = rd.vx[0] + rd.ax[2]*dt
                rd.vy[3] = rd.vy[0] + rd.ay[2]*dt
                rd.ax[3] = b.acceleration.x
                rd.ay[3] = b.acceleration.y
            b.position.x = rd.px[stage-1]
            b.position.y = rd.py[stage-1]

    def update(self, dt):
        """Pushes the uni 'dt' seconds forward in time."""
        #Repeat four times:
        for i in range(1, 5, 1):
            self.updateAccel()  #Calculate the current acceleration of all bodies
            self.RK4(dt, i)     #ith Runge-Kutta step
        #Set the results of the Runge-Kutta algorithm to the bodies:
        for b in self.bodies.itervalues():
            rd = b.rk4data
            b.position.x = b.rk4data.px[0] + (dt/6.0)*(rd.vx[0] + 2*rd.vx[1] + 2*rd.vx[2] + rd.vx[3])  #original_x + delta_x
            b.position.y = b.rk4data.py[0] + (dt/6.0)*(rd.vy[0] + 2*rd.vy[1] + 2*rd.vy[2] + rd.vy[3])
            b.velocity.x = b.rk4data.vx[0] + (dt/6.0)*(rd.ax[0] + 2*rd.ax[1] + 2*rd.ax[2] + rd.ax[3])
            b.velocity.y = b.rk4data.vy[0] + (dt/6.0)*(rd.ay[0] + 2*rd.ay[1] + 2*rd.ay[2] + rd.ay[3])
        self.time += dt  #Internal time variable
The algorithm is as follows:
1. Update the accelerations of all bodies in the system
2. RK4 (first step)
3. Go to 1
4. RK4 (second step)
5. Go to 1
6. RK4 (third step)
7. Go to 1
8. RK4 (fourth step)
Did I mess something up with my RK4 implementation? Or did I just start with corrupted data (too few important bodies and ignoring the 3rd dimension)?
How can this be fixed?
Explanation of my data etc...
All of my coordinates are relative to the Sun (i.e. the Sun is at (0, 0)).
./my_simulator 1yr
Earth position: (-1.47589927462e+11, 18668756050.4)
HORIZONS (NASA):
Earth position: (-1.474760457316177E+11, 1.900200786726017E+10)
I got the 110 000 km error by subtracting the Earth's x coordinate given by NASA from the one predicted by my simulator.
relative error = (my_x_coordinate - nasa_x_coordinate) / nasa_x_coordinate * 100
= (-1.47589927462e+11 + 1.474760457316177E+11) / -1.474760457316177E+11 * 100
= 0.077%
The relative error seems minuscule, but that's simply because Earth is really far away from the Sun, both in my simulation and in NASA's. The distance is still huge and renders my simulator useless.
110 000 km absolute error means what relative error?
I got the 110 000 km value by subtracting my predicted Earth's x coordinate from NASA's Earth x coordinate.
I'm not sure what you're calculating here or what you mean by "NASA's Earth x coordinate". That's a distance from what origin, in what coordinate system, at what time? (As far as I know, the earth moves in orbit around the sun, so its x-coordinate w.r.t. a coordinate system centered at the sun is changing all the time.)
In any case, you calculated an absolute error of 110,000 km by subtracting your calculated value from "NASA's Earth x coordinate". You seem to think this is a bad answer. What's your expectation? To hit it spot on? To be within a meter? One km? What's acceptable to you and why?
You get a relative error by dividing your error difference by "NASA's Earth x coordinate". Think of it as a percentage. What value do you get? If it's 1% or less, congratulate yourself. That would be quite good.
You should know that floating point numbers aren't exact on computers. (You can't represent 0.1 exactly in binary any more than you can represent 1/3 exactly in decimal.) There are going to be errors. Your job as a simulator is to understand the errors and minimize them as best you can.
You could have a stepsize problem. Try reducing your time step size by half and see if you do better. If you do, it says that your results have not converged. Reduce by half again until you achieve acceptable error.
Your equations might be poorly conditioned. Small initial errors will be amplified over time if that's true.
I'd suggest that you non-dimensionalize your equations and calculate the stability limit step size. Your intuition about what a "small enough" step size should be might surprise you.
I'd also read a bit more about the many body problem. It's subtle.
You might also try a numerical integration library instead of your integration scheme. You'll program your equations and give them to an industrial strength integrator. It could give some insight into whether or not it's your implementation or the physics that causes the problem.
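For instance, a hedged sketch of handing the equations to scipy for a single planet around a fixed Sun (constants are rounded, and this is deliberately not the full N-body setup from the question):

from scipy.integrate import solve_ivp

GM_SUN = 1.327e20  #gravitational parameter of the Sun, m^3/s^2

def rhs(t, state):
    x, y, vx, vy = state
    r3 = (x*x + y*y) ** 1.5
    return [vx, vy, -GM_SUN * x / r3, -GM_SUN * y / r3]

state0 = [1.496e11, 0.0, 0.0, 29780.0]  #rough Earth distance (m) and orbital speed (m/s)
one_year = 3.156e7                      #seconds
sol = solve_ivp(rhs, (0.0, one_year), state0, rtol=1e-10, atol=1.0)
print(sol.y[0, -1], sol.y[1, -1])       #position after one year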
Personally, I don't like your implementation. It'd be a better solution if you'd done it with mathematical vectors in mind. The "if" test for the relative positions leaves me cold. Vector mechanics would make the signs come out naturally.
UPDATE:
OK, your relative errors are pretty small.
Of course the absolute error does matter - depending on your requirements. If you're landing a vehicle on a planet you don't want to be off by that much.
So you need to stop making assumptions about what constitutes too small a step size and do what you must to drive the errors to an acceptable level.
Are all the quantities in your calculation 64-bit IEEE floating point numbers? If not, you'll never get there.
A 64 bit floating point number has about 16 digits of accuracy. If you need more than that, you'll have to use an infinite precision object like Java's BigDecimal or - wait for it - rescale your equations to use a length unit other than kilometers. If you scale all your distances by something meaningful for your problem (e.g., the diameter of the earth or the length of the major/minor axis of the earth's orbit) you might do better.
To do an RK4 integration of the solar system you need very good precision or your solution will diverge quickly. Assuming you have implemented everything correctly, you may be seeing the drawbacks of RK4 for this sort of simulation.
To verify whether this is the case, try a different integration algorithm. I found that with Verlet integration a solar system simulation is much less chaotic. Verlet is also much simpler to implement than RK4.
The reason Verlet (and derived methods) are often better than RK4 for long-term prediction (like full orbits) is that they are symplectic: they preserve the geometric structure of the orbital motion and keep the energy error bounded, which RK4 does not. Thus Verlet will still give physically plausible behavior after its error grows (a realistic orbit, just slightly wrong), whereas RK4 gives non-physical behavior once it diverges.
Also: make sure you are using floating point as well as you can. Don't use distances in meters in the solar system, since the precision of floating-point numbers is much better in the 0..1 interval. Using AU or some other normalized scale is much better than using meters. As suggested on the other topic, ensure you use an epoch for the time to avoid accumulating errors when adding time steps.
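A hedged sketch of a velocity-Verlet step for one body, assuming position, velocity and acceleration are numpy arrays and that acceleration(pos) is a hypothetical helper returning the total gravitational acceleration at a position (not the OP's updateAccel):

import numpy as np

def verlet_step(pos, vel, acc, dt, acceleration):
    #Velocity Verlet: update position, recompute acceleration, then update velocity.
    new_pos = pos + vel * dt + 0.5 * acc * dt * dt
    new_acc = acceleration(new_pos)
    new_vel = vel + 0.5 * (acc + new_acc) * dt
    return new_pos, new_vel, new_acc

#Example with a fixed central body of gravitational parameter GM (illustrative value):
GM = 1.327e20
acceleration = lambda p: -GM * p / np.linalg.norm(p)**3
pos, vel = np.array([1.496e11, 0.0]), np.array([0.0, 29780.0])
acc = acceleration(pos)
for _ in range(1000):
    pos, vel, acc = verlet_step(pos, vel, acc, dt=60.0, acceleration=acceleration)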
Such simulations are notoriously unreliable. Rounding errors accumulate and introduce instability. Increasing precision doesn't help much; the problem is that you (have to) use a finite step size and nature uses a zero step size.
You can reduce the problem by reducing the step size, so it takes longer for the errors to become apparent. If you are not doing this in real time, you can use a dynamic step size which reduces if two or more bodies are very close.
One thing I do with these kinds of simulations is "re-normalise" after each step to make the total energy the same. The sum of gravitational plus kinetic energy for the system as a whole should be a constant (conservation of energy). Work out the total energy after each step, and then scale all the object speeds by a constant amount to keep the total energy a constant. This at least keeps the output looking more plausible. Without this scaling, a tiny amount of energy is either added to or removed from the system after each step, and orbits tend to blow up to infinity or spiral into the sun.
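A hedged sketch of that renormalisation step, assuming bodies expose mass m, velocity and position as in the question's code (the function name and the target_energy argument, the total energy measured at the start of the run, are hypothetical):

import math
from itertools import combinations

def renormalize_energy(bodies, G, target_energy):
    #Total kinetic energy and pairwise gravitational potential energy.
    kinetic = sum(0.5 * b.m * (b.velocity.x**2 + b.velocity.y**2) for b in bodies)
    potential = 0.0
    for b1, b2 in combinations(bodies, 2):
        r = math.hypot(b1.position.x - b2.position.x, b1.position.y - b2.position.y)
        potential -= G * b1.m * b2.m / r
    #Scale all velocities by one factor so kinetic + potential returns to target_energy.
    if kinetic > 0 and target_energy > potential:
        scale = math.sqrt((target_energy - potential) / kinetic)
        for b in bodies:
            b.velocity.x *= scale
            b.velocity.y *= scale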
Very simple changes that will improve things (proper usage of floating point values)
Change the unit system to use as many mantissa bits as possible. Using meters, you're doing it wrong... Use AU, as suggested above. Even better, scale things so that the solar system fits in a 1x1x1 box.
As already said in another post: compute your time as time = epoch_count * time_step, not by repeatedly adding the step! This way you avoid accumulating errors.
When summing several values, use a high-precision summation algorithm, like Kahan summation. In Python, math.fsum does it for you.
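For example (a standalone illustration, not the OP's force loop):

import math

values = [0.1] * 10
print(sum(values))        #0.9999999999999999 - rounding error accumulates
print(math.fsum(values))  #1.0 - compensated (Kahan-style) summation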
Shouldn't the force decomposition be
th = math.atan2(b, a)
return Vector2(math.cos(th) * fg, math.sin(th) * fg)
(see http://www.physicsclassroom.com/class/vectors/Lesson-3/Resolution-of-Forces or https://fiftyexamples.readthedocs.org/en/latest/gravity.html#solution)
BTW: you take the square root to calculate the distance, but you actually need the squared distance...
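Putting those two remarks together, a hedged sketch of an Fg that returns signed components directly, so updateAccel would no longer need the if tests (Vector2 is assumed to be the OP's own class):

import math

def Fg(self, b1, b2):
    #Signed displacement from b1 to b2 carries the direction of the attractive force on b1.
    dx = b2.position.x - b1.position.x
    dy = b2.position.y - b1.position.y
    r2 = dx*dx + dy*dy   #squared distance: no sqrt needed for the magnitude
    r = math.sqrt(r2)    #still needed once, to normalize the direction
    fg = self.G * b1.m * b2.m / r2
    return Vector2(fg * dx / r, fg * dy / r)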
Why not simplify
r = math.sqrt(a*a + b*b)
fg = (self.G * b1.m * b2.m) / pow(r, 2)
to
r2 = a*a + b*b
fg = (self.G * b1.m * b2.m) / r2
I am not sure about python, but in some cases you get more precise calculations for intermediate results (Intel CPUs support 80 bit floats, but when assigning to variables, they get truncated to 64 bit):
Floating point computation changes if stored in intermediate "double" variable
It is not quite clear in what frame of reference you have your planet coordinates and velocities. If it is in heliocentric frame (the frame of reference is tied to the Sun), then you have two options: (i) perform calculations in non-inertial frame of reference (the Sun is not motionless), (ii) convert positions and velocities into the inertial barycentric frame of reference. If your coordinates and velocities are in barycentric frame of reference, you must have coordinates and velocities for the Sun as well.
