Generate mesh from implicit function - python

I want to use Python (or Python 3) to generate a volume (3D) mesh from an implicit function:
def func(x, y, z):
    q = 0.25
    mu = q / (1. + q)
    return -(1 - mu)*pow(x*x + y*y + z*z, -1./2.) - mu*pow(pow(x - 1, 2) + y*y + z*z, -1./2.) - 0.5*(pow(x - mu, 2) + y*y) + 1.9023266381531847
This function has a complicated isosurface, but I want to restrict the surface to the box between x = -0.615 and x = 1.4 and between y = -0.6 and y = 0.6. There are no restrictions in the z-direction, but the interesting part lies between z = -1 and z = +1.
I have tried pygalmesh, but I could not adapt their example to my function: it crashes my Python kernel without output. Is it possible to get pygalmesh to do this? If not, what would be a better way?

Just for the record, this doesn't crash without output for me:
import numpy
import pygalmesh

class GrimReaper(pygalmesh.DomainBase):
    def __init__(self):
        super().__init__()

    def eval(self, x):
        q = 0.25
        mu = q / (1.0 + q)
        x, y, z = x
        return (
            -(1 - mu) / numpy.sqrt(x ** 2 + y ** 2 + z ** 2)
            - mu / numpy.sqrt((x - 1) ** 2 + y ** 2 + z ** 2)
            - 0.5 * ((x - mu) ** 2 + y ** 2)
            + 1.9023266381531847
        )

    def get_bounding_sphere_squared_radius(self):
        return 2.0

d = GrimReaper()
mesh = pygalmesh.generate_mesh(d, cell_size=0.1)
mesh.write("out.xmf")
Running this produces the following output before it dies:
Inserting protection balls...
refine_balls = true
min_balls_radius = 0
min_balls_weight = 0
insert_corners() done. Nb of points in triangulation: 0
insert_balls_on_edges() done. Nb of points in triangulation: 0
refine_balls() done. Nb of points in triangulation: 0
construct initial points (nb_points: 12)
s.py:17: RuntimeWarning: divide by zero encountered in double_scalars
+ 1.9023266381531847
12/12 initial point(s) found...
Start surface scan...Scanning triangulation for bad facets (sequential) - number of finite facets = 50...
Number of bad facets: 0
scanning edges (lazy)
scanning vertices (lazy)
end scan. [Bad facets:0]
Refining Surface...
Legend of the following line: (#vertices,#steps,#facets to refine,#tets to refine)
(12,0,0,0)
Total refining surface time: 1.90735e-05s
Start volume scan...Scanning triangulation for bad cells (sequential)... 20 cells scanned, done.
Number of bad cells: 1
end scan. [Bad tets:1]
Refining...
Legend of the following line: (#vertices,#steps,#facets to refine,#tets to refine)
(23,11,0,70) (18839.3 vertices/s)Segmentation fault (core dumped)
It works fine without the term - 0.5 * ((x - mu) ** 2 + y ** 2).
The segfault points towards an issue in CGAL. Perhaps it's useful to file a bug there.
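One way to impose the requested box (x in [-0.615, 1.4], y in [-0.6, 0.6], z in [-1, 1]) might be to intersect the implicit region with the box inside eval() itself, taking the pointwise max of the two signed functions. This is only a sketch: it assumes pygalmesh/CGAL treats negative values as "inside" (as in pygalmesh's own examples), and it is untested whether the clipping also avoids the crash near the two singular points.
import numpy
import pygalmesh

class ClippedReaper(pygalmesh.DomainBase):
    def __init__(self):
        super().__init__()

    def eval(self, x):
        q = 0.25
        mu = q / (1.0 + q)
        x, y, z = x
        f = (
            -(1 - mu) / numpy.sqrt(x ** 2 + y ** 2 + z ** 2)
            - mu / numpy.sqrt((x - 1) ** 2 + y ** 2 + z ** 2)
            - 0.5 * ((x - mu) ** 2 + y ** 2)
            + 1.9023266381531847
        )
        # box(x) is negative inside the requested box and positive outside;
        # max(f, box) is the intersection of the two "negative inside" regions
        box = max(-0.615 - x, x - 1.4, -0.6 - y, y - 0.6, -1.0 - z, z - 1.0)
        return max(f, box)

    def get_bounding_sphere_squared_radius(self):
        # the box fits inside a sphere of radius 2 around the origin
        return 4.0

d = ClippedReaper()
mesh = pygalmesh.generate_mesh(d, cell_size=0.1)
mesh.write("clipped.vtk")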

Related

Python - Find x and y values of a 2D gaussian given a value for the function

I have a 2D gaussian function f(x, y). I know the values x₀ and y₀ at which the peak g₀ of the function occurs. I then want to find the values xₑ and yₑ at which f(xₑ, yₑ) = g₀ / e. I know there are multiple solutions to this, but at least one is sufficient.
So far I have
def f(x, y, g0, x0, y0, sigma_x, sigma_y, offset):
    return offset + g0 * np.exp(-(((x - x0)**2 / (2 * sigma_x**2)) + ((y - y0)**2 / (2 * sigma_y**2))))
All variables taken as parameters are known as they were extracted from a curve fit.
I understand that taking the derivative in x and setting f() = 0, and similarly in y, gives a solvable linear system for (x, y), but this seems like overkill to implement manually. Is there some library or tool out there that can do what I am trying to achieve?
There are an infinite number of solutions (or possibly one trivial solution, or none, in special cases depending on the value of g0). A solution can be computed analytically in constant time using a direct method; there is no need for approximations or iterative root-finding. It is just pure maths.
Gaussian kernels have interesting symmetries. One of them is invariance to rotation once the peak is translated to (0,0). Another is that a 1D section of a 2D Gaussian surface is a Gaussian curve.
Let's ignore offset for a moment: it does not really change the problem (it is just a Z-axis translation) and only adds a useless extra term to the resolution.
The geometric solution to this problem is an ellipse, so the solution (xe, ye) follows the conic expression (xe-x0)²/a² + (ye-y0)²/b² = 1. If sigma_x = sigma_y, the solution is simpler: it is a circle with the expression (xe-x0)² + (ye-y0)² = r. Note that a, b and r depend on the searched value and on the kernel parameters (e.g. sigma_x). Changing sigma_x and sigma_y is like stretching the space, and the solution is stretched in the same way; changing x0 and y0 is like translating the space, and the solution is translated too.
In fact, we could solve the problem for the simpler case where x0=0, y0=0, sigma_x=1 and sigma_y=1, and then apply a translation followed by a linear transformation using a transformation matrix. A basic 4x4 matrix multiplication can do that. Solving the simpler case is much easier since there are fewer parameters to consider. Actually, g0 and offset can also be partially discarded from f, since they appear on both sides of the expression: one just needs to solve offset + g0 * h(xe, ye) = g0 / e, i.e. h(xe, ye) = 1 / e - offset / g0, where h(xe, ye) = exp(-(xe² + ye²)/2). Forgetting the translation and linear transformation for a moment, the problem can be solved quite easily:
h(xe, ye) = 1 / e - offset / g0
exp(-(xe² + ye²)/2) = 1 / e - offset / g0
-(xe² + ye²)/2 = ln(1 / e - offset / g0)
xe² + ye² = -2 * ln(1 / e - offset / g0)
That's it! We got our circle expression, where the squared radius is r = -2 * ln(1 / e - offset / g0). Note that ln here is the natural logarithm.
Now we could try to find the 4x4 matrix coefficients, or actually try to solve the full expression directly, which turns out not to be so difficult:
offset + g0 * exp(-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²))) = g0 / e
exp(-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²))) = 1 / e - offset / g0
-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²)) = ln(1 / e - offset / g0)
((x-x0)²/sigma_x² + (y-y0)²/sigma_y²)/2 = -ln(1 / e - offset / g0)
(x-x0)²/sigma_x² + (y-y0)²/sigma_y² = -2 * ln(1 / e - offset / g0)
That's it! We got your conic expression, where r = -2 * ln(1 / e - offset / g0) is a constant and a = sigma_x, b = sigma_y play the role of the axes in the conic above. It can be normalized using a = sigma_x * sqrt(r) and b = sigma_y * sqrt(r) so that the right-hand side is 1, exactly matching the earlier expression, but that is just a math detail.
You can find one point of the ellipse easily, since you know its centre (x0, y0) and there is at least one point at the intersection of the line y = y0 with the conic above. Let's find it:
(x-x0)²/sigma_x² + (y0-y0)²/sigma_y² = -2 * ln(1 / e - offset / g0)
(x-x0)²/sigma_x² = -2 * ln(1 / e - offset / g0)
(x-x0)² = -2 * ln(1 / e - offset / g0) * sigma_x²
x = sqrt(-2 * ln(1 / e - offset / g0) * sigma_x²) + x0
Note that there are two solutions (the other being -sqrt(...) + x0), but you only need one of them. I hope I did not make any mistake in the computation (at least the details above should be enough to spot one easily) and that the solution is not a complex number in your case. The benefit of this solution is that it is very fast to compute.
The final solution is:
(xe, ye) = (sqrt(-2*ln(1/e-offset/g0)*sigma_x²)+x0, y0)
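For completeness, a small sketch that just plugs the final formula into Python; the parameter values below are made up purely for illustration:
import numpy as np

def point_at_g0_over_e(g0, x0, y0, sigma_x, sigma_y, offset):
    # squared radius of the conic along the line y = y0 (see derivation above)
    r = -2.0 * np.log(1.0 / np.e - offset / g0)  # must be positive for a real solution
    xe = np.sqrt(r * sigma_x ** 2) + x0
    return xe, y0

# hypothetical fit results, for illustration only
g0, x0, y0, sigma_x, sigma_y, offset = 5.0, 1.0, 2.0, 0.5, 0.8, 0.1
xe, ye = point_at_g0_over_e(g0, x0, y0, sigma_x, sigma_y, offset)

# check: f(xe, ye) should come back as g0 / e
f_val = offset + g0 * np.exp(-(((xe - x0) ** 2 / (2 * sigma_x ** 2)) + ((ye - y0) ** 2 / (2 * sigma_y ** 2))))
print((xe, ye), f_val, g0 / np.e)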

How to fit a piecewise (alternating linear and constant segments) function to a parabolic function?

I have a function, for example f(x) = k * x^(1/a) (as in the code below), but it could be something else as well, like a quadratic or logarithmic function. I am only interested in the domain x = 1 .. 50000. The parameters of the function (a and k in this case) are known as well.
My goal is to fit a continuous piecewise function to this, which contains alternating segments of linear functions (i.e. sloped straight segments, each with an intercept of 0) and constants (i.e. horizontal segments joining the sloped segments together). The first and last segments are both sloped, and the number of segments should be pre-selected, between around 9 and 29 (that is, 5-15 linear segments plus 4-14 constant plateaus).
Formally:
The input function: f(x) = k * x^(1/a)
The fitted piecewise function: g(x) = b1*x on [r0, r1), c1 on [r1, r2), b2*x on [r2, r3), c2 on [r3, r4), and so on, ending with a sloped segment on the last interval; continuity at the breakpoints means c1 = b1*r1 = b2*r2, etc.
I am looking for the optimal resulting parameters (c, r, b) (in terms of least squares) when the number of segments (n) is specified beforehand.
The resulting constants (c) and breakpoints (r) should be whole natural numbers, and the slopes (b) should be values rounded to two decimal places.
I have tried to do the fitting numerically using the pwlf package with a segmented constant model, and then post-processed the resulting constant model with some graphical intuition to "slice" the constant steps with the slopes. It works to some extent, but I am sure this is suboptimal from both a fitting and a computational-efficiency perspective. It takes multiple minutes to generate a fit with 8 slopes on the range 1-50000. There must be a better way to do this.
My idea is that, instead of using only numerical methods/ML, the fact that we have the algebraic form of the input function could be exploited, at least by using algebraic transforms (integrals) to get to a simpler optimization problem.
import numpy as np
import matplotlib.pyplot as plt
import pwlf

# The input function
def input_func(x, k, a):
    return np.power(x, 1/a) * k

x = np.arange(1, 5e4)
y = input_func(x, 1.8, 1.3)
plt.plot(x, y);

def pw_fit(func, x_r, no_seg, *fparams):
    # working on the specified range
    x = np.arange(1, x_r)
    y_input = func(x, *fparams)
    my_pwlf = pwlf.PiecewiseLinFit(x, y_input, degree=0)
    res = my_pwlf.fit(no_seg)
    yHat = my_pwlf.predict(x)
    # Function values at the breakpoints
    y_isec = func(res, *fparams)
    # Slope values at the breakpoints
    slopes = np.round(y_isec / res, decimals=2)
    slopes = slopes[1:]
    # For the first slope value, I use the intersection of the first constant plateau and the input function
    slopes = np.insert(slopes, 0, np.round(y_input[np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0]] / np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0], decimals=2))
    plateaus = np.unique(np.round(yHat))
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    slopes = np.delete(slopes, to_del + 1)
    plateaus = np.delete(plateaus, to_del)
    breakpoints = [np.ceil(plateaus[0] / slopes[0])]
    for idx, j in enumerate(slopes[1:-1]):
        breakpoints.append(np.floor(plateaus[idx] / j))
        breakpoints.append(np.ceil(plateaus[idx + 1] / j))
    breakpoints.append(np.floor(plateaus[-1] / slopes[-1]))
    return slopes, plateaus, breakpoints

slo, plat, breaks = pw_fit(input_func, 50000, 8, 1.8, 1.3)

# The piecewise function itself
def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)

y_output = pw_calc(x, slo, plat, breaks)
plt.plot(x, y, y_output);
(Not important, but I think the fitted piecewise function is not continuous as it is. Intervals should be x<=r1; r1<x<=r2; ....)
As Anatolyg has pointed out, it looks to me that in the optimal solution (for the function posted at least, and probably for any function whose derivative is nonzero), the horizontal segments will collapse to a point or to the minimum segment length (in this case 1).
EDIT---------------------------------------------
The behavior above would only be valid if the slopes could have an intercept. If the intercepts are zero, as posted in the question, one consideration must be taken into account: is the initial parabolic function defined at zero or nearby? Imagine the function y = 0.001 * sqrt(x - 1000); then segments defined as b*x will have slopes close to zero and will be so similar to the constant segments that the best fit will simply be the single zero-intercept line that best fits the whole function.
Provided that the function is defined at zero or nearby, you can start by approximating the curve with linear segments only (with intercepts); a code sketch of this stage is given at the end of this answer:
Divide the function domain into N intervals (equal intervals, or intervals whose size is a function of the average curvature (or second derivative) of the function along the domain).
Do a linear fit/regression in each interval.
For each interval, if a point (or a bunch of points) at the edge of the interval is better fitted by the line of the neighboring interval than by the line of its own interval, reassign that point to the neighboring interval.
Repeat from step 2 until no edge points are moved.
The linear regressions can be optimized so that the covariance matrices are not recalculated from scratch on each iteration; instead, just add the contributions of the moved points to the previous covariance matrices.
Then each linear segment (LSi) is replaced by a combination of a small constant segment at the beginning (Cbi), a linear segment without intercept (Si), and another constant segment at the end (Cei). These segments are easy to calculate, since Si will pass through the middle point of LSi, and Cbi and Cei will take the begin and end values of LSi respectively. The interval of each segment then has to be calculated as an intersection between lines.
With this, the constant end segment will be collinear with the constant begin segment of the next interval, so they will merge, resulting in a series of interleaved constant and linear segments.
But this would only be a floating-point starting solution. Next, you will have to apply all the roundings, which will mess up the segments quite a lot, since the integer-interval constraint and the zero-intercept linear segments can be quite conflicting. In fact, b, c, r are not totally independent: if ci and ri+1 are known, then bi+1 is already fixed.
If nothing is broken so far, the final task will be to minimize the error/cost function (I assume it will be the integral of the error between the parabolic function and the segments). My guess is that gradients here will be quite a pain: if you change, for example, one ci, all the rest of the bj and cj will have to adapt as well due to the integer-interval restriction. However, if you can generalize the derivatives between parameters (how much do I have to adapt bi+1 if ci changes by one unit), you can propagate the change of one parameter to all other parameters and get a kind of gradient. Then, for each interval, you can estimate what the ideal parameter would be and, averaging over all intervals, calculate the best gradient step. Let me illustrate this:
Assuming first that r parameters are fixed, if I change c1 by one unit, b2 changes by 0.1, c2 changes by -0.2 and b3 changes by 0.2. This would be the gradient.
Then I estimate, comparing with the parabolic curve, that c1 should increase 0.5 (to reduce the cost by 10 points), b2 should increase 0.2 (to reduce the cost by 5 points), c2 should increase 0.2 (to reduce the cost by 6 points) and b3 should increase 0.1 (to reduce the cost by 9 points).
Finally, the gradient step would be (0.5/1·10 + 0.2/0.1·5 - 0.2/(-0.2)·6 + 0.1/0.2·9)/(10 + 5 + 6 + 9)~= 0.45. Thus, c1 would increase 0.45 units, b2 would increase 0.45·0.1, and so on.
When you add the r parameters to the pot, the calculation is not straightforward, since integer intervals do not have a proper derivative. However, you can treat the r parameters as floating-point values, calculate and apply the gradient step, and then apply the roundings.
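For illustration, here is a minimal sketch of the first stage only (equal initial intervals, a least-squares line per interval, then repeatedly reassigning edge points to whichever neighbouring line fits them better). The interval count, the reuse of the question's input_func, and the stopping rule are arbitrary choices for the sketch, not part of the method as stated:
import numpy as np

def input_func(x, k=1.8, a=1.3):
    return np.power(x, 1 / a) * k

def iterative_linear_segments(x, y, n_intervals, max_sweeps=100):
    # step 1: equal-sized intervals, stored as boundary indices into x
    bounds = np.linspace(0, len(x), n_intervals + 1).astype(int)

    def fit(i):
        # step 2: least-squares line (slope, intercept) on interval i
        sl = slice(bounds[i], bounds[i + 1])
        return np.polyfit(x[sl], y[sl], 1)

    for _ in range(max_sweeps):
        lines = [fit(i) for i in range(n_intervals)]
        moved = False
        # step 3: hand an edge point to the neighbouring interval if its line fits it better
        for i in range(n_intervals - 1):
            j = bounds[i + 1]                      # first point of interval i+1
            if (abs(np.polyval(lines[i], x[j]) - y[j])
                    < abs(np.polyval(lines[i + 1], x[j]) - y[j])
                    and bounds[i + 2] - bounds[i + 1] > 2):
                bounds[i + 1] += 1                 # point j now belongs to interval i
                moved = True
            else:
                k = bounds[i + 1] - 1              # last point of interval i
                if (abs(np.polyval(lines[i + 1], x[k]) - y[k])
                        < abs(np.polyval(lines[i], x[k]) - y[k])
                        and bounds[i + 1] - bounds[i] > 2):
                    bounds[i + 1] -= 1             # point k now belongs to interval i+1
                    moved = True
        # step 4: stop when a whole sweep moves nothing
        if not moved:
            break
    return bounds, [fit(i) for i in range(n_intervals)]

x = np.arange(1, 50001, dtype=float)
y = input_func(x)
bounds, lines = iterative_linear_segments(x, y, n_intervals=8)
for i, (slope, intercept) in enumerate(lines):
    lo, hi = x[bounds[i]], x[bounds[i + 1] - 1]
    print("x in [%.0f, %.0f]: y ~ %.4f * x + %.2f" % (lo, hi, slope, intercept))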
We can integrate the squared error function for linear and constant pieces and let SciPy optimize it. Python 3:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize

xl = 1
xh = 50000
a = 1.3
p = 1 / a
n = 8

def split_b_and_c(bc):
    return bc[::2], bc[1::2]

def solve_for_r(b, c):
    r = np.empty(2 * n)
    r[0] = xl
    r[1:-1:2] = c / b[:-1]
    r[2::2] = c / b[1:]
    r[-1] = xh
    return r

def linear_residual_integral(b, x):
    return (
        (x ** (2 * p + 1)) / (2 * p + 1)
        - 2 * b * x ** (p + 2) / (p + 2)
        + b ** 2 * x ** 3 / 3
    )

def constant_residual_integral(c, x):
    return x ** (2 * p + 1) / (2 * p + 1) - 2 * c * x ** (p + 1) / (p + 1) + c ** 2 * x

def squared_error(bc):
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    linear = np.sum(
        linear_residual_integral(b, r[1::2]) - linear_residual_integral(b, r[::2])
    )
    constant = np.sum(
        constant_residual_integral(c, r[2::2])
        - constant_residual_integral(c, r[1:-1:2])
    )
    return linear + constant

def evaluate(x, b, c, r):
    i = 0
    while x > r[i + 1]:
        i += 1
    return b[i // 2] * x if i % 2 == 0 else c[i // 2]

def main():
    bc0 = (xl + (xh - xl) * np.arange(1, 4 * n - 2, 2) / (4 * n - 2)) ** (
        p - 1 + np.arange(2 * n - 1) % 2
    )
    bc = scipy.optimize.minimize(
        squared_error, bc0, bounds=[(1e-06, None) for i in range(2 * n - 1)]
    ).x
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    X = np.linspace(xl, xh, 1000)
    Y = [evaluate(x, b, c, r) for x in X]
    plt.plot(X, X ** p)
    plt.plot(X, Y)
    plt.show()

if __name__ == "__main__":
    main()
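For reference, the two residual-integral helpers above are just closed forms of the squared-error integrals on one piece, taking the target to be f(x) = x^p as in the final plot: expanding (x^p - b*x)^2 = x^(2p) - 2*b*x^(p+1) + b^2*x^2 and integrating term by term gives x^(2p+1)/(2p+1) - 2*b*x^(p+2)/(p+2) + b^2*x^3/3, which is linear_residual_integral, while (x^p - c)^2 = x^(2p) - 2*c*x^p + c^2 integrates to x^(2p+1)/(2p+1) - 2*c*x^(p+1)/(p+1) + c^2*x, which is constant_residual_integral. squared_error then evaluates these antiderivatives at consecutive breakpoints r and sums the differences.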
I have tried to come up with a new solution myself, based on the idea of @Amo Robb: I partition the domain and curve-fit a dual piece (constant and linear together, with the help of np.maximum). I use 1 / f'(x) as the function that designates the breakpoints, but I know this is arbitrary and does not provide a global optimum. Maybe there is some optimal function for these breakpoints. Still, this solution is OK for me, as it might be appropriate to have a better fit for the first segments at the expense of the error in the later segments. (The task itself is actually a cost-based retail margin calculation {supply price -> added margin}, as the retail POS software can only work with such a piecewise margin function.)
The answer from @David Eisenstat is the correct optimal solution if the parameters are allowed to be floats. Unfortunately the POS software cannot use floats. It is OK to round the c's and r's afterwards, but the b's must be rounded to two decimals, as those are entered as percentages, and this constraint would ruin an optimal solution with long floats. I will try to further improve my solution with both Amo's and David's valuable input. Thank you for that!
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# The input function f(x)
def input_func(x, k, a):
    return np.power(x, 1/a) * k

# 1 / f'(x)
def one_per_der(x, k, a):
    return a / (k * np.power(x, 1/a - 1))

# 1 / f'(x) inverted
def one_per_der_inv(x, k, a):
    return np.power(a / (x * k), a / (1 - a))

def segment_fit(start, end, y, first_val):
    b, _ = curve_fit(lambda x, b: np.maximum(first_val, b * x), np.arange(start, end), y[start-1:end-1])
    b = float(np.round(b, decimals=2))
    bp = np.round(first_val / b)
    last_val = np.round(b * end)
    return b, bp, last_val

def pw_fit(end_range, no_seg, **fparams):
    y_bps = np.linspace(one_per_der(1, **fparams), one_per_der(end_range, **fparams), no_seg + 1)[1:]
    x_bps = np.round(one_per_der_inv(y_bps, **fparams))
    y = input_func(x, **fparams)
    slopes = [np.round(float(curve_fit(lambda x, b: x * b, np.arange(1, x_bps[0]), y[:int(x_bps[0])-1])[0]), decimals=2)]
    plats = [np.round(x_bps[0] * slopes[0])]
    bps = []
    for i, xbp in enumerate(x_bps[1:]):
        b, bp, last_val = segment_fit(int(x_bps[i] + 1), int(xbp), y, plats[i])
        slopes.append(b); bps.append(bp); plats.append(last_val)
    breaks = sorted(list(x_bps) + bps)[:-1]
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    breaks_to_del = np.concatenate((to_del * 2, to_del * 2 + 1))
    slopes = np.delete(slopes, to_del + 1)
    plats = np.delete(plats[:-1], to_del)
    breaks = np.delete(breaks, breaks_to_del)
    return slopes, plats, breaks

def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)

fparams = {'k': 1.8, 'a': 1.2}
end_range = 5e4
no_steps = 10
x = np.arange(1, end_range)
y = input_func(x, **fparams)
slopes, plats, breaks = pw_fit(end_range, no_steps, **fparams)
y_output = pw_calc(x, slopes, plats, breaks)
plt.plot(x, y_output, y);

Calculating the exact length of an SVG Arc in Python?

I want to be able to calculate the exact length of an SVG arc. I can do all the manipulations rather easily, but I am unsure whether there is a solution at all, or what the exact implementation of that solution would be.
Here's the exact solution for the circumference of an ellipse. Using popular libraries is fine. I fully grasp that there is no easy solution, as an exact one requires hypergeometric functions.
from math import pi
from scipy.special import hyp2f1

def exact(a, b):
    t = ((a - b) / (a + b)) ** 2
    return pi * (a + b) * hyp2f1(-0.5, -0.5, 1, t)

a = 2.667950e9
b = 6.782819e8
print(exact(a, b))
My idea is to have this as opt-in code: if you happen to have scipy installed, it will use the exact, super-fancy solution; otherwise it will fall back to the weaker approximation code (progressively smaller line segments until the error is small). The problem is that the math level here is above me, and I don't know whether there is some way to specify a start and stop point for that.
Most of the approximation solutions are for full ellipses, but I only want the arc. There may also be a solution, unknown to me, for calculating the length of an arc on an ellipse, but the start and end positions can be anywhere; it doesn't seem valid to say that a sweep angle of 15% of the total possible angle therefore gives 15% of the ellipse circumference.
A more effective, less fancy arc approximation might also be nice. There are progressively better ellipse-circumference approximations, but I can't go from ellipse circumference to arc length, so those are currently not helpful.
Let's say the arc is parameterized by its start and end points on the ellipse, since that's how SVG parameterizes it. But anything that isn't tautological, like an arc-length parameterization, is a correct answer.
If you want to calculate this with your bare hands and the std lib, you can base your calculation on the arc-length integral L(t1, t2) = ∫ from t1 to t2 of sqrt(a²·sin²(t) + b²·cos²(t)) dt, where the parameter angle of a point can be recovered with acos(x/a). That form is only valid for two points on the upper half of the ellipse because of the acos, but we're going to use it with the angles directly.
The calculation consists of these steps:
Start with the SVG data: start point, a, b, rotation, large-arc flag, sweep flag, end point.
Rotate the coordinate system to match the horizontal axis of the ellipse.
Solve a system of 4 equations with 4 unknowns to get the centre point and the angles corresponding to the start and end points.
Approximate the integral by a discrete sum over small segments. This is where you could use scipy.special.ellipeinc, as suggested in the comments.
Step 2 is easy, just use a rotation matrix (note the angle rot is positive in the clockwise direction):
m = [
    [math.cos(rot), math.sin(rot)],
    [-math.sin(rot), math.cos(rot)]
]
Step 3 is very well explained in this answer. Note that the value obtained for a1 is modulo pi, because it is obtained with atan. That means you need to calculate the centre points for the two angles t1 and t2 and check that they match. If they don't, add pi to a1 and check again.
Step 4 is quite straightforward: divide the interval [t1, t2] into n segments, get the value of the function at the end of each segment, multiply it by the segment length and sum all of this up. You can try refining this by taking the value of the function at the mid-point of each segment, but I'm not sure there is much benefit to that. The number of segments is likely to have more effect on the precision.
Here is a very rough Python version of the above (please bear with the ugly coding style, I was doing this on my mobile whilst traveling 🤓)
import math

PREC = 1E-6

# matrix vector multiplication
def transform(m, p):
    return ((sum(x * y for x, y in zip(m_r, p))) for m_r in m)

# the partial integral function
def ellipse_part_integral(t1, t2, a, b, n=100):
    # function to integrate
    def f(t):
        return math.sqrt(1 - (1 - a**2 / b**2) * math.sin(t)**2)
    start = min(t1, t2)
    seg_len = abs(t1 - t2) / n
    return - b * sum(f(start + seg_len * (i + 1)) * seg_len for i in range(n))

def ellipse_arc_length(x1, y1, a, b, rot, large_arc, sweep, x2, y2):
    if abs(x1 - x2) < PREC and abs(y1 - y2) < PREC:
        return 0
    # get rot in radians
    rot = math.pi / 180 * rot
    # get the coordinates in the rotated coordinate system
    m = [
        [math.cos(rot), math.sin(rot)],
        [- math.sin(rot), math.cos(rot)]
    ]
    x1_loc, y1_loc, x2_loc, y2_loc = *transform(m, (x1, y1)), *transform(m, (x2, y2))
    r1 = (x1_loc - x2_loc) / (2 * a)
    r2 = (y2_loc - y1_loc) / (2 * b)
    # avoid division by 0 if both points have same y coord
    if abs(r2) > PREC:
        a1 = math.atan(r1 / r2)
    else:
        a1 = r1 / abs(r1) * math.pi / 2
    if abs(math.cos(a1)) > PREC:
        a2 = math.asin(r2 / math.cos(a1))
    else:
        a2 = math.asin(r1 / math.sin(a1))
    # calculate the angle of start and end point
    t1 = a1 + a2
    t2 = a1 - a2
    # calculate centre point coords
    x0 = x1_loc - a * math.cos(t1)
    y0 = y1_loc - b * math.sin(t1)
    x0s = x2_loc - a * math.cos(t2)
    y0s = y2_loc - b * math.sin(t2)
    # a1 value is mod pi so the centres may not match
    # if they don't, check a1 + pi
    if abs(x0 - x0s) > PREC or abs(y0 - y0s) > PREC:
        a1 = a1 + math.pi
        t1 = a1 + a2
        t2 = a1 - a2
        x0 = x1_loc - a * math.cos(t1)
        y0 = y1_loc - b * math.sin(t1)
        x0s = x2_loc - a * math.cos(t2)
        y0s = y2_loc - b * math.sin(t2)
    # get the angles in the range [0, 2 * pi]
    if t1 < 0:
        t1 += 2 * math.pi
    if t2 < 0:
        t2 += 2 * math.pi
    # increase minimum by 2 * pi for a large arc
    if large_arc:
        if t1 < t2:
            t1 += 2 * math.pi
        else:
            t2 += 2 * math.pi
    return ellipse_part_integral(t1, t2, a, b)

print(ellipse_arc_length(0, 0, 40, 40, 0, False, True, 80, 0))
The good news is that the sweep flag doesn't matter as long as you're just looking for the length of the arc.
I'm not 100% sure the modulo pi problem is handled correctly and the implementation above may have a few bugs.
Nevertheless, it gave me a good approximation of the length in the simple case of a half circle, so I dare call it a WIP. Let me know if this is worth pursuing; I can have a further look when I'm seated at a computer. Or maybe someone can come up with a clean way of doing this in the meantime?
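As a follow-up to the scipy.special.ellipeinc suggestion from step 4: once the parameter angles t1 and t2 are known, the discrete sum can be replaced by the incomplete elliptic integral of the second kind. A minimal sketch, independent of the endpoint-solving code above (the axis swap is only there to keep the elliptic parameter m inside [0, 1]):
import math
from scipy.special import ellipeinc

def ellipse_arc_length_exact(t1, t2, a, b):
    # Arc length along x = a*cos(t), y = b*sin(t) between parameter angles t1 and t2:
    # ds = sqrt(a^2*sin(t)^2 + b^2*cos(t)^2) dt = b*sqrt(1 - (1 - a^2/b^2)*sin(t)^2) dt,
    # so arc = b * (E(t2, m) - E(t1, m)) with m = 1 - a^2/b^2.
    if a > b:
        # swap the axes (and shift the angles by pi/2) so that m stays in [0, 1]
        a, b = b, a
        t1, t2 = t1 - math.pi / 2, t2 - math.pi / 2
    m = 1 - (a / b) ** 2
    return abs(b * (ellipeinc(t2, m) - ellipeinc(t1, m)))

# half circle of radius 40: should print pi * 40 twice
print(ellipse_arc_length_exact(0, math.pi, 40, 40), math.pi * 40)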

Finding the self-consistent solution to an equation

At the bottom of this question are a set of functions transcribed from a published neural-network model. When I call R, I get the following error:
RuntimeError: maximum recursion depth exceeded while calling a Python object
Note that within each call to R, a recursive call to R is made for every other neuron in the network. This is what causes the recursion depth to be exceeded. Each return value for R depends on all the others (with the network involving N = 512 total values.) Does anyone have any idea what method should be used to compute the self-consistent solution for R? Note that R itself is a smooth function. I've tried treating this as a vector root-solving problem -- but in this case the 512 dimensions are not independent. With so many degrees of freedom, the roots are never found (using the scipy.optimize functions). Does Python have any tools that can help with this? Maybe it would be more natural to solve R using something like Mathematica? I don't know how this is normally done.
"""Recurrent model with strong excitatory recurrence."""
import numpy as np
l = 3.14
def R(x_i):
"""Steady-state firing rate of neuron at location x_i.
Parameters
----------
x_i : number
Location of this neuron.
Returns
-------
rate : float
Firing rate.
"""
N = 512
T = 1
x = np.linspace(-2, 2, N)
sum_term = 0
for x_j in x:
sum_term += J(x_i - x_j) * R(x_j)
rate = I_S(x_i) + I_A(x_i) + 1.0 / N * sum_term - T
if rate < 0:
return 0
return rate
def I_S(x):
"""Sensory input.
Parameters
----------
x : number
Location of this neuron.
Returns
-------
float
Sensory input to neuron at x.
"""
S_0 = 0.46
S_1 = 0.66
x_S = 0
sigma_S = 1.31
return S_0 + S_1 * np.exp(-0.5 * (x - x_S) ** 2 / sigma_S ** 2)
def I_A(x):
"""Attentional additive bias.
Parameters
----------
x : number
Location of this neuron.
Returns
-------
number
Additive bias for neuron at x.
"""
x_A = 0
A_1 = 0.089
sigma_A = 0.35
A_0 = 0
sigma_A_prime = 0.87
if np.abs(x - x_A) < l:
return (A_1 * np.exp(-0.5 * (x - x_A) ** 2 / sigma_A ** 2) +
A_0 * np.exp(-0.5 * (x - x_A) ** 2 / sigma_A_prime ** 2))
return 0
def J(dx):
"""Connection strength.
Parameters
----------
dx : number
Neuron i's distance from neuron j.
Returns
-------
number
Connection strength.
"""
J_0 = -2.5
J_1 = 8.5
sigma_J = 1.31
if np.abs(dx) < l:
return J_0 + J_1 * np.exp(-0.5 * dx ** 2 / sigma_J ** 2)
return 0
if __name__ == '__main__':
pass
This recursion never ends, since there is no termination condition before the recursive call; adjusting the maximum recursion depth does not help:
def R(x_i):
    ...
    for x_j in x:
        sum_term += J(x_i - x_j) * R(x_j)
Perhaps you should be doing something like
# some suitable initial guess
state = guess

while True:  # or a fixed number of iterations
    next_state = compute_next_state(state)

    if some_condition_check(state, next_state):
        # return answer
        return state

    if some_other_check(state, next_state):
        # something wrong, terminate
        raise ...
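To make that concrete for this model, here is a minimal sketch of a vectorised, damped fixed-point iteration: all N = 512 rates live in one array, the connection matrix is precomputed, and the update implied by R() is repeated until the rates stop changing. The vectorised I_S/I_A/J below fold in the constants from the question (A_0 = 0, x_S = x_A = 0); the damping factor, tolerance and zero initial guess are arbitrary choices, and plain iteration is not guaranteed to converge for these parameters (if it doesn't, a root solver on r - G(r) is the next thing to try).
import numpy as np

l = 3.14
N = 512
T = 1.0
x = np.linspace(-2, 2, N)

def I_S(x):
    # sensory input (S_0, S_1, sigma_S from the question; x_S = 0)
    return 0.46 + 0.66 * np.exp(-0.5 * x ** 2 / 1.31 ** 2)

def I_A(x):
    # attentional bias (A_1, sigma_A from the question; A_0 = 0 so its term is dropped)
    out = 0.089 * np.exp(-0.5 * x ** 2 / 0.35 ** 2)
    return np.where(np.abs(x) < l, out, 0.0)

def J(dx):
    # connection strength (J_0, J_1, sigma_J from the question)
    out = -2.5 + 8.5 * np.exp(-0.5 * dx ** 2 / 1.31 ** 2)
    return np.where(np.abs(dx) < l, out, 0.0)

J_mat = J(x[:, None] - x[None, :])   # J_mat[i, j] = J(x_i - x_j)
I_tot = I_S(x) + I_A(x)

def fixed_point(alpha=0.5, tol=1e-10, max_iter=10000):
    # damped iteration r <- (1 - alpha)*r + alpha*G(r),
    # where G(r) = max(0, I_tot + J_mat @ r / N - T) is the update implied by R()
    r = np.zeros(N)
    for _ in range(max_iter):
        r_new = (1 - alpha) * r + alpha * np.maximum(0.0, I_tot + J_mat @ r / N - T)
        if np.max(np.abs(r_new - r)) < tol:
            return r_new
        r = r_new
    raise RuntimeError("did not converge; try a smaller alpha or a root solver")

if __name__ == "__main__":
    r = fixed_point()
    print(r.min(), r.max())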
Change the maximum recursion depth using sys.setrecursionlimit
import sys
sys.setrecursionlimit(10000)

def rec(i):
    if i > 1000:
        print('i is over 1000!')
        return
    rec(i + 1)

rec(0)
More info: https://docs.python.org/3/library/sys.html#sys.setrecursionlimit

Efficient method of calculating density of irregularly spaced points

I am attempting to generate map overlay images to assist in identifying hot-spots, that is, areas on the map that have a high density of data points. None of the approaches that I've tried is fast enough for my needs.
Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density).
I looked through the numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes the points overlaid on the heatmap for better understanding.)
My second attempt was to iterate over all the data points and calculate the hot-spot value as a function of distance. This produced a better-looking image; however, it is too slow to use in my application. Since it is O(n) per output cell, it works OK with 100 points, but blows out when I use my actual dataset of 30000 points.
My final attempt was to store the data in a KDTree and use the nearest 5 points to calculate the hot-spot value. This algorithm is O(1) per query, so much faster with a large dataset. It's still not fast enough, though: it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second.
Edit
The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods.
The gaussian filter solution suggested by Luke and Neil G is the fastest.
You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible.
Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and the gaussian filter suggested by Luke (improved to handle edges better and to allow zooming in) is here:
import matplotlib
import numpy as np
from matplotlib.mlab import griddata
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import math
from scipy.spatial import KDTree
import time
import scipy.ndimage as ndi

def grid_density_kdtree(xl, yl, xi, yi, dfactor):
    zz = np.empty([len(xi), len(yi)], dtype=np.uint8)
    zipped = zip(xl, yl)
    kdtree = KDTree(zipped)
    for xci in range(0, len(xi)):
        xc = xi[xci]
        for yci in range(0, len(yi)):
            yc = yi[yci]
            density = 0.
            retvalset = kdtree.query((xc, yc), k=5)
            for dist in retvalset[0]:
                density = density + math.exp(-dfactor * pow(dist, 2)) / 5
            zz[yci][xci] = min(density, 1.0) * 255
    return zz

def grid_density(xl, yl, xi, yi):
    ximin, ximax = min(xi), max(xi)
    yimin, yimax = min(yi), max(yi)
    xxi, yyi = np.meshgrid(xi, yi)
    #zz = np.empty_like(xxi)
    zz = np.empty([len(xi), len(yi)])
    for xci in range(0, len(xi)):
        xc = xi[xci]
        for yci in range(0, len(yi)):
            yc = yi[yci]
            density = 0.
            for i in range(0, len(xl)):
                xd = math.fabs(xl[i] - xc)
                yd = math.fabs(yl[i] - yc)
                if xd < 1 and yd < 1:
                    dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2))
                    density = density + math.exp(-5.0 * pow(dist, 2))
            zz[yci][xci] = density
    return zz

def boxsum(img, w, h, r):
    st = [0] * (w+1) * (h+1)
    for x in xrange(w):
        st[x+1] = st[x] + img[x]
    for y in xrange(h):
        st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w]
        for x in xrange(w):
            st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x]
    for y in xrange(h):
        y0 = max(0, y - r)
        y1 = min(h, y + r + 1)
        for x in xrange(w):
            x0 = max(0, x - r)
            x1 = min(w, x + r + 1)
            img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1]

def grid_density_boxsum(x0, y0, x1, y1, w, h, data):
    kx = (w - 1) / (x1 - x0)
    ky = (h - 1) / (y1 - y0)
    r = 15
    border = r * 2
    imgw = (w + 2 * border)
    imgh = (h + 2 * border)
    img = [0] * (imgw * imgh)
    for x, y in data:
        ix = int((x - x0) * kx) + border
        iy = int((y - y0) * ky) + border
        if 0 <= ix < imgw and 0 <= iy < imgh:
            img[iy * imgw + ix] += 1
    for p in xrange(4):
        boxsum(img, imgw, imgh, r)
    a = np.array(img).reshape(imgh, imgw)
    b = a[border:(border+h), border:(border+w)]
    return b

def grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data):
    kx = (w - 1) / (x1 - x0)
    ky = (h - 1) / (y1 - y0)
    r = 20
    border = r
    imgw = (w + 2 * border)
    imgh = (h + 2 * border)
    img = np.zeros((imgh, imgw))
    for x, y in data:
        ix = int((x - x0) * kx) + border
        iy = int((y - y0) * ky) + border
        if 0 <= ix < imgw and 0 <= iy < imgh:
            img[iy][ix] += 1
    return ndi.gaussian_filter(img, (r, r))  ## gaussian convolution

def generate_graph():
    n = 1000
    # data points range
    data_ymin = -2.
    data_ymax = 2.
    data_xmin = -2.
    data_xmax = 2.
    # view area range
    view_ymin = -.5
    view_ymax = .5
    view_xmin = -.5
    view_xmax = .5
    # generate data
    xl = np.random.uniform(data_xmin, data_xmax, n)
    yl = np.random.uniform(data_ymin, data_ymax, n)
    zl = np.random.uniform(0, 1, n)
    # get visible data points
    xlvis = []
    ylvis = []
    for i in range(0, len(xl)):
        if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax:
            xlvis.append(xl[i])
            ylvis.append(yl[i])
    fig = plt.figure()

    # plot histogram
    plt1 = fig.add_subplot(221)
    plt1.set_axis_off()
    t0 = time.clock()
    zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax], [view_xmin, view_xmax]], normed=True)
    plt.title('numpy.histogram2d - '+str(time.clock()-t0)+"sec")
    plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
    plt.scatter(xlvis, ylvis)

    # plot density calculated with kdtree
    plt2 = fig.add_subplot(222)
    plt2.set_axis_off()
    xi = np.linspace(view_xmin, view_xmax, 256)
    yi = np.linspace(view_ymin, view_ymax, 256)
    t0 = time.clock()
    zd = grid_density_kdtree(xl, yl, xi, yi, 70)
    plt.title('function of 5 nearest using kdtree\n'+str(time.clock()-t0)+"sec")
    cmap = cm.jet
    A = (cmap(zd/256.0)*255).astype(np.uint8)
    #A[:,:,3] = zd
    plt.imshow(A, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
    plt.scatter(xlvis, ylvis)

    # gaussian filter
    plt3 = fig.add_subplot(223)
    plt3.set_axis_off()
    t0 = time.clock()
    zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))
    plt.title('ndi.gaussian_filter - '+str(time.clock()-t0)+"sec")
    plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
    plt.scatter(xlvis, ylvis)

    # boxsum smoothing
    plt3 = fig.add_subplot(224)
    plt3.set_axis_off()
    t0 = time.clock()
    zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))
    plt.title('boxsum smoothing - '+str(time.clock()-t0)+"sec")
    plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
    plt.scatter(xlvis, ylvis)

if __name__ == '__main__':
    generate_graph()
    plt.show()
This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop.
import numpy as np
import scipy.ndimage as ndi

data = np.random.rand(30000, 2)      ## create random dataset
inds = (data * 255).astype('uint')   ## convert to indices
img = np.zeros((256, 256))           ## blank image
for i in xrange(data.shape[0]):      ## draw pixels
    img[inds[i, 0], inds[i, 1]] += 1
img = ndi.gaussian_filter(img, (10, 10))
A very simple implementation that could be done (in C) in real time, and that only takes fractions of a second in pure Python, is to just compute the result in screen space.
The algorithm is
Allocate the final matrix (e.g. 256x256) with all zeros
For each point in the dataset increment the corresponding cell
Replace each cell in the matrix with the sum of the values of the matrix in an NxN box centered on the cell. Repeat this step a few times.
Scale result and output
The computation of the box sum can be made very fast and independent of N by using a sum table. Each pass just requires two scans of the matrix... the total complexity is O(S + W*H*P), where S is the number of points, W and H are the width and height of the output and P is the number of smoothing passes.
Below is the code for a pure Python implementation (also very un-optimized); with 30000 points and a 256x256 output grayscale image the computation is 0.5 sec, including linear scaling to 0..255 and saving of a .pgm file (N = 5, 4 passes).
def boxsum(img, w, h, r):
    st = [0] * (w+1) * (h+1)
    for x in xrange(w):
        st[x+1] = st[x] + img[x]
    for y in xrange(h):
        st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w]
        for x in xrange(w):
            st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x]
    for y in xrange(h):
        y0 = max(0, y - r)
        y1 = min(h, y + r + 1)
        for x in xrange(w):
            x0 = max(0, x - r)
            x1 = min(w, x + r + 1)
            img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1]

def saveGraph(w, h, data):
    X = [x for x, y in data]
    Y = [y for x, y in data]
    x0, y0, x1, y1 = min(X), min(Y), max(X), max(Y)
    kx = (w - 1) / (x1 - x0)
    ky = (h - 1) / (y1 - y0)
    img = [0] * (w * h)
    for x, y in data:
        ix = int((x - x0) * kx)
        iy = int((y - y0) * ky)
        img[iy * w + ix] += 1
    for p in xrange(4):
        boxsum(img, w, h, 2)
    mx = max(img)
    k = 255.0 / mx
    out = open("result.pgm", "wb")
    out.write("P5\n%i %i 255\n" % (w, h))
    out.write("".join(map(chr, [int(v*k) for v in img])))
    out.close()

import random
data = [(random.random(), random.random())
        for i in xrange(30000)]
saveGraph(256, 256, data)
Edit
Of course the very definition of density in your case depends on a resolution radius, or is the density just +inf when you hit a point and zero when you don't?
The following is an animation built with the above program with just a few cosmetic changes:
used sqrt(average of squared values) instead of sum for the averaging pass
color-coded the results
stretched the result to always use the full color scale
drew antialiased black dots where the data points are
made an animation by incrementing the radius from 2 to 40
The total computing time of the 39 frames of the following animation with this cosmetic version is 5.4 seconds with PyPy and 26 seconds with standard Python.
Histograms
The histogram way is not the fastest, and can't tell the difference between an arbitrarily small separation of points and 2 * sqrt(2) * b (where b is the bin width).
Even if you construct the x bins and y bins separately (O(N)), you still have to perform some a*b convolution (where a and b are the numbers of bins in each direction), which is close to N^2 for any dense system, and even bigger for a sparse one (well, a*b >> N^2 in a sparse system).
Looking at the code above, you seem to have a loop in grid_density() which runs over the number of bins in y inside a loop over the number of bins in x, which is why you're getting O(N^2) performance (although if you are already order N, which you should plot on different numbers of elements to see, then you're just going to have to run less code per cycle).
If you want an actual distance function then you need to start looking at contact detection algorithms.
Contact Detection
Naive contact detection algorithms come in at O(N^2) in either RAM or CPU time, but there is an algorithm, rightly or wrongly attributed to Munjiza at St. Mary's college London, which runs in linear time and RAM.
You can read about it and implement it yourself from his book, if you like. I have written this code myself, in fact.
I have written a python-wrapped C implementation of this in 2D, which is not really ready for production (it is still single threaded, etc.), but it will run in as close to O(N) as your dataset will allow. You set the "element size", which acts as a bin size (the code will call interactions on everything within b of another point, and sometimes between b and 2 * sqrt(2) * b), give it an array (native python list) of objects with an x and y property, and my C module will call back to a Python function of your choice to run an interaction function for matched pairs of elements. It's designed for running contact-force DEM simulations, but it will work fine on this problem too.
As I haven't released it yet (the other bits of the library aren't ready), I'll have to give you a zip of my current source, but the contact detection part is solid. The code is LGPL'd.
You'll need Cython and a C compiler to make it work, and it has only been tested and working under *nix environments; if you're on Windows you'll need the MinGW C compiler for Cython to work at all.
Once Cython's installed, building/installing pynet should be a case of running setup.py.
The function you are interested in is pynet.d2.run_contact_detection(py_elements, py_interaction_function, py_simulation_parameters) (and you should check out the classes Element and SimulationParameters at the same level if you want it to throw less errors - look in the file at archive-root/pynet/d2/__init__.py to see the class implementations, they're trivial data holders with useful constructors.)
(I will update this answer with a public mercurial repo when the code is ready for more general release...)
Your solution is okay, but one clear problem is that you're getting dark regions despite there being a point right in the middle of them.
I would instead center an n-dimensional Gaussian on each data point and evaluate the sum at each grid point you want to display. To reduce it to linear time in the common case, use query_ball_point to consider only points within a couple of standard deviations.
If you find that the KDTree is really slow, why not call query_ball_point once every five pixels with a slightly larger threshold? It doesn't hurt too much to evaluate a few too many Gaussians.
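A minimal sketch of that idea, assuming scipy's cKDTree; the grid size, sigma and the 3-sigma cutoff are arbitrary choices for illustration:
import numpy as np
from scipy.spatial import cKDTree

def gaussian_density(points, grid_w=256, grid_h=256, sigma=0.02, cutoff=3.0):
    # Sum a Gaussian bump centred on every data point, evaluated on a regular grid.
    # Only neighbours within cutoff * sigma of each grid node are considered,
    # which keeps the work roughly linear for reasonably uniform data.
    tree = cKDTree(points)
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), grid_w)
    ys = np.linspace(points[:, 1].min(), points[:, 1].max(), grid_h)
    img = np.zeros((grid_h, grid_w))
    for iy, y in enumerate(ys):
        # query a whole row of grid nodes at once
        row = np.column_stack([xs, np.full_like(xs, y)])
        neighbours = tree.query_ball_point(row, r=cutoff * sigma)
        for ix, idx in enumerate(neighbours):
            if idx:
                d2 = np.sum((points[idx] - row[ix]) ** 2, axis=1)
                img[iy, ix] = np.exp(-0.5 * d2 / sigma ** 2).sum()
    return img

points = np.random.rand(30000, 2)
img = gaussian_density(points)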
You can do this with a 2D, separable convolution (scipy.ndimage.convolve1d) of your original image with a Gaussian-shaped kernel. With an image size of MxM and a filter size of P, the complexity is O(P*M^2) using separable filtering. The "Big-Oh" complexity is no doubt greater, but you can take advantage of numpy's efficient array operations, which should greatly speed up your calculations.
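A small sketch of that separable filtering, applied to the same kind of binned image used in the other answers; the kernel width and truncation are arbitrary choices for illustration:
import numpy as np
import scipy.ndimage as ndi

def separable_gaussian_density(points, size=256, sigma_px=10, truncate=4.0):
    # bin the points into a size x size image (same trick as the other answers)
    img = np.zeros((size, size))
    inds = (points * (size - 1)).astype(int)
    np.add.at(img, (inds[:, 1], inds[:, 0]), 1)

    # build a 1D Gaussian kernel and convolve along each axis in turn
    radius = int(truncate * sigma_px)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma_px) ** 2)
    kernel /= kernel.sum()
    img = ndi.convolve1d(img, kernel, axis=0)
    img = ndi.convolve1d(img, kernel, axis=1)
    return img

points = np.random.rand(30000, 2)   # coordinates scaled to [0, 1)
density = separable_gaussian_density(points)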
Just a note, the histogram2d function should work fine for this. Did you play around with different bin sizes? Your initial histogram2d plot seems to just use the default bin sizes... but there's no reason to expect the default sizes to give you the representation you want. Having said that, many of the other solutions are impressive too.
