I have a list of y values and a list of x values. I would like to find the area under the curve defined by these points. I have found a couple of solutions to this problem for x values with even spacing:
1) Calculating the area under a curve given a set of coordinates, without knowing the function
2) Using scipy to perform discrete integration of the sample
But neither of these works when the x values are not evenly spaced.
For example:
>>> from scipy.integrate import simps
>>> y = np.array([1,1,1,1])
>>> x = np.array([0,5,20,30])
>>> simps(y,x)
-inf
Of course, using x = np.array([0,10,20,30]) in the above code returns 30.0, as expected.
Can anyone suggest a way to find the area under a curve with uneven x-spacing?
I'd just go for a simple trapezoidal rule:
import numpy as np
x = np.array([0,5,20,30])
y = np.array([1,1,1,1])
s = np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2)
# same as
# s = 0.0
# for k in range(len(x) - 1):
#     s += (x[k+1] - x[k]) * (y[k+1] + y[k]) / 2
print(s)
30.0
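If you just want a one-liner, NumPy's np.trapz applies the same composite trapezoidal rule and accepts unevenly spaced x values directly; a quick sketch with the same arrays as above:
import numpy as np
x = np.array([0, 5, 20, 30])
y = np.array([1, 1, 1, 1])
s = np.trapz(y, x)  # trapezoidal rule using the given sample positions
print(s)
# 30.0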
Using Python, I have an array with the coefficients of a polynomial, let's say
polynomial = [1,2,3,4]
which means the equation:
y = 4x³ + 3x² + 2x + 1
(so the array is in reversed order)
Now how do I plot this into a visual curve in the Jupyter Notebook?
There was a similar question:
Plotting polynomial with given coefficients
but I didn't understand the answer (like, what are a and b?).
And what do I need to import to make this happen?
First, you have to decide the limits for x in your plot. Let's say x goes from -2 to 2. Let's also ask for a hundred points on our curve (any number large enough for your interval to give a smooth-looking curve).
Let's create that array:
lower_limit = -2
upper_limit = 2
num_pts = 100
x = np.linspace(lower_limit, upper_limit, num_pts)
Now, let's evaluate y at each of these points. NumPy has a handy polyval() function that'll do this for us. Remember that it wants the coefficients ordered from highest exponent to lowest, so you'll have to reverse the polynomial list:
poly_coefs = polynomial[::-1] # [4, 3, 2, 1]
y = np.polyval(poly_coefs, x)
Finally, let's plot everything:
plt.plot(x, y, '-r')
You'll need the following imports:
import numpy as np
from matplotlib import pyplot as plt
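If you would rather not reverse the list, numpy's newer polynomial interface takes coefficients in ascending order, i.e. exactly the order of your polynomial list; a small sketch using the same limits as above:
import numpy as np
from numpy.polynomial import Polynomial
from matplotlib import pyplot as plt
polynomial = [1, 2, 3, 4]    # 1 + 2x + 3x^2 + 4x^3, lowest power first
p = Polynomial(polynomial)   # no reversal needed
x = np.linspace(-2, 2, 100)
plt.plot(x, p(x), '-r')
plt.show()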
If you don't want to import numpy, you can also write vanilla python methods to do the same thing:
def linspace(start, end, num_pts):
    step = (end - start) / (num_pts - 1)
    return [start + step * i for i in range(num_pts)]

def polyval(coefs, xvals):
    yvals = []
    for x in xvals:
        y = 0
        for power, c in enumerate(reversed(coefs)):
            y += c * (x ** power)
        yvals.append(y)
    return yvals
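For completeness, here is a quick sketch of how those vanilla helpers fit together (plotting still uses matplotlib; plain Python lists are accepted by pyplot):
polynomial = [1, 2, 3, 4]
poly_coefs = polynomial[::-1]      # [4, 3, 2, 1], highest power first
xs = linspace(-2, 2, 100)
ys = polyval(poly_coefs, xs)
plt.plot(xs, ys, '-r')
plt.show()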
I have a grid containing some data in polar coordinates, simulating data obtained from a LIDAR for the SLAM problem. Each row in the grid represents the angle, and each column represents a distance. The values contained in the grid store a weighted probability of the occupancy map for a Cartesian world.
After converting to Cartesian Coordinates, I obtain something like this:
This mapping is intended to work in a FastSLAM application, with at least 10 particles. The performance I am obtaining isn't good enough for a reliable application.
I have tried nested loops, scipy.ndimage.geometric_transform, and accessing the grid directly with pre-computed coordinates.
In those examples, I am working with a 800x800 grid.
Nested loops: approx. 300 ms
i = 0
for scan in scans:
    hit = scan < laser.range_max
    if hit:
        d = np.linspace(scan + wall_size, 0, num=int((scan + wall_size) / cell_size))
    else:
        d = np.linspace(scan, 0, num=int(scan / cell_size))
    for distance in d:
        x = int(pos[0] + distance * math.cos(angle[i] + pos[2]))
        y = int(pos[1] + distance * math.sin(angle[i] + pos[2]))
        if distance > scan:
            grid_cart[y][x] = grid_cart[y][x] + hit_weight
        else:
            grid_cart[y][x] = grid_cart[y][x] + miss_weight
    i = i + 1
Scipy library (described here): approx. 2500 ms (gives a smoother result, since it interpolates the empty cells)
grid_cart = S.ndimage.geometric_transform(weight_mat, polar2cartesian,
    order=0,
    output_shape=(weight_mat.shape[0] * 2, weight_mat.shape[0] * 2),
    extra_keywords={'inputshape': weight_mat.shape,
                    'origin': (weight_mat.shape[0], weight_mat.shape[0])})
def polar2cartesian(outcoords, inputshape, origin):
    """Coordinate transform for converting a polar array to Cartesian coordinates.
    inputshape is a tuple containing the shape of the polar array. origin is a
    tuple containing the x and y indices of where the origin should be in the
    output array."""
    xindex, yindex = outcoords
    x0, y0 = origin
    x = xindex - x0
    y = yindex - y0
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    theta_index = np.round((theta + np.pi) * inputshape[1] / (2 * np.pi))
    return (r, theta_index)
Pre-computed indexes: 80ms
for i in range(0, 144000):
    grid_cart[ys[i]][xs[i]] = grid_polar_1d[i]
I am not very used to Python and NumPy, and I feel I am missing an easy and fast way to solve this problem. Are there any other alternatives?
Many thanks to you all!
I came across a piece of code that seems to be about 10x faster (8 ms):
angle_resolution = 1
range_max = 400
a, r = np.mgrid[0:int(360/angle_resolution),0:range_max]
x = (range_max + r * np.cos(a*(2*math.pi)/360.0)).astype(int)
y = (range_max + r * np.sin(a*(2*math.pi)/360.0)).astype(int)
for i in range(0, int(360/angle_resolution)):
    cart_grid[y[i,:],x[i,:]] = polar_grid[i,:]
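For what it's worth, since x, y and polar_grid all have the same shape, the remaining loop can be dropped as well: NumPy's fancy indexing accepts the full 2D index arrays in one go. A sketch under the same assumptions (cart_grid, polar_grid, x and y as above); note that plain fancy-indexed assignment keeps only the last value written to a repeated cell, so if you need to accumulate hit/miss weights instead, np.add.at handles repeated indices correctly:
cart_grid[y, x] = polar_grid            # single assignment, no Python loop over angles
# To accumulate weights where several polar cells map to the same Cartesian cell:
# np.add.at(cart_grid, (y, x), polar_grid)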
This is using Python.
I tried to post an image, but failed... So it may be difficult to understand.
I am looking at the intersection of the part inside a circle (whose center is (0, 0) and radius is 1) and the part above the line y = x; I call it the yellow part.
I would like to evaluate the area of the yellow part using random numbers, without a for or while statement, but I don't know how to do it.
I would also like to apply it in 3-D: the intersection of the part inside an ellipsoid (2x^2 + 3y^2 + z^2 - 4 = 0) and the part above the plane x + y + 2z - 1 = 0. But this is an additional question; it should be easy once I can do it in 2-D.
I'd like to calculate some volumes in my experiment. I have tried some code, but it doesn't work.
I would appreciate it if you could answer my question.
What I coded↓
import numpy as np
size = 10000
if y - x > 0:
    x = np.random.uniform(-1, 1, size)
    y = np.random.uniform(-1, 1, size)
    area = x**2 + y**2
    print(len(area[area < 1])/size)
I expected this code to give the area of the yellow part.
You have the y-x > 0 conditional in the wrong place.
import numpy as np
size = 10000
# Generate random x and y in the square ((-1, -1) (1, 1))
x = np.random.uniform(-1, 1, size)
y = np.random.uniform(-1, 1, size)
area = x*x + y*y
# Move the y-x conditional into the boolean array.
print(len(area[np.logical_and(area < 1, (y-x)>0)])/size)
# The logical and of the points inside the circle and those where y>x.
0.3963
This gives you the proportion of points in the area you want to calculate.
Area of the square = 4.
The proportion of points both inside the circle and with y - x > 0 is 0.3963.
4 * 0.3963 = 1.5852, which is close to the expected pi/2 ≈ 1.5708 (the line y = x passes through the circle's center, so the yellow region is exactly half the unit circle).
An idea is to identify the points in each set; np.where helps with that:
import numpy as np

size = 10000
x, y = 2*np.random.rand(2, size) - 1        # random points in a square
circle = np.where(x*x + y*y <= 1)
xc, yc = x[circle], y[circle]               # points inside the circle
half_circle = np.where(yc > xc)             # keep the half above the line y = x
xh, yh = xc[half_circle], yc[half_circle]   # points in the half circle

from pylab import scatter
scatter(x, y, c='red')
scatter(xc, yc, c='blue')
scatter(xh, yh, c='yellow', s=1)
print(len(xh)/size)
# 0.39441
Well, doing it without a for or while statement means you have to use NumPy's vector capabilities. Here is the code, which samples uniformly in the unit circle, counts the number of points above the y = x line, and prints the area.
import numpy as np
N=100000 # number of trials
# sampling uniformly in the circle
angle = 2.0 * np.pi * np.random.random(N)
radius = np.sqrt(np.random.random(N))
x = radius * np.cos(angle)
y = radius * np.sin(angle)
t = np.where(y > x)[0]
print(np.pi * len(t)/float(N))
It prints 1.5705135834560735 which is about π/2.
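For the 3-D part of the question, the same rejection idea carries over: sample uniformly in a box enclosing the ellipsoid 2x^2 + 3y^2 + z^2 = 4, keep the points inside the ellipsoid and above the plane x + y + 2z = 1, and multiply the accepted fraction by the box volume. A minimal sketch (the box bounds follow from the semi-axes sqrt(2), 2/sqrt(3) and 2):
import numpy as np
N = 1000000
ax, ay, az = np.sqrt(2), 2/np.sqrt(3), 2.0   # semi-axes of the ellipsoid
x = np.random.uniform(-ax, ax, N)
y = np.random.uniform(-ay, ay, N)
z = np.random.uniform(-az, az, N)
inside = (2*x**2 + 3*y**2 + z**2 < 4) & (x + y + 2*z - 1 > 0)
box_volume = (2*ax) * (2*ay) * (2*az)
print(box_volume * np.count_nonzero(inside) / N)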
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see if there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated it was before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2xRadius square

for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])    # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area.....dont ask
    for x in range(iRes):
        for y in range(iRes):
            if (x**2 + y**2) < (i * iRes)**2:
                if (x**2 + y**2) >= (i * iRes)**2:  # Checks for radius, and puts in cartesian space
                    radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                    radJ[x+iRes, -y+iRes] = (n-m) / a
                    radJ[-x+iRes, y+iRes] = (n-m) / a
                    radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (distance from the middle pixel) and the array to rotate, respectively, I would suggest:
from scipy import interpolate
import numpy as np
y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2,len(y)/2,len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01) # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, y in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + y ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r that has the same length as your data array and contains the distance of each point from the middle of the array. The code above does this automatically.
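Since interp1d accepts whole arrays, the two loops can also be collapsed into a single call on the radius grid; a short sketch reusing the same names as above:
current_radius = np.sqrt(X ** 2 + Y ** 2)   # radius of every grid point at once
profilegrid = interpol_index(current_radius)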
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbor). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, so it is faster but also less precise and needs far more memory, because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one simply adds the values of each tilt, which can end up outside the original range; this is "solved" by clipping the matrix to a global_min and global_max (see the code). The second one takes the mean of the tilts where they overlap, which is what forces us to use np.nan.
Please read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp
def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = tilt.min() if global_min > tilt.min() else global_min
        global_max = tilt.max() if global_max < tilt.max() else global_max
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np
def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size-1)
    ind_y = np.clip(ind_y, 0, vector.size-1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
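create_circular_mask() is not included in the snippet above; a minimal hypothetical helper matching the call signature used there could look like this:
def create_circular_mask(h, w, center=None, radius=None):
    # Hypothetical helper: boolean mask of shape (h, w), True inside the given circle
    if center is None:
        center = (w // 2, h // 2)
    if radius is None:
        radius = min(center[0], center[1], w - center[0], h - center[1])
    Y, X = np.ogrid[:h, :w]
    dist = np.sqrt((X - center[0])**2 + (Y - center[1])**2)
    return dist <= radius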
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
Short summary: How do I quickly calculate the finite convolution of two arrays?
Problem description
I am trying to obtain the finite convolution of two functions f(x), g(x), defined by (f * g)(x) = integral from 0 to x of f(x - t) g(t) dt.
To achieve this, I have taken discrete samples of the functions and turned them into arrays of length steps:
xarray = [x * i / steps for i in range(steps)]
farray = [f(x) for x in xarray]
garray = [g(x) for x in xarray]
I then tried to calculate the convolution using the scipy.signal.convolve function. This function gives the same results as the algorithm conv suggested here. However, the results differ considerably from analytical solutions. Modifying the algorithm conv to use the trapezoidal rule gives the desired results.
To illustrate this, I let
f(x) = exp(-x)
g(x) = 2 * exp(-2 * x)
the results are:
Here Riemann represents a simple Riemann sum, trapezoidal is a modified version of the Riemann algorithm to use the trapezoidal rule, scipy.signal.convolve is the scipy function and analytical is the analytical convolution.
Now let g(x) = x^2 * exp(-x) and the results become:
Here 'ratio' is the ratio of the values obtained from scipy to the analytical values. The above demonstrates that the problem cannot be solved by renormalising the integral.
The question
Is it possible to use the speed of scipy but retain the better results of a trapezoidal rule or do I have to write a C extension to achieve the desired results?
An example
Just copy and paste the code below to see the problem I am encountering. The two results can be brought into closer agreement by increasing the steps variable. I believe that the problem is due to artefacts from right-hand Riemann sums, because the integral is overestimated when it is increasing and approaches the analytical solution again as it is decreasing.
EDIT: I have now included the original algorithm 2 as a comparison which gives the same results as the scipy.signal.convolve function.
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
import math
def convolveoriginal(x, y):
    '''
    The original algorithm from http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P, Q, N = len(x), len(y), len(x) + len(y) - 1
    z = []
    for k in range(N):
        t, lower, upper = 0, max(0, k - (Q - 1)), min(P - 1, k)
        for i in range(lower, upper + 1):
            t = t + x[i] * y[k - i]
        z.append(t)
    return np.array(z)  # Modified to include conversion to numpy array
def convolve(y1, y2, dx=None):
    '''
    Compute the finite convolution of two signals of equal length.
    #param y1: First signal.
    #param y2: Second signal.
    #param dx: [optional] Integration step width.
    #note: Based on the algorithm at http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P = len(y1)  # Determine the length of the signal
    z = []       # Create a list of convolution values
    for k in range(P):
        t = 0
        lower = max(0, k - (P - 1))
        upper = min(P - 1, k)
        for i in range(lower, upper):
            t += (y1[i] * y2[k - i] + y1[i + 1] * y2[k - (i + 1)]) / 2
        z.append(t)
    z = np.array(z)     # Convert to a numpy array
    if dx is not None:  # Is a step width specified?
        z *= dx
    return z
steps = 50 #Number of integration steps
maxtime = 5 #Maximum time
dt = float(maxtime) / steps #Obtain the width of a time step
time = [dt * i for i in range(steps)] #Create an array of times
exp1 = [math.exp(-t) for t in time] #Create an array of function values
exp2 = [2 * math.exp(-2 * t) for t in time]
#Calculate the analytical expression
analytical = [2 * math.exp(-2 * t) * (-1 + math.exp(t)) for t in time]
#Calculate the trapezoidal convolution
trapezoidal = convolve(exp1, exp2, dt)
#Calculate the scipy convolution
sci = signal.convolve(exp1, exp2, mode = 'full')
#Slice the first half to obtain the causal convolution and multiply by dt
#to account for the step width
sci = sci[0:steps] * dt
#Calculate the convolution using the original Riemann sum algorithm
riemann = convolveoriginal(exp1, exp2)
riemann = riemann[0:steps] * dt
#Plot
plt.plot(time, analytical, label = 'analytical')
plt.plot(time, trapezoidal, 'o', label = 'trapezoidal')
plt.plot(time, riemann, 'o', label = 'Riemann')
plt.plot(time, sci, '.', label = 'scipy.signal.convolve')
plt.legend()
plt.show()
Thank you for your time!
Or, for those who prefer numpy to C: it will be slower than the C implementation, but it's just a few lines.
>>> t = np.linspace(0, maxtime-dt, 50)
>>> fx = np.exp(-np.array(t))
>>> gx = 2*np.exp(-2*np.array(t))
>>> analytical = 2 * np.exp(-2 * t) * (-1 + np.exp(t))
This looks like the trapezoidal result in this case (but I didn't check the math):
>>> s2a = signal.convolve(fx[1:], gx, 'full')*dt
>>> s2b = signal.convolve(fx, gx[1:], 'full')*dt
>>> s = (s2a+s2b)/2
>>> s[:10]
array([ 0.17235682, 0.29706872, 0.38433313, 0.44235042, 0.47770012,
0.49564748, 0.50039326, 0.49527721, 0.48294359, 0.46547582])
>>> analytical[:10]
array([ 0. , 0.17221333, 0.29682141, 0.38401317, 0.44198216,
0.47730244, 0.49523485, 0.49997668, 0.49486489, 0.48254154])
largest absolute error:
>>> np.max(np.abs(s[:len(analytical)-1] - analytical[1:]))
0.00041657780840698155
>>> np.argmax(np.abs(s[:len(analytical)-1] - analytical[1:]))
6
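Another numpy-only route (a sketch, not benchmarked against the C version below): the trapezoidal sum in the question's convolve() differs from the plain Riemann convolution only by half of the two endpoint terms, so it can be written as one scipy call plus a vectorised correction:
fx = np.exp(-t)
gx = 2 * np.exp(-2 * t)
riemann = signal.convolve(fx, gx, mode='full')[:len(t)]    # plain Riemann convolution
trap = (riemann - 0.5 * (fx * gx[0] + fx[0] * gx)) * dt    # trapezoidal endpoint correction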
Short answer: Write it in C!
Long answer
Using the cookbook about numpy arrays, I rewrote the trapezoidal convolution method in C. To use the C code, you need three files (https://gist.github.com/1626919):
The C code (performancemodule.c).
The setup file to build the code and make it callable from python (performancemodulesetup.py).
The python file that makes use of the C extension (performancetest.py)
The code should run after downloading by doing the following:
Adjust the include path in performancemodule.c.
Run the following
python performancemodulesetup.py build
python performancetest.py
You may have to copy the library file performancemodule.so or performancemodule.dll into the same directory as performancetest.py.
Results and performance
The results agree neatly with one another as shown below:
The performance of the C method is even better than scipy's convolve method. Running 10k convolutions with array length 50 requires
convolve (seconds, microseconds) 81 349969
scipy.signal.convolve (seconds, microseconds) 1 962599
convolve in C (seconds, microseconds) 0 87024
Thus, the C implementation is about 1000 times faster than the python implementation and a bit more than 20 times as fast as the scipy implementation (admittedly, the scipy implementation is more versatile).
EDIT: This does not solve the original question exactly but is sufficient for my purposes.