I'm trying to interpolate a gap I have between data points. The data I have is 2 arrays of time and acceleration. The acceleration array consists of values that can be considered periodic. The original data points with the gap look like this:
data points with gap
I am trying to do the interpolation by using the scipy.interpolate.interp1d as illustrated below:
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate

# piecewise-linear interpolation over the measured samples
interpolation_func = interpolate.interp1d(time, acceleration,
                                          kind='slinear')
new_time = np.arange(np.min(time), np.max(time), 0.1)
new_acc = interpolation_func(new_time)

plt.figure(2, figsize=(14, 8))
plt.title('Interpolated uncalibrated acceleration data')
plt.scatter(new_time, new_acc, c=new_time[:], s=1, vmin=np.min(new_time),
            vmax=np.max(new_time))
plt.colorbar()
plt.xlabel('Time [s]')
plt.ylabel('Acceleration')
plot_fig2 = output_folder + "kinematic_plot2.png"
plt.savefig(plot_fig2)
However, the result I'm getting is not accurate: I get a line that connects the last point of the first group of scattered points, on the left side of the gap, to the first point of the second group, on the right side of the gap. The wrong result looks like this:
Wrong result
I have tried the other kind options of scipy.interpolate.interp1d besides slinear, but all of them flatten the scattered points on both sides of the gap and fill the gap with a polynomial curve, which is not what I need. Are there any options in Python to interpolate the gap between the scattered points?
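interp1d can only connect the samples it is given, so whatever kind is chosen the gap is bridged by a generic segment. If the signal really is close to periodic, one alternative (a minimal sketch, not from the original post; the sinusoidal model and the initial frequency guess f0 are assumptions that would have to be tuned to the data) is to fit a periodic model to the points on both sides of the gap with scipy.optimize.curve_fit and evaluate that model inside the gap:
import numpy as np
from scipy.optimize import curve_fit

# assumed model: offset + amplitude * sin(2*pi*freq*t + phase)
def model(t, offset, amp, freq, phase):
    return offset + amp * np.sin(2 * np.pi * freq * t + phase)

f0 = 1.0  # hypothetical initial frequency guess, e.g. read off the plot
p0 = [acceleration.mean(), acceleration.std() * np.sqrt(2), f0, 0.0]
params, _ = curve_fit(model, time, acceleration, p0=p0)

new_time = np.arange(np.min(time), np.max(time), 0.1)
new_acc = model(new_time, *params)  # the fitted curve also covers the gap
Because the fit only uses the existing samples, the values filled into the gap follow the periodic behaviour of the rest of the signal rather than a straight line.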
Related
I have some data, where each data point has x and y coordinates and a magnitude assigned to it. I am currently plotting a scatter plot with the colours representing the magnitude of the points.
However, I would now like to group the data into a set of larger "pixels", which are illustrated in the plot below using the dashed grid (i.e. equally spaced square markers of size 0.2*0.2), where the magnitude is given by the average of the magnitudes of the points within the "pixel".
Is there a way to use a scatter plot to do this simply? Or do I need to manipulate the data myself to give this output beforehand?
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
sc = ax.scatter(x_coord, y_coord, s=100, c=magnitude, marker='s')
cbar = fig.colorbar(sc, ax=ax)
cbar.set_label('Magnitude', rotation=90)
ax.set_xlabel('x-position')
ax.set_ylabel('y-position')
Zooming into a part of the plot this gives me:
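For what it's worth, one way to compute those averaged "pixels" directly (a sketch only; the 0.2 cell size comes from the question, while the use of scipy.stats.binned_statistic_2d and the edge construction are assumptions) is to bin the points on a regular grid and plot the per-cell mean with pcolormesh:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binned_statistic_2d

cell = 0.2  # pixel size from the question
x_edges = np.arange(x_coord.min(), x_coord.max() + cell, cell)
y_edges = np.arange(y_coord.min(), y_coord.max() + cell, cell)

# mean magnitude of the points falling inside each cell (NaN where a cell is empty)
mean_mag, _, _, _ = binned_statistic_2d(x_coord, y_coord, magnitude,
                                        statistic='mean',
                                        bins=[x_edges, y_edges])

fig, ax = plt.subplots()
mesh = ax.pcolormesh(x_edges, y_edges, mean_mag.T)  # transpose: first axis of the statistic is x
cbar = fig.colorbar(mesh, ax=ax)
cbar.set_label('Magnitude', rotation=90)
ax.set_xlabel('x-position')
ax.set_ylabel('y-position')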
I'm trying to use matplotlib and contourf to generate some filled (polar) contour plots of velocity data. I have some data (MeanVel_Z_Run16_np) I am plotting on theta (Th_Run16) and r (R_Run16), as shown here:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
levels = np.linspace(-2.5, 4, 15)
cplot = ax.contourf(Th_Run16, R_Run16, MeanVel_Z_Run16_np, levels, cmap='plasma')
ax.set_rmax(80)
ax.set_rticks([15, 30, 45, 60])
rlabels = ax.get_ymajorticklabels()
for label in rlabels:
    label.set_color('#E6E6FA')
cbar = plt.colorbar(cplot, pad=0.1, ticks=[0, 3, 6, 9, 12, 15])
cbar.set_label(r'$V_{Z}$ [m/s]')
plt.show()
This generates the following plot:
Velocity plot with 15 levels:
This looks great (and accurate), except for that random straight orange line roughly between 90° and 180°. I know it is not real data because I plotted the same data in MATLAB and the line did not appear there. Furthermore, I have realized it appears to be related to the number of contour levels I use. For example, if I bump this code up to 30 levels instead of 15, the result changes significantly, with odd triangular regions of uniform value:
Velocity plot with 30 levels:
Does anyone know what might be going on here? How can I get contourf to just plot my data without these strange misrepresentations? I would like to use 15 contour levels at least. Thank you.
I have the equation: z(x,y)=1+x^(2/3)y^(-3/4)
I would like to calculate values of z for x=[0,100] and y=[10^1,10^4]. I will do this for 100 points in each axis direction. My grid, then, will be 100x100 points. In the x-direction I want the points spaced linearly. In the y-direction I want the points spaced logarithmically.
Were I to need these values I could easily go through the following:
import numpy as np

x = np.linspace(0, 100, 100)
y = np.logspace(1, 4, 100)
z = np.zeros((len(x), len(y)))
for i in range(len(x)):
    for j in range(len(y)):
        z[i, j] = 1 + x[i]**(2/3) * y[j]**(-3/4)
The problem for me comes with visualizing these results. I know that I would need to create a grid of points. I feel my options are to create a meshgrid with the values and then use pcolor.
My issue here is that the values at the center of the blocks do not coincide with the calculated values. In the x-direction I could fix this by shifting the x-vector by half of dx (the step between successive values). I'm not so sure how I would do this for the y-axis. Furthermore, if I wanted to compute values for each of the y-direction values, including the end points, they would not all show up.
In the final visualization I would like to have the y-axis as a log scale and the x-axis as a linear scale. I would also like the tick marks to fall in the center of the cells, correlating with the correct value. Can someone point me to the correct plotting functions for this? I have to resolve the issue using pcolor or pcolormesh.
Should you require more details, please let me know.
In current matplotlib, you can use pcolormesh with shading='nearest', and it will center the blocks on the values:
import numpy as np
import matplotlib.pyplot as plt

# x, y, z as computed in the question
y_plot = np.log10(y)
z[5, 5] = 0  # to make the centering more evident
plt.pcolormesh(x, y_plot, z, shading="nearest")
plt.colorbar()
ax = plt.gca()
ax.set_xticks(x)
ax.set_yticks(y_plot)
plt.axvline(x[5])
plt.axhline(y_plot[5])
Output:
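If the y tick labels should show the original y values rather than log10(y), one small addition (an assumption on top of the answer above, labelling every tenth tick to keep the axis readable) is to relabel the ticks:
ax.set_yticks(y_plot[::10])
ax.set_yticklabels([f"{val:.3g}" for val in y[::10]])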
I am trying to plot global storm tracks, but when the storms cross the dateline (and longitudes go from ~360 to ~0), the line loops all the way around the plotting space.
Here's what the plot looks like. See the weird straight lines near the top.
Here's my code:
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
ax.coastlines()
for i in range(nstorms - 1):  # loop through each TC
    bidx = start_idx[i]
    eidx = start_idx[i + 1]
    plt.plot(clons[bidx:eidx], clats[bidx:eidx], transform=ccrs.PlateCarree())
If I try changing the transform to Geodetic, it looks like this:
To plot polylines that cross the dateline, you need to sanitize the longitudes properly. For example, a run of values going from 359 to 2 should be adjusted to go from 359 to 362. In the demo code below, sanitize_lonlist() is used to sanitize a list of longitude values before using it to plot a red zigzag line.
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

def sanitize_lonlist(lons):
    """Add 360 to a longitude whenever the sequence jumps back across the dateline."""
    new_list = []
    oldval = 0
    threshold = 10  # used to compare adjacent longitudes
    for ix, ea in enumerate(lons):
        diff = oldval - ea
        if ix > 0:
            if diff > threshold:
                ea = ea + 360
        oldval = ea
        new_list.append(ea)
    return new_list

ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
ax.coastlines(alpha=0.3)

# sample lon/lat data for demo purposes
# xdateline: list of longitudes that cross the dateline several times
xdateline = [347, 349, 352, 358, 4, 7, 8, 3, 359, 358, 360, 3, 5, 359, 1, 357, 0, 8, 12, 6, 357, 349]
# ydateline: list of accompanying latitudes
ydateline = range(len(xdateline))

# plot the line crossing the dateline using the sanitized longitude values
plt.plot(sanitize_lonlist(xdateline), ydateline, transform=ccrs.PlateCarree(), color='red')
plt.show()
Using the raw values of xdateline to plot with this line of code:
plt.plot(xdateline, ydateline, transform=ccrs.PlateCarree(), color='red')
the plot will be:
As per this github issue, this is expected behaviour since PlateCarree is a projected coordinate system.
The PlateCarree coordinate system is Cartesian where a line between two points is straight (in that coordinate system). The Cartesian system has no knowledge of datelines/antimeridians and so when you ask for a line between -170 and +170 you get a line of length 340. It can never be the case that the PlateCarree projection interprets these numbers and chooses to draw a non-cartesian line
One solution is to use the Geodetic transform in your plot calls:
plt.plot(clons[bidx:eidx], clats[bidx:eidx], transform=ccrs.Geodetic())
Or modify your data to make more sense when using the PlateCarree system, e.g. by identifying where values loop from 360 -> 0 and adding 360 to all values after that occurs. You could shift them onto a different range (e.g. -180..180), but you'll have the same issue with data crossing +/-180 as you do with 0/360 currently.
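As a vectorized variant of that second option (a sketch, assuming clons is a NumPy array and NumPy >= 1.21, which added the period argument), np.unwrap can remove the 360 -> 0 jumps inside the plotting loop from the question:
import numpy as np
lons_fixed = np.unwrap(clons[bidx:eidx], period=360)  # adds 360 at each dateline crossing
plt.plot(lons_fixed, clats[bidx:eidx], transform=ccrs.PlateCarree())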
Suppose that I have a function which takes in 2 real numbers x, y as input and outputs 2 real numbers w, z, i.e. myfunc(x,y) = w, z; so if I had a list of (x, y) points, I would also have a list of (w, z) points. I want to be able to visualize this on a plot. One way I know is to regard (w, z) as a point in 2D space, calculate the angle theta and intensity r (convert to polar coordinates), and use a scatter plot where the angle theta is represented by hue and the intensity r by lightness. The following is pseudocode in Python:
w,z = myfunc(x,y)
theta, r = cartesian2polar(w,z)
cmap = matplotlib.cm.hsv
my_cmap = convert cmap so that theta corresponds to hue and r to lightness
plt.scatter(x,y,c=my_cmap)
The problem with this is that the scatter plot is relatively slow when I have many data points. Is there any other way to implement this that is much faster? Maybe by using imshow, since my x, y points are actually obtained from a meshgrid.
EDIT:
I found this post, which does exactly what I need.
The bottleneck is computing the cmap.
Could you generate the cmap once and for all? Perhaps you could lower the resolution of the cmap and, instead of having a continuous cmap, have a discrete one.
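Since the x, y points come from a meshgrid, one way to avoid per-point scatter colours entirely (a sketch; myfunc is the question's placeholder, and mapping r to the HSV value channel is an assumption about how the intensity should drive lightness) is to build an RGB image from the angle and intensity and show it with imshow:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

w, z = myfunc(x, y)              # x, y from np.meshgrid, shapes (ny, nx)
theta = np.arctan2(z, w)         # angle of (w, z)
r = np.hypot(w, z)               # intensity of (w, z)

hsv = np.zeros(theta.shape + (3,))
hsv[..., 0] = (theta % (2 * np.pi)) / (2 * np.pi)  # hue from angle, in [0, 1)
hsv[..., 1] = 1.0                                  # full saturation
hsv[..., 2] = r / r.max()                          # brightness from intensity

plt.imshow(hsv_to_rgb(hsv), origin='lower',
           extent=[x.min(), x.max(), y.min(), y.max()], aspect='auto')
plt.show()
imshow draws the whole grid as a single image, so it stays fast even when the meshgrid has millions of points.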