I'm working on a long script that produces the following graph.
The problem is that the error bars change their length because of the logarithmic scale, and I would like all of them to appear equal to each other, that is, to have a fixed length. Is there any way to do this easily with Python?
Edit: Here is part of the code, where I generate the error bars.
faztec=[11.4,11.4,7.1,7.7,6.8,6.9,6.2,7.5,5.5,4.0,3.5,3.2,2.9,2.9]
flaboca=[9.8,7.3,6.8,8.2,6.8,8.1,6.7,11.0,10.6,4.2,7.0,7.1,5.0,5.3]
err1 = [1.5,1.5,1.7,1.8,1.4,1.5,1.7,3.0,2.7,1.4,1.9,1.9,1.4,1.8]
err2 = [0.7,0.7,0.7,0.8,0.7,0.7,0.7,0.9,0.7,0.7,0.7,0.7,0.7,0.7]
newErr1x = []
newErr1y = []
for i in range(0, len(y)):
    x1 = x[i]
    y1 = (flaboca[i]-err1[i])/(faztec[i]+err2[i])
    x2 = x[i]
    y2 = (flaboca[i]+err1[i])/(faztec[i]-err2[i])
    pl.plot([x1, x2], [y1, y2])
    correction = False
    # when end of segment is near 0, we must change it (because of logx)
    if x2 < endBarLen:
        x2 = 0.03
        endBarLen = endBarLen / 10
        correction = True
    pl.plot([x1+endBarLen, x2-endBarLen], [y1, y1], '-k')
    pl.plot([x1+endBarLen, x2-endBarLen], [y2, y2], '-k')
    if correction:
        correction = False
        endBarLen = endBarLen*10
pl.show()
You are likely passing in an array as your error; if you want every error bar to have the same length, pass in the same value for each one instead. It also appears that your y axis is not log-scaled - if you wish to transform your values between log space and linear space, you could take the exponent or the log of your error data to adjust it accordingly. I would be able to provide a more specific answer given more specific information.
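For illustration, here is a minimal sketch (not the asker's code, data made up) of one way to get visually equal bars on a log-scaled axis: make each error span a constant factor around its point, so every bar has the same length in log space.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data; on a log-scaled y axis, bars look equal in length only if
# each one spans a constant *factor* around its point, not a constant offset.
x = np.arange(4)
y = np.array([0.1, 1.0, 10.0, 100.0])
factor = 1.5  # each bar spans y/factor .. y*factor, a fixed length in log space

yerr = np.vstack([y - y / factor, y * factor - y])  # asymmetric (lower, upper) errors

plt.errorbar(x, y, yerr=yerr, fmt='o', capsize=4)
plt.yscale('log')
plt.show()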
I am very new to Python, and I was trying to create a graph with matplotlib. I could create the first line with no problem: I defined two lists of x and y values with the same length. For the second line, I want to plot the maximum y value of the first line for each x. My question is, do I have to explicitly write max(y1) once for every x value, or is there a way to automate the process?
x1 = [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
y1 = [36.37, 36.18, 36.31, 36.24, 36.31, 36.37, 36.18, 36.43, 36.12, 36.24, 36.06, 36.31, 36.12, 36.49, 36.74, 36.74]
plt.plot(x1, y1, color='black', marker='o', markerfacecolor='black', markersize='10', label='line1')
# I could just write x1 but this was a little easier for me to structure my code
x2 = x1
# Do I have to write [max(y1), max(y1), max(y1) ..., max(y1)] here or can I make a list with fewer items to represent one line (in this case y=36.74)
y2 = [max(y1)]
plt.plot(x2, y2, color='#30C3BF', linestyle='dashed', label='line2')
You can multiply a list by an integer to repeat its contents:
y2 = [max(y1)] * len(y1)
This results in a list of the same length as y1 with every value set to the maximum of y1.
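As a side note, a hedged alternative sketch: matplotlib can draw a horizontal line directly, without building a repeated list (the line then spans the whole axes width rather than just the range of x1).

import matplotlib.pyplot as plt

# draw a horizontal line at the maximum of y1 across the whole axes
plt.axhline(max(y1), color='#30C3BF', linestyle='dashed', label='line2')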
I have created a simple visualizing function in python using pyplot. It takes a dataframe, an upper and lower limit and the start/end points to visualize. Here is the full code:
def visualize(DATASET, DATASET_LIMITS, DATASET_START, DATASET_END):
    # DATASET = df_I
    # DATASET_LIMITS = df_I_limits
    # DATASET_START = 0
    # DATASET_END = len(df_I)
    plt.figure(figsize=(20,10))
    values = []
    values_above = []
    for data in DATASET['Temp'].iloc[DATASET_START:DATASET_END]:
        if data < DATASET_LIMITS[0] or data > DATASET_LIMITS[1]:
            values.append(math.nan)
            values_above.append(data)
        else:
            values.append(data)
            values_above.append(math.nan)
    plt.plot(range(0, DATASET_END - DATASET_START), values, 'b-')
    plt.plot(range(0, DATASET_END - DATASET_START), values_above, 'r-')
    plt.hlines(DATASET_LIMITS[0], 0, DATASET_END - DATASET_START, colors='g', linestyles='dashed')
    plt.hlines(DATASET_LIMITS[1], 0, DATASET_END - DATASET_START, colors='g', linestyles='dashed')
    plt.show()
Here is what a generated graph looks like:
You can already see some gaps where the dotted green limit line bisects the graph, but here's a zoomed-in version to show the problem more clearly. Here is just the largest spike in the data, in the ~205000 range:
You can clearly see that the red and blue segments of the graph are not connected. I believe this is likely due to the method that I use to visualize the data, that being two arrays - values and values_above. Is there a better way of accomplishing this graphing behaviour? Or perhaps a way to get connected lines using this approach?
Since you didn't provide a dataset, I'm going to post a simple function I usually use on these occasions.
You take the x and y coordinates from your dataframe and pass them to this function: modify_coords(x, y, y_lim), where y_lim is the y coordinate of your horizontal line. This function will insert new points where the intersection happens. Then, you can proceed with your usual code.
def modify_coords(x, y, y_lim):
    """If a line segment defined by `(x1, y1) -> (x2, y2)` intercepts
    a limiting y-value, divide this segment by inserting a new point
    such that y_newpoint = y_lim.
    """
    xv, yv = [x[0]], [y[0]]
    for i in range(len(x) - 1):
        xc, xn = x[i:i+2]
        yc, yn = y[i:i+2]
        if ((yc < y_lim) and (yn > y_lim)) or ((yc > y_lim) and (yn < y_lim)):
            xv.append(((y_lim - yc) / ((yn - yc) / (xn - xc))) + xc)
            yv.append(y_lim)
        xv.append(xn)
        yv.append(yn)
    return np.array(xv), np.array(yv)
Note:
It uses linear interpolation to add the new points.
You can see this function in action at this question/answer, where I used a much smoother function as a test. You should give it a try on your data and adapt it as needed to achieve your goal.
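For completeness, a hedged usage sketch under the assumption that x and y are 1-D arrays taken from the dataframe column and lower and upper are the two limit values (these names are chosen here for illustration):

import numpy as np
import matplotlib.pyplot as plt

# Insert crossing points for both limits, then route each sample to an
# "inside" or "outside" series. The inserted points lie exactly on a limit,
# so they appear in both series and the blue and red lines stay connected.
xm, ym = modify_coords(x, y, lower)
xm, ym = modify_coords(xm, ym, upper)

inside = (ym >= lower) & (ym <= upper)
on_limit = np.isclose(ym, lower) | np.isclose(ym, upper)

plt.plot(xm, np.where(inside, ym, np.nan), 'b-')
plt.plot(xm, np.where(~inside | on_limit, ym, np.nan), 'r-')
plt.hlines([lower, upper], xm[0], xm[-1], colors='g', linestyles='dashed')
plt.show()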
I have two points and would like to calculate the angle of the line crossing these points in degrees.
I calculated the angle like so:
import numpy as np
p1 = [0, 0.004583285714285714]
p2 = [1, 0.004588714285714285]
x1 = p1[0]
y1 = p1[1]
x2 = p2[0]
y2 = p2[1]
angle = np.rad2deg(np.arctan2(y1 - y2, x2 - x1))
print(angle)
As expected, the angle is a very small negative number (a small downward slope in relation to the X axis):
-0.00031103423163937605
If I plot this, you will see what I mean:
plt.ylim([0,1]) # making y axis range the same as X (a full unit)
plt.plot([x1, x2], [y1, y2])
Clearly the angle of that line is a very small number because the Y values are so small.
I know the lowest y number in this plot is 0.00458 and the highest is 0.00459.
I'm having trouble coming up with the way to scale this properly so that I can obtain this angle instead:
Which is closer to -35 degrees or so (visually).
How can I get the angle a person would see if the chart was plotted with the Y axis ranging only between those min and max values above?
Of course all plots are just for illustration - I'm trying to calculate just the raw angle number given two points and the min and max values for the Y axis.
Solved it; it turns out it was exceedingly simple, and I'm not sure why I was having trouble with it (or why folks seem not to understand the question ¯\_(ツ)_/¯).
The angle I was looking for can be obtained by
yRange = yMaxValue - yMinimumValue
scaledY1 = y1 / yRange
scaledY2 = y2 / yRange
angle = np.rad2deg(np.arctan2(scaledY1 - scaledY2, x2 - x1))
Which, for the values posted in the question, results in -28.495638618242538.
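Put together as a runnable check (assuming the y-axis limits 0.00458 and 0.00459 mentioned in the question):

import numpy as np

p1 = [0, 0.004583285714285714]
p2 = [1, 0.004588714285714285]

y_min, y_max = 0.00458, 0.00459   # assumed axis limits from the question
y_range = y_max - y_min

scaled_y1 = p1[1] / y_range
scaled_y2 = p2[1] / y_range
angle = np.rad2deg(np.arctan2(scaled_y1 - scaled_y2, p2[0] - p1[0]))
print(angle)  # about -28.4956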
The issue I am having is with the code below, which computes the 1-norm of my error. Firstly, when I plot the error against the step size h, the error values are quite small, in the range of 10^-14 to 10^-16. Secondly, underneath you can see my attempt to apply np.polyfit to the graph; when run it outputs values but does not produce a convincing fit. The value of p[0] is not perfect, though it is "close" to the desired output of 3, so I believe something is wrong. Is this a matter of just the wrong input or bad data?
def rk3(A,bvector,y0,interval,N):
    x0=interval[0]
    x_end=interval[1]
    x=np.linspace(x0,x_end,N+1)
    h=(x_end-x0)/N
    y=np.zeros((N+1,len(y0)))
    y[0, :] = y0
    for n in range(N):
        y_1=y[n,:]+h*(np.dot(A,y[n,:])+bvector(x[n]))
        y_2=(3/4)*y[n,:]+(1/4)*y_1+(1/4)*h*(np.dot(A,y_1)+bvector(x[n]+h))
        y[n+1,:]=(1/3)*y[n,:]+(2/3)*y_2+(2/3)*h*(np.dot(A,y_2)+bvector(x[n]+(1/2)*h))
    return x,y
err_vals = []
h_vals = []
for k in range(2,11): #for the range of N=40k, where k=1,...,10
    N=40*k
    x, y = rk3(A,bvector,y0,[0,0.1],N)
    yc = y[-1,:]
    h = (x[-1]-x[0])/N
    h_vals.append(h)
    yvals.append(yc)
    yn = y[:,1]
    abs_err = np.zeros(N)
    print("The value of y at k=",k," is ",yc)
    for j in range(1,N):
        y_exact=np.array([np.exp(-1000*x[j]), (1000/999)*(np.exp(-x[j])-np.exp(-1000*x[j]))])
        y_exact_2 = y_exact[1]
        abs_err[j] = np.abs((y[j, 1] - y_exact_2)/y_exact_2)
    Error = h*np.sum(abs_err[j])
    err_vals.append(Error)

p = np.polyfit(np.log(h_vals), np.log(err_vals), 1)
pyplot.loglog(h_vals,err_vals,"kx")
pyplot.xlabel("h")
pyplot.ylabel("Error")
pyplot.loglog(h,np.exp(p[1])*h**(p[0]), 'r--')
print("Best fit line slope ",format(p[0]))
My evolution of your code below gives a completely straight line with slope close to 3 for the integration over the interval [0,0.01].
For the given interval [0,0.1] the slope value is about 1/3 larger. The error profiles, that is, the absolute error divided by the expected global error power of the step size, give a converging pattern, confirming the convergence of order 3 of the method.
The error bound 2e7*h^3 is rather large, showing why the combination of problem and method can become very problematic for larger step sizes.
The error is computed via the L1 norms of the function difference and exact solution,
Error = sum(abs((y-y_exact(x))[:,1]))/sum(abs(y[:,1]))
giving a mathematically sound quantity. The summation of the local relative errors can lead to distortions of the total error where the exact solution has a root or small values. But still, even using your computation method of integrating the local relative error leaving out the first data point which is zero,
Error = sum(abs((y[1:,1]/y_exact(x)[1:,1]-1)))*h
gives a similar linear plot, with the range shifted down to 1e-7..1e-9 and the slope staying at 3.0293.
Note that if you want to use the list h_vals in a computation like the one to plot the fitted line, you have to convert it into a numpy array first.
h=np.asarray(h_vals)
Complete code:
import numpy as np
import matplotlib.pyplot as plt

def rk3(A,bvector,y0,interval,N):
    r"""Solves an IVP y'=f(x, y(x)) on x \in [0, x_end] with y(0) = y0 using N points, using the Runge-Kutta method."""
    x=np.linspace(*interval,N+1)
    h=x[1]-x[0]
    y=np.zeros((N+1,len(y0)))
    y[0, :] = y0
    for n in range(N):
        y_1=y[n]+h*(np.dot(A,y[n])+bvector(x[n]))
        y_2=(3/4)*y[n,:]+(1/4)*y_1+(1/4)*h*(np.dot(A,y_1)+bvector(x[n]+h))
        y[n+1]=(1/3)*y[n]+(2/3)*y_2+(2/3)*h*(np.dot(A,y_2)+bvector(x[n]+0.5*h))
    return x,y

A = np.array([[-1000.0,0.0],[1000.0,-1.0]]);
bvector = lambda x: 0
y_exact = lambda x: np.array([np.exp(-1000*x), (1000/999)*(np.exp(-x)-np.exp(-1000*x))]).T
y0 = y_exact(0)

plt.figure(figsize=(6,3));
h_vals, y_vals, err_vals = [],[],[]
for k in range(2,11): #for the range of N=40k, where k=1,...,10
    N=40*k
    x, y = rk3(A,bvector,y0,[0,0.01],N)
    yc = y[-1,:]
    h = x[1]-x[0];
    plt.plot(x,(y-y_exact(x))[:,1]/h**3)
    h_vals.append(h)
    y_vals.append(yc)
    yn = y[:,1]
    print("The value of y at k=",k," is ",yc)
    Error = sum(abs((y-y_exact(x))[:,1]))/sum(abs(y[:,1]))
    err_vals.append(Error)
plt.grid(); plt.show()

p = np.polyfit(np.log(h_vals), np.log(err_vals), 1)
plt.figure(figsize=(6,4))
plt.loglog(h_vals,err_vals,"kx")
h=np.asarray(h_vals)
plt.plot(h,np.exp(p[1])*h**(p[0]), '--r', lw=0.5)
plt.xlabel("h")
plt.ylabel("Error")
plt.grid(); plt.show()
print("Best fit line slope ",format(p[0]))
Let's say you have two arrays of data values from a calculation, each of which you can model with a continuous, differentiable function. Both "lines" of data points intersect at (at least) one point, and now the question is whether the functions behind these datasets are actually crossing or anticrossing.
The image below shows the situation, where I know (from the physics behind it) that at the upper two "contact points" the yellow and green lines actually should "switch color", whereas at the lower one both functions go out of each other's way:
To give an easier "toy set" of data, take this code for example:
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(-10,10,.5)
y1=[np.absolute(i**3)+100*np.absolute(i) for i in x]
y2=[-np.absolute(i**3)-100*np.absolute(i) for i in x][::-1]
plt.scatter(x,y1)
plt.scatter(x,y2,color='r')
plt.show()
Which should produce the following image:
Now how could I extrapolate whether the trend behind the data is crossing (so the data from the lower left continues to the upper right) or anti-crossing (as indicated with the colors above, the data from the lower left continues to the lower right)?
So far I was able to find the "contact point" between these two datasets by looking at the derivative of the difference between them, roughly like this:
closePoints=np.where(np.diff(np.diff(array_A - array_B) > 0))[0] + 1
(which probably would be faster to evaluate with something like scipy's cKDTree).
Should I go on and (probably very inefficiently) check the derivative on both sides of the intersection? Or can I somehow check if the extrapolation of the data on the left side fits better to crossing or anticrossing?
I understood your problem as:
You have two sequences of points in a 2D plane.
The true curves can be approximated by straight lines between consecutive points of the sequences.
You want to know how often and where the two curves intersect (not only come into contact but really cross each other) (polygon intersection).
A potential solution is:
You look at each combination of a line segment of one curve with a line segment of another curve.
Combinations where the bounding boxes of the line segments have an overlap can potentially contain intersection points.
You solve a linear equation system to compute if and where an intersection between two lines occurs.
In case of no solution to the equation system, the lines are parallel but not overlapping; dismiss this case.
In case of one solution, check that it truly lies within the segments; if so, record this crossing point.
In case of infinitely many intersections the lines are identical. This is also no real crossing and can be dismissed.
Do this for all combinations of line segments and eliminate twin cases, i.e. where the two curves intersect at a segment start or end.
Let me give some details:
How to check if two bounding-boxes (rectangles) of the segments overlap so that the segments potentially can intersect?
The minimal x/y value of one rectangle must be smaller than the maximal x/y value of the other. This must hold for both.
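A scalar sketch of that test for two axis-aligned boxes (the full code below does the same check in vectorized form):

def boxes_overlap(a_xmin, a_xmax, a_ymin, a_ymax,
                  b_xmin, b_xmax, b_ymin, b_ymax):
    # each box's minimum must not exceed the other box's maximum, in x and in y
    return (a_xmin <= b_xmax and b_xmin <= a_xmax and
            a_ymin <= b_ymax and b_ymin <= a_ymax)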
If you have two segments how do you solve for intersection points?
Let's say segment A has two points (x1, y1) and (x2, y2) and segment B has two points (x3, y3) and (x4, y4).
Then you simply have two parametrized line equations which have to be set equal:
(x1, y1) + t * (x2 - x1, y2 - y1) = (x3, y3) + q * (x4 - x3, y4 - y3)
And you need to find all solutions where both t and q are in [0, 1). The corresponding linear equation system may be rank deficient or not solvable at all; it is best to use a general solver (I chose numpy.linalg.lstsq) that does everything in one go.
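A hedged single-pair sketch of that solve, with made-up segment coordinates:

import numpy as np

# segment A: (x1, y1) -> (x2, y2), segment B: (x3, y3) -> (x4, y4)
x1, y1, x2, y2 = 0.0, 0.0, 1.0, 1.0
x3, y3, x4, y4 = 0.0, 1.0, 1.0, 0.0

# rearranged into a 2x2 system  a @ (t, q) = b
a = np.array([[x2 - x1, x3 - x4],
              [y2 - y1, y3 - y4]])
b = np.array([x3 - x1, y3 - y1])
(t, q), residuals, rank, _ = np.linalg.lstsq(a, b, rcond=None)

if rank == 2 and 0 <= t < 1 and 0 <= q < 1:
    # intersection point lies within both segments
    print((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))  # (0.5, 0.5)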
Curves sharing a common point
Surprisingly difficult are cases where one point is common in the segmentation of both curves. The difficulty then lies in correctly deciding between a real intersection and a mere contact point. The solution is to compute the angles of both adjacent segments of both curves (giving 4 angles) around the common point and look at the order of the angles. If the two curves alternate when going around the common point, then it's an intersection; otherwise it isn't.
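A small isolated sketch of that angle-ordering test (a simplified illustration, not the exact code used below):

import math

def is_crossing(p, a_prev, a_next, b_prev, b_next):
    # p is the common point; a_prev/a_next are the neighbouring points of curve A,
    # b_prev/b_next those of curve B. The four rays leaving p alternate (A, B, A, B)
    # iff exactly one of the B rays falls inside the angular sector spanned by the A rays.
    ang = lambda q: math.atan2(q[1] - p[1], q[0] - p[0])
    a1, a2 = sorted((ang(a_prev), ang(a_next)))
    in_sector = lambda t: a1 < t < a2
    return in_sector(ang(b_prev)) != in_sector(ang(b_next))

# A is the x-axis, B is the y-axis: a real crossing at the origin
print(is_crossing((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)))   # True
# B touches A at the origin and turns back: just a contact point
print(is_crossing((0, 0), (-1, 0), (1, 0), (-1, 1), (1, 1)))   # False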
And a code example based on your data:
import math
import matplotlib.pyplot as plt
import numpy as np
def intersect_curves(x1, y1, x2, y2):
    """
    x1, y1 data vector for curve 1
    x2, y2 data vector for curve 2
    """
    # number of points in each curve, number of segments is one less, need at least one segment in each curve
    N1 = x1.shape[0]
    N2 = x2.shape[0]

    # get segment presentation (xi, xi+1; xi+1, xi+2; ..)
    xs1 = np.vstack((x1[:-1], x1[1:]))
    ys1 = np.vstack((y1[:-1], y1[1:]))
    xs2 = np.vstack((x2[:-1], x2[1:]))
    ys2 = np.vstack((y2[:-1], y2[1:]))

    # test if bounding-boxes of segments overlap
    mix1 = np.tile(np.amin(xs1, axis=0), (N2-1,1))
    max1 = np.tile(np.amax(xs1, axis=0), (N2-1,1))
    miy1 = np.tile(np.amin(ys1, axis=0), (N2-1,1))
    may1 = np.tile(np.amax(ys1, axis=0), (N2-1,1))
    mix2 = np.transpose(np.tile(np.amin(xs2, axis=0), (N1-1,1)))
    max2 = np.transpose(np.tile(np.amax(xs2, axis=0), (N1-1,1)))
    miy2 = np.transpose(np.tile(np.amin(ys2, axis=0), (N1-1,1)))
    may2 = np.transpose(np.tile(np.amax(ys2, axis=0), (N1-1,1)))
    idx = np.where((mix2 <= max1) & (max2 >= mix1) & (miy2 <= may1) & (may2 >= miy1)) # overlapping segment combinations

    # going through all the possible segment pairs
    x0 = []
    y0 = []
    for (i, j) in zip(idx[0], idx[1]):
        # get segment coordinates
        xa = xs1[:, j]
        ya = ys1[:, j]
        xb = xs2[:, i]
        yb = ys2[:, i]
        # ax=b, prepare matrices a and b
        a = np.array([[xa[1] - xa[0], xb[0] - xb[1]], [ya[1] - ya[0], yb[0]- yb[1]]])
        b = np.array([xb[0] - xa[0], yb[0] - ya[0]])
        r, residuals, rank, s = np.linalg.lstsq(a, b)
        # accept only a unique solution that lies within both segments
        if rank == 2 and not residuals and r[0] >= 0 and r[0] < 1 and r[1] >= 0 and r[1] < 1:
            if r[0] == 0 and r[1] == 0 and i > 0 and j > 0:
                # super special case of one segment point (not the first) in common, need to differentiate between crossing or contact
                angle_a1 = math.atan2(ya[1] - ya[0], xa[1] - xa[0])
                angle_b1 = math.atan2(yb[1] - yb[0], xb[1] - xb[0])
                # get previous segment
                xa2 = xs1[:, j-1]
                ya2 = ys1[:, j-1]
                xb2 = xs2[:, i-1]
                yb2 = ys2[:, i-1]
                angle_a2 = math.atan2(ya2[0] - ya2[1], xa2[0] - xa2[1])
                angle_b2 = math.atan2(yb2[0] - yb2[1], xb2[0] - xb2[1])
                # determine in which order the 4 angles are
                if angle_a2 < angle_a1:
                    h = angle_a1
                    angle_a1 = angle_a2
                    angle_a2 = h
                if (angle_b1 > angle_a1 and angle_b1 < angle_a2 and (angle_b2 < angle_a1 or angle_b2 > angle_a2)) or \
                   ((angle_b1 < angle_a1 or angle_b1 > angle_a2) and angle_b2 > angle_a1 and angle_b2 < angle_a2):
                    # the four rays alternate around the common point: a real crossing
                    # (if both B rays fall on the same side, it is just a contact point and is skipped)
                    x0.append(xa[0])
                    y0.append(ya[0])
            else:
                x0.append(xa[0] + r[0] * (xa[1] - xa[0]))
                y0.append(ya[0] + r[0] * (ya[1] - ya[0]))
    return (x0, y0)
# create data
def data_A():
    # data from question (does not intersect)
    x1 = np.arange(-10, 10, .5)
    x2 = x1
    y1 = [np.absolute(x**3)+100*np.absolute(x) for x in x1]
    y2 = [-np.absolute(x**3)-100*np.absolute(x) for x in x2][::-1]
    return (x1, y1, x2, y2)

def data_B():
    # sine, cosine, should have some intersection points
    x1 = np.arange(-10, 10, .5)
    x2 = x1
    y1 = np.sin(x1)
    y2 = np.cos(x2)
    return (x1, y1, x2, y2)

def data_C():
    # a spiral and a diagonal line, showing the more general case
    t = np.arange(0, 10, .2)
    x1 = np.sin(t * 2) * t
    y1 = np.cos(t * 2) * t
    x2 = np.arange(-10, 10, .5)
    y2 = x2
    return (x1, y1, x2, y2)

def data_D():
    # parallel and overlapping, should give no intersection point
    x1 = np.array([0, 1])
    y1 = np.array([0, 0])
    x2 = np.array([-1, 3])
    y2 = np.array([0, 0])
    return (x1, y1, x2, y2)

def data_E():
    # crossing at a segment point, should give exactly one intersection point
    x1 = np.array([-1,0,1])
    y1 = np.array([0,0,0])
    x2 = np.array([0,0,0])
    y2 = np.array([-1,0,1])
    return (x1, y1, x2, y2)

def data_F():
    # contacting at one segment point, should give no intersection point
    x1 = np.array([-1,0,-1])
    y1 = np.array([-1,0,1])
    x2 = np.array([1,0,1])
    y2 = np.array([-1,0,1])
    return (x1, y1, x2, y2)
x1, y1, x2, y2 = data_F() # select the data you like here
# show example data
plt.plot(x1, y1, 'b-o')
plt.plot(x2, y2, 'r-o')
# call to intersection computation
x0, y0 = intersect_curves(x1, y1, x2, y2)
print('{} intersection points'.format(len(x0)))
# display intersection points in green
plt.plot(x0, y0, 'go')
plt.show() # zoom in to see that the algorithm is correct
I tested it extensively and it should get most (all) border cases right (see data_A-F in the code). Some examples:
Some Comments:
The assumption about the line approximation is crucial. Most true curves can only be approximated locally by straight lines to some extent. Because of this, at places where the two curves come close but do not intersect, with a distance on the order of the spacing between consecutive sampling points of your curves, you may obtain false positives or false negatives. The solution is then either to use more points or to use additional knowledge about the true curves. Splines might give a lower error rate but also require more computation; better sampling of the curves would be preferable then.
Self-intersection is trivially included by taking the same curve twice and letting them intersect.
This solution has the additional advantage that it isn't restricted to curves of the form y=f(x) but it's applicable to arbitrary curves in 2D.
You could use a spline interpolation for the difference function g(x) = y1(x) - y2(x). Finding the minimum of the square g(x)**2 gives a contact or crossing point. Looking at the first and second derivatives, you can decide whether it is a contact point (g(x) has a minimum, g'(x) == 0, g''(x) != 0) or a crossing point (g(x) is a stationary point, g'(x) == 0, g''(x) == 0).
The following code searches for a minimum of g(x)**2 in a constrained interval and then plots the derivatives. The constrained interval is used to find multiple points successively by excluding intervals in which previous points were found.
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as sopt
import scipy.interpolate as sip
# test functions:
nocrossingTest = True
if nocrossingTest:
    f1 = lambda x: +np.absolute(x**3)+100*np.absolute(x)
    f2 = lambda x: -np.absolute(x**3)-100*np.absolute(x)
else:
    f1 = lambda x: +np.absolute(x**3)+100*x
    f2 = lambda x: -np.absolute(x**3)-100*x
xp = np.arange(-10,10,.5)
y1p, y2p = f1(xp), f2(xp) # test array
# Do Interpolation of y1-y2 to find crossing point:
g12 = sip.InterpolatedUnivariateSpline(xp, y1p - y2p) # Spline Interpolator of Difference
dg12 = g12.derivative() # spline derivative
ddg12 = dg12.derivative() # spline derivative
# Bounded least square fit to find minimal distance
gg = lambda x: g12(x)*g12(x)
rr = sopt.minimize_scalar(gg, bounds=[-1,1]) # search minimum in the interval [-1,1]
x_c = rr['x'] # x value with minimum distance
print("Crossing point is at x = {} (Distance: {})".format(x_c, g12(x_c)))
fg = plt.figure(1)
fg.clf()
fg,ax = plt.subplots(1, 1,num=1)
ax.set_title("Function Values $y$")
ax.plot(xp, np.vstack([y1p,y2p]).T, 'x',)
xx = np.linspace(xp[0], xp[-1], 1000)
ax.plot(xx, np.vstack([f1(xx), f2(xx)]).T, '-', alpha=0.5)
ax.grid(True)
ax.legend(loc="best")
fg.canvas.draw()
fg = plt.figure(2)
fg.clf()
fg,axx = plt.subplots(3, 1,num=2)
axx[0].set_title("$g(x) = y_1(x) - y_2(x)$")
axx[1].set_title("$dg(x)/dx$")
axx[2].set_title("$d^2g(x)/dx^2$")
for ax,g in zip(axx, [g12, dg12, ddg12]):
    ax.plot(xx, g(xx))
    ax.plot(x_c, g(x_c), 'ro', alpha=.5)
    ax.grid(True)
fg.tight_layout()
plt.show()
The difference function shows that the difference is not smooth:
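As a follow-up to the criteria above, a hedged classification sketch reusing g12, dg12, ddg12 and x_c from the code (the tolerance is an arbitrary illustrative choice, not part of the original answer):

tol = 1e-6
if abs(dg12(x_c)) < tol and abs(ddg12(x_c)) < tol:
    print("g' and g'' vanish at x_c: looks like a crossing point")
elif abs(dg12(x_c)) < tol:
    print("g' vanishes but g'' does not: looks like a contact point")
else:
    print("no clear contact or crossing at x_c")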