I am trying to solve this system but I get an error.
I have to define y3d = 0 because y3' = 0 in the equation system, but when I do that the program can't solve it. If I write y3d = y[3] instead, the program runs.
The equation system that I have to solve is:
dy1/dx=y2
dy2/dx=-y3*y1
dy3/dx=0
dy4/dx=y1**2+y2**2
with boundary conditions y1(0) = y1(1) = 0, y4(0) = 0, y4(1) = 1. Can scipy handle this?
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
def eqts(x, y):
    y1d = y[1]
    y2d = -y[2]*y[0]
    y3d = 0
    y4d = y[0]**2 + y[1]**2
    return np.vstack((y1d, y2d, y3d, y4d))

def bc(ya, yb):
    return np.array([ya[0], yb[3], ya[0], yb[3]-1])

x = np.linspace(0, 1, 10)
y = np.zeros((4, x.size))
y[2, :] = 1
sol = solve_bvp(eqts, bc, x, y)
Unfortunately I get the following error message:
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 10 and the array at index 2 has size 1
Well, first of all, in your script the boundary conditions are over-determined. Nowhere is it said that y3(0) = 0 or y3(1) = 0, and in fact it is not: y3(t) is a constant, but that constant is not zero. If you impose the condition y3(t) = 0, things will not work at all. On top of that, this system looks non-linear (quadratic) but is actually a linear system, and you can solve it explicitly without Python. If I am not mistaken, the only way you can have a solution is when y3 > 0, which gives you
y1(t) = B * sin(k*pi*t)
y2(t) = k*pi*B * cos(k*pi*t)
y3(t) = k^2*pi^2
y4(t) = t + (k^2*pi^2 - 1) * B^2 * sin(2*k*pi*t) / (4*k*pi)
where B = sqrt( 2 / (k^2*pi^2 + 1) )
and k is an arbitrary non-zero integer
or at least something along those lines.
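A quick numerical sanity check of those formulas for k = 1 (scipy.integrate.quad is used here only for the check; it is not part of the original problem):
import numpy as np
from scipy.integrate import quad

k = 1
B = np.sqrt(2.0 / (k**2 * np.pi**2 + 1.0))
y1 = lambda t: B * np.sin(k * np.pi * t)
y2 = lambda t: k * np.pi * B * np.cos(k * np.pi * t)

# y1 vanishes at both endpoints, matching y1(0) = y1(1) = 0
print(y1(0.0), y1(1.0))
# integrating y4' = y1**2 + y2**2 from 0 to 1 should give y4(1) = 1
print(quad(lambda t: y1(t)**2 + y2(t)**2, 0.0, 1.0)[0])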
The main issue is here
y1d=y[1]
y2d=-y[2]*y[0]
y3d=0
y4d=y[0]**2+y[1]**2
In your implementation y1d, y2d, and y4d are all vectors, but y3d is a scalar!
You may use y3d = np.zeros_like(y1d).
solve_bvp requires the right-hand side to have the same shape as y; see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html?highlight=solve_bvp#scipy.integrate.solve_bvp
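For reference, a minimal sketch of the corrected right-hand side (everything else in the original script unchanged):
def eqts(x, y):
    y1d = y[1]
    y2d = -y[2]*y[0]
    y3d = np.zeros_like(y1d)   # a vector of zeros with the same shape as the other rows
    y4d = y[0]**2 + y[1]**2
    return np.vstack((y1d, y2d, y3d, y4d))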
scipy's solve_bvp offers the possibility of treating constant components as unknown parameters that are to be fitted. Thus one can define the system in dimension 3 as
def eqn(t, x, p): return x[1], -p[0]*x[0], x[0]**2 + x[1]**2
def bc(x0, x1, p): return x0[0], x1[0], x0[2], x1[2] - 1

t_init = np.linspace(0, 1, 6)
x_init = np.zeros([3, len(t_init)])
x_init[0, 1] = 1  # break the symmetry that gives a singular Jacobian
Another variant to get a general initial guess would be to fill it with random noise.
To get different solutions the initial conditions need to be different, one point of influence is the frequency square. Setting it close to the expected values gives indeed different solutions.
for w0 in np.arange(1, 10)*3:
    res = solve_bvp(eqn, bc, t_init, x_init, p=[w0**2])
    print(res.message, res.p[0])
This gives as output
The algorithm converged to the desired accuracy. 9.869869348810965
The algorithm converged to the desired accuracy. 39.47947747757308
The algorithm converged to the desired accuracy. 88.82805487260174
The algorithm converged to the desired accuracy. 88.8280548726001
The algorithm converged to the desired accuracy. 157.91618155379464
The algorithm converged to the desired accuracy. 157.91618155379678
The algorithm converged to the desired accuracy. 355.31352482772894
The algorithm converged to the desired accuracy. 355.3135248277307
The algorithm converged to the desired accuracy. 157.91618163528014
As one can see, at the higher frequencies the given initial frequency gets balanced against the lower-frequency behavior of the initial functions. This is a general problem: if not forced to stay orthogonal to the lower-frequency solutions, the solver tends towards the smoother, lower frequencies.
Adding plot commands plt.plot(res.x, res.y[0]) etc.
shows the expected sinusoidal solutions.
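Putting the fragments above together into one self-contained script (only the imports are added; the initial guess p=[9.0] corresponds to w0 = 3 and, per the output above, converges to the lowest mode with y3 close to pi**2):
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

def eqn(t, x, p): return x[1], -p[0]*x[0], x[0]**2 + x[1]**2
def bc(x0, x1, p): return x0[0], x1[0], x0[2], x1[2] - 1

t_init = np.linspace(0, 1, 6)
x_init = np.zeros([3, len(t_init)])
x_init[0, 1] = 1  # break the symmetry that gives a singular Jacobian

res = solve_bvp(eqn, bc, t_init, x_init, p=[9.0])
print(res.message, res.p[0])   # ~9.87 = pi**2 for this initial guess
plt.plot(res.x, res.y[0])
plt.show()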
Related
I'm trying to use numpy to find the roots of some polynomials, but I am getting some erroneous results:
>>> poly = np.polynomial.Polynomial([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, -7.99360578e-15, -1.89182003e-13])
What is up with the false root -1.64593692e+09 which results in -3.74803539e+23? This is clearly not a root.
Is this the result of floating-point errors, or something else?
And more importantly: is there a way to get around it? Perhaps something I can tweak, or a different function I can use? Any help is much appreciated.
I found this and this previous question which seemed to be related, but after reading them and the answers/comments I don't think that they are the same problem.
First of all, computing the roots of a polynomial is a classically ill-conditioned problem, meaning (roughly) that no matter what algorithm you use to solve it, small changes in the coefficients of many polynomials can lead to huge changes in their roots. That means we should be a little careful not to place an extraordinary amount of faith in root-finding results in general, and perhaps we shouldn't be too surprised when a root finder gives weird results. There's a pretty good example on Wikipedia, Wilkinson's polynomial, that shows how things can go wrong.
In this instance, the coefficients of the polynomial of interest are of such different magnitudes that it's not surprising that the results seem poor. But consider this: if our original polynomial is p() and it has a root x, then p(x) = 0, but also c*p(x) = 0 for any constant c. In other words, we can scale the coefficients without changing the roots, so what happens if we normalize the polynomial by dividing by the coefficient of largest magnitude, 7e25?
Original polynomial: p(x) = 4.4 + 2.3e+14*x - 7.0e25*x**2 - 4.3e16*x**3
Scaled polynomial: p(x) = 6.3e-26 + 3.2e-12*x - x**2 - 6.1e-10*x**3
So for this polynomial, the largest coefficient ~7e25 is so huge that the smallest coefficient ~4.4 is essentially negligible. That should give us a hint that what counts as zero in a root finding iteration isn't what we would normally consider "small."
The short answer is that the root calculated by NumPy isn't perfect, but it is an estimate of an actual root. Here's some code to convince us.
>>> import numpy as np
>>> coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> coefs_normed = coefs / np.abs(coefs).max()
>>> coefs_normed
array([ 6.25524549e-26, 3.24916108e-12, -1.00000000e+00, -6.07556697e-10])
>>> poly = np.polynomial.Polynomial(coefs)
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, 8.43769499e-14, -1.89182003e-13])
>>> poly_normed = np.polynomial.Polynomial(coefs_normed)
>>> roots_normed = poly_normed.roots()
>>> roots_normed
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly_normed(roots_normed)
array([-5.34791419e-03, 1.20534089e-39, -2.11221641e-39])
Now, -5e-03 is not very close to machine epsilon, but that should convince us that maybe the calculated root isn't quite as bad as it seemed at first.
A final point: the np.polynomial.Polynomial class has domain and window arguments that determine how it does its computations. Since polynomials get absolutely huge as the domain tends to +infinity or -infinity, it's unrealistic to expect accurate calculations for a value around 10^9.
The root appears real:
x = np.linspace(-2e9, 1000, 10000)
plt.plot(x, poly(x))
The problem is that the scale of the data is very large. -3e23 is tiny compared to say 6e43. The discrepancy is caused by roundoff error. Third order polynomials have an analytical solution, but it's not going to be numerically stable when your domain is on the order of 1e9.
You can try to use the domain and window parameters to attempt to introduce some numerical stability. For example, a common choice of domain is something that envelops your entire dataset. You would have to adjust the coefficients to compensate, since those values are usually used for fitting data.
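As a separate, hedged illustration of what domain and window mean (not the cubic above): Polynomial.fit records the data interval as the domain and maps it onto the window [-1, 1] internally, which is what keeps the arithmetic well scaled even over a huge interval.
import numpy as np

x = np.linspace(-2e9, 1e9, 101)
y = 3.0 - 2.0e-9 * x                    # linear data spread over a huge interval
p = np.polynomial.Polynomial.fit(x, y, deg=1)
print(p.domain, p.window)               # domain envelops the data; window is [-1, 1]
print(p(0.0), p(1e9))                   # ~3.0 and ~1.0, evaluated via the mapped variable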
I was trying to use FiPy to solve a set of PDEs when I realized the command sweep was not working the way I thought it would. Here goes a sample with part of my code:
from pylab import *
import sys
from fipy import *
viscosity = 5.55555555556e-06
Pe =5.
pfi=100.
lfi=0.01
Ly=1.
Nx =200
Ny=100
Lx=Ly*Nx/Ny
dL=Ly/Ny
mesh = PeriodicGrid2DTopBottom(nx=Nx, ny=Ny, dx=dL, dy=dL)
x, y = mesh.cellCenters
xVelocity = CellVariable(mesh=mesh, hasOld=True, name='X velocity')
xVelocity.constrain(Pe, mesh.facesLeft)
xVelocity.constrain(Pe, mesh.facesRight)
rad=0.1
var1 = DistanceVariable(name='distance to center', mesh=mesh, value=numerix.sqrt((x-Nx*dL/2.)**2+(y-Ny*dL/2.)**2))
pi_fi= CellVariable(mesh=mesh, value=0.,name='Fluid-interface energy map')
pi_fi.setValue(pfi*exp(-1.*(var1-rad)/lfi), where=(var1 > rad) )
pi_fi.setValue(pfi, where=(var1 <= rad))
xVelocityEq = DiffusionTerm(coeff=viscosity) - ImplicitSourceTerm(pi_fi)
xres=10.
while (xres > 1.e-6):
    xVelocity.updateOld()
    mySolver = LinearGMRESSolver(iterations=1000, tolerance=1.e-6)
    xres = xVelocityEq.sweep(var=xVelocity, solver=mySolver)
    print('Result =', xres)
# That's it
In short, I am declaring an equation called xVelocityEq and solving it with sweep. Here is my output:
Result = 0.0007856742013190237
Result = 6.414470433257661e-07
As you can see, the while loop ends after two iterations. My first question is: why is my first residual error (=0.0007856742013190237) higher than the solver's tolerance? I thought that, since xVelocityEq corresponds to a linear system, solver tolerance and residual error would mean the same thing.
If I increase the no. of iterations in mySolver from 1000 to 10000, I get the following output:
Result = 0.0007856742013190237
Result = 2.4619110931978988e-09
Why did the second residual change, given that the first remained the same?
If I increase the tolerance in mySolver from 1.e-6 to 7.e-4, I get the following output:
Result = 0.0007856742013190237
Result = 6.414470433257661e-07
Note that these residuals are the same as in the first output. Now if I try to further increase the tolerance to 8.e-4, here's what I get as output:
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
...
At this point I was completely lost. Why do the residuals have the same values for all solver tolerances smaller than 7.e-4? And why are these residuals constant and equal to 0.0007856742013190237 for solver tolerances higher than 7.e-4?
If I change the mySolver to LinearLUSolver (iterations=1000, tolerance=1.e-6), here's what I get:
Result = 0.0007856742013190237
Result = 1.6772757200988522e-18
Why in the world is my first residual the same as before, even though I have changed the solver?
why is my first residual error (=0.0007856742013190237) higher than the solver's tolerance?
The residual calculated by .sweep() is calculated before the solver is invoked to calculate a new solution vector. The matrix L and right-hand-side vector b are calculated based on the initial value of the solution vector x.
The residual is a measure of how well the current solution vector satisfies the non-linear PDE. The solver tolerance places a limit on how hard the solver should work to satisfy the linear system of equations discretized from the PDE.
Even if the PDE is linear (e.g., the diffusion coefficient is not a function of the solution variable), the initial value presumably doesn't solve the PDE, so the residual is large. After the solver is invoked, then x should solve the PDE, to within the solver tolerance. If the PDE is non-linear, then a well-converged solution to the linear algebra is still probably not a good solution to the PDE; that's what sweeping is for.
I thought that, since xVelocityEq corresponds to a linear system, solver tolerance and residual error would mean the same thing.
There wouldn't be any utility in keeping track of both. In addition to the residual being before the solve and the solver tolerance being used to terminate the solve, there are different normalizations that can be used and a lot of the solver documentation can be kind of sketchy. FiPy uses |L x - b|_2 as its residual. Solvers may normalize by the magnitude of b, the diagonal of L, or the phase of the moon, all of which can make it hard to directly compare the residual with the tolerance.
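As a plain-numpy illustration (not FiPy) of why the two numbers are not directly comparable: for the same solution vector, the raw two-norm residual that sweep() reports and a b-normalized residual that a solver might test against its tolerance can differ by many orders of magnitude.
import numpy as np

L = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0e6, 2.0e6])
x = np.linalg.solve(L, b) + 1.0e-3   # a slightly imperfect solution vector

raw = np.linalg.norm(L @ x - b)       # unnormalized |L x - b|_2, as in FiPy's sweep()
relative = raw / np.linalg.norm(b)    # the same residual normalized by |b|
print(raw, relative)                  # roughly 6e-3 versus 3e-9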
Why did the second residual change, given that the first remained the same?
By allowing 10000 iterations instead of 1000, the solver was able to drive to a more exacting tolerance which, in turn, led to a smaller residual for the next sweep.
Why do the residuals have the same values for all solver tolerances smaller than 7.e-4? And why are these residuals constant and equal to 0.0007856742013190237 for solver tolerances higher than 7.e-4?
Probably because the solver is failing and so not changing the value of the solution vector. Some solvers don't report this. In other cases, we should be doing a better job of reporting that fact to you.
Why in the world is my first residual the same as before, even though I have changed the solver?
The residual is not a property of the solver. It is a property of the discretized system of equations that approximates your PDE. Those linear algebra equations are then the input to the solver.
When creating a line of best fit with numpy's polyfit, you can specify the parameter full to be True. This returns 4 extra values, apart from the coefficients. What do these values mean and what do they tell me about how well the function fits my data?
https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html
What I'm doing is:
bestFit = np.polyfit(x_data, y_data, deg=1, full=True)
and I get the result:
(array([ 0.00062008, 0.00328837]), array([ 0.00323329]), 2, array([ 1.30236506, 0.55122159]), 1.1102230246251565e-15)
The documentation says that the four extra pieces of information are: residuals, rank, singular_values, and rcond.
Edit:
I am looking for a further explanation of how rcond and singular_values describe goodness of fit.
Thank you!
how rcond and singular_values describe goodness of fit.
Short answer: they don't.
They do not describe how well the polynomial fits the data; that is what the residuals are for. They describe how numerically robust the computation of that polynomial was.
rcond
The value of rcond is not really about quality of fit, it describes the process by which the fit was obtained, namely a least-squares solution of a linear system. Most of the time the user of polyfit does not provide this parameter, so a suitable value is picked by polyfit itself. This value is then returned to the user for their information.
rcond is used for truncation in ill-conditioned matrices. The least-squares solver does two things:
1. Finds x that minimizes the norm of the residuals Ax - b.
2. If multiple x achieve this minimum, returns the x with the smallest norm among those.
The second case occurs when some changes of x do not affect A x at all. But since floating-point computations are imperfect, usually what happens is that some changes of x affect A x very little. And this is where rcond is used to decide when "very little" should be considered as "zero up to noise".
For example, consider the system
x1 = 1
x1 + 0.0000000001 * x2 = 2
This one can be solved exactly: x1 = 1 and x2 = 10000000000. But... that tiny coefficient (which, in reality, came out of some earlier matrix manipulations) has some numeric error in it; for all we know it could be negative, or zero. Should we let it have such a huge influence on the solution?
So, in such a situation the matrix (specifically its singular values) gets truncated at level rcond. This leaves
x1 = 1
x1 = 2
for which the least-squares solution is x1 = 1.5, x2 = 0. Note that this solution is robust: no huge numbers from tiny fluctuations of coefficients.
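A small numpy.linalg.lstsq sketch of that truncation effect on essentially the system above, where the choice of rcond decides whether the tiny coefficient is trusted or treated as zero:
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1e-10]])
b = np.array([1.0, 2.0])

# keep the tiny singular value: the "exact" but huge solution, roughly [1, 1e10]
print(np.linalg.lstsq(A, b, rcond=1e-15)[0])
# truncate it: the robust minimum-norm solution, roughly [1.5, 0]
print(np.linalg.lstsq(A, b, rcond=1e-5)[0])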
Singular values
When one solves a linear system Ax = b in the least squares sense, the singular values of A determine how numerically tricky this is. Specifically, large disparity between largest and smallest singular values is problematic: such systems are ill-conditioned. An example is
0.835*x1 + 0.667*x2 = 0.168
0.333*x1 + 0.266*x2 = 0.067
The exact solution is (1, -1). But if the right hand side is changed from 0.067 to 0.066, the solution is (-666, 834) -- totally different. The problem is that the singular values of A are (roughly) 1 and 1e-6; this magnifies any changes on the right by the factor of 1e6.
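Checking that example with numpy (the singular values come out around 1.15 and 8.7e-7, matching the "roughly 1 and 1e-6" above):
import numpy as np

A = np.array([[0.835, 0.667],
              [0.333, 0.266]])
print(np.linalg.solve(A, [0.168, 0.067]))   # ~[ 1., -1.]
print(np.linalg.solve(A, [0.168, 0.066]))   # ~[-666., 834.]
print(np.linalg.svd(A, compute_uv=False))   # singular values, ~[1.15, 8.7e-7]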
Unfortunately, polynomial fitting often results in ill-conditioned matrices. For example, fitting a polynomial of degree 24 to 25 equally spaced data points is inadvisable.
import numpy as np
x = np.arange(25)
np.polyfit(x, x, 24, full=True)
The singular values are
array([4.68696731e+00, 1.55044718e+00, 7.17264545e-01, 3.14298605e-01,
1.16528492e-01, 3.84141241e-02, 1.15530672e-02, 3.20120674e-03,
8.20608411e-04, 1.94870760e-04, 4.28461687e-05, 8.70404409e-06,
1.62785983e-06, 2.78844775e-07, 4.34463936e-08, 6.10212689e-09,
7.63709211e-10, 8.39231664e-11, 7.94539407e-12, 6.32326226e-13,
4.09332903e-14, 2.05501534e-15, 7.55397827e-17, 4.81104905e-18,
8.98275758e-20])
which, with the default value of rcond (5.55e-15 here), gets four of them truncated to 0.
The difference in magnitude between smallest and largest singular values indicates that perturbing the y-values by numbers of size 1e-15 can result in changes of about 1 to the coefficients. (Not every perturbation will do that, just some that happen to align with a singular vector for a small singular value).
Rank
The effective rank is just the number of singular values above the rcond threshold. In the above example it's 21. This means that even though the fit is for 25 points, and we get a polynomial with 25 coefficients, there are only 21 degrees of freedom in the solution.
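For the degree-24 example above, unpacking the full=True return values shows this directly (numpy also emits a RankWarning for this fit, consistent with the truncation):
import numpy as np

x = np.arange(25)
coefs, residuals, rank, sv, rcond = np.polyfit(x, x, 24, full=True)
print(rank)    # 21: four of the 25 singular values fall below the rcond threshold
print(rcond)   # ~5.55e-15, the default of len(x) * machine epsilon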
I am trying to find higher order derivatives of a dataset (x,y). x and y are 1D arrays of length N.
Let's say I generate them as :
xder0=np.linspace(0,10,1000)
yder0=np.sin(xder0)
I define a derivative function which takes in two arrays (x, y) and returns (x1, y1), where y1 is the derivative calculated at each index as (y[i+1]-y[i])/(x[i+1]-x[i]) and x1 is just the mean of x[i+1] and x[i].
Here is the function that does it:
def deriv(x, y):
    delx = np.zeros((len(x)-1), dtype=np.longdouble)
    ydiff = np.zeros((len(x)-1), dtype=np.longdouble)
    for i in range(len(x)-1):
        delx[i] = (x[i+1]+x[i])/2.0
        ydiff[i] = (y[i+1]-y[i])/(x[i+1]-x[i])
    return delx, ydiff
Now to calculate the first derivative, I call this function as:
xder1, yder1 = deriv(xder0, yder0)
Similarly for second derivative, I call this function giving first derivatives as input:
xder2, yder2 = deriv(xder1, yder1)
And it goes on:
xder3, yder3 = deriv(xder2, yder2)
xder4, yder4 = deriv(xder3, yder3)
xder5, yder5 = deriv(xder4, yder4)
xder6, yder6 = deriv(xder5, yder5)
xder7, yder7 = deriv(xder6, yder6)
xder8, yder8 = deriv(xder7, yder7)
xder9, yder9 = deriv(xder8, yder8)
Something peculiar happens when I reach order 7. The 7th-order derivative becomes very noisy! The earlier derivatives are all either sine or cosine functions, as expected, but the 7th order is a noisy sine, and hence all derivatives after that blow up.
Any idea what is going on?
This is a well known stability issue with numerical interpolation using equally-spaced points. Read the answers at http://math.stackexchange.com.
To overcome this problem you have to use non-equally-spaced points, such as the roots of a Legendre polynomial. The instability occurs due to the unavailability of information at the boundaries, so a higher concentration of points near the boundaries is required, as given by the roots of, say, Legendre polynomials or others with similar properties, such as Chebyshev polynomials.
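One hedged sketch of that idea using numpy's Chebyshev class (degree 25 is an arbitrary choice here): fit a Chebyshev series to the samples once and differentiate the series, instead of differencing the data seven times.
import numpy as np

xder0 = np.linspace(0, 10, 1000)
yder0 = np.sin(xder0)

cheb = np.polynomial.Chebyshev.fit(xder0, yder0, deg=25)
d7 = cheb.deriv(7)                  # 7th derivative of the fitted series

# the 7th derivative of sin(x) is -cos(x); the series derivative should stay
# smooth and track it, unlike the repeatedly differenced data
print(d7(2.0), -np.cos(2.0))
print(d7(5.0), -np.cos(5.0))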
I found this chunk of code on http://rosettacode.org/wiki/Multiple_regression#Python, which does a multiple linear regression in Python. Printing b in the following code gives you the coefficients of x1, ..., xN. However, this code fits the line through the origin (i.e. the resulting model does not include a constant).
All I'd like to do is the exact same thing except I do not want to fit the line through the origin, I need the constant in my resulting model.
Any idea if it's a small modification to do this? I've searched and found numerous documents on multiple regression in Python, except they are lengthy and overly complicated for what I need. This code works perfectly, except I need a model that includes an intercept rather than passing through the origin.
import numpy as np
from numpy.random import random
n=100
k=10
y = np.mat(random((1,n)))
X = np.mat(random((k,n)))
b = y * X.T * np.linalg.inv(X*X.T)
print(b)
Any help would be appreciated. Thanks.
You only need to add a row to X that is all 1s.
Maybe a more stable approach would be to use a least squares algorithm anyway. This can also be done in numpy in a few lines. Read the documentation about numpy.linalg.lstsq.
Here you can find an example implementation:
http://glowingpython.blogspot.de/2012/03/linear-regression-with-numpy.html
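A minimal sketch of that lstsq route, assuming the same kind of random test data as the question; the appended column of ones supplies the intercept:
import numpy as np
from numpy.random import random

n, k = 100, 10
y = random(n)
X = random((k, n))

A = np.vstack([X, np.ones(n)]).T            # shape (n, k+1); last column is the constant
coef, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
b, intercept = coef[:-1], coef[-1]
print(b, intercept)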
What you have written out, b = y * X.T * np.linalg.inv(X * X.T), is the solution to the normal equations, which gives the least squares fit with a multi-linear model. swang's response is correct (and EMS's elaboration)---you need to add a row of 1's to X. If you want some idea of why it works theoretically, keep in mind that you are finding b_i such that
y_j = sum_i b_i x_{ij}.
By adding a row of 1's, you are setting x_{(k+1)j} = 1 for all j, which means that you are finding b_i such that:
y_j = (sum_i b_i x_{ij}) + b_{k+1}
because the (k+1)-st term x_{(k+1)j} is always equal to one. Thus, b_{k+1} is your intercept term.
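And the same idea applied directly to the normal-equations code from the question (a sketch; only the appended row of ones is new, and the last entry of b is then the intercept b_{k+1}):
import numpy as np
from numpy.random import random

n, k = 100, 10
y = np.mat(random((1, n)))
X = np.mat(random((k, n)))

X1 = np.mat(np.vstack([np.asarray(X), np.ones((1, n))]))   # append the constant row
b = y * X1.T * np.linalg.inv(X1 * X1.T)
print(b)   # b[0, -1] is the intercept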