Fitting curves in Jupyter Notebook without using the fit function - python

For a school assignment I have to write my own fit routine using the least squares method. The problem is that I don't know how to do that; specifically, I don't know how to minimize my function in order to calculate the fit parameters. A further complication is that my fit function is not linear, so my book says I have to guess some starting values for the fit parameters and then minimize the function, but I still don't know how to do that. The code below is what I have right now; I got it from somebody else, so I don't really understand what it does.
Thanks in advance!
from scipy.optimize import minimize

# Lorentzian peak plus a constant background
def fit(x, mu, gamma, back, A):
    return A * (gamma / ((x - mu)**2 + gamma**2)) + back

# chi-squared: error-weighted sum of squared residuals
# (Positie, Intensiteit, FoutI are the measured x, y and y-error arrays)
def Ls_rechte(y):
    Ls = 0
    for i in range(len(Positie)):
        Ls = Ls + (Intensiteit[i] - fit(Positie[i], y[0], y[1], y[2], y[3]))**2 / (FoutI[i]**2)
    return Ls

nu = len(Positie) - 4  # degrees of freedom

# minimize chi-squared starting from an initial guess for (mu, gamma, back, A)
mini = minimize(Ls_rechte, (150, 0, 100, 1))
display(mini)
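For reference, a minimal sketch of how the result of that minimization would typically be read out, assuming mini is the scipy.optimize.OptimizeResult returned above and that Positie, Intensiteit and FoutI are already defined:

# mini.x holds the fitted parameters in the same order as the initial guess
mu_fit, gamma_fit, back_fit, A_fit = mini.x

# mini.fun is the minimized chi-squared; dividing by the degrees of
# freedom nu gives the reduced chi-squared, a rough quality-of-fit measure
chi2_red = mini.fun / nu
print(mu_fit, gamma_fit, back_fit, A_fit, chi2_red)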

Related

How to make all cores work during scipy linalg solver?

I wrote an algorithm for Newmark integration of the equation of motion. It works quite well, with the main solver set up as scipy.linalg.solve. However, as far as I have been told, this solver should use the whole CPU, but for my calculations it only uses one core. Can you please tell me what I should change, or where the mistake is? The code of the main solver part is below.
import numpy
from numpy import dot
from scipy import linalg

for i in range(1, NT):
    t = dt * i
    # predictor terms built from the previous time step
    V1 = a1 * U[:, i-1] + a4 * Ud[:, i-1] + a5 * Udd[:, i-1]
    V2 = a0 * U[:, i-1] + a2 * Ud[:, i-1] + a3 * Udd[:, i-1]
    print("calculation in timestep t=", t)
    CV = dot(numpy.array(Ceqbb).astype(numpy.float64), V1)
    MA = dot(numpy.array(Beqbb).astype(numpy.float64), V2)
    # apply forces
    F = (numpy.array(FQbb(t)).astype(numpy.float64)).reshape(ndof)
    FH = F + MA + CV
    # solve for displacements
    if ndof > 1:
        Un = linalg.solve(KHnn(t), FH)
    else:
        Un = FH / KHnn(t)
    # corrector step: accelerations and velocities, then store the new state
    Uddn = a0 * (Un - U[:, i-1]) - a2 * Ud[:, i-1] - a3 * Udd[:, i-1]
    Udn = Ud[:, i-1] + a6 * Udd[:, i-1] + a7 * Uddn
    U[:, i] = Un
    Ud[:, i] = Udn
    Udd[:, i] = Uddn
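Whether scipy.linalg.solve runs on more than one core is decided by the BLAS/LAPACK library that NumPy and SciPy are linked against, not by the Python code above. Below is a minimal sketch for checking the linked BLAS and requesting more threads via environment variables; the variable names only apply to OpenBLAS/MKL-style builds, so adjust them to your installation:

import os
# must be set before numpy/scipy are imported to take effect
os.environ["OMP_NUM_THREADS"] = "4"        # OpenMP-based BLAS builds
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # OpenBLAS
os.environ["MKL_NUM_THREADS"] = "4"        # Intel MKL

import numpy
numpy.__config__.show()  # shows which BLAS/LAPACK NumPy was built against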

The largeValue in a FiPy internal boundary condition

I have tried to impose an internal boundary condition with the code below.
I found that, as long as I do not set an external boundary condition, the solution depends on the value of largeValue. Moreover, when I increase largeValue I must define the equation again; the equation does not change if I simply assign a new value to largeValue.
I have used the sweep method to try to get a better result, but it does not help.
My code is below. Is there a mistake somewhere? I hope someone can help me!
for step in range(steps):
    # rebuild the equation so that the current largeValue is used
    equation2 = DiffusionTerm(coeff=perittivity) == ImplicitSourceTerm(largeValue * mask) - largeValue * mask * value
    potential.setValue(0)
    k = 0.5
    residual = 1
    # sweep until the residual stops improving
    while residual > 1e-10 and abs(k - residual) > 1e-18:
        k = residual
        residual = equation2.sweep(potential)
        if __name__ == "__main__":
            viewer.plot()
        print(step, residual, k - residual)  # , equation2, largeValue
    largeValue = Variable(value=largeValue.value * 1.1)
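One possible reason the equation has to be rebuilt is that the numerical value of largeValue gets baked into the terms when equation2 is constructed. Below is a sketch of an alternative, assuming the coefficient is kept as a fipy.Variable from the start so that later setValue calls are picked up on the next sweep; this is untested against the setup above:

from fipy import Variable, DiffusionTerm, ImplicitSourceTerm

largeValue = Variable(value=1e10)
# the equation references the Variable itself, not a frozen number
equation2 = (DiffusionTerm(coeff=perittivity)
             == ImplicitSourceTerm(largeValue * mask) - largeValue * mask * value)

# later, instead of rebuilding equation2:
largeValue.setValue(largeValue.value * 1.1)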

Function definition and function call produce syntax error in python 3, even when copied and pasted

I am trying to solve the problem of a least squares fit of a power law spliced to a third order polynomial in Python using gradient descent. I have computed the gradients with respect to the parameters in Matlab, and the boundary conditions I computed by hand. I am running into a syntax error in my chi-squared minimization algorithm, which must take the boundary conditions into account. I am doing this for a machine learning class in which I am completing a somewhat self-directed, self-proposed long-term project, but I am stuck because of this syntax error that I am not sure how to overcome. I will not get class credit for this; it is simply something to put on my resume.
def polypowerderiv(x,a1,b1,c1,a2,b2,c2,d2,boundaryx,ydat):
    #need to minimize square of ydat-polypower
    #from Mathematica, to be careful
    gradd2=2*(d2+c2*x+b2*x**2+a2*x**3-ydat)
    gradc2=gradd2*x
    gradb2=gradc2*x
    grada2=gradb2*x
    #again from Mathematica, to be careful
    gradc1=2(c+a1*x**b1-ydat)
    grada1=gradc1*x**b1
    gradb1=grada1*a1*log(x)
    return [np.sum(grada1),np.sum(gradb1),\
        np.sum(gradc1),np.sum(grada2),np.sum(gradb2),\
        np.sum(gradc2),np.sum(gradd2)]

def manualleastabsolutedifference(xdat, ydat, params, seed, maxiter, learningrate):
    chisq=0 #chisq is the L2 error of the fit relative to the ydata
    dof=len(xdat)-len(params)
    xparams=seed
    for step in np.arange(maxiter):
        a1,b1,c1,a2,b2,c2,d2=params
        chisq=polypowerlaw(xdat,params)
        for i in np.arange(len(xdat)):
            grad=np.zeros(len(seed))
            for i in np.arange(seed):
                polypowerlawboundarysolver=\
                    polypowerboundaryconstraint(xdat,a1,b1,c1,a2,b2,c2)
                boundaryx=minimize(polypowerlawboundarysolver,x0=1000)
                #hard coded to be half of len(xdat)
                chisq+=abs(ydat-\
                    polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx)
                grad=\
                    polypowerderiv(xdat,a1,b1,c1,\
                        a2,b2,c2,d2,boundaryx,ydat)
                params+=learningrate*grad
    return params
The error I get is:
File "", line 14
grad=polypowerderiv(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx,ydat)
^
SyntaxError: invalid syntax
Also, I'm having some small trouble with formatting. Please help. This is one of my first few posts to Stack Overflow ever, after many years of up and down votes. Thank you for your extensive help, community.
As Alan-Fey pointed out, you forgot a closing bracket:
chisq+=abs(ydat-\
polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx)
should be
chisq+=abs(ydat-\
polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx))

How to find what can be tuned in theme()

New to Python, I am trying to fine-tune a plotnine graph and to explore what can be done with the theme() function. I am wondering what the general way is to find out what else is available for me to play with.
theme(plot_title=element_text(size=10, text='tile'),
      axis_title_y=element_text(size=7, text='tlab'),
      axis_title_x=element_text(size=7, text='xlab'))
In plotnine, they are called themeables.
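One way to list the themeable names from a Python session is to introspect the themeable base class; this assumes the plotnine.themes.themeable module layout, which may differ between plotnine versions:

from plotnine.themes.themeable import themeable

# each themeable (plot_title, axis_title_x, ...) is implemented as a subclass
# of `themeable`, so the subclass names are the options theme() accepts
print(sorted(sub.__name__ for sub in themeable.__subclasses__()))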

Add random noise to Variable tensorflow

I am a beginner in TensorFlow and I have run into a problem: how do I manually change a Variable? More precisely, I want to add some noise to my weights tensor, see how well it does, and based on that apply or ignore the change.
W = tf.Variable(tf.random_normal([xsize, ysize]))
TempW = W + tf.random_normal([xsize, ysize])  # W perturbed with random noise
compute = x * TempW
# initialize, run the computation etc.
# how can I make W = TempW now?
After kratenko pointed it out, I figured out that there are methods like
tf.Variable.assign(value)
tf.Variable.assign_add(value)
tf.Variable.assign_sub(value)
In my case, usage was:
#initialisation
apply = W.assign(TempW)
#usage
sess.run(apply)
So if anyone else also skipped over these in the docs, I hope this helps.
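Putting it together, a minimal sketch of the perturb/evaluate/accept pattern, assuming TensorFlow 1.x (graph mode with a Session); the loss here is a stand-in and noise_scale is made up for the illustration:

import numpy as np
import tensorflow as tf

xsize, ysize, noise_scale = 4, 3, 0.01
W = tf.Variable(tf.random_normal([xsize, ysize]))
loss = tf.reduce_sum(tf.square(W))           # stand-in for the real loss
new_W = tf.placeholder(tf.float32, [xsize, ysize])
apply_op = W.assign(new_W)                   # op that overwrites W

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    current_W, current_loss = sess.run([W, loss])
    candidate = current_W + np.random.normal(scale=noise_scale, size=(xsize, ysize))
    sess.run(apply_op, feed_dict={new_W: candidate})    # try the noisy weights
    candidate_loss = sess.run(loss)
    if candidate_loss >= current_loss:                   # revert if not better
        sess.run(apply_op, feed_dict={new_W: current_W})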
