I wanted to store the intermediate integration steps taken by the solver itself when I call:
solver1.integrate(t_end)
So I wrote a while loop and enabled the step option by setting its value to True:
while solver1.successful() and solver1.t < t0+dt:
    solver1.integrate(t_end, step=True)
    time.append(solver1.t)
Then I plot y, the result of the integration, and here comes my issue: instabilities appear in a localized region:
I thought it was caused by the loop or something like that, so I checked the result after removing the step option:
while solver1.successful() and solver1.t < t0+dt:
    solver1.integrate(t_end)
And surprise ... I get the correct result:
It's quite a weird situation ... I'd be grateful if someone could help me with this issue.
EDIT:
To set up the solver I do:
solver1 = ode(y_dot,jac).set_integrator('vode',with_jacobian=True)
solver1.set_initial_value(x0,t0)
And I store the result using .append()
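For concreteness, here is a minimal, self-contained sketch of my setup; the toy ODE and its Jacobian stand in for my actual y_dot, jac, x0, t0 and t_end:

import numpy as np
from scipy.integrate import ode

def y_dot(t, y):
    return [-0.5 * y[0]]   # toy right-hand side

def jac(t, y):
    return [[-0.5]]        # its Jacobian

x0, t0, t_end = [1.0], 0.0, 10.0

solver1 = ode(y_dot, jac).set_integrator('vode', with_jacobian=True)
solver1.set_initial_value(x0, t0)

time, result = [], []
while solver1.successful() and solver1.t < t_end:
    solver1.integrate(t_end, step=True)   # one internal step per call
    time.append(solver1.t)
    result.append(solver1.y.copy())       # copy: solver1.y is reused between calls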
When you set step=True you are indirectly telling vode._integrator.runner (a Fortran subroutine) to use itask=2; the default is itask=1. You can get more details about this runner with:
r._integrator.runner?
The SciPy 0.12.0 documentation will not tell you what the different values itask=1 and itask=2 do, but you can find it here:
ITASK = An index specifying the task to be performed.
! Input only. ITASK has the following values and meanings.
! 1 means normal computation of output values of y(t) at
! t = TOUT(by overshooting and interpolating).
! 2 means take one step only and return.
! 3 means stop at the first internal mesh point at or
! beyond t = TOUT and return.
! 4 means normal computation of output values of y(t) at
! t = TOUT but without overshooting t = TCRIT.
! TCRIT must be input as RUSER(1). TCRIT may be equal to
! or beyond TOUT, but not behind it in the direction of
! integration. This option is useful if the problem
! has a singularity at or beyond t = TCRIT.
! 5 means take one step, without passing TCRIT, and return.
! TCRIT must be input as RUSER(1).
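If single-step mode (itask=2) gives you instabilities, one hedged workaround sketch is to keep the default itask=1 and simply request output on a fine grid of times; the solver then overshoots and interpolates to each requested t instead of returning raw internal steps (t0, t_end and solver1 as set up in the question):

import numpy as np

t_grid = np.linspace(t0, t_end, 500)   # output times of your choosing
time, result = [], []
for t in t_grid[1:]:
    solver1.integrate(t)               # itask=1: overshoot and interpolate
    if not solver1.successful():
        break
    time.append(solver1.t)
    result.append(solver1.y.copy())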
I have tried to implement an internal boundary condition in the code below.
I found that as long as I have not set an external boundary condition, the solved result depends on largeValue. Besides, when I increase largeValue, I must redefine the equation again; the equation is not updated by simply assigning a new value to largeValue.
I have used the sweep method to try to get a better result, but it does not work.
Below is my code. Is there any mistake? I hope someone can help me!
for step in range(steps):
    equation2 = DiffusionTerm(coeff=perittivity) == ImplicitSourceTerm(largeValue*mask) - largeValue*mask*value
    potential.setValue(0)
    k = 0.5
    residual = 1
    while residual > 1e-10 and abs(k - residual) > 1e-18:
        k = residual
        residual = equation2.sweep(potential)
        if __name__ == "__main__":
            viewer.plot()
        print step, residual, k - residual  # ,equation2,largeValue
    largeValue = Variable(value=largeValue.value*1.1)
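A hedged sketch of what I believe is the usual FiPy pattern for this: make largeValue a Variable before building the equation and update it in place with setValue(); the terms keep a reference to the Variable, so the equation follows the new value without being redefined (perittivity, mask and value are the question's objects, not defined here):

from fipy import Variable, DiffusionTerm, ImplicitSourceTerm

largeValue = Variable(value=1e10)
equation2 = (DiffusionTerm(coeff=perittivity)
             == ImplicitSourceTerm(largeValue * mask) - largeValue * mask * value)

# later, inside the loop: no need to rebuild equation2
largeValue.setValue(largeValue.value * 1.1)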
I am trying to solve the problem of a least-squares fit of a power law spliced to a third-order polynomial in Python using gradient descent. I have computed the gradients with respect to the parameters in Matlab. The boundary conditions I computed by hand. I am running into a syntax error in my chi-squared minimization algorithm, which must take the boundary conditions into account. I am doing this for a machine learning class in which I am completing a somewhat self-directed and self-proposed long-term project, but I am stuck because of this syntax error that I am not sure how to overcome. I will not get class credit for this; it is simply something to put on my resume.
def polypowerderiv(x, a1, b1, c1, a2, b2, c2, d2, boundaryx, ydat):
    # need to minimize square of ydat-polypower
    # from Mathematica, to be careful
    gradd2 = 2*(d2 + c2*x + b2*x**2 + a2*x**3 - ydat)
    gradc2 = gradd2*x
    gradb2 = gradc2*x
    grada2 = gradb2*x
    # again from Mathematica, to be careful
    gradc1 = 2*(c1 + a1*x**b1 - ydat)
    grada1 = gradc1*x**b1
    gradb1 = grada1*a1*np.log(x)
    return [np.sum(grada1), np.sum(gradb1),
            np.sum(gradc1), np.sum(grada2), np.sum(gradb2),
            np.sum(gradc2), np.sum(gradd2)]
def manualleastabsolutedifference(xdat, ydat, params, seed, maxiter, learningrate):
    chisq=0 #chisq is the L2 error of the fit relative to the ydata
    dof=len(xdat)-len(params)
    xparams=seed
    for step in np.arange(maxiter):
        a1,b1,c1,a2,b2,c2,d2=params
        chisq=polypowerlaw(xdat,params)
        for i in np.arange(len(xdat)):
            grad=np.zeros(len(seed))
            for i in np.arange(seed):
                polypowerlawboundarysolver=\
                    polypowerboundaryconstraint(xdat,a1,b1,c1,a2,b2,c2)
                boundaryx=minimize(polypowerlawboundarysolver,x0=1000)
                #hard coded to be half of len(xdat)
                chisq+=abs(ydat-\
                    polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx)
                grad=\
                    polypowerderiv(xdat,a1,b1,c1,\
                        a2,b2,c2,d2,boundaryx,ydat)
        params+=learningrate*grad
    return params
The error I get is:
File "", line 14
grad=polypowerderiv(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx,ydat)
^
SyntaxError: invalid syntax
Also, I'm having some small trouble with formatting; please help. This is one of my first few posts to Stack Overflow ever, after many years of up and down votes. Thank you for your extensive help, community.
As per Alan-Fey, you forgot a closing bracket:
chisq+=abs(ydat-\
polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx)
should be
chisq+=abs(ydat-\
polypower(xdat,a1,b1,c1,a2,b2,c2,d2,boundaryx))
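One more hedged observation beyond the bracket fix: plain gradient descent moves the parameters against the gradient, so the update in the loop would usually subtract rather than add. A tiny self-contained illustration of the sign convention:

# Toy example: minimize f(p) = (p - 3)**2; its gradient is 2*(p - 3).
p = 0.0
learningrate = 0.1
for _ in range(100):
    grad = 2 * (p - 3)
    p -= learningrate * grad   # descent subtracts the gradient
print(p)  # converges to ~3.0

So in the question's code, params += learningrate*grad would likely need to become params -= learningrate*np.array(grad).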
I have the following code in a function:
phi = fourier_matrix(y, fs)
N = np.size(phi,axis=1)
x = np.ones(N)
for i in range(136):
    W_pk = np.diag(x)
    temp = pinv(np.dot(phi, W_pk))
    q_k = np.dot(temp, y)
    x = np.dot(W_pk, q_k)
where phi is (96,750), W_pk is (750,750) and q_k is (750,).
This throws the following error:
Python(12001,0x7fff99d8b380) malloc: *** error for object 0x7fc71aa37200: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
If I comment out the last dot product, the error does not appear.
I think I need to free memory in some way, or maybe do the dot product in a different way?
Also, this only happens when I run it on a Mac. On Windows or Linux it does not throw the error.
Python is 3.6 (also tried 3.7), and numpy is 1.14.5 (also tried 1.15).
Any help would be greatly appreciated, since I really need to make this work!
Thanks in advance.
EDIT I:
I tried this portion of the code in a Jupyter notebook, and it didn't fail. This confused me even more! It fails when I run it in Visual Studio Code on a Mac. The rest of my code, an algorithm to remove artifacts from a signal, works as it should until I add that last line, x = np.dot(W_pk, q_k). Maybe it works on Jupyter because I don't run the rest of the algorithm there? But as I said, it only crashes on that last dot product.
EDIT II: I added the piece of code above the for loop to this question, because I found that the problem is somehow related to how x is being used. You see, it's declared above as a float64 ndarray. When it reaches the last line of the for loop, the dot product returns a complex128 (it should be complex64; I don't know what's happening there) and overwrites the x array. The first time this works, but the second time it crashes when trying to overwrite. If I use a new variable for the dot product, say z, then it does not crash! I'm not sure why, but I need to overwrite x in each iteration.
Furthermore, if I do something like this:
z = np.dot(W_pk, q_k)
x = abs(z) #I don't need complex numbers at this point
Then it crashes with the same error on the first dot product (presumably):
temp = pinv(np.dot(phi, W_pk))
Also, the memory consumption is not that bad, around 110 MB according to some measurements, and the same algorithm does not crash in IPython even with twice the memory usage. This is what I find most obscure: why doesn't it crash in IPython??
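For what it's worth, here is a hedged sketch of one thing to try, based on the observation in EDIT II: bind the dot product to a fresh variable and convert it back to a real float64 array before the next iteration, so that x's dtype and buffer stay stable (phi, y and N are the question's objects; this may or may not dodge the crash described above):

import numpy as np
from numpy.linalg import pinv   # assumption: pinv here is numpy.linalg.pinv

x = np.ones(N)
for i in range(136):
    W_pk = np.diag(x)
    temp = pinv(np.dot(phi, W_pk))
    q_k = np.dot(temp, y)
    z = np.dot(W_pk, q_k)                 # may come out complex128
    x = np.abs(z).astype(np.float64)      # fresh real array each pass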
I am trying to solve two coupled systems of equations, called here system A and system B. One of these two systems is an ODE system.
To avoid copying the shared data between the two systems, I would like a pointer-like structure. For that, I use numpy's view mechanism.
A bit of code:
import numpy as np
import scipy.integrate

# f, the right-hand side of the ODE system, is defined elsewhere
t0, t1, dt = 0.0, 5.0, 1.0
data = np.ones((5, 2))
data[:, 1] *= 2
y = np.array([0.0, 0.0])  # no matter, default value
r = scipy.integrate.ode(f)
r.set_integrator('dopri5', rtol=1e-3, atol=1e-6)
r.set_f_params(0.05)
#r.set_initial_value(y, t0); r._y = data[2]  ### apparently equivalent
r.set_initial_value(data[2], t0)             ### apparently equivalent
print(np.shares_memory(r.y, y))
print(np.shares_memory(r.y, data))
Here, in the initial state, r.y (system A) and data[2] (the variable named data holds the data of system B) are synchronized: if I modify one, the other is also modified, and vice versa. Typing the command r.y.base confirms that r.y is just a view of the array named data. That is the behavior I want.
Now the problem starts. If I advance my ODE system:
while r.successful() and r.t < t1:
    r.integrate(r.t + dt, step=True)
    print(r.t + dt, r.y)

print(np.shares_memory(r.y, data))
print(data)
data and r.y are no longer synchronized; r.y is no longer a view of data.
It looks like the integrate function creates a new instance for its attribute r.y rather than updating it in place. I have read the source code of this function:
https://github.com/scipy/scipy/blob/v0.19.1/scipy/integrate/_ode.py#L396
but it quickly drops into Fortran code, and my understanding stops there.
How can I solve (or get around) this problem other than by copying the data between r.y and data (which also implies manually managing the synchronization)?
Could this be a bug in scipy?
Thanks for your help
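A hedged fallback sketch, in case the view genuinely cannot survive integrate(): copy r.y back into data after every step, and accept the manual synchronization I was hoping to avoid (r, data, t1 and dt as defined above):

while r.successful() and r.t < t1:
    r.integrate(r.t + dt, step=True)
    data[2] = r.y   # copy the solver state back into system B's data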
I am using the PuLP linear programming module for Python to solve a linear problem.
I set up the problem and the constraints, and I use the default solver provided with PuLP, which is CBC (the solver executable on my Mac is called cbc-osx-64 for obvious reasons). Running this executable prints:
Welcome to the CBC MILP Solver
Version: 2.7.6
Build Date: Mar 3 2013
Revision Number: 1770
OK, I run the solver via PuLP and get a solution. When verifying that the constraints are satisfied, I see a difference between the solution and what I requested (for some of the constraints, not all) which is less than 1e-6 but greater than 1e-7 (e.g., 1.6e-7).
Of course it makes sense to have a constraint tolerance, that is fine. But I need to be able to control it, and I would think this is a very central and important parameter in any LP task.
So let us look at the "help" from the CBC solver (run the executable and type "?"), these are the arguments I can change:
Commands are:
Double parameters:
dualB(ound) dualT(olerance) primalT(olerance) primalW(eight) zeroT(olerance)
Branch and Cut double parameters:
allow(ableGap) cuto(ff) inc(rement) integerT(olerance) preT(olerance)
pumpC(utoff) ratio(Gap) sec(onds)
Integer parameters:
force(Solution) idiot(Crash) maxF(actor) maxIt(erations) output(Format)
slog(Level) sprint(Crash)
Branch and Cut integer parameters:
cutD(epth) cutL(ength) depth(MiniBab) hot(StartMaxIts) log(Level) maxN(odes)
maxS(olutions) passC(uts) passF(easibilityPump) passT(reeCuts) pumpT(une)
strat(egy) strong(Branching) trust(PseudoCosts)
Keyword parameters:
allC(ommands) chol(esky) crash cross(over) direction error(sAllowed)
fact(orization) keepN(ames) mess(ages) perturb(ation) presolve
printi(ngOptions) scal(ing) timeM(ode)
Branch and Cut keyword parameters:
clique(Cuts) combine(Solutions) combine2(Solutions) cost(Strategy) cplex(Use)
cuts(OnOff) Dins DivingS(ome) DivingC(oefficient) DivingF(ractional)
DivingG(uided) DivingL(ineSearch) DivingP(seudoCost) DivingV(ectorLength)
feas(ibilityPump) flow(CoverCuts) gomory(Cuts) greedy(Heuristic)
heur(isticsOnOff) knapsack(Cuts) lagomory(Cuts) lift(AndProjectCuts)
local(TreeSearch) mixed(IntegerRoundingCuts) node(Strategy)
pivotAndC(omplement) pivotAndF(ix) preprocess probing(Cuts)
rand(omizedRounding) reduce(AndSplitCuts) residual(CapacityCuts) Rens Rins
round(ingHeuristic) sos(Options) two(MirCuts) Vnd(VariableNeighborhoodSearch)
Actions or string parameters:
allS(lack) barr(ier) basisI(n) basisO(ut) directory dualS(implex)
either(Simplex) end exit export gsolu(tion) help import initialS(olve)
max(imize) min(imize) para(metrics) primalS(implex) printM(ask) quit
saveS(olution) solu(tion) stat(istics) stop
Branch and Cut actions:
branch(AndCut) doH(euristic) prio(rityIn) solv(e)
These parameters currently have the following values:
dualTolerance has value 1e-07
primalTolerance has value 1e-07
zeroTolerance has value 1e-20
allowableGap has value 0
integerTolerance has value 1e-06
preTolerance has value 1e-08
ratioGap has value 0
The only parameter which could be associated with the constraint tolerance and also consistent with my observations is the "integerTolerance".
So, I changed this tolerance to 1e-8 but got the same result (that is, the solution differed from the ground truth by more than 1e-7).
Questions:
Can anyone shed some light on this? In particular, is there a way to set the constraint tolerance (the difference between a found solution and what we requested)?
If not for CBC, do you know of any other solver (GLPK, Gurobi, etc.) where this quantity can be set?
Thanks.
At least in the latest PuLP version you can set it directly via a parameter.
https://pythonhosted.org/PuLP/solvers.html
The fracGap parameter should do it, and does for me.
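A minimal sketch of what I mean (API as of the PuLP versions around this thread; newer versions renamed fracGap to gapRel, so check your version's docs):

import pulp

prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
prob += x              # objective
prob += x >= 1.5       # constraint
prob.solve(pulp.PULP_CBC_CMD(fracGap=1e-8))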
I can't give you an exact answer, but I'd try primal or dual tolerance. Integer tolerance doesn't make sense for constraints to me.
Do you know how to change these options via the Python interface? I would like to experiment with them, but I don't want to call the command-line tool, and I'm not able to pass options to the solver.
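For experimenting, a hedged sketch: COIN_CMD/PULP_CBC_CMD accept an options list whose entries are appended to the cbc command line, so the tolerances from the help text above can in principle be passed through (the exact spelling and behavior depend on your PuLP and CBC versions):

import pulp

solver = pulp.PULP_CBC_CMD(options=['primalT 1e-9', 'dualT 1e-9'])
prob.solve(solver)   # prob being the LpProblem from the question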