I'm trying to solve a HUGE system of coupled complex differential equations (about 16 thousand equations), and I tried a couple of tricks to set up the function containing all the equations (it's impossible to write them by hand) so that I can plug it into complex_ode. First, I defined a vector with as many SymPy variables as I needed, and another variable for time:
side = 181
A = sp.symbols('rho0:' + str(side*(side+1)//2))  # integer division, so the symbol range is an int
time = sp.symbols('time')
Next, I did a bunch of manipulations with those variables, so that in the end I'd have a vector of equations, ecu:
ecu = ...  # vector with side*(side+1)/2 equations
With this vector, I defined the following function:
def dSdt(t, S):
    resu = [uu.subs(time, t) for uu in ecu]
    for ii in range(side*(side+1)//2):
        for jj in range(side*(side+1)//2):
            resu[ii] = resu[ii].subs(A[jj], S[jj])
    return resu
and then set the initial conditions:
S0 = []
for ii in range(side*(side+1)//2):
    S0.append(0)
S0[0] = 1
With all of the above, I used complex_ode as follows:
sol = complex_ode(dSdt)
sol.set_initial_value(S0, 0)  # initial conditions and initial time
tf = 10
dt = 1
while sol.successful() and sol.t < tf:
    sol.integrate(sol.t + dt)
    print(sol.t, sol.y)
but then, after a long time, it raised this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-67-cb1595480520> in <module>
4 dt=1
5 while sol.successful() and sol.t < tf:
----> 6 sol.integrate(sol.t+dt)
7 print(sol.t, sol.y)
/usr/local/lib/python3.8/dist-packages/sympy/core/expr.py in __float__(self)
347 return float(result)
348 if result.is_number and result.as_real_imag()[1]:
--> 349 raise TypeError("can't convert complex to float")
350 raise TypeError("can't convert expression to float")
351
TypeError: can't convert complex to float
What can I do in this situation?
I am trying to implement a cost function in a pydrake MathematicalProgram; however, I encounter problems whenever I try to divide by a decision variable and use abs(). A shortened version of my attempted implementation is as follows; I tried to include only what I think may be relevant.
T = 50
na = 3
nq = 5
prog = MathematicalProgram()
h = prog.NewContinuousVariables(rows=T, cols=1, name='h')
qd = prog.NewContinuousVariables(rows=T+1, cols=nq, name='qd')
d = prog.NewContinuousVariables(1, name='d')
u = prog.NewContinuousVariables(rows=T, cols=na, name='u')
def energyCost(vars):
    assert vars.size == 2*na + 1 + 1
    split_at = [na, 2*na, 2*na + 1]
    qd, u, h, d = np.split(vars, split_at)
    return np.abs([qd.dot(u)*h/d])
for t in range(T):
    vars = np.concatenate((qd[t, 2:], u[t,:], h[t], d))
    prog.AddCost(energyCost, vars=vars)
initial_guess = np.empty(prog.num_vars())
solver = SnoptSolver()
result = solver.Solve(prog, initial_guess)
The error I am getting is:
RuntimeError Traceback (most recent call last)
<ipython-input-55-111da18cdce0> in <module>()
22 initial_guess = np.empty(prog.num_vars())
23 solver = SnoptSolver()
---> 24 result = solver.Solve(prog, initial_guess)
25 print(f'Solution found? {result.is_success()}.')
RuntimeError: PyFunctionCost: Output must be of .ndim = 0 (scalar) and .size = 1. Got .ndim = 2 and .size = 1 instead.
To the best of my knowledge, the problem is the dimensionality of the output, but I am unsure how to proceed. I spent quite some time trying to fix this, with little success. I also tried changing np.abs to pydrake.math.abs, but then I got the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-56-c0c2f008616b> in <module>()
22 initial_guess = np.empty(prog.num_vars())
23 solver = SnoptSolver()
---> 24 result = solver.Solve(prog, initial_guess)
25 print(f'Solution found? {result.is_success()}.')
<ipython-input-56-c0c2f008616b> in energyCost(vars)
14 split_at = [na, 2*na, 2*na + 1]
15 qd, u, h, d = np.split(vars, split_at)
---> 16 return pydrake.math.abs([qd.dot(u)*h/d])
17
18 for t in range(T):
TypeError: abs(): incompatible function arguments. The following argument types are supported:
1. (arg0: float) -> float
2. (arg0: pydrake.autodiffutils.AutoDiffXd) -> pydrake.autodiffutils.AutoDiffXd
3. (arg0: pydrake.symbolic.Expression) -> pydrake.symbolic.Expression
Invoked with: [array([<AutoDiffXd 1.691961398933386e-257 nderiv=8>], dtype=object)]
Any help would be greatly appreciated, thanks!
BTW, as Tobia has mentioned, dividing by a decision variable in the cost function can be problematic. There are two approaches to avoid the problem.
Impose a bound on your decision variable that excludes 0. For example, say you want to optimize
min f(x) / y
If you can impose the bound y > 1, then SNOPT will never try y = 0, and you avoid the division-by-zero problem.
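A minimal, self-contained sketch of this first approach, using scipy.optimize.minimize in place of Drake/SNOPT so it runs anywhere (the objective f(x) = x**2 + 1 and the bound 1 <= y <= 2 are made up for illustration):

```python
# Sketch (assumed setup, not the OP's actual Drake program): minimize
# f(x)/y = (x**2 + 1)/y while bounding y away from zero with 1 <= y <= 2,
# so the division the solver evaluates is always well defined.
from scipy.optimize import minimize

objective = lambda v: (v[0]**2 + 1.0) / v[1]   # v = [x, y]; safe since y >= 1

res = minimize(objective, x0=[1.0, 1.5],
               bounds=[(None, None), (1.0, 2.0)],  # the bound excludes y = 0
               method="L-BFGS-B")
# optimum at x = 0, y = 2, objective value 0.5
```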
The other trick is to introduce an extra variable for the result of the division, and then minimize that variable.
For example, say you want to optimize
min f(x) / y
You can introduce a slack variable z = f(x) / y and reformulate the problem as
min z
s.t. f(x) - y * z = 0
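The same reformulation can be sketched with scipy.optimize.minimize (again with a made-up f(x) = x**2 + 1, and SLSQP standing in for SNOPT); note the division never appears in the problem the solver sees:

```python
# Sketch of the slack-variable reformulation: minimize z subject to
# f(x) - y*z = 0, with the illustrative f(x) = x**2 + 1 and 1 <= y <= 2.
from scipy.optimize import minimize

def residual(v):                     # v = [x, y, z]
    x, y, z = v
    return x**2 + 1.0 - y*z          # equality constraint f(x) - y*z = 0

res = minimize(lambda v: v[2],       # minimize the slack z
               x0=[1.0, 1.5, 1.0],
               bounds=[(None, None), (1.0, 2.0), (None, None)],
               constraints=[{"type": "eq", "fun": residual}],
               method="SLSQP")
# converges to x = 0, y = 2, z = 0.5, the same optimum as before
```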
Some observations:
The kind of cost function you are trying to use does not need a Python function to be enforced. You can just write (even though it would raise other errors as is) prog.AddCost(np.abs([qd[t, 2:].dot(u[t,:])*h[t]/d])).
The argument of prog.AddCost must be a Drake scalar expression, so be sure that your numpy matrix multiplications return a scalar. In the case above they return a (1,1) numpy array.
To minimize the absolute value, you need something a little more sophisticated than that. In the current form you are passing a nondifferentiable objective function: solvers do not quite like that. Say you want to minimize abs(x). A standard trick in optimization is to add an extra (slack) variable, say s, and add the constraints s >= x, s >= -x, and then minimize s itself. All these constraints and this objective are differentiable and linear.
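As an isolated sketch of that slack trick (using scipy.optimize.linprog rather than Drake, with an arbitrary target value of 3), minimizing |x - 3| becomes a linear program:

```python
# Sketch: minimize |x - 3| via a slack variable s with s >= x - 3 and
# s >= -(x - 3); every constraint and the objective are linear.
from scipy.optimize import linprog

c = [0.0, 1.0]                  # variables [x, s]; objective: minimize s
A_ub = [[ 1.0, -1.0],           #  x - s <= 3   (i.e. s >= x - 3)
        [-1.0, -1.0]]           # -x - s <= -3  (i.e. s >= 3 - x)
b_ub = [3.0, -3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0.0, None)])
# optimum at x = 3 with slack s = 0
```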
Regarding the division of the objective by an optimization variable: whenever you can, you should avoid it. For example, I'm 90% sure that solvers like SNOPT or IPOPT set the initial guess to zero if you do not provide one. This implies that, if you do not provide a custom initial guess, the solver will hit a division by zero at the first evaluation of the constraints and crash.
I wrote this code:
%case-2
kp=3;
ki=2*kp;
Gc=kp+ki/s;
delta=2.5;
G=[(6*s-10*delta)/(s^2+3*delta*s+100*delta)];
sys=feedback(Gc*G,1);
%state transition matrix
[num,den]=tfdata(sys)
disp('with PI controller')
[A2,B2,C2,D2]=tf2ss(num{1},den{1})
%Find the state transition matrix Phi(t).
syms s t
exp_1= inv((s*eye(3)-A2))
Exp_At2=ilaplace(exp_1,s,t);
disp('required state transtion matrix is ')
pretty (Exp_At2)
However, it gives me this message:
error: Python exception: OverflowError: Python int too large to convert to C long
occurred at line 2 of the Python code block:
f = inverse_laplace_transform(F, s, t)
error: called from
pycall_sympy__ at line 178 column 7
ilaplace at line 171 column 5
TF at line 26 column 8
I was getting this error:
> float() argument must be a string or a number
So why does this happen? (I tried commands like np.asarray(), but it keeps failing.)
mp.mpc(cmath.rect(a,b)))
The items in raizes are actually mpmath.mpc instances rather than native Python complex floats. numpy doesn't know how to deal with mpmath types, hence the TypeError.
You didn't mention mpmath at all in your original question. The problem would still have been easy to diagnose if you had posted the full traceback, rather than cutting off the most important part at the end:
In [10]: np.roots(Q)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-f3a270c7e8c0> in <module>()
----> 1 np.roots(Q)
/home/alistair/.venvs/mpmath/lib/python3.6/site-packages/numpy/lib/polynomial.py in roots(p)
220 # casting: if incoming array isn't floating point, make it floating point.
221 if not issubclass(p.dtype.type, (NX.floating, NX.complexfloating)):
--> 222 p = p.astype(float)
223
224 N = len(p)
TypeError: float() argument must be a string or a number, not 'mpc'
Whenever you ask for help with debugging on this site, please always post the whole traceback rather than just (part of) the last line - it contains a lot of information that can be helpful for diagnosing the problem.
The solution is simple enough - just don't convert the native Python complex floats returned by cmath.rect to mpmath.mpc complex floats:
raizes = []
for i in range(2*n):
    a, f = cmath.polar(l[i])
    if (f > np.pi/2) or (f < -np.pi/2):
        raizes.append(cmath.rect(a*r, f))
Q = np.poly(raizes)
print(np.roots(Q))
# [-0.35372430 +1.08865146e+00j -0.92606224 +6.72823602e-01j
# -0.35372430 -1.08865146e+00j -1.14467588 -9.11902316e-16j
# -0.92606224 -6.72823602e-01j]
Trying to use the awfully useful pandas to deal with data as time series, I am now stumbling over the fact that there do not seem to exist libraries that can directly interpolate (with a spline or similar) over data that has a DatetimeIndex as its x-axis. I always seem to be forced to convert first to some floating-point number, like seconds since 1980 or something like that.
I was trying the following things so far, sorry for the weird formatting, I have this stuff only in the ipython notebook, and I can't copy cells from there:
from scipy.interpolate import InterpolatedUnivariateSpline as IUS
type(bb2temp): pandas.core.series.TimeSeries
s = IUS(bb2temp.index.to_pydatetime(), bb2temp, k=1)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-67-19c6b8883073> in <module>()
----> 1 s = IUS(bb2temp.index.to_pydatetime(), bb2temp, k=1)
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py in __init__(self, x, y, w, bbox, k)
335 #_data == x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier
336 self._data = dfitpack.fpcurf0(x,y,k,w=w,
--> 337 xb=bbox[0],xe=bbox[1],s=0)
338 self._reset_class()
339
TypeError: float() argument must be a string or a number
By using bb2temp.index.values (that look like these:
array([1970-01-15 184:00:35.884999, 1970-01-15 184:00:58.668999,
1970-01-15 184:01:22.989999, 1970-01-15 184:01:45.774000,
1970-01-15 184:02:10.095000, 1970-01-15 184:02:32.878999,
1970-01-15 184:02:57.200000, 1970-01-15 184:03:19.984000,
) as x-argument, interestingly, the Spline class does create an interpolator, but it still breaks when trying to interpolate/extrapolate to a larger DateTimeIndex (which is my final goal here). Here is how that looks:
all_times = divcal.timed.index.levels[2] # part of a MultiIndex
all_times
<class 'pandas.tseries.index.DatetimeIndex'>
[2009-07-20 00:00:00.045000, ..., 2009-07-20 00:30:00.018000]
Length: 14063, Freq: None, Timezone: None
s(all_times.values) # applying the above generated interpolator
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-74-ff11f6d6d7da> in <module>()
----> 1 s(tall.values)
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py in __call__(self, x, nu)
219 # return dfitpack.splev(*(self._eval_args+(x,)))
220 # return dfitpack.splder(nu=nu,*(self._eval_args+(x,)))
--> 221 return fitpack.splev(x, self._eval_args, der=nu)
222
223 def get_knots(self):
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/scipy/interpolate/fitpack.py in splev(x, tck, der, ext)
546
547 x = myasarray(x)
--> 548 y, ier =_fitpack._spl_(x, der, t, c, k, ext)
549 if ier == 10:
550 raise ValueError("Invalid input data")
TypeError: array cannot be safely cast to required type
I tried to use s(all_times) and s(all_times.to_pydatetime()) as well, with the same TypeError: array cannot be safely cast to required type.
Am I, sadly, correct? Did everybody get used to convert times to floating points so much, that nobody thought it's a good idea that these interpolations should work automatically? (I would finally have found a super-useful project to contribute..) Or would you like to prove me wrong and earn some SO points? ;)
Edit: Warning: Check your pandas data for NaNs before you hand it to the interpolation routines. They will not complain about anything but just silently fail.
The problem is that the fitpack routines used underneath require floats, so at some point there has to be a conversion from datetime to float. This conversion is easy. If bb2temp.index.values is your datetime array, just do:
In [1]: bb2temp.index.values.astype('d')
Out[1]:
array([ 1.22403588e+12, 1.22405867e+12, 1.22408299e+12,
1.22410577e+12, 1.22413010e+12, 1.22415288e+12,
1.22417720e+12, 1.22419998e+12])
You just need to pass that to your spline. To convert the results back to datetime objects, use results.astype('datetime64[ns]') (match the unit your float values are in).
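A small end-to-end sketch of that round trip (the hourly sample series is made up; with modern pandas/numpy the float values are nanoseconds since the epoch):

```python
# Sketch: convert a DatetimeIndex to float64, fit a degree-1 spline on the
# floats, interpolate, and convert positions back to datetime64.
import numpy as np
import pandas as pd
from scipy.interpolate import InterpolatedUnivariateSpline

idx = pd.date_range("2020-01-01", periods=5, freq="h")
series = pd.Series(np.arange(5, dtype=float), index=idx)

x = series.index.values.astype('d')          # ns since epoch as float64
spline = InterpolatedUnivariateSpline(x, series.values, k=1)

mid = (x[0] + x[1]) / 2.0                    # halfway into the first hour
value = float(spline(mid))                   # linear data, so this is 0.5
back = np.array([mid]).astype('datetime64[ns]')  # float back to datetime
```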
I would like to use the dct function from scipy.fftpack with an array of numpy float64. However, it seems to be implemented only for np.float32. Is there a quick workaround to get this done? I looked into it quickly, but I am not sure of all the dependencies. So, before messing everything up, I thought I'd ask for tips here!
The only thing I have found so far about this is this link : http://mail.scipy.org/pipermail/scipy-svn/2010-September/004197.html
Thanks in advance.
Here is the ValueError it raises:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-f09567c28e37> in <module>()
----> 1 scipy.fftpack.dct(c[100])
/usr/local/Cellar/python/2.7.3/lib/python2.7/site-packages/scipy/fftpack/realtransforms.pyc in dct(x, type, n, axis, norm, overwrite_x)
118 raise NotImplementedError(
119 "Orthonormalization not yet supported for DCT-I")
--> 120 return _dct(x, type, n, axis, normalize=norm, overwrite_x=overwrite_x)
121
122 def idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=0):
/usr/local/Cellar/python/2.7.3/lib/python2.7/site-packages/scipy/fftpack/realtransforms.pyc in _dct(x, type, n, axis, overwrite_x, normalize)
215 raise ValueError("Type %d not understood" % type)
216 else:
--> 217 raise ValueError("dtype %s not supported" % tmp.dtype)
218
219 if normalize:
ValueError: dtype >f8 not supported
The problem is not the double precision; double precision is of course supported. The problem is that you have a little-endian computer and (maybe from loading a file?) big-endian data; note the > in "dtype >f8 not supported". You will simply have to cast it to native byte order yourself. If you know it's double precision, you probably just want to convert everything to your native order once:
c = c.astype(float)
Though I guess you could also check c.dtype.byteorder, which I think should be '=', and if it isn't, switch; something along the lines of:
if c.dtype.byteorder != '=':
    c = c.astype(c.dtype.newbyteorder('='))
This should also work if you happen to have single precision or integers...
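A short sketch of that check (the big-endian input array is fabricated; on a big-endian machine '>f8' already reports '=' and the cast is skipped):

```python
# Sketch: a big-endian float64 array (as it might come from a file) is cast
# to native byte order before calling scipy.fftpack.dct.
import numpy as np
from scipy.fftpack import dct

c = np.ones(4, dtype='>f8')                  # big-endian double
if c.dtype.byteorder not in ('=', '|'):
    c = c.astype(c.dtype.newbyteorder('='))  # cast to native order

out = dct(c)                                 # unnormalized DCT-II
# for a constant input of ones, out[0] == 2*N == 8 and the rest are ~0
```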