I am playing with the cvxpy library in order to solve a particular optimisation problem.
import cvxpy as cp
import numpy as np
(...)
prob = cp.Problem(
    cp.Minimize(max(M*theta-b)) <= 45,
    [-48 <= theta, theta <= 48])
(Here M and b are certain numpy matrices.)
Interestingly, it screams:
NotImplementedError Traceback (most recent call last)
<ipython-input-62-0296c965b1ff> in <module>
1 prob = cp.Problem(
----> 2 cp.Minimize(max(M*theta-b)) <= 45,
3 [-10 <= theta, theta <= 10])
~\Anaconda3\lib\site-packages\cvxpy\expressions\expression.py in __gt__(self, other)
595 """Unsupported.
596 """
--> 597 raise NotImplementedError("Strict inequalities are not allowed.")
NotImplementedError: Strict inequalities are not allowed.
however, to me, they do not look strict at all...
Same reason as in your earlier question (although errors like this are hard to trace back to their cause).
You need to ask cvxpy for its max function explicitly. Sticking to cvxpy's own functions is always required.
cp.Minimize(max(M*theta-b))
should be
cp.Minimize(cp.max(M*theta-b))
You basically have to use only functions from cvxpy; the documentation notes one exception:
The CVXPY function sum sums all the entries in a single expression. The built-in Python sum should be used to add together a list of expressions.
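Put together, a minimal end-to-end sketch (the shapes of M, b, and theta here are hypothetical stand-ins, since the question doesn't give them):

import cvxpy as cp
import numpy as np

M = np.random.randn(20, 5)
b = np.random.randn(20)
theta = cp.Variable(5)

# cp.max is cvxpy's atom for the largest entry of an expression;
# Python's built-in max would instead compare entries with >, which cvxpy forbids
prob = cp.Problem(cp.Minimize(cp.max(M @ theta - b)),
                  [-48 <= theta, theta <= 48])
prob.solve()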
I am computing these derivatives using the Monte Carlo approach for a generic call option. I am interested in this combined derivative (with respect to both S and sigma). Doing this with algorithmic differentiation, I get an error that can be seen at the end of the post. What could be a possible solution? Just to explain something regarding the code: the formula used to compute the X in the code below was attached as an image in the original post.
import numpy as np
from jax import jit, grad, vmap
import jax.numpy as jnp
from jax import random

Underlying_asset = jnp.linspace(1.1, 1.4, 100)
volatilities = jnp.linspace(0.5, 0.6, 100)

def second_derivative_mc(S, vol):
    N = 100
    j, T, q, r, k = 10000, 1., 0, 0, 1.
    S0 = jnp.array([S]).T             # (Nx1) vector of underlying asset
    C = jnp.identity(N)*vol           # matrix of volatilities, 0 off the diagonal
    e = jnp.array([jnp.full(j, 1.)])  # (1xj) vector of ones
    Rand = np.random.RandomState()
    Rand.seed(10)
    U = Rand.normal(0, 1, (N, j))     # random numbers for the Brownian motion
    sigma2 = jnp.array([vol**2]).T    # (Nx1) vector of variances
    first = jnp.dot(sigma2, e)        # first part of the equation
    second = jnp.dot(C, U)            # second part of the equation
    X = -0.5*first + jnp.sqrt(T)*second
    St = jnp.exp(X)*S0
    P = jnp.maximum(St - k, 0)
    payoff = jnp.average(P, axis=-1)*jnp.exp(-q*T)
    return payoff

greek = vmap(grad(grad(second_derivative_mc, argnums=1), argnums=0))(Underlying_asset, volatilities)
This is the error message:
UnfilteredStackTrace                      Traceback (most recent call last)
<ipython-input-78-0cc1da97ae0c> in <module>()
     25
---> 26 greek = vmap(grad(grad(second_derivative_mc, argnums=1), argnums=0))(Underlying_asset,volatilities)

18 frames
UnfilteredStackTrace: TypeError: Gradient only defined for scalar-output functions. Output had shape: (100,).

The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.

The above exception was the direct cause of the following exception:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in _check_scalar(x)
    894   if isinstance(aval, ShapedArray):
    895     if aval.shape != ():
--> 896       raise TypeError(msg(f"had shape: {aval.shape}"))
    897   else:
    898     raise TypeError(msg(f"had abstract value {aval}"))

TypeError: Gradient only defined for scalar-output functions. Output had shape: (100,).
As the error message indicates, gradients can only be computed for functions that return a scalar. Your function returns a vector:
print(len(second_derivative_mc(1.1, 0.5)))
# 100
For vector-valued functions, you can compute the jacobian (which is similar to a multi-dimensional gradient). Is this what you had in mind?
from jax import jacobian
greek = vmap(jacobian(jacobian(second_derivative_mc, argnums=1), argnums=0))(Underlying_asset,volatilities)
Also, this is not what you asked about, but the function above will probably not work as you intend even if you solve the issue in the question. Numpy RandomState objects are stateful, and thus will generally not work correctly with jax transforms like grad, jit, vmap, etc., which require side-effect-free code (see Stateful Computations In JAX). You might try using jax.random instead; see JAX: Random Numbers for more information.
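As a minimal sketch of that last point (assuming you are willing to thread an explicit PRNG key through the function), the random draw could look like this:

import jax.numpy as jnp
from jax import random

def draw_normals(key, N, j):
    # jax.random is functional: the key plays the role of the RandomState seed,
    # and the same key always produces the same draws
    return random.normal(key, shape=(N, j))

key = random.PRNGKey(10)
U = draw_normals(key, 100, 10000)  # (N x j) standard normals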
I am trying to solve the following ordinary differential equation:
f'(t) = -2 if 300 <= t <= 303 else 0
import numpy as np
import scipy.integrate as integr
import matplotlib.pyplot as plt

Y0 = 25

def f(Y, t):
    a = -2 if 300 <= t <= 303 else 0
    return a

T = np.linspace(0, 500, 5000)
sol = integr.odeint(f, Y0, T)
plt.plot(T, sol)
plt.show()
However the result is only a flat line (see the plot: https://i.stack.imgur.com/KAK4F.png).
Whereas it works fine if the interval is bigger: 150 <= t <= 350 instead of 300 <= t <= 303.
Any idea why?
Thanks in advance
odeint (which is a wrapper around LSODA), like ODE solvers in general, can't really deal with discontinuities like this in one fell swoop. ODE solvers normally assume a smooth solution. The solution is flat because odeint is likely taking such large time steps that it steps right past the t = 300 to 303 region.
In these situations it is best to do a separate integration for each smooth part. So integrate from t = 0 to 300, then stop. Use the solution at t = 300 as the initial condition to integrate from 300 to 303, then stop. Then use the solution at t = 303 to integrate from 303 to 500.
There are some packages which can automate this; I know Assimulo has this feature, but I don't know how to use it.
Here is a clunky solution with odeint:
import numpy as np
import scipy.integrate as integr
import matplotlib.pyplot as plt

def f1(y, t):
    return 0.0

def f2(y, t):
    return -2.0

y0 = np.array([25])

t1 = np.linspace(0, 300, 1000)
sol1 = integr.odeint(f1, y0, t1)

t2 = np.linspace(300, 303, 1000)
sol2 = integr.odeint(f2, sol1[-1], t2)

t3 = np.linspace(303, 500, 1000)
sol3 = integr.odeint(f1, sol2[-1], t3)

sol = np.concatenate((sol1, sol2, sol3))
t = np.concatenate((t1, t2, t3))

plt.plot(t, sol)
plt.show()
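An alternative workaround (not from the original answer, just a sketch) is to keep the single discontinuous right-hand side but cap the step size via scipy.integrate.solve_ivp's max_step option, so the solver cannot step over the short interval:

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f(t, y):  # note: solve_ivp uses (t, y) argument order
    return -2.0 if 300 <= t <= 303 else 0.0

# max_step=1.0 forces steps smaller than the 3-unit-wide active region
sol = solve_ivp(f, (0, 500), [25], t_eval=np.linspace(0, 500, 5000), max_step=1.0)

plt.plot(sol.t, sol.y[0])
plt.show()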
I have some data which I try to interpolate using scipy.interpolate.griddata. In my use-case I marked some of the numpy arrays read-only, which apparently breaks the interpolation:
import numpy as np
from scipy import interpolate
x0 = 10 * np.random.randn(100, 2)
y0 = np.random.randn(100)
x1 = np.random.randn(3, 2)
x0.flags.writeable = False
# x1.flags.writeable = False
interpolate.griddata(x0, y0, x1)
yields the following exception:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-a6e09dbdd371> in <module>()
6 # x1.flags.writeable = False
7
----> 8 interpolate.griddata(x0, y0, x1)
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/ndgriddata.pyc in griddata(points, values, xi, method, fill_value, rescale)
216 ip = LinearNDInterpolator(points, values, fill_value=fill_value,
217 rescale=rescale)
--> 218 return ip(xi)
219 elif method == 'cubic' and ndim == 2:
220 ip = CloughTocher2DInterpolator(points, values, fill_value=fill_value,
scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.NDInterpolatorBase.__call__ (scipy/interpolate/interpnd.c:3930)()
scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.LinearNDInterpolator._evaluate_double (scipy/interpolate/interpnd.c:5267)()
scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.LinearNDInterpolator._do_evaluate (scipy/interpolate/interpnd.c:6006)()
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/interpnd.so in View.MemoryView.memoryview_cwrapper (scipy/interpolate/interpnd.c:17829)()
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/interpnd.so in View.MemoryView.memoryview.__cinit__ (scipy/interpolate/interpnd.c:14104)()
ValueError: buffer source array is read-only
Clearly, the interpolation function doesn't like that the arrays are write-protected. However, I don't understand why it would want to modify them; I certainly don't expect my input to be mutated by a call to the interpolation function, and this is also not mentioned in the documentation as far as I can tell. Why does the function behave like this?
Note that setting x1 readonly instead of x0 leads to a similar error.
The relevant code is written in Cython, and when Cython requests a memoryview of the input array it asks for a writeable one by default, even when the code never writes to it.
Since an array flagged as non-writeable refuses to provide a writeable memoryview, the call fails, even though the routine didn't need to write to the array in the first place.
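A practical workaround (not part of the original answer, just a sketch) is to hand griddata writable copies while keeping your own arrays locked:

import numpy as np
from scipy import interpolate

x0 = 10 * np.random.randn(100, 2)
y0 = np.random.randn(100)
x1 = np.random.randn(3, 2)
x0.flags.writeable = False
x1.flags.writeable = False

# the copies are writable, so Cython gets the writeable memoryview it asks for,
# and the protected originals stay untouched
result = interpolate.griddata(x0.copy(), y0, x1.copy())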
Using numba.jit to speed up right-hand-side calculations for odeint from scipy.integrate works fine:
import numpy as np
from scipy.integrate import ode, odeint
from numba import jit

@jit
def rhs(t, X):
    return 1

X = odeint(rhs, 0, np.linspace(0, 1, 11))
However using integrate.ode like this:
solver = ode(rhs)
solver.set_initial_value(0, 0)
while solver.successful() and solver.t < 1:
    solver.integrate(solver.t + 0.1)
produces the following error with the decorator @jit:
capi_return is NULL
Call-back cb_f_in_dvode__user__routines failed.
Traceback (most recent call last):
File "sandbox/numba_cubic.py", line 15, in <module>
solver.integrate(solver.t + 0.1)
File "/home/pgermann/Software/anaconda3/lib/python3.4/site-packages/scipy/integrate/_ode.py", line 393, in integrate
self.f_params, self.jac_params)
File "/home/pgermann/Software/anaconda3/lib/python3.4/site-packages/scipy/integrate/_ode.py", line 848, in run
y1, t, istate = self.runner(*args)
TypeError: not enough arguments: expected 2, got 1
Any ideas how to overcome this?
You can use a wrapper function: the plain-Python wrapper presents the call signature the Fortran callback machinery expects, while the jitted rhs still does the work. I think it will not improve your performance for small rhs functions, though.
from scipy.integrate import ode
from numba import jit

@jit(nopython=True)
def rhs(t, X):
    return 1

def wrapper(t, X):
    return rhs(t, X)

solver = ode(wrapper)
solver.set_initial_value(0, 0)
while solver.successful() and solver.t < 1:
    solver.integrate(solver.t + 0.1)
I do not know a reason or solution; however, in this case Theano helped a lot to speed up the calculation. Theano essentially compiles numpy expressions, so it only helps when you can write the rhs as an expression of multi-dimensional arrays (whereas numba.jit also handles plain Python loops, for and friends). It also knows some algebra and optimizes the calculation.
Besides, Theano can compile for the GPU (which was my reason to try numba.jit in the first place). However, using the GPU turned out to improve performance only for huge systems (maybe a million equations), due to the overhead.
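As a minimal sketch of that idea (the rhs expression here is a made-up example; Theano is assumed to be installed):

import numpy as np
import theano
import theano.tensor as T

t = T.dscalar('t')  # time, unused in this toy rhs
X = T.dvector('X')  # state vector

# compile the array expression once; Theano generates optimized C code for it
rhs = theano.function([t, X], -0.1 * X, on_unused_input='ignore')

print(rhs(0.0, np.ones(5)))  # array([-0.1, -0.1, -0.1, -0.1, -0.1])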
How can I use scipy.stats.kde.gaussian_kde and scipy.stats.kstest together in a consistent way?
For example, the code:
from numpy import inf
import scipy.stats

# sample is a 1-D array of observations
my_pdf = scipy.stats.kde.gaussian_kde(sample)
scipy.stats.kstest(sample, lambda x: my_pdf.integrate_box_1d(-inf, x))
Gives the following answer:
(0.5396735893479544, 0.0)
This cannot be right, because the sample obviously belongs to the distribution that was estimated from this very sample.
First of all, the right test to use for testing if two samples may have come from the same distribution is the two-sample KS test, implemented in scipy.stats.ks_2samp, which directly compares the empirical CDFs. KDE is density estimation, which smooths out the CDF, and is therefore a bunch of unnecessary work that also makes your estimate worse, statistically speaking.
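For reference, a minimal sketch of the two-sample test (sample_a and sample_b are hypothetical 1-D samples standing in for your data):

import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
sample_a = rng.normal(size=1000)
sample_b = rng.normal(size=1000)

# ks_2samp compares the two empirical CDFs directly; no density estimation needed
statistic, pvalue = stats.ks_2samp(sample_a, sample_b)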
But the reason you're seeing this problem is that the signature of your CDF argument isn't quite right. kstest calls cdf(vals) (see the kstest source), where vals are the sorted samples, to get the CDF value for each of your samples. In your code, this ends up calling my_pdf.integrate_box_1d(-np.inf, samp), but integrate_box_1d wants both arguments to be scalars. The signature is wrong, and with most arrays it would crash with a ValueError:
>>> my_pdf.integrate_box_1d(-np.inf, samp[:10])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-38-81d0253a33bf> in <module>()
----> 1 my_pdf.integrate_box_1d(-np.inf, samp[:10])
/Library/Python/2.7/site-packages/scipy-0.12.0.dev_ddd617d_20120725-py2.7-macosx-10.8-x86_64.egg/scipy/stats/kde.pyc in integrate_box_1d(self, low, high)
311
312 normalized_low = ravel((low - self.dataset) / stdev)
--> 313 normalized_high = ravel((high - self.dataset) / stdev)
314
315 value = np.mean(special.ndtr(normalized_high) - \
ValueError: operands could not be broadcast together with shapes (10) (1,1000)
but unfortunately, when the second argument is samp, it broadcasts just fine (samp has the same length as the KDE's dataset), and then everything goes to hell. Presumably integrate_box_1d should check the shape of its arguments, but here's one way to do it correctly:
>>> my_cdf = lambda ary: np.array([my_pdf.integrate_box_1d(-np.inf, x) for x in ary])
>>> scipy.stats.kstest(sample, my_cdf)
(0.015597917205996903, 0.96809912578616597)
You could also use np.vectorize if you felt like it.
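For instance, the same CDF built with np.vectorize (a sketch, assuming the same session as above):
>>> my_cdf = np.vectorize(lambda x: my_pdf.integrate_box_1d(-np.inf, x))
>>> scipy.stats.kstest(sample, my_cdf)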
(But again, you probably actually want to use ks_2samp.)