cvxpy DPP Doesn't Work For Complex Parameter - python

I need to solve a large number of small convex optimisation problems with the same constraints, so I am trying to use cvxpy's DPP feature to cache and speed up compilation. It doesn't seem to work for my problem, which contains a single complex matrix parameter L.
import numpy as np
import cvxpy as cp
A = np.eye(4)+np.eye(4)*1j
L = cp.Parameter((4,4),complex=True)
X = cp.Variable((4,4),hermitian=True)
obj = cp.Minimize(cp.norm(X-L,'fro'))
prob = cp.Problem(obj)
L.value = A
assert prob.is_dcp(dpp=True)
prob.solve(solver=cp.SCS,verbose=True)
If I change the definitions of A and L to A = np.eye(4) and L = cp.Parameter((4,4)), then I do see the "Subsequent compilations of this problem, using the same arguments, should take less time." message in the verbose printout.
I am using cvxpy version 1.2.1.
Does anyone know what's going on? Many thanks!
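In case it is useful while this is being investigated, here is a minimal sketch of one possible workaround, not verified against cvxpy's DPP analysis: model L with two real Parameters and rebuild the complex matrix as a parameter-affine expression. Whether this actually restores the compilation caching in cvxpy 1.2.1 would need to be checked via the verbose output.
import numpy as np
import cvxpy as cp

# Sketch: represent the complex parameter L as two real parameters.
L_re = cp.Parameter((4, 4))
L_im = cp.Parameter((4, 4))
X = cp.Variable((4, 4), hermitian=True)

# Rebuild the complex matrix inside the expression (still affine in the parameters).
L_expr = L_re + 1j * L_im
prob = cp.Problem(cp.Minimize(cp.norm(X - L_expr, 'fro')))

A = np.eye(4) + np.eye(4) * 1j
L_re.value = A.real
L_im.value = A.imag
print(prob.is_dcp(dpp=True))   # check whether DPP is still recognised in this form
prob.solve(solver=cp.SCS, verbose=True)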

Related

Is it possible to somehow take this integral?

I have the following code, but it shows an error:
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
potC=sc.integrate.quad(lambda r: Psi(r,n2)*(-1/r)*Psi(r,n1)*(r**2),0,np.inf)
How to fix it?
import scipy as sc
import numpy as np

def Psi(r,n):
    return 2*np.exp(-r/n)*np.sqrt(n)*sc.special.hyp1f1(1-n, 2, 2*r/n)/n**2

def p(n1,n2):
    potC=sc.integrate.quad(lambda r: Psi(r,n2)*(-1/r)*Psi(r,n1)*(r**2),0,np.inf)
    pot=potC[0]
    return pot

print(p(15,15))
The error literally says what your problem is. Your function is not "well behaved" in some regions.
For example, with n = 15 and r = 50, special.hyp1f1(-14, 2, 2*50/15) results in NaN. I am not familiar with this function, so I do not know if this is the expected behaviour, but this is what happens.
You can try to isolate these points, exclude them from the integration, and integrate only over the well-behaved regions (a sketch of this follows below). If you know lower and upper bounds of the function's value on the excluded region (where it is defined), you can also update the expected error of your integration accordingly.
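As a concrete illustration of that idea (a sketch only; the grid and cutoff below are arbitrary choices, not a statement about where hyp1f1 actually starts to fail):
import numpy as np
from scipy import integrate, special

def Psi(r, n):
    return 2*np.exp(-r/n)*np.sqrt(n)*special.hyp1f1(1-n, 2, 2*r/n)/n**2

n1 = n2 = 15
integrand = lambda r: Psi(r, n2)*(-1/r)*Psi(r, n1)*(r**2)

# Coarse scan: find the first grid point where the integrand is no longer
# finite and cut the integration off just before it.
grid = np.linspace(1e-6, 200, 2001)
vals = np.array([integrand(r) for r in grid])
bad = ~np.isfinite(vals)
r_max = grid[bad.argmax() - 1] if bad.any() else grid[-1]

# Integrate only over the well-behaved region [0, r_max].
potC = integrate.quad(integrand, 0, r_max)
print(r_max, potC[0])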
If it is a bug in scipy, then please report it to them.
Ps.: Tested with scipy 1.8.0
Ps2.: With some reading I found that you can get the values correctly if you do your calculations with complex numbers, so the following code gives you a value:
import scipy as sc
from scipy import integrate
from scipy import special
import numpy as np

def Psi(r,n):
    r = np.array(r,dtype=complex)
    return 2*np.exp(-r/n)*np.sqrt(n)*special.hyp1f1(1-n, 2, 2*r/n)/n**2

def p(n1,n2):
    potC=integrate.quad(lambda r: Psi(r,n2)*(-1/r)*Psi(r,n1)*(r**2),0,np.inf)
    pot=potC[0]
    return pot

print(p(15,15))

Python/Numpy array element assignment issue

I'm trying to use Python/Numpy for a project that I'd normally do in Matlab, so I'm somewhat new to this environment (though I have played with Python/Django on the web development side). I'm now running into what I have to believe is a super simple issue that occurs when I'm trying to assign an element of a numpy array to another numpy array. The basic offending code is as follows. It does have some other fluff around it which I don't believe could be causing the issue, but I can provide that code as well if it would help.
import numpy as np
tf = 100
dt = 10
X0 = np.array([6978,0,5.8787,5.8787])
xhist = np.zeros(tf/dt+1)
yhist = np.zeros(tf/dt+1)
xhist[0] = X0[0]
yhist[0] = X0[1]
print(X0[0])
print(xhist[0])
When I run the above code, the first print statement gives me 6978, as expected; however, the second print statement gives me 0, and I can't figure out for the life of me why this is. Any ideas? Thanks in advance!
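One way to narrow this down (a debugging sketch only, not a diagnosis) is to print the dtypes and the values immediately after the assignment, so you can tell whether the copy itself fails or whether the surrounding code overwrites xhist later. The int() around the array length is only there so the snippet also runs on Python 3, where tf/dt is a float:
import numpy as np

tf = 100
dt = 10
X0 = np.array([6978, 0, 5.8787, 5.8787])
xhist = np.zeros(int(tf/dt) + 1)   # int() avoids passing a float length on Python 3
yhist = np.zeros(int(tf/dt) + 1)

xhist[0] = X0[0]
yhist[0] = X0[1]

print(X0.dtype, xhist.dtype)   # both should be float64
print(X0[0], xhist[0])         # check the value actually landed in xhist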

Optimizing a multithreaded numpy array function

Given 2 large arrays of 3D points (I'll call the first "source" and the second "destination"), I needed a function that returns, for each element of "source", the index of its closest element in "destination", with this limitation: I can only use numpy... so no scipy, pandas, numexpr, cython...
To do this I wrote a function based on the "brute force" answer to this question. I iterate over the elements of source, find the closest element in destination, and return its index. Due to performance concerns, and again because I can only use numpy, I tried multithreading to speed it up. Here are both the threaded and unthreaded functions and how they compare in speed on an 8-core machine.
import timeit
import numpy as np
from numpy.core.umath_tests import inner1d
from multiprocessing.pool import ThreadPool

def threaded(sources, destinations):
    # Define worker function
    def worker(point):
        dlt = (destinations-point)   # delta between destinations and given point
        d = inner1d(dlt,dlt)         # get distances
        return np.argmin(d)          # return closest index
    # Multithread!
    p = ThreadPool()
    return p.map(worker, sources)

def unthreaded(sources, destinations):
    results = []
    #for p in sources:
    for i in range(len(sources)):
        dlt = (destinations-sources[i])   # difference between destinations and given point
        d = inner1d(dlt,dlt)              # get distances
        results.append(np.argmin(d))      # append closest index
    return results

# Setup the data
n_destinations = 10000   # 10k random destinations
n_sources = 10000        # 10k random sources
destinations = np.random.rand(n_destinations,3) * 100
sources = np.random.rand(n_sources,3) * 100

# Compare!
print 'threaded: %s'%timeit.Timer(lambda: threaded(sources,destinations)).repeat(1,1)[0]
print 'unthreaded: %s'%timeit.Timer(lambda: unthreaded(sources,destinations)).repeat(1,1)[0]
Results:
threaded: 0.894030461056
unthreaded: 1.97295164054
Multithreading seems beneficial, but I was hoping for more than a 2X increase, given that the real-life datasets I deal with are much larger.
All recommendations to improve performance (within the limitations described above) will be greatly appreciated!
Ok, I've been reading the Maya documentation on Python and I came to these conclusions/guesses:
They're probably using CPython inside (several references to that documentation and not any other).
They're not fond of threads (lots of non-thread safe methods)
Given the above, I'd say it's better to avoid threads. The GIL makes this a common problem, and there are several ways to work around it.
Try to build a C/C++ extension and, once that is done, use threads in C/C++. Personally, I'd try to get SIP to work, and then move on.
Use multiprocessing. Even if your custom Python distribution doesn't include it, you can get a working version since it's all pure Python code. multiprocessing is not affected by the GIL since it spawns separate processes (a sketch of this route follows after this list).
One of the above should work out for you. If not, try another parallel tool (after some serious praying).
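A minimal sketch of the multiprocessing route, written for Python 3; the function names and the chunk count are illustrative, and inner1d is replaced by an equivalent (dlt*dlt).sum(axis=1) so the sketch only depends on public numpy APIs:
import numpy as np
from multiprocessing import Pool

def closest_indices(args):
    chunk, destinations = args
    out = []
    for point in chunk:
        dlt = destinations - point      # delta between destinations and given point
        d = (dlt*dlt).sum(axis=1)       # squared distances
        out.append(int(np.argmin(d)))   # index of the closest destination
    return out

def multiprocessed(sources, destinations, workers=8):
    # Split the sources into one chunk per worker to keep pickling overhead low.
    chunks = np.array_split(sources, workers)
    with Pool(workers) as pool:
        parts = pool.map(closest_indices, [(c, destinations) for c in chunks])
    return [i for part in parts for i in part]

if __name__ == '__main__':
    destinations = np.random.rand(10000, 3) * 100
    sources = np.random.rand(10000, 3) * 100
    print(multiprocessed(sources, destinations)[:5])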
On a side note, if you're using outside modules, be mindful of matching Maya's version. This may be the reason you couldn't build scipy. Of course, scipy has a huge codebase, and Windows is not the most forgiving platform for building things.

Optimization in Python for simple linear function

If the price charged for a crayon is p cents, then x thousand crayons
will be sold in a certain school store, where p(x) = 122 - x/34.
Using Python, calculate how many crayons must be sold to maximize
revenue.
I can solve this by hand easily enough; the only problem is how to do it using plain Python. I am using IDLE (Python GUI). I am new to Python and haven't downloaded any external libraries. Any help will be greatly appreciated.
What I've done up to this point is:
import math

def f(x):
    return (122-(x/34.0))

def g(x):
    return x*f(x)

def h(x):
    return (122-(2*x/34.0))
Use SymPy. It's simple, beautiful and powerful.
You can write down your equations with sympify(), like this:
p = sympify('122 - x/34')
And define symbols for symbolic evaluation with Symbol() and symbols().
With that you can do things like simply use the solve() function for any given equation, e.g. x + 4 = 2x:
res = solve('x + 4 - 2*x')
It's pretty much the tool I use for any math work with python.
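Applied to the crayon problem itself, a minimal SymPy sketch (assuming revenue is x*p(x) in the units given) could look like this:
from sympy import symbols, diff, solve

x = symbols('x', positive=True)
p = 122 - x/34                   # price in cents when x thousand crayons are sold
R = x*p                          # revenue
critical = solve(diff(R, x), x)  # stationary point of the revenue
print(critical)                  # [2074] -> 2074 thousand crayons
print(R.subs(x, critical[0]))    # 126514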
So, you should go and download an external library for this, as it's not functionality that Python makes easy to implement natively. Also, if you're serious about doing mathematical computation in Python, I would suggest switching operating systems to something like OS X or Linux, simply because compiling the old FORTRAN libraries (required for much of the performant mathematical computing stack) is a huge pain on Windows.
You have to make use of the scipy library here, which has an optimize module. Specifically I would suggest using the optimize.minimize_scalar function. Docs can be found here.
>>> from scipy.optimize import minimize_scalar
>>> def g(x):
...     return -(x*(122 - (x/34)))  # negated because minimize_scalar minimizes
>>> minimize_scalar(g, bounds=(1, 10000), method='bounded')
  status: 0
    nfev: 6
 success: True
     fun: -126514.0
       x: 2074.0
 message: 'Solution found.'

No solutions with exponents in python sympy

When I run this program, I get no solution at the end, but there should be a solution (I believe). Any idea what I am doing wrong? If you take away the Q from the e2 equation, it seems to work correctly.
#!/usr/bin/python
from sympy import *
a,b,w,r = symbols('a b w r',real=True,positive=True)
L,K,Q = symbols('L K Q',real=True,positive=True)
e1=K
e2=(K*Q/2)**(a)
print solve(e1-e2,K)
It works if we do the following:
Set Q=1, or
Change e2 to e2=(K**a)*(Q/2)**(a)
I would still like it to work in the original way though, as my equations are more complicated than this.
This is just a deficiency of solve. solve is based mostly on heuristics, so sometimes it isn't able to figure out how to solve an equation when it's given in a particular form. The workaround here is to just call expand_power_base on the expression, since SymPy is able to solve K - K**a*(Q/2)**a:
In [8]: print(solve(expand_power_base(e1-e2),K))
[(2/Q)**(a/(a - 1))]
It's also worth pointing out that the result of [] from solve does not in any way mean that there are no solutions, only that solve was unable to find any. See the first note at http://docs.sympy.org/latest/tutorial/solvers.html.
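As a quick sanity check (a sketch, with arbitrary sample values for a and Q), the returned root can be substituted back into e1 - e2 and evaluated numerically; the result should be essentially zero:
from sympy import symbols, expand_power_base, solve

a, Q, K = symbols('a Q K', real=True, positive=True)
e1 = K
e2 = (K*Q/2)**(a)
sol = solve(expand_power_base(e1 - e2), K)
print(sol)                                          # [(2/Q)**(a/(a - 1))]
check = (e1 - e2).subs(K, sol[0]).subs({a: 3, Q: 5})  # arbitrary sample values
print(check.evalf())                                # should be (close to) 0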
