Generating the coefficients of a Chebyshev polynomial in Python

I am trying to compute the coefficients of the kth Chebyshev polynomial. Let's just set k to 5 for this. So far, I have the following:
a = (0, 0, 0, 0, 0, 1)  # selects the 5th Chebyshev polynomial
p = numpy.polynomial.chebyshev.Chebyshev(a)  # type here is Chebyshev
cpoly = numpy.polynomial.chebyshev.cheb2poly(p)  # trying to convert to Poly
print(cpoly.all_coeffs())
After the second line runs, I have an object of type Chebyshev, as expected. However, the third line does not return a Poly; it returns a numpy.ndarray instead. Thus, I get an error saying that ndarray has no attribute all_coeffs.
Anyone know how to fix this?

@cel has the right idea in the comments: you need to pass the coefficients of the Chebyshev polynomial to cheb2poly, not the object itself:
import numpy as np
cheb = np.polynomial.chebyshev.Chebyshev((0,0,0,0,0,1))
coef = np.polynomial.chebyshev.cheb2poly(cheb.coef)
print(coef)
# [ 0., 5., 0., -20., 0., 16.]
i.e. 16x^5 - 20x^3 + 5x. You can confirm that these are the correct coefficients for the fifth Chebyshev polynomial.
To turn these coefficients into a Polynomial object, you just need to pass the array to the Polynomial constructor:
poly = np.polynomial.Polynomial(coef)
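As a quick sanity check (a minimal sketch), the resulting polynomial agrees with the trigonometric definition T5(x) = cos(5 arccos x):
import numpy as np

coef = np.polynomial.chebyshev.cheb2poly((0, 0, 0, 0, 0, 1))
poly = np.polynomial.Polynomial(coef)

x = 0.5
print(poly(x))                   # 0.5
print(np.cos(5 * np.arccos(x)))  # 0.5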

In [1]: import numpy.polynomial
In [2]: p = numpy.polynomial.Chebyshev.basis(5)
In [3]: p
Out[3]: Chebyshev([ 0., 0., 0., 0., 0., 1.], [-1., 1.], [-1., 1.])
In [4]: p.convert(kind=numpy.polynomial.Polynomial)
Out[4]: Polynomial([ 0., 5., 0., -20., 0., 16.], [-1., 1.], [-1., 1.])
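The two extra [-1., 1.] arrays in the output are the polynomial's domain and window; convert changes the basis to ordinary powers while leaving them intact.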

Related

How to avoid two variables referring to the same data in PyTorch?

While initializing, I tried to reduce repetition in my code, so instead of:
output = (torch.zeros(2, 3),
          torch.zeros(2, 3))
I wrote:
z = torch.zeros(2, 3)
output = (z, z)
However, I found that the second method is wrong: if I assign the data to variables h and c, any change to h is also applied to c.
h, c = output
print(h, c)
h += torch.ones(2, 3)
print('-----------------')
print(h, c)
Results of the test above:
tensor([[0., 0., 0.],
        [0., 0., 0.]]) tensor([[0., 0., 0.],
        [0., 0., 0.]])
-----------------
tensor([[1., 1., 1.],
        [1., 1., 1.]]) tensor([[1., 1., 1.],
        [1., 1., 1.]])
Is there a more elegant way to initialize two independent variables?
I agree that your initial line needs no modification, but if you do want an alternative, consider:
z = torch.zeros(2, 3)
output = (z, z.clone())
The reason the other one (output = (z, z)) doesn't work, as you've correctly discovered, is that no copy is made: each entry of the tuple holds a reference to the same tensor z.
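A minimal sketch to convince yourself that clone gives the second tensor its own storage:
import torch

z = torch.zeros(2, 3)
h, c = z, z.clone()

h += torch.ones(2, 3)                # in-place update of h
print(c)                             # still all zeros
print(h.data_ptr() == c.data_ptr())  # False: separate memory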
Alternatively, assign both in a single statement but construct a separate tensor for each, as below:
h, c = torch.zeros(2, 3), torch.zeros(2, 3)

Python 3.x: create a 3D volume from 2D slices

I have a for loop that performs some calculations and creates one slice (a 2D array, say x = 3, y = 3) per iteration, and I want to append/stack the slices along a third dimension inside the same for loop.
I have been trying NumPy's stack, vstack, hstack, and dstack, but I still don't get how to combine the slices along the 3rd dimension the way I want.
So at the end I would like to have something like this (z = 10, x = 3, y = 3):
array([[[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[2., 2., 2.],
        [2., 2., 2.],
        [2., 2., 2.]],
       ...
      ])
Thanks,
You can do it like this:
import numpy as np

arrays = []
for i in range(5):
    arr = np.full((3, 3), i)  # one 2D slice per iteration
    arrays.append(arr)
volume = np.asarray(arrays)  # shape (5, 3, 3)
You can call np.asarray(arrays) inside the loop if you want, but it will not be very efficient. Note that np.concatenate also effectively creates a new NumPy array each time, so its efficiency would be similar. Doing the conversion once, outside the loop, is better.
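If the number of slices is known up front, another option (a sketch; the shape values are assumed from the question) is to preallocate the 3D array and write each slice into place:
import numpy as np

n_slices = 10
volume = np.empty((n_slices, 3, 3))
for i in range(n_slices):
    volume[i] = np.full((3, 3), i)  # each 2D slice goes straight into the volume
print(volume.shape)  # (10, 3, 3)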

How to minimize a quadratic objective function with constraint violations using the penalty method

I have compared many quadratic programming (QP) solvers like cvxopt, qpoases, and osqp, and found that osqp works faster and better for my application.
Now I want to minimize an indefinite quadratic function with both equality and inequality constraints that may be violated depending on various factors. So I want to use an L1 penalty method that penalizes the violating constraints.
For example, I have modified an example so that the constraints are violated:
import osqp
import scipy.sparse as sparse
import numpy as np
# Define problem data
P = sparse.csc_matrix([[4., 1.], [1., 2.]])
q = np.array([1., 1.])
A = sparse.csc_matrix([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
l = np.array([0., 0., 0.2, 1.1])
u = np.array([1., 1., 0.2, 1.1])
# Create an OSQP object
prob = osqp.OSQP()
# Setup workspace and change alpha parameter
prob.setup(P, q, A, l, u, alpha=1.0)
# Solve problem
res = prob.solve()
print(res.x)
Obviously, this is an infeasible problem, so we need to change the objective function to penalize the error.
So I need help formulating this problem so that it can be solved using osqp's Python interface.
Or please let me know if there is any other Python interface available for solving this kind of constraint-violation problem.
In general, absolute-value functions can be dangerous (they are non-differentiable). A standard way to deal with this is to add slack variables. E.g.
g(x) <= 0
becomes
g(x) <= s
s >= 0
Now add a term mu*s to the objective.
For
h(x) = 0
one could do
h(x) = s1 - s2
s1, s2 >= 0
and add mu*(s1+s2) to the objective.
As usual: this is just one approach (there are other formulations).
I had the same problem, and this question helped a lot. This is how I solved it with the OSQP interface.
I redefined the example to be:
# Define problem data
P = sparse.csc_matrix([[4., 1.], [1., 2.]])
q = np.array([1., 1.])
A = sparse.csc_matrix([[1., 0.], [0., 1.], [1., 1.]])
l = np.array([0., 0., 3])
u = np.array([1., 1., 3])
Here the first and second variables are constrained to be at most 1, but their sum must equal 3. This makes the problem infeasible.
Now let's transform the inequality constraints as Erwin suggested, by adding two slack variables:
# Redefine problem data with 2 slack variables
# Added quadratic penalties to variables s1 and s2 with penalty coefficient == 1
P = sparse.csc_matrix([[4., 1., 0., 0.], [1., 2., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])
# Zero linear penalties for s1 and s2.
q = np.array([1., 1., 0., 0.])
# First constraint is x1 <= s1, second is s1 >= 0.
# Third constraint is x2 <= s2, fourth is s2 >= 0.
A = sparse.csc_matrix([[1., 0., -1., 0.], [0., 0., 1., 0.], [0., 1., 0., -1.], [0., 0., 0., 1.], [1., 1., 0., 0.]])
l = np.array([-np.inf, 0., -np.inf, 0., 3])
u = np.array([0., np.inf, 0., np.inf, 3])
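For completeness, a sketch of setting up and solving the modified problem, reusing the imports and setup pattern from the original example:
prob = osqp.OSQP()
prob.setup(P, q, A, l, u, alpha=1.0)
res = prob.solve()
print(res.x)  # approximately [1. 2. 1. 2.]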
When I run the solver, the problem has a solution and the objective is softly penalised for exceeding the upper bounds:
iter   objective    pri res    dua res    rho        time
   1   -4.9403e-03  3.00e+00   5.99e+02   1.00e-01   8.31e-04s
  50    1.3500e+01  1.67e-07   7.91e-08   9.96e-01   8.71e-04s
status: solved
number of iterations: 50
optimal objective: 13.5000
run time: 8.93e-04s
optimal rho estimate: 1.45e+00
[1.00 2.00 1.00 2.00]
Hope this helps somebody.

Scikit-learn cross_val_score: too many indices for array

I have the following code:
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.cross_validation import cross_val_score
#split the dataset for train and test
combnum['is_train'] = np.random.uniform(0, 1, len(combnum)) <= .75
train, test = combnum[combnum['is_train']==True], combnum[combnum['is_train']==False]
et = ExtraTreesClassifier(n_estimators=200, max_depth=None,
                          min_samples_split=10, random_state=0)
labels = train[list(label_columns)].values
tlabels = test[list(label_columns)].values
features = train[list(columns)].values
tfeatures = test[list(columns)].values
et_score = cross_val_score(et, features, labels, n_jobs=-1)
print("{0} -> ET: {1})".format(label_columns, et_score))
Checking the shape of the arrays:
features.shape
Out[19]: (43069, 34)
and
labels.shape
Out[20]: (43069, 1)
and I'm getting:
IndexError: too many indices for array
and this relevant part of the traceback:
---> 22 et_score = cross_val_score(et, features, labels, n_jobs=-1)
I'm creating the data from Pandas dataframes. I searched here and saw some references to possible errors with this method, but I can't figure out how to correct it.
What the data arrays look like:
features
Out[21]:
array([[ 0.,  1.,  1., ...,  0.,  0.,  1.],
       [ 0.,  1.,  1., ...,  0.,  0.,  1.],
       [ 1.,  1.,  1., ...,  0.,  0.,  1.],
       ...,
       [ 0.,  0.,  1., ...,  0.,  0.,  1.],
       [ 0.,  0.,  1., ...,  0.,  0.,  1.],
       [ 0.,  0.,  1., ...,  0.,  0.,  1.]])
labels
Out[22]:
array([[1],
       [1],
       [1],
       ...,
       [1],
       [1],
       [1]])
When we do cross-validation in scikit-learn, the process requires labels of shape (R,) instead of (R, 1). Although they are the same thing to some extent, their indexing mechanisms are different. So in your case, just add:
c, r = labels.shape
labels = labels.reshape(c,)
before passing it to the cross-validation function.
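Equivalently, a one-step sketch using ravel, which flattens the (R, 1) array to (R,):
import numpy as np

labels = np.array([[1], [0], [1]])  # shape (3, 1)
labels = labels.ravel()             # shape (3,)
print(labels.shape)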
It seems to be fixable if you specify the target labels as a single data column from Pandas; if the target has multiple columns, I get a similar error. For example, try:
labels = train['Y']
Adding .ravel() to the y/labels variable passed into the function helped solve this problem with KNN as well.
Try the target
y = df['Survived']
instead. I had used
y = df[['Survived']]
which made the target y a DataFrame; it seems a Series is what's expected.
You might need to play with the dimensions a bit, e.g. selecting a single column of the labels:
labels = labels[:, n]
n being the index of the label column.

Convolution & Deconvolution using Scipy

I am trying to compute a deconvolution using Python. I have a signal, say f(t), which has been convolved with a window function, say g(t). Is there some direct way to compute the deconvolution so I can get back the original signal?
For instance, f(t) = exp(-t**2/3) (a Gaussian function) and g(t) is a trapezoidal function.
Thanks in advance for your kind suggestions.
Is this an analytical or numerical problem?
If it's numerical, use scipy.signal.deconvolve: http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.deconvolve.html
From the docs:
>>> import numpy as np
>>> from scipy import signal
>>> sig = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1])
>>> filter = np.array([1, 1, 0])
>>> res = signal.convolve(sig, filter)
>>> signal.deconvolve(res, filter)
(array([ 0.,  0.,  0.,  0.,  0.,  1.,  1.,  1.,  1.]),
 array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]))
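Applied to your case, here is a hedged sketch (the sample spacing and the trapezoidal window values are assumptions, not taken from your post): convolve the Gaussian with the window, then recover it with signal.deconvolve:
import numpy as np
from scipy import signal

t = np.linspace(-3, 3, 61)
f = np.exp(-t**2 / 3)               # the Gaussian signal
g = np.array([0.5, 1.0, 1.0, 0.5])  # a simple trapezoidal window (assumed values)

res = signal.convolve(f, g)         # forward convolution
recovered, remainder = signal.deconvolve(res, g)
print(np.allclose(recovered, f))    # True, up to floating-point error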
Otherwise, if you want an analytic solution, you might be using the wrong tool.
Additionally, a tip for future googling: when you're talking about convolution, the action is usually called "convolved", not "convoluted"; see https://english.stackexchange.com/questions/64046/convolve-vs-convolute
