Python linprog to maximise objective function

It has been a while since I have done this, so I am a bit rusty, but the problem is:
max cᵀx
s.t. Ax ≤ b
And I have my A matrix of constraints, which is (1448x1359):
[[ 1. 1. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 1. 1. 1.]]
Then I have my bound vector b (1448x1):
[ 1. 1. 7. ..., 2. 1. 2.]
And my objective coefficient vector c to be maximised, which is a vector of ones (1359x1).
Now in other packages my maximised objective value is 841; however, using linprog:
res = linprog(c=OBJ_N, A_ub=A, b_ub=b, options={"disp": True})
It optimised successfully to -0.0, so I wonder if I'm using the right call in Python and have my constraints the right way around?
Edit: OK, that makes sense; it was trying to minimise. I have rewritten it now (swapped c and b and transposed A to minimise):
# duality: (max cᵀx s.t. Ax ≤ b) = (min bᵀy s.t. Aᵀy = c, y ≥ 0)
# (i): minimise number of shops no bounds
ID = np.ones(len(w[0]))
print(ID)
print(ID.shape) #1359
At = A.transpose()
need_divest = (A.dot(ID)) - 1
print(need_divest)
print(need_divest.shape) #1448
res = linprog(c=need_divest, A_eq=At, b_eq=ID, options={"disp": True})
print(res)
However, I get "message: 'Optimization failed. Unable to find a feasible starting point.'"

I guess you are probably minimizing instead of maximizing your objective function.
Try this (inserting a - in front of your objective function coefficients):
res = linprog(c=-OBJ_N, A_ub=A, b_ub=b, options={"disp": True})
Your result should then be -841.
This works simply because:
max(f(x)) = -min(-f(x))
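As a self-contained sketch of the same trick (toy data rather than the question's 1448x1359 problem):
import numpy as np
from scipy.optimize import linprog

# maximise x0 + x1  subject to  x0 + 2*x1 <= 4,  3*x0 + x1 <= 6,  x >= 0
c = np.ones(2)
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

# linprog minimises, so negate c and negate the optimum back
res = linprog(c=-c, A_ub=A, b_ub=b)
print(-res.fun)  # 2.8, the maximised objective value
print(res.x)     # [1.6, 1.2]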


How can I implement a locally connected layer in pure NumPy

I would like to build a locally connected weight matrix that represents a locally connected neural network in pure python/numpy without deep learning frameworks like Torch or TensorFlow.
The weight matrix is a non-square 2D matrix with dimensions (number_input, number_output) (an autoencoder in my case; input > hidden).
So the function I would like to build takes the matrix dimensions and the size of the receptive field (the number of local connections) and returns the associated weight matrix. I've already created a function like this, but for an input size of 8 and an output size of 4 (and RF = 4) my function outputs:
[[ 0.91822845 0. 0. 0. ]
[-0.24264655 -0.54754138 0. 0. ]
[ 0.55617366 0.12832513 -0.28733965 0. ]
[ 0.27993286 -0.33150324 0.06994107 0.61184121]
[ 0. 0.04286912 -0.20974503 -0.37633903]
[ 0. 0. -0.10386762 0.33553009]
[ 0. 0. 0. 0.09562682]
[ 0. 0. 0. 0. ]]
but I would like:
[[ 0.91822845 0. 0. 0. ]
[-0.24264655 -0.54754138 0. 0. ]
[ 0.55617366 0.12832513 0. 0. ]
[ 0 -0.33150324 0.06994107 0 ]
[ 0. 0.04286912 -0.20974503 0. ]
[ 0. 0. -0.10386762 0.33553009]
[ 0. 0. 0.11581854 0.09562682]
[ 0. 0. 0. 0.03448418]]
Here's my Python code:
import numpy as np

def local_weight(input_size, output_size, RF):
    input_range = 1.0 / input_size ** (1/2)
    w = np.zeros((input_size, output_size))
    for i in range(0, RF):
        for j in range(0, output_size):
            w[j+i, j] = np.random.normal(loc=0, scale=input_range, size=1)
    return w

print(local_weight(8, 4, 4))
I look forward to your response!
The trick is to add a small amount of padding so you can work more comfortably (and to control the limits).
Then you must define the step you will take with respect to the input (it is just input/output). Once this is done you just have to fill in the gaps and then remove the padding.
import math
import numpy as np

def local_weight(input_size, output_size, RF):
    input_range = 1.0 / input_size ** (1/2)
    padding = (RF - 1) // 2
    w = np.zeros(shape=(input_size + 2*padding, output_size))
    step = float(w.shape[0] - RF) / (output_size - 1)
    for i in range(output_size):
        j = int(math.ceil(i * step))
        j_next = j + RF
        w[j:j_next, i] = np.random.normal(loc=0, scale=input_range, size=(j_next - j))
    return w[padding:-padding, :]
I hope that is what you are looking for.
EDIT:
I think the implementation was misguided, so I reimplemented the function. Step by step (see the sanity check after this list):
1. Calculate the radius of the receptive field (the padding).
2. Determine the size of W.
3. Calculate the step, removing the padding area so the window always stays inside.
4. Fill in the weights.
5. Remove the padding.
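A quick sanity check of the result (the weights are random, but the shape and the zero pattern are deterministic):
w = local_weight(8, 4, 4)
print(w.shape)               # (8, 4)
print((w != 0).astype(int))  # shows which inputs each output unit connects to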

How can I include categorical distributions as observed data in PyMC3?

I have a dataset where each observation is basically a series of noisy measurements and one of the measurements contains signal.
Raw observed data y:
[[ 1.93542253e-01 1.39657327e-04 7.53918636e-01 5.23994535e-02]
[ 6.44964587e-02 8.50087384e-01 1.09894665e-02 7.44266910e-02]
[ 1.68387463e-02 5.38121456e-01 6.98554551e-02 3.75184342e-01]
...,
[ 5.79786789e-01 1.47417427e-02 3.15395731e-01 9.00757372e-02]
[ 8.66796124e-02 8.66999904e-02 4.47848127e-02 7.81835584e-01]
[ 8.18765043e-01 3.23448859e-03 5.61247840e-04 1.77439220e-01]]
I want to put each observation into an appropriate cluster based on this measurement data. For example the first datapoint above is drawn from the third column and the second datapoint above is drawn from the second column. If I sample from the known original distribution and provide those samples to the model as inputs to the Categorical I can get back the original distribution.
Sampled from original data y_choice:
[ 2. 3. 3. 1. 2. 2. 2. 2. 3. 3. 1. 2. 3. 0. 2. 0. 3. 1.
3. 0. 2. 0. 3. 0. 2. 0. 1. 0. 3. 0. 2. 0. 0. 0. 3. 0.
2. 0. 0. 3. 3. 1. ...
However this seems like I'm losing information because my choice sampler is outside the PyMC model. How can I supply the actual observed data y directly into the model? I'm guessing it has something to do with another model parameter based on the Dirichlet, but I haven't been able to wrap my head around how that works.
The sample code I'm operating from is below. I want to be able to supply y to the model and get the true_probs back out, but I've only managed to get it to work with y_choice so far.
import numpy as np
from pymc3 import *
import pymc3 as pm
import pandas as pd

print 'pymc3 version: ' + pm.__version__

def generate_noisy_observations():
    y = np.ones((sample_size, k))
    for i in range(sample_size):
        #print("Iteration %d" % i)
        true_category = np.random.choice(k, size=1, p=true_probs)
        true_distribution = np.zeros(k)
        true_distribution[true_category] = 1
        noise_distribution = np.random.dirichlet(np.ones(k))
        noise = np.random.normal(0, 1, k)
        distribution_weights = [0.9, 0.1]
        raw_distribution = (true_distribution*distribution_weights[0] + noise**2*distribution_weights[1]) / \
                           (np.sum(true_distribution*distribution_weights[0]) + np.sum(noise**2*distribution_weights[1]))
        y[i] = raw_distribution
    return y

def generate_choices_from_noisy_observations(y):
    y_choice = np.ones(sample_size)
    for i in range(sample_size):
        y_choice[i] = np.random.choice(k, size=1, p=y[i])
    return y_choice

sample_size = 1000
true_probs = [0.2, 0.1, 0.3, 0.4]
k = len(true_probs)
y = generate_noisy_observations()
y_choice = generate_choices_from_noisy_observations(y)

with pm.Model() as multinom_test:
    probs = pm.Dirichlet('a', a=np.ones(k))
    #data = Categorical('data', p=probs, observed=y)
    data = Categorical('data', p=probs, observed=y_choice)
    start = pm.find_MAP()
    trace = pm.sample(50000, pm.Metropolis())

pm.traceplot(trace[500:])

Conditional numpy cumulative sum

I'm looking for a way to calculate the cumulative sum with numpy, but I don't want to roll the value forward (I want to set it to zero instead) in case the cumulative sum is very close to zero and negative.
For instance
a = np.asarray([0, 4999, -5000, 1000])
np.cumsum(a)
returns [0, 4999, -1, 999]
but I'd like to set the [2]-value (-1) to zero during the calculation. The problem is that this decision can only be made during the calculation, as the intermediate result isn't known a priori.
The expected array is: [0, 4999, 0, 1000]
The reason for this is that I'm getting very small values (floating point, not integers as in the example) that stem from floating point calculations and should in reality be zero. Calculating the cumulative sum compounds those values, which leads to errors.
The Kahan summation algorithm could solve the problem. Unfortunately, it is not implemented in numpy. This means a custom implementation is required:
import numpy as np

def kahan_cumsum(x):
    x = np.asarray(x)
    cumulator = np.zeros_like(x)
    compensation = 0.0
    cumulator[0] = x[0]
    for i in range(1, len(x)):
        y = x[i] - compensation
        t = cumulator[i - 1] + y
        compensation = (t - cumulator[i - 1]) - y
        cumulator[i] = t
    return cumulator
I have to admit, this is not exactly what was asked for in the question. (A value of -1 at the 3rd output of the cumsum is correct in the example). However, I hope this solves the actual problem behind the question, which is related to floating point precision.
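For reference, on the integer example from the question it reproduces plain np.cumsum (the compensation only pays off with floats):
print(kahan_cumsum([0, 4999, -5000, 1000]))
# [   0 4999   -1  999]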
I wonder if rounding will do what you are asking for:
np.cumsum(np.around(a,-1))
# the -1 means it rounds to the nearest 10
gives
array([ 0, 5000, 0, 1000])
It is not exactly the expected array given in the question, but using around, perhaps with the decimals parameter set to 0, might work when you apply it to the problem with floats.
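For example, with floats carrying a tiny round-off residue (choosing decimals=0 assumes the noise is well below 0.5):
import numpy as np

a = np.array([0., 4999., -4999.0000000001, 1000.])
print(np.cumsum(a))                # index 2 is a tiny negative residue (~ -1e-10)
print(np.cumsum(np.around(a, 0)))  # [   0. 4999.    0. 1000.]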
Probably the best way to go is to write this bit in Cython (name the file cumsum_eps.pyx):
cimport numpy as cnp
import numpy as np

cdef inline _cumsum_eps_f4(float *A, int ndim, int dims[], float *out, float eps):
    cdef float sum
    cdef size_t ofs
    N = 1
    for i in xrange(0, ndim - 1):
        N *= dims[i]
    ofs = 0
    for i in xrange(0, N):
        sum = 0
        for k in xrange(0, dims[ndim - 1]):
            sum += A[ofs]
            if abs(sum) < eps:
                sum = 0
            out[ofs] = sum
            ofs += 1

def cumsum_eps_f4(cnp.ndarray[cnp.float32_t, mode='c'] A, shape, float eps):
    cdef cnp.ndarray[cnp.float32_t] _out
    cdef cnp.ndarray[cnp.int_t] _shape
    N = np.prod(shape)
    out = np.zeros(N, dtype=np.float32)
    _out = <cnp.ndarray[cnp.float32_t]> out
    _shape = <cnp.ndarray[cnp.int_t]> np.array(shape, dtype=np.int)
    _cumsum_eps_f4(&A[0], len(shape), <int*> &_shape[0], &_out[0], eps)
    return out.reshape(shape)

def cumsum_eps(A, axis=None, eps=np.finfo('float').eps):
    A = np.array(A)
    if axis is None:
        A = np.ravel(A)
    else:
        axes = list(xrange(len(A.shape)))
        axes[axis], axes[-1] = axes[-1], axes[axis]
        A = np.transpose(A, axes)
    if A.dtype == np.float32:
        out = cumsum_eps_f4(np.ravel(np.ascontiguousarray(A)), A.shape, eps)
    else:
        raise ValueError('Unsupported dtype')
    if axis is not None:
        out = np.transpose(out, axes)
    return out
then you can compile it like this (Windows, Visual C++ 2008 Command Line):
\Python27\Scripts\cython.exe cumsum_eps.pyx
cl /c cumsum_eps.c /IC:\Python27\include /IC:\Python27\Lib\site-packages\numpy\core\include
link /dll cumsum_eps.obj C:\Python27\libs\python27.lib /OUT:cumsum_eps.pyd
or like this (Linux use .so extension/Cygwin use .dll extension, gcc):
cython cumsum_eps.pyx
gcc -c cumsum_eps.c -o cumsum_eps.o -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include
gcc -shared cumsum_eps.o -o cumsum_eps.so -lpython2.7
and use it like this:
from cumsum_eps import *
import numpy as np
x = np.array([[1,2,3,4], [5,6,7,8]], dtype=np.float32)
>>> print cumsum_eps(x)
[ 1. 3. 6. 10. 15. 21. 28. 36.]
>>> print cumsum_eps(x, axis=0)
[[ 1. 2. 3. 4.]
[ 6. 8. 10. 12.]]
>>> print cumsum_eps(x, axis=1)
[[ 1. 3. 6. 10.]
[ 5. 11. 18. 26.]]
>>> print cumsum_eps(x, axis=0, eps=1)
[[ 1. 2. 3. 4.]
[ 6. 8. 10. 12.]]
>>> print cumsum_eps(x, axis=0, eps=2)
[[ 0. 2. 3. 4.]
[ 5. 8. 10. 12.]]
>>> print cumsum_eps(x, axis=0, eps=3)
[[ 0. 0. 3. 4.]
[ 5. 6. 10. 12.]]
>>> print cumsum_eps(x, axis=0, eps=4)
[[ 0. 0. 0. 4.]
[ 5. 6. 7. 12.]]
>>> print cumsum_eps(x, axis=0, eps=8)
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 8.]]
>>> print cumsum_eps(x, axis=1, eps=3)
[[ 0. 0. 3. 7.]
[ 5. 11. 18. 26.]]
and so on. Of course, normally eps would be some small value; integers are used here just for the sake of demonstration / ease of typing.
If you need this for double as well, the _f8 variants are trivial to write and another case has to be handled in cumsum_eps().
When you're happy with the implementation you should make it a proper part of your setup.py (see the Cython setup.py documentation); a minimal sketch follows.
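Something like this (a sketch assuming the file is named cumsum_eps.pyx as above; build with python setup.py build_ext --inplace):
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np

# the .pyx cimports numpy, so NumPy's headers must be on the include path
ext = Extension("cumsum_eps", ["cumsum_eps.pyx"],
                include_dirs=[np.get_include()])

setup(ext_modules=cythonize([ext]))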
Update #1: If you have good compiler support in your runtime environment you could try Theano to implement either the compensation algorithm or your original idea:
import numpy as np
import theano
import theano.tensor as T
from theano.ifelse import ifelse

A = T.vector('A')
sum = T.as_tensor_variable(np.asarray(0, dtype=np.float64))
res, upd = theano.scan(fn=lambda cur_sum, val: ifelse(T.lt(cur_sum + val, 1.0),
                                                      np.asarray(0, dtype=np.float64),
                                                      cur_sum + val),
                       outputs_info=sum, sequences=A)
f = theano.function(inputs=[A], outputs=res)
f([0.9, 2, 3, 4])
will give [0. 2. 5. 9.] as output: the running total is reset to zero whenever it would stay below the 1.0 threshold. In either Cython or this you get roughly the performance of native code.
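If native-code speed isn't required, the clamp-to-zero idea is also a few lines of plain Python; a sketch (cumsum_clamped is a made-up name, and eps plays the same role as in cumsum_eps above):
import numpy as np

def cumsum_clamped(x, eps=1e-12):
    # cumulative sum that resets the running total to zero
    # whenever it falls within eps of zero
    out = np.empty(len(x), dtype=float)
    total = 0.0
    for i, v in enumerate(x):
        total += v
        if abs(total) < eps:
            total = 0.0
        out[i] = total
    return out

print(cumsum_clamped([0, 4999, -5000, 1000], eps=10))
# [   0. 4999.    0. 1000.] -- the expected array from the question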

Does this function compute convolution correctly?

I need to write a basic function that computes a 2D convolution between a matrix and a kernel.
I have recently got into Python, so I'm sorry for my mistakes.
My dissertation advisor said that I should write one myself, so that I understand it better and can modify it for future improvements.
I have found an example of this function on a website, but I don't understand how the returned values are obtained.
This is the code (from http://docs.cython.org/src/tutorial/numpy.html):
from __future__ import division
import numpy as np

def naive_convolve(f, g):
    # f is an image and is indexed by (v, w)
    # g is a filter kernel and is indexed by (s, t),
    #   it needs odd dimensions
    # h is the output image and is indexed by (x, y),
    #   it is not cropped
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")
    # smid and tmid are number of pixels between the center pixel
    # and the edge, ie for a 5x5 filter they will be 2.
    #
    # The output size is calculated by adding smid, tmid to each
    # side of the dimensions of the input image.
    vmax = f.shape[0]
    wmax = f.shape[1]
    smax = g.shape[0]
    tmax = g.shape[1]
    smid = smax // 2
    tmid = tmax // 2
    xmax = vmax + 2*smid
    ymax = wmax + 2*tmid
    # Allocate result image.
    h = np.zeros([xmax, ymax], dtype=f.dtype)
    # Do convolution
    for x in range(xmax):
        for y in range(ymax):
            # Calculate pixel value for h at (x,y). Sum one component
            # for each pixel (s, t) of the filter g.
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h
I don't know if this function does the weighted sum of input and filter, because I don't see an explicit sum here.
I applied this with
kernel = np.array([(1, 1, -1), (1, 0, -1), (1, -1, -1)])
file = np.ones((5,5))
naive_convolve(file, kernel)
I got this matrix:
[[ 1. 2. 1. 1. 1. 0. -1.]
[ 2. 3. 1. 1. 1. -1. -2.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 2. 1. -1. -1. -1. -3. -2.]
[ 1. 0. -1. -1. -1. -2. -1.]]
I tried to do a manual calculation (on paper) for the first full iteration of the function and I got h[0,0] = 0, because of the matrix product filter[0, 0] * matrix[0, 0], but the function returns 1. I am very confused by this.
If anyone can help me understand what is going on here, I would be very grateful. Thanks! :)
Yes, that function computes the convolution correctly. You can check this using scipy.signal.convolve2d:
import numpy as np
from scipy.signal import convolve2d
kernel = np.array([(1, 1, -1), (1, 0, -1), (1, -1, -1)])
file = np.ones((5,5))
x = convolve2d(file, kernel)
print x
Which gives:
[[ 1. 2. 1. 1. 1. 0. -1.]
[ 2. 3. 1. 1. 1. -1. -2.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 3. 3. 0. 0. 0. -3. -3.]
[ 2. 1. -1. -1. -1. -3. -2.]
[ 1. 0. -1. -1. -1. -2. -1.]]
It's hard to know how to explain all this to you, since I don't know where to start or why the other explanations you've found aren't working for you. I think, though, that you are doing all of this as a learning exercise so you can figure it out for yourself. From what I've seen on SO, asking big questions here is not a substitute for working it through yourself.
Your specific question of why h[0,0] = 0 in your hand calculation doesn't match this matrix is a good one. In fact, both are correct. The reason for the mismatch is that the output of the convolution doesn't have its mathematical indices written out explicitly; they are implied. The center, which is mathematically indicated by the indices [0,0], corresponds to x[3,3] in the matrix above.
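To make that concrete, work the corner entry through the code's index clipping with the 3x3 kernel above (so smid = tmid = 1). At x = y = 0, s_from = max(1 - 0, -1) = 1 and s_to = min((7 - 0) - 1, 2) = 2, so only s = 1 survives, and likewise only t = 1. The single contributing term is therefore:
h[0, 0] = g[smid - 1, tmid - 1] * f[0, 0] = g[0, 0] * f[0, 0] = 1 * 1 = 1
Your hand calculation, with the kernel fully overlapping the image, corresponds instead to the full output's entry h[2, 2] = sum(kernel) * 1 = 0, which matches the 0 you computed.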

Opencv: Computing fundamental matrix from R and T

I want to compute the epipolar lines of a stereo camera.
I know both camera intrinsics matrix as well as R and T.
I tried to compute the essential matrix as told in the Learning OpenCV book and Wikipedia:
E = R·[t]x
where [t]x is the matrix representation of the cross product with t, so the fundamental matrix is:
F = inv(M2)ᵀ · E · inv(M1)
I tried to implement this with Python and then use the OpenCV function cv2.computeCorrespondEpilines to compute the epilines.
The problem is that the lines I get don't converge to a point as they should...
I guess I must have a problem computing F.
This is the relevant piece of code:
T #Contains translation vector
R #Rotation matrix
S=np.mat([[0,-T[2],T[1]],[T[2],0,-T[1]],[-T[1],T[0],0]])
E=np.mat(R)*S
M1=np.mat(self.getCameraMatrix(cam1))
M1_inv=np.linalg.inv(M1)
M2=np.mat(self.getCameraMatrix(cam2))
M2_inv=np.linalg.inv(M2)
F=(M2_inv.T)*E*M1_inv
The matrices are:
M1=[[ 776.21275864 0. 773.70733324]
[ 0. 776.21275864 627.82872456]
[ 0. 0. 1. ]]
M2=[[ 764.35675708 0. 831.26052677]
[ 0. 764.35675708 611.85363745]
[ 0. 0. 1. ]]
R=[[ 0.9999902 0.00322032 0.00303674]
[-0.00387935 0.30727176 0.9516139 ]
[ 0.0021314 -0.95161636 0.30728124]]
T=[ 0.0001648 0.04149158 -0.02854541]
The output F I get is something like:
F=[[ 4.75910592e-07 6.28777619e-08 -2.78886982e-04]
[ -4.66942275e-08 -7.62837993e-08 -7.34825205e-04]
[ -8.86965149e-04 -6.86717269e-04 1.40633035e+00]]
EDITED:
The cross-product matrix was wrong; it has to be:
S=np.mat([[0,-T[2],T[1]],[T[2],0,-T[0]],[-T[1],T[0],0]])
The epilines now converge at the epipole.
Hmm, your F matrix seems wrong; to begin with, its rank is closer to 3 than 2.
From your data I get:
octave:9> tx = [ 0 -T(3) T(2)
> T(3) 0 -T(1)
> -T(2) T(1) 0]
tx =
0.000000 0.028545 0.041492
-0.028545 0.000000 -0.000165
-0.041492 0.000165 0.000000
octave:11> E= R* tx
E =
-2.1792e-04 2.8546e-02 4.1491e-02
-4.8255e-02 4.6088e-05 -2.1160e-04
1.4415e-02 1.1148e-04 2.4526e-04
octave:12> F=inv(M1')*E*inv(M2)
F =
-3.6731e-10 4.8113e-08 2.4320e-05
-8.1333e-08 7.7681e-11 6.7289e-05
7.0206e-05 -3.7128e-05 -7.6583e-02
octave:14> rank(F)
ans = 2
Which seems to make more sense. Can you try that F matrix in your plotting code?
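For convenience, here is the same computation in NumPy (a sketch mirroring the Octave session, with T, R, M1 and M2 as quoted above):
import numpy as np

T = np.array([0.0001648, 0.04149158, -0.02854541])
R = np.array([[ 0.9999902,   0.00322032,  0.00303674],
              [-0.00387935,  0.30727176,  0.9516139 ],
              [ 0.0021314,  -0.95161636,  0.30728124]])
M1 = np.array([[776.21275864, 0., 773.70733324],
               [0., 776.21275864, 627.82872456],
               [0., 0., 1.]])
M2 = np.array([[764.35675708, 0., 831.26052677],
               [0., 764.35675708, 611.85363745],
               [0., 0., 1.]])

# [t]x, the cross-product matrix of T
tx = np.array([[0., -T[2], T[1]],
               [T[2], 0., -T[0]],
               [-T[1], T[0], 0.]])

E = R.dot(tx)
F = np.linalg.inv(M1.T).dot(E).dot(np.linalg.inv(M2))
print(np.linalg.matrix_rank(F))  # 2, as a proper fundamental matrix should be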
