Chumpy minimization of gaussian pyramid leads to dimension mismatch - python

I am attempting to minimize an energy function between a rendered 3D scene and an image with opendr, as in the example given in the OpenDR paper.
I have essentially copied the code from the paper, but used my own image and 3D object. Here is the code to create the renderer:
V = ch.array(m.v)
rn = TexturedRenderer()
rn.frustum = {'near': 1., 'far': 1., 'width': 350, 'height': 500}
rn.camera = ProjectPoints(v=m.v, t=np.array([0, 0, 1]), rt=np.zeros(3), f=[450, 450],
                          c=[350/2, 500/2], k=np.zeros(5))
rn.set(v=m.v, f=m.f, texture_image=img, ft=m.ft, vt=m.vt, bgcolor=ch.zeros(3))
A = SphericalHarmonics(vn=VertNormals(v=V, f=rn.f), components=ch.array([4,0,0,0]),
                       light_color=ch.array([1,1,1]))
rn.set(vc=A)
where m is the mesh of the object loaded with opendr's load_mesh. I then calculate the energy and use ch.minimize just as in the paper:
translation, rotation = ch.array([0,0,4]), ch.zeros(3)
rn.v = translation + m.v.dot(Rodrigues(rotation))
# Create the energy
difference = rn - load_image(img_file)
E = gaussian_pyramid(difference, n_levels=5, normalization='SSE')
# Minimize the energy
light_parms = A.components
ch.minimize(E, x0=[translation])
ch.minimize(E, x0=[translation, rotation, light_parms])
I've confirmed that the renderer and the image are both of shape 500x350x3. The minimization always raises a dimension mismatch in a scipy sparse matmul operation that happens when the state is updated in chumpy's minimize_dogleg function. I've modified that file to print the shapes of the matrices in the matmul operation through the iterations, and with n_levels=5 the shapes it reports are
(129978, 519912) (519912, 525000)
(31992, 127452) (127452, 129978)
(7686, 30744) (30744, 31992)
(1800, 7080) (7080, 7686)
(378, 1512) (1512, 1800)
(256, 256) (105279, 3)
with the last one being where the error is raised. I've altered the number of pyramid levels and the results are the same; the final shapes of the two matrices are also always the same (unless I go too high, in which case it gives a different error because the matrix becomes too small). One thing I notice is that the error always occurs at n_levels + 1 iterations, which seems odd to me as I would expect it to stop at n_levels, but I have little understanding of what's actually happening in the minimization. For a little extra context, the mesh has 35093 vertices and E has a length of 686970. Could anyone explain what exactly is happening here, how the minimization works, and why the error occurs at n_levels + 1 iterations?

The problem was fixed by using the opendr version from https://github.com/polmorenoc/opendr instead of the mattloper version.


Resampling 2-d array using Fourier transform method

I have a question on resampling a 2-d array.
Sometimes the original size of geoscience data needs to be transformed to another size. If the coarsening ratio is the same for each axis, the task is simple: np.reshape combined with a block sum turns a 100x100 2-d array into a 50x50 one without data loss. The code is shown below:
## create the original data
xc1, xc2, yc1, yc2 = 100, 110, 35, 45
XSIZE,YSIZE=100,100
lon,lat = np.linspace(xc1,xc2,XSIZE),np.linspace(yc1,yc2,YSIZE)
pop = np.random.uniform(low=1000, high=50000, size=(XSIZE*YSIZE,)).reshape(YSIZE,XSIZE)
## reshape
shape = np.array(pop.shape, dtype=float)
coarseness = 2 # the new shape is in 50 x 50
new_shape = coarseness * np.ceil(shape/coarseness).astype(int)
zp_pop = np.zeros(new_shape)
zp_pop[:int(shape[0]), :int(shape[1])] = pop
temp = zp_pop.reshape((new_shape[0] // coarseness, coarseness,
                       new_shape[1] // coarseness, coarseness))
coarse_pop = np.sum(temp, axis=(1,3))
print (pop.sum())
print (coarse_pop.sum())
However, when the coarsening factor differs between axes, this method cannot be applied, so I turned to another approach. Here is an example where I tried to use the FFT to generate a 60x80 array as output:
from scipy import fftpack
pop_fft = fftpack.fft2(pop,shape = (60,80))
pop_res = fftpack.ifft2(pop_fft).real
print(pop.sum())
print(pop_res.sum())
254208134.8356425
122048754.13639387
The data loss was significant, so I am posting my issue here. Maybe the resampling function I used is not correct, or there is a better approach to deal with this situation. Any advice or comments are highly appreciated!
When you set up the 'coarse array' yourself you sum over adjacent entries instead of computing the average or interpolating.
This way the sum over all elements in the coarse and original arrays is identical: str((coarse_pop.sum()-pop.sum())/(0.5*(pop.sum()+coarse_pop.sum()))) gives '-1.1638426077573779e-16', only a tiny numerical error.
The FFT-based resampling keeps the values at roughly the same level instead of accumulating them, so it is the mean, not the sum, that you should compare. If you look at the mean of the fftpack-resampled coarse array, it matches up:
print(pop.mean())
print(pop_res.mean())
25606.832220313503
25496.03271480075
Alternatively, you can correct for the number of elements yourself:
print(pop.sum())
print(pop_res.sum()*100*100/(60*80))
256068322.20313504
254960327.14800745
I don't know the details of your problem, but the fftpack way of downsampling the array makes more sense to me. If it's not what you want, you can apply the prefactor to the original array, like pop_fft = fftpack.fft2(pop*100*100/(60*80), shape=(60,80)).
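For completeness, here is a minimal, self-contained sketch of that prefactor idea, so the resampled total stays close to the original total; the array contents are random stand-ins for pop, and the 100x100 to 60x80 sizes mirror the example above:
import numpy as np
from scipy import fftpack

ny_old, nx_old = 100, 100   # original grid
ny_new, nx_new = 60, 80     # target grid

pop = np.random.uniform(low=1000, high=50000, size=(ny_old, nx_old))

# scale the input so that the smaller output grid carries the same total "mass"
scale = (ny_old * nx_old) / (ny_new * nx_new)
pop_fft = fftpack.fft2(pop * scale, shape=(ny_new, nx_new))
pop_res = fftpack.ifft2(pop_fft).real

print(pop.sum())      # original total
print(pop_res.sum())  # approximately the same total (not exactly conserved)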

How should I modify the test data for SVM method to be able to use the `precomputed` kernel function without error?

I am using sklearn.svm.SVR for a regression task in which I want to use my own customized kernel. Here are some dataset samples and the code:
index  density      speed      label
    0       14  58.844020  77.179139
    1       29  67.624946  78.367394
    2       44  77.679100  79.143744
    3       59  79.361877  70.048869
    4       74  72.529289  74.499239
... and so on
from sklearn import svm
import pandas as pd
import numpy as np
density = np.random.randint(0,100, size=(3000, 1))
speed = np.random.randint(20,80, size=(3000, 1)) + np.random.random(size=(3000, 1))
label = np.random.randint(20,80, size=(3000, 1)) + np.random.random(size=(3000, 1))
d = np.hstack((density, speed, label))
data = pd.DataFrame(d, columns=['density', 'speed', 'label'])
data.density = data.density.astype(dtype=np.int32)
def my_kernel(X,Y):
return np.dot(X,X.T)
svr = svm.SVR(kernel=my_kernel)
x = data[['density', 'speed']].iloc[:2000]
y = data['label'].iloc[:2000]
x_t = data[['density', 'speed']].iloc[2000:3000]
y_t = data['label'].iloc[2000:3000]
svr.fit(x,y)
y_preds = svr.predict(x_t)
The problem happens in the last line, svr.predict(x_t), which says:
X.shape[1] = 1000 should be equal to 2000, the number of samples at training time
I searched the web for a way to deal with the problem, but many similar questions (like {1}, {2}, {3}) were left unanswered.
Actually, I had used SVM methods with rbf, sigmoid, etc. before and the code worked just fine, but this was my first time using a customized kernel and I suspected it must be the reason this error happened.
So after a little research and reading the documentation, I found out that when using precomputed kernels, the matrix passed to SVR.predict() must have shape [n_samples_test, n_samples_train].
I wonder how to modify x_t in order to get predictions so that everything works just as it does when we don't use customized kernels.
If possible, please also describe the reason why the input to svm.predict differs between the precomputed kernel and the other kernels.
I really hope the unanswered questions related to this issue can then be answered as well.
The problem is in your kernel function: it doesn't do the job.
As the documentation https://scikit-learn.org/stable/modules/svm.html#using-python-functions-as-kernels says, "Your kernel must take as arguments two matrices of shape (n_samples_1, n_features), (n_samples_2, n_features) and return a kernel matrix of shape (n_samples_1, n_samples_2)." The sample kernel on the same page satisfies this requirement:
def my_kernel(X, Y):
return np.dot(X, Y.T)
In your function the second argument of dot is X.T, and thus the output has shape (n_samples_1, n_samples_1), which is not what is expected.
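As a quick check that the corrected kernel resolves the predict error, here is a minimal sketch using synthetic data with the same shapes as in the question (2000 training and 1000 test samples); the data itself is just a stand-in:
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
x = rng.random((2000, 2)) * 100      # training features
y = rng.random(2000) * 60 + 20       # training targets
x_t = rng.random((1000, 2)) * 100    # test features

def my_kernel(X, Y):
    # (n_samples_1, n_features) x (n_samples_2, n_features) -> (n_samples_1, n_samples_2)
    return np.dot(X, Y.T)

svr = svm.SVR(kernel=my_kernel)
svr.fit(x, y)
# at predict time sklearn calls my_kernel(x_t, x), giving a (1000, 2000) matrix
y_preds = svr.predict(x_t)
print(y_preds.shape)  # (1000,)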
A shape mismatch means the test data and train data do not have compatible shapes; always think in terms of numpy matrices or arrays, because any arithmetic operation needs compatible shapes. That's why we check array.shape.
You can reshape the data to [n_samples_test, n_samples_train] yourself, but it's not the best idea; array.shape, reshape, and resize are the tools used for that.
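Regarding the [n_samples_test, n_samples_train] shape mentioned in the question: that requirement applies when kernel='precomputed', where you compute the Gram matrices yourself. A minimal sketch of that workflow, reusing the synthetic shapes above (a plain linear Gram matrix is assumed purely for illustration):
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
x = rng.random((2000, 2)) * 100      # training features
y = rng.random(2000) * 60 + 20       # training targets
x_t = rng.random((1000, 2)) * 100    # test features

svr = svm.SVR(kernel='precomputed')

K_train = np.dot(x, x.T)     # (n_train, n_train) Gram matrix between training samples
svr.fit(K_train, y)

K_test = np.dot(x_t, x.T)    # (n_test, n_train) Gram matrix between test and training samples
y_preds = svr.predict(K_test)
print(K_test.shape, y_preds.shape)   # (1000, 2000) (1000,)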

How to find Mahalanobis distance between two 1D arrays in Python?

I have two 1D arrays, and I need to find out the Mahalanobis distance between them.
Array 1
-0.125510275,0.067021735,0.140631825,-0.014300184,-0.122152582,0.002372072,-0.050777748,-0.106606245,0.149123222,-0.159149423,0.210138127,0.031959131,-0.068411253,-0.038253143,-0.024590122,0.101361006,-0.160774037,-0.183688596,-0.07163775,-0.096662685,-0.000117288,0.14251323,-0.030461289,-0.006710192,-0.217195332,-0.338565469,-0.030219197,-0.100772612,0.144092739,-0.092911556,-0.008420993,0.042907588,-0.212668449,-0.009366207,-7.01E-05,0.134508118,-0.015715659,-0.050884761,0.18804647,0.04946585,-0.242626131,0.099951334,0.053660966,0.275807977,0.216019884,-0.009127878,0.019819722,-0.043750495,0.12940146,-0.259942383,0.061821692,0.107142501,0.098196507,0.022301452,0.079412982,-0.131031215,-0.049483716,0.126781181,-0.195536733,0.077051811,0.061049294,-0.039563753,0.02573989,0.025330214,0.204785526,0.099218346,-0.050533134,-0.109173119,0.205652237,-0.168003649,-0.062734045,0.100320764,-0.063513778,-0.120843001,-0.223983109,0.075016715,0.481291831,0.107607022,-0.141365036,0.075003348,-0.042418435,-0.041501854,0.096700639,0.083469011,-0.033227846,-0.050748199,-0.045331556,0.065955319,0.26927036,0.082820699,-0.014033476,0.176714703,0.042264186,-0.011814327,0.041769091,-0.00132945,-0.114337325,-0.013483777,-0.111367472,-0.051828772,-0.022199111,0.030011443,0.015529033,0.171916366,-0.172722578,0.214662731,-0.0219073,-0.067695767,0.040487193,0.04814541,0.003313571,-0.01360167,0.115932293,-0.235844463,0.185181856,0.130868644,0.010789306,0.171733275,0.059378762,0.003508842,0.039326921,0.024174646,-0.195897669,-0.088932432,0.025385177,-0.134177506,0.08158315,0.049005955
And, Array 2
-0.120652862,0.030241199,0.146165773,-0.044423241,-0.138606027,-0.048646796,-0.00780057,-0.101798892,0.185339138,-0.210505784,0.1637595,0.015000292,-0.10359703,0.102251172,-0.043159217,0.183324724,-0.171825036,-0.173819616,-0.112194099,-0.161590934,-0.002507193,0.163269699,-0.037766434,0.041060638,-0.178659558,-0.268946916,-0.055348843,-0.11808344,0.113775767,-0.073903576,-0.039505914,0.032382272,-0.159118786,0.007761603,0.057116233,0.043675732,-0.057895001,-0.104836114,0.22844176,0.055832602,-0.245030299,0.006276659,0.140012532,0.21449241,0.159539059,-0.049584024,0.016899824,-0.074179329,0.119686954,-0.242336214,-0.001390997,0.097442642,0.059720818,0.109706804,0.073196828,-0.16272822,0.022305552,0.102650747,-0.192103565,0.104134969,0.099571452,-0.101140082,-0.038911857,0.071292967,0.202927336,0.12729995,-0.047885433,-0.165100336,0.220239595,-0.19612211,-0.075948663,0.096906625,-0.07410948,-0.108219706,-0.155030385,-0.042231761,0.484629512,0.093194947,-0.105109185,0.072906494,-0.056871444,-0.057923764,0.101847053,0.092042476,-0.061295755,-0.031595342,-0.01854251,0.074671492,0.266587347,0.052284949,0.003548023,0.171518356,0.053180017,-0.022400264,0.061757766,0.038441688,-0.139473096,-0.05759665,-0.101672307,-0.074863717,-0.02349415,-0.011674869,0.010008151,0.141401738,-0.190440938,0.216421023,-0.028323224,-0.078021556,-0.011468113,0.100600921,-0.019697987,-0.014288296,0.114862509,-0.162037179,0.171686187,0.149788797,-0.01235011,0.136169329,0.008751356,0.024811052,0.003802934,0.00500867,-0.1840965,-0.086204343,0.018549766,-0.110649876,0.068768717,0.03012047
I found that Scipy has already implemented the function. However, I am confused about what the value of IV should be. I tried to do the following
V = np.cov(np.array([array_1, array_2]))
IV = np.linalg.inv(V)
print(mahalanobis(array_1, array_2, IV))
But, I get the following error:
File
"C:\Users\XXXXXX\AppData\Local\Continuum\anaconda3\envs\face\lib\site-packages\scipy\spatial\distance.py",
line 1043, in mahalanobis
m = np.dot(np.dot(delta, VI), delta)
ValueError: shapes (128,) and (2,2) not aligned: 128 (dim 0) != 2 (dim 0)
EDIT:
array_1 = [-0.10577646642923355, 0.09617947787046432, 0.029290344566106796, 0.02092641592025757, -0.021434104070067406, -0.13410840928554535, 0.028282659128308296, -0.12082239985466003, 0.21936850249767303, -0.06512433290481567, 0.16812698543071747, -0.03302834928035736, -0.18088334798812866, -0.04598559811711311, -0.014739632606506348, 0.06391328573226929, -0.15650317072868347, -0.13678401708602905, 0.01166679710149765, -0.13967938721179962, 0.14632365107536316, 0.025218486785888672, 0.046839646995067596, 0.09690812975168228, -0.13414686918258667, -0.2883925437927246, -0.1435326784849167, -0.17896348237991333, 0.10746842622756958, -0.09142691642045975, 0.04860316216945648, 0.031577128916978836, -0.17280976474285126, -0.059613555669784546, -0.05718057602643967, 0.0401446670293808, 0.026440180838108063, -0.017025159671902657, 0.22091664373874664, 0.024703698232769966, -0.15607595443725586, -0.0018572667613625526, -0.037675946950912476, 0.3210170865058899, 0.10884962230920792, 0.030370134860277176, 0.056784629821777344, -0.030112050473690033, 0.023124486207962036, -0.1449904441833496, 0.08885903656482697, 0.17527811229228973, 0.08804896473884583, 0.038310401141643524, -0.01704210229218006, -0.17355971038341522, -0.018237406387925148, 0.030551932752132416, -0.23085585236549377, 0.13475817441940308, 0.16338199377059937, -0.06968289613723755, -0.04330683499574661, 0.04434924200177193, 0.22637797892093658, 0.07463733851909637, -0.15070196986198425, -0.07500549405813217, 0.10863590240478516, -0.22288714349269867, 0.0010778247378766537, 0.057608842849731445, -0.12828609347343445, -0.17236559092998505, -0.23064571619033813, 0.09910193085670471, 0.46647992730140686, 0.0634111613035202, -0.13985536992549896, 0.052741192281246185, -0.1558966338634491, 0.022585246711969376, 0.10514408349990845, 0.11794176697731018, -0.06241249293088913, 0.06389056891202927, -0.14145469665527344, 0.060088545083999634, 0.09667345881462097, -0.004665130749344826, -0.07927791774272919, 0.21978208422660828, -0.0016187895089387894, 0.04876316711306572, 0.03137822449207306, 0.08962501585483551, -0.09108036011457443, -0.01795950159430504, -0.04094596579670906, 0.03533276170492172, 0.01394269522279501, -0.08244197070598602, -0.05095399543642998, 0.04305890575051308, -0.1195211187005043, 0.16731074452400208, 0.03894471749663353, -0.0222858227789402, -0.07944411784410477, 0.0614166259765625, -0.1481470763683319, -0.09113290905952454, 0.14758692681789398, -0.24051085114479065, 0.164126917719841, 0.1753545105457306, -0.003193420823663473, 0.20875433087348938, 0.03357946127653122, 0.1259773075580597, -0.00022807717323303223, -0.039092566817998886, -0.13582147657871246, -0.01937306858599186, 0.015938198193907738, 0.00787206832319498, 0.05792934447526932, 0.03294186294078827]
array_2 = [-0.1966051608324051, 0.0940953716635704, -0.0031937970779836178, -0.03691547363996506, -0.07240629941225052, -0.07114037871360779, -0.07133384048938751, -0.1283963918685913, 0.15377545356750488, -0.091400146484375, 0.10803385823965073, -0.09235749393701553, -0.1866973638534546, -0.021168243139982224, -0.09094691276550293, 0.07300164550542831, -0.20971564948558807, -0.1847742646932602, -0.009817334823310375, -0.05971141159534454, 0.09904412180185318, 0.0278592761605978, -0.012554554268717766, 0.09818517416715622, -0.1747943013906479, -0.31632938981056213, -0.0864541232585907, -0.13249783217906952, 0.002135572023689747, -0.04935726895928383, 0.010047778487205505, 0.04549024999141693, -0.26334646344184875, -0.05263081565499306, -0.013573898002505302, 0.2042253464460373, 0.06646320968866348, 0.08540669083595276, 0.12267164140939713, -0.018634958192706108, -0.19135263562202454, 0.01208433136343956, 0.09216200560331345, 0.2779296934604645, 0.1531585156917572, 0.10681629925966263, -0.021275708451867104, -0.059720948338508606, 0.06610126793384552, -0.21058350801467896, 0.005440462380647659, 0.18833838403224945, 0.08883830159902573, 0.025969548150897026, 0.0337764173746109, -0.1585341989994049, 0.02370697632431984, 0.10416869819164276, -0.19022507965564728, 0.11423652619123459, 0.09144753962755203, -0.08765758574008942, -0.0032832929864525795, -0.0051014479249715805, 0.19875964522361755, 0.07349056005477905, -0.1031823456287384, -0.10447365045547485, 0.11358538269996643, -0.24666038155555725, -0.05960353836417198, 0.07124857604503632, -0.039664581418037415, -0.20122921466827393, -0.31481748819351196, -0.006801256909966469, 0.41940364241600037, 0.1236235573887825, -0.12495145946741104, 0.12580059468746185, -0.02020396664738655, -0.03004150651395321, 0.11967054009437561, 0.09008713811635971, -0.07470540702342987, 0.09324200451374054, -0.13763070106506348, 0.07720538973808289, 0.19568027555942535, 0.036567769944667816, 0.030284458771348, 0.14119629561901093, -0.03820852190256119, 0.06232285499572754, 0.036639824509620667, 0.07704029232263565, -0.12276224792003632, -0.0035170004703104496, -0.13103705644607544, 0.027697769924998283, -0.01527332328259945, -0.04027168080210686, -0.03659897670149803, 0.03330300375819206, -0.12293602526187897, 0.09043421596288681, -0.019673841074109077, -0.07563626766204834, -0.13991905748844147, 0.014788001775741577, -0.07630413770675659, 0.00017269013915210962, 0.16345393657684326, -0.25710681080818176, 0.19869503378868103, 0.19393865764141083, -0.07422225922346115, 0.19553625583648682, 0.09189949929714203, 0.051557887345552444, -0.0008843056857585907, -0.006250975653529167, -0.1680600494146347, -0.10320111364126205, 0.03232177346944809, -0.08931156992912292, 0.11964476853609085, 0.00814182311296463]
The covariance matrix of the above arrays turns out to be singular, and thus I am unable to invert it. Why does it end up being singular?
EDIT 2: Solution
Since the covariance matrix here is singular, I had to pseudo-invert it using np.linalg.pinv(V).
From the numpy.cov docs, the first argument should be an array m such that:
Each row of m represents a variable, and each column a single observation of all those variables.
So to fix your code just take the transpose (with .T) of your array before you call cov:
V = np.cov(np.array([array_1, array_2]).T)
IV = np.linalg.inv(V)
print(mahalanobis(array_1, array_2, IV))
I just tested this out on some random data, and I can confirm it works.
Also, calculating covariance from just two observations is a bad idea, and not likely to be very accurate. If your data is coming from an image, you should use the entire image img (or at least the entire region of interest) when calculating the covariance matrix, then use that matrix to find the Mahalanobis distance between the two vectors of interest:
V = np.cov(np.array(img))
IV = np.linalg.inv(V)
print(mahalanobis(array_1, array_2, IV))
You may or may not need to replace img with img.T, depending on how you generated array_1 and array_2 in the first place.
If you're getting singular covariance matrices, what you have is a math problem, not a code problem. It's apparently a common enough problem that the question "why is my covariance matrix singular?" has already been asked and answered. Very broadly, it seems like it can happen when enough of your data points are "too similar", in some sense. I'd imagine using just two data points also makes this more likely.
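Putting the two points together (transpose before np.cov, and the pseudo-inverse workaround from the asker's EDIT 2), a minimal sketch looks like this; the two 128-dimensional vectors are random stand-ins for array_1 and array_2:
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
array_1 = rng.normal(size=128)
array_2 = rng.normal(size=128)

# rows are observations, so transpose before np.cov (rows of the argument must be variables)
obs = np.array([array_1, array_2])
V = np.cov(obs.T)            # 128 x 128 covariance matrix

# with only two observations this matrix is rank-deficient (singular),
# so use the pseudo-inverse instead of np.linalg.inv
IV = np.linalg.pinv(V)

print(mahalanobis(array_1, array_2, IV))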

Numerical Stability of Forward Substitution in Python

I am implementing some basic linear equation solvers in Python.
I have currently implemented forward and backward substitution for triangular systems of equations (so very straightforward to solve!), but the precision of the solutions becomes very poor even with systems of about 50 equations (50x50 coefficient matrix).
The following code performs the forward/backward substitution:
FORWARD_SUBSTITUTION = 1
BACKWARD_SUBSTITUTION = 2
def solve_triang_subst(A: np.ndarray, b: np.ndarray,
                       substitution=FORWARD_SUBSTITUTION) -> np.ndarray:
    """Solves a triangular system via
    forward or backward substitution.
    A must be triangular. FORWARD_SUBSTITUTION means A should be
    lower-triangular, BACKWARD_SUBSTITUTION means A should be upper-triangular.
    """
    rows = len(A)
    x = np.zeros(rows, dtype=A.dtype)
    row_sequence = reversed(range(rows)) if substitution == BACKWARD_SUBSTITUTION else range(rows)
    for row in row_sequence:
        delta = b[row] - np.dot(A[row], x)
        cur_x = delta / A[row][row]
        x[row] = cur_x
    return x
I am using numpy and 64-bit floats.
Simple Testing Tool
I have set up a simple test suite which generates coefficient matrices and x vectors, computes the corresponding b, and then uses forward or backward substitution to recover the x, comparing it to its known value for validity.
The following code performs these checks:
import numpy as np
import scipy.linalg as sp_la

RANDOM_SEED = 1984
np.random.seed(RANDOM_SEED)

def check(sol: np.ndarray, x_gt: np.ndarray, description: str) -> None:
    if not np.allclose(sol, x_gt, rtol=0.1):
        print("Found inaccurate solution:")
        print(sol)
        print("Ground truth (not achieved...):")
        print(x_gt)
        raise ValueError("{} did not work!".format(description))

def fuzz_test_solving():
    N_ITERATIONS = 100
    refine_result = True
    for mode in [FORWARD_SUBSTITUTION, BACKWARD_SUBSTITUTION]:
        print("Starting mode {}".format(mode))
        for iteration in range(N_ITERATIONS):
            N = np.random.randint(3, 50)
            A = np.random.uniform(0.0, 1.0, [N, N]).astype(np.float64)
            if mode == BACKWARD_SUBSTITUTION:
                A = np.triu(A)
            elif mode == FORWARD_SUBSTITUTION:
                A = np.tril(A)
            else:
                raise ValueError()
            x_gt = np.random.uniform(0.0, 1.0, N).astype(np.float64)
            b = np.dot(A, x_gt)
            x_est = solve_triang_subst(A, b, substitution=mode,
                                       refine_result=refine_result)
            # TODO report error and count, don't throw!
            # Keep track of error norm!!
            check(x_est, x_gt,
                  "Mode {} custom triang iteration {}".format(mode, iteration))

if __name__ == '__main__':
    fuzz_test_solving()
Note that the maximum size of a test matrix is 49x49. Even in this case, the system cannot always compute decent solutions, and fails by more than a margin of 0.1. Here's an example of such a failure (this is doing backward substitution, so the biggest error is in the 0th coefficient; all the test data are sampled uniformly from [0, 1[):
Solution found with Mode 2 custom triang iteration 24:
[ 0.27876067 0.55200497 0.49499509 0.3259397 0.62420183 0.47041149
0.63557676 0.41155446 0.47191956 0.74385864 0.03002819 0.4700286
0.37989592 0.56527691 0.15072607 0.05659282 0.52587574 0.82252197
0.65662833 0.50250729 0.74139748 0.10852731 0.27864265 0.42981232
0.16327331 0.74097937 0.24411709 0.96934199 0.890266 0.9183985
0.14842446 0.51806495 0.36966843 0.18227989 0.85399593 0.89615663
0.39819336 0.90445931 0.21430972 0.61212349 0.85205597 0.66758689
0.1793689 0.38067267 0.39104614 0.6765885 0.4118123 ]
Ground truth (not achieved...)
[ 0.20881608 0.71009766 0.44735271 0.31169033 0.63982328 0.49075813
0.59669585 0.43844108 0.47764942 0.72222069 0.03497499 0.4707452
0.37679884 0.56439738 0.15120397 0.05635977 0.52616387 0.82230625
0.65670245 0.50251426 0.74139956 0.10845974 0.27864289 0.42981226
0.1632732 0.74097939 0.24411707 0.96934199 0.89026601 0.91839849
0.14842446 0.51806495 0.36966843 0.18227989 0.85399593 0.89615663
0.39819336 0.90445931 0.21430972 0.61212349 0.85205597 0.66758689
0.1793689 0.38067267 0.39104614 0.6765885 0.4118123 ]
I have also implemented the iterative refinement method described in Section 2.5 of [0], and while it did help a little, the results are still poor for larger matrices.
MATLAB Sanity Check
I also did this experiment in MATLAB, and even there, once there are more than 100 equations, the estimation error shoots up exponentially.
Here is the MATLAB code I used for this experiment:
err_norms = [];
range = 1:3:120;
for size=range
    A = rand(size, size);
    A = tril(A);
    x_gt = rand(size, 1);
    b = A * x_gt;
    x_sol = A\b;
    err_norms = [err_norms, norm(x_gt - x_sol)];
end
plot(range, err_norms);
set(gca, 'YScale', 'log')
And here is the resulting plot:
Main Question
My question is: Is this normal behavior, seeing as there is essentially no structure in the problem, given that I randomly generate the A matrix and x?
What about solving linear systems of 100s of equations for various practical applications? Are these limitations simply an accepted fact, and e.g., optimization algorithms are just naturally robust to these issues? Or am I missing some important facets of this problem?
[0]: Press, William H. Numerical recipes 3rd edition: The art of scientific computing. Cambridge university press, 2007.
There are no limitations. This is a very fruitful exercise that we have all gone through at some point: writing linear solvers is not that easy, and that's why LAPACK or its cousins in other languages are almost always used with full confidence.
You are being hit by almost singular matrices, and because you are using MATLAB's backslash you don't see that MATLAB switches to least-squares solutions behind the scenes when near-singularity is hit. If you change A\b to linsolve(A,b), which restricts the solver to square systems, you'll probably see lots of warnings on your console.
I didn't test it because I don't have a license anymore, but writing blindly, this should show you the condition numbers of the matrices at each step:
err_norms = [];
range = 1:3:120;
for i=1:40
    size = range(i);
    A = rand(size, size);
    A = tril(A);
    x_gt = rand(size, 1);
    b = A * x_gt;
    x_sol = linsolve(A,b);
    err_norms = [err_norms, norm(x_gt - x_sol)];
    zzz(i) = rcond(A);
end
semilogy(range, err_norms);
figure,semilogy(range,zzz);
Note that because you are drawing numbers from a uniform distribution, it becomes more and more likely that you hit ill-conditioned matrices (with respect to inversion) as the size grows, since the rows are more likely to be nearly linearly dependent. That's why the error becomes bigger and bigger. Add a scalar multiple of the identity matrix to A and all errors should come back to eps*n levels.
But it is best to leave this to expert implementations which have been tested for decades; it is really not trivial to write any of these. You can read the Fortran codes: for example, dtrsm solves triangular systems.
On the Python side, you can use scipy.linalg.solve_triangular which uses ?trtrs routines from LAPACK.
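Putting the answer's suggestions into Python, here is a small sketch (the sizes and the n*I scaling are arbitrary choices for illustration) that solves the same kind of random lower-triangular systems with scipy.linalg.solve_triangular, prints the condition number at each size, and shows the effect of adding a multiple of the identity:
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1984)

for n in (10, 50, 100, 200):
    A = np.tril(rng.uniform(0.0, 1.0, (n, n)))
    x_gt = rng.uniform(0.0, 1.0, n)
    b = A @ x_gt

    x_sol = solve_triangular(A, b, lower=True)   # LAPACK triangular solve under the hood
    err = np.linalg.norm(x_gt - x_sol)

    # same system, but with a multiple of the identity added to improve conditioning
    A_reg = A + n * np.eye(n)
    x_reg = solve_triangular(A_reg, A_reg @ x_gt, lower=True)
    err_reg = np.linalg.norm(x_gt - x_reg)

    print(f"n={n:4d}  cond(A)={np.linalg.cond(A):.2e}  "
          f"err={err:.2e}  err with +n*I={err_reg:.2e}")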

cv2.kmeans usage in Python

I am considering using OpenCV's kmeans implementation since it is said to be faster...
Now I am using the cv2 package and the kmeans function, but I cannot understand the parameters' description in the reference:
Python: cv2.kmeans(data, K, criteria, attempts, flags[, bestLabels[, centers]]) → retval, bestLabels, centers
samples – Floating-point matrix of input samples, one row per sample.
clusterCount – Number of clusters to split the set by.
labels – Input/output integer array that stores the cluster indices for every sample.
criteria – The algorithm termination criteria, that is, the maximum number of iterations and/or the desired accuracy. The accuracy is specified as criteria.epsilon. As soon as each of the cluster centers moves by less than criteria.epsilon on some iteration, the algorithm stops.
attempts – Flag to specify the number of times the algorithm is executed using different initial labelings. The algorithm returns the labels that yield the best compactness (see the last function parameter).
flags –
Flag that can take the following values:
KMEANS_RANDOM_CENTERS Select random initial centers in each attempt.
KMEANS_PP_CENTERS Use kmeans++ center initialization by Arthur and Vassilvitskii [Arthur2007].
KMEANS_USE_INITIAL_LABELS During the first (and possibly the only) attempt, use the user-supplied labels instead of computing them from the initial centers. For the second and further attempts, use the random or semi-random centers. Use one of KMEANS_*_CENTERS flag to specify the exact method.
centers – Output matrix of the cluster centers, one row per each cluster center.
What does the argument flags[, bestLabels[, centers]]) mean? And what about this one: → retval, bestLabels, centers?
Here's my code:
import cv, cv2
import scipy.io
import numpy
# read data from .mat file
mat = scipy.io.loadmat('...')
keys = mat.keys()
values = mat.viewvalues()
data_1 = mat[keys[0]]
nRows = data_1.shape[1]
nCols = data_1.shape[0]
samples = cv.CreateMat(nRows, nCols, cv.CV_32FC1)
labels = cv.CreateMat(nRows, 1, cv.CV_32SC1)
centers = cv.CreateMat(nRows, 100, cv.CV_32FC1)
#centers = numpy.
for i in range(0, nCols):
    for j in range(0, nRows):
        samples[j, i] = data_1[i, j]

cv2.kmeans(data_1.transpose,
           100,
           criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 0.1, 10),
           attempts=cv2.KMEANS_PP_CENTERS,
           flags=cv2.KMEANS_PP_CENTERS,
           )
And I encounter the following error:
flags=cv2.KMEANS_PP_CENTERS,
TypeError: <unknown> is not a numpy array
How should I understand the parameter list and the usage of cv2.kmeans? Thanks
The documentation on this function is almost impossible to find. I wrote the following Python code in a bit of a hurry, but it works on my machine. It generates samples from two multivariate Gaussian distributions with different means and then classifies them using cv2.kmeans(). You may refer to this blog post to get some idea of the parameters.
Handle imports:
import cv
import cv2
import numpy as np
import numpy.random as r
Generate some random points and shape them appropriately:
samples = cv.CreateMat(50, 2, cv.CV_32FC1)
random_points = r.multivariate_normal((100,100), np.array([[150,400],[150,150]]), size=(25))
random_points_2 = r.multivariate_normal((300,300), np.array([[150,400],[150,150]]), size=(25))
samples_list = np.append(random_points, random_points_2).reshape(50,2)
random_points_list = np.array(samples_list, np.float32)
samples = cv.fromarray(random_points_list)
Plot the points before and after classification:
blank_image = np.zeros((400,400,3))
blank_image_classified = np.zeros((400,400,3))
for point in random_points_list:
    cv2.circle(blank_image, (int(point[0]), int(point[1])), 1, (0,255,0), -1)

temp, classified_points, means = cv2.kmeans(data=np.asarray(samples), K=2, bestLabels=None,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 1, 10), attempts=1,
        flags=cv2.KMEANS_RANDOM_CENTERS)  # Let OpenCV choose random centers for the clusters

for point, allocation in zip(random_points_list, classified_points):
    if allocation == 0:
        color = (255,0,0)
    elif allocation == 1:
        color = (0,0,255)
    cv2.circle(blank_image_classified, (int(point[0]), int(point[1])), 1, color, -1)
cv2.imshow("Points", blank_image)
cv2.imshow("Points Classified", blank_image_classified)
cv2.waitKey()
Here you can see the original points:
Here are the points after they have been classified:
I hope that this answer may help you, it is not a complete guide to k-means, but it will at least show you how to pass the parameters to OpenCV.
The problem here is that your data_1.transpose is not a numpy array: without the parentheses it is just the bound transpose method, not the transposed data.
OpenCV 2.3.1 and higher Python bindings do not accept anything except numpy arrays as image/array parameters, so you need to pass a numpy array such as data_1.transpose() (or data_1.T).
Generally, all the points in OpenCV are of type numpy.ndarray
e.g.
array([[[100., 433.]],
       [[157., 377.]],
       ...
       [[147., 247.]]], dtype=float32)
where each element of array is
array([[100., 433.]], dtype=float32)
and the element of that array is
array([100., 433.], dtype=float32)
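For reference, with the newer cv2-only bindings (no old cv module) the same call can be made directly on a float32 numpy array. This is a minimal sketch; the point count, cluster count, and criteria values are arbitrary choices for illustration:
import numpy as np
import cv2

# 50 random 2-D points as a float32 array, one row per sample
data = (np.random.rand(50, 2) * 400).astype(np.float32)

# stop after 10 iterations or when centers move by less than 1.0
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)

# positional arguments: data, K, bestLabels, criteria, attempts, flags
compactness, bestLabels, centers = cv2.kmeans(data, 2, None, criteria, 10,
                                              cv2.KMEANS_RANDOM_CENTERS)

print(compactness)          # the retval from the signature: sum of squared distances
print(bestLabels.ravel())   # cluster index per sample
print(centers)              # one row per cluster center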
