I am trying to generate a matrix of numbers with 7 rows and 4 columns. Each row must sum to 100 and each column must have an even spread (if permitted) between a min and max range (specified below).
Goal: a 7 x 4 matrix (columns C1-C4) in which every row sums to 100 and each column spreads across its own range, roughly from its low in row 1 up to its high in row 7:
c1_high = 98
c1_low = 75
c2_high = 15
c2_low = 6
c3_high = 8
c3_low = 2
c4_low = 0.05
c4_high = 0.5
In addition to this, I need the spread of each column to be as linear as possible, though a line fitted to the data with a second-order polynomial would suffice (with an r^2 value of > 0.98).
I am currently trying to do this using the following pseudocode:
1. Generate a random number within the range for each of c1, c2, c3 and c4.
2. Repeat this 7 times (one draw per row).
3. Check the correlation between the 7 generated c1 values and the sequence of numbers 1-7.
4. Repeat step 3 for c2, c3 and c4.
5. Break the loop when steps 3 and 4 are successful.
This has proven to be too burdensome in terms of the number of iterations required and as a result, the solution is never reached.
Is there a more efficient way of achieving this solution?
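For reference, the pseudocode above in rough Python looks something like this (a sketch only; the row-sum tolerance is my own assumption, and it mainly illustrates why the loop practically never terminates):
import numpy as np

rng = np.random.default_rng()
bounds = [(75, 98), (6, 15), (2, 8), (0.05, 0.5)]   # (low, high) for c1..c4

def try_once():
    # steps 1-2: draw 7 random rows, one value per column within its range
    m = np.column_stack([rng.uniform(lo, hi, 7) for lo, hi in bounds])
    # rows must sum to 100 (the tolerance is an assumption)
    if not np.allclose(m.sum(axis=1), 100, atol=0.5):
        return None
    # steps 3-4: each column should correlate linearly with 1..7
    x = np.arange(1, 8)
    for col in m.T:
        if np.corrcoef(x, col)[0, 1] ** 2 < 0.98:
            return None
    return m

for attempt in range(100_000):
    m = try_once()
    if m is not None:
        print(attempt, m)
        break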
So far:
import pandas as pd
import numpy as np
from sklearn.utils import shuffle

c1_high = 98
c1_low = 75
c2_high = 15
c2_low = 6
c3_high = 8
c3_low = 2
c4_low = 0.05
c4_high = 0.5

def matrix_gen():  # generates matrix within min and max values
    container = []
    d = {}
    offset = np.linspace(0.05, 1, 9)
    c1 = np.linspace(c1_low, c1_high, 7)
    c2 = np.linspace(c2_low, c2_high, 7)
    c3 = np.linspace(c3_low, c3_high, 7)
    c4 = np.linspace(c4_low, c4_high, 7)
    for i in np.arange(7):
        d["row{0}".format(i)] = [item[i] for item in [c1, c2, c3, c4]]
    df = pd.DataFrame(d)
    df.loc[4, :] = df.iloc[0, :][::-1].values
    df1 = df.drop(0)
    df1.loc[5, :] = df1.sum(axis=0)
    new_name = df1.index[-1]
    df1 = df1.rename(index={new_name: 'sum'})
    return df1

m = matrix_gen()
print(m)
out:
row0 row1 row2 row3 row4 row5 row6
1 6.00 7.500000 9.000000 10.500 12.000000 13.500000 15.0
2 2.00 3.000000 4.000000 5.000 6.000000 7.000000 8.0
3 0.05 0.125000 0.200000 0.275 0.350000 0.425000 0.5
4 98.00 94.166667 90.333333 86.500 82.666667 78.833333 75.0
sum 106.05 104.791667 103.533333 102.275 101.016667 99.758333 98.5
next function:
def shuf():  # attempts at shuffling the values around such that the 'sum' row is as close to 100 as possible.
    df = matrix_gen()
    df1 = df[1:4]
    count = 0
    while True:
        df1 = shuffle(df1)
        df1.loc[5, :] = df1.sum(axis=0)
        for i in df1.loc[5].values:
            if 98 <= i <= 100:
                print('solution')
                return df1
            else:
                count += 1
                print(count)
                continue

opt = shuf()
print(opt)
The next function will need to apply a deviation to each number so that every row sums to exactly 100. The optimization should minimize those deviations.
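For that correction step, a simple multiplicative rescaling is one option (a sketch; M stands for any 7 x 4 candidate matrix, and note that this naive rescaling can push values slightly outside their column ranges):
import numpy as np

M = np.random.uniform([75, 6, 2, 0.05], [98, 15, 8, 0.5], size=(7, 4))  # any candidate matrix
M_corrected = M * (100.0 / M.sum(axis=1, keepdims=True))                # every row now sums to 100
print(M_corrected.sum(axis=1))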
I think an interesting approach would be to use an optimization model.
Ordered values
Let x(i,j) be the matrix you want to fill. Then we have:
sum(j, x(i,j)) = 100 ∀i
L(j) ≤ x(i,j) ≤ U(j) ∀i,j
x(i,j) = x(i-1,j) + step(j) + deviation(i,j)
special cases:
x(1,j) = L(j) + deviation(1,j)
and x(m,j) = U(j) + deviation(m,j)
step(j) ≥ 0
minimize sum((i,j), deviation(i,j)^2 )
This is a quadratic programming problem. It is possible to use absolute deviations instead of squared ones; in that case you have an LP.
The model can be refined to minimize squared relative errors.
This is a little bit related to what is called matrix balancing (a statistical technique often used in economic modeling).
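For concreteness, here is a minimal sketch of the ordered-values model above, written with cvxpy (the choice of library is mine) and the bounds from the question:
import cvxpy as cp
import numpy as np

L = np.array([75, 6, 2, 0.05])    # per-column lower bounds L(j)
U = np.array([98, 15, 8, 0.5])    # per-column upper bounds U(j)
m, n = 7, 4

x = cp.Variable((m, n))           # the matrix to fill
step = cp.Variable(n, nonneg=True)
dev = cp.Variable((m, n))         # deviations from a perfectly linear column

cons = [cp.sum(x, axis=1) == 100,            # every row sums to 100
        x >= np.tile(L, (m, 1)),             # stay inside the column ranges
        x <= np.tile(U, (m, 1)),
        x[0, :] == L + dev[0, :],            # first row anchored at the lows
        x[m - 1, :] == U + dev[m - 1, :]]    # last row anchored at the highs
for i in range(1, m):
    cons.append(x[i, :] == x[i - 1, :] + step + dev[i, :])

prob = cp.Problem(cp.Minimize(cp.sum_squares(dev)), cons)
prob.solve()
print(np.round(x.value, 3))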
Unordered values
In the above I assumed the values had to be ordered. Now I understand this is not the case. I adapted the model to handle this as follows. First an overview of the results.
The input data is:
---- 17 PARAMETER LO
c1 80.000, c2 5.000, c3 0.500, c4 0.050
---- 17 PARAMETER UP
c1 94.000, c2 14.000, c3 5.000, c4 0.500
Warning: Note that this data has been changed by the poster. My answer is using the original LO and UP values before they were changed.
The model operates in three steps:
(1) populate a perfectly organized matrix without obeying the row sum constraints. This can be done outside the model. I generated simply:
---- 53 PARAMETER init initial matrix
c1 c2 c3 c4 rowsum
r1 80.000 5.000 0.500 0.050 85.550
r2 82.333 6.500 1.250 0.125 90.208
r3 84.667 8.000 2.000 0.200 94.867
r4 87.000 9.500 2.750 0.275 99.525
r5 89.333 11.000 3.500 0.350 104.183
r6 91.667 12.500 4.250 0.425 108.842
r7 94.000 14.000 5.000 0.500 113.500
I.e. from lo(j) to up(j) with equal steps.
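In numpy terms this init matrix is just a column-wise linspace (sketch; the row sums match the rowsum column above, from 85.55 up to 113.5):
import numpy as np

lo = np.array([80, 5, 0.5, 0.05])
up = np.array([94, 14, 5, 0.5])
init = np.linspace(lo, up, 7)   # shape (7, 4): equal steps from lo(j) to up(j)
print(init.sum(axis=1))         # 85.55, 90.208, ..., 113.5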
(2) The second step is to permute the values within a column to achieve a solution that has a close match to the row sums. This gives:
---- 53 VARIABLE y.L after permutation
c1 c2 c3 c4 rowsum
r1 94.000 5.000 0.500 0.125 99.625
r2 82.333 12.500 4.250 0.500 99.583
r3 89.333 8.000 2.000 0.200 99.533
r4 87.000 9.500 2.750 0.275 99.525
r5 84.667 11.000 3.500 0.350 99.517
r6 91.667 6.500 1.250 0.050 99.467
r7 80.000 14.000 5.000 0.425 99.425
This is already very close and maintains "perfect" spread.
(3) Change the values a little bit by adding a deviation such that the row sums are exactly 100. Minimize the sum of the squared relative deviations. This gives:
---- 53 VARIABLE x.L final values
c1 c2 c3 c4 rowsum
r1 94.374 5.001 0.500 0.125 100.000
r2 82.747 12.503 4.250 0.500 100.000
r3 89.796 8.004 2.000 0.200 100.000
r4 87.469 9.506 2.750 0.275 100.000
r5 85.142 11.007 3.501 0.350 100.000
r6 92.189 6.510 1.251 0.050 100.000
r7 80.561 14.012 5.002 0.425 100.000
---- 53 VARIABLE d.L deviations
c1 c2 c3 c4
r1 0.374 0.001 1.459087E-5 1.459087E-7
r2 0.414 0.003 9.542419E-5 9.542419E-7
r3 0.462 0.004 2.579521E-4 2.579521E-6
r4 0.469 0.006 4.685327E-4 4.685327E-6
r5 0.475 0.007 7.297223E-4 7.297223E-6
r6 0.522 0.010 0.001 1.123123E-5
r7 0.561 0.012 0.002 1.587126E-5
Steps (2) and (3) have to be inside the optimization model: they have to be executed simultaneously to achieve proven optimal solutions.
The mathematical model combines the permutation of step (2) and the deviations of step (3) in a single formulation.
The model solves within a few seconds to proven global optimality using a solver like Cplex or Gurobi.
I think this is a pretty cute model (OK, that is really nerdy, I know). The permutation is modeled with a permutation matrix P (binary values). This makes the model a MIQP (Mixed Integer Quadratic Programming) model. It can be linearized fairly easily: use absolute values instead of squares in the objective. After proper reformulation, we end up with a linear MIP model. There is lots of software available to handle this, including libraries and packages callable from Python.
Note: I probably should not divide by init(i,j) in the objective, but rather by the column means in the init matrix. Dividing by y(i,j) would be the best, but that leads to another non-linearity.
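As an illustration of how steps (2) and (3) can be combined, here is a rough cvxpy translation of the idea (the library choice is mine; the binary permutation matrices make this a MIQP, so it needs a mixed-integer-capable solver such as Gurobi or CPLEX):
import cvxpy as cp
import numpy as np

lo = np.array([80, 5, 0.5, 0.05])
up = np.array([94, 14, 5, 0.5])
m, n = 7, 4
init = np.linspace(lo, up, m)                               # the equidistant matrix from step (1)

P = [cp.Variable((m, m), boolean=True) for _ in range(n)]   # one permutation matrix per column
d = cp.Variable((m, n))                                     # deviations added in step (3)

y = cp.hstack([P[j] @ init[:, [j]] for j in range(n)])      # columns after permutation
x = y + d                                                   # final values

cons = [cp.sum(x, axis=1) == 100]                           # row sums exactly 100
for j in range(n):
    cons += [cp.sum(P[j], axis=0) == 1, cp.sum(P[j], axis=1) == 1]

# squared relative deviations, scaled by the column means of init (see the note above)
W = np.tile(1.0 / init.mean(axis=0), (m, 1))
prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.multiply(d, W))), cons)
prob.solve(solver=cp.GUROBI)                                # any MIQP-capable solver
print(np.round(x.value, 3))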
Your numbers are small enough for a smart brute force approach.
I use two methods to quantify and minimize deviations from the "clean" equidistant values (linspace(low, high, 7)): "abserr" for the squared difference and "relerr" for the squared error divided by the squared clean value. I also check corrcoefs at the very end, but I've never seen anything below 99.8%.
The following code first finds the shuffle of the clean values with the smallest error. This takes just a few seconds, because we use the following tricks:
split the 4 columns into two pairs
each pair has 7! relative arrangements, a manageable number even when squared (one factor for each pair)
compute these (7!)^2 shuffles and sum over pairs
to avoid iterating over all relative shuffles between the two pairs, we observe that the total error is minimized if the two sets of pair sums are arranged in opposite order; this is true for both "abserr" and "relerr" (a tiny demonstration of this follows below)
In the end the values are corrected to make the rows sum to 100. Here again we use the fact that the summed error is minimized when it is spread evenly.
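To see why the opposite ordering is optimal, here is a small standalone demonstration (numbers chosen arbitrarily; it is just the rearrangement inequality applied to the sum of squared pair sums):
import numpy as np

a = np.sort(np.array([1.0, 2.0, 3.0]))
b = np.sort(np.array([10.0, 20.0, 30.0]))
same = a + b            # [11. 22. 33.]
opposite = a + b[::-1]  # [31. 22. 13.]
print((same**2).sum(), (opposite**2).sum())   # 1694.0 vs 1614.0 -- the opposite order wins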
The code below contains two variants: a legacy one, solve, which contains a small inaccuracy when minimizing relerr, and a corrected version, improved_solve. They frequently find different solutions, but in more than 100 random problems only one led to a very slightly smaller error with improved_solve.
Answers to a few examples:
OP's example:
((75, 98), (6, 15), (2, 8), (0.05, 0.5))
solve relerr improved_solve relerr
table: table:
76.14213 15.22843 8.12183 0.50761 76.14213 15.22843 8.12183 0.50761
79.02431 13.53270 7.01696 0.42603 79.02431 13.53270 7.01696 0.42603
81.83468 11.87923 5.93961 0.34648 81.83468 11.87923 5.93961 0.34648
84.57590 10.26644 4.88878 0.26888 84.57590 10.26644 4.88878 0.26888
87.25048 8.69285 3.86349 0.19317 87.25048 8.69285 3.86349 0.19317
89.86083 7.15706 2.86282 0.11928 89.86083 7.15706 2.86282 0.11928
92.40924 5.65771 1.88590 0.04715 92.40924 5.65771 1.88590 0.04715
avgerr: avgerr:
0.03239 0.03239
corrcoefs: corrcoefs:
0.99977 0.99977 0.99977 0.99977 0.99977 0.99977 0.99977 0.99977
An example where sorting some columns ascending and some descending is not optimal:
((11, 41), (4, 34), (37, 49), (0.01, 23.99))
Note that the solvers find different solutions, but the error is the same.
solve relerr improved_solve relerr
table: table:
10.89217 18.81374 46.53926 23.75483 11.00037 24.00080 49.00163 15.99720
26.00087 9.00030 49.00163 15.99720 16.00107 19.00127 45.00300 19.99467
31.00207 4.00027 45.00300 19.99467 25.74512 13.86276 36.63729 23.75483
16.00000 29.00000 43.00000 12.00000 35.99880 8.99970 46.99843 8.00307
20.99860 33.99773 40.99727 4.00640 41.00000 4.00000 43.00000 12.00000
40.99863 13.99953 36.99877 8.00307 20.99860 33.99773 40.99727 4.00640
36.35996 24.23998 39.38996 0.01010 31.30997 29.28997 39.38996 0.01010
avgerr: avgerr:
0.00529 0.00529
corrcoefs: corrcoefs:
0.99993 0.99994 0.99876 0.99997 0.99989 0.99994 0.99877 0.99997
This is the problem where improved_solve actually beats legacy solve:
((36.787862883725872, 43.967159949544317),
(40.522239654303483, 47.625869880574164),
(19.760537036548321, 49.183056694462799),
(45.701873101046154, 48.051424087501672))
solve relerr improved_solve relerr
table: table:
21.36407 23.53276 28.56241 26.54076 20.25226 26.21874 27.07599 26.45301
22.33545 24.52391 26.03695 27.10370 21.53733 26.33278 25.10656 27.02333
23.33149 25.54022 23.44736 27.68093 22.90176 26.45386 23.01550 27.62888
24.35314 26.58266 20.79119 28.27301 24.35314 26.58266 20.79119 28.27301
25.40141 27.65226 18.06583 28.88050 25.90005 26.71994 18.42047 28.95953
26.47734 28.75009 15.26854 29.50403 27.55225 26.86656 15.88840 29.69279
27.58205 29.87728 12.39644 30.14424 29.32086 27.02351 13.17793 30.47771
avgerr: avgerr:
0.39677 0.39630
corrcoefs: corrcoefs:
0.99975 0.99975 0.99975 0.99975 0.99847 0.99847 0.99847 0.99847
Code:
import numpy as np
import itertools
import math

N_CHUNKS = 3

def improved_solve(LH, errtype='relerr'):
    N = math.factorial(7)
    # accept anything that looks like a 2d array
    LH = np.asanyarray(LH)
    # build equidistant columns
    C = np.array([np.linspace(l, h, 7) for l, h in LH])
    # subtract offset; it's cheaper now than later
    c0, c1, c2, c3 = C - 25
    # list all permutations of a single column
    p = np.array(list(itertools.permutations(range(7))))
    # split into left and right halves, compute all relative permutations
    # and sort them by their sums of corresponding elements.
    # Left pairs in ascending, right pairs in descending order.
    L = np.sort(c0 + c1[p], axis=1)
    R = np.sort(c2 + c3[p], axis=1)[:, ::-1]
    # For each pair of permutations l in L, r in R compute the smallest
    # possible error (sum of squared deviations.)
    if errtype == 'relerr':
        err = np.empty((N, N))
        split = np.linspace(0, N, N_CHUNKS+1, dtype=int)[1:-1]
        for LCH, ECH in zip(np.split(L, split, axis=0),
                            np.split(err, split, axis=0)):
            dev = LCH[:, None] + R[None, :]
            ((dev / (100+dev))**2).sum(axis=-1, out=ECH)
            del dev
    elif errtype == 'abserr':
        err = (np.add.outer(np.einsum('ij,ij->i', L, L),
                            np.einsum('ij,ij->i', R, R))
               + np.einsum('ik, jk->ij', 2*L, R))
    else:
        raise ValueError
    # find pair of pairs with smallest error
    i = np.argmin(err.ravel())
    i1, i3 = np.unravel_index(i, (N, N))
    # recreate shuffled table
    c0, c1, c2, c3 = C
    lidx = np.argsort(c0 + c1[p[i1]])
    ridx = np.argsort(c2 + c3[p[i3]])[::-1]
    C = np.array([c0[lidx], c1[p[i1]][lidx], c2[ridx], c3[p[i3]][ridx]])
    # correct rowsums, calculate error and corrcoef and return
    if errtype == 'relerr':
        result = C * (100.0 / C.sum(axis=0, keepdims=True))
        err = math.sqrt((((result-C)/C)**2).mean())
    else:
        result = C + (25 - C.mean(axis=0, keepdims=True))
        err = math.sqrt(((result-C)**2).mean())
    rs = np.sort(result, axis=1)
    cc = tuple(np.corrcoef(ri, range(7))[0, 1] for ri in rs)
    return dict(table=result.T, avgerr=err, corrcoefs=cc)
def solve(LH, errtype='relerr'):
    LH = np.asanyarray(LH)
    if errtype == 'relerr':
        err1 = 200 / LH.sum()
        diff = np.diff(LH * err1, axis=1).ravel()
    elif errtype == 'abserr':
        err1 = 25 - LH.mean()
        diff = np.diff(LH, axis=1).ravel()
    else:
        raise ValueError
    C = np.array([np.linspace(-d/2, d/2, 7) for d in diff])
    c0, c1, c2, c3 = C
    p = np.array(list(itertools.permutations(range(7))))
    L = np.sort(c0 + c1[p], axis=1)
    R = np.sort(c2 + c3[p], axis=1)[:, ::-1]
    err = (np.add.outer(np.einsum('ij,ij->i', L, L),
                        np.einsum('ij,ij->i', R, R))
           + np.einsum('ik, jk->ij', 2*L, R)).ravel()
    i = np.argmin(err)
    i1, i3 = np.unravel_index(i, (math.factorial(7), math.factorial(7)))
    L = np.argsort(c0 + c1[p[i1]])
    R = np.argsort(c2 + c3[p[i3]])[::-1]
    ref = [np.linspace(l, h, 7) for l, h in LH]
    if errtype == 'relerr':
        c0, c1, c2, c3 = [np.linspace(l, h, 7) for l, h in LH * err1]
        C = np.array([c0[L], c1[p[i1]][L], c2[R], c3[p[i3]][R]])
        err2 = 100 / np.sum(C, axis=0)
        C *= err2
        cs = list(map(sorted, C))
        err = math.sqrt(sum((c/r-1)**2 for ci, ri in zip(cs, ref)
                            for c, r in zip(ci, ri)) / 28)
    elif errtype == 'abserr':
        c0, c1, c2, c3 = [np.linspace(l, h, 7) for l, h in LH + err1]
        C = np.array([c0[L], c1[p[i1]][L], c2[R], c3[p[i3]][R]])
        err2 = 25 - np.mean(C, axis=0)
        C += err2
        cs = list(map(sorted, C))
        err = math.sqrt(sum((c-r)**2 for ci, ri in zip(cs, ref)
                            for c, r in zip(ci, ri)) / 28)
    else:
        raise ValueError
    cc = tuple(np.corrcoef(ci, range(7))[0, 1] for ci in cs)
    return dict(table=C.T, avgerr=err, corrcoefs=cc)
for problem in [((75, 98), (6, 15), (2, 8), (0.05, 0.5)),
                ((11, 41), (4, 34), (37, 49), (0.01, 23.99)),
                ((80, 94), (5, 14), (0.5, 5), (0.05, 0.5)),
                ((36.787862883725872, 43.967159949544317),
                 (40.522239654303483, 47.625869880574164),
                 (19.760537036548321, 49.183056694462799),
                 (45.701873101046154, 48.051424087501672))]:
    for errtype in ('relerr', 'abserr'):
        print()
        columns = []
        for solver in (solve, improved_solve):
            sol = solver(problem, errtype)
            column = [[' '.join((solver.__name__, errtype))]] + \
                     [[k + ':'] + [' '.join([f'{e:8.5f}' for e in r])
                                   for r in np.atleast_2d(v)]
                      for k, v in sol.items()]
            column = (line for block in column for line in block)
            columns.append(column)
        for l, r in zip(*columns):
            print(f"{l:39s} {r:39s}")

problems = []
for i in range(0):
    problem = np.sort(np.random.random((4, 2)), axis=1) * 50
    for errtype in ('relerr', 'abserr'):
        sol0 = solve(problem, errtype)
        sol1 = improved_solve(problem, errtype)
        if not np.allclose(sol0['table'], sol1['table']):
            print(i, end=" ")
            if np.abs((sol0['avgerr'] - sol1['avgerr'])
                      / (sol0['avgerr'] + sol1['avgerr'])) > 1e-6:
                print(problem)
                problems.append(problem)
                columns = []
                for sol, name in [(sol0, 'old '), (sol1, 'improved ')]:
                    column = [[name + errtype]] + \
                             [[k + ':'] + [' '.join([f'{e:8.5f}' for e in r])
                                           for r in np.atleast_2d(v)]
                              for k, v in sol.items()]
                    column = (line for block in column for line in block)
                    columns.append(column)
                for l, r in zip(*columns):
                    print(f"{l:39s} {r:39s}")
Related
I have "random" points and would like to check which points can be connected by straight lines. Therefore I iterate through a list of points and draw a line at different angles. After all lines at all angles for every single point is drawn, I iterate over each line checking whether they are connecting 3 or more points. If the line connects 3 or more points, it is saved by appending it to a new list (newLines), if not the next line gets tested.
The problem which the following code is that it is way to slow... My testing image took about 30 min and my actual image was not done after about 14 hours. I read about speeding up for loops by using numpy (like in this article). I found plenty of examples for replacing for loops with numpy but in these example it was just simple iterating over a list without declaring the values as variables for usage.
Any hint for speeding up the following code is appreciated, it does not necessarily need to be numpy.
from math import sqrt

import numpy as np
from shapely.geometry import Point, LineString, MultiLineString
from shapely.affinity import rotate

# `image` (a 2d array) and `points` (a list of (x, y) tuples) are defined earlier in the script

# list for saving rotated lines
lines = []
for point in points:
    # length of line is the diagonal of the point image so it still covers the whole image after rotation
    length = sqrt(image.shape[0]**2 + image.shape[1]**2)
    start = Point(point)
    end = Point(start.x + length, start.y)
    line = LineString([start, end])
    # rotating the generated line in 5 degree steps and appending it to the list
    for a in range(0, 360, 5):
        angle = np.deg2rad(a)
        line = rotate(line, angle, origin=start, use_radians=True)
        lines.append(line)
multiLines = MultiLineString(lines)

# list for rotated lines which connect 3 or more points
newLines = []
start = ()
for multiLine in multiLines.geoms:
    lst = list(multiLine.coords)
    # a: starting point of line | b: ending point of line
    a = np.asarray(lst[0])
    b = np.asarray(lst[1])
    count = 0
    # again iterating over point array to check which point is on line
    for point in points:
        p = np.asarray(point)
        # check if point (p) is on line (a - b)
        if np.cross(p - a, b - a) == 0:
            if count == 0:
                start = point
                count += 1
            else:
                end = point
                count += 1
    if count >= 3:
        line = (start, end)
        newLines.append(line)
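As a side note on the inner loop: the point-on-line test can be vectorized with numpy broadcasting, for example (a sketch; the tolerance is my assumption, since exact equality with floats rarely triggers):
import numpy as np

pts = np.asarray(points, dtype=float)          # shape (N, 2)

def points_on_line(a, b, pts, tol=1e-9):
    # 2D cross product of (p - a) with (b - a) is ~0 when p lies on the line through a and b
    cross = (pts[:, 0] - a[0]) * (b[1] - a[1]) - (pts[:, 1] - a[1]) * (b[0] - a[0])
    return np.abs(cross) < tol

# inside the loop over multiLines.geoms this would replace the inner "for point in points" loop:
# on_line = points_on_line(a, b, pts)
# if on_line.sum() >= 3:
#     newLines.append((tuple(pts[on_line][0]), tuple(pts[on_line][-1])))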
I'm not sure what your current benchmarks are, but if you want to try numpy you can do something like this. I'm using pandas, which is built on top of numpy, but it's effectively doing the same thing.
I think this is doing the same thing as you want. I'm looking at each pair of points, calculating the m and c coefficients in the equation y = mx + c through the two points, then checking for cases where these match. I expect you might want some accepted error depending on your input data.
Sorry if I'm way off piste.
import pandas as pd
import numpy as np
import random
import itertools
import time

def get_matches(points):
    # get all combinations of two points
    combinations_of_points = ([(a[0], a[1], b[0], b[1]) for a, b in itertools.combinations(points, 2) if a != b])
    data = pd.DataFrame(combinations_of_points, columns=['x1', 'y1', 'x2', 'y2'])
    data['m'] = (data.y1 - data.y2) / (data.x1 - data.x2)
    # swap negative gradients so all lines are in same direction
    data.loc[np.isfinite(data.m) & data.m < 0, 'm'] = -(1 / data.m)
    data.loc[np.isneginf(data.m), 'm'] = -data.m
    # y = mx + c
    data['c'] = data.y1 - (data.m * data.x1)
    data = data.sort_values(['m', 'c', 'x1']).reset_index(drop=True)
    # filter to items which are duplicated
    filtered = data[
        # matching m and c values
        (np.isfinite(data.m) & data.duplicated(['m', 'c'], keep=False)) |
        # infinite m and x equal (straight line up)
        (np.isposinf(data.m) & data.duplicated(['m', 'x1'], keep=False))
    ]
    return filtered

points = [(0, 0), (1, 1), (2, 2)]
print(get_matches(points))

random.seed(1)
count = 500
random_points = [(round(random.random(), 3), round(random.random(), 3)) for i in range(count)]
results = get_matches(random_points)
print(results)

print('\nPerformance with increasing points')
for i in [i ** 2 for i in range(5, 101, 5)]:
    random.seed(1)
    random_points = [(round(random.random(), 3), round(random.random(), 3)) for i in range(i)]
    start = time.perf_counter()
    results = get_matches(random_points)
    stop = time.perf_counter()
    print(f'{i:<9}{stop - start:03f}')
returns:
x1 y1 x2 y2 m c
0 0 0 1 1 1.0 0.0
1 0 0 2 2 1.0 0.0
2 1 1 2 2 1.0 0.0
x1 y1 x2 y2 m c
12243 0.606 0.262 0.400 0.880 -3.0 2.080
12244 0.606 0.262 0.440 0.760 -3.0 2.080
12251 0.378 0.970 0.506 0.586 -3.0 2.104
12252 0.505 0.589 0.378 0.970 -3.0 2.104
12253 0.505 0.589 0.506 0.586 -3.0 2.104
... ... ... ... ... ... ...
124741 0.971 0.382 0.971 0.716 inf -inf
124742 0.971 0.543 0.971 0.716 inf -inf
124744 0.983 0.593 0.983 0.296 inf -inf
124745 0.983 0.593 0.983 0.448 inf -inf
124746 0.983 0.296 0.983 0.448 inf -inf
[237 rows x 6 columns]
Performance with increasing points
25 0.010577
100 0.016897
225 0.045443
400 0.136834
625 0.338148
900 0.765913
1225 1.525819
1600 2.645753
2025 4.834811
2500 8.112012
3025 12.960043
3600 18.262522
4225 27.221498
4900 37.329662
5625 53.064736
6400 67.325213
7225 84.843119
8100 116.864120
9025 140.131420
10000 171.630961
As one of your comments pointed out earlier, the order of growth of the problem is approximately N^2 because it looks at all combinations of points, so the performance very quickly degrades with increasing numbers of points. Note that you could use this relationship to estimate how long your program would take to run if you know the number of points.
I am trying to solve a blending problem with a system of 3 equations, and I have 3 objectives to reach, or at least I want to get the values as close as possible to them for all three:
The equations are:
def sat(c, s, a, f):
    return (100*c)/(2.8*s + 1.18*a + 0.65*f)  # For this I need sat = 98.5

def ms(s, a, f):
    return s/(a + f)  # For this I need ms = 2.5

def ma(a, f):
    return a/f  # For this I need ma = 1.3
#The total mix ratio:
r1+r2+r3+r4+r5+r6 = 1
material_1:
c = 51.29
s = 4.16
a = 0.97
f = 0.38
material_2:
c = 51.42
s = 4.16
a = 0.95
f = 0.37
material_3:
c = 6.88
s = 63.36
a = 13.58
f = 3.06
material_4:
c = 32.05
s = 1.94
a = 0.0
f = 0.0
material_5:
c = 4.56
s = 21.43
a = 3.82
f = 52.28
material_6:
c = 0.19
s = 7.45
a = 4.58
f = 0.42
#The approximate values I am trying to find are around:
0.300 <= r1 <= 0.370
0.300 <= r2 <= 0.370
0.070 <= r3 <= 0.130
0.005 <= r4 <= 0.015
0.010 <= r5 <= 0.030
0.110 <= r6 <= 0.130
So how can I calculate the value of every ratio "r" in order to get the closest values to the objectives for the 3 equations?
I looked at some optimizers, but as I am new to them I still cannot understand how to set up the problem, the equations and the constraints in them.
I guess I made it; of course the code is awful, but I will try to make it look better later.
I added the cost of the components so I can give the solver a function to minimize; this works because I know the approximate material ratios, so it guides the solver towards them.
I will post the code for it:
import cvxpy as cp

c1 = 51.42
c2 = 51.42
c3 = 6.88
c5 = 32.05
c6 = 4.56
c7 = 0.19
s1 = 4.16
s2 = 4.16
s3 = 63.36
s5 = 1.94
s6 = 21.43
s7 = 7.45
a1 = 0.97
a2 = 0.95
a3 = 13.58
a5 = 0.0
a6 = 3.82
a7 = 4.58
f1 = 0.38
f2 = 0.37
f3 = 3.06
f5 = 0.0
f6 = 52.28
f7 = 0.42
r7 = 0.125
r1 = cp.Variable()
r2 = cp.Variable()
r3 = cp.Variable()
r5 = cp.Variable()
r6 = cp.Variable()
#Costs of the components
caliza = 10    # limestone
arcilla = 20   # clay
hierro = 170   # iron
yeso = 80      # gypsum

objective = cp.Minimize(r1*caliza + r2*caliza + r3*arcilla + r5*yeso + r6*hierro)
constraints = [
    r1 - r2 == 0,
    r1 >= 0.20,
    r1 <= 0.40,
    r3 <= 0.14,
    r3 >= 0.06,
    r5 >= 0.001,
    r5 <= 0.008,
    r6 >= 0.01,
    r6 <= 0.03,
    # ms target: s/(a+f) == 2.5
    2.5*((r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7)+(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7))-(r1*s1+r2*s2+r3*s3+r5*s5+r6*s6+r7*s7) == 0,
    # sat target: 100*c/(2.8*s+1.18*a+0.65*f) == 98.5
    (98.5*(2.8*(r1*s1+r2*s2+r3*s3+r5*s5+r6*s6+r7*s7)+1.18*(r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7)+0.65*(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7))-100*(r1*c1+r2*c2+r3*c3+r5*c5+r6*c6+r7*c7)) == 0,
    # ma target, disabled: 1.3*(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7)-(r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7) == 0,
    r1+r2+r3+r5+r6+r7 == 1]
problem = cp.Problem(objective,constraints)
problem.solve()
print(r1.value,r2.value,r3.value,r5.value,r6.value)
print(problem.status)
This gives me the result:
0.3644382497863931 0.3644382497863931 0.12287226775076901 0.0009999999955268117 0.022251232680917873
optimal
Anyway, the only way to get a feasible result is to consider only 2 of the three constraint functions, because the components can't reach all 3 of them; this indicates that I need to check the material components before I try to meet the 3 constraints (which were sat, ma and ms).
Now I will try to make the code better using pandas so I can get the material components with some kind of for loop and also use it for the ratios.
Thank you so much for your help👍.
So this is a simple/trivial example to show the intent mentioned in the comments: minimize the square of the errors. Instead of using a constraint to pin a value to an exact outcome, we let the solver find the best outcome that minimizes the square of the error, where error = value - target. I think what I've written below is fairly clear. CVXPY likes to work in the linear algebra realm, and I'm sure this could be converted into vector/matrix format, but the concept is to remove constraints and let the solver figure out the best combo. Obviously, if there are hard constraints, those need to be added, but note that I've just made an example with 2 of your 3 targets (with some trivial math) and moved them into the objective.
Your problem with 3 constraints that aren't simultaneously satisfiable is probably a candidate for a conversion like this...
import cvxpy as cp
r1 = cp.Variable()
r2 = cp.Variable()
ma = 2.5
ms = 3.4
delta_1 = (r1 + r2 - ma)**2 # diff from r1 + r2 and ma
delta_2 = (3*r1 + 2*r2 - ms)**2 # diff from 3r1 + 2r2 and ms
prob = cp.Problem(cp.Minimize(delta_1 + delta_2))
prob.solve()
print(prob.value)
print(r1.value, r2.value)
Output
9.860761315262648e-31
-1.6000000000000014 4.100000000000002
OK, this is what I have done and it works fine:
import cvxpy as cp

#I call the values from a pandas DF:
c1 = df.at[0, 'MAX']
c2 = df.at[4, 'MAX']
c3 = df.at[8, 'MAX']
c5 = df.at[12, 'MAX']
c6 = df.at[16, 'MAX']
c7 = df.at[20, 'MAX']
s1 = df.at[1, 'MAX']
s2 = df.at[5, 'MAX']
s3 = df.at[9, 'MAX']
s5 = df.at[13, 'MAX']
s6 = df.at[17, 'MAX']
s7 = df.at[21, 'MAX']
a1 = df.at[2, 'MAX']
a2 = df.at[6, 'MAX']
a3 = df.at[10, 'MAX']
a5 = df.at[14, 'MAX']
a6 = df.at[18, 'MAX']
a7 = df.at[22, 'MAX']
f1 = df.at[3, 'MAX']
f2 = df.at[7, 'MAX']
f3 = df.at[11, 'MAX']
f5 = df.at[15, 'MAX']
f6 = df.at[19, 'MAX']
f7 = df.at[23, 'MAX']
r1 = cp.Variable()
r2 = cp.Variable()
r3 = cp.Variable()
r5 = cp.Variable()
r6 = cp.Variable()
r7 = 0.125  # material 7 is fixed at 12.5 %, written as a fraction so the ratios can sum to 1
#Objectives
ma = 1.3
ms = 2.50
lsf = 98.5
delta1 =(ms*((r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7)+(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7))-(r1*s1+r2*s2+r3*s3+r5*s5+r6*s6+r7*s7))**2
delta2 =(ma*(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7)-(r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7))**2
delta3 =((lsf*(2.8*(r1*s1+r2*s2+r3*s3+r5*s5+r6*s6+r7*s7)+1.18*(r1*a1+r2*a2+r3*a3+r5*a5+r6*a6+r7*a7)+0.65*(f1*r1+f2*r2+f3*r3+f5*r5+f6*r6+f7*r7))-100*(r1*c1+r2*c2+r3*c3+r5*c5+r6*c6+r7*c7)))**2
objective = cp.Minimize(delta1+delta2+delta3)
constraints = [r1 - r2 == 0,  #I added this to make r1 = r2.
               r1 >= 0.20,
               r3 >= 0,       #I added these to make them non negative.
               r5 >= 0,
               r5 <= 0.008,
               r6 >= 0,
               r1+r2+r3+r5+r6+r7 == 1]
problem = cp.Problem(objective,constraints)
problem.solve()
print(r1.value,r2.value,r3.value,r5.value,r6.value)
print(problem.status)
Once again I want to thank you for your help, guys.
Maybe you know how I can improve the code for getting the variable values; maybe there is an example of using a for loop to get the values instead of pulling each one directly from the DF. The DF looks like this:
DATO MAX
0 c1 51.95000
1 s1 3.07000
2 a1 0.83000
3 f1 0.31000
4 c2 52.26000
5 s2 2.82000
6 a2 0.75000
...
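One way to avoid the 24 individual assignments is to turn the DF into a dictionary or a per-material table; a sketch, under the assumption that DATO always follows the c/s/a/f-plus-material-number pattern shown above:
import pandas as pd

# dictionary keyed by the DATO labels, e.g. coeff['c1'] -> 51.95
coeff = dict(zip(df['DATO'], df['MAX']))

# or reshape into one row per material with columns c, s, a, f
tmp = df.copy()
tmp['kind'] = tmp['DATO'].str[0]                 # 'c', 's', 'a' or 'f'
tmp['material'] = tmp['DATO'].str[1:].astype(int)
wide = tmp.pivot(index='material', columns='kind', values='MAX')
print(wide)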
I have a pandas dataframe of 4 variables (plus an ID), 2 categorical and 2 numeric.
ID   Trimester   State   Tax   rate
45   T1          NY      20    0.25
23   T3          FL      34    0.3
35   T2          TX      45    0.6
I would like to get a new table of the form:
ID   Trimester   State   Tax   rate   Tax_per_state_per_trimester
45   T1          NY      20    0.25   H
23   T3          FL      34    0.3    L
35   T2          TX      45    0.6    M
where the new variable 'Tax_per_state_per_trimester' is a categorical variable representing the tertiles of the corresponding subgroup, where L = first tertile, M = second tertile, H = last tertile.
I understand I can do a double grouping with:
df.groupby(['State', 'Trimester'])
but I don't know how to go from there.
I guess apply or transform with the quantile function should prove useful, but how?
Can you take a look and see if this gives you the results you want?
df = pd.read_excel('Tax.xlsx')

def mx(tri, state):
    return df[(df['Trimester'].eq(tri)) & (df['State'].eq(state))] \
        .groupby(['Trimester', 'State'])['Tax'].apply(max)[0]

for i, v in df.iterrows():
    t = (v['Tax'] / mx(v['Trimester'], v['State']))
    df.loc[i, 'Tax_per_state_per_trimester'] = 'L' if t < 1/3 else 'M' if t < 2/3 else 'H'
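Closer to the groupby/transform idea from the question, you could also rank Tax within each (State, Trimester) group and cut the ranks into thirds (a sketch; the L/M/H labelling convention is assumed):
import pandas as pd

pct = df.groupby(['State', 'Trimester'])['Tax'].rank(pct=True)
df['Tax_per_state_per_trimester'] = pd.cut(
    pct, bins=[0, 1/3, 2/3, 1], labels=['L', 'M', 'H'], include_lowest=True)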
I have a data frame df_ss_g as
ent_id,WA,WB,WC,WD
123,0.045251836,0.614582906,0.225930615,0.559766482
124,0.722324239,0.057781167,,0.123603561
125,,0.361074325,0.768542766,0.080434134
126,0.085781742,0.698045853,0.763116684,0.029084545
127,0.909758657,,0.760993759,0.998406211
128,,0.32961283,,0.90038336
129,0.714585519,,0.671905291,
130,0.151888772,0.279261613,0.641133263,0.188231227
Now I have to compute a weighted average (AVG_WEIGHTAGE) based on these weights, i.e. (WA*0.5 + WB*1 + WC*0.5 + WD*1) / (0.5 + 1 + 0.5 + 1).
But when I compute it using the method below, i.e.
df_ss_g['AVG_WEIGHTAGE']= df_ss_g.apply(lambda x:((x['WA']*0.5)+(x['WB']*1)+(x['WC']*0.5)+(x['WD']*1))/(0.5+1+0.5+1) , axis=1)
the output is wrong: for rows that contain NaN values it returns NaN as AVG_WEIGHTAGE.
All I want is that nulls are not counted in either the numerator or the denominator,
e.g.
ent_id,WA,WB,WC,WD,AVG_WEIGHTAGE
128,,0.32961283,,0.90038336,0.614998095 i.e. (WB*1+WD*1)/(1+1)
129,0.714585519,,0.671905291,,0.693245405 i.e. (WA*0.5+WC*0.5)/(0.5+0.5)
IIUC:
import numpy as np
weights = np.array([0.5, 1, 0.5, 1])
values = df.drop('ent_id', axis=1)
df['AVG_WEIGHTAGE'] = np.dot(values.fillna(0).to_numpy(), weights)/np.dot(values.notna().to_numpy(), weights)
df['AVG_WEIGHTAGE']
0 0.436647
1 0.217019
2 0.330312
3 0.383860
4 0.916891
5 0.614998
6 0.693245
7 0.288001
Try this method using dot products -
def av(t):
    #Define weights
    wt = np.array([0.5, 1, 0.5, 1])
    #Create a vector with 0 for null and 1 for non null
    nulls = np.array([int(i) for i in ~t.isna()])
    #Sum of the weights of the non-null entries (denominator)
    wt_new = np.dot(nulls, wt)
    #Weighted sum of the non-null values (numerator)
    t_new = np.dot(nulls * wt, t.fillna(0))
    #return division
    return np.divide(t_new, wt_new)

#assumes ent_id is the index, e.g. df = df.set_index('ent_id')
df['WEIGHTED AVG'] = df.apply(av, axis=1)
df = df.reset_index()
print(df)

   ent_id        WA        WB        WC        WD  WEIGHTED AVG
0     123  0.045252  0.614583  0.225931  0.559766      0.436647
1     124  0.722324  0.057781       NaN  0.123604      0.217019
2     125       NaN  0.361074  0.768543  0.080434      0.330312
3     126  0.085782  0.698046  0.763117  0.029085      0.383860
4     127  0.909759       NaN  0.760994  0.998406      0.916891
5     128       NaN  0.329613       NaN  0.900383      0.614998
6     129  0.714586       NaN  0.671905       NaN      0.693245
7     130  0.151889  0.279262  0.641133  0.188231      0.288001
It boils down to masking the nan values with 0 so they don't contribute to either weights or sum:
# this is the weights
weights = np.array([0.5,1,0.5,1])
# the columns of interest
s = df.iloc[:,1:]
# where the valid values are
mask = s.notnull()
# use `fillna` and then `@` for matrix multiplication
df['AVG_WEIGHTAGE'] = (s.fillna(0) @ weights) / (mask @ weights)
Pandas has the very handy function pd.DataFrame.corr() to do pairwise correlation of columns.
That makes it possible to compare correlations between any number of columns. For instance:
df = pd.DataFrame(np.random.randint(0,100,size=(100, 10)))
0 1 2 3 4 5 6 7 8 9
0 9 17 55 32 7 97 61 47 48 46
1 8 83 87 56 17 96 81 8 87 0
2 60 29 8 68 56 63 81 5 24 52
3 42 76 6 75 7 59 19 17 3 63
...
Now it is possible to test correlation between all 10 columns with df.corr(method='pearson'):
0 1 2 3 4 5 6 7 8 9
0 1.000000 0.082789 -0.094096 -0.086091 0.163091 0.013210 0.167204 -0.002514 0.097481 0.091020
1 0.082789 1.000000 0.027158 -0.080073 0.056364 -0.050978 -0.018428 -0.014099 -0.135125 -0.043797
2 -0.094096 0.027158 1.000000 -0.102975 0.101597 -0.036270 0.202929 0.085181 0.093723 -0.055824
3 -0.086091 -0.080073 -0.102975 1.000000 -0.149465 0.033130 -0.020929 0.183301 -0.003853 -0.062889
4 0.163091 0.056364 0.101597 -0.149465 1.000000 -0.007567 -0.017212 -0.086300 0.177247 -0.008612
5 0.013210 -0.050978 -0.036270 0.033130 -0.007567 1.000000 -0.080148 -0.080915 -0.004612 0.243713
6 0.167204 -0.018428 0.202929 -0.020929 -0.017212 -0.080148 1.000000 0.135348 0.070330 0.008170
7 -0.002514 -0.014099 0.085181 0.183301 -0.086300 -0.080915 0.135348 1.000000 -0.114413 -0.111642
8 0.097481 -0.135125 0.093723 -0.003853 0.177247 -0.004612 0.070330 -0.114413 1.000000 -0.153564
9 0.091020 -0.043797 -0.055824 -0.062889 -0.008612 0.243713 0.008170 -0.111642 -0.153564 1.000000
Is there a simple way to also get the corresponding p-values (ideally in pandas), as it is returned e.g. by scipy's kendalltau()?
Why not using the "method" argument of pandas.DataFrame.corr():
pearson : standard correlation coefficient.
kendall : Kendall Tau correlation coefficient.
spearman : Spearman rank correlation.
callable: callable with input two 1d ndarrays and returning a float.
from scipy.stats import kendalltau, pearsonr, spearmanr
def kendall_pval(x,y):
return kendalltau(x,y)[1]
def pearsonr_pval(x,y):
return pearsonr(x,y)[1]
def spearmanr_pval(x,y):
return spearmanr(x,y)[1]
and then
corr = df.corr(method=pearsonr_pval)
Probably just loop. It's basically what pandas does in the source code to generate the correlation matrix anyway:
import pandas as pd
import numpy as np
from scipy import stats

df_corr = pd.DataFrame()  # Correlation matrix
df_p = pd.DataFrame()     # Matrix of p-values
for x in df.columns:
    for y in df.columns:
        corr = stats.pearsonr(df[x], df[y])
        df_corr.loc[x, y] = corr[0]
        df_p.loc[x, y] = corr[1]
If you want to leverage the fact that this is symmetric, so you only need to calculate this for roughly half of them, then do:
mat = df.values.T
K = len(df.columns)
correl = np.empty((K, K), dtype=float)
p_vals = np.empty((K, K), dtype=float)

for i, ac in enumerate(mat):
    for j, bc in enumerate(mat):
        if i > j:
            continue
        else:
            corr = stats.pearsonr(ac, bc)
            #corr = stats.kendalltau(ac, bc)
            correl[i, j] = corr[0]
            correl[j, i] = corr[0]
            p_vals[i, j] = corr[1]
            p_vals[j, i] = corr[1]

df_p = pd.DataFrame(p_vals)
df_corr = pd.DataFrame(correl)
#pd.concat([df_corr, df_p], keys=['corr', 'p_val'])
This will work:
from scipy.stats import pearsonr
column_values = [column for column in df.columns.tolist() ]
df['Correlation_coefficent'], df['P-value'] = zip(*df.T.apply(lambda x: pearsonr(x[column_values ],x[column_values ])))
df_result = df[['Correlation_coefficent','P-value']]
Does this work for you?
from scipy.stats import pearsonr
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

#call the correlation function, you could round the values if needed
df_c = df.corr().round(1)
#get the p values (pandas sets the diagonal to 1, so subtract the identity to zero it)
pval = df.corr(method=lambda x, y: pearsonr(x, y)[1]) - np.eye(*df_c.shape)
#set the p values, *** for less than 0.001, ** for less than 0.01, * for less than 0.05
p = pval.applymap(lambda x: ''.join(['*' for t in [0.001, 0.01, 0.05] if x <= t]))
#df_c2 below will give you the dataframe with correlation coefficients and p values
df_c2 = df_c.astype(str) + p
#you could also plot the correlation matrix using sns.heatmap if you want
#mask the upper triangle
matrix = np.triu(df_c)
#convert to array for the heatmap annotations
df_c3 = df_c2.to_numpy()
#plot the heatmap
plt.figure(figsize=(13, 8))
sns.heatmap(df_c, annot=df_c3, fmt='', vmin=-1, vmax=1, center=0, cmap='coolwarm', mask=matrix)