I'm implementing a B&C (branch-and-cut) algorithm and using a counter that increments by 1 each time a lazy constraint is added.
After solving, there is a big difference between what I count and the number of lazy constraints Gurobi reports. What could be causing this difference?
Thanks.
Changed value of parameter LazyConstraints to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Optimize a model with 67 rows, 442 columns and 1154 nonzeros
Variable types: 22 continuous, 420 integer (420 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e-01, 5e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+01]
Presolve removed 8 rows and 42 columns
Presolve time: 0.00s
Presolved: 59 rows, 400 columns, 990 nonzeros
Variable types: 1 continuous, 399 integer (399 binary)
Root relaxation: objective 2.746441e+00, 37 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 4.18093 0 20 - 4.18093 - - 0s
H 0 0 21.2155889 4.18093 80.3% - 0s
0 0 5.91551 0 31 21.21559 5.91551 72.1% - 0s
H 0 0 18.8660609 5.91551 68.6% - 0s
0 0 6.35067 0 38 18.86606 6.35067 66.3% - 0s
H 0 0 17.9145774 6.35067 64.6% - 0s
0 0 6.85254 0 32 17.91458 6.85254 61.7% - 0s
H 0 0 17.7591641 6.85254 61.4% - 0s
0 0 7.20280 0 50 17.75916 7.20280 59.4% - 0s
H 0 0 17.7516768 7.20280 59.4% - 0s
0 2 7.91616 0 51 17.75168 7.91616 55.4% - 0s
* 80 62 30 17.6301180 8.69940 50.7% 10.7 0s
* 169 138 35 16.3820478 9.10423 44.4% 9.9 1s
* 765 486 22 14.6853796 9.65509 34.3% 9.2 2s
* 1315 762 27 14.6428113 9.97011 31.9% 9.4 3s
* 1324 415 14 12.0742408 9.97011 17.4% 9.4 3s
H 1451 459 11.8261154 10.02607 15.2% 9.7 4s
1458 463 11.78416 15 58 11.82612 10.02607 15.2% 9.6 5s
* 1567 461 33 11.6541357 10.02607 14.0% 10.6 6s
4055 906 11.15860 31 36 11.65414 10.69095 8.26% 12.4 10s
Cutting planes:
Gomory: 4
Flow cover: 1
Lazy constraints: 228
Explored 7974 nodes (98957 simplex iterations) in 14.78 seconds
Thread count was 4 (of 4 available processors)
Solution count 10: 11.6541 11.8261 12.0742 ... 17.9146
Optimal solution found (tolerance 1.00e-04)
Best objective 1.165413573861e+01, best bound 1.165413573861e+01, gap 0.0000%
My Lazy constraints counter: 654
The cutting-plane statistics displayed after the optimization has finished (or stopped) only show the number of cutting planes that were active in the final LP relaxation that was solved. In particular, the number of lazy constraints active at that last node may be less than the total number of lazy constraints that were added in a callback. For example, Gurobi may add internal cutting planes during the optimization that dominate the original lazy constraint, or use the lazy constraint from the callback to derive other cuts instead of adding the original one.
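For reference, a minimal self-contained gurobipy sketch of the counting pattern described in the question (a toy model, not the actual one; the lazily enforced constraint x + y <= 1 is just a placeholder for real separation logic):

from gurobipy import Model, GRB

lazy_count = 0  # incremented once for every constraint passed to cbLazy

def count_lazy(model, where):
    global lazy_count
    if where == GRB.Callback.MIPSOL:
        # inspect the candidate solution and add a violated constraint lazily
        if model.cbGetSolution(model._x) + model.cbGetSolution(model._y) > 1 + 1e-6:
            model.cbLazy(model._x + model._y <= 1)
            lazy_count += 1

m = Model('toy')
m._x = m.addVar(vtype=GRB.BINARY, name='x')
m._y = m.addVar(vtype=GRB.BINARY, name='y')
m.setObjective(m._x + m._y, GRB.MAXIMIZE)
m.Params.LazyConstraints = 1  # required whenever cbLazy is used
m.optimize(count_lazy)
print('my counter:', lazy_count)  # may exceed the "Lazy constraints" line in the log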
I am looking to apply two different for loops to a single dataframe.
The data I have is taken from a PDF and looks like this upon reading into a DataFrame:
Output

Summary            Prior Years  1  2  3  4   5    6  7  8  9  10  Total
Total Value 3,700  110          -  -  -  5   NaN  -  -  -  -  --  3,815
Total Value        115 100      -  -  -  10  NaN  -  -  -  -  --  225
The expected table output is
Expected Output

Summary      Prior Years  1    2  3  4  5   6  7  8  9  10  Total
Total Value  3,700        110  -  -  -  5   -  -  -  -  --  3,815
Total Value  115          100  -  -  -  10  -  -  -  -  --  225
To resolve the errors in the original output, I did the following:
test.loc[:,"1":"5"]=test.loc[:,"Prior Years":"5"].shift(axis=1)
test[['Summary','Prior Years']]=test['Summary'].str.strip().str.extract(r'(\D*).*?([\d\,\.]*)' )
and
test.loc[:,"1":"5"]=test.loc[:,"Prior Years":"5"].shift(axis=1)
test[['Prior Years', '1']]=test['Prior Years'].str.split(' ',expand=True)
These solve the respective issues in each column when applied in isolation, but I am looking to apply both conditions simultaneously.
When I attempt to write for loops using the conditions above, they affect the whole dataframe rather than just the rows where the individual conditions are met.
An example of this is
for i in test.loc[:,'Summary']:
    if len(i)>12:
        test.loc[:,"1":"5"] = test.loc[:,"Prior Years":"5"].shift(axis=1)
        test[['Summary','Prior Years']] = test['Summary'].str.strip().str.extract(r'(\D*).*?([\d\,\.]*)')
Which then outputs
Output

Summary      Prior Years  1    2  3  4   5  6  7  8  9  10  Total
Total Value  3,700        110  -  -  -   5  -  -  -  -  --  3,815
Total Value  115 100      -    -  -  10  -  -  -  -  --  225
I am using string length as the trigger for the loop, since the 'Summary' and 'Prior Years' columns otherwise have fairly uniform string lengths.
Right now your operations are affecting the whole column. If you loop through the index instead, you can limit the operation to just the rows you want to change:
import re

for idx in test.index:
    if len(test.loc[idx, "Summary"]) > 12:
        # shift the misaligned cells one column to the right (positional, per row)
        test.loc[idx, "1":"5"] = test.loc[idx, "Prior Years":"4"].values
        # a single cell is a plain string, so use re.match rather than the .str accessor
        text, number = re.match(r'(\D*).*?([\d\,\.]*)', test.loc[idx, "Summary"].strip()).groups()
        test.loc[idx, ["Summary", "Prior Years"]] = [text, number]
    if len(test.loc[idx, "Prior Years"]) > 5:  # the merged value sits in 'Prior Years'
        test.loc[idx, "1":"5"] = test.loc[idx, "Prior Years":"4"].values
        test.loc[idx, ["Prior Years", "1"]] = test.loc[idx, "Prior Years"].split(" ")
If this code is too slow, it's also possible to vectorize this:
mask = test["Summary"].str.len() > 12        # compare string lengths, not the strings themselves
test.loc[mask, "1":"5"] = test.loc[mask, "Prior Years":"5"].shift(axis=1)
# .values avoids column-label alignment when writing the extracted pieces back
test.loc[mask, ["Summary", "Prior Years"]] = test.loc[mask, "Summary"].str.strip().str.extract(r'(\D*).*?([\d\,\.]*)').values
mask = test["Prior Years"].str.len() > 5     # the merged value sits in 'Prior Years'
test.loc[mask, "1":"5"] = test.loc[mask, "Prior Years":"5"].shift(axis=1)
test.loc[mask, ["Prior Years", "1"]] = test.loc[mask, "Prior Years"].str.split(" ", expand=True).values
I'm trying to implement a Gurobi model with multiple objective functions (specifically two) that is solved lexicographically (in a hierarchy), but I'm running into an issue: when optimizing the second objective function, Gurobi degrades the solution to the first one, which should not happen with hierarchical optimization. It degrades the first objective by 1 in order to improve the second by 5. Could this be an error in how I set up my model hierarchically? This is the code where I set up my model:
m = Model('lexMin Model')
m.ModelSense = GRB.MINIMIZE
variable = m.addVars(k.numVars, vtype=GRB.BINARY, name='variable')
# objective 0 and objective 1
m.setObjectiveN(LinExpr(quicksum([variable[j]*k.obj[0][j] for j in range(k.numVars)])), 0)
m.setObjectiveN(LinExpr(quicksum([variable[j]*k.obj[1][j] for j in range(k.numVars)])), 1)
for i in range(0, k.numConst):
    m.addConstr(quicksum([k.const[i,j]*variable[j] for j in range(k.numVars)]) <= k.constRHS[i])
# bounds on both objective values
m.addConstr(quicksum([variable[j]*k.obj[0][j] for j in range(k.numVars)]) >= r2[0][0])
m.addConstr(quicksum([variable[j]*k.obj[0][j] for j in range(k.numVars)]) <= r2[1][0])
m.addConstr(quicksum([variable[j]*k.obj[1][j] for j in range(k.numVars)]) >= r2[1][1])
m.addConstr(quicksum([variable[j]*k.obj[1][j] for j in range(k.numVars)]) <= r2[0][1])
# give objective 0 the higher priority (objective 1 keeps the default priority 0)
m.Params.ObjNumber = 0
m.ObjNPriority = 1
m.update()
m.optimize()
I've double-checked that the priority of the second objective is 0, and the objective values are nowhere near what they'd be if I had prioritized the wrong function. When optimizing the first objective it even finds the right value, but when it moves on to the second it chooses values that degrade the first.
The Gurobi output looks like this:
Optimize a model with 6 rows, 375 columns and 2250 nonzeros
Model fingerprint: 0xac5de9aa
Variable types: 0 continuous, 375 integer (375 binary)
Coefficient statistics:
Matrix range [1e+01, 1e+02]
Objective range [1e+01, 1e+02]
Bounds range [1e+00, 1e+00]
RHS range [1e+04, 1e+04]
---------------------------------------------------------------------------
Multi-objectives: starting optimization with 2 objectives ...
---------------------------------------------------------------------------
Multi-objectives: applying initial presolve ...
---------------------------------------------------------------------------
Presolve time: 0.00s
Presolved: 6 rows and 375 columns
---------------------------------------------------------------------------
Multi-objectives: optimize objective 1 () ...
---------------------------------------------------------------------------
Presolve time: 0.00s
Presolved: 6 rows, 375 columns, 2250 nonzeros
Variable types: 0 continuous, 375 integer (375 binary)
Root relaxation: objective -1.461947e+04, 10 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 -14619.473 0 3 - -14619.473 - - 0s
H 0 0 -14569.00000 -14619.473 0.35% - 0s
H 0 0 -14603.00000 -14619.473 0.11% - 0s
H 0 0 -14608.00000 -14619.473 0.08% - 0s
H 0 0 -14611.00000 -14618.032 0.05% - 0s
0 0 -14617.995 0 5 -14611.000 -14617.995 0.05% - 0s
0 0 -14617.995 0 3 -14611.000 -14617.995 0.05% - 0s
H 0 0 -14613.00000 -14617.995 0.03% - 0s
0 0 -14617.995 0 5 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 5 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 7 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 3 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 4 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 6 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 6 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.995 0 6 -14613.000 -14617.995 0.03% - 0s
0 0 -14617.720 0 7 -14613.000 -14617.720 0.03% - 0s
0 0 -14617.716 0 8 -14613.000 -14617.716 0.03% - 0s
0 0 -14617.697 0 8 -14613.000 -14617.697 0.03% - 0s
0 0 -14617.661 0 9 -14613.000 -14617.661 0.03% - 0s
0 2 -14617.661 0 9 -14613.000 -14617.661 0.03% - 0s
* 823 0 16 -14614.00000 -14616.351 0.02% 2.8 0s
Cutting planes:
Gomory: 6
Cover: 12
MIR: 4
StrongCG: 2
Inf proof: 6
Zero half: 1
Explored 1242 nodes (3924 simplex iterations) in 0.29 seconds
Thread count was 8 (of 8 available processors)
Solution count 6: -14614 -14613 -14611 ... -14569
No other solutions better than -14614
Optimal solution found (tolerance 1.00e-04)
Best objective -1.461400000000e+04, best bound -1.461400000000e+04, gap 0.0000%
---------------------------------------------------------------------------
Multi-objectives: optimize objective 2 () ...
---------------------------------------------------------------------------
Loaded user MIP start with objective -12798
Presolve removed 1 rows and 0 columns
Presolve time: 0.01s
Presolved: 6 rows, 375 columns, 2250 nonzeros
Variable types: 0 continuous, 375 integer (375 binary)
Root relaxation: objective -1.282967e+04, 28 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 -12829.673 0 3 -12798.000 -12829.673 0.25% - 0s
0 0 -12829.378 0 4 -12798.000 -12829.378 0.25% - 0s
0 0 -12829.378 0 3 -12798.000 -12829.378 0.25% - 0s
0 0 -12828.688 0 4 -12798.000 -12828.688 0.24% - 0s
H 0 0 -12803.00000 -12828.688 0.20% - 0s
0 0 -12825.806 0 5 -12803.000 -12825.806 0.18% - 0s
0 0 -12825.193 0 5 -12803.000 -12825.193 0.17% - 0s
0 0 -12823.156 0 6 -12803.000 -12823.156 0.16% - 0s
0 0 -12822.694 0 7 -12803.000 -12822.694 0.15% - 0s
0 0 -12822.679 0 7 -12803.000 -12822.679 0.15% - 0s
0 2 -12822.679 0 7 -12803.000 -12822.679 0.15% - 0s
Cutting planes:
Cover: 16
MIR: 6
StrongCG: 3
Inf proof: 4
RLT: 1
Explored 725 nodes (1629 simplex iterations) in 0.47 seconds
Thread count was 8 (of 8 available processors)
Solution count 2: -12803 -12798
No other solutions better than -12803
Optimal solution found (tolerance 1.00e-04)
Best objective -1.280300000000e+04, best bound -1.280300000000e+04, gap 0.0000%
So it finds the values (-14613, -12803) instead of (-14614, -12798).
The default MIPGap is 1e-4, and the first objective is degrading by less than that (1/14614 ≈ 0.7e-4). If you lower the MIPGap, your issue should go away. In your code, add
m.Params.MIPGap = 1e-6
before the optimize call.
One way to reason about this behavior: since you had a MIPGap of 1e-4, you would have accepted a solution with value -14613 even if you didn't have a second objective.
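As a side note (an assumption on my part, based on Gurobi's multi-objective attributes rather than anything shown above), the allowed degradation of a higher-priority objective can also be pinned down explicitly:

# select objective 0 and forbid any absolute degradation of its value
m.Params.ObjNumber = 0
m.ObjNAbsTol = 0.0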
I have stolen diamonds in a lot of different places. The places are on a coordinate system (x, y), where each place is named after a number and has a due time (dT), for example:
Name X Y dT
1 283 248 0
2 100 118 184
3 211 269 993
4 200 200 948
5 137 152 0
6 297 263 513
7 345 256 481
8 265 212 0
9 185 222 840
10 214 180 1149
11 153 218 0
12 199 199 0
13 289 285 149
14 177 184 597
15 359 192 0
16 161 207 0
17 94 121 316
18 296 246 0
19 193 122 423
20 265 216 11
dT stands for due time, and it is given for each place: the fixed time by which we need to get the diamonds back before the thief moves the hideout away.
Starting point is always 1.
I need to visit all places exactly once, getting the diamonds back, such that the total delay is minimized.
Distance is calculated as the Euclidean distance rounded to the closest integer.
The arrival time at each place is the previous arrival time plus the distance from the previous place. If the police can get the diamonds before the due time of a place, the delay for that place is 0; otherwise the delay is the difference between the arrival time and the due time, i.e. delay = max(0, arrival - dT). The total delay is the sum of the delays over all places.
My mission is to find the order in which the police should visit each place once so that the total delay is minimized, for two larger instances.
I think I'm pretty close to an answer myself, but I would love to know how you would solve it, and to get a better understanding of the math behind it to be able to program it better.
Here is my code, which calculates everything; the only thing missing is a way to find the right order:
import numpy as np

#------------------------------------------------------------------------
# `order` holds the (name, x, y, dT) records and `index` the place indices;
# both are defined earlier in my script.
poss = [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)]  # the order(s) to evaluate
here = []
for p in range(len(poss)):
    tempos = []
    for o in range(len(index)):
        point = poss[p][o]
        valuez = order[point-1]
        tempos.append(valuez)
    here.append(tempos)

#//DUE//
due = [[item[b][3] for b in range(len(index))] for item in here]

#//DISTANCE//
x_ = [[item[b][1] for b in range(len(index))] for item in here]
y_ = [[item[b][2] for b in range(len(index))] for item in here]
z = [list(zip(x_[a], y_[a])) for a in range(len(x_))]
dis = []
for aa in range(len(poss)):
    tempor = []
    for i in range(len(index)-1):
        firstpoint = z[aa][i]
        secondpoint = z[aa][i+1]
        distance = round(np.linalg.norm(np.array(secondpoint) - np.array(firstpoint)))
        distance = int(distance)
        tempor.append(distance)
    dis.append(tempor)

#//ARRIVAL TIME//
# arrival time is the running sum of the previous leg distances
arrival = []
for v in range(len(poss)):
    forone = [0, dis[v][0]]
    for r in range(len(index)-2):
        sumz = dis[v][r+1] + forone[r+1]
        sumz = int(sumz)
        forone.append(sumz)
    arrival.append(forone)

#//DELAY//
delay = []
for d in range(len(poss)):
    tempo = []
    for q in range(len(index)):
        v = arrival[d][q] - due[d][q]
        if arrival[d][q] <= due[d][q]:
            tempo.append(0)
        else:
            tempo.append(v)
    delay.append(tempo)

#//ORDER//
# goal is to find the order that minimizes the delay for two larger instances
total = [sum(x) for x in delay]
small = min(total)
final = here[total.index(small)]
print(small)
You could solve this by implementing the travelling salesman problem, but with a simple modification. In the TSP, the cost of moving to the next location is just its distance from your current location. In your problem, the cost of arriving at a location is its delay:
if arrival_time < dT then
    cost = 0
else
    cost = arrival_time - dT
The total cost of each path is the sum of these costs, and the path with the minimum total cost is the one you want.
This can be programmed simply by calculating these costs across all permutations of the locations, bar the first one.
Note that this is very computationally expensive, so you should also consider dynamic programming.
For a brute-force TSP implementation to adapt, see https://codereview.stackexchange.com/questions/110221/tsp-brute-force-optimization-in-python. The cost would need to be calculated differently, as mentioned above.
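For illustration, here is a minimal Python 3 sketch of that brute-force approach with the delay-based cost (the places list hard-codes only the first four rows of the table above; the rounding and delay definitions follow the question):

from itertools import permutations
from math import dist  # Python 3.8+

# (x, y, dT) per place; index 0 is place 1, the fixed starting point
places = [(283, 248, 0), (100, 118, 184), (211, 269, 993), (200, 200, 948)]

def total_delay(route):
    # total delay = sum of max(0, arrival - due) with rounded Euclidean legs
    clock = 0
    delay = 0
    for prev, cur in zip(route, route[1:]):
        clock += round(dist(places[prev][:2], places[cur][:2]))
        delay += max(0, clock - places[cur][2])
    return delay

# brute force over every order of the remaining places, starting from place 1
best = min(((0,) + p for p in permutations(range(1, len(places)))), key=total_delay)
print(best, total_delay(best))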
I would like to write the following code in a vectorized way, as the current code is pretty slow (and I would like to learn Python best practices). Basically, the code says: if today's value is within 10% of yesterday's value, then today's value (in a new column) is the same as yesterday's new value; otherwise, today's value is unchanged:
def test(df):
    df['OldCol'] = (100, 115, 101, 100, 99, 70, 72, 75, 78, 80, 110)
    df['NewCol'] = df['OldCol']
    for i in range(1, len(df)):
        # within 10% of yesterday's value: carry yesterday's new value forward
        if 0.9 < df.loc[i, 'OldCol'] / df.loc[i-1, 'OldCol'] < 1.1:
            df.loc[i, 'NewCol'] = df.loc[i-1, 'NewCol']
        else:
            df.loc[i, 'NewCol'] = df.loc[i, 'OldCol']
    return df['NewCol']
The output should be the following:
OldCol NewCol
0 100 100
1 115 115
2 101 101
3 100 101
4 99 101
5 70 70
6 72 70
7 75 70
8 78 70
9 80 70
10 110 110
Can you please help?
I would like to use something like this but I did not manage to solve my issue:
def test(df):
    df['NewCol'] = df['OldCol']
    cond = np.where((df['OldCol'].shift(1)/df['OldCol'] > 0.9) & (df['OldCol'].shift(1)/df['OldCol'] < 1.1))
    df['NewCol'][cond[0]] = df['NewCol'][cond[0]-1]
    return df
A solution in three steps:
df['variation']=(df.OldCol/df.OldCol.shift())
df['gap']=~df.variation.between(0.9,1.1)
df['NewCol']=df.OldCol.where(df.gap).fillna(method='ffill')
Which gives:
OldCol variation gap NewCol
0 100 nan True 100
1 115 1.15 True 115
2 101 0.88 True 101
3 100 0.99 False 101
4 99 0.99 False 101
5 70 0.71 True 70
6 72 1.03 False 70
7 75 1.04 False 70
8 78 1.04 False 70
9 80 1.03 False 70
10 110 1.38 True 110
It seems to be about 30x faster than the loop version on this example.
In one line:
x=df.OldCol;df['NewCol']=x.where(~(x/x.shift()).between(0.9,1.1)).fillna(method='ffill')
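A small caveat, assuming a recent pandas version (2.1+): fillna(method='ffill') is deprecated there, and the equivalent spelling would be:
x=df.OldCol;df['NewCol']=x.where(~(x/x.shift()).between(0.9,1.1)).ffill()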
You should boolean-mask your original dataframe:
df[(0.9 <= df['NewCol']/df['OldCol']) & (df['NewCol']/df['OldCol'] <= 1.1)] will give you all rows where NewCol is within 10% of OldCol.
So to set the NewCol field in these rows, index the original frame with .loc (assigning through a filtered copy would not write back to df):
mask = (0.9 <= df['NewCol']/df['OldCol']) & (df['NewCol']/df['OldCol'] <= 1.1)
df.loc[mask, 'NewCol'] = df.loc[mask, 'OldCol']
Since you seem to be well on your way to finding the "jump" days yourself, I'll only show the trickier bit. So let's assume you have a numpy array old of length N and a boolean numpy array jump of the same size. As a matter of convention, the zeroth element of jump is set to True. Then you can first calculate the numbers of repeats between jumps:
jump_indices = np.where(jump)[0]
repeats = np.diff(np.r_[jump_indices, [N]])
Once you have these you can use np.repeat:
new = np.repeat(old[jump_indices], repeats)
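To connect this with the example data above, here is a sketch (my construction of jump from the 10% rule, not code from the answer) that applies both steps end to end:

import numpy as np

old = np.array([100, 115, 101, 100, 99, 70, 72, 75, 78, 80, 110])
N = old.size

ratio = old[1:] / old[:-1]
jump = np.r_[True, ~((ratio > 0.9) & (ratio < 1.1))]  # True where a new level starts

jump_indices = np.where(jump)[0]
repeats = np.diff(np.r_[jump_indices, [N]])
new = np.repeat(old[jump_indices], repeats)
print(new)  # [100 115 101 101 101  70  70  70  70  70 110]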
I have made the following code:
from math import sqrt
import time

def factors(n):
    # all divisors of n, sorted largest first
    x = set(reduce(list.__add__,
                   ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))
    return sorted(x, reverse=True)

n = 10**7
m = 0
start_time = time.time()
for i in xrange(1, int(sqrt(n))+1):
    l = 0
    x = factors(i)
    for d in xrange(i, n/i+1):
        if i == d:
            l += i
        else:
            # the first divisor of i that also divides d is gcd(i, d)
            for b in x:
                if d % b == 0:
                    l += 2*b
                    break
    m += l
    print i
elapsed_time = time.time() - start_time
print elapsed_time
print m
I think what the code does is add gcd(i, d) over all pairs (i, d) with i·d ≤ n.
Because of the print i statement, I have noticed that the second loop is slow when i is small. Why is this, and how do I optimize it?
I see that the iteration over d will be longer for small i, but shouldn't it essentially just be iterating over all the values, whereas for larger i the third loop should take longer because of the greater size of x?
Could the second loop be slower for small values of i just because the xrange spans a larger number of values for small i? I mean, in the second loop declaration we have:
for d in xrange(i,n/i+1):
And the maximum value of the xrange (that is, n/i+1) is larger for small i (the quotient n/i is largest at i=1, then it decreases).
Your intuition about how long a single pass of each loop takes relative to the others is accurate, but your assumptions about how many times each loop runs are off by several orders of magnitude.
The i loop executes ~3000 times. The total number of inner-loop iterations per i varies, but on average it drops at a high rate: at the start, the d loop runs ~10,000,000 times per i, and then it falls off very quickly.
The total number of loop iterations you run for i in [0:215] is greater than for i in [215:3161]:
i d_loops b_loops running_mean avg_last_10_loops
1 10000001 1 10000001.0 10000001.0
2 5000001 2 10000001.5 10000001.5
3 3333334 2 8888890.33333 8888890.33333
4 2500001 3 8541668.5 8541668.5
5 2000001 2 7633335.2 7633335.2
6 1666667 4 7472224.0 7472224.0
7 1428572 2 6812926.85714 6812926.85714
8 1250001 4 6586311.5 6586311.5
9 1111112 3 6224869.77778 6224869.77778
10 1000001 4 6002383.2 6002383.2
99 101011 6 1653200.16162 637628.2
199 50252 2 1035550.34171 324231.5
299 33445 4 779296.658863 203848.2
399 25063 8 634848.313283 192922.4
499 20041 2 540089.59519 149790.4
599 16695 2 472549.51586 114461.6
699 14307 4 421785.891273 103772.2
799 12516 4 382086.017522 100739.8
899 11124 4 349883.460512 80518.2
999 10011 8 323351.570571 80530.4
1099 9100 4 300961.77434 67638.0
1199 8341 4 281811.0 61978.2
1299 7699 4 265260.015396 65681.9
1399 7148 2 250684.336669 54528.4
1499 6672 2 237863.799199 49524.2
1599 6254 8 226449.282051 56452.4
1699 5886 2 216141.950559 47237.4
1799 5559 4 206859.735964 43485.8
1899 5266 6 198471.47762 49653.2
1999 5003 2 190769.076538 38112.8
2099 4765 2 183702.581706 34396.0
2199 4548 4 177231.36653 36467.0
2299 4350 6 171250.213136 35741.6
2399 4169 2 165683.010838 34256.8
2499 4002 12 160541.983994 39293.2
2599 3848 4 155707.039246 35478.6
2699 3706 2 151193.218229 27470.6
2799 3573 6 146973.628796 30790.8
2899 3450 4 143019.365643 29714.8
2999 3335 2 139271.870957 28064.8
3099 3227 4 135755.08777 27799.4
3153 3172 4 133946.542341 33037.6
3154 3171 8 133912.116677 30484.8
3155 3170 4 133873.691284 29208.8
3156 3169 12 133843.321926 29196.8
3157 3168 8 133808.95407 30460.0
3158 3167 4 133770.594047 29820.6
3159 3166 12 133740.27477 32349.4
3160 3165 16 133713.977215 25983.4
3161 3164 4 133675.679848 25979.4
3162 3163 16 133649.409235 27867.2
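As a rough sanity check of these numbers: the d loop for a given i runs n//i + 1 - i times, so a short Python 3 sketch (same n as above) confirms that the first 215 values of i account for more iterations than the remaining ~2950:

n = 10**7
trips = [n // i + 1 - i for i in range(1, int(n**0.5) + 1)]
print(sum(trips[:215]), sum(trips[215:]))   # the small-i prefix dominates
print(sum(trips[:215]) > sum(trips[215:]))  # True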