from gurobipy import *
import pandas as p
import numpy as np
d={
(1,1):2,(1,2):3,(1,3):3,(1,4):5,
(2,1):2,(2,2):3,(2,3):3,(2,4):5,
(3,1):2,(3,2):3,(3,3):3,(3,4):5,
(4,1):2,(4,2):3,(4,3):3,(4,4):5,
(5,1):2,(5,2):3,(5,3):3,(5,4):5,
}
P,p=multidict({1:1,2:1,3:2,4:2,5:2})
W,w=multidict({1:2,2:2,3:2,4:2})
D=100
total_p=sum([i for i in p.values()])
total_w=sum([j for j in w.values()])
if total_w/total_p<1:
    R = total_w/total_p
else:
    R = 1
print(R)
model=Model("keibi optimizer")
x={}
for i in P:
    for j in W:
        x[i,j] = model.addVar(vtype="C", name="x(%s,%s)" % (i,j))
model.update()
for j in W:
    model.addConstr(quicksum(x[i,j] for i in P) == w[j], name='Demand(%s)' % j)
for i in P:
    model.addConstr(quicksum(x[i,j] for j in W) <= p[i], name='Capacity(%s)' % i)
for i in P:
    for j in W:
        model.addConstr(d[i,j]*x[i,j], "<=", D*x[i,j])
model.setObjective(quicksum(D-(sum((d[i,j]*x[i,j]/x[i,j]) for j in W if x!=0))for i in P),GRB.MAXIMIZE)
model.optimize()
print("Optimal value:", model.ObjVal/(total_p*D))
for (i,j) in x:
    print("sending quantity %10s from factory %3s to customer %3s" % (x[i,j], j, i))
I made the program above using Python with Gurobi 9.1.2, but I cannot run it. The error message is:
Divisor must be a constant
I suspect the line model.setObjective(quicksum(D-(sum((d[i,j]*x[i,j]/x[i,j]) for j in W if x!=0))for i in P),GRB.MAXIMIZE) is too complicated.
I want to use the formula in the photo below as the objective function.
What should I do? Please help me.
So let's focus on: Σ_{j∈W} d[i,j]*x[i,j] / Σ_{j∈W} x[i,j]. First, the math as written is confusing: you should never have a sum over j nested inside another sum over j, because it is then unclear which j the inner sum refers to.
I'll use the notation:
sum(j, d[i,j]*x[i,j]/y[i])
y[i] = sum(j, x[i,j])
for our problem. Now we can do:
sum(j, d[i,j]*z[i,j])
y[i]*z[i,j] = x[i,j] for all i,j
y[i] = sum(j, x[i,j]) for all i
y[i],z[i,j] free variables
As you can see, there is no division anymore. Note that we introduced a non-convex quadratic constraint here. That is a bit expensive, but better than division, which Gurobi cannot handle at all. Further note that we allow y[i] = 0 here (multiplying by zero is allowed, while dividing by zero is not).
In general, if you see z = x/y, consider reformulating it as x = z*y.
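A hedged gurobipy sketch of what this reformulation could look like, reusing the data (d, P, p, W, w, D) from the question. The variables y and z are the auxiliary variables introduced above; the per-pair distance bound from the question is left out to keep the sketch focused on removing the division, so treat this as an illustration rather than a tested drop-in program.

from gurobipy import Model, GRB, quicksum

model = Model("keibi optimizer (division-free)")
x = model.addVars(P, W, vtype=GRB.CONTINUOUS, name="x")
y = model.addVars(P, lb=-GRB.INFINITY, name="y")      # y[i] = sum_j x[i,j]
z = model.addVars(P, W, lb=-GRB.INFINITY, name="z")   # z[i,j] plays the role of x[i,j]/y[i]

model.addConstrs((quicksum(x[i, j] for i in P) == w[j] for j in W), name="Demand")
model.addConstrs((quicksum(x[i, j] for j in W) <= p[i] for i in P), name="Capacity")
model.addConstrs((y[i] == quicksum(x[i, j] for j in W) for i in P), name="DefY")
# bilinear (non-convex) constraints replacing the division: y[i]*z[i,j] = x[i,j]
model.addConstrs((y[i] * z[i, j] == x[i, j] for i in P for j in W), name="DefZ")

model.setObjective(quicksum(D - quicksum(d[i, j] * z[i, j] for j in W) for i in P),
                   GRB.MAXIMIZE)
model.Params.NonConvex = 2   # required for the bilinear equality constraints
model.optimize()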
Problem asked in Directi Interview
Take an input array, say A, and print the maximum value of x,
where x = |(A[i] - A[j]) + (i - j)|
Constraints:
Max array size: 20000
Time limit: 0.1s
Time limit is a major factor in this question.
Here is the setter's solution for this question.
'''
THE BRUTE FORCE APPROACH
def maximum(arr):
    res = 0
    n = len(arr)
    for i in range(n):
        for j in range(n):
            res = max(res, abs(arr[i]-arr[j]) + abs(i-j))
    return res
'''
import sys

def maximum(arr):
    max1 = max2 = -sys.maxsize - 1
    min1 = min2 = sys.maxsize
    ans = 0
    n = len(arr)
    for i in range(n):
        max1 = max(max1, arr[i] + i)
        max2 = max(max2, arr[i] - i)
        min1 = min(min1, arr[i] + i)
        min2 = min(min2, arr[i] - i)
        ans = max(ans, max2 - min2)
        ans = max(ans, max1 - min1)
    return ans
But I tried solving the problem using sort
def maximum(array):
    n = len(array)
    array.sort()
    return (array[n-1] - array[0]) + (n-1)

if __name__ == "__main__":
    n = int(input())
    array = list(map(int, input("\nEnter the numbers : ").strip().split()))[:n]
    print(maximum(array))
Is my approach correct? Is it optimised?
Thanks in advance.
The suggested approach of sorting first and then taking the extreme elements is incorrect. Take the counterexample [2, 1, 3].
The solution for this problem should yield 3: (3-1) + (2-1), or
(3-2) + (2-0).
However, the suggested solution yields 4: (3-1) + (2-0).
A possible (linear-time) solution:
Let's start with some algebra, and drop the absolute value for a minute:
(A[i] - A[j]) + (i - j) = (A[i] + i) - (A[j] + j)
We are looking for the maximal value, so:
We want to minimize the value of (A[j] + j).
We want to maximize the value of (A[i] + i).
Note that the two are completely independent of each other.
So, find one index that maximizes (A[i] + i) and another that minimizes (A[j] + j); finding these two values takes a single linear pass.
Repeat for the other way around (for when (A[i] - A[j]) + (i - j) is negative):
Find i that minimizes (A[i] + i).
Find j that maximizes (A[j] + j).
Both passes run in linear time, yielding an O(n) solution, as sketched below.
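A short sketch of the approach just described (the function name is mine): since |(A[i] - A[j]) + (i - j)| = |(A[i] + i) - (A[j] + j)|, one linear pass over the values A[i] + i is enough.

def max_abs_diff(A):
    # Shift every element by its index; the answer is the spread of these values.
    shifted = [a + i for i, a in enumerate(A)]
    return max(shifted) - min(shifted)

print(max_abs_diff([2, 1, 3]))  # 3, matching the counterexample above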
Sorting disturbs the original array, and the mapping between elements and their original indices is lost, so sorting will logically lead to a wrong answer.
For example, as correctly described by #amit in his comments:
A = [2, 1, 3]
Correct answer = 3
Suggested solution's answer = 4
I am referring to the dissertation written by Marcel R. Ackermann, found at https://d-nb.info/100345531X/34. In the dissertation, Marcel gives pseudo-code for an optimal 1-dimensional k-median algorithm, shown here:
pseudo-code for optimal K-Median
I tried to convert the pseudo-code into Python, as shown below:
import math
import statistics

def cost(arr, median):
    cost = 0
    for i in range(len(arr)):
        cost = cost + abs(arr[i] - median)
    return cost

def simpleCluster1D(arr, k):
    n = len(arr)
    B = [[0] * k for i in range(n)]
    C = [[0] * k for i in range(n)]
    for i in range(k):
        c = statistics.median(arr[:i+1])
        B[i][0] = cost(arr[:i+1], c)
        C[i][0] = c
    for j in range(1, k):
        for i in range(j, n):
            B[i][j] = math.inf
            C[i][j] = []
            for t in range(j, i+1):
                c = statistics.median(arr[t:i+1])
                b = B[t-1][j-1] + cost(arr[t:i+1], c)
                if b < B[i][j]:
                    B[i][j] = b
                    tmp = C[t-1][j-1]
                    C[i][j] = [C[t-1][j-1]] + [c]
    return C[n-1][k-1]
However, the results I obtained are not correct.
For example, when
arr = [50,60,70,80]
k = 2
simpleCluster1D(arr, k)
The result is [0,80], which is wrong. The answer should be [55,75] or [50,70].
I don't know where I have gone wrong.
I am wondering if anyone can help me with this conversion. I am also a little confused by the declaration of the array C: column 1 is supposed to contain the median, while column 2 contains a list in each index. How do I do that?
Also, do the libraries/packages available for R and Python (e.g. flexclust in R and pyclustering in Python) already have a built-in optimal 1-D solver? I know that for d > 1 it is not feasible to compute an optimal result, so heuristics are used to obtain a locally optimal solution. That is why I concluded that these libraries also solve 1-D problems with heuristics, and hence the answer is not deterministic. Am I right to come to that conclusion?
I don't know where I have gone wrong.
You haven't. The error is in the dissertation; the line
1: for i = 1,2,...,k do
has to be
1: for i = 1,2,...,n do
- otherwise the rows from k+1 to n of the arrays B and C aren't fully initialized.
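Applied to the Python code in the question, that means the first loop of simpleCluster1D should run over all rows. A sketch of just the corrected loop:

for i in range(n):                       # was range(k) in the question's code
    c = statistics.median(arr[:i+1])
    B[i][0] = cost(arr[:i+1], c)
    C[i][0] = c

With this change, simpleCluster1D([50, 60, 70, 80], 2) should return [50, 70], one of the answers the question expects.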
I tried to speed up my Python code by using CuPy instead of NumPy. The problem is that with CuPy my code got drastically slower. Maybe I approached the problem a little too naively.
Maybe someone can find a bottleneck in my code:
import cupy as np
import time as ti

def f(y, t):
    y_ = np.zeros(2 * N_1*N_2)                      # n: e-6, c: e-5
    for i in range(0, N_1*N_2):
        y_[i] = y[i + N_1*N_2]                      # n: e-7, c: e-5 or e-6
    for i in range(N_1*N_2):
        sum = -4*y[i]                               # n: e-7, c: e-7, after some statements e-5
        if (i + 1 in indexes) and (not (i in indi)):
            sum += y[i+1]                           # n: e-7, c: e-7, after some statements e-5
        if (i - 1) in indexes and (i % N_1 != 0):
            sum += y[i-1]                           # n: e-7, c: e-7, after some statements e-5
        if i + N_1 in indexes:
            sum += y[i+N_1]                         # n: e-7, c: e-7, after some statements e-5
        if i - N_1 in indexes:
            sum += y[i-N_1]                         # n: e-7, c: e-7, after some statements e-5
        y_[i + N_1*N_2] = sum
    return y_

def k_1(y, t, h):
    return np.asarray(f(y, t)) * h

def k_2(y, t, h):
    return np.asarray(f(np.add(np.asarray(y), np.multiply(1/2, k_1(y, t, h))), t + 1/2 * h)) * h

# k_3, k_4 look just like k_2, maybe with a 1/2 here or there
# some init stuff is happening here
while t < T_end:
    # also some magic happening here which is just data saving
    y = np.asarray(y) + 1/6*(k_1(y, t, m) + 2*k_2(y, t, m) + 2*k_3(y, t, m) + k_4(y, t, m))
    t += m
EDIT
I tried to benchmark my code; some results can be seen as comments in the code above. Each number refers to one line, the units are seconds, and I mostly give a rough estimate of the order of magnitude (n: NumPy, c: CuPy).
Additionally, I tested
np.multiply # n: e-6, c: e-5
and
np.add # n: e-5 or e-6, c: 0.005 or e-5
Your code is not slow because numpy is slow, but because you call many (Python) functions, and calling functions (as well as iterating, accessing objects, and basically everything else) is slow in Python. Thus cupy will not help you (and will probably even harm performance, because it has to do more setup, e.g. copying data over to the GPU). If you can reformulate your algorithm to use fewer Python function calls (vectorizing, as in the other answer), this will speed up your code tremendously, and you probably do not need cupy at all.
You could also look into numba, which compiles your code to native code with LLVM. If you do so, be sure to read some documentation and use nopython=True; otherwise you will only replace slow cupy code with slow numba code.
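A minimal sketch of that numba suggestion (a toy function, not the poster's f): decorating a plain-Python loop with @njit, which is shorthand for @jit(nopython=True), compiles it to native code so the per-iteration interpreter overhead disappears.

import numpy as np
from numba import njit

@njit  # same as @jit(nopython=True)
def shifted_copy(y, n):
    # Copies the second half of y into the first half of a new array,
    # like the first loop in f(), but compiled to native code.
    y_ = np.zeros(2 * n)
    for i in range(n):
        y_[i] = y[i + n]
    return y_

y = np.arange(8.0)
print(shifted_copy(y, 4))  # first call compiles; later calls run at native speed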
Your code example doesn't work, since you haven't defined N_1, N_2, indexes and indi anywhere. Also, the comments in the code don't seem to help others understand what's going on.
Your code probably won't benefit from numba/cupy, since you haven't vectorized the operations. Lists would probably be just as fast as numpy arrays the way your code works at the moment.
If you get rid of your for loops and change
y_ = np.zeros(2 * N_1*N_2)
for i in range(0, N_1*N_2):
    y_[i] = y[i + N_1*N_2]
to
n = N_1*N_2
y_ = np.zeros(2*n)
y_[:n] = y[n:2*n]
and so forth, you will speed your code up substantially.
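For the second loop, a hedged sketch of the same idea. This assumes that the membership tests on indexes and indi only implement the boundary handling of a 5-point stencil on an N_2 x N_1 grid, and that y is already an array rather than a list; neither is stated in the question, so treat this as an illustration of the slicing style rather than a drop-in replacement.

n = N_1 * N_2
u = y[:n].reshape(N_2, N_1)      # "position" half of the state, viewed as a 2-D grid
lap = -4.0 * u
lap[:, :-1] += u[:, 1:]          # right neighbour  (i+1 in the same row)
lap[:, 1:]  += u[:, :-1]         # left neighbour   (i-1 in the same row)
lap[:-1, :] += u[1:, :]          # neighbour below  (i+N_1)
lap[1:, :]  += u[:-1, :]         # neighbour above  (i-N_1)
y_ = np.concatenate((y[n:2*n], lap.ravel()))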
I am trying to implement a circular rotation algorithm for a HackerRank challenge. My code (middle block) seems to run fine for small inputs, but fails for larger inputs due to a timeout. Any help optimizing the code will be much appreciated.
Here is my code:
import sys

n, k, q = raw_input().strip().split(' ')
n, k, q = [int(n), int(k), int(q)]
a = map(int, raw_input().strip().split(' '))
for j in range(0, k):
    temp = a[n-1]
    for i in range(n-2, -1, -1):
        a[i+1] = a[i]
    a[0] = temp
for a0 in xrange(q):
    m = int(raw_input().strip())
    print a[m]
You don't have to actually rotate the array to find the item; you can use modular arithmetic instead.
If an element at index i is moved k places, its new index is m = (i+k) % n. Conversely, if the element now at index m has been moved k places, its previous location was i = (m-k) % n. Since m-k can be negative when k > m, we add len(a) before taking the modulus; Python's % already handles negative operands, but adding the length makes the formula correct in other languages too.
Knowing that, we can write the following:
for a0 in xrange(q):
    m = int(raw_input().strip())
    prev_index = (len(a) + m - k) % n
    print a[prev_index]
I am trying to implement Pollard's rho algorithm for computing discrete logarithms based on the description in the book Prime Numbers: A Computational Perspective by Richard Crandall and Carl Pomerance, section 5.2.2, page 232. Here is my Python code:
def dlog(g, t, p):
    # l such that g**l == t (mod p), with p prime
    # algorithm due to Crandall/Pomerance "Prime Numbers" sec 5.2.2
    from fractions import gcd
    def inverse(x, p): return pow(x, p-2, p)
    def f(xab):
        x, a, b = xab[0], xab[1], xab[2]
        if x < p/3:
            return [(t*x)%p, (a+1)%(p-1), b]
        if 2*p/3 < x:
            return [(g*x)%p, a, (b+1)%(p-1)]
        return [(x*x)%p, (2*a)%(p-1), (2*b)%(p-1)]
    i, j, k = 1, [1,0,0], f([1,0,0])
    while j[0] <> k[0]:
        print i, j, k
        i, j, k = i+1, f(j), f(f(k))
    print i, j, k
    d = gcd(j[1] - k[1], p - 1)
    if d == 1: return ((k[2]-j[2]) * inverse(j[1]-k[1], p-1)) % (p-1)
    m, l = 0, ((k[2]-j[2]) * inverse(j[1]-k[1], (p-1)/d)) % ((p-1)/d)
    while m <= d:
        print m, l
        if pow(g, l, p) == t: return l
        m, l = m+1, (l+((p-1)/d)) % (p-1)
    return False
The code includes debugging output to show what is happening. You can run the code at http://ideone.com/8lzzOf, where you will also see two test cases. The first test case, which follows the d > 1 path, calculates the correct value. The second test case, which follows the d == 1 path, fails.
Please help me find my error.
Problem 1
One thing that looks suspicious is this function:
def inverse(x, p): return pow(x, p-2, p)
This is computing a modular inverse of x modulo p using Euler's theorem. This is fine if p is prime, but otherwise you need to raise x to the power phi(p)-1.
In your case you are calling this function with a modulus of p-1, which is even (hence not prime), so it returns an incorrect inverse.
As phi(p-1) is hard to compute, it may be better to use the extended Euclidean algorithm for computing the inverse instead.
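A minimal sketch of that suggestion (the helper is mine, not part of the original code): a modular inverse via the iterative extended Euclidean algorithm, valid for any modulus m as long as gcd(x, m) == 1.

def inverse(x, m):
    # Invariant: old_s * x == old_r (mod m) throughout the loop.
    old_r, r = x % m, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("x is not invertible modulo m (gcd(x, m) != 1)")
    return old_s % m

print(inverse(3, 10))  # 7, since 3 * 7 == 21 == 1 (mod 10)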
Problem 2
Running your code for the case g=83, t=566, p=997 produces 977, while you were expecting 147.
In fact, 977 is indeed a valid discrete logarithm of 566 to the base 83, as we can see if we compute:
>>> pow(83,977,997)
566
but it is not the one you were expecting (147).
This is because the Pollard rho method requires g to be a generator of the group. Unfortunately, 83 is not a generator of the multiplicative group modulo 997 (the group {1, 2, ..., 996}), because pow(83,166,997) == 1. (In other words, after generating 166 powers of 83 you start repeating elements.)
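A quick check of that claim (assuming, as stated above, that the multiplicative order of 83 modulo 997 is 166): any two exponents that agree modulo the order of 83 give the same power, which is why both 147 and 977 are valid answers.

p, g = 997, 83
order = next(e for e in range(1, p) if pow(g, e, p) == 1)
print(order)           # 166: 83 generates a subgroup of order 166, not the whole group
print(pow(g, 977, p))  # 566
print(pow(g, 147, p))  # 566 as well, since 977 % 166 == 147 % 166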