I'm trying to get a quick implementation of the following problem, ideally such that it would work in a numba function. The problem is the following: I have two random integers a & b and consider their binary representation of length L, e.g.
L=4: a=10->1010, b=6->0110.
This is the information that is fed into the function. Then I cut both binary representations in two at the same random position and fuse the pieces crosswise, which gives two possible results, e.g.
L=4: a=1|010, b=0|110 ---> c=1110 or 0010.
One of the two outcomes is chosen with equal probability, and that is the result of the function. The cut can occur anywhere between the first and the last bit of the binary representation.
This is currently my code:
import random

def func(a, b, l):
    bin_a = [int(i) for i in bin(a)[2:].zfill(l)]
    bin_b = [int(i) for i in bin(b)[2:].zfill(l)]
    randint = random.randint(1, l - 1)
    print("randint", randint)
    if random.random() < 0.5:
        result = bin_a[0:randint] + bin_b[randint:l]
    else:
        result = bin_b[0:randint] + bin_a[randint:l]
    return result
I have the feeling that there are possibly many shortcuts to this problem that I am not coming up with. Also, my code does not work in numba :/ Thanks for any help!
Edit: This is an update of my code, thanks to Prune's help! It also works as a numba function. If there are no further improvements, I will close the question.
def func2(a, b, l):
    randint = random.randint(1, l - 1)
    print("randint", randint)
    bitlist_l = [1] * randint + [0] * (l - randint)
    bitlist_r = [0] * randint + [1] * (l - randint)
    print("bitlist_l", bitlist_l)
    print("bitlist_r", bitlist_r)
    l_mask = 0
    r_mask = 0
    for i in range(l):
        l_mask = (l_mask << 1) | bitlist_l[i]
        r_mask = (r_mask << 1) | bitlist_r[i]
    print("l_mask", l_mask)
    print("r_mask", r_mask)
    if random.random() < 0.5:
        c = (a & l_mask) | (b & r_mask)
    else:
        c = (b & l_mask) | (a & r_mask)
    return c
You lose a lot of time converting between string and int. Try bit operations instead. Mask the items you want and construct the output without all the conversions. Try these steps:
size = [length of the larger number in bits]. There are many ways to get this.
Make a mask template, size 1-bits.
Pick your random position, pos. randint is a poor name, as it shadows the function you're using.
Make two masks: l_mask = mask << pos and r_mask = mask >> (size - pos), keeping only the low size bits. This gives you two mutually exclusive and exhaustive bit-maps for your inputs.
Flip your random coin, the 50-50 chance. The < 0.5 result would be ...
(a & l_mask) | (b & r_mask)
For the >= 0.5 result, switch a and b in that expression.
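A minimal sketch of those steps (my own addition, assuming a and b both fit in size bits; the names are illustrative):

import random

def crossover(a, b, size):
    mask = (1 << size) - 1               # size one-bits
    pos = random.randint(1, size - 1)    # cut position
    l_mask = (mask << pos) & mask        # high size-pos bits
    r_mask = mask >> (size - pos)        # low pos bits, the complement of l_mask
    if random.random() < 0.5:            # the 50-50 coin flip
        return (a & l_mask) | (b & r_mask)
    return (b & l_mask) | (a & r_mask)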
You can improve your code by realizing that you do not need a "human readable" binary representation to do binary operations.
For example, creating the mask:
m = (1<<randompos) - 1
The crossover can be done like so:
c = (a if coinflip else b) ^ ((a^b)&m)
And that's all.
Full example:
import numpy as np

# create random sample
a, b = np.random.randint(1 << 32, size=2)
randompos = np.random.randint(1,32)
coinflip = np.random.randint(2)
randompos
# 12
coinflip
# 0
# do the crossover
m = (1<<randompos) - 1
c = (a if coinflip else b) ^ ((a^b)&m)
# check
for i in (a, b, m, c):
    print(f"{i:032b}")
# 11100011110111000001001111100011
# 11010110110000110010101001111011
# 00000000000000000000111111111111
# 11010110110000110010001111100011
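Since the question asks for numba compatibility, here is one way this crossover might be wrapped in a jitted function (a sketch; the function name is mine, and a, b are assumed to be non-negative integers that fit in 64 bits so the shifts stay in range):

import numpy as np
from numba import njit

@njit
def crossover_xor(a, b, l):
    pos = np.random.randint(1, l)      # cut position in [1, l-1]
    m = (1 << pos) - 1                 # mask selecting the low pos bits
    if np.random.randint(2) == 0:      # coin flip: which parent keeps its high bits
        return a ^ ((a ^ b) & m)
    return b ^ ((a ^ b) & m)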
I'm a beginner at Python and I'm trying to calculate the angles (-26.6 & 18.4) in the figure below, and so on for the rest of the squares, using Python code.
I have found the code below and I'm trying to understand it well. How does it work here? Any clarification, please?
Python Code:
import math

def computeDegree(a, b, c):
    babc = (a[0]-b[0])*(c[0]-b[0]) + (a[1]-b[1])*(c[1]-b[1])
    norm_ba = math.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
    norm_bc = math.sqrt((c[0]-b[0])**2 + (c[1]-b[1])**2)
    norm_babc = norm_ba * norm_bc
    radian = math.acos(babc / norm_babc)
    degree = math.degrees(radian)
    return round(degree, 1)

def funcAngle(p, s, sn):
    a = (s[0]-p[0], s[1]-p[1])
    b = (sn[0]-p[0], sn[1]-p[1])
    c = a[0] * b[1] - a[1] * b[0]
    if p != sn:
        d = computeDegree(s, p, sn)
    else:
        d = 0
    if c > 0:
        result = d
    elif c < 0:
        result = -d
    elif c == 0:
        result = 0
    return result
p = (1,4)
s = (2,2)
listSn = ((1,2), (2,3), (3,2), (2,1))
for sn in listSn:
    print(funcAngle(p, s, sn))
The results: I expected to get the angles in the picture, such as -26.6, 18.4, ...
Essentially, this uses the definition of the dot product to solve for the angle. You can read more about it at this link (also where I found these images).
To solve for the angle you first need to convert your 3 input points into two vectors.
# Vector from b to a
# BA = (a[0] - b[0], a[1] - b[1])
BA = a - b
# Vector from b to c
# BC = (c[0] - b[0], c[1] - b[1])
BC = c - b
Using the two vectors you can then find the angle between them by first finding the value of the dot product with the second formula.
# babc = (a[0]-b[0])*(c[0]-b[0])+(a[1]-b[1])*(c[1]-b[1])
dot_product = BA[0] * BC[0] + BA[1] * BC[1]
Then by going back to the first definition, you can divide off the lengths of the two input vectors and the resulting value should be the cosine of the angle between the vectors. It may be hard to read with the array notation, but it's just the Pythagorean theorem.
# Length/magnitude of vector BA
# norm_ba = math.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
length_ba = math.sqrt(BA[0]**2 + BA[1]**2)
# Length/magnitude of vector BC
# norm_bc = math.sqrt((c[0]-b[0])**2 + (c[1]-b[1])**2)
length_bc = math.sqrt(BC[0]**2 + BC[1]**2)
# Then using acos (essentially inverse of cosine), you can get the angle
# radian = math.acos(babc/norm_babc)
angle = math.acos(dot_product / (length_ba * length_bc))
Most of the other stuff is just there to catch cases where the program might accidentally try to divide by zero. Hopefully this helps to explain why it looks the way it does.
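Putting those pieces together, a self-contained version might look like this (a sketch; the function name and the example call are my own additions):

import math

def angle_at_b(a, b, c):
    BA = (a[0] - b[0], a[1] - b[1])              # vector from b to a
    BC = (c[0] - b[0], c[1] - b[1])              # vector from b to c
    dot_product = BA[0] * BC[0] + BA[1] * BC[1]  # BA . BC
    length_ba = math.sqrt(BA[0]**2 + BA[1]**2)   # |BA|
    length_bc = math.sqrt(BC[0]**2 + BC[1]**2)   # |BC|
    radian = math.acos(dot_product / (length_ba * length_bc))
    return math.degrees(radian)

# e.g. angle_at_b((2,2), (1,4), (1,2)) is about 26.6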
Edit: I answered this question because I was bored and didn't see harm in explaining the math behind that code; however, in the future try to avoid asking questions like "how does this code work".
Let's start with funcAngle since it calls computeDegree later.
The first thing it does is define a as a two item tuple. A lot of this code seems to use two item tuples, with the two parts referenced by v[0] and v[1] or similar. These are almost certainly two dimensional vectors of some sort.
I'm going to write these as 𝐯 for the vector and vₓ and vᵧ since they're probably the two components.
[don't look too closely at that second subscript, it's totally a y and not a gamma...]
a is the vector difference between s and p: i.e.
a = (s[0]-p[0], s[1]-p[1])
is aₓ=sₓ-pₓ and aᵧ=sᵧ-pᵧ; or just 𝐚=𝐬-𝐩 in vector form.
b = (sn[0]-p[0], sn[1]-p[1])
again; 𝐛=𝐬𝐧-𝐩
c = a[0] * b[1] - a[1] * b[0]
c=aₓbᵧ-aᵧbₓ; c is the cross product of 𝐚 and 𝐛 (and is just a number)
if p != sn:
    d = computeDegree(s, p, sn)
else:
    d = 0
I'd take the above in reverse: if 𝐩 and 𝐬𝐧 are the same, then we already know the angle between them is zero (and it's possible the algorithm fails badly) so don't compute it. Otherwise, compute the angle (we'll look at that later).
if c > 0:
    result = d
elif c < 0:
    result = -d
elif c == 0:
    result = 0
If c is pointing in the normal direction (via the left hand rule? right hand rule? can't remember) that's fine: if it isn't, we need to negate the angle, apparently.
return result
Pass the number we've just worked out to some other code.
You can probably invoke this code by adding something like:
print(funcAngle((1,0), (0,1), (2,2)))
at the end and running it. (Haven't actually tested these numbers)
So this function works out a and b to get c; all just to negate the angle if it's pointing the wrong way. None of these variables are actually passed to computeDegree.
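As a tiny illustration of that sign logic (values chosen arbitrarily, not from the question):

a = (1, 0)                      # s - p
b = (0, 1)                      # sn - p
c = a[0] * b[1] - a[1] * b[0]   # 2D cross product: 1*1 - 0*0 = 1
# c > 0, so the angle keeps its sign; swapping a and b gives c = -1 and flips it.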
so, computeDegree():
def computeDegree(a,b,c):
First thing to note is that the variables from before have been renamed. funcAngle passed s, p and sn, but now they're called a, b and c. And note that the order they're passed in isn't the same as the order funcAngle received them in, which is nasty and confusing.
babc = (a[0]-b[0])*(c[0]-b[0])+(a[1]-b[1])*(c[1]-b[1])
babc = (aₓ-bₓ)(cₓ-bₓ)+(aᵧ-bᵧ)(cᵧ-bᵧ)
If 𝐚' and 𝐜' are 𝐚-𝐛 and 𝐜-𝐛 respectively, this is just
a'ₓc'ₓ+a'ᵧc'ᵧ, or the dot product of 𝐚' and 𝐜'.
norm_ba = math.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
norm_bc = math.sqrt((c[0]-b[0])**2 + (c[1]-b[1])**2)
norm_ba = √[(aₓ-bₓ)² + (aᵧ-bᵧ)²] (and norm_bc likewise).
This looks like the length of the hypotenuse of 𝐚' (and 𝐜' respectively)
norm_babc = norm_ba * norm_bc
which we then multiply together
radian = math.acos(babc/norm_babc)
We use the arccosine (inverse cosine, cos^-1) function, with the length of those multiplied hypotenuses as the hypotenuse and that dot product as the adjacent length...
degree = math.degrees(radian)
return round(degree, 1)
but that's in radians, so we convert to degrees and round it for nice formatting.
Ok, so now it's in maths, rather than Python, but that's still not very easy to understand.
(sidenote: this is why descriptive variable names and documentation are everyone's friends!)
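To make the structure clearer, here is one possible rewrite of the two functions with descriptive names (a sketch; it is meant to behave like the original, but the names are my own):

import math

def angle_between(first, vertex, second):
    # Unsigned angle (in degrees) at `vertex` between the two other points.
    d1 = (first[0] - vertex[0], first[1] - vertex[1])
    d2 = (second[0] - vertex[0], second[1] - vertex[1])
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    len1 = math.hypot(d1[0], d1[1])
    len2 = math.hypot(d2[0], d2[1])
    return round(math.degrees(math.acos(dot / (len1 * len2))), 1)

def signed_angle(p, s, sn):
    # Angle at p from s to sn, negated if the turn goes the other way.
    if p == sn:
        return 0
    a = (s[0] - p[0], s[1] - p[1])
    b = (sn[0] - p[0], sn[1] - p[1])
    cross = a[0] * b[1] - a[1] * b[0]   # sign tells which way the angle points
    if cross == 0:
        return 0
    d = angle_between(s, p, sn)
    return d if cross > 0 else -d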
I'm trying to make a Z3 program (in Python) that generates boolean circuits that do certain tasks (e.g. adding two n-bit numbers) but the performance is terrible to the point where a brute-force search of the entire solution space would be faster. This is my first time using Z3 so I could be doing something that impacts my performance, but my code seems fine.
The following is copied from my code here:
from z3 import *
BITLEN = 1 # Number of bits in input
STEPS = 1 # How many steps to take (e.g. time)
WIDTH = 2 # How many operations/values can be stored in parallel, has to be at least BITLEN * #inputs
# Input variables
x = BitVec('x', BITLEN)
y = BitVec('y', BITLEN)
# Define operations used
op_list = [BitVecRef.__and__, BitVecRef.__or__, BitVecRef.__xor__, BitVecRef.__xor__]
unary_op_list = [BitVecRef.__invert__]
for uop in unary_op_list:
    op_list.append(lambda x, y, uop=uop: uop(x))  # bind uop to avoid late-binding surprises
# Chooses a function to use by setting all others to 0
def chooseFunc(i, x, y):
    res = 0
    for ind, op in enumerate(op_list):
        res = res + (ind == i) * op(x, y)
    return res
s = Solver()
steps = []
# First step is just the bits of the input padded with constants
firststep = Array("firststep", IntSort(), BitVecSort(1))
for i in range(BITLEN):
    firststep = Store(firststep, i * 2, Extract(i, i, x))
    firststep = Store(firststep, i * 2 + 1, Extract(i, i, y))
for i in range(BITLEN * 2, WIDTH):
    firststep = Store(firststep, i, BitVec("const_0_%d" % i, 1))
steps.append(firststep)
# Generate remaining steps
for i in range(1, STEPS + 1):
    this_step = Array("step_%d" % i, IntSort(), BitVecSort(1))
    last_step = steps[-1]
    for j in range(WIDTH):
        func_ind = Int("func_%d_%d" % (i, j))
        s.add(func_ind >= 0, func_ind < len(op_list))
        x_ind = Int("x_%d_%d" % (i, j))
        s.add(x_ind >= 0, x_ind < WIDTH)
        y_ind = Int("y_%d_%d" % (i, j))
        s.add(y_ind >= 0, y_ind < WIDTH)
        node = chooseFunc(func_ind, Select(last_step, x_ind), Select(last_step, y_ind))
        this_step = Store(this_step, j, node)
    steps.append(this_step)
# Set the result to the first BITLEN bits of the last step
if BITLEN == 1:
    result = Select(steps[-1], 0)
else:
    result = Concat(*[Select(steps[-1], i) for i in range(BITLEN)])
# Set goal
goal = x | y
s.add(ForAll([x, y], goal == result))
print(s)
print(s.check())
print(s.model())
The code basically lays out the inputs as individual bits, then at each "step" one of 5 boolean functions can operate on the values from the previous step, where the final step represents the end result.
In this example, I generate a circuit to calculate the boolean OR of two 1-bit inputs, and an OR function is available in the circuit, so the solution is trivial.
I have a solution space of only 5*5*2*2*2*2=400:
5 possible functions (for each of the two function nodes)
2 inputs for each function, each of which has 2 possible values
This code takes a few seconds to run and provides a correct answer, but I feel like it should run instantaneously as there are only 400 possible solutions, of which quite a few are valid. If I increase the inputs to be two bits long, the solution space has a size of (5^4)*(4^8)=40,960,000 and never finishes on my computer, though I feel this should be easily doable with Z3.
I also tried effectively the same code but replaced Arrays/Store/Select with Python lists and "selected" the variables using the same trick I used in chooseFunc(). The code is here and it runs in around the same time as the original code, so no speedup.
Am I doing something that would drastically slow down the solver? Thanks!
You have a duplicated __xor__ in your op_list; but that's not really the major problem. The slowdown is inevitable as you increase bit-size, but on a first look you can (and should) avoid mixing integer reasoning with booleans here. I'd code your chooseFunc as follows:
def chooseFunc(i, x, y):
    res = False
    for ind, op in enumerate(op_list):
        res = If(ind == i, op(x, y), res)
    return res
See if that improves run-times in any meaningful way. If not, the next thing to do would be to get rid of arrays as much as possible.
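For the array part, one possible direction (a sketch of the idea, not tested against the original model) is to keep each step as a plain Python list of 1-bit bit-vector expressions and select from it with nested If instead of Array/Store/Select:

from z3 import If

def select_from_list(lst, idx):
    # idx is a symbolic Int already constrained to 0 <= idx < len(lst)
    res = lst[0]
    for j in range(1, len(lst)):
        res = If(idx == j, lst[j], res)
    return res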
We have some large binary number N (large means millions of digits). We also have a binary mask M where 1 means that we must remove the digit at that position in N and move all higher bits one position to the right.
Example:
N = 100011101110
M = 000010001000
Res 1000110110
Is it possible to solve this problem without a loop, using some set of logical or arithmetic operations? We can assume that we have access to bignum arithmetic in Python.
Feels like it should be something like this:
Res = N - (N xor M)
But it doesn't work
UPD: My current solution with a loop is the following:
def prepare_reduced_arrays(dict_of_N, mask):
    '''
    mask: string '0000011000'
    each element of dict_of_N - big python integer
    '''
    capacity = len(mask)
    answer = dict()
    for el in dict_of_N:
        answer[el] = 0
    new_capacity = 0
    for i in range(capacity - 1, -1, -1):
        if mask[i] == '1':
            continue
        cap2 = (1 << new_capacity)
        pos = (capacity - i - 1)
        for el in dict_of_N:
            current_bit = (dict_of_N[el] >> pos) & 1
            if current_bit:
                answer[el] |= cap2
        new_capacity += 1
    return answer, new_capacity
While this may not be possible without a loop in Python, it can be made extremely fast with numba and just-in-time compilation. I went on the assumption that your inputs could be easily represented as boolean arrays, which would be very simple to construct from a binary file using struct. The method I have implemented involves iterating over a few different objects; however, these iterations were chosen carefully to make sure they are compiler-optimized and never do the same work twice. The first iteration uses np.where to locate the indices of all the bits to delete. This specific function (among many others) is optimized by the numba compiler. I then use this list of bit indices to build the slice indices for slices of bits to keep. The final loop copies these slices to an empty output array.
import numpy as np
from numba import jit
from time import time
def binary_mask(num, mask):
    num_nbits = num.shape[0]            # how many bits are in our big num
    mask_bits = np.where(mask)[0]       # which bits are we deleting
    mask_n_bits = mask_bits.shape[0]    # how many bits are we deleting
    start = np.empty(mask_n_bits + 1, dtype=int)  # preallocate array for slice start indexes
    start[0] = 0                        # first slice starts at 0
    start[1:] = mask_bits + 1           # subsequent slices start 1 after each True bit in mask
    end = np.empty(mask_n_bits + 1, dtype=int)    # preallocate array for slice end indexes
    end[:mask_n_bits] = mask_bits       # each slice ends on (but does not include) True bits in the mask
    end[mask_n_bits] = num_nbits + 1    # last slice goes all the way to the end
    out = np.empty(num_nbits - mask_n_bits, dtype=np.uint8)  # preallocate return array
    for i in range(mask_n_bits + 1):    # for each slice
        a = start[i]                    # use local variables to reduce number of lookups
        b = end[i]
        c = a - i
        d = b - i
        out[c:d] = num[a:b]             # copy slices
    return out
jit_binary_mask = jit("b1[:](b1[:], b1[:])")(binary_mask) #decorator without syntax sugar
###################### Benchmark ########################
bignum = np.random.randint(0,2,1000000, dtype=bool) # 1 million random bits
bigmask = np.random.randint(0,10,1000000, dtype=np.uint8)==9 #delete about 1 in 10 bits
t = time()
for _ in range(10):  # 10 cycles of just numpy implementation
    out = binary_mask(bignum, bigmask)
print(f"non-jit: {time()-t} seconds")
t = time()
out = jit_binary_mask(bignum, bigmask) #once ahead of time to compile
compile_and_run = time() - t
t = time()
for _ in range(10):  # 10 cycles of compiled numpy implementation
    out = jit_binary_mask(bignum, bigmask)
jit_runtime = time()-t
print(f"jit: {jit_runtime} seconds")
print(f"estimated compile_time: {compile_and_run - jit_runtime/10}")
In this example, I execute the benchmark on a boolean array of length 1,000,000 a total of 10 times for both the compiled and un-compiled version. On my laptop, the output is:
non-jit: 1.865583896636963 seconds
jit: 0.06370806694030762 seconds
estimated compile_time: 0.1652850866317749
As you can see with a simple algorithm like this, very significant performance gains can be seen from compilation. (in my case about 20-30x speedup)
As far as I know, this can be done without the use of loops if and only if M is a power of 2.
Let's take your example, and modify M so that it is a power of 2:
N = 0b100011101110 = 2286
M = 0b000000001000 = 8
Removing the fourth lowest bit from N and shifting the higher bits to the right would result in:
N = 0b10001110110 = 1142
We achieved this using the following algorithm:
Begin with N = 0b100011101110 = 2286
Iterate from the most-significant bit to the least-significant bit in M.
If the current bit in M is set to 1, then store the bits below it in some variable, x:
x = 0b110
Then, subtract every bit up to and including the current bit in M from N, so that we end up with the following:
N - (0b1000 + x) = N - (0b1000 + 0b110) = 0b100011101110 - 0b1110 = 0b100011100000
This step can also be achieved by and-ing the bits with 0, which may be more efficient.
Next, we shift the result once to the right:
0b100011100000 >> 1 = 0b10001110000
Finally, we add back x to the shifted result:
0b10001110000 + x = 0b10001110000 + 0b110 = 0b10001110110 = 1142
There may be a possibility that this can somehow be done without loops, but it would actually be efficient if you were to simply iterate over M (from the most-significant bit to the least-significant bit) and performed this process on every set bit, as the time complexity would be O(M.bit_length()).
I wrote up the code for this algorithm as well, and I believe it's relatively efficient, but I don't have any big binary numbers to test it with:
def remove_bits(N, M):
    bit = 2 ** (M.bit_length() - 1)
    while bit != 0:
        if M & bit:
            ones = bit - 1
            # Store the bits below the current bit.
            temp = N & ones
            # Clear every bit up to and including the current bit.
            N &= ~((bit << 1) - 1)
            # Shift once to the right.
            N >>= 1
            # Restore the stored lower bits.
            N |= temp
        bit >>= 1
    return N

if __name__ == '__main__':
    N = 0b100011101110
    M = 0b000010001000
    print(bin(remove_bits(N, M)))
Using your example, this returns your result: 0b1000110110
I don't think there's any way to do this in a constant number of calls to the built-in bitwise operators. Python would have to provide something like PEXT for that to be possible.
For literally millions of digits, you may actually get best performance by working in terms of sequences of bits, sacrificing the space advantages of Python ints and the time advantages of bitwise operations in favor of more flexibility in the operations you can perform. I don't know where the break-even point would be:
import itertools
bits = bin(N)[2:]
maskbits = bin(M)[2:].zfill(len(bits))
bits = bits.zfill(len(maskbits))
chosenbits = itertools.compress(bits, map('0'.__eq__, maskbits))
result = int(''.join(chosenbits), 2)
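For what it's worth, running that snippet with the numbers from the question reproduces the expected result:

N = 0b100011101110
M = 0b000010001000
# after the snippet above:
# bin(result) == '0b1000110110'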
I am trying to find the stdev of a sequence of numbers extracted from combinations of 30 dice that sum up to 120. I am very new to Python, and this code makes the console freeze because the number of combinations is enormous; I am not sure how to fit the work into a smaller, more efficient function. What I did is:
found all possible combinations of 30 dice;
filtered combinations that sum up to 120;
multiplied all items within each combination in the result list;
tried extracting standard deviation.
Here is the code:
import itertools
import numpy
dice = [1,2,3,4,5,6]
subset = itertools.product(dice, repeat = 30)
result = []
for x in subset:
    if sum(x) == 120:
        result.append(x)
my_result = numpy.product(result, axis = 1).tolist()
std = numpy.std(my_result)
print(std)
Note that D(X) = E(X^2) - E(X)^2, so you can solve this problem analytically with the following recurrences, where h[i][N] counts the combinations of i dice summing to N, and f[i][N] and g[i][N] are the sums of the product and of the squared product over those combinations:
f[i][N] = sum(k * f[i-1][N-k])     (1 <= k <= 6)
g[i][N] = sum(k^2 * g[i-1][N-k])
h[i][N] = sum(h[i-1][N-k])
f[1][k] = k      (1 <= k <= 6)
g[1][k] = k^2    (1 <= k <= 6)
h[1][k] = 1      (1 <= k <= 6)
Sample implementation:
import numpy as np
Nmax = 120
nmax = 30
min_value = 1
max_value = 6
f = np.zeros((nmax+1, Nmax+1), dtype ='object')
g = np.zeros((nmax+1, Nmax+1), dtype ='object') # the intermediate results will be really huge, to keep them accurate we have to utilize python big-int
h = np.zeros((nmax+1, Nmax+1), dtype ='object')
for i in range(min_value, max_value+1):
    f[1][i] = i
    g[1][i] = i**2
    h[1][i] = 1
for i in range(2, nmax+1):
    for N in range(1, Nmax+1):
        f[i][N] = 0
        g[i][N] = 0
        h[i][N] = 0
        for k in range(min_value, max_value+1):
            if N - k >= 1:  # skip impossible sums (also avoids negative-index wraparound)
                f[i][N] += k * f[i-1][N-k]
                g[i][N] += (k**2) * g[i-1][N-k]
                h[i][N] += h[i-1][N-k]
result = np.sqrt(float(g[nmax][Nmax]) / h[nmax][Nmax] - (float(f[nmax][Nmax]) / h[nmax][Nmax]) ** 2)
# result = 32128174994365296.0
You ask for a result over an unfiltered length of 6^30 ≈ 2*10^23, impossible to handle as such.
There are two possibilities that can be combined:
Include more thinking to pre-treat the problem, e.g. on how to sample only those combinations with sum 120.
Do a Monte Carlo simulation instead, i.e. don't enumerate all combinations, but only draw a few thousand random ones to obtain a representative sample and determine the std sufficiently accurately.
Now, I only apply (2), giving the brute force code:
import random

N = 30      # number of dice
M = 100000  # number of samples
S = 120     # required sum
result = [[random.randint(1, 6) for _ in range(N)] for _ in range(M)]
result = [s for s in result if sum(s) == S]
Now, that result should be comparable to your result before using numpy.product ... that part I couldn't follow, though...
Ok, if you are out after the standard deviation of the product of the 30 dice, that is what your code does. Then I need 1,000,000 samples to get roughly reproducible values for std (1 digit) - takes my PC about 20 seconds, still considerably less than 1 million years :-D.
Is a number like 3.22*10^16 what you are looking for?
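For completeness, a sketch (my own addition) of how the product/std step from the question could be applied to the filtered Monte Carlo samples above:

import numpy

my_result = numpy.prod(numpy.array(result, dtype=float), axis=1)  # float avoids int64 overflow
print(numpy.std(my_result))                                       # Monte Carlo estimate of the std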
Edit after comments:
Well, sampling the frequency of numbers instead gives only 6 independent variables - even 4 actually, by substituting in the constraints (sum = 120, total number = 30). My current code looks like this:
def p2(b, s):
    return 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]

hits = range(31)
subset = itertools.product(hits, repeat=4)  # only 3,4,5,6 frequencies
product = []
permutations = []
for s in subset:
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])  # 2 frequency
    a = 30 - (b + sum(s))                         # 1 frequency
    if 0 <= b <= 30 and 0 <= a <= 30:
        product.append(p2(b, s))
        permutations.append(1)  # TODO: Replace 1 with possible permutations
print(numpy.std(product))  # TODO: calculate std manually, considering permutations
This computes in about 1 second, but the confusing part is that I get as a result 1.28737023733e+17. Either my previous approaches or this one has a bug - or both.
Sorry - not that easy: the samples do not all have the same probability - that is the problem here. Each sample has a different number of possible orderings, giving its weight, which has to be considered before taking the standard deviation. I have drafted that in the code above.
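A sketch of that weighting idea (my own addition, not the answer's final code): each frequency vector (a, b, s) occurs in 30!/(a!*b!*s[0]!*s[1]!*s[2]!*s[3]!) orderings, and those counts would be used as weights when taking the standard deviation:

from math import factorial

def weight(a, b, s):
    # multinomial coefficient: number of dice orderings with these frequencies
    denom = factorial(a) * factorial(b)
    for k in s:
        denom *= factorial(k)
    return factorial(30) // denom

def weighted_std(values, weights):
    total = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / total
    var = sum(w * (v - mean)**2 for v, w in zip(values, weights)) / total
    return var ** 0.5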
First time posting here, so here it goes:
I have two sets of data (v and t), each of which has 46 values. The data is imported with the "pandas" module and converted to a numpy array in order to do the calculation.
I need to set ml_min1[45], ml_min2[45], and so on to the value 0. The problem is that each time I run the script, the values at position 45 of ml_min1 and ml_min2 are different. This is the piece of code that I have:
t1 = fil_copy.t1.as_matrix()
t2 = fil_copy.t2.as_matrix()
v1 = fil_copy.v1.as_matrix()
v2 = fil_copy.v2.as_matrix()
ml_min1 = np.empty(len(t1))
l_h1 = np.empty(len(t1))
ml_min2 = np.empty(len(t2))
l_h2 = np.empty(len(t2))
for i in range(0, (len(v1) - 1)):
    if (i != (len(v1) - 1)) and (v1[i+1] > v1[i]):
        ml_min1[i] = v1[i+1] - v1[i]
        l_h1[i] = ml_min1[i] * (60/1000)
    elif i == (len(v1)-1):
        ml_min1[i] = 0
        l_h1[i] = 0
        print(i, ml_min1[i])
    else:
        ml_min1[i] = 0
        l_h1[i] = 0
        print(i, ml_min1[i])

for i in range(0, (len(v2) - 1)):
    if (i != (len(v2) - 1)) and (v2[i+1] > v2[i]):
        ml_min2[i] = v2[i+1] - v2[i]
        l_h2[i] = ml_min2[i] * (60/1000)
    elif i == (len(v2)-1):
        ml_min2[i] = 0
        l_h2[i] = 0
        print(i, ml_min2[i])
    else:
        ml_min2[i] = 0
        l_h2[i] = 0
        print(i, ml_min2[i])
Your code as it is currently written doesn't work because the elif blocks are never hit, since range(0, x) does not include x (it stops just before getting there). The easiest way to solve this is probably just to initialize your output arrays with numpy.zeros rather than numpy.empty, since then you don't need to do anything in the elif and else blocks (you can just delete them).
That said, it's generally a design error to use loops like yours in numpy code. Instead, you should use numpy's broadcasting features to perform your mathematical operations to a whole array (or a slice of one) at once.
If I understand correctly, the following should be equivalent to what you wanted your code to do (just for one of the arrays, the other should work the same):
ml_min1 = np.zeros(len(t1)) # use zeros rather than empty, so we don't need to assign any 0s
diff = v1[1:] - v1[:-1] # find the differences between all adjacent values (using slices)
mask = diff > 0 # check which ones are positive (creates a Boolean array)
ml_min1[:-1][mask] = diff[mask] # assign with mask to a slice of the ml_min1 array
l_h1 = ml_min1 * (60/1000) # create l_h1 array with a broadcast scalar multiplication
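The same pattern applies to the second pair of arrays (a short sketch reusing the names from the question):

ml_min2 = np.zeros(len(t2))
diff2 = v2[1:] - v2[:-1]            # np.diff(v2) would work equally well here
mask2 = diff2 > 0
ml_min2[:-1][mask2] = diff2[mask2]
l_h2 = ml_min2 * (60/1000)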