My code takes about two hours to process. The bottleneck is in the nested for loop and if statements (see the comment in the code).
I'm a beginner with Python :) Can anyone recommend an efficient way to replace the nested for and if statements?
I have tables of ~30 million rows, each row with (x,y,z) values:
20.0 11.3 7
21.0 11.3 0
22.0 11.3 3
...
My desired output is a table in the form x, y, min(z), count(min(z)). The last
column is the final count of the least z values at that (x,y). E.g.:
20.0 11.3 7 7
21.0 11.3 0 10
22.0 11.3 3 1
...
There are only about 600 unique coordinates, so the output table will be 600x4.
My code:
import numpy as np

file = open('input.txt', 'r')
coordset = set()
data = np.zeros((600, 4)) * np.nan
irow = 0
ctr = 0

for row in file:
    item = row.split()
    x = float(item[0])
    y = float(item[1])
    z = float(item[2])

    # build unique grid of coords
    if (x, y) not in coordset:
        data[irow][0] = x
        data[irow][1] = y
        data[irow][2] = z
        irow = irow + 1  # grows up to 599
        # lookup table of unique coords
        coordset.add((x, y))

    # BOTTLENECK. replace ifs? for?
    for i in range(0, irow):
        if data[i][0] == x and data[i][1] == y:
            if z > data[i][2]:
                continue
            elif z == data[i][2]:
                ctr = ctr + 1
                data[i][3] = ctr
            if z < data[i][2]:
                data[i][2] = z
                ctr = 1
                data[i][3] = ctr
edit: For reference, the approach by @Joowani computes in 1m26s. My original approach, same computer, same data file, took 106m23s.
edit2: @Ophion and @Sibster, thanks for the suggestions, I don't have enough credit to +1 useful answers.
Your solution seems slow because it iterates through the list (i.e. data) every time you make an update. A better approach is to use a dictionary, which takes O(1) per update as opposed to O(n).
Here would be my solution using a dictionary:
file = open('input.txt', 'r')

# coordinates
c = {}

for line in file:
    # items
    (x, y, z) = (float(n) for n in line.split())
    if (x, y) not in c:
        c[(x, y)] = [z, 1]
    elif c[(x, y)][0] > z:
        c[(x, y)][0], c[(x, y)][1] = z, 1
    elif c[(x, y)][0] == z:
        c[(x, y)][1] += 1

for key in c:
    print("{} {} {} {}".format(key[0], key[1], c[key][0], c[key][1]))
Why not change the last if to an elif?
As it is now, z < data[i][2] will be evaluated on every iteration of the loop.
You could even just replace it with an else, since you have already checked z > data[i][2] and z == data[i][2], so the only remaining possibility is z < data[i][2].
So the following code will do the same and should be faster:
if z > data[i][2]:
    continue
elif z == data[i][2]:
    ctr = ctr + 1
    data[i][3] = ctr
else:
    data[i][2] = z
    ctr = 1
    data[i][3] = ctr
To do this in numpy use np.unique.
import numpy as np

def count_unique(arr):
    # view each row as a single void element so np.unique can compare whole rows
    row_view = np.ascontiguousarray(arr).view(np.dtype((np.void, arr.dtype.itemsize * arr.shape[1])))
    ua, uind = np.unique(row_view, return_inverse=True)
    unique_rows = ua.view(arr.dtype).reshape(ua.shape + (-1,))
    count = np.bincount(uind)
    return np.hstack((unique_rows, count[:, None]))
First let's check with a small array:
a=np.random.rand(10,3)
a=np.around(a,0)
print a
[[ 0. 0. 0.]
[ 0. 1. 1.]
[ 0. 1. 0.]
[ 1. 0. 0.]
[ 0. 1. 1.]
[ 1. 1. 0.]
[ 1. 0. 1.]
[ 1. 0. 1.]
[ 1. 0. 0.]
[ 0. 0. 0.]]
output = count_unique(a)
print output
[[ 0. 0. 0. 2.]
[ 0. 1. 0. 1.]
[ 0. 1. 1. 2.]
[ 1. 0. 0. 2.]
[ 1. 0. 1. 2.]
[ 1. 1. 0. 1.]]
print np.sum(output[:,-1])
10
Looks good! Now let's check a large array:
a = np.random.rand(int(3E7), 3)
a = np.around(a, 1)
output=count_unique(a)
print output.shape
(1331, 4) #Close as I can get to 600 unique elements.
print np.sum(output[:,-1])
30000000.0
Takes about 33 seconds on my machine and 3GB of memory; doing this all in memory for large arrays will likely be your bottleneck. For reference, @Joowani's solution took about 130 seconds, although this is a bit of an apples and oranges comparison as we start with a numpy array. Your mileage may vary.
To read the data in as a numpy array I would look at the question here, but it should look something like the following:
arr=np.genfromtxt("./input.txt", delimiter=" ")
For loading that much data from a txt file I would really recommend using the pandas example in that link.
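Not part of any answer above, just a rough sketch of what the pandas route could look like end to end. The column names and the whitespace separator are my assumptions about input.txt, and the named-aggregation syntax needs a reasonably recent pandas:

import pandas as pd

# rough sketch: assumed whitespace-separated x y z columns in input.txt
df = pd.read_csv('input.txt', sep=r'\s+', names=['x', 'y', 'z'])

# minimum z per (x, y), broadcast back onto every row
zmin = df.groupby(['x', 'y'])['z'].transform('min')

# keep only the rows that attain the minimum, then count them per (x, y)
out = (df[df['z'] == zmin]
       .groupby(['x', 'y'])
       .agg(min_z=('z', 'first'), count=('z', 'size'))
       .reset_index())
print(out)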
Related
I am having trouble with Numbers[(Numbers<=0).argmax():] = 0. It is supposed to turn all the elements from the first match onward into zeroes if the condition is met; however, if the condition is not met, it turns all the array elements into zeroes. How can I fix this issue? If the Numbers<=0 condition is not met, the array should not change.
Array with a value satisfying the condition at -35.15610151:
Numbers = np.array([123.6, 123.6 , 123.6, 110.3748, 111.6992976,
102.3165566, 97.81462811 , 89.50038472 , 96.48141473 , 90.49956702,
88.59907611 , 77.96718698, 61.51611052, 56.84088612, 55.36302309,
54.69866681, 56.44902415 , 59.49727145, 42.12406819, 27.42276839,
33.86711896, 32.10602877, -35.15610151, 32.34361339 , 29.20628289])
Numbers[(Numbers<=0).argmax():] = 0
Output:
[123.6 123.6 123.6 110.3748 111.6992976
102.3165566 97.81462811 89.50038472 96.48141473 90.49956702
88.59907611 77.96718698 61.51611052 56.84088612 55.36302309
54.69866681 56.44902415 59.49727145 42.12406819 27.42276839
33.86711896 32.10602877 0. 0. 0. ]
Array with no value satisfying the condition (-35.15610151 changed to +35.15610151):
Numbers = np.array([123.6, 123.6 , 123.6, 110.3748, 111.6992976,
102.3165566, 97.81462811 , 89.50038472 , 96.48141473 , 90.49956702,
88.59907611 , 77.96718698, 61.51611052, 56.84088612, 55.36302309,
54.69866681, 56.44902415 , 59.49727145, 42.12406819, 27.42276839,
33.86711896, 32.10602877, 35.15610151, 32.34361339 , 29.20628289])
Numbers[(Numbers<=0).argmax():] = 0
Output:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0.]
Try these two methods: one is in-place assignment to the NumPy view and the other creates a fresh array to be assigned to another variable.
#Method 1 (Inplace assignment)
Numbers[(Numbers<=0).cumsum(dtype=bool)] = 0
Or,
#Method 2 (Not inplace)
np.where(~(Numbers<=0).cumsum(dtype=bool), Numbers, 0)
Or,
#As an excellent suggestion by Mad Physicist!
Numbers[np.logical_or.accumulate(Numbers <= 0)] = 0
Explanation -
The bool array returned, e.g. [F, F, F, T, F, F, F], can be seen as an array of 1s and 0s. Doing a cumsum ends up propagating the first T to the subsequent elements.
This therefore turns the array into [F, F, F, T, T, T, T], which can now be used either with plain boolean indexing to set the view to 0, or with np.where to fetch the original elements or 0 after reversing the boolean mask with ~.
The advantage here is that if your mask is composed entirely of False, meaning no element meets the condition, the original Numbers is left as is instead of being set to 0.
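A quick illustration of that propagation, with made-up values:

import numpy as np

Numbers = np.array([5., 3., -2., 4., 1.])
mask = Numbers <= 0
print(mask)                     # [False False  True False False]
print(mask.cumsum(dtype=bool))  # [False False  True  True  True]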
Running tests -
With a value that meets the condition
Numbers = np.array([123.6 , 123.6 , -123.6, 110.3748 , 111.6992976, 102.3165566, 97.81462811])
Numbers[(Numbers<=0).cumsum(dtype=bool)] = 0
#array([123.6, 123.6, 0. , 0. , 0. , 0. , 0. ])
With no values meeting the condition
Numbers = np.array([123.6 , 123.6 , 123.6, 110.3748 , 111.6992976, 102.3165566, 97.81462811])
Numbers[(Numbers<=0).cumsum(dtype=bool)] = 0
#array([123.6 , 123.6 , 123.6 , 110.3748 ,111.6992976 , 102.3165566 , 97.81462811])
EDIT: New scenario as requested
Numbers1 = np.array([1.1, 2.2, 3.3, 4.4, 5.5])
Numbers2 = np.array([1,2,-3,4,5])
Numbers2 = np.where(~(Numbers2<=0).cumsum().astype(bool), Numbers1, 0)
Numbers2
array([1.1, 2.2, 0. , 0. , 0. ])
Just use an if for this; it represents the intention quite well and is easy to understand:
smaller_equal_zero = Numbers <= 0
if smaller_equal_zero.any():
    Numbers[smaller_equal_zero.argmax():] = 0
With Python 3.8+ you can use an assignment expression in the if:
if (smaller_equal_zero := Numbers <= 0).any():
    Numbers[smaller_equal_zero.argmax():] = 0
I have an array of arrays that represents a set of unique colour values:
[[0. 0. 0. ]
[0. 0. 1. ]
[0. 1. 1. ]
[0.5019608 0.5019608 0.5019608 ]
[0.64705884 0.16470589 0.16470589]
[0.9607843 0.9607843 0.8627451 ]
[1. 0. 0. ]
[1. 0.84313726 0. ]
[1. 1. 0. ]
[1. 1. 1. ]]
And another numpy array that represents one of the colours:
[0.9607843 0.9607843 0.8627451 ]
I need a function to find the index where the colour array occurs in the set of colours, i.e. the function should return 5 for the arrays above.
numpy.where() returns the exact positions in the array of the values that satisfy a given condition. So here, it would be as follows (denoting the big array as arr1 and the sought vector as arr2):
np.where(np.all(arr1 == arr2, axis=1))
This then returns an array of row indexes of the sought rows.
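For instance, with the arrays from the question (variable names assumed), the row index can be pulled out of the result like this:

import numpy as np

arr1 = np.array([[0.        , 0.        , 0.        ],
                 [0.9607843 , 0.9607843 , 0.8627451 ],
                 [1.        , 1.        , 1.        ]])
arr2 = np.array([0.9607843, 0.9607843, 0.8627451])

rows = np.where(np.all(arr1 == arr2, axis=1))[0]
print(rows)      # [1]
index = rows[0]  # the first (and here only) matching row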
Assuming that this is a relatively short list of colors (<1000), the simplest thing to do is probably just iterate over the list and compare each element of the sub-array.
color_list = ...
color_index = -1
target_color = [0.9607843, 0.9607843, 0.8627451]
for i in range(0, len(color_list)):
    cur_color = color_list[i]
    if (cur_color[0] == target_color[0] and cur_color[1] == target_color[1] and cur_color[2] == target_color[2]):
        color_index = i
        break
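One caveat from me, not part of the answer above: exact float equality can be fragile if the colours have been through any arithmetic. A tolerance-based sketch, assuming color_list is (convertible to) an Nx3 numpy array:

import numpy as np

# index of the first row of color_list matching target_color within a small tolerance
matches = np.where(np.all(np.isclose(color_list, target_color), axis=1))[0]
color_index = matches[0] if matches.size else -1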
In the following code, I am attempting to calculate both the frequency and sum of a set of vectors (numpy vectors):
def calculate_means_on(the_labels, the_data):
    freq = dict()
    sums = dict()
    means = dict()
    total = 0
    for index, a_label in enumerate(the_labels):
        this_data = the_data[index]
        if a_label not in freq:
            freq[a_label] = 1
            sums[a_label] = this_data
        else:
            freq[a_label] += 1
            sums[a_label] += this_data
Suppose the_data (a numpy 'matrix') is originally :
[[ 1. 2. 4.]
[ 1. 2. 4.]
[ 2. 1. 1.]
[ 2. 1. 1.]
[ 1. 1. 1.]]
After running the above code, the_data becomes:
[[ 3. 6. 12.]
[ 1. 2. 4.]
[ 7. 4. 4.]
[ 2. 1. 1.]
[ 1. 1. 1.]]
Why is this? I've narrowed it down to the line sums[a_label] += this_data, since when I change it to sums[a_label] = sums[a_label] + this_data it behaves as expected; i.e., the_data is not modified.
This line:
this_data = the_data[index]
takes a view, not a copy, of a row of the_data. The view is backed by the original array, and mutating the view will write through to the original array.
This line:
sums[a_label] = this_data
inserts that view into the sums dict, and this line:
sums[a_label] += this_data
mutates the original array through the view, since += requests that the operation be performed by mutation instead of by creating a new object, when the object is mutable.
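A minimal sketch of the difference (my example values, not the answerer's code): copying the row first keeps the original array untouched.

import numpy as np

the_data = np.array([[1., 2., 4.],
                     [1., 2., 4.],
                     [2., 1., 1.]])

row_view = the_data[0]         # a view: writes go through to the_data
row_copy = the_data[0].copy()  # an independent copy

row_copy += the_data[1]        # the_data is untouched
row_view += the_data[1]        # the_data[0] is now [2. 4. 8.]
print(the_data)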
I have many files .txt of this type:
name1.fits 0 0 4088.9 0. 1. 0. -0.909983 0.01386 0.91 0.01386 -0.286976 0.00379 2.979 0.03971 0. 0.
name2.fits 0 0 4088.9 0. 1. 0. -0.84702 0.01239 0.847 0.01239 -0.250671 0.00261 3.174 0.04749 0. 0.
#name3.fits 0 0 4088.9 0. 1. 0. -0.494718 0.01168 0.4947 0.01168 -0.185677 0.0042 2.503 0.04365 0. 0.
#name4.fits 0 1 4088.9 0. 1. 0. -0.751382 0.01342 0.7514 0.01342 -0.202141 0.00267 3.492 0.07224 0. 0.
name4.fits 0 1 4088.961 0.01147 1.000169 0. -0.813628 0.01035 0.8135 0.01035 -0.217434 0.00196 3.515 0.04045 0. 0.
I want to divide the values of one of these columns by the values of a column from another file of the same type. Here is what I have so far:
with open('4026.txt','r') as out1, open('4089.txt', 'r') as out2, \
        open('4116.txt', 'r') as out3, open('4121.txt', 'r') as out4, \
        open('4542.txt', 'r') as out5, open('4553.txt', 'r') as out6:
    for data1 in out1.readlines():
        col1 = data1.strip().split()
        x = col1[9]
    for data2 in out2.readlines():
        col2 = data2.strip().split()
        y = col2[9]
        f = float(y) / float(x)
        print f
However I'm getting the same values for x. For example, if the first set of data is 4089.txt, and the second (4026.txt) is:
name1.fits 0 0 4026.2 0. 1. 0. -0.617924 0.01749 0.6179 0.01749 -0.19384 0.00383 2.995 0.09205 0. 0.
name2.fits 0 0 4026.2 0. 1. 0. -0.644496 0.01218 0.6445 0.01218 -0.183373 0.00291 3.302 0.05261 0. 0.
#name3.fits 0 0 4026.2 0. 1. 0. -0.507311 0.01557 0.5073 0.01557 -0.176148 0.00472 2.706 0.07341 0. 0.
#name4.fits 0 1 4026.2 0. 1. 0. -0.523856 0.01086 0.5239 0.01086 -0.173477 0.00279 2.837 0.05016 0. 0.
name4.fits 0 1 4026.229 0.0144 1.014936 0. -0.619708 0.00868 0.6106 0.00855 -0.185527 0.00189 3.138 0.04441 0. 0.
and I want to divide the 9th column of each file. Taking only the first elements of each column, I should get 0.91/0.6179 = 1.47, but I obtain 0.958241758242.
What's happening is that your code captures only the last value of x from the first for loop and divides by that. You should pair up the values and perform the division for each row to get the correct results.
An easier approach is to place all the values in two lists,
e.g.
x = [0.0149, 0.01218, ...etc] and y = [...]
Then you divide the two lists using numpy (a sketch follows after the sample code) or a for-loop against the lists. Remember that they both need to be the same size for this to work.
Sample code:

with open('4026.txt','r') as out1, open('4089.txt', 'r') as out2, open('4116.txt', 'r') as out3, open('4121.txt', 'r') as out4, open('4542.txt', 'r') as out5, open('4553.txt', 'r') as out6:
    # Build two lists of floats
    x = []
    y = []
    for data1 in out1.readlines():
        col1 = data1.strip().split()
        x.append(float(col1[9]))
    for data2 in out2.readlines():
        col2 = data2.strip().split()
        y.append(float(col2[9]))
    for i in range(0, len(x)):
        # Make sure the denominator is not zero
        if x[i] != 0:
            print y[i] / x[i]
        else:
            print "Not possible"
You could do it like this:
with open('4026.txt','r') as out1, open('4089.txt', 'r') as out2:
    x_col9 = [data1.strip().split()[9] for data1 in out1.readlines()]
    y_col9 = [data2.strip().split()[9] for data2 in out2.readlines()]
    if len(x_col9) != len(y_col9):
        print('Error: files do not have same number of rows')
    else:
        f = [(float(y) / float(x)) for x, y in zip(x_col9, y_col9)]
        print(f)
It may be better to process the files as shown below because it doesn't require reading the entire contents of all of them into memory first, and instead processes each one a line at a time:
    x_col9 = [data1.strip().split()[9] for data1 in out1]
    y_col9 = [data2.strip().split()[9] for data2 in out2]
I have a numpy array which has only a few non-zero entries which can be either positive or negative. E.g. something like this:
myArray = np.array([[ 0. , 0. , 0. ],
[ 0.32, -6.79, 0. ],
[ 0. , 0. , 0. ],
[ 0. , 1.5 , 0. ],
[ 0. , 0. , -1.71]])
In the end, I would like to receive a list where each entry of this list corresponds to a row of myArray and is a cumulative product of function outputs which depend on the entries of the respective row of myArray and another list (in the example below it is called l).
The individual terms depend on the sign of the myArray entry: When it is positive, I apply "funPos", when it is negative, I apply "funNeg" and if the entry is 0, the term will be 1. So in the example array from above it would be:
output = [1*1*1 ,
funPos(0.32, l[0])*funNeg(-6.79,l[1])*1,
1*1*1,
1*funPos(1.5, l[1])*1,
1*1*funNeg(-1.71, l[2])]
I implemented this as shown below and it gives me the desired output (note: this is just a highly simplified toy example; the actual matrices are far bigger and the functions more complicated). I go through each row of the array; if the sum of the row is 0, I don't have to do any calculations and the output is just 1. If it is not equal to 0, I go through the row, check the sign of each value, and apply the appropriate function.
import numpy as np

def doCalcOnArray(Array1, myList):
    output = np.ones(Array1.shape[0])  # initialize output
    for indRow, row in enumerate(Array1):
        if sum(row) != 0:  # only then calculations are needed
            tempProd = 1.  # initialize the product that corresponds to the row
            for indCol, valCol in enumerate(row):
                if valCol > 0:
                    tempVal = funPos(valCol, myList[indCol])
                elif valCol < 0:
                    tempVal = funNeg(valCol, myList[indCol])
                elif valCol == 0:
                    tempVal = 1
                tempProd = tempProd * tempVal
            output[indRow] = tempProd
    return output

def funPos(val1, val2):
    return val1 * val2

def funNeg(val1, val2):
    return val1 * (val2 + 1)

myArray = np.array([[ 0.  ,  0.  ,  0.  ],
                    [ 0.32, -6.79,  0.  ],
                    [ 0.  ,  0.  ,  0.  ],
                    [ 0.  ,  1.5 ,  0.  ],
                    [ 0.  ,  0.  , -1.71]])

l = [1.1, 2., 3.4]
op = doCalcOnArray(myArray, l)
print op
The output is
[ 1. -7.17024 1. 3. -7.524 ]
which is the desired one.
My question is whether there is a more efficient way of doing this, since it is quite "expensive" for large arrays.
EDIT:
I accepted gabhijit's answer because the pure numpy solution he came up with seems to be the fastest for the arrays I am dealing with. Please note that there is also a nice working solution from RaJa that requires pandas, and the solution from dave also works fine and can serve as a nice example of how to use generators and numpy's apply_along_axis.
Here's what I have tried, using reduce and map. I am not sure how fast this is, but is this what you are trying to do?
Edit 4: Simplest and most readable - make l a numpy array, which greatly simplifies the np.where calls below.
import numpy as np
import time

l = np.array([1.0, 2.0, 3.0])

def posFunc(x, y):
    return x * y

def negFunc(x, y):
    return x * (y + 1)

def myFunc(x, y):
    if x > 0:
        return posFunc(x, y)
    if x < 0:
        return negFunc(x, y)
    else:
        return 1.0

myArray = np.array([
    [ 0.  ,  0.  ,  0.  ],
    [ 0.32, -6.79,  0.  ],
    [ 0.  ,  0.  ,  0.  ],
    [ 0.  ,  1.5 ,  0.  ],
    [ 0.  ,  0.  , -1.71]])

t1 = time.time()
a = np.array([reduce(lambda x, (y,z): x*myFunc(z,l[y]), enumerate(x), 1) for x in myArray])
t2 = time.time()
print (t2-t1)*1000000
print a
Basically, let's just look at the last line: it says cumulatively multiply things in enumerate(x), starting with 1 (the last parameter to reduce). myFunc simply takes the element of myArray (the row value) and the element at the matching column index in l, and multiplies them as needed.
My output is not the same as yours, so I am not sure whether this is exactly what you want, but maybe you can follow the logic.
Also I am not so sure how fast this will be for huge arrays.
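(An aside from me, not part of the original answer: the lambda above relies on Python 2 tuple unpacking, and in Python 3 reduce has to be imported from functools. A rough Python 3 equivalent, reusing myFunc, l and myArray from the block above, could look like this.)

from functools import reduce
import numpy as np

# same cumulative multiply per row, without tuple unpacking in the lambda
a = np.array([
    reduce(lambda acc, iz: acc * myFunc(iz[1], l[iz[0]]), enumerate(row), 1.0)
    for row in myArray
])
print(a)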
edit: Following is a 'pure numpy way' to do this.
my = myArray # just for brevity
t1 = time.time()
# First set the positive and negative values
# complicated - [my.itemset((x,y), posFunc(my.item(x,y), l[y])) for (x,y) in zip(*np.where(my > 0))]
# changed to
my = np.where(my > 0, my*l, my)
# complicated - [my.itemset((x,y), negFunc(my.item(x,y), l[y])) for (x,y) in zip(*np.where(my < 0))]
# changed to
my = np.where(my < 0, my*(l+1), my)
# print my - commented out to time it.
# Now set the zeroes to 1.0s
my = np.where(my == 0.0, 1.0, my)
# print my - commented out to time it
a = np.prod(my, axis=1)
t2 = time.time()
print (t2-t1)*1000000
print a
Let me try to explain the zip(*np.where(my != 0)) part as best as I can. np.where simply returns two numpy arrays: the first is the row indexes and the second is the column indexes of the elements that match the condition (my != 0 in this case). We take tuples of those indices and then use array.itemset and array.item; thankfully, the column index is available to us for free, so we can just take the element at that index in the list l. This should be faster than the previous version (and orders of magnitude more readable!). Need to timeit to find out whether it indeed is.
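A tiny illustration of that np.where behaviour, with made-up values:

import numpy as np

my = np.array([[0.  ,  0.  , 0.],
               [0.32, -6.79, 0.]])
rows, cols = np.where(my != 0)
print(rows)  # [1 1]  -> row indexes of the non-zero entries
print(cols)  # [0 1]  -> column indexes of the non-zero entries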
Edit 2: You don't have to call np.where separately for positive and negative values; it can be done with one call to np.where(my != 0).
So, let's see if I understand your question.
You want to map elements of your matrix to a new matrix such that:
0 maps to 1
x>0 maps to funPos(x)
x<0 maps to funNeg(x)
You want to calculate the product of all elements in each row of this new matrix.
So, here's how I would go about doing it:
1:
def myFun(a):
    if a == 0:
        return 1
    if a > 0:
        return funPos(a)
    if a < 0:
        return funNeg(a)

newFun = np.vectorize(myFun)
newArray = newFun(myArray)
And for 2:
np.prod(newArray, axis = 1)
Edit: To pass the index to funPos, funNeg, you can probably do something like this:
# Python 2.7
r, c = myArray.shape
ctr = -1  # I don't understand why this should be -1 instead of 0

def myFun(a):
    global ctr
    global c
    ind = ctr % c
    ctr += 1
    if a == 0:
        return 1
    if a > 0:
        return funPos(a, l[ind])
    if a < 0:
        return funNeg(a, l[ind])
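(A hedged alternative from me, not part of the answer: np.vectorize also broadcasts over a second argument, so the matching entry of l can be passed in per column instead of tracking a global counter.)

import numpy as np

def myFun2(a, li):
    # li is the entry of l for this column
    if a == 0:
        return 1.0
    if a > 0:
        return funPos(a, li)
    return funNeg(a, li)

newFun2 = np.vectorize(myFun2)
# l (length 3) broadcasts across the columns of myArray (shape 5x3)
newArray = newFun2(myArray, np.asarray(l))
result = np.prod(newArray, axis=1)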
I think this numpy function would be helpful to you
numpy.apply_along_axis
Here is one implementation. Also, I would warn against checking whether the sum of the array is 0. Comparing floats to 0 can give unexpected behavior due to machine precision constraints. Also, if you have -5 and 5, the sum is zero and I'm not sure that's what you want. I used numpy's any() function to see if anything was nonzero. For simplicity I also pulled your list (my_list) into global scope.
import numpy as np

my_list = 1.1, 2., 3.4

def func_pos(val1, val2):
    return val1 * val2

def func_neg(val1, val2):
    return val1 * (val2 + 1)

def my_generator(row):
    for i, a in enumerate(row):
        if a > 0:
            yield func_pos(a, my_list[i])
        elif a < 0:
            yield func_neg(a, my_list[i])
        else:
            yield 1

def reduce_row(row):
    if not row.any():
        return 1.0
    else:
        return np.prod(np.fromiter(my_generator(row), dtype=float))

def main():
    myArray = np.array([
        [ 0.  ,  0.  ,  0.  ],
        [ 0.32, -6.79,  0.  ],
        [ 0.  ,  0.  ,  0.  ],
        [ 0.  ,  1.5 ,  0.  ],
        [ 0.  ,  0.  , -1.71]])
    return np.apply_along_axis(reduce_row, axis=1, arr=myArray)
There are probably faster implementations; I think apply_along_axis is really just a loop under the covers.
I didn't test, but I bet this is faster than what you started with, and it should be more memory efficient.
I've tried your example with the masking function of numpy arrays. However, I couldn't find a solution that replaces the values in your array with funPos or funNeg.
So my suggestion would be to try this using pandas instead, as it preserves indices while masking.
See my example:
import numpy as np
import pandas as pd
def funPos(a, b):
return a * b
def funNeg(a, b):
return a * (b + 1)
myPosFunc = np.vectorize(funPos) #vectorized form of funPos
myNegFunc = np.vectorize(funNeg) #vectorized form of funNeg
#Input
I = [1.0, 2.0, 3.0]
x = pd.DataFrame([
[ 0.,0.,0.],
[ 0.32, -6.79, 0.],
[ 0.,0.,0.],
[ 0.,1.5,0.],
[ 0.,0., -1.71]])
b = pd.DataFrame(myPosFunc(x[x>0], I)) #calculate all positive values
c = pd.DataFrame(myNegFunc(x[x<0], I)) #calculate all negative values
b = b.combineMult(c) #put values of c in b
b = b.fillna(1) #replace all missing values that were '0' in the raw array
y = b.product() #multiply all elements in one row
#Output
print ('final result')
print (y)
print (y.tolist())