I have an array of identifiers that have been grouped into threes. For each group, I would like to randomly assign them to one of three sets and to have those assignments stored in another array. So, for a given array of grouped identifiers (I presort them):
groupings = array([1,1,1,2,2,2,3,3,3])
A possible output would be
assignments = array([0,1,2,1,0,2,2,0,1])
Ultimately, I would like to be able to generate many of these assignment lists and to do so efficiently. My current method is just to create a zeros array and set each consecutive subarray of length 3 to a random permutation of 3.
import numpy

assignment = numpy.zeros((12, 10), dtype=int)
for i in range(0, 12, 3):
    for j in range(10):
        assignment[i:i+3, j] = numpy.random.permutation(3)
Is there a better/faster way?
Two things I can think of:
Instead of visiting the 2D array 3 rows * 1 column at a time in your inner loop, try to visit it 1 row * 3 columns at a time. Accessing a 2D array horizontally first is usually faster than vertically first, since it gives you better spatial locality, which is good for caching.
Instead of running numpy.random.permutation(3) each time, since 3 is fixed and small, generate the permutations beforehand and save them in a constant array of arrays like (array([0,1,2]), array([0,2,1]), array([1,0,2])...). Then you just need to randomly pick one array from it each time; see the sketch below.
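A minimal sketch combining both ideas (the 12 x 10 shape and group size of 3 come from the question; laying the lists out as columns is an assumption):

import numpy as np
from itertools import permutations

# Precompute the 6 possible orderings of (0, 1, 2) once.
perms = np.array(list(permutations(range(3))))

n_groups, n_lists = 4, 10   # 4 groups of 3 rows -> 12 rows, 10 assignment lists

# Draw one random permutation index per (group, list) pair in a single call,
# then look the permutations up and lay them out as a (12, 10) array.
choices = np.random.randint(len(perms), size=(n_groups, n_lists))
assignment = perms[choices].transpose(0, 2, 1).reshape(n_groups * 3, n_lists)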
I have a numpy array total_weights which is an IxI array of floats. Each row/column corresponds to one of I items.
During my main loop I acquire another real float array weights of size NxM (N, M < I) where each row/column also corresponds to one of the original I items (duplicates may also exist).
I want to add this array to total_weights. However, the sizes and order of the two arrays are not aligned. Therefore, I maintain a position map called pos_df, a pandas Series indexed by item ID that gives each item's proper index/position in total_weights.
In order to make the addition I want, I perform the following operation inside the loop:
candidate_pos = pos_df.loc[candidate_IDs] # don't worry about how I get these
rated_pos = pos_df.loc[rated_IDs] # ^^
total_weights[candidate_pos, :][:, rated_pos] += weights
Unfortunately, the above operation must be editing a copy of the original total_weights matrix and not a view of it, since after the loop the total_weights array is still full of zeroes. How do I make it change the original data?
Edit:
I want to clarify that candidate_IDs are the N IDs of items and rated_IDs are the M IDs of items in the NxM array called weights. Through pos_df I can get their positions among all I items.
Also, my guess as to the reason a copy is returned is that candidate_IDs and thus candidate_pos will probably contain duplicates e.g. [0, 1, 3, 1, ...]. So the same rows will sometimes have to be pulled into the new array/view.
Your first problem is in how you are using indexing. As candidate_pos is an array, total_weights[candidate_pos, :] is a fancy indexing operation that returns a new array. When you apply indexing again, i.e. ...[:, rated_pos], you are assigning elements to the newly created array rather than to total_weights.
The second problem, as you have already spotted, is in the actual logic you are trying to apply. If I understand your example correctly, you have an I x I matrix of weights, and you want to update the weights for a sequence of pairs ((Ix_1, Iy_1), ..., (Ix_N, Iy_N)), with repetitions, in a single line of code. This can't be done this way with the += operator, because you'll find yourself having added to weights[Ix_n, Iy_n] only the weight corresponding to the last time (Ix_n, Iy_n) appears in your sequence: you have to first merge all the repeating elements in your sequence of weight updates, and then perform the update of your weights matrix with the new "unique" sequence of updates. Alternatively, you can collect your updates as an I x I matrix and sum it directly onto total_weights.
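One possible sketch, assuming candidate_pos and rated_pos are integer index arrays as above: np.add.at performs an unbuffered in-place addition, so repeated indices each contribute their own update instead of overwriting one another:

import numpy as np

rows = np.asarray(candidate_pos)
cols = np.asarray(rated_pos)
# np.ix_ builds the (N, 1) x (1, M) index mesh; np.add.at accumulates duplicates.
np.add.at(total_weights, np.ix_(rows, cols), weights)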
After @rveronese pointed out that it's impossible to do it in one go because of the duplicates in candidate_pos, I believe I have managed to do what I want with a for-loop over them:
candidate_pos = pos_df.loc[candidate_IDs] # don't worry about how I get these
rated_pos = pos_df.loc[rated_IDs] # ^^
for i, c in enumerate(candidate_pos):
    total_weights[c, rated_pos] += weights[i, :]
In this case, the indexing does not create a copy and the assignment should be working as expected...
I am just not getting this. I want to create a simple 2D array. I do not know the size in advance, but it will be N rows of 3 columns. I have tried several things:
data_list[v_row][v_trade_date, v_buy_text, v_ticker]
data_list.append[v_trade_date, v_buy_text, v_ticker]
data_list[v_row].append(v_trade_date)
data_list[v_row].append(v_sell_text)
data_list[v_row].append(v_ticker)
Just not getting it. I do not need to use any FOR loop to assign the values, as each time through the outer loop, I'll be getting 3 items, and I'll assign them explicitly into different cells of that row. Maybe my thinking is wrong, and I'm thinking of 2D arrays from other languages, but any help would be great.
For appending a row:
data_list.append([v_trade_date, v_buy_text, v_ticker])
And for assigning:
data_list[v_row] = [v_trade_date, v_buy_text, v_ticker]
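For instance, building the rows in a loop and converting to a NumPy array at the end might look like this (the records source is illustrative, and the final conversion is only needed if you actually want a NumPy array rather than a list of lists):

import numpy as np

data_list = []
for record in records:   # hypothetical source yielding each row's 3 items
    v_trade_date, v_buy_text, v_ticker = record
    data_list.append([v_trade_date, v_buy_text, v_ticker])

data_array = np.array(data_list)   # shape (N, 3)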
I'm working with a 2D numpy array A, performing a comparison of a one dimensional array, X, against each row in A. As approximate matches are found, I'm keeping track of their indices in A in a dtype=bool array S. I'd like to use S to shrink the field of match candidates in A to improve efficiency. Here's the basic idea in code:
def compare(nxt):
    S[nxt] = 0  # sets boolean
    T = A[nxt, i:] == A[S, :-i]  # T has different dimensions than A
compare() is called repeatedly and S is progressively populated with False values.
The problem is that the boolean array T has the same dimensions as the pared-down version of A, not the original version. I'm hoping to use T to get the indices (in the unsliced A) of the approximate matches for later use.
np.argwhere(T)
This returns a list of indices of the matches, but again in the slice of A.
It seems like there has to be a better way to crop A for more efficient searching while still being able to get the correct index of the matching row.
Any thoughts?
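One possible sketch (reusing the question's A, S, i and nxt): keep the original row numbers of the still-active candidates around, then translate the slice-relative indices back through them:

import numpy as np

active = np.flatnonzero(S)        # original row indices of the remaining candidates
T = A[nxt, i:] == A[S, :-i]       # comparison against the cropped candidate set
hits = np.argwhere(T)             # indices relative to the sliced A[S]
hits[:, 0] = active[hits[:, 0]]   # map the row coordinate back into the full A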
I am trying to do time series data analysis on all the fracking wells in Pennsylvania, and naturally a lot of these are dry wells with 0 production. I want to create a histogram of each array inside the list without the zeros in it, so the total length of each array will shrink a little bit.
P = [data3P, data4P, data5P, data6P, data7P, data8P, data9P, data10P]
for i in P:
    N = []
    for i in data3P:
        if i > 0:
            N.append(i)
    N
I think I should do it in a for loop, but just not sure how to do that for all the arrays in the list. Shall I use a double for loop?
If you are dealing with large amounts of data, numpy is your friend. You can create a masked array (where the zeros are masked), and apply the regular histogram function, see this answer for an example.
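A minimal sketch of that masked-array idea (assuming data3P is a NumPy array and using matplotlib for the plot):

import numpy as np
import matplotlib.pyplot as plt

masked = np.ma.masked_equal(data3P, 0)   # hide the zero-production entries
plt.hist(masked.compressed(), bins=50)   # .compressed() keeps only the unmasked values
plt.show()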
I'm not 100% sure if this is what you need, but if you want to gather all the NumPy arrays datanP but without any zeros they might contain, you can do this:
[a[a!=0] for a in P]
It would help if you showed what one of those input arrays looks like, and what you'd like to get out of the processing you're trying to do.
I have two large arrays of type numpy.core.memmap.memmap, called data and new_data, with > 7 million float32 items.
I need to iterate over them both within the same loop, which I'm currently doing like this:
for i in range(0, len(data)):
    if new_data[i] == 0: continue
    combo = (data[i], new_data[i])
    if combo not in new_values_map: new_values_map[combo] = available_values.pop()
    data[i] = new_values_map[combo]
However, this is unreasonably slow, so I gather that using numpy's vectorising functions is the way to go.
Is it possible to vectorize with the index, so that the vectorised array can compare its items to the corresponding items in the other array?
I thought of zipping the two arrays but I guess this would cause unreasonable overhead to prepare?
Is there some other way to optimise this operation?
For context: the goal is to effectively merge the two arrays such that each unique combination of corresponding values between the two arrays is represented by a different value in the resulting array, except zeros in the new_data array which are ignored. The arrays represent 3D bitmap images.
EDIT: available_values is a set of values that have not yet been used in data and persists across calls to this loop. new_values_map on the other hand is reset to an empty dictionary before each time this loop is used.
EDIT2: the data array only contains whole numbers; that is, it's initialised as zeros, and then with each usage of this loop with a different new_data it is populated with more values drawn from available_values, which is initially a range of integers. new_data could theoretically be anything.
In answer to your question about vectorising, the answer is probably yes, though you need to clarify what available_values contains and how it's used, as that is the core of the vectorisation.
Your solution will probably look something like this...
indices = new_data != 0
data[indices] = available_values
In this case, if available_values can be considered as a set of values in which we allocate the first value to the first value in data in which new_data is not 0, that should work, as long as available_values is a numpy array.
Let's say new_data and data take values 0-255, then you can construct an available_values array with unique entries for every possible pair of values in new_data and data like the following:
available_data = numpy.arange(256 * 256).reshape((256, 256))
indices = new_data != 0
# the arrays hold float32 values, so cast to int before using them as indices
data[indices] = available_data[data[indices].astype(int), new_data[indices].astype(int)]
Obviously, available_data can be whatever mapping you want. The above should be very quick whatever is in available_data (especially if you only construct available_data once).
Python gives you powerful tools for handling large arrays of data: generators and iterators.
Basically, they allow you to access your data as if it were a regular list, without fetching it into memory all at once, but instead accessing it piece by piece.
To access two large arrays at once, you can do:
from itertools import izip  # on Python 3, the built-in zip is already lazy

for item_a, item_b in izip(data, new_data):
    # ... do your stuff here
izip creates an iterator that walks over the elements of both arrays in lockstep, but it only picks up pieces as you need them, not all at once.
It seems that replacing the first two lines of the loop to produce:
for i in numpy.where(new_data != 0)[0]:
    combo = (data[i], new_data[i])
    if combo not in new_values_map: new_values_map[combo] = available_values.pop()
    data[i] = new_values_map[combo]
has the desired effect.
So most of the time in the loop was spent skipping iterations upon encountering a zero in new_data. I don't really understand why that many null iterations were so expensive; maybe one day I will...
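For what it's worth, a fully vectorised sketch of the same merge (it assumes data and new_data hold whole numbers, as described above, and that available_values is a collection of fresh labels with a pop() method):

import numpy as np

mask = new_data != 0
pairs = np.stack((data[mask], new_data[mask]), axis=1)

# One row per distinct (data, new_data) pair, plus an index back into pairs.
uniq, inverse = np.unique(pairs, axis=0, return_inverse=True)
inverse = inverse.ravel()  # flatten, in case of shape differences across NumPy versions

# Draw one fresh value per unique pair, then scatter them back in one shot.
fresh = np.array([available_values.pop() for _ in range(len(uniq))], dtype=data.dtype)
data[mask] = fresh[inverse]

# Rebuild the combo -> value map if it is needed later on.
new_values_map = {tuple(p): v for p, v in zip(uniq, fresh)}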