I am pretty new to Python and have some problems with randomness.
I am looking for something similar to RandomChoice in Mathematica.
I create a matrix of dimension, let's say, 10x3 with random numbers greater than 0. Let us call the total sum of each row s_i for i=0,...,9.
Later I want to choose, for every row, 2 out of 3 elements (without repetition) with weighted probability s_ij/s_i (the element's value divided by its row sum).
So I need something like this, but with weighted probabilities:
import random
import numpy as np

n = 10
aa = np.random.uniform(1000, 2500, (n, 3))
print(aa)
help = [0, 1, 2]
dd = np.zeros((n, 2))
for i in range(n):
    cc = random.sample(help, 2)
    dd[i, 0] = aa[i, cc[0]]
    dd[i, 1] = aa[i, cc[1]]
print(dd)
Here, speed is also an important factor, since I will use this in a Monte Carlo approach (that's the reason I switched from Mathematica to Python), and I guess the above code can be improved heavily.
Thanks in advance for any tips/help.
EDIT: I now have the following, which works but does not look like good code to me:
import numpy as np

# pre-defined lists
nn = 3
aa = np.random.uniform(1000, 2500, (nn, 3))
help1 = [0, 1, 2]
help2 = aa.sum(axis=1)

# now I create a weighted prob list and fill it
help3 = np.zeros((nn, 3))
for i in range(nn):
    help3[i, 0] = aa[i, 0] / help2[i]
    help3[i, 1] = aa[i, 1] / help2[i]
    help3[i, 2] = aa[i, 2] / help2[i]

# every timestep when I have to choose 2 out of 3
help5 = np.zeros((nn, 2))
for i in range(nn):
    # cc = random.sample(help1, 2)
    help4 = np.random.choice(help1, 2, replace=False, p=[help3[i, 0], help3[i, 1], help3[i, 2]])
    help5[i, 0] = aa[i, help4[0]]
    help5[i, 1] = aa[i, help4[1]]
print(help5)
As pointed out in the comments, np.random.choice accepts a vector of probabilities (its p parameter), so you can simply use that in a loop:
import numpy as np
# Make input data
np.random.seed(0)
n = 10
aa = np.random.uniform(1000, 2500, (n, 3))
s = np.random.rand(n, 3)
# Normalize weights
s_norm = s / s.sum(1, keepdims=True)
# Output array
out = np.empty((n, 2), dtype=aa.dtype)
# Sample iteratively
for i in range(n):
    out[i] = aa[i, np.random.choice(3, size=2, replace=False, p=s_norm[i])]
This is not the most efficient way to do things, though, as usually using vectorized operations is much faster than looping. Unfortunately, I don't think there is any way to sample from multiple categorical distributions at the same time (see NumPy issue #15201). However, since you always want to get two elements out of three, you could sample the element that you want to remove (with inverted probabilities) and then keep the other two. This snippet does something like that:
import numpy as np
# Make input data
np.random.seed(0)
n = 10
aa = np.random.uniform(1000, 2500, (n, 3))
s = np.random.rand(n, 3)
print(s)
# [[0.26455561 0.77423369 0.45615033]
# [0.56843395 0.0187898 0.6176355 ]
# [0.61209572 0.616934 0.94374808]
# [0.6818203 0.3595079 0.43703195]
# [0.6976312 0.06022547 0.66676672]
# [0.67063787 0.21038256 0.1289263 ]
# [0.31542835 0.36371077 0.57019677]
# [0.43860151 0.98837384 0.10204481]
# [0.20887676 0.16130952 0.65310833]
# [0.2532916 0.46631077 0.24442559]]
# Invert weights
si = 1 / s
# Normalize
si_norm = si / si.sum(1, keepdims=True)
# Accumulate
si_cum = np.cumsum(si_norm, axis=1)
# Sample according to inverted probabilities
t = np.random.rand(n, 1)
idx = np.argmax(t < si_cum, axis=1)
# Get non-sampled indices
r = np.arange(3)
m = r != idx[:, np.newaxis]
choice = np.broadcast_to(r, m.shape)[m].reshape(n, -1)
print(choice)
# [[1 2]
# [0 2]
# [0 2]
# [1 2]
# [0 2]
# [0 2]
# [0 1]
# [1 2]
# [0 2]
# [1 2]]
# Get corresponding data
out = np.take_along_axis(aa, choice, 1)
One possible drawback of this is that the chosen elements will always be in order (that is, for a given row, you may get the pairs of indices (0, 1), (0, 2) or (1, 2), but not (1, 0), (2, 0) or (2, 1)).
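If that matters, a minimal sketch of a fix (my own addition, operating on the out and n from the snippet above) is to randomly reverse the chosen pair in roughly half of the rows afterwards, which makes both orderings equally likely:
# Randomly swap the two chosen columns in about half of the rows.
swap = np.random.rand(n) < 0.5
out[swap] = out[swap, ::-1]
The same mask could be applied to choice if you also need the indices themselves in randomized order.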
Of course, if you really just need a few samples, then the loop is probably the most convenient and maintainable solution, the second one would only be useful if you need to do this at larger scale.
SQDIFF is defined as in the OpenCV documentation (I believe they omit the channels).
Written out in naive NumPy, this should be:
import cv2 as cv
import numpy as np

A = np.arange(27, dtype=np.float32)
A = A.reshape(3, 3, 3)  # The "image"
B = np.ones([2, 2, 3], dtype=np.float32)  # window
rw, rh = A.shape[0] - B.shape[0] + 1, A.shape[1] - B.shape[1] + 1  # End result size
result = np.zeros([rw, rh])
for i in range(rw):
    for j in range(rh):
        w = A[i:i + B.shape[0], j:j + B.shape[1]]
        res = B - w
        result[i, j] = np.sum(res ** 2)
cv_result = cv.matchTemplate(A, B, cv.TM_SQDIFF)  # this result is the same as the simple for loops
assert np.allclose(cv_result, result)
This is a comparatively slow solution. I have read about sliding_window_view but cannot get it to work:
# This will fail with these large arrays but is ok for smaller ones
A = np.random.rand(1028, 1232, 3).astype(np.float32)
B = np.random.rand(248, 249, 3).astype(np.float32)
locations = np.lib.stride_tricks.sliding_window_view(A, B.shape)
sqdiff = np.sum((B - locations) ** 2, axis=(-1,-2, -3, -4)) # This will fail with normal sized images
This fails with a MemoryError even though the result itself easily fits in memory. How can I produce the same results as cv2.matchTemplate in this faster way?
As a last resort, you may perform the computation in tiles, instead of computing "all at once".
np.lib.stride_tricks.sliding_window_view returns a view of the data, so it doesn't consume a lot of RAM.
The expression B - locations can't use a view, and requires the RAM for storing an array with shape (781, 984, 1, 248, 249, 3) of float elements.
The total RAM for storing B - locations is 781*984*1*248*249*3*4 = 569,479,908,096 bytes.
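(As a quick sanity check of that figure, here is a small, self-contained calculation; the shape is the one quoted above:)
import numpy as np

# B - locations would have shape (781, 984, 1, 248, 249, 3) of float32 (4 bytes each).
shape = (781, 984, 1, 248, 249, 3)
print(np.prod(shape, dtype=np.int64) * 4)          # 569479908096 bytes
print(np.prod(shape, dtype=np.int64) * 4 / 2**30)  # roughly 530 GiB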
To avoid storing all of B - locations in RAM at once, we may compute sqdiff in tiles, where each "tile" computation requires less RAM.
A simple tiling scheme uses every row as a tile: loop over the rows of sqdiff and compute the output row by row.
Example:
sqdiff = np.zeros((locations.shape[0], locations.shape[1]), np.float32)  # Allocate an array for storing the result.

# Compute sqdiff row by row instead of computing all at once.
for i in range(sqdiff.shape[0]):
    sqdiff[i, :] = np.sum((B - locations[i, :, :, :, :, :]) ** 2, axis=(-1, -2, -3, -4))
Executable code sample:
import numpy as np
import cv2
A = np.random.rand(1028, 1232, 3).astype(np.float32)
B = np.random.rand(248, 249, 3).astype(np.float32)
locations = np.lib.stride_tricks.sliding_window_view(A, B.shape)
cv_result = cv2.matchTemplate(A, B, cv2.TM_SQDIFF) # this result is the same as the simple for loops
#sqdiff = np.sum((B - locations) ** 2, axis=(-1, -2, -3, -4)) # This will fail with normal sized images
sqdiff = np.zeros((locations.shape[0], locations.shape[1]), np.float32) # Allocate an array for storing the result.
# Compute sqdiff row by row instead of computing all at once.
for i in range(sqdiff.shape[0]):
    sqdiff[i, :] = np.sum((B - locations[i, :, :, :, :, :]) ** 2, axis=(-1, -2, -3, -4))
assert np.allclose(cv_result, sqdiff)
I know the solution is a bit disappointing... But it is the only generic solution I could find.
The SQDIFF

SQDIFF(k, l) = sum_{m, n} (T[m, n] - I[k + m, l + n])^2

is equivalent to

(T^2 ⋆ 1_[k, l]) - 2 (T ⋆ I) + (1_[m, n] ⋆ I^2)

where the 'star' operation is a cross-correlation, 1_[m, n] is a ones window the size of the template, and 1_[k, l] is a ones window the size of the image.
You can compute the cross-correlation terms using 'scipy.signal.correlate' and find the matches by looking for local minima in the square difference map.
You might want to do some non-minimum suppression too.
This solution will require orders of magnitude less memory to store.
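A minimal sketch of that decomposition (my own illustration, assuming single-channel float arrays; for 3-channel data you would sum the terms over the channel axis as well):
import numpy as np
from scipy.signal import correlate

img = np.random.rand(300, 400).astype(np.float64)   # image (arbitrary example sizes)
tmpl = np.random.rand(50, 60).astype(np.float64)    # template

# Local sums of img**2 under the template window: correlate img**2 with a ones window.
window = np.ones_like(tmpl)
i2 = correlate(img ** 2, window, mode='valid')

# Cross-correlation of the image with the template.
cross = correlate(img, tmpl, mode='valid')

# Constant term: sum of the squared template values.
t2 = np.sum(tmpl ** 2)

sqdiff = i2 - 2 * cross + t2   # same shape as the valid sliding-window result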
For more help, please post a reproducible example with an image and template that are valid for the algorithm. Using noise will result in meaningless outputs.
I am generating a random matrix with
np.random.randint(2, size=(5, 3))
that outputs something like
[0,1,0],
[1,0,0],
[1,1,1],
[1,0,1],
[0,0,0]
How do I create the random matrix with the condition that each row cannot contain all 1's? That is, each row can be [1,0,0] or [0,0,0] or [1,1,0] or [1,0,1] or [0,0,1] or [0,1,0] or [0,1,1] but cannot be [1,1,1].
Thanks for your answers
Here's an interesting approach:
rows = np.random.randint(7, size=(6, 1), dtype=np.uint8)
np.unpackbits(rows, axis=1)[:, -3:]
Essentially, you are choosing an integer 0-6 for each row, i.e. 000-110 in binary; 7 would be 111 (all 1's). You just need to extract the binary digits as columns and take the last 3 digits (your 3 columns), since the output of unpackbits is 8 digits.
Output:
array([[1, 0, 1],
[1, 0, 0],
[1, 0, 0],
[1, 0, 0],
[0, 1, 1],
[0, 0, 0]], dtype=uint8)
If you always have 3 columns, one approach is to explicitly list the possible rows and then choose randomly among them until you have enough rows:
import numpy as np
# every acceptable row
choices = np.array([
[1,0,0],
[0,0,0],
[1,1,0],
[1,0,1],
[0,0,1],
[0,1,0],
[0,1,1]
])
n_rows = 5
# randomly pick which type of row to use for each row needed
idx = np.random.choice(range(len(choices)), size=n_rows)
# make an array by using the chosen rows
array = choices[idx]
If this needs to generalize to a large number of columns, it won't be practical to explicitly list all choices (even if you create the choices programmatically, the memory is still an issue; the number of possible rows grows exponentially in the number of columns). Instead, you can create an initial matrix and then just resample any unacceptable rows until there are none left. I'm assuming that a row is unacceptable if it consists only of 1s; it would be easy to adapt this to the case where the threshold is any number of 1s, though.
n_rows = 5
n_cols = 4
array = np.random.randint(2, size=(n_rows, n_cols))
all_1s_idx = array.sum(axis=-1) == n_cols
while all_1s_idx.any():
    array[all_1s_idx] = np.random.randint(2, size=(all_1s_idx.sum(), n_cols))
    all_1s_idx = array.sum(axis=-1) == n_cols
Here we just keep resampling all unacceptable rows until there are none left. Because all of the necessary rows are resampled at once, this should be quite efficient. Additionally, as the number of columns grows larger, the probability of a row having all 1s decreases exponentially, so efficiency shouldn't be a problem.
@busybear beat me to it but I'll post it anyway, as it is a bit more general:
import sys
import numpy as np

def not_all(m, k):
    if k > 64 or sys.byteorder != 'little':
        raise NotImplementedError
    sample = np.random.randint(0, 2**k - 1, (m,), dtype='u8').view('u1').reshape(m, -1)
    sample[:, k // 8] <<= -k % 8
    return np.unpackbits(sample).reshape(m, -1)[:, :k]
For example:
>>> sample = not_all(1000000, 11)
# sanity checks
>>> unq, cnt = np.unique(sample, axis=0, return_counts=True)
>>> len(unq) == 2**11-1
True
>>> unq.sum(1).max()
10
>>> cnt.min(), cnt.max()
(403, 568)
And while I'm at it, hijacking other people's answers, here is a streamlined version of @Nathan's acceptance-rejection method:
def accrej(m, k):
    sample = np.random.randint(0, 2, (m, k), bool)
    all_ones, = np.where(sample.all(1))
    while all_ones.size:
        resample = np.random.randint(0, 2, (all_ones.size, k), bool)
        sample[all_ones] = resample
        all_ones = all_ones[resample.all(1)]
    return sample.view('u1')
Try this solution using sum():
import numpy as np
array = np.random.randint(2, size=(5, 3))
for i, entry in enumerate(array):
    if entry.sum() == 3:
        while True:
            new = np.random.randint(2, size=(1, 3))
            if new.sum() == 3:
                continue
            break
        array[i] = new
print(array)
Good luck my friend!
Given a source array
src = np.random.rand(320,240)
and an index array
idx = np.indices(src.shape).reshape(2, -1)
np.random.shuffle(idx.T)
we can map the linear index i in src to the 2-dimensional index idx[:,i] in a destination array dst via
dst = np.empty_like(src)
dst[tuple(idx)] = src.ravel()
This is discussed in Python: Mapping between two arrays with an index array
However, if this mapping is not 1-to-1, i.e., multiple entries in src map to the same entry in dst, according to the docs it is unspecified which of the source entries will be written to dst:
For advanced assignments, there is in general no guarantee for the iteration order. This means that if an element is set more than once, it is not possible to predict the final result.
If we are additionally given a precedence array
p = np.random.rand(*src.shape)
how can we use p to disambiguate this situation, i.e., write the entry with highest precedence according to p?
Here is a method using a sparse matrix for sorting (it has a large overhead but scales better than argsort, presumably because it uses some radix-sort-like method (?)). Duplicate indices without precedence are explicitly set to -1. We make the destination array one cell too big, with the surplus cell serving as a trash can.
import numpy as np
from scipy import sparse
N = 2
idx = np.random.randint(0, N, (2, N, N))
prec = np.random.random((N, N))
src = np.arange(N*N).reshape(N, N)
def f_sparse(idx, prec, src):
    idx = np.ravel_multi_index(idx, src.shape).ravel()
    sp = sparse.csr_matrix((prec.ravel(), idx, np.arange(idx.size + 1)),
                           (idx.size, idx.size)).tocsc()
    top = sp.indptr.argmax()
    mx = np.repeat(np.maximum.reduceat(sp.data, sp.indptr[:top]),
                   np.diff(sp.indptr[:top + 1]))
    res = idx.copy()
    res[sp.indices[sp.data != mx]] = -1
    dst = np.full((idx.size + 1,), np.nan)
    dst[res] = src.ravel()
    return dst[:-1].reshape(src.shape)
print(idx)
print(prec)
print(src)
print(f_sparse(idx, prec, src))
Sample run:
[[[1 0]
[1 0]]
[[0 1]
[0 0]]]
[[0.90995366 0.92095225]
[0.60997092 0.84092015]]
[[0 1]
[2 3]]
[[ 3. 1.]
[ 0. nan]]
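For comparison, here is a much simpler (if not necessarily faster) sketch of the same disambiguation; this is my own addition rather than part of the answer above. The idea: let np.maximum.at compute the highest precedence per destination cell and only write the sources that achieve it. Ties remain ambiguous, but with continuous random precedences they essentially never occur. The destination indices are assumed to already be in linear form; np.ravel_multi_index converts the 2-D idx from the question.
import numpy as np

# Toy setup: many sources competing for destination cells.
src = np.random.rand(320, 240)
p = np.random.rand(*src.shape)                      # precedence
lin = np.random.randint(0, src.size, src.size)      # linear destination index per source (with duplicates)

best = np.full(src.size, -np.inf)
np.maximum.at(best, lin, p.ravel())                 # per-destination maximum precedence
winner = p.ravel() == best[lin]                     # sources achieving that maximum
dst = np.full(src.size, np.nan)                     # cells that receive nothing stay NaN
dst[lin[winner]] = src.ravel()[winner]
dst = dst.reshape(src.shape)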
I have an array with 5 columns: 4 values and one index. I sort and split the array along the index, which leads to sub-matrices of different lengths. For every split I then want to calculate the mean and variance of the fourth column and the covariance of the first 3 columns. My current approach works with a for loop, which I would like to replace with matrix operations, but I am struggling with the different sizes of my matrices.
import numpy as np
A = np.random.rand(10,5)
A[:,-1] = np.random.randint(4, size=10)
sorted_A = A[np.argsort(A[:,4])]
splits = np.split(sorted_A, np.where(np.diff(sorted_A[:,4]))[0]+1)
My current for loop looks like this:
result = np.zeros((len(splits), 5))
for idx, values in enumerate(splits):
    if len(values) > 0:
        result[idx, 0] = np.mean(values[:, 3])
        result[idx, 1] = np.var(values[:, 3])
        result[idx, 2:5] = np.cov(values[:, 0:3].transpose(), ddof=0).diagonal()
    else:
        result[idx, 0] = values[:, 3]
I tried to work with masked arrays without success, since I couldn't load the matrices into the masked arrays in a proper form. Maybe someone knows how to do this or has a different suggestion.
You can use np.add.reduceat as follows:
>>> idx = np.concatenate([[0], np.where(np.diff(sorted_A[:,4]))[0]+1, [A.shape[0]]])
>>> result2 = np.empty((idx.size-1, 5))
>>> result2[:, 0] = np.add.reduceat(sorted_A[:, 3], idx[:-1]) / np.diff(idx)
>>> result2[:, 1] = np.add.reduceat(sorted_A[:, 3]**2, idx[:-1]) / np.diff(idx) - result2[:, 0]**2
>>> result2[:, 2:5] = np.add.reduceat(sorted_A[:, :3]**2, idx[:-1], axis=0) / np.diff(idx)[:, None]
>>> result2[:, 2:5] -= (np.add.reduceat(sorted_A[:, :3], idx[:-1], axis=0) / np.diff(idx)[:, None])**2
>>>
>>> np.allclose(result, result2)
True
Note that the diagonal of the covariance matrix is just the variances, which simplifies this vectorization quite a bit.
Suppose that the variables x and theta can take the possible values [0, 1, 2] and [0, 1, 2, 3], respectively.
Let's say that in one realization, x = 1 and theta = 3. The natural way to represent this is by a tuple (1,3). However, I'd like to instead label the state (1,3) by a single index. A 'brute-force' method of doing this is to form the Cartesian product of all the possible ordered pairs (x,theta) and look it up:
import numpy as np
import itertools

N_x = 3
N_theta = 4
np.random.seed(seed=1)
x = np.random.choice(range(N_x))
theta = np.random.choice(range(N_theta))

def get_box(x, N_x, theta, N_theta):
    states = list(itertools.product(range(N_x), range(N_theta)))
    inds = [i for i in range(len(states)) if states[i] == (x, theta)]
    return inds[0]

print(x, theta)
box = get_box(x, N_x, theta, N_theta)
print(box)
This gives (x, theta) = (1,3) and box = 7, which makes sense if we look it up in the states list:
[(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3), (2, 0), (2, 1), (2, 2), (2, 3)]
However, this 'brute-force' approach seems inefficient, as it should be possible to determine the index beforehand without looking it up. Is there any general way to do this? (The number of states N_x and N_theta may vary in the actual application, and there might be more variables in the Cartesian product).
If you always store your states lexicographically and the possible values for x and theta are always the complete range from 0 to some maximum as your examples suggests, you can use the formula
index = x * N_theta + theta
where (x, theta) is one of your tuples.
This generalizes in the following way to higher dimensional tuples: If N is a list or tuple representing the ranges of the variables (so N[0] is the number of possible values for the first variable, etc.) and p is a tuple, you get the index into a lexicographically sorted list of all possible tuples using the following snippet:
index = 0
skip = 1
for dimension in reversed(range(len(N))):
    index += skip * p[dimension]
    skip *= N[dimension]
This might not be the most Pythonic way to do it but it shows what is going on: You think of your tuples as a hypercube where you can only go along one dimension, but if you reach the edge, your coordinate in the "next" dimension increases and your traveling coordinate resets. The reader is advised to draw some pictures. ;)
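As a side note (my addition, not part of the original answer): NumPy ships exactly this lexicographic flattening as np.ravel_multi_index, so for array-backed code you can skip the manual loop:
import numpy as np

# (x, theta) = (1, 3) with ranges (N_x, N_theta) = (3, 4) -> index 7
print(np.ravel_multi_index((1, 3), dims=(3, 4)))         # 7

# Higher-dimensional tuple, as used further below: p = (1, 3, 2), N = (3, 4, 3) -> 23
print(np.ravel_multi_index((1, 3, 2), dims=(3, 4, 3)))   # 23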
I think it depends on the data you have. If they are sparse, the best solution is a dictionary, and that works for tuples of any dimension.
import itertools
import random

n = 100
m = 100
l1 = [i for i in range(n)]
l2 = [i for i in range(m)]
a = {}
prod = [element for element in itertools.product(l1, l2)]
for i in prod:
    a[i] = random.randint(1, 100)
A very good source about the performance is in this discussion.
For the sake of completeness I'll include my implementation of Julian Kniephoff's solution, get_box3, with a slightly adapted version of the original implementation, get_box2:
# 'Brute-force' method
def get_box2(p, N):
    states = list(itertools.product(*[range(n) for n in N]))
    return states.index(p)

# 'Analytic' method
def get_box3(p, N):
    index = 0
    skip = 1
    for dimension in reversed(range(len(N))):
        index += skip * p[dimension]
        skip *= N[dimension]
    return index

p = (1, 3, 2)  # Tuple characterizing the total state of the system
N = [3, 4, 3]  # List of the number of possible values for each state variable

print("Brute-force method yields %s" % get_box2(p, N))
print("Analytical method yields %s" % get_box3(p, N))
Both the 'brute-force' and 'analytic' method yield the same result:
Brute-force method yields 23
Analytical method yields 23
but I expect the 'analytic' method to be faster. I've changed the representation to p and N as suggested by Julian.