In a nutshell:
My implementation of the Wave Function Collapse algorithm in Python 2.7 is flawed, but I'm unable to identify where the problem is located. I need help finding out what I'm possibly missing or doing wrong.
What is the Wave Function Collapse algorithm?
It is an algorithm written in 2016 by Maxim Gumin that can generate procedural patterns from a sample image. You can see it in action here (2D overlapping model) and here (3D tile model).
Goal of this implementation:
To boil the algorithm (2D overlapping model) down to its essence and avoid the redundancies and clumsiness of the original C# script (surprisingly long and difficult to read). This is an attempt to make a shorter, clearer and more pythonic version of this algorithm.
Characteristics of this implementation:
I'm using Processing (Python mode), software for visual design that makes image manipulation easier (no PIL, no Matplotlib, ...). The main drawbacks are that I'm limited to Python 2.7 and cannot import numpy.
Unlike the original version, this implementation:
is not object oriented (in its current state), making it easier to understand / closer to pseudo-code
is using 1D arrays instead of 2D arrays
is using array slicing for matrix manipulation
The Algorithm (as I understand it)
1/ Read the input bitmap, store every NxN pattern and count their occurrences.
(optional: Augment pattern data with rotations and reflections.)
For example, when N = 3:
2/ Precompute and store every possible adjacency relation between patterns.
In the example below, patterns 207, 242, 182 and 125 can overlap the right side of pattern 246
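In code, this overlap test for flattened NxN patterns can be sketched as follows (a standalone illustration, not part of the script further down; overlaps_right is a hypothetical helper name):

N = 3

def overlaps_right(A, B):
    # B may sit on the right side of A when A's last N-1 columns
    # equal B's first N-1 columns (patterns are row-major flat tuples)
    a = [v for i, v in enumerate(A) if i % N != 0]      # drop A's first column
    b = [v for i, v in enumerate(B) if i % N != N - 1]  # drop B's last column
    return a == b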
3/ Create an array with the dimensions of the output (called W for wave). Each element of this array is an array holding the state (True or False) of each pattern.
For example, let's say we count 326 unique patterns in the input and we want our output to be of dimensions 20 by 20 (400 cells). Then the "Wave" array will contain 400 (20x20) arrays, each of them containing 326 boolean values.
At start, all booleans are set to True because every pattern is allowed at any position of the Wave.
W = [[True for pattern in xrange(len(patterns))] for cell in xrange(20*20)]
4/ Create another array with the dimensions of the output (called H). Each element of this array is a float holding the "entropy" value of its corresponding cell in output.
Entropy here refers to Shannon Entropy and is computed based on the number of valid patterns at a specific location in the Wave. The more a cell has valid patterns (set to True in the Wave), the higher its entropy is.
For example, to compute the entropy of cell 22 we look at its corresponding index in the wave (W[22]) and count the number of booleans set to True. With that count we can now compute the entropy with the Shannon formula. The result of this calculation will then be stored in H at the same index, H[22].
At start, all cells have the same entropy value (same float at every position in H) since all patterns are set to True, for each cell.
H = [entropyValue for cell in xrange(20*20)]
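For reference, the weighted Shannon entropy used throughout this post can be written as a small standalone helper (a sketch; weights stands for the frequencies of the patterns still set to True in a cell, and the formula is the same one used in the script further down):

import math

def shannon_entropy(weights):
    # H = log(W) - sum(w * log(w)) / W, with W = sum of the weights
    total = sum(weights)
    return math.log(total) - sum(w * math.log(w) for w in weights) / total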
These 4 steps are introductory steps; they are necessary to initialize the algorithm. Now starts the core of the algorithm:
5/ Observation:
Find the index of the cell with the minimum nonzero entropy. (Note that at the very first iteration all entropies are equal, so we need to pick the index of a cell randomly.)
Then, look at the still-valid patterns at the corresponding index in the Wave and select one of them randomly, weighted by the frequency with which that pattern appears in the input image (weighted choice).
For example, if the lowest value in H is at index 22 (H[22]), we look at all the patterns set to True at W[22] and pick one randomly, weighted by the number of times each appears in the input. (Remember that at step 1 we counted the number of occurrences of each pattern.) This ensures that patterns appear in the output with a distribution similar to the one found in the input.
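Since Python 2.7 has no built-in weighted random choice, the weighted pick can be hand-coded like this (a minimal standalone sketch; the script further down implements the same idea with cumulative sums):

import random

def weighted_choice(weights):
    # returns an index drawn proportionally to its weight
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # floating point safety net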
6/ Collapse:
We now assign the index of the selected pattern to the cell with the minimum entropy, meaning that every pattern at the corresponding location in the Wave is set to False except for the one that has been chosen.
For example if pattern 246 in W[22] was set to True and has been selected, then all other patterns are set to False. Cell 22 is assigned pattern 246.
In the output, cell 22 will be filled with the first color (top-left corner) of pattern 246 (blue in this example).
7/ Propagation:
Because of adjacency constraints, that pattern selection has consequences for the neighboring cells in the Wave. The arrays of booleans corresponding to the cells to the left and right of, above, and below the recently collapsed cell need to be updated accordingly.
For example, if cell 22 has been collapsed and assigned pattern 246, then W[21] (left), W[23] (right), W[2] (up) and W[42] (down) have to be modified so that they keep set to True only the patterns that can be adjacent to pattern 246.
For example, looking back at the picture of step 2, we can see that only patterns 207, 242, 182 and 125 can be placed on the right of pattern 246. That means that W[23] (right of cell 22) needs to keep patterns 207, 242, 182 and 125 set to True and set all other patterns in the array to False. If none of these patterns is valid anymore (already set to False because of a previous constraint), then the algorithm is facing a contradiction.
8/ Updating entropies
Because a cell has been collapsed (one pattern selected, set to True) and its surrounding cells updated accordingly (setting non-adjacent patterns to False), the entropies of all these cells have changed and need to be computed again. (Remember that the entropy of a cell is correlated to the number of valid patterns it holds in the Wave.)
In the example, the entropy of cell 22 is now 0 (H[22] = 0, because only pattern 246 is set to True at W[22]) and the entropies of its neighboring cells have decreased (patterns that were not adjacent to pattern 246 have been set to False).
By now the algorithm has arrived at the end of the first iteration, and it will loop over steps 5 (find the cell with minimum nonzero entropy) to 8 (update entropies) until all cells are collapsed.
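Put together, the core loop looks roughly like this (pseudo-Python; all helper names are placeholders, not functions from the script below):

while not all_collapsed(H):
    i = index_of_min_nonzero_entropy(H)          # 5/ observation
    p = weighted_choice_of_patterns(W[i], freqs) # 5/ weighted pick
    collapse(W, i, p)                            # 6/ collapse
    propagate(W, A, i, p)                        # 7/ update the 4 neighbors
    update_entropies(H, W)                       # 8/ recompute entropies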
My script
You'll need Processing with Python mode installed to run this script.
It contains around 80 lines of code (short compared to the ~1000 lines of the original script) that are fully annotated so it can be rapidly understood. You'll also need to download the input image and change the path on line 16 accordingly.
from collections import Counter
from itertools import chain, izip
import math
d = 20 # dimensions of output (array of dxd cells)
N = 3 # dimensions of a pattern (NxN matrix)
Output = [120 for i in xrange(d*d)] # array holding the color value for each cell in the output (at start each cell is grey = 120)
def setup():
    size(800, 800, P2D)
    textSize(11)

    global W, H, A, freqs, patterns, directions, xs, ys, npat
    img = loadImage('Flowers.png') # path to the input image
    iw, ih = img.width, img.height # dimensions of input image
    xs, ys = width//d, height//d # dimensions of cells (squares) in output
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)] # NxN matrix to read every pattern contained in the input image
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)] # (x, y) tuples to access the 4 neighboring cells of a collapsed cell
    all = [] # list to store all the patterns found in the input

    # Stores the different patterns found in input
    for y in xrange(ih):
        for x in xrange(iw):

            ''' The one-liner below (cmat) creates a NxN matrix with (x, y) being its top left corner.
                This matrix will wrap around the edges of the input image.
                The whole snippet reads every NxN part of the input image and stores the associated colors.
                Each NxN part is called a 'pattern' (of colors). Each pattern can be rotated or flipped (not mandatory). '''

            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]

            # Storing rotated patterns (90°, 180°, 270°, 360°)
            for r in xrange(4):
                cmat = zip(*cmat[::-1]) # +90° rotation
                all.append(cmat)

                # Storing reflected patterns (vertical/horizontal flip)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    # Flatten pattern matrices + count occurrences
    ''' Once every pattern has been stored,
        - we flatten them (convert to 1D) for convenience
        - count the number of occurrences of each of them (one pattern can be found multiple times in the input)
        - select unique patterns only
        - store them from least common to most common (needed for the weighted choice)'''

    all = [tuple(chain.from_iterable(p)) for p in all] # flatten pattern matrices (NxN --> 1D)
    c = Counter(all)
    freqs = sorted(c.values()) # number of occurrences of each unique pattern, in sorted order
    npat = len(freqs) # number of unique patterns
    total = sum(freqs) # sum of the frequencies of the unique patterns
    patterns = [p[0] for p in c.most_common()[:-npat-1:-1]] # list of unique patterns sorted from least common to most common

    # Computes entropy
    ''' The entropy of a cell is correlated to the number of possible patterns that cell holds.
        The more valid patterns a cell has (set to 'True'), the higher its entropy is.
        At start, every pattern is set to 'True' for each cell, so each cell holds the same high entropy value'''

    ent = math.log(total) - sum(map(lambda x: x * math.log(x), freqs)) / total

    # Initializes the 'wave' (W), entropy (H) and adjacency (A) lists
    W = [[True for _ in xrange(npat)] for i in xrange(d*d)] # every pattern is set to 'True' at start, for each cell
    H = [ent for i in xrange(d*d)] # same entropy for each cell at start (every pattern is valid)
    A = [[set() for dir in xrange(len(directions))] for i in xrange(npat)] # see below for explanation

    # Compute pattern compatibilities (check if some patterns are adjacent; if so -> store them based on their location)
    ''' EXAMPLE:
        If pattern index 42 can be placed to the right of pattern index 120,
        we will store this adjacency rule as follows:

            A[120][1].add(42)

        Here '1' stands for 'right' or 'East'/'E'

        0 = left or West/W
        1 = right or East/E
        2 = up or North/N
        3 = down or South/S '''

    # Comparing patterns to each other
    for i1 in xrange(npat):
        for i2 in xrange(npat):
            for dir in (0, 2):
                if compatible(patterns[i1], patterns[i2], dir):
                    A[i1][dir].add(i2)
                    A[i2][dir+1].add(i1)

def compatible(p1, p2, dir):
    '''NOTE:
    what is referred to as 'columns' and 'rows' below is not literally columns and rows,
    since we are dealing with flattened 1D patterns. Remember here N = 3'''

    # If the first two columns of pattern 1 == the last two columns of pattern 2
    # --> pattern 2 can be placed to the left (0) of pattern 1
    if dir == 0:
        return [n for i, n in enumerate(p1) if i%N!=2] == [n for i, n in enumerate(p2) if i%N!=0]

    # If the first two rows of pattern 1 == the last two rows of pattern 2
    # --> pattern 2 can be placed on top (2) of pattern 1
    if dir == 2:
        return p1[:6] == p2[-6:]
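# Quick sanity check of compatible() on two toy 3x3 patterns
# (hypothetical values, not part of the original script):
# p_a = (1, 2, 3, 4, 5, 6, 7, 8, 9)
# p_b = (0, 1, 2, 0, 4, 5, 0, 7, 8)
# p_b's last two columns equal p_a's first two columns,
# so p_b can be placed to the left of p_a:
# print compatible(p_a, p_b, 0) # --> True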
def draw(): # Equivalent of a 'while' loop in Processing (the code below is looped over until all cells are collapsed)
    global H, W, grid

    ### OBSERVATION
    # Find the cell with minimum non-zero entropy (not collapsed yet)
    '''Randomly select 1 cell at the first iteration (when all entropies are equal),
       otherwise select the cell with minimum non-zero entropy'''
    emin = int(random(d*d)) if frameCount <= 1 else H.index(min(H))

    # Stopping mechanism
    ''' When the 'H' array is full of 'collapsed' cells --> stop iterating '''
    if H[emin] == 'CONT' or H[emin] == 'collapsed':
        print 'stopped'
        noLoop()
        return

    ### COLLAPSE
    # Weighted choice of a pattern
    ''' Among the patterns available in the selected cell (the one with min entropy),
        select one pattern randomly, weighted by the frequency with which that pattern appears in the input image.
        With Python 2.7 there is no random.choice(x, weight), so we have to hand-code the weighted choice '''
    lfreqs = [b * freqs[i] for i, b in enumerate(W[emin])] # frequencies of the patterns available in the selected cell
    weights = [float(f) / sum(lfreqs) for f in lfreqs] # normalizing these frequencies
    cumsum = [sum(weights[:i]) for i in xrange(1, len(weights)+1)] # cumulative sums of the normalized frequencies
    r = random(1)
    idP = sum([cs < r for cs in cumsum]) # index of the selected pattern

    # Set all patterns to False except for the one that has been chosen
    W[emin] = [0 if i != idP else 1 for i, b in enumerate(W[emin])]

    # Marking the selected cell as 'collapsed' in H (array of entropies)
    H[emin] = 'collapsed'

    # Storing the first color (top left corner) of the selected pattern at the location of the collapsed cell
    Output[emin] = patterns[idP][0]

    ### PROPAGATION
    # For each neighbor (left, right, up, down) of the recently collapsed cell
    for dir, t in enumerate(directions):
        x = (emin%d + t[0])%d
        y = (emin/d + t[1])%d
        idN = x + y * d # index of the neighbor

        # If that neighbor hasn't been collapsed yet
        if H[idN] != 'collapsed':

            # Check the indices of all available patterns in that neighboring cell
            available = [i for i, b in enumerate(W[idN]) if b]

            # Among these indices, select the indices of the patterns that can be adjacent to the collapsed cell at this location
            intersection = A[idP][dir] & set(available)

            # If the neighboring cell contains indices of patterns that can be adjacent to the collapsed cell
            if intersection:

                # Remove the indices of all the other patterns, which cannot be adjacent to the collapsed cell
                W[idN] = [True if i in intersection else False for i in xrange(npat)]

                ### Update the entropy of that neighboring cell accordingly (fewer patterns = lower entropy)
                # If only 1 pattern is left, no need to compute the entropy: it is necessarily 0
                if len(intersection) == 1:
                    H[idN] = '0' # a str at this location in 'H' so that it is never returned as the minimum entropy (min(H)) at the next iteration
                # If more than 1 pattern is left --> compute/update the entropy + add noise (to prevent cells from sharing the same minimum entropy value)
                else:
                    lfreqs = [b * f for b, f in izip(W[idN], freqs) if b]
                    ent = math.log(sum(lfreqs)) - sum(map(lambda x: x * math.log(x), lfreqs)) / sum(lfreqs)
                    H[idN] = ent + random(.001)

            # If the neighboring cell holds no pattern index that can be adjacent to the collapsed cell
            # --> mark that cell as a 'contradiction'
            else:
                H[idN] = 'CONT'

    # Draw output
    ''' d x d grid of cells (squares) filled with their corresponding color.
        That color is the first (top-left) color of the pattern assigned to that cell '''
    for i, c in enumerate(Output):
        x, y = i%d, i/d
        fill(c)
        rect(x * xs, y * ys, xs, ys)

        # Displaying the corresponding entropy value
        fill(0)
        text(H[i], x * xs + xs/2 - 12, y * ys + ys/2)
Problem
Despite all my efforts to carefully put into code all the steps described above, this implementation returns very odd and disappointing results:
Example of a 20x20 output
Both the pattern distribution and the adjacency constraints seem to be respected (same amount of blue, green, yellow and brown colors as in the input, and the same kinds of patterns: horizontal ground, green stems).
However these patterns:
are often disconnected
are often incomplete (lack of "heads" composed of 4-yellow petals)
run into way too many contradictory states (grey cells marked as "CONT")
On that last point, I should clarify that contradictory states are normal but should happen very rarely (as stated in the middle of page 6 of this paper and in this article)
Hours of debugging convinced me that the introductory steps (1 to 5) are correct (counting and storing patterns, adjacency and entropy computations, array initialization). This has led me to think that something must be off with the core part of the algorithm (steps 6 to 8). Either I am implementing one of these steps incorrectly or I am missing a key element of the logic.
Any help regarding that matter would thus be immensely appreciated!
Also, any answer based on the script provided (using Processing or not) is welcome.
Useful additional resources:
This detailed article from Stephen Sherratt and this explanatory paper from Karth & Smith.
Also, for comparison I would suggest checking this other Python implementation (it contains a backtracking mechanism that isn't mandatory).
Note: I did my best to make this question as clear as possible (comprehensive explanation with GIFs and illustrations, fully annotated code with useful links and resources) but if for some reason you decide to vote it down, please leave a brief comment to explain why.
The hypothesis suggested by @mbrig and @Leon, that the propagation step iterates over a whole stack of cells (instead of being limited to a set of 4 direct neighbors), was correct. The following is an attempt to provide further details while answering my own question.
The problem occurred at step 7, while propagating. The original algorithm does update the 4 direct neighbors of a specific cell, BUT:
the index of that specific cell is in turn replaced by the indices of the previously updated neighbors,
this cascading process is triggered every time a cell is collapsed,
and lasts as long as the adjacent patterns of a specific cell are available in one of its neighboring cells.
In other words, and as mentioned in the comments, this is a recursive type of propagation that updates not only the neighbors of the collapsed cell, but also the neighbors of the neighbors... and so on, as long as adjacencies are possible.
Detailed Algorithm
Once a cell is collapsed, its index is put in a stack. That stack will later be used to temporarily store the indices of neighboring cells.
stack = set([emin]) #emin = index of cell with minimum entropy that has been collapsed
The propagation will last as long as that stack is filled with indices:
while stack:
The first thing we do is pop() the last index contained in the stack (the only one for now) and get the indices of its 4 neighboring cells (E, W, N, S). We have to keep them within bounds and make sure they wrap around.
while stack:
    idC = stack.pop() # index of current cell
    for dir, t in enumerate(mat):
        x = (idC%w + t[0])%w
        y = (idC/w + t[1])%h
        idN = x + y * w # index of neighboring cell
Before going any further, we make sure the neighboring cell has not been collapsed yet (we don't want to update a cell that has only 1 pattern available):
if H[idN] != 'c':
Then we check all the patterns that could be placed at that location. For example, if the neighboring cell is on the left of the current cell (west side), we look at all the patterns that can be placed on the left of each pattern contained in the current cell.
possible = set([n for idP in W[idC] for n in A[idP][dir]])
We also look at the patterns that are available in the neighboring cell:
available = W[idN]
Now we make sure that the neighboring cell really has to be updated. If all its available patterns are already in the list of all the possible patterns --> there's no need to update it (the algorithm skips this neighbor and goes on to the next):
if not available.issubset(possible):
However, if it is not a subset of the possible list —> we look at the intersection of the two sets (all the patterns that can be placed at that location and that, "luckily", are available at that same location):
intersection = possible & available
If they don't intersect (patterns that could have been placed there but are not available), it means we have run into a "contradiction". We have to stop the whole WFC algorithm.
if not intersection:
    print 'contradiction'
    noLoop()
If, on the contrary, they do intersect --> we update the neighboring cell with that refined set of pattern indices:
W[idN] = intersection
Because that neighboring cell has been updated, its entropy must be updated as well:
lfreqs = [freqs[i] for i in W[idN]]
H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
Finally, and most importantly, we add the index of that neighboring cell to the stack so it becomes the next current cell in turn (the one whose neighbors will be updated during the next while loop):
stack.add(idN)
Full updated script
from collections import Counter
from itertools import chain
from random import choice
w, h = 40, 25
N = 3
def setup():
    size(w*20, h*20, P2D)
    background('#FFFFFF')
    frameRate(1000)
    noStroke()

    global W, A, H, patterns, freqs, npat, mat, xs, ys
    img = loadImage('Flowers.png')
    iw, ih = img.width, img.height
    xs, ys = width//w, height//h
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)]
    mat = ((-1, 0), (1, 0), (0, -1), (0, 1))
    all = []

    for y in xrange(ih):
        for x in xrange(iw):
            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]
            for r in xrange(4):
                cmat = zip(*cmat[::-1])
                all.append(cmat)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    all = [tuple(chain.from_iterable(p)) for p in all]
    c = Counter(all)
    patterns = c.keys()
    freqs = c.values()
    npat = len(freqs)

    W = [set(range(npat)) for i in xrange(w*h)]
    A = [[set() for dir in xrange(len(mat))] for i in xrange(npat)]
    H = [100 for i in xrange(w*h)]

    for i1 in xrange(npat):
        for i2 in xrange(npat):
            if [n for i, n in enumerate(patterns[i1]) if i%N!=(N-1)] == [n for i, n in enumerate(patterns[i2]) if i%N!=0]:
                A[i1][0].add(i2)
                A[i2][1].add(i1)
            if patterns[i1][:(N*N)-N] == patterns[i2][N:]:
                A[i1][2].add(i2)
                A[i2][3].add(i1)

def draw():
    global H, W

    emin = int(random(w*h)) if frameCount <= 1 else H.index(min(H))

    if H[emin] == 'c':
        print 'finished'
        noLoop()
        return # without this return, the finished grid would be processed once more

    id = choice([idP for idP in W[emin] for i in xrange(freqs[idP])])
    W[emin] = [id]
    H[emin] = 'c'

    stack = set([emin])
    while stack:
        idC = stack.pop()
        for dir, t in enumerate(mat):
            x = (idC%w + t[0])%w
            y = (idC/w + t[1])%h
            idN = x + y * w
            if H[idN] != 'c':
                possible = set([n for idP in W[idC] for n in A[idP][dir]])
                if not W[idN].issubset(possible):
                    intersection = possible & W[idN]
                    if not intersection:
                        print 'contradiction'
                        noLoop()
                        return
                    W[idN] = intersection
                    lfreqs = [freqs[i] for i in W[idN]]
                    H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
                    stack.add(idN)

    fill(patterns[id][0])
    rect((emin%w) * xs, (emin/w) * ys, xs, ys)
Overall improvements
In addition to these fixes, I also did some minor code optimization to speed up both the observation and propagation steps and shorten the weighted choice computation.
The "Wave" is now composed of Python sets of indices whose size decreases as cells are "collapsed" (replacing large fixed-size lists of booleans).
Entropies are stored in a defaultdict whose keys are progressively deleted.
The starting entropy value is replaced by a random integer (the first entropy calculation is not needed since all cells start at the same, maximal level of uncertainty).
Cells are displayed once (avoiding storing them in an array and redrawing at each frame).
The weighted choice is now a one-liner (avoiding several dispensable lines of list comprehension).
While looking at the live demo linked in one of your examples, and based on a quick review of the original algorithm code, I believe your error lies in the "Propagation" step.
The propagation is not just updating the 4 cells neighbouring the collapsed cell. You must also update all of those cells' neighbours, and then the neighbours of those cells, etc., recursively. Well, to be specific, as soon as you update a single neighbouring cell, you then update its neighbours (before getting to the other neighbours of the first cell), i.e. depth-first, not breadth-first updates. At least, that's what I gather from the live demo.
The actual C# code implementation of the original algorithm is quite complicated and I don't fully understand it, but the key points appear to be creation of the "propagator" object here, as well as the Propagate function itself, here.
I have a 2D array "D" that contains dogs and their health conditions.
The classifier() method returns either 1 or 0 and takes one row of the 2D array as input.
I want to compare the classifier result to column 13 of the 2D array
In an ideal case the classifier would always return the same value as specified in that column.
Now I try to calculate the total hit rate of the classifier by adding up the successes and dividing by the total number of results.
So far I have worked out an enumerate for loop to hand over rows to the classifier in sequence.
def accuracy(D, classifier):
    for i, item in enumerate(D):
        if classifier(item)==D[i,13]
            #Compare result of classifier with actual value
            x+=1 #Increase x on a hit
    acc=(x/D.length)
    #Divide x by length of D to calculate hitrate eg. "0.5"; 100% would be "1"
    return acc
There is probably a simple formatting error somewhere or I have an error in my logic.
(Am 2 Days into Python now)
I think I might not be doing the if compare correctly.
Assuming both D and classifier are defined, there are some errors in your code, which should all give reasonable error messages (apart from the float casting; that one can be tricky with Python).
You're missing a : at the end of the if-statement, and you're trying to access the array D like D[i, 13], which isn't allowed for nested lists. 2D arrays built from lists are accessed with another set of [], like D[i][13]. However, since you're already enumerating the 2D array, you may as well use item[13] to get the value. You also never initialize x, so x += 1 raises a NameError on the first hit; it needs x = 0 before the loop.
Lastly, if you want a decimal value at the end you'll also need to cast at least one of the values to a float, like float(x)/len(D) (note that plain Python lists have no .length attribute, so len() is used), otherwise the integer division will just round the result down to 0 or 1.
Fixed code:
def accuracy(D, classifier):
    x = 0 # hit counter; must be initialized before the loop
    for i, item in enumerate(D):
        if classifier(item) == D[i][13]:
        # if classifier(item) == item[13]: # This should also work, you can use either.
            x += 1 # Increase x on a hit
    acc = float(x)/len(D)
    # Divide x by the length of D to calculate the hit rate, e.g. "0.5"; 100% would be "1"
    return acc
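As a side note, if D is a NumPy array the whole loop can be replaced by a single vectorized line (a sketch, assuming classifier still works on one row at a time):

import numpy as np

# fraction of rows where the classifier output matches column 13
acc = np.mean([classifier(row) == row[13] for row in D])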
Background
I am trying to code Fisher's exact test (see: wiki), specifically for 2 x 2 contingency tables (matrices). But I am stuck on one particular step: generating alternative matrices given an observed matrix of non-negative integers, where the alternative matrices' row and column sums must be equal to those of the original matrix. This page (Wolfram) has a description of all the steps, but below I will elaborate on the bit I am stuck on.
Question
In order to implement Fisher's exact test for 2 x 2 contingency tables I am given a 2 x 2 matrix whose elements are non-negative integers representing observations, the observed matrix.
One of the steps requires me to generate all combinations of 2 x 2 matrices, the alternative matrices, whose non-negative integer elements are restricted by the following conditions:
The dimensions of all alternative matrices are 2 x 2, i.e. equal to the observed matrix.
The sum of each row of the alternative matrices must be equal to the corresponding sum of each row of the observed matrix, i.e. the sum of row 2 in the observed matrix == the sum of row 2 in each of the alternative matrices.
The sum of each column of the alternative matrices must be equal to the corresponding sum of each column of the observed matrix.
To me the most obvious way to generate alternative matrices is to brute force all possible combinations of numbers in a 2 x 2 matrix, whose values are less than or equal to the sums of rows/columns of the observed matrix. Then iterate through these combinations filtering out combinations that fail to satisfy the conditions above.
Edited: What is the fastest algorithm to generate all combinations of elements in a 2x2 matrix (alternative matrices), with row and column sums equal to those of the observed matrix?
Original: How can we implement this in either of the following languages: R, Python, C/C++, Matlab?
Example
For an example application of the 2×2 test, let X be a journal, say either Mathematics Magazine or Science, and let Y be the number of articles on the topics of mathematics and biology appearing in a given issue of one of these journals. If Mathematics Magazine has five articles on math and one on biology, and Science has none on math and four on biology, then the relevant matrix would be:

    [5 0]
    [1 4]

and all possible alternative matrices (with the same row sums 5, 5 and column sums 6, 4) will then be:

    [1 4]   [2 3]   [3 2]   [4 1]
    [5 0]   [4 1]   [3 2]   [2 3]
Related posts
Bootstrapping 4x4 Matrix with Fixed Row and Column Sums
reconstruct a matrix from its row and column sums
Randomize matrix elements between two values while keeping row and column sums fixed (MATLAB)
I have an answer using sympy. The idea is the same: solve the linear system of equations you get from requiring the sums of the matrix elements to equal the row and column sums. This is hardcoded in M; s is basically your matrix. linsolve gives you infinitely many solutions, and the rest restricts them to non-negative integers.
from sympy import *
from sympy.solvers.solveset import linsolve
from sympy.sets.fancysets import Naturals0
from sympy.solvers.inequalities import reduce_inequalities
M = Matrix([[1,1,0,0],[0,0,1,1],[1,0,1,0],[0,1,0,1]])
s = Matrix([5,5,6,4])
a,b,c,d = symbols('a, b, c, d')
solution = linsolve((M,s), [a,b,c,d])
solution_eq = [x >= 0 for x in list(list(solution)[0])]
possible_values = reduce_inequalities(solution_eq, x.free_symbols)
for d_fixed in Intersection(possible_values.as_set(), Naturals0()):
    print solution.subs({d : d_fixed})
This is actually quite easy. You simply need to select, out of all possible combinations, the ones that match the conditions.
Here is a solution in python:
# [[i, j],
#  [k, l]]
def findAlternativeMatrices(c):
    # arg c = contingency matrix
    # this only works for integers
    alt = []
    # no single value inside an alternative matrix
    # can be bigger than the largest row/column sum,
    # so we iterate over range(N+1) to include N itself
    N = max([c[0][0]+c[1][0], c[0][1]+c[1][1], c[0][0]+c[0][1], c[1][0]+c[1][1]])
    # loop over all matrix entries
    for i in range(N+1):
        for j in range(N+1):
            for k in range(N+1):
                for l in range(N+1):
                    # check if the respective sums are equal
                    if ( (i+k == c[0][0]+c[1][0])
                         and (j+l == c[0][1]+c[1][1])
                         and (i+j == c[0][0]+c[0][1])
                         and (k+l == c[1][0]+c[1][1]) ):
                        if [[i,j],[k,l]] != c:
                            # append the matrix
                            # if it isn't the given contingency matrix
                            alt.append([[i,j],[k,l]])
    return alt

c = [[5,0],[1,4]]
alt = findAlternativeMatrices(c)
for a in alt:
    print a
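Since the row and column sums force the remaining three entries once the top-left value is chosen, the four nested loops can be reduced to a single one (a sketch of the same idea; names are illustrative):

def alternative_matrices(c):
    r0 = c[0][0] + c[0][1]          # first row sum
    c0 = c[0][0] + c[1][0]          # first column sum
    total = r0 + c[1][0] + c[1][1]  # grand total
    alt = []
    for a in range(min(r0, c0) + 1):
        b = r0 - a          # forced by the first row sum
        k = c0 - a          # forced by the first column sum
        d = total - r0 - k  # forced by the grand total
        if d >= 0 and [[a, b], [k, d]] != c:
            alt.append([[a, b], [k, d]])
    return alt

print alternative_matrices([[5, 0], [1, 4]])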
Hi, I have written a script that randomly shuffles read sequences over the gene they were mapped to.
This is useful if you want to determine if a peak that you observe over your gene of interest is statistically significant. I use this code to calculate False Discovery Rates for peaks in my gene of interest.
Below the code:
import numpy as np
import matplotlib.pyplot as plt
iterations = 1000 # number of times a read needs to be shuffled
featurelength = 1000 # length of the gene
a = np.zeros((iterations,featurelength)) # create a matrix with 1000 rows of the feature length
b = np.arange(iterations) # an array of row indices (0-999), one per iteration
reads = np.random.randint(10,50,1000) # a random dataset containing an array of DNA read lengths
Below the code to fill the large matrix (a):
for i in reads: # for a read with read length i
    r = np.random.randint(-i,featurelength-1,iterations) # generate random read start positions for the read i
    for j in b: # for each row in a:
        pos = r[j] # get the first random start position for that row
        if pos < 0: # start position can be negative because a read does not have to completely overlap with the feature
            a[j][:pos+i]+=1
        else:
            a[j][pos:pos+i]+=1 # add the read to the array and repeat
Then generate a heat map to see if the distribution is roughly even:
plt.imshow(a)
plt.show()
This generates the desired result but it is very slow because of the many for loops.
I tried to do fancy numpy indexing but I constantly get the "too many indices" error.
Anybody have a better idea of how to do this?
Fancy indexing is a bit tricky, but still possible:
for i in reads:
    r = np.random.randint(-i,featurelength-1,iterations)
    idx = np.clip(np.arange(i)[:,None]+r, 0, featurelength-1)
    a[b,idx] += 1
To deconstruct this a bit, we're:
Creating a simple index array as a column vector, from 0 to i: np.arange(i)[:,None]
Adding each element from r (a row vector), which broadcasts to make a matrix of size (i,iterations) with the correct offsets into the columns of a.
Clamping the indices to the range [0,featurelength), via np.clip.
Finally, we fancy-index a for each row (b) and the relevant columns (idx).
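To make the index construction concrete, here is a tiny example with assumed values (i = 3, featurelength = 10, four iterations):

import numpy as np

i, featurelength = 3, 10
r = np.array([-2, 0, 5, 8])  # one start position per row of a
idx = np.clip(np.arange(i)[:, None] + r, 0, featurelength - 1)
print(idx)
# [[0 0 5 8]
#  [0 1 6 9]
#  [0 2 7 9]]

Each column j lists the (clipped) positions covered by a read of length 3 starting at r[j].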
I am currently trying to vectorize some code I had written using a large for loop in Python. The vectorized code is as follows:
rho[pi,pj] += (rho_coeff*dt)*i_frac*j_frac
rho[pi+1,pj] += (rho_coeff*dt)*ip1_frac*j_frac
rho[pi,pj+1] += (rho_coeff*dt)*i_frac*jp1_frac
rho[pi+1,pj+1] += (rho_coeff*dt)*ip1_frac*jp1_frac
Each of pi, pj, dt, i_frac, j_frac, ip1_frac, jp1_frac is a numpy array of one dimension and all of the same length. rho is a two dimensional numpy array. pi and pj make up a list of coordinates (pi,pj) which indicate which element of the matrix rho is modified. The modification involves the addition of the (rho_coeff*dt)*i_frac*j_frac term to the (pi,pj) element as well as addition of similar terms to neighbouring elements: (pi+1,pj), (pi,pj+1) and (pi+1,pj+1). Each coordinate in the list (pi, pj) has a unique dt, i_frac, j_frac, ip1_frac and jp1_frac associated with it.
The problem is that the list can have (and always will have) repeating coordinates. So instead of successively adding to rho each time the same coordinate is encountered in the list, only the term corresponding to the last occurrence of that coordinate is added. This problem is briefly described, with an example, in the Tentative NumPy Tutorial under fancy indexing with arrays of indices (see the last three examples before boolean indexing). Unfortunately, it does not provide a solution.
Is there a way of doing this operation without resorting to a for loop? I am trying to optimize for performance and want to do away with a loop if possible.
FYI: this code forms part of a 2D particle tracking algorithm where the charge from each particle is added to the four adjacent nodes of a mesh surrounding the particle's position based on volume fractions.
You are going to have to figure out the repeated items and add them together before updating your array. The following code shows a way of doing that for your first update:
rows, cols = 100, 100
items = 1000
rho = np.zeros((rows, cols))
rho_coeff, dt, i_frac, j_frac = np.random.rand(4, items)
pi = np.random.randint(1, rows-1, size=(items,))
pj = np.random.randint(1, cols-1, size=(items,))
# The following code assumes pi and pj have the same dtype
pij = np.column_stack((pi, pj)).view((np.void,
2*pi.dtype.itemsize)).ravel()
unique_coords, indices = np.unique(pij, return_inverse=True)
unique_coords = unique_coords.view(pi.dtype).reshape(-1, 2)
data = rho_coeff*dt*i_frac*j_frac
binned_data = np.bincount(indices, weights=data)
rho[tuple(unique_coords.T)] += binned_data
I think you can reuse all of the unique coordinate finding above for the other updates, so the following would work:
ip1_frac, jp1_frac = np.random.rand(2, items)
unique_coords[:, 0] += 1
data = rho_coeff*dt*ip1_frac*j_frac
binned_data = np.bincount(indices, weights=data)
rho[tuple(unique_coords.T)] += binned_data
unique_coords[:, 1] += 1
data = rho_coeff*dt*ip1_frac*jp1_frac
binned_data = np.bincount(indices, weights=data)
rho[tuple(unique_coords.T)] += binned_data
unique_coords[:, 0] -= 1
data = rho_coeff*dt*i_frac*jp1_frac
binned_data = np.bincount(indices, weights=data)
rho[tuple(unique_coords.T)] += binned_data
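For reference, newer NumPy versions (1.8+) ship np.add.at, an unbuffered equivalent of += that does accumulate over repeated indices, which turns each of the four updates into a single call (a sketch reusing the arrays defined above):

import numpy as np

np.add.at(rho, (pi, pj), rho_coeff * dt * i_frac * j_frac)
np.add.at(rho, (pi + 1, pj), rho_coeff * dt * ip1_frac * j_frac)
np.add.at(rho, (pi, pj + 1), rho_coeff * dt * i_frac * jp1_frac)
np.add.at(rho, (pi + 1, pj + 1), rho_coeff * dt * ip1_frac * jp1_frac)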