I want to find all the combinations of an 18 x 9 binary matrix (ones and zeros) where each row sums to 5 and each column sums to 10.
Also, each block of 3 consecutive rows must have a 1 in each column.
The total number of combinations for that grid size is... well, far too many to iterate over:
2^(18 × 9) = 2^162 = 5,846,006,549,323,611,672,814,739,330,865,132,078,623,730,171,904
Although there are only 9!/(5!4!) = 126 combinations for a single row to sum to 5, with 18 rows that is still 126^18 = 64,072,225,938,746,379,480,587,511,979,135,205,376.
However, each block must have at least a 1 in each column, which should limit the number of combinations.
I wonder if I can break it down into block combinations, so it's potentially 6 blocks of 9 columns... which is then only 18,014,398,509,481,984 (obviously that doesn't factor in the work to enumerate the blocks themselves first).
I figure numpy has the power for this, but I can't work it out.
I have done a couple of examples in Excel by hand.
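As a starting point, the 126 row patterns can be enumerated directly and a backtracking search can prune on the column sums. A minimal sketch of that idea (my own rough attempt: it ignores the block condition and is still far too slow to finish without memoization or symmetry breaking, so it only illustrates the pruning):

import numpy as np
from itertools import combinations

ROWS, COLS, RSUM, CSUM = 18, 9, 5, 10

# the 126 possible rows: every way to place five 1s in nine columns
patterns = np.zeros((126, COLS), dtype=np.int64)
for i, ones in enumerate(combinations(range(COLS), RSUM)):
    patterns[i, list(ones)] = 1

def count(depth=0, colsum=np.zeros(COLS, dtype=np.int64)):
    if depth == ROWS:
        return 1  # pruning guarantees every column sums to exactly 10 here
    left = ROWS - depth - 1  # rows still to place after this one
    total = 0
    for p in patterns:
        s = colsum + p
        # prune: a column may never exceed 10, and must still be able to
        # reach 10 since each later row adds at most 1 per column
        if (s > CSUM).any() or (s + left < CSUM).any():
            continue
        total += count(depth + 1, s)
    return total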
I would like to ask a question about the numpy array below.
I have a dataset with 50 rows and 15 columns, and I created a numpy array from it.
I want to compare rows with each other (excluding the row itself), then find the number of rows that satisfy the following condition:
there is no other row that
- has both values smaller
- or, where one value is equal, has the other value smaller
Money Weight
10 80
20 70
30 90
25 50
35 10
40 60
50 10
For instance, row 1: there is no other row where both columns are smaller, and wherever another row is smaller on one column, row 1 is smaller on the other. It satisfies the condition.
Row 3: there is no other row where both columns are smaller; it is equal to row 6 on the weight column but smaller on the money column. It satisfies the condition.
Row 6: there is no other row where both columns are smaller; it is equal to row 3 on the weight column, but its money value is greater. It does not satisfy the condition.
The following code works perfectly for me to find such rows in the given table:
# keep a row only if no other row is elementwise <= it (besides itself)
mask = (arr <= arr[:, None]).all(2).sum(1) < 2
res = df[mask]
print(res)
But I also need to compare the table with a row from outside the table and get back True or False.
For instance:
compare([40, 10], table) = False
compare([10, 70], table) = True
I have tried a bunch of ways but could not find a proper solution.
I appreciate any suggestions!
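One way to extend the mask logic to a row from outside the table: the row fails when some table row is <= it in every column and strictly smaller in at least one. A minimal sketch (the function name compare and the strict-inequality handling are my assumptions, but it reproduces both examples above):

import numpy as np

def compare(row, table):
    row, table = np.asarray(row), np.asarray(table)
    weakly_le = (table <= row).all(axis=1)  # <= in every column
    strictly = (table < row).any(axis=1)    # < in at least one column
    return not np.any(weakly_le & strictly)

table = np.array([[10, 80], [20, 70], [30, 90], [25, 50],
                  [35, 10], [40, 60], [50, 10]])
print(compare([40, 10], table))  # False: (35, 10) dominates it
print(compare([10, 70], table))  # True: no row dominates it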
I am looking to take my 1-column, 500-row data and make sets of 5, so 100 sets of 5. Or compare a series of 5 numbers against the data 5 numbers at a time (chunks of 5).
I want to see how many times, if any, a sequence shows up in the data, or fragments of the sequence and how often.
The numbers are from 1-69, in sets of 5, but my data is all one long column with no spaces.
I eventually want to make a list of the sets that contain 2 or more matching numbers, but one step at a time, right?
Or would a 100x5 matrix work? I don't know how to do that either.
Thank you for your time.
You can reshape a numpy array and create a dataframe from it. Below are 100x5 and 5x100 examples:
import random
import pandas as pd

# 500 random draws from 1-69 as a single-column dataframe
df = pd.DataFrame({'data': [random.randrange(1, 70) for _ in range(500)]})

# reshape the column into 100 rows of 5, or 5 rows of 100
pd.DataFrame(df['data'].to_numpy().reshape((100, 5)))
pd.DataFrame(df['data'].to_numpy().reshape((5, 100)))
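From there, one hypothetical way to tackle the matching step: count how many numbers each chunk of 5 shares with a target set using np.isin (the target set below is made up, and chunks reuses the df built above):

import numpy as np

chunks = df['data'].to_numpy().reshape((100, 5))  # the 100x5 view from above
target = [5, 12, 23, 41, 69]                      # made-up 5-number set

overlap = np.isin(chunks, target).sum(axis=1)     # matches per chunk
hits = np.flatnonzero(overlap >= 2)               # chunks sharing 2+ numbers
print(len(hits), overlap[hits])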
I want to create a binary 16 x 15 matrix with certain conditions. I use binary strings to make the matrix. I want my matrix to be as described:
- The first two and the last two elements of each row must alternate.
- The sum of each row must be 8 or 7.
- In each row there should be no consecutive 1s or 0s (one pair, 00 or 11, is allowed per row).
- The sum of each column must be 8.
There are 26 possible strings that fulfill the first 3 conditions. How can I fulfill the last condition?
I have code, but it is not working because it takes so much time that it is almost impossible. Is there another way?
I don't think you need any extra search to fulfill the last condition. A column sum of 8 is just half of 16. You can simply copy the first 8 rows into the last 8 rows and invert all the 0s and 1s; then every column sums to 8 and the first three conditions are still met.
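A minimal sketch of that trick, assuming the three row conditions mean: the first two and the last two entries alternate, the row sum is 7 or 8, and at most one adjacent equal pair per row (a brute-force filter over the 2^15 strings does reproduce the 26 mentioned):

import itertools
import numpy as np

valid_rows = []
for bits in itertools.product((0, 1), repeat=15):
    if bits[0] == bits[1] or bits[-1] == bits[-2]:
        continue                       # ends must alternate
    if not 7 <= sum(bits) <= 8:
        continue                       # row sum must be 7 or 8
    if sum(a == b for a, b in zip(bits, bits[1:])) > 1:
        continue                       # at most one 00 or 11 pair
    valid_rows.append(bits)
assert len(valid_rows) == 26

# pick any 8 valid rows; their complements are also valid rows,
# and stacking both halves forces every column sum to 8
rng = np.random.default_rng()
top = np.array([valid_rows[i] for i in rng.choice(26, size=8, replace=False)])
matrix = np.vstack([top, 1 - top])
assert (matrix.sum(axis=0) == 8).all()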
I have a 2D array with:
an index,
a numerical value.
When I sum this 2D array I get an amount (let's say "a").
I am provided with another amount (let's say "b"; a ≠ b, and b is the target), and the granularity isn't fine enough to segregate one row from another.
The idea here is to try to find all the rows that compose b and discard the others.
What I am trying to do is build a script that (randomly?) selects rows and sums them until it approaches the target sum (reducing the distance).
I was first thinking about starting at a random point and from there trying each combination of rows, and keep adding them up until
-> I have something close enough
or
-> the set number of iterations is reached (1 million?)
...but with the number of rows involved this won't fit in memory.
Any ideas?
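One low-memory possibility is a randomized local search: keep a boolean mask over the rows and flip a single row in or out of the subset whenever that moves the running sum closer to the target. A hedged sketch on made-up stand-in data (it is greedy, so it can stall in a local optimum; random restarts help):

import numpy as np

rng = np.random.default_rng(0)
values = rng.integers(1, 1_000, size=100_000).astype(float)  # stand-in data
target = 0.3 * values.sum()                                  # made-up target

mask = rng.random(values.size) < 0.5  # random starting subset
current = values[mask].sum()
for _ in range(1_000_000):            # the iteration budget from the question
    i = rng.integers(values.size)
    delta = -values[i] if mask[i] else values[i]
    if abs(current + delta - target) <= abs(current - target):
        mask[i] = not mask[i]         # flip row i in/out of the subset
        current += delta
    if abs(current - target) < 1e-6:
        break
print(current, target)  # rows where mask is True compose the found sum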
I have a Numpy array that is created as follows
import numpy as np
data = np.zeros(500, dtype='float32, (50000,2)float32')
This array is filled with values that I acquire from some measurements, and is supposed to reflect that at each time point (room for 500 time points) we can acquire 50,000 x- and y-coords.
Later in my code I use a bisect-like search, for which I need to know how many x-coords (measurement points) are actually in my array. I originally did this with np.count_nonzero(data), which yielded the following problem:
Fake data:
1 1
2 2
3 0
4 4
5 0
6 6
7 7
8 8
9 9
10 10
The non-zero count returns 18 values here; the code then goes into the bisect-like search using data[time][1][0][0] as the min x-coord and data[time][1][np.count_nonzero(data)][0] as the max x-coord, which results in the search stopping at 9 instead of 10.
I could use a while loop to manually count the non-zero values in the x-coord column, but that would be silly; I assume there is some built-in numpy functionality for this. My question is then what built-in functionality, or what modification of my np.count_nonzero(data) call, I need, since the documentation doesn't offer much information in that regard (link to numpy doc).
-- Simplified question --
Can I use numpy functionality to count the non-zero values of a single column only? (i.e. between data[time][1][0][0] and data[time][1][max][0])
Maybe a better approach would be to filter the array using nonzero and iterate over the result:
nonZeroData = data[time][1][np.nonzero(data[time][1])]
To count the non-zero values from the second column only:
nonZeroYCount = np.count_nonzero(data[time][1][:, 1])
If I understand you correctly, to select elements from data[time][1][0][0] to data[time][1][max][0]:
data[time][1][:max+1,0]
EDIT:
To count all non-zero for every time:
(data["f1"][:,:,0] != 0).sum(1)
Why not consider using data != 0 to get the boolean matrix?
You can use:
stat = (data != 0).sum()
to count the non-zero entries.
I am not sure what shape your data array has, but I hope you can see what I mean. :)