I want to create a binary 16*15 matrix (16 rows, 15 columns) with certain conditions, using binary strings to build the rows. The matrix must satisfy:
- The first two and the last two elements of each row must alternate.
- The sum of each row must be 7 or 8.
- Within a row there must be no consecutive 1s or 0s, except that one pair (00 or 11) is allowed per row.
- The sum of each column must be 8.
There are 26 possible strings that fulfill the first three conditions. How can I fulfill the last condition?
I have code, but it isn't workable because it takes far too much time; it is practically impossible. Is there any other way?
I don't think you need any extra constraint to fulfill the last condition. The column target is 8, which is exactly half of the 16 rows. Simply copy the first 8 rows into the last 8 rows with every 0 and 1 inverted: each column then sums to 8, and the first three conditions are still met, since inverting a row preserves the alternation pattern and turns a row sum of 8 into 7 (and vice versa).
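A minimal sketch of that idea in Python (the row length 15 and the three row conditions come straight from the question; the use of random.sample and numpy is just for illustration):

import random
from itertools import product
import numpy as np

def valid_row(row):
    # Condition 1: the first two and the last two elements alternate.
    if row[0] == row[1] or row[-1] == row[-2]:
        return False
    # Condition 2: the row sum is 7 or 8.
    if sum(row) not in (7, 8):
        return False
    # Condition 3: at most one adjacent equal pair (00 or 11) per row.
    return sum(row[i] == row[i + 1] for i in range(14)) <= 1

rows = [r for r in product((0, 1), repeat=15) if valid_row(r)]
print(len(rows))  # 26, matching the count in the question

# Pick any 8 of the 26 valid rows and append their complements: flipping
# 0<->1 preserves all three row conditions (a row sum of 8 becomes 7 and
# vice versa), and each column then sums to exactly 8 across the 16 rows.
half = random.sample(rows, 8)
matrix = np.array(half + [tuple(1 - x for x in r) for r in half])
print(matrix.sum(axis=0))  # all 8s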
Background: I have a numpy array of float entries. This is basically a set of observations of something; suppose temperature measured during 24 hours. Imagine that the person recording the temperature is not available for the entire day; instead, he/she takes a few (say 5) readings during one hour and, a few hours later, takes readings again (say 8 times). He/she puts all the measurements in a single np.array and has handed it over to me!
Problem: I have no idea when the readings were taken. So I decide to cluster the observations in the following way: maybe first recognize local peaks in the array, then group together all entries that are close enough (within a chosen tolerance, say 1 deg); that is, I want to split the array into a list of sub-arrays. Note that any entry should belong to exactly one group.
One possible approach: First sort the array, then split it into sub-arrays under two conditions: (1) the difference between the first and last entries of a sub-array is not more than 1 deg; (2) the difference between the last entry of a sub-array and the first entry of the next sub-array is greater than 1 deg. How can I achieve this fast (the numpy way)?
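One possible numpy sketch for the splitting step, assuming a made-up temps array: it enforces condition (2) directly via np.diff/np.split, and a group whose total spread still exceeds the tolerance would need a second pass for condition (1):

import numpy as np

temps = np.array([20.1, 20.5, 22.3, 19.9, 22.8, 25.0, 24.6])  # made-up readings
s = np.sort(temps)
tol = 1.0  # tolerance of 1 deg
# Split wherever the gap between consecutive sorted values exceeds tol,
# so adjacent groups end up more than tol apart (condition 2).
breaks = np.where(np.diff(s) > tol)[0] + 1
groups = np.split(s, breaks)
print(groups)  # [array([19.9, 20.1, 20.5]), array([22.3, 22.8]), array([24.6, 25. ])]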
I am looking to take my 1-column, 500-row data and make it into sets of 5, so 100 sets of 5. Or compare a series of 5 numbers to 5 numbers at a time (chunks of 5) in the data.
I want to see how many times, if any, a sequence shows up in the data, or fragments of the sequence and how often.
The numbers are from 1-69, in sets of 5, but my data is all one long column with no spaces.
I eventually want to make a list of the sets that contain 2 or more matching numbers, but one step at a time, right?
Or would a 100x5 matrix work? I don't know how to do that either.
Thank you for your time.
You can reshape a numpy array and create a DataFrame from it. Below are 100x5 and 5x100 examples:

import random
import pandas as pd

df = pd.DataFrame({'data': [random.randrange(1, 70) for _ in range(500)]})
pd.DataFrame(df['data'].to_numpy().reshape((100, 5)))
pd.DataFrame(df['data'].to_numpy().reshape((5, 100)))
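For the later goal of finding sets that share 2 or more numbers with a given sequence, a rough sketch building on the 100x5 reshape (the target sequence below is made up for illustration):

import random
import numpy as np

data = np.array([random.randrange(1, 70) for _ in range(500)]).reshape(100, 5)
target = np.array([5, 12, 23, 44, 61])  # hypothetical sequence to compare against

# For each row, count how many of the target's numbers appear in it,
# then keep the rows with 2 or more matches.
matches = np.isin(data, target).sum(axis=1)
hits = np.where(matches >= 2)[0]
print(len(hits), hits)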
I want to find all the combinations of a binary matrix (ones and zeros) of size 18 x 9 where each row sums to 5 and each column sums to 10.
Also, each block must have a 1 in each column.
The total number of combinations for that grid size is... well, far too many to iterate over:
2^(18 x 9) = 2^162 = 5,846,006,549,323,611,672,814,739,330,865,132,078,623,730,171,904 combinations
Although there are only 9!/(5!4!) = 126 row patterns that sum to 5, with 18 rows that is still a lot: 126^18 = 64,072,225,938,746,379,480,587,511,979,135,205,376.
However, each block must have at least a 1 in each column, which must limit the number of combinations.
I wonder if I can break it down into block combinations, so it's potentially 6 blocks of 9 columns... which is then only 18,014,398,509,481,984 (obviously this doesn't factor in the work needed to work out the blocks first).
I figure numpy has the power to do this, but I can't work it out.
I have done a couple of examples in Excel by hand.
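For what it's worth, a hedged backtracking sketch that builds on the 126-row observation: it enumerates the valid row patterns once and prunes on running column sums. The names and the demo of taking just the first solution are mine; exhausting every solution may still be infeasible.

import numpy as np
from itertools import combinations

N_ROWS, N_COLS, ROW_SUM, COL_SUM = 18, 9, 5, 10

# The 126 = 9!/(5!4!) row patterns with exactly five 1s in nine columns.
rows = []
for ones in combinations(range(N_COLS), ROW_SUM):
    r = np.zeros(N_COLS, dtype=int)
    r[list(ones)] = 1
    rows.append(r)

def search(chosen, col_totals):
    if len(chosen) == N_ROWS:
        # Each column is capped at COL_SUM and the grand total must be
        # 18*5 = 9*10, so every column sum is exactly COL_SUM here.
        yield np.array(chosen)
        return
    left = N_ROWS - len(chosen) - 1  # rows remaining after this one
    for r in rows:
        new = col_totals + r
        # Prune: no column may exceed COL_SUM, and every column must
        # still be able to reach COL_SUM with the rows that remain.
        if np.any(new > COL_SUM) or np.any(new + left < COL_SUM):
            continue
        yield from search(chosen + [r], new)

first = next(search([], np.zeros(N_COLS, dtype=int)))
print(first.sum(axis=1), first.sum(axis=0))  # rows all 5, columns all 10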
I have 2 arrays of the same size. The first one represents time and the second one represents distance, and I want to group by the first one so that each group contains only values with the same integer part (the floats between two integers).
Here's my original time array:
time=[0.2,0.4,0.6,0.8,1,1.2,1.4,1.6,1.8,1.9]
And here's my distance array:
distance=[1,2,4,5.5,7.8,9.6,10,11,11.6,11.9]
So after grouping the time array by integer part I get this:
time=[0.2,0.4,0.6,0.8],[1,1.2,1.4,1.6,1.8,1.9]
The first subgroup contains 4 elements and the second contains 6 elements,
so the distance groups should contain 4 and then 6 elements accordingly,
like this:
distance=[1,2,4,5.5],[7.8,9.6,10,11,11.6,11.9]
so that the size of each time group matches the size of the corresponding distance group.
Any ideas or help?
The following code does what you want:
time = [0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 1.9]
distance = [1, 2, 4, 5.5, 7.8, 9.6, 10, 11, 11.6, 11.9]

# Bucket the times by integer part, keeping first-seen order
# (iterating over a set of keys would not guarantee the right order).
timesplit = {}
for t in time:
    timesplit.setdefault(int(t), []).append(t)

# Slice the distance list into chunks matching each time group's length.
distances = []
for group in timesplit.values():
    k = len(group)
    distances.append(distance[:k])
    distance = distance[k:]

res_times = list(timesplit.values())
res_distances = distances
print(res_times)
print(res_distances)
Output:
[[0.2, 0.4, 0.6, 0.8], [1, 1.2, 1.4, 1.6, 1.8, 1.9]]
[[1, 2, 4, 5.5], [7.8, 9.6, 10, 11, 11.6, 11.9]]
I have a 2D array with:
- an index,
- a numerical value.
When I sum this 2D array I get an amount (let's say "a").
I am provided with another amount (let's say "b"; a != b, and b is the target), and the granularity isn't fine enough to segregate one row from another.
The idea here is to find all the rows that compose b and discard the others.
What I am trying to do is build a script that (randomly?) selects rows and sums them until the total approaches the targeted sum (reducing the distance).
I was first thinking about starting at a random point and, from there, trying to sum each combination of rows, adding them up until
-> I have something close enough,
or
-> the set number of iterations is over (1 million?)
... but with the number of rows involved this won't fit in memory.
Any ideas?
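One memory-light alternative (a sketch, not a definitive method): treat this as an approximate subset-sum and run a randomized hill climb over an inclusion mask, flipping one row in or out per step. The values and target below are made up, and only O(n) state is kept.

import random

# Made-up stand-ins: 'values' is the numeric column of the 2D array, 'target' is b.
values = [random.uniform(1, 100) for _ in range(10_000)]
target = 0.4 * sum(values)

def closest_subset(values, target, iters=1_000_000, tol=1e-6):
    # Hill climbing over an inclusion mask: flip one row in/out per step and
    # keep the flip only if it does not move the running sum away from the target.
    mask = [random.random() < 0.5 for _ in values]
    total = sum(v for v, m in zip(values, mask) if m)
    best_mask, best_err = mask[:], abs(total - target)
    for _ in range(iters):
        i = random.randrange(len(values))
        delta = -values[i] if mask[i] else values[i]
        if abs(total + delta - target) <= abs(total - target):
            mask[i] = not mask[i]
            total += delta
            if abs(total - target) < best_err:
                best_mask, best_err = mask[:], abs(total - target)
                if best_err <= tol:
                    break
    return best_mask, best_err

mask, err = closest_subset(values, target)
print(err)  # distance between the selected rows' sum and the target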