I have a task to assign integers from a text file to an array in Python.
I tried reading the file line by line and splitting, but neither approach worked.
The task goes like this: we have an array
1 4 5 7 3 2 8 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 4 0 0 0 0 0
0 0 0 0 0 0 0 0 0 3 0 0 0 0
0 0 0 0 0 0 0 5 0 0 6 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 2 9 0
0 0 0 0 0 0 0 0 0 0 10 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 11
0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0 0 0 0 8
0 0 0 0 0 0 0 0 0 0 0 0 0 9
0 0 0 0 0 0 0 0 0 0 0 0 0 14
0 0 0 0 0 0 0 0 0 0 0 0 0 5
0 0 0 0 0 0 0 0 0 0 0 0 0 0
and this needs to be assigned to an array x in order to use it in further functions.
Do something like:
with open('my_raw_file.txt', 'r') as file:
    all_file = file.read().strip()  # read the whole file and strip any trailing newline
all_file_list = all_file.split('\n')  # make a list of lines
final_data = [[int(each_int) for each_int in line.split()] for line in all_file_list]  # list of lists, each value converted to int
print(final_data)
If you don't mind NumPy arrays and pandas:
import pandas as pd
integers = pd.read_csv('test.txt', sep=" ", header=None)
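If the end goal is a plain NumPy array rather than a DataFrame, NumPy can read the whitespace-separated integers directly; a minimal sketch, assuming the same file name as above:
import numpy as np

x = np.loadtxt('my_raw_file.txt', dtype=int)  # 2D array of ints, one row per line of the file
print(x.shape)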
I'm trying to write code to simulate the spread of something via a 2D list with an n x n structure. My issue is this: when I create a temp copy of my original list via temp = [*board], board[:], etc., updates end up in both lists, and instead of returning
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
it returns
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
my code is here:
def spread(board, iterations, size):
    temp = board[:]
    for iteration in range(iterations):
        for x in range(size):
            for y in range(size):
                if board[x][y] == 1:
                    if x+1 < size:
                        temp[x+1][y] = 1
                    if x-1 >= 0:
                        temp[x-1][y] = 1
                    if y+1 < size:
                        temp[x][y+1] = 1
                    if y-1 >= 0:
                        temp[x][y-1] = 1
        board = temp[:]
    return board
and I called it via
new_board = spread(my_board, 1, 15)
This is programming 101. Remember that lists live on the heap, and variables hold references (pointers) to them.
So the variable board really just points to the place on the heap where that list is stored. When you write temp = board, you are creating a new reference that points to that same list. I suggest taking a look at this using Python Tutor: https://pythontutor.com/visualize.html#mode=edit
For example:
b = [1,2,3,4,5]
a = b
a[0] = 2
print(b)
will output
[2,2,3,4,5]
Try it out in python tutor and you'll see what's happening!
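The same sharing happens with board[:] on a nested list: the outer list is new, but the inner row lists are still the same objects; a minimal sketch:
board = [[0, 0], [0, 0]]
temp = board[:]        # new outer list, but both lists share the same row objects
temp[0][0] = 1
print(board)           # [[1, 0], [0, 0]] -- writing through temp changed board too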
To solve your problem, create a deep copy:
def deep_copy(board):
    temp = []
    for i in range(len(board)):
        row_copy = []
        for j in range(len(board[0])):
            row_copy.append(board[i][j])
        temp.append(row_copy)
    return temp
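Equivalently, the standard library's copy module can do this for you; a minimal sketch, assuming board is the nested list from the question:
import copy

temp = copy.deepcopy(board)   # the rows are copied too, so temp is fully independent of board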
I am trying to build a Viterbi table that holds the probability of a specific word occurring as a given part of speech, where the rows represent all possible parts of speech and the columns are the sequence of words in a sentence. I want to update each cell with the corresponding likelihood (TP * EP) that a word with a specific part of speech occurs at that position in the sentence, but I'm stuck on inserting these likelihoods into the lists held by the dictionary.
I'm using the following for insertion, but it does not work. Is there an alternative way to change a specific element of a list when the lists are values of a dictionary?
likelihood = EP * TP
q[possible_pos][j] = likelihood
When I use this, I still get a table of zeroes, even though I expect some cells in the table to be updated to float likelihood values.
for i in range(2):
    for j in range(len(word_list[i])):
        for pos in pos_list:
            q[pos] = [0 for x in range(len(word_list[i]))]
        try:
            word = word_list[i][j].split("\t")[0]
            pos = word_list[i][j].split("\t")[1]
            for possible_pos in word_dict[word].keys():
                try:
                    print(possible_pos)
                    if j == 0:
                        TP = Transition['Begin_Sent'][possible_pos]
                        EP = pos_dict[possible_pos][word]
                    elif word_list[i][j + 1] == ".\t.":
                        TP = Transition['End_Sent'][possible_pos]
                        EP = 1
                    else:
                        prev_pos = word_list[i][j - 1].split("\t")[1]
                        TP = Transition[prev_pos][possible_pos]
                        EP = pos_dict[possible_pos][word]
                    likelihood = EP * TP
                    q[possible_pos][j] = likelihood
                except KeyError:
                    pass
        except IndexError:
            pass

df = pd.DataFrame.from_dict(q, orient='index')
print(df)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
DT 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
NN 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
POS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
MD 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VBN 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
IN 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
JJ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
NNS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CC 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RBS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
NNP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VBZ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
TO 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CD 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VBG 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VBD 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PRP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PRP$ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
VBP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WRB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
EX 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RBR 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
JJR 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WDT 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PDT 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
UH 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
NNPS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
JJS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
SYM 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FW 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WP$ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
LS 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
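(For reference, assignment into a list stored as a dictionary value does work on its own; a quick sketch with a hypothetical two-tag, three-word q:)
q = {'NN': [0, 0, 0], 'VB': [0, 0, 0]}
q['NN'][1] = 0.42    # set the cell for tag 'NN', word position 1
print(q)             # {'NN': [0, 0.42, 0], 'VB': [0, 0, 0]}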
I have a CSV file consisting of various dates. How do I count the number of events per hour for each day? The output should have rows representing days and columns representing hours.
One approach would be to create the hourly binned event counts, extend the result to cover the entire month, and then use pivot_table to produce the desired row/column format. Assuming your DataFrame is called df, this would look like the following:
df['datetime'] = pd.to_datetime(df.REGIS_DATE + ' ' + df.REGIS_TIME, dayfirst=True)
index = pd.date_range('1/1/2011', periods=24*31, freq='H')
events = df.set_index('datetime').resample('H').count().REGIS_DATE.reindex(index, fill_value=0)
pd.DataFrame(events).pivot_table(index=events.index.date, columns=events.index.hour)
Example input df:
REGIS_DATE REGIS_TIME
0 20/01/2011 23:07:42
1 26/01/2011 12:14:35
2 11/01/2011 14:34:29
3 11/01/2011 17:53:17
4 29/01/2011 11:02:32
Corresponding output:
REGIS_DATE
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
2011-01-01 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-02 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-03 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-04 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-05 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-06 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-07 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-08 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-09 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0
2011-01-12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
2011-01-21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-22 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-25 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-26 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
2011-01-27 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-28 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-29 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2011-01-31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
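For anyone who wants to reproduce this end to end, the example input can be rebuilt directly as a DataFrame (a sketch using the column names from the question); running the snippet above on it yields the table shown:
import pandas as pd

df = pd.DataFrame({
    'REGIS_DATE': ['20/01/2011', '26/01/2011', '11/01/2011', '11/01/2011', '29/01/2011'],
    'REGIS_TIME': ['23:07:42', '12:14:35', '14:34:29', '17:53:17', '11:02:32'],
})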
I have this data:
time-stamp ccount A B C D E F G H I
2015-03-03T23:43:33+0000 0 0 0 0 0 0 0 0 0 0
2015-03-04T06:33:28+0000 0 0 0 0 0 0 0 0 0 0
2015-03-04T06:18:38+0000 0 0 0 0 0 0 0 0 0 0
2015-03-04T05:36:43+0000 0 0 0 1 0 0 0 0 0 0
2015-03-04T05:29:09+0000 0 0 0 1 0 0 0 0 1 0
2015-03-04T07:01:11+0000 0 0 1 0 1 0 0 0 0 0
2015-03-03T15:27:06+0000 19 0 1 0 1 0 0 0 0 0
2015-03-03T15:43:38+0000 10 0 1 0 1 1 0 0 0 0
2015-03-03T18:16:26+0000 0 0 0 1 0 0 0 0 0 0
2015-03-03T18:19:48+0000 0 0 0 0 0 0 0 0 0 0
2015-03-03T18:20:02+0000 4 0 0 0 0 1 0 0 0 0
2015-03-03T20:21:55+0000 2 0 0 0 0 0 1 0 0 0
2015-03-03T20:37:36+0000 0 0 0 0 0 0 0 0 0 0
2015-03-04T03:03:51+0000 1 0 0 0 0 0 1 0 0 0
2015-03-03T16:33:04+0000 9 0 0 0 0 0 0 0 0 0
2015-03-03T16:18:13+0000 1 0 0 0 0 0 0 0 0 0
2015-03-03T16:34:18+0000 4 0 0 0 0 0 0 0 0 0
2015-03-03T18:11:36+0000 5 0 0 0 0 0 0 0 0 0
2015-03-03T18:24:35+0000 0 0 0 0 0 0 0 0 0 0
I want to slice all rows which have at least a single one ("1") in the columns A to I.
For the above data, the output will be:
time-stamp ccount A B C D E F G H I
2015-03-04T05:36:43+0000 0 0 0 1 0 0 0 0 0 0
2015-03-04T05:29:09+0000 0 0 0 1 0 0 0 0 1 0
2015-03-04T07:01:11+0000 0 0 1 0 1 0 0 0 0 0
2015-03-03T15:27:06+0000 19 0 1 0 1 0 0 0 0 0
2015-03-03T15:43:38+0000 10 0 1 0 1 1 0 0 0 0
2015-03-03T18:16:26+0000 0 0 0 1 0 0 0 0 0 0
2015-03-03T18:20:02+0000 4 0 0 0 0 1 0 0 0 0
2015-03-03T20:21:55+0000 2 0 0 0 0 0 1 0 0 0
2015-03-04T03:03:51+0000 1 0 0 0 0 0 1 0 0 0
We have ignored all the rows which don't have a "1" in any of the columns from A to I.
You could use any and boolean indexing to select only the rows that have at least one entry equal to 1:
df[(df.loc[:,['A','B','C','D','E','F','G','H','I']] == 1).any(axis=1)]
Referring to columns by label is somewhat tedious if you have a lot of them, so you can use label slicing to make things a little neater:
df[(df.loc[:, 'A':'I'] == 1).any(axis=1)]
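A quick way to sanity-check this is a tiny made-up frame (column names as in the question, values invented):
import pandas as pd

toy = pd.DataFrame({'ccount': [0, 19], 'A': [0, 0], 'B': [0, 1], 'I': [0, 0]})
print(toy[(toy.loc[:, 'A':'I'] == 1).any(axis=1)])   # keeps only the second row, which has B == 1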
with open("a.txt", 'r') as a:
    for line in a:
        new = line.split()       # split on whitespace; this also drops the trailing newline
        if "1" in new[2:]:       # look only at columns A to I (skip time-stamp and ccount)
            print(line.rstrip())
OUTPUT:
2015-03-04T05:36:43+0000 0 0 0 1 0 0 0 0 0 0
2015-03-04T05:29:09+0000 0 0 0 1 0 0 0 0 1 0
2015-03-04T07:01:11+0000 0 0 1 0 1 0 0 0 0 0
2015-03-03T15:27:06+0000 19 0 1 0 1 0 0 0 0 0
2015-03-03T15:43:38+0000 10 0 1 0 1 1 0 0 0 0
2015-03-03T18:16:26+0000 0 0 0 1 0 0 0 0 0 0
2015-03-03T18:20:02+0000 4 0 0 0 0 1 0 0 0 0
2015-03-03T20:21:55+0000 2 0 0 0 0 0 1 0 0 0
2015-03-04T03:03:51+0000 1 0 0 0 0 0 1 0 0 0
Another solution, assuming that all the values in columns A to I are non-negative:
df[(df.drop(['time-stamp','ccount'], axis=1).sum(axis=1) > 0)]
Of course, the dropping part can be combined with the other solutions.
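For instance, one such combination might look like this (a sketch, assuming the same df as above):
# drop the non-indicator columns, then keep rows where any remaining value equals 1
df[(df.drop(['time-stamp', 'ccount'], axis=1) == 1).any(axis=1)]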
I'm using the Python interface to GLPK.
I'm looking for a solution X that minimizes c' * X subject to:
G * X <= h
A * X = b
I'm using the following statement
glpk.ilp(c,G,h,A,b,I=set(range(25)))
That's my G matrix:
-1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 0 0 0
That's my h vector:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.001
That's my A matrix
0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1
0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
That's my b vector:
21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0 21.0
Yet by executing the following:
sol=glpk.ilp(c,G,h,A,b,I=set(range(25)))
I get the following solution:
0 0 1 0 0 1 0 1 1 1 0 0 0 1 1 1 1 0 1 1 0 0 1 1 1
This is clearly wrong, since A * X != b.
I don't have any lead on how to troubleshoot this, and the documentation is scarce.
I copied your arrays to text files (names should be self-explanatory) and ran the following code without problems.
Perhaps converting your arrays to proper cvxopt matrices was the problem?
import numpy as np
import cvxopt
import cvxopt.glpk
c=cvxopt.matrix(np.loadtxt('c.txt'),tc='d')
A=cvxopt.matrix(np.loadtxt('A.txt'),tc='d')
b=cvxopt.matrix(np.loadtxt('b.txt'),tc='d')
G=cvxopt.matrix(np.loadtxt('G.txt'),tc='d')
h=cvxopt.matrix(np.loadtxt('h.txt'),tc='d')
sol=cvxopt.glpk.ilp(c,G,h,A,b,I=set(range(25)))
x=[i for i in sol[1]]
# x=[0 0 10 0 0 10 0 10 10 10 0 0 0 10 10 10 10 0 10 10 0 0 1 1 1]
eq=(np.dot(A,x)==b.T[0]).all()
# eq=True
As the comments show, the solution vector x is different from the one you reported.
In particular, the last line checks the equality A * x = b and it evaluates to True.
Hope that helps!
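One extra check worth making (assuming the same sol as above): cvxopt.glpk.ilp returns a (status, solution) tuple, and the status should be 'optimal' before trusting the solution vector:
status, x_vec = sol    # ilp returns (status, solution)
print(status)          # expect 'optimal' when the solver found an optimal integer solution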