I am confused about the following code:
import tensorflow as tf
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.framework import dtypes
'''
Randomly crop a tensor, then return the crop position
'''
def random_crop(value, size, seed=None, name=None):
    with ops.name_scope(name, "random_crop", [value, size]) as name:
        value = ops.convert_to_tensor(value, name="value")
        size = ops.convert_to_tensor(size, dtype=dtypes.int32, name="size")
        shape = array_ops.shape(value)
        check = control_flow_ops.Assert(
            math_ops.reduce_all(shape >= size),
            ["Need value.shape >= size, got ", shape, size],
            summarize=1000)
        shape = control_flow_ops.with_dependencies([check], shape)
        limit = shape - size + 1
        begin = tf.random_uniform(
            array_ops.shape(shape),
            dtype=size.dtype,
            maxval=size.dtype.max,
            seed=seed) % limit
        return tf.slice(value, begin=begin, size=size, name=name), begin
sess = tf.InteractiveSession()
size = [10]
a = tf.constant(np.arange(0, 100, 1))
print (a.eval())
a_crop, begin = random_crop(a, size = size, seed = 0)
print ("offset: {}".format(begin.eval()))
print ("a_crop: {}".format(a_crop.eval()))
a_slice = tf.slice(a, begin=begin, size=size)
print ("a_slice: {}".format(a_slice.eval()))
assert (tf.reduce_all(tf.equal(a_crop, a_slice)).eval() == True)
sess.close()
outputs:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99]
offset: [46]
a_crop: [89 90 91 92 93 94 95 96 97 98]
a_slice: [27 28 29 30 31 32 33 34 35 36]
There are two tf.slice options:
(1). called in function random_crop, such as tf.slice(value, begin=begin, size=size, name=name)
(2). called as a_slice = tf.slice(a, begin=begin, size=size)
The parameters (values, begin and size) of those two slice operations are the same.
However, why are the printed values of a_crop and a_slice different, while tf.reduce_all(tf.equal(a_crop, a_slice)).eval() is True?
Thanks
EDIT1
Thanks @xdurch0, I understand the first question now.
TensorFlow's random_uniform seems to act like a random generator.
import tensorflow as tf
import numpy as np
sess = tf.InteractiveSession()
size = [10]
np_begin = np.random.randint(0, 50, size=1)
tf_begin = tf.random_uniform(shape = [1], minval=0, maxval=50, dtype=tf.int32, seed = 0)
a = tf.constant(np.arange(0, 100, 1))
a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, np_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size = size)
print ("a_slice: {}".format(a_slice.eval()))
sess.close()
output
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [41 42 43 44 45 46 47 48 49 50]
a_slice: [29 30 31 32 33 34 35 36 37 38]
The confusing thing here is that tf.random_uniform (like every random operation in TensorFlow) produces a new, different value on each evaluation call (each call to .eval() or, in general, each call to tf.Session.run). So if you evaluate a_crop you get one thing, and if you then evaluate a_slice you get a different thing, but if you evaluate tf.reduce_all(tf.equal(a_crop, a_slice)) you get True, because everything is computed in a single evaluation step, so only one random value is produced and it determines both a_crop and a_slice. As another example, if you run tf.stack([a_crop, a_slice]).eval() you will get a tensor with two equal rows; again, only one random value was produced. More generally, if you call tf.Session.run with multiple tensors to evaluate, all the computations in that call will use the same random values.
As a side note, if you actually need a random value in one computation that you want to maintain for a later computation, the easiest thing would be to just retrieve it with tf.Session.run, along with any other needed computation, and feed it back later through feed_dict; or you could use a tf.Variable and store the random value there. A more advanced possibility would be to use partial_run, an experimental API that allows you to evaluate part of the computation graph and continue evaluating it later, while maintaining the same state (i.e. the same random values, among other things).
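The one-draw-versus-two-draws behaviour can be mimicked in plain NumPy (a rough analogy only; random_begin below is a made-up helper, not a TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_begin():
    # analogous to evaluating tf.random_uniform: each call draws a fresh value
    return int(rng.integers(0, 91))

x = np.arange(100)

# two separate "evaluations": two independent draws, so the slices usually differ
b1, b2 = random_begin(), random_begin()
crop_a, slice_a = x[b1:b1 + 10], x[b2:b2 + 10]

# one "evaluation" that reuses a single draw: the slices are always identical
b = random_begin()
crop_b, slice_b = x[b:b + 10], x[b:b + 10]
assert (crop_b == slice_b).all()
```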
Related
If I have a set of data that's of shape (1000,1000) and I know that the values I need from it are contained within the indices (25:888,11:957), how would I go about separating the two sections of data from one another?
I couldn't figure out how to get np.delete() to like the specific 2D case and I also need both the good and the bad sections of data for analysis, so I can't just specify my array bounds to be within the good indices.
I feel like there's a simple solution I'm missing here.
Is this how you want to divide the array?
In [364]: arr = np.ones((1000,1000),int)
In [365]: beta = arr[25:888, 11:957]
In [366]: beta.shape
Out[366]: (863, 946)
In [367]: arr[:25,:].shape
Out[367]: (25, 1000)
In [368]: arr[888:,:].shape
Out[368]: (112, 1000)
In [369]: arr[25:888,:11].shape
Out[369]: (863, 11)
In [370]: arr[25:888,957:].shape
Out[370]: (863, 43)
I'm imagining a square with a rectangle cut out of the middle. It's easy to specify that rectangle, but the frame has to be viewed as 4 rectangles - unless it is described via the mask of what is missing.
Checking that I got everything:
In [376]: x = np.array([_366,_367,_368,_369,_370])
In [377]: np.multiply.reduce(x, axis=1).sum()
Out[377]: 1000000
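For completeness, the same bookkeeping can be done with a boolean mask instead of four frame rectangles (a minimal sketch with stand-in data):

```python
import numpy as np

# hypothetical stand-in for the (1000, 1000) data array
arr = np.arange(1000 * 1000).reshape(1000, 1000)

# boolean mask marking the "good" rectangle
mask = np.zeros(arr.shape, dtype=bool)
mask[25:888, 11:957] = True

good = arr[mask]     # 1-d array of the good values
bad = arr[~mask]     # 1-d array of everything else (the frame)

assert good.size == 863 * 946
assert good.size + bad.size == arr.size
```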
Let's say your original numpy array is my_arr
Extracting the "Good" Section:
This is easy because the good section has a rectangular shape.
good_arr = my_arr[25:888, 11:957]
Extracting the "Bad" Section:
The "bad" section doesn't have a rectangular shape. Rather, it has the shape of a rectangle with a rectangular hole cut out of it.
So, you can't really store the "bad" section alone, in any array-like structure, unless you're ok with wasting some extra space to deal with the cut out portion.
What are your options for the "Bad" Section?
Option 1:
Be happy and content with having extracted the good section. Let the bad section remain as part of the original my_arr. While iterating through my_arr, you can always discriminate between good and bad items based on the indices. The disadvantage is that, whenever you want to process only the bad items, you have to do it through a nested double loop, rather than use some vectorized features of numpy.
Option 2:
Suppose we want to perform some operations such as row-wise totals or column-wise totals on only the bad items of my_arr, and suppose you don't want the overhead of the nested for loops. You can create something called a numpy masked array. With a masked array, you can perform most of your usual numpy operations, and numpy will automatically exclude masked-out items from the calculations. Note that internally, there will be some memory wastage involved, just to store an item as "masked".
The code below illustrates how you can create a masked array called masked_arr from your original array my_arr:
import numpy as np
my_size = 10 # In your case, 1000
r_1, r_2 = 2, 8 # In your case, r_1 = 25, r_2 = 889 (which is 888+1)
c_1, c_2 = 3, 5 # In your case, c_1 = 11, c_2 = 958 (which is 957+1)
# Using nested list comprehension, build a boolean mask as a list of lists, of shape (my_size, my_size).
# The mask will have False everywhere, except in the sub-region [r_1:r_2, c_1:c_2], which will have True.
mask_list = [[True if ((r in range(r_1, r_2)) and (c in range(c_1, c_2))) else False
for c in range(my_size)] for r in range(my_size)]
# Your original, complete 2d array. Let's just fill it with some "toy data"
my_arr = np.arange((my_size * my_size)).reshape(my_size, my_size)
print (my_arr)
masked_arr = np.ma.masked_where(mask_list, my_arr)
print ("masked_arr is:\n", masked_arr, ", and its shape is:", masked_arr.shape)
The output of the above is:
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
masked_arr is:
[[0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 -- -- 25 26 27 28 29]
[30 31 32 -- -- 35 36 37 38 39]
[40 41 42 -- -- 45 46 47 48 49]
[50 51 52 -- -- 55 56 57 58 59]
[60 61 62 -- -- 65 66 67 68 69]
[70 71 72 -- -- 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]] , and its shape is: (10, 10)
Now that you have a masked array, you will be able to perform most of the numpy operations on it, and numpy will automatically exclude the masked items (the ones that appear as "--" when you print the masked array).
Some examples of what you can do with the masked array:
# Now, you can print column-wise totals, of only the bad items.
print (masked_arr.sum(axis=0))
# Or row-wise totals, for that matter.
print (masked_arr.sum(axis=1))
The output of the above is:
[450 460 470 192 196 500 510 520 530 540]
[45 145 198 278 358 438 518 598 845 945]
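As a side note, the boolean mask can also be built with plain array slicing instead of the nested list comprehension; a minimal sketch using the same toy sizes:

```python
import numpy as np

# same setup as above, but building the mask with array slicing
my_size = 10
mask = np.zeros((my_size, my_size), dtype=bool)
mask[2:8, 3:5] = True   # True marks the region to mask out

my_arr = np.arange(my_size * my_size).reshape(my_size, my_size)
masked_arr = np.ma.masked_array(my_arr, mask=mask)

# same column-wise totals as with masked_where above
print(masked_arr.sum(axis=0))
```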
I am looking for a solution for the following problem and it just won't work the way I want to.
So my goal is to calculate a regression analysis and get the slope, intercept, rvalue, pvalue and stderr for multiple rows (this could go up to 10000). In this example, I have a file with 15 rows. Here are the first two rows:
array([
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
23, 24],
[ 100, 10, 61, 55, 29, 77, 61, 42, 70, 73, 98,
62, 25, 86, 49, 68, 68, 26, 35, 62, 100, 56,
10, 97]]
)
Full trial data set:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
100 10 61 55 29 77 61 42 70 73 98 62 25 86 49 68 68 26 35 62 100 56 10 97
57 89 25 89 48 56 67 17 98 10 25 90 17 52 85 56 18 20 74 97 82 63 45 87
192 371 47 173 202 144 17 147 174 483 170 422 285 13 77 116 500 136 276 392 220 121 441 268
The first row is the x-variable and this is the independent variable. This has to be kept fixed while iterating over every following row.
For each following row (the y-variable and thus the dependent variable), I want to calculate the slope, intercept, rvalue, pvalue and stderr and have them in a dataframe (if possible added to the same dataframe, but this is not necessary).
I tried the following code:
import pandas as pd
import scipy.stats
import numpy as np
df = pd.read_excel("Directory\\file.xlsx")
def regr(row):
    r = scipy.stats.linregress(df.iloc[1:, :], row)
    return r

full_dataframe = None
for index, row in df.iterrows():
    x = regr(index)
    if full_dataframe is None:
        full_dataframe = x.T
    else:
        full_dataframe = full_dataframe.append([x.T])
full_dataframe.to_excel('Directory\\file.xlsx')
But this fails and gives the following error:
ValueError: all the input array dimensions except for the concatenation axis
must match exactly
I'm really lost here.
So, I want to achieve that I have the slope, intercept, pvalue, rvalue and stderr per row, starting from the second one, because the first row is the x-variable.
Does anyone have an idea HOW to do this, WHY mine isn't working, and WHAT the code should look like?
Thanks!!
Guessing the issue
Most likely, the problem is the format of your numbers: they are Unicode strings (dtype('<U21')) instead of integers or floats.
Always check types:
df.dtypes
Cast your dataframe using:
df = df.astype(np.float64)
Below a small example showing the issue:
import numpy as np
import pandas as pd
# DataFrame without numbers (will not work for Math):
df = pd.DataFrame(['1', '2', '3'])
df.dtypes # object: placeholder for everything that is not number or timestamps (string, etc...)
# Casting DataFrame to make it suitable for Math Operations:
df = df.astype(np.float64)
df.dtypes # float64
But it is difficult to be sure of this without having the original file or data you are working with.
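The same check and fix can be sketched at the NumPy level:

```python
import numpy as np

# numbers read as text look numeric when printed, but the dtype gives them away
a = np.array(['1', '2', '3'])
assert a.dtype.kind == 'U'       # 'U' = Unicode string, e.g. '<U1'

b = a.astype(np.float64)         # the cast that makes math work
assert b.dtype.kind == 'f'
assert b.sum() == 6.0
```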
Carefully read the Exception
This is coherent with the Exception you get:
TypeError: ufunc 'add' did not contain a loop with signature matching types
dtype('<U21') dtype('<U21') dtype('<U21')
The method scipy.stats.linregress raises a TypeError (so it is about type) and is telling you that it cannot perform the add operation, because adding strings of dtype('<U21') does not make any sense in the context of a linear regression.
Understand the Design
Loading the data:
import io
fh = io.StringIO("""1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
100 10 61 55 29 77 61 42 70 73 98 62 25 86 49 68 68 26 35 62 100 56 10 97
57 89 25 89 48 56 67 17 98 10 25 90 17 52 85 56 18 20 74 97 82 63 45 87
192 371 47 173 202 144 17 147 174 483 170 422 285 13 77 116 500 136 276 392 220 121 441 268""")
df = pd.read_fwf(fh).astype(np.float64)
Then we can regress the second row vs the first:
scipy.stats.linregress(df.iloc[0,:].values, df.iloc[1,:].values)
It returns:
LinregressResult(slope=0.12419744768547877, intercept=49.60998434527584, rvalue=0.11461693561751324, pvalue=0.5938303095361301, stderr=0.22949908667668056)
Assembling all together:
result = pd.DataFrame(columns=["slope", "intercept", "rvalue"])
for i, row in df.iterrows():
    fit = scipy.stats.linregress(df.iloc[0,:], row)
    result.loc[i] = (fit.slope, fit.intercept, fit.rvalue)
Returns:
slope intercept rvalue
0 1.000000 0.000000 1.000000
1 0.124197 49.609984 0.114617
2 -1.095801 289.293224 -0.205150
Which is, as far as I understand your question, what you expected.
The second exception you get comes because of this line:
x = regr(index)
You sent the index of the row instead of the row itself to the regression method.
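Putting it together for all five statistics the question asks for (a sketch with made-up toy data standing in for the Excel file):

```python
import numpy as np
import pandas as pd
import scipy.stats

# toy frame: row 0 is x, every later row is a y-series (hypothetical data)
rng = np.random.default_rng(0)
df = pd.DataFrame(np.vstack([np.arange(1, 25),
                             rng.integers(10, 101, size=24),
                             rng.integers(10, 101, size=24)])).astype(np.float64)

x = df.iloc[0, :].values
result = pd.DataFrame(columns=["slope", "intercept", "rvalue", "pvalue", "stderr"])
for i, row in df.iterrows():
    fit = scipy.stats.linregress(x, row.values)   # pass the row values, not the index
    result.loc[i] = (fit.slope, fit.intercept, fit.rvalue, fit.pvalue, fit.stderr)

print(result)
```

Row 0 regresses x against itself, so its slope and rvalue come out as 1.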
What I'm looking for
# I have an array
x = np.arange(0, 100)
# I have a size n
n = 10
# I have a random set of numbers
indexes = np.random.randint(n, 100, 10)
# What I want is a matrix where every row i is the i-th element of indexes plus the previous n elements
res = np.empty((len(indexes), n), int)
for (i, v) in np.ndenumerate(indexes):
    res[i] = x[v-n:v]
To reformulate, as I wrote in the title, what I am looking for is a way to take multiple subsets (of the same size) of an initial array.
Just to add a detail this loopy version works, I want just to know if there is a numpyish way to achieve this in a more elegant way.
The following does what you are asking for. It uses numpy.lib.stride_tricks.as_strided to create a special view on the data which can be indexed in the desired way.
import numpy as np
from numpy.lib import stride_tricks
x = np.arange(100)
k = 10
i = np.random.randint(k, len(x)+1, size=(5,))
xx = stride_tricks.as_strided(x, strides=np.repeat(x.strides, 2), shape=(len(x)-k+1, k))
print(i)
print(xx[i-k])
Sample output:
[ 69 85 100 37 54]
[[59 60 61 62 63 64 65 66 67 68]
[75 76 77 78 79 80 81 82 83 84]
[90 91 92 93 94 95 96 97 98 99]
[27 28 29 30 31 32 33 34 35 36]
[44 45 46 47 48 49 50 51 52 53]]
A bit of explanation. Arrays store not only data but also a small "header" with layout information. Amongst these are the strides, which tell how to translate linear memory to n-d. There is a stride for each dimension, which is just the offset at which the next element along that dimension can be found. So the strides for a 2d array are (row offset, element offset). as_strided lets you directly manipulate an array's strides; by setting the row offset equal to the element offset we create a view that looks like
0 1 2 ...
1 2 3 ...
2 3 4
. .
. .
. .
Note that no data are copied at this stage; for example, all the 2s refer to the same memory location in the original array. That is why this solution should be quite efficient.
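If your NumPy is recent enough (1.20+), numpy.lib.stride_tricks.sliding_window_view wraps the same trick in a safer API; a minimal sketch reusing the sample indices above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(100)
k = 10
i = np.array([69, 85, 100, 37, 54])   # the sample indices from above

windows = sliding_window_view(x, k)   # zero-copy view, shape (91, 10)
res = windows[i - k]                  # row j is x[i[j]-k : i[j]]
assert (res[0] == np.arange(59, 69)).all()
```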
In the tensorflow MNIST tutorial the mnist.train.next_batch(100) function comes very handy. I am now trying to implement a simple classification myself. I have my training data in a numpy array. How could I implement a similar function for my own data to give me the next batch?
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
Xtr, Ytr = loadData()
for it in range(1000):
batch_x = Xtr.next_batch(100)
batch_y = Ytr.next_batch(100)
The link you posted says: "we get a "batch" of one hundred random data points from our training set". In my example I use a global function (not a method like in your example) so there will be a difference in syntax.
In my function you'll need to pass the number of samples wanted and the data array.
Here is the correct code, which ensures samples have correct labels:
import numpy as np
def next_batch(num, data, labels):
    '''
    Return a total of `num` random samples and labels.
    '''
    idx = np.arange(0, len(data))
    np.random.shuffle(idx)
    idx = idx[:num]
    data_shuffle = [data[i] for i in idx]
    labels_shuffle = [labels[i] for i in idx]
    return np.asarray(data_shuffle), np.asarray(labels_shuffle)
Xtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10)
print(Xtr)
print(Ytr)
Xtr, Ytr = next_batch(5, Xtr, Ytr)
print('\n5 random samples')
print(Xtr)
print(Ytr)
And a demo run:
[0 1 2 3 4 5 6 7 8 9]
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
5 random samples
[9 1 5 6 7]
[[90 91 92 93 94 95 96 97 98 99]
[10 11 12 13 14 15 16 17 18 19]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]]
In order to shuffle and sample each mini-batch, whether a sample has already been selected inside the current epoch should also be considered. Here is an implementation which uses the data from the answer above.
import numpy as np
class Dataset:
    def __init__(self, data):
        self._index_in_epoch = 0
        self._epochs_completed = 0
        self._data = data
        self._num_examples = data.shape[0]

    @property
    def data(self):
        return self._data

    def next_batch(self, batch_size, shuffle=True):
        start = self._index_in_epoch
        if start == 0 and self._epochs_completed == 0:
            idx = np.arange(0, self._num_examples)  # get all possible indexes
            np.random.shuffle(idx)                  # shuffle indexes
            self._data = self.data[idx]             # shuffle the data
        # go to the next batch
        if start + batch_size > self._num_examples:
            self._epochs_completed += 1
            rest_num_examples = self._num_examples - start
            data_rest_part = self.data[start:self._num_examples]
            idx0 = np.arange(0, self._num_examples)  # get all possible indexes
            np.random.shuffle(idx0)                  # shuffle indexes
            self._data = self.data[idx0]             # reshuffle for the new epoch
            start = 0
            # avoid the case where #samples is not an integer multiple of batch_size
            self._index_in_epoch = batch_size - rest_num_examples
            end = self._index_in_epoch
            data_new_part = self._data[start:end]
            return np.concatenate((data_rest_part, data_new_part), axis=0)
        else:
            self._index_in_epoch += batch_size
            end = self._index_in_epoch
            return self._data[start:end]
dataset = Dataset(np.arange(0, 10))
for i in range(10):
    print(dataset.next_batch(5))
the output is:
[2 8 6 3 4]
[1 5 9 0 7]
[1 7 3 0 8]
[2 6 5 9 4]
[1 0 4 8 3]
[7 6 2 9 5]
[9 5 4 6 2]
[0 1 8 7 3]
[9 7 8 1 6]
[3 5 2 4 0]
The first and second (third and fourth, ...) mini-batches together correspond to one whole epoch.
I use Anaconda and Jupyter.
In Jupyter if you run ?mnist you get:
File: c:\programdata\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py
Docstring: Datasets(train, validation, test)
In the folder datasets you will find mnist.py, which contains all methods including next_batch.
I tried the algorithm from the accepted answer above but was not getting results, so I searched on Kaggle and found an algorithm that worked really well. In the algorithm below, the global variables take the input you declared above, in which you read your data set.
epochs_completed = 0
index_in_epoch = 0
num_examples = X_train.shape[0]
# for splitting out batches of data
def next_batch(batch_size):
    global X_train
    global y_train
    global index_in_epoch
    global epochs_completed

    start = index_in_epoch
    index_in_epoch += batch_size

    # when all training data has already been used, it is reordered randomly
    if index_in_epoch > num_examples:
        # finished epoch
        epochs_completed += 1
        # shuffle the data
        perm = np.arange(num_examples)
        np.random.shuffle(perm)
        X_train = X_train[perm]
        y_train = y_train[perm]
        # start next epoch
        start = 0
        index_in_epoch = batch_size
        assert batch_size <= num_examples
    end = index_in_epoch
    return X_train[start:end], y_train[start:end]
If you would rather avoid a shape mismatch error in your TensorFlow session run, then use the function below instead of the one provided in the first solution above (https://stackoverflow.com/a/40995666/7748451):
def next_batch(num, data, labels):
    '''
    Return a total of `num` random samples and labels.
    '''
    idx = np.arange(0, len(data))
    np.random.shuffle(idx)
    idx = idx[:num]
    data_shuffle = data[idx]
    labels_shuffle = labels[idx]
    labels_shuffle = np.asarray(labels_shuffle.values.reshape(len(labels_shuffle), 1))
    return data_shuffle, labels_shuffle
Yet another implementation:
from typing import Tuple
import numpy as np
class BatchMaker(object):
    def __init__(self, feat: np.array, lab: np.array) -> None:
        if len(feat) != len(lab):
            raise ValueError("Expected feat and lab to have the same number of samples")
        self.feat = feat
        self.lab = lab
        self.indexes = np.arange(len(feat))
        np.random.shuffle(self.indexes)
        self.pos = 0

    # "BatchMaker, BatchMaker, make me a batch..."
    def next_batch(self, batch_size: int) -> Tuple[np.array, np.array]:
        if self.pos + batch_size > len(self.feat):
            np.random.shuffle(self.indexes)
            self.pos = 0
        batch_indexes = self.indexes[self.pos: self.pos + batch_size]
        self.pos += batch_size
        return self.feat[batch_indexes], self.lab[batch_indexes]
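A quick self-contained check that features and labels stay paired (this re-states the class in condensed form, with the annotations dropped, so the snippet runs on its own):

```python
import numpy as np

class BatchMaker:
    # condensed copy of the class above, for a self-contained demo
    def __init__(self, feat, lab):
        self.feat, self.lab = feat, lab
        self.indexes = np.arange(len(feat))
        np.random.shuffle(self.indexes)
        self.pos = 0

    def next_batch(self, batch_size):
        if self.pos + batch_size > len(self.feat):
            np.random.shuffle(self.indexes)
            self.pos = 0
        idx = self.indexes[self.pos:self.pos + batch_size]
        self.pos += batch_size
        return self.feat[idx], self.lab[idx]

feat = np.arange(20).reshape(10, 2)   # row i is [2i, 2i+1]
lab = np.arange(10)                   # label equals the row index
bm = BatchMaker(feat, lab)
xb, yb = bm.next_batch(4)
# each feature row is still matched with its own label after shuffling
assert all((feat[int(y)] == x).all() for x, y in zip(xb, yb))
```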
I am running into a REALLY weird case with a little class involving ctypes that I am writing. The objective of this class is to load a matrix that is in proprietary format into a python structure that I had to create (these matrices can have several cores/layers and each core/layer can have several indices that refer to only a few elements of the matrix, thus forming submatrices).
The code that tests the class is this:
import numpy as np
from READS_MTX import mtx
import time
mymatrix=mtx()
mymatrix.load('D:\\MyMatrix.mtx', True)
and the class I created is this:
import os
import numpy as np
from ctypes import *
import ctypes
import time
def main():
    pass

#A mydll mtx can have several cores
#we need a class to define each core
#and a class to hold the whole file together
class mtx_core:
    def __init__(self):
        self.name=None        #Matrix core name
        self.rows=-1          #Number of rows in the matrix
        self.columns=-1       #Number of columns in the matrix
        self.type=-1          #Data type of the matrix
        self.indexcount=-1    #Tuple with the number of indices for each dimension
        self.RIndex={}        #Dictionary with all indices for the rows
        self.CIndex={}        #Dictionary with all indices for the columns
        self.basedata=None
        self.matrix=None
    def add_core(self, mydll, mat, core):
        nameC=ctypes.create_string_buffer(50)
        mydll.MATRIX_GetLabel(mat,0,nameC)
        nameC=repr(nameC.value)
        nameC=nameC[1:len(nameC)-1]

        #Add the information to the object's attributes
        self.name=repr(nameC)
        self.rows=mydll.MATRIX_GetBaseNRows(mat)
        self.columns=mydll.MATRIX_GetBaseNCols(mat)
        self.type=mydll.MATRIX_GetDataType(mat)
        self.indexcount=(mydll.MATRIX_GetNIndices(mat,0), mydll.MATRIX_GetNIndices(mat,0))

        #Define the data type NumPy will use, according to the data type of the matrix in question
        dt=np.float64
        v=(self.columns*c_float)()
        if self.type==1:
            dt=np.int32
            v=(self.columns*c_long)()
        if self.type==2:
            dt=np.int64
            v=(self.columns*c_longlong)()

        #Instantiate the matrix
        time.sleep(5)
        self.basedata=np.zeros((self.rows,self.columns),dtype=dt)

        #Read the matrix and put it in the numpy array
        for i in range(self.rows):
            mydll.MATRIX_GetBaseVector(mat,i,0,self.type,v)
            self.basedata[i,:]=v[:]

        #Read all the indices for rows and put them in the dictionary
        for i in range(self.indexcount[0]):
            mydll.MATRIX_SetIndex(mat, 0, i)
            v=(mydll.MATRIX_GetNRows(mat)*c_long)()
            mydll.MATRIX_GetIDs(mat,0, v)
            t=np.zeros(mydll.MATRIX_GetNRows(mat),np.int64)
            t[:]=v[:]
            self.RIndex[i]=t.copy()

        #Do the same for columns
        for i in range(self.indexcount[1]):
            mydll.MATRIX_SetIndex(mat, 1, i)
            v=(mydll.MATRIX_GetNCols(mat)*c_long)()
            mydll.MATRIX_GetIDs(mat,1, v)
            t=np.zeros(mydll.MATRIX_GetNCols(mat),np.int64)
            t[:]=v[:]
            self.CIndex[i]=t.copy()
class mtx:
    def __init__(self):
        self.data=None
        self.cores=-1
        self.matrix={}
        mydll=None

    def load(self, filename, verbose=False):
        #We load the DLL and initiate it
        mydll=cdll.LoadLibrary('C:\\Program Files\\Mysoftware\\matrixDLL.dll')
        mydll.InitMatDLL()
        mat=mydll.MATRIX_LoadFromFile(filename, True)
        if mat<>0:
            self.cores=mydll.MATRIX_GetNCores(mat)
            if verbose==True: print "Matrix has ", self.cores, " cores"
            for i in range(self.cores):
                mydll.MATRIX_SetCore(mat,i)
                nameC=ctypes.create_string_buffer(50)
                mydll.MATRIX_GetLabel(mat,i,nameC)
                nameC=repr(nameC.value)
                nameC=nameC[1:len(nameC)-1]
                #If verbose, we list the matrices being loaded
                if verbose==True: print " Loading core: ", nameC
                self.datafile=filename
                self.matrix[nameC]=mtx_core()
                self.matrix[nameC].add_core(mydll,mat,i)
        else:
            raise NameError('Not possible to open file. TranCad returned '+ str(tc_value))
        mydll.MATRIX_CloseFile(filename)
        mydll.MATRIX_Done(mat)

if __name__ == '__main__':
    main()
When I run the test code in ANY form (double clicking, Python's IDLE or Pyscripter) it crashes with the familiar error "WindowsError: exception: access violation writing 0x0000000000000246", but when I debug the code using Pyscripter, stopping in any inner loop, it runs perfectly.
I'd really appreciate any insights.
EDIT
The dumpbin output for the DLL:
File Type: DLL
Section contains the following exports for CaliperMTX.dll
00000000 characteristics
52FB9F15 time date stamp Wed Feb 12 08:19:33 2014
0.00 version
1 ordinal base
81 number of functions
81 number of names
ordinal hint RVA name
1 0 0001E520 InitMatDLL
2 1 0001B140 MATRIX_AddIndex
3 2 0001AEE0 MATRIX_Clear
4 3 0001AE30 MATRIX_CloseFile
5 4 00007600 MATRIX_Copy
6 5 000192A0 MATRIX_CreateCache
7 6 00019160 MATRIX_CreateCacheEx
8 7 0001EB10 MATRIX_CreateSimple
9 8 0001ED20 MATRIX_CreateSimpleLike
10 9 00016D40 MATRIX_DestroyCache
11 A 00016DA0 MATRIX_DisableCache
12 B 0001A880 MATRIX_Done
13 C 0001B790 MATRIX_DropIndex
14 D 00016D70 MATRIX_EnableCache
15 E 00015B10 MATRIX_GetBaseNCols
16 F 00015B00 MATRIX_GetBaseNRows
17 10 00015FF0 MATRIX_GetBaseVector
18 11 00015CE0 MATRIX_GetCore
19 12 000164C0 MATRIX_GetCurrentIndexPos
20 13 00015B20 MATRIX_GetDataType
21 14 00015EE0 MATRIX_GetElement
22 15 00015A30 MATRIX_GetFileName
23 16 00007040 MATRIX_GetIDs
24 17 00015B80 MATRIX_GetInfo
25 18 00015A50 MATRIX_GetLabel
26 19 00015AE0 MATRIX_GetNCols
27 1A 00015AB0 MATRIX_GetNCores
28 1B 00016EC0 MATRIX_GetNIndices
29 1C 00015AC0 MATRIX_GetNRows
30 1D 00018AF0 MATRIX_GetVector
31 1E 00015B40 MATRIX_IsColMajor
32 1F 00015B60 MATRIX_IsFileBased
33 20 000171A0 MATRIX_IsReadOnly
34 21 00015B30 MATRIX_IsSparse
35 22 0001AE10 MATRIX_LoadFromFile
36 23 0001BAE0 MATRIX_New
37 24 00017150 MATRIX_OpenFile
38 25 000192D0 MATRIX_RefreshCache
39 26 00016340 MATRIX_SetBaseVector
40 27 00015C20 MATRIX_SetCore
41 28 00016200 MATRIX_SetElement
42 29 00016700 MATRIX_SetIndex
43 2A 0001AFA0 MATRIX_SetLabel
44 2B 00018E50 MATRIX_SetVector
45 2C 00005DA0 MAT_ACCESS_Create
46 2D 00005E40 MAT_ACCESS_CreateFromCurrency
47 2E 00004B10 MAT_ACCESS_Done
48 2F 00005630 MAT_ACCESS_FillRow
49 30 000056D0 MAT_ACCESS_FillRowDouble
50 31 00005A90 MAT_ACCESS_GetCurrency
51 32 00004C30 MAT_ACCESS_GetDataType
52 33 000058E0 MAT_ACCESS_GetDoubleValue
53 34 00004C40 MAT_ACCESS_GetIDs
54 35 00005AA0 MAT_ACCESS_GetMatrix
55 36 00004C20 MAT_ACCESS_GetNCols
56 37 00004C10 MAT_ACCESS_GetNRows
57 38 000055A0 MAT_ACCESS_GetRowBuffer
58 39 00005570 MAT_ACCESS_GetRowID
59 3A 00005610 MAT_ACCESS_GetToReadFlag
60 3B 00005870 MAT_ACCESS_GetValue
61 3C 00005AB0 MAT_ACCESS_IsValidCurrency
62 3D 000055E0 MAT_ACCESS_SetDirty
63 3E 000059F0 MAT_ACCESS_SetDoubleValue
64 3F 00005620 MAT_ACCESS_SetToReadFlag
65 40 00005960 MAT_ACCESS_SetValue
66 41 00005460 MAT_ACCESS_UseIDs
67 42 00005010 MAT_ACCESS_UseIDsEx
68 43 00005490 MAT_ACCESS_UseOwnIDs
69 44 00004D10 MAT_ACCESS_ValidateIDs
70 45 0001E500 MAT_pafree
71 46 0001E4E0 MAT_palloc
72 47 0001E4F0 MAT_pfree
73 48 0001E510 MAT_prealloc
74 49 00006290 MA_MGR_AddMA
75 4A 00006350 MA_MGR_AddMAs
76 4B 00005F90 MA_MGR_Create
77 4C 00006050 MA_MGR_Done
78 4D 000060D0 MA_MGR_RegisterThreads
79 4E 00006170 MA_MGR_SetRow
80 4F 00006120 MA_MGR_UnregisterThread
81 50 0001E490 UnloadMatDLL
Summary
6000 .data
5000 .pdata
C000 .rdata
1000 .reloc
1000 .rsrc
54000 .text