I have a lot of arrays; each is 2D, but they have different sizes. I am looking for a good way to keep them in one variable. Their order is important. What do you recommend? Arrays? Dictionaries? Any ideas?
My problem:
I have numpy array:
b=np.array([])
Now I want to add to it, for example, the array:
a=np.array([0,1,2])
And later:
c=np.array([[0,1,2],[3,4,5]])
Etc
Result should be:
b=([0,1,2], [[0,1,2],[3,4,5]])
I don't know how to achieve this in numpy without fixing the size of the first array up front.
If the ordering is important, store them in a list (mylist = [array1, array2, ...]) - or, if you're not going to need to change or shuffle them around after creating the list, store them in a tuple (mylist = (array1, array2, ...)).
Both of these structures can store arbitrary object types (they don't care that your arrays are different sizes, or even that they are all the same kind of object at all) and both maintain a consistent ordering which can be accessed through mylist[0], mylist[1] etc. They will also appear in the correct order when you go through them using for an_array in mylist: etc.
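A minimal sketch of the list approach (names chosen just for illustration):
import numpy as np

# a plain Python list keeps the arrays in order, whatever their shapes
b = []
b.append(np.array([0, 1, 2]))
b.append(np.array([[0, 1, 2], [3, 4, 5]]))

print(b[0])          # [0 1 2]
print(b[1].shape)    # (2, 3)

for an_array in b:   # iteration preserves insertion order
    print(an_array.sum())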
Related
arr = np.array([Myclass(np.random.random(100)) for _ in range(10000)])
Is there a way to save time in this statement by creating a numpy array of objects directly (avoiding the list construction which is costly)?
I need to create and process a large number of objects of class Myclass, where each object contains several ints, several floats, and a list (or tuple) of floats. The point of using the array (of objects) is to take advantage of numpy arrays' fast computation (e.g., column-sums) on slices of the array of objects (and other things; the array on which slices are taken has each row made up of one Myclass object and other scalar fields). Other than using np.array (as above), is there any other time-saving strategy in this case? Thanks.
Numpy needs to know the length of the array in advance because it must allocate enough memory in a block.
You can start with an empty array of appropriate type using np.empty(10_000, object). (Beware that for most data types empty arrays may contain garbage data, it's usually safer to start with np.zeros() unless you really need the performance, but dtype object does get properly initialized to Nones.)
You can then apply any callable you like (such as a class) over all the values using np.vectorize. It's faster to use numpy's built-in vectorized functions when you can, rather than wrapping your own with vectorize, since vectorize basically has to call the Python callable once per element in a for loop. But sometimes you can't.
In the case of random numbers, you can create an array sample of any shape you like using np.random.rand(). It would still have to be converted to a new array of dtype object when you apply your class to it though. I'm not sure if that's any faster than creating the samples in each __init__ (or whatever callable). You'd have to profile it.
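For what it's worth, here is a rough sketch of that combination, with a stand-in Myclass that just wraps a row of samples (the signature argument tells vectorize to hand the class one whole row at a time); whether this beats the list comprehension is something you'd have to profile:
import numpy as np

class Myclass:
    # hypothetical stand-in for the real class: just stores its samples
    def __init__(self, values):
        self.values = values

n = 10_000

# draw all the random samples in one call: one row of 100 floats per object
samples = np.random.rand(n, 100)

# apply the class to each row; the result is a 1-d array of dtype object
make_obj = np.vectorize(Myclass, signature='(k)->()')
arr = make_obj(samples)

print(arr.shape, arr.dtype)   # (10000,) object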
Many functions like in1d and setdiff1d are designed for 1-D arrays. One workaround for applying these methods to N-dimensional arrays is to make numpy treat each row (or some higher-dimensional sub-array) as a single value.
One approach I found to do so is in this answer Get intersecting rows across two 2D numpy arrays by Joe Kington.
The following code is taken from this answer. The task Joe Kington faced was to detect common rows in two arrays A and B while trying to use in1d.
import numpy as np
A = np.array([[1,4],[2,5],[3,6]])
B = np.array([[1,4],[3,6],[7,8]])
nrows, ncols = A.shape
dtype={'names':['f{}'.format(i) for i in range(ncols)],
       'formats':ncols * [A.dtype]}
C = np.intersect1d(A.view(dtype), B.view(dtype))
# This last bit is optional if you're okay with "C" being a structured array...
C = C.view(A.dtype).reshape(-1, ncols)
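For reference, with the A and B above this leaves C holding the rows common to both arrays:
print(C)
# [[1 4]
#  [3 6]]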
I am hoping you can help me with any of the following three questions. First, I do not understand the mechanism behind this method. Can you explain it to me?
Second, are there other ways to let numpy treat a subarray as one object?
One more open question: does Joe's approach have any drawbacks? I mean, could treating rows as single values cause problems somewhere? Sorry, this question is pretty broad.
Let me post what I have learned. The method Joe used is called structured arrays. It allows users to define what a single cell/element contains.
We take a look at the description of the first example the documentation provided.
x = np.array([(1, 2., 'Hello'), (2, 3., "World")],
             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less.
Without passing in dtype, however, we will get a 2 by 3 matrix.
With this method, we can let numpy treat a higher-dimensional array as a single element, given a properly set dtype.
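For example (reusing the documentation example above), the presence or absence of the structured dtype changes what numpy treats as a single element:
import numpy as np

# with the structured dtype: shape (2,), each element is one whole record
x = np.array([(1, 2., 'Hello'), (2, 3., "World")],
             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
print(x.shape)   # (2,)
print(x[0])      # (1, 2.0, b'Hello')

# without a dtype the same data is coerced into a 2 by 3 array
# (everything becomes a string, the only common type here)
y = np.array([(1, 2., 'Hello'), (2, 3., "World")])
print(y.shape)   # (2, 3)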
Another trick Joe showed is that we don't really need to form a new numpy array to achieve this. We can use the view function (see ndarray.view) to change the way numpy interprets the data. There is a Notes section in the ndarray.view documentation that I think you should read before using this method. I cannot guarantee that there are no side effects. The paragraph below is from that Notes section and seems to call for caution.
For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.
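As a concrete illustration of that caveat (a sketch, not from the original answer): the row-as-one-element view works on the C-contiguous A from above, but recent numpy versions refuse it on a transposed, non-contiguous array unless you copy first, e.g. with np.ascontiguousarray.
import numpy as np

A = np.array([[1, 4], [2, 5], [3, 6]])
nrows, ncols = A.shape
row_dtype = {'names': ['f{}'.format(i) for i in range(ncols)],
             'formats': ncols * [A.dtype]}

# each row becomes one structured element; the shape collapses to (3, 1)
print(A.view(row_dtype).shape)

# a transposed view is not contiguous along its last axis, so the same
# view would fail; np.ascontiguousarray(A.T) makes a copy that works
print(A.T.flags['C_CONTIGUOUS'])   # False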
Other references
https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.dtype.html
Maybe this is a simple issue, but I could not find any information about it so far.
For an optimization in numpy I need an array of functions. The number of functions I need depends on the object that is currently being optimized.
I have already figured out how to create these functions dynamically, but now I would like to store them in an array like this:
myArray = zeros(x)
for i in range(x):
    myArray[i] = createFunction(i)
If I run this I get a type mismatch:
float() argument must be a string or a number, not 'function'
Creating the array directly works well:
myArray = array([createFunction(0)...])
But because I don't know the number of functions I need, this is exactly what I want to prevent.
Ah, I get it. You really do mean an array of functions.
The type mismatch error arises because the call to zeros creates an array of floats by default. So your original would work if instead you did myArray = numpy.empty(x, dtype=object) (note that empty makes more sense than zeros here). The slightly more pythonic version is to use a list comprehension
myArray = numpy.array([createFunction(i) for i in range(x)]).
But you might not need to create a numpy array at all, depending on what you want to do with it:
myArray = [createFunction(i) for i in range(x)]
If you want to avoid the list, it might be better to use numpy.fromfunction along with numpy.vectorize:
myArray = numpy.fromfunction(numpy.vectorize(createFunction),
                             shape=(x,), dtype=object)
where (x,) is a tuple giving the shape of the array. The call to vectorize is needed because fromfunction assumes that the function can work on an array of inputs and return an array of scalars, and vectorize converts a function to do exactly that. The dtype=object is needed since otherwise numpy tries to create an array of floats.
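If it helps, here is a small end-to-end sketch of that approach with a hypothetical createFunction (not the asker's real factory):
import numpy as np

def createFunction(i):
    # hypothetical factory: returns a function that adds i to its argument
    return lambda v: v + i

x = 5
myArray = np.fromfunction(np.vectorize(createFunction), shape=(x,), dtype=object)

print([f(10) for f in myArray])   # [10, 11, 12, 13, 14]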
Maybe you can use
myArray = array([createFunction(i) for i in range(x)])
If you need an array of functions, is it possible to not use NumPy? NumPy arrays have C-style element types, and the dtype defaults to float. If you can, just use a standard Python list. But if you absolutely must use NumPy, try defining the array like so:
import numpy as np
a = np.empty([x], dtype=np.dtype(np.object_))
Or however you need it to be with that dtype.
Numpy arrays are homogeneous. That is, all elements of a numpy array are of the same type -- Python is duck-typed, numpy isn't. This is part of what makes matrix operations on numpy arrays and matrices so fast. However, because of this the data type must be known when the array is first created. Numpy is generally very good at inferring the data type. The problem comes when creating an empty or zeroed array: since there are no elements to examine, numpy must guess the data type. Numpy defaults to numpy.float64 if it isn't given a data type at array creation time. This is a decent choice, as numpy is typically used in scientific or engineering areas where floating point numbers are required. This is also why numpy is complaining -- it can't store your functions as 64-bit floating point numbers.
The quick solution is to let numpy know the data type you want. eg.
myArray = numpy.zeros(x, dtype=object)
Note that the data type cannot be any class, but must be an instance of numpy.dtype (for advanced use you can create additional dtypes at runtime that numpy can then manipulate). For functions, numpy will store them with dtype object (which means any generic Python object). I do not think you will get any performance benefit from using numpy to store arrays of functions. Perhaps you would be better off creating generator functions and chaining them, converting to a numpy array once you know the result will be a number.
funcs = [createFunction(i) for i in range(x)]

def getItemFromEachFunction(i):
    return funcs[i]()   # i is an integer index generated by fromfunction

# vectorize applies the lookup per index; dtype=int keeps the indices integral
arr = numpy.fromfunction(numpy.vectorize(getItemFromEachFunction), (x,), dtype=int)
I have some data represented in a 1300x1341 matrix. I would like to split this matrix into several pieces (e.g. 9) so that I can loop over them and process them. The data needs to stay ordered, in the sense that x[0,1] stays below (or above, if you like) x[0,0] and beside x[1,1].
Just like if you had imaged the data, you could draw 2 vertical and 2 horizontal lines over the image to illustrate the 9 parts.
If I use numpy's reshape (e.g. matrix.reshape(9,260,745) or any other combination of 9, 260 and 745) it doesn't yield the required structure, since the above-mentioned ordering is lost...
Did I misunderstand the reshape method or can it be done this way?
What other pythonic/numpy way is there to do this?
Sounds like you need to use numpy.split(), or perhaps its sibling numpy.array_split(). They are for splitting an array into equal (or near-equal) subsections without re-arranging the numbers the way reshape does.
I haven't tested this but something like:
numpy.array_split(numpy.zeros((1300,1341)), 9)
should do the trick.
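Note that a single array_split call only cuts along the first axis, giving nine horizontal strips. If you want the 3x3 grid of blocks sketched in the question (two vertical and two horizontal cuts), one way is to split along both axes; this is untested against the real data, but along these lines:
import numpy as np

data = np.arange(1300 * 1341).reshape(1300, 1341)

# cut into 3 bands of rows, then cut each band into 3 blocks of columns;
# array_split tolerates sizes that do not divide evenly (1300 / 3 does not)
blocks = [np.array_split(band, 3, axis=1)
          for band in np.array_split(data, 3, axis=0)]

for row_of_blocks in blocks:
    print([block.shape for block in row_of_blocks])
# [(434, 447), (434, 447), (434, 447)]
# [(433, 447), (433, 447), (433, 447)]
# [(433, 447), (433, 447), (433, 447)]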
reshape, to quote its docs, "gives a new shape to an array without changing its data".
In other words, it does not move the array's data around at all -- it just affects the array's dimension. You, on the other hand, seem to require slicing; again quoting:
"It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well."
So for example thearray[0:260, 0:745] is the upper-leftmost part, thearray[260:520, 0:745] the upper left-of-center part, and so forth. You could keep references to the various parts in a list (or a dict with appropriate keys) to process them separately, as sketched below.
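A minimal sketch of that slicing idea, with block boundaries chosen only for illustration and the pieces kept in a dict keyed by grid position:
import numpy as np

thearray = np.zeros((1300, 1341))

# keep a reference to each of the nine blocks, keyed by its grid position
parts = {}
for i, (r0, r1) in enumerate([(0, 433), (433, 866), (866, 1300)]):
    for j, (c0, c1) in enumerate([(0, 447), (447, 894), (894, 1341)]):
        parts[(i, j)] = thearray[r0:r1, c0:c1]

print(parts[(0, 0)].shape)   # (433, 447)
print(parts[(2, 2)].shape)   # (434, 447)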
I have a few functions that return an array of data corresponding to parameter ranges.
Example: for a 2d array a, the a_{ij} value corresponds to the parameter set (param1_i, param2_j). How do I return the result and keep the parameter-value correspondence?
Calling the function for each and every (param1_i, param2_j) pair and returning one value at a time would take ages (it is far more efficient to do it in one go).
Breaking the function into (many) smaller functions would make usage difficult (the point is to get the values for a range of parameters; one value on its own is completely useless).
The best I can come up with is to make a new numpy dtype, for example for a 2d array:
tagged2d = np.dtype([('vals', float), ('params', float, (2,))])
so that a['vals'][i,j] contains the values and a['params'][i,j] the corresponding parameters.
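Just to illustrate the idea with a toy fill (the numbers here are made up):
import numpy as np

tagged2d = np.dtype([('vals', float), ('params', float, (2,))])

a = np.zeros((3, 4), dtype=tagged2d)
a['vals'][1, 2] = 42.0            # the value at this grid point
a['params'][1, 2] = (0.1, 7.0)    # the (param1, param2) pair it came from

print(a['vals'][1, 2], a['params'][1, 2])   # 42.0 [0.1 7. ]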
Any thoughts? Maybe I should just return 2 arrays, one with values, other with parameter tuples?
I recommend your last suggestion... just return the two arrays, e.g. as {'values': a, 'params': params}.
There are a few reasons for this.
Primarily, your other solution (using dtype and recarrays) tangles too many things together. For example, what about quantities derived from a that correspond to the same parameters... do you make a new recarray and a new copy of the parameters for that? Even something as simple as 2*a becoming the salient quantity will require that you make difficult decisions.
Recarrays have limitations and this is so easily solved in other ways that it's not worth accepting those limitations.
If you want an easier interrelation between the returned items, you could put them in a class. For example, you could have a method that takes a param pair and returns the corresponding result. This way, you wouldn't be limited by the recarray, and you could still construct whatever convenience relationship between the two that you like, and easily make backward-compatible changes to behavior, etc.
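For example, a small hypothetical wrapper along those lines (none of these names come from the question) might look like:
import numpy as np

class TaggedResult:
    # holds the value grid plus the parameter axes, and can look a value
    # up by its parameter pair -- a convenience layer, not a recarray
    def __init__(self, values, params1, params2):
        self.values = np.asarray(values)
        self.params1 = np.asarray(params1)
        self.params2 = np.asarray(params2)

    def value_for(self, p1, p2):
        i = int(np.argmin(np.abs(self.params1 - p1)))   # nearest grid index
        j = int(np.argmin(np.abs(self.params2 - p2)))
        return self.values[i, j]

param1 = np.linspace(0.0, 1.0, 5)
param2 = np.linspace(10.0, 20.0, 5)
vals = np.add.outer(param1, param2)        # stand-in for the real computation
res = TaggedResult(vals, param1, param2)
print(res.value_for(0.5, 15.0))            # 15.5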