I have a table that looks as follows:
City           Value
<String>       <String>
Chicago        12
Detriot        15
Jersery City   20
This table is locked in this format:
import numpy as np
x = np.array([('Chicago', '12'),('Detriot', '15'),('Jersery City', '20')])
I did some research on Stack Overflow and came across this post. However, I don't know why it is not working. I tried the following code:
x[:,1] = x[:,1].astype(int)
I even tried the following, and it did not work either:
x[:,[1]] = x[:,[1]].astype(int)
However, running this line returns the following:
type(x[0,1])
numpy.str_
Numpy arrays only support uniform types: all the items of an array must be of the same type (you can retrieve it with x.dtype), for example np.float64 or np.int64.
The type of the items in x cannot change at runtime. x[:,1] = x[:,1].astype(int) therefore performs an implicit conversion (back to strings) so that the types match. If you need integer items, you have to create a new array.
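For example, something along these lines (a minimal sketch reusing the x array from the question) leaves the original string array untouched and produces a separate integer array:

import numpy as np

x = np.array([('Chicago', '12'), ('Detriot', '15'), ('Jersery City', '20')])

# astype returns a *new* array; x itself still holds strings
values = x[:, 1].astype(int)
print(values)        # [12 15 20]
print(values.dtype)  # int64 (platform dependent)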
Note that this type can be object. In such a situation, any Python object can be stored in the Numpy array. However, it is generally a bad idea to use object types since they are inefficiently stored in memory, defeat any possible low-level vectorization (i.e. they are slow) and cause performance issues in parallel code (because of the GIL).
Note also that Numpy provides structured types to store (quite) complex data structures in each array item.
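As a rough sketch of that idea applied to the table from the question (the field names and the 'U16' string length are illustrative choices, not requirements):

import numpy as np

table = np.array([('Chicago', 12), ('Detriot', 15), ('Jersery City', 20)],
                 dtype=[('city', 'U16'), ('value', 'i8')])
print(table['value'])        # [12 15 20] -- a plain integer column
print(table['value'].sum())  # 47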
Related
I have two questions that I have been struggling with for two days:
1. If I determine the memory address of a numpy ndarray object and of its elements, once via the array's .data attribute and once with the plain Python functions hex(id()), I get different addresses.
2. With hex(id()) it gets really weird: sometimes the elements get the same addresses, sometimes different ones.
import numpy as np
y = np.array([0,1,2,3])
print(y.data)
print(y[0].data)
print(y[1].data)
print(y[2].data)
print(y[3].data)
print(hex(id(y[0])))
print(hex(id(y[1])))
print(hex(id(y[2])))
print(hex(id(y[3])))
the results are:
<memory at 0x7f9aaa22d870>
<memory at 0x7f9aaa1bd940>
<memory at 0x7f9aaa1bd940>
<memory at 0x7f9aaa1bd940>
<memory at 0x7f9aaa1bd940>
0x7f9aaa31e030
with hex(id()):
0x7f9aaa1c0750
0x7f9aaa1c0730
0x7f9aaa1c0130
0x7f9aaa1c0750
Most of these results don't mean what you're thinking, because NumPy memory layout doesn't work like you're thinking.
A NumPy array object is not its data buffer. The data buffer is separate. With all the metadata an array needs, it would not be possible for an array to literally be its data buffer, and with how NumPy makes heavy use of array views, it would not be possible for an array to directly contain its buffer either. Many arrays can share the same data buffer, or have overlapping data buffers.
A NumPy array object contains some metadata and a number of pointers, one of which points to its buffer. If you had done print(hex(id(y))), you would have gotten the address of the array object itself. With print(y.data), you print a memoryview object representing the array's data buffer, and the "at 0x..." gives the address of the buffer.
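A small sketch of that distinction (the concrete addresses will of course differ on every run):

import numpy as np

y = np.array([0, 1, 2, 3])
print(hex(id(y)))              # address of the ndarray object itself
print(hex(y.ctypes.data))      # address of the underlying data buffer
v = y[1:]                      # a view: a new array object sharing the same buffer
print(hex(id(v)))              # a different object ...
print(np.shares_memory(y, v))  # ... but True: same underlying memory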
When you do y[0], that's not really an array element. It's a new array scalar object, representing an immutable scalar with value taken from the first index of y. It does not directly refer to the memory used for y's first element, because when someone does
x = y[0]
y[0] = 1
they don't want the y[0] = 1 assignment to affect x.
The array scalar has its own address and its own data buffer, separate from the array scalar itself. The array scalar has a very short lifetime, so y[0] and y[1] may end up using the same memory if y[0]'s lifetime ends before you retrieve y[1]. They don't have to use the same memory, but they can.
When you do print(hex(id(y[0]))), you're printing the address of the array scalar. When you do print(y[0].data), you're printing a memoryview representing the array scalar's data buffer.
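A quick sketch showing that y[0] is a freshly built scalar object each time rather than a handle into y's memory:

import numpy as np

y = np.array([0, 1, 2, 3])
print(type(y[0]))    # <class 'numpy.int64'> -- an array scalar (numpy.int32 on some platforms)
print(y[0] is y[0])  # False: each indexing operation creates a new scalar object
x = y[0]
y[0] = 99
print(x)             # still 0: the scalar holds a copy of the value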
With all that said, there is almost nothing useful you can do with any of these memory addresses, especially if you're not writing a C extension. If you are writing a C extension, you still probably shouldn't be using any of these addresses directly. Cython is much more convenient than writing C code directly. If you do want to write C to interact with NumPy, you're going to want a much deeper understanding of how NumPy arrays work under the hood, and you should go read the NumPy C API docs.
Following the answer in How to efficiently convert Matlab engine arrays to numpy ndarray?, it seems much more efficient to access the matlab engine array through the _data property.
However, it appears that there is no _data property when the array returned by Matlab is a 'complex single' one. Is there an equivalent fast way to access the array of complex numbers?
A possible workaround is to return two real arrays from Matlab (one containing the real part, the other the imaginary part) and rebuild the complex values in Python:
import numpy as np

M_real, M_imag = myMatlabFunction()
M_real_np = np.array(M_real._data)
M_imag_np = np.array(M_imag._data)
M_np = M_real_np + 1j * M_imag_np  # np.complex(0, 1) is removed in recent numpy; 1j does the same job
Then we can benefit from the fast access to the _data member of each array.
I am still interested in more straightforward solution.
arr = np.array([Myclass(np.random.random(100)) for _ in range(10000)])
Is there a way to save time in this statement by creating a numpy array of objects directly (avoiding the list construction which is costly)?
I need to create and process a large number of objects of class Myclass, where each object contains several ints, several floats, and a list (or tuple) of floats. The point of using an array of objects is to take advantage of numpy's fast computations (e.g., column sums) on slices of that array (among other things; each row of the array being sliced consists of one Myclass object plus other scalar fields). Other than using np.array as above, is there any other time-saving strategy in this case? Thanks.
Numpy needs to know the length of the array in advance because it must allocate enough memory in a block.
You can start with an empty array of the appropriate type using np.empty(10_000, object). (Beware that for most data types empty arrays may contain garbage data; it's usually safer to start with np.zeros() unless you really need the performance, but dtype object does get properly initialized to Nones.)
You can then apply any callable you like (such as a class) over all the values using np.vectorize. It's faster to use numpy's built-in vectorized functions when you can instead of wrapping your own, since vectorize basically has to call the function for each element in a for loop. But sometimes you can't.
In the case of random numbers, you can create an array of samples of any shape you like using np.random.rand(). It would still have to be converted to a new array of dtype object when you apply your class to it, though. I'm not sure whether that's any faster than creating the samples in each __init__ (or whatever callable). You'd have to profile it.
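Putting those pieces together, one possible sketch looks like this (Myclass here is a hypothetical stand-in for the asker's class, and whether this actually beats the list comprehension needs profiling):

import numpy as np

class Myclass:                  # hypothetical stand-in for the real class
    def __init__(self, values):
        self.values = values

n = 10_000
samples = np.random.rand(n, 100)         # draw all the samples in one vectorized call

arr = np.empty(n, dtype=object)          # object array, pre-initialized to None
wrap = np.vectorize(lambda i: Myclass(samples[i]), otypes=[object])
arr[:] = wrap(np.arange(n))              # still calls the constructor once per element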
Many functions like in1d and setdiff1d are designed for 1-D arrays. One workaround to apply these methods to N-dimensional arrays is to make numpy treat each row (or some higher-dimensional sub-array) as a single value.
One approach I found to do so is in the answer Get intersecting rows across two 2D numpy arrays by Joe Kington.
The following code is taken from this answer. The task Joe Kington faced was to detect common rows in two arrays A and B while trying to use in1d.
import numpy as np
A = np.array([[1,4],[2,5],[3,6]])
B = np.array([[1,4],[3,6],[7,8]])
nrows, ncols = A.shape
dtype = {'names': ['f{}'.format(i) for i in range(ncols)],
         'formats': ncols * [A.dtype]}
C = np.intersect1d(A.view(dtype), B.view(dtype))
# This last bit is optional if you're okay with "C" being a structured array...
C = C.view(A.dtype).reshape(-1, ncols)
I am hoping you can help me with any of the following three questions. First, I do not understand the mechanism behind this method. Can you try to explain it to me?
Second, are there other ways to let numpy treat a subarray as one object?
One more open question: does Joe's approach have any drawbacks? I mean, might treating rows as a single value cause any problems? Sorry, this question is pretty broad.
Let me post what I have learned. The method Joe used is called structured arrays, which allow users to define what is contained in a single cell/element.
Take a look at the description of the first example in the documentation:
x = np.array([(1, 2., 'Hello'), (2, 3., "World")],
             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less.
Without passing in dtype, however, we would instead get a plain 2-by-3 array.
With this method, we are able to let numpy treat a higher-dimensional sub-array as a single element, provided the dtype is set properly.
Another trick Joe showed is that we don't really need to form a new numpy array to achieve this. We can use the view function (see ndarray.view) to change the way numpy views the data. There is a Notes section in ndarray.view that I think you should read before using this method. I have no guarantee that there would not be side effects. The paragraph below is from that Notes section and seems to call for caution.
For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.
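As a small sketch of the same view trick applied to in1d rather than intersect1d, reusing A, B and the dtype defined above (and assuming both arrays are C-contiguous):

mask = np.in1d(A.view(dtype), B.view(dtype))   # one boolean per row of A
print(mask)      # [ True False  True]
print(A[mask])   # the rows of A that also appear in B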
Other references:
https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.dtype.html
Maybe this is a simple issue, but I could not find any information about it so far.
For an optimization in numpy I need an array of functions. The number of functions I need depends on the current object to be optimized.
I have already figured out how to create these functions dynamically, but now I would like to store them in an array like this:
myArray = zeros(x)
for i in range(x):
    myArray[i] = createFunction(i)
If I run this I get a type mismatch:
float() argument must be a string or a number, not 'function'
Creating the array directly works well:
myArray = array([createFunction(0)...])
But because I don't know the number of functions I need, this is exactly what I want to prevent.
Ah, I get it. You really do mean an array of functions.
The type mismatch error arises because the call to zeros creates an array of floats by default. So your original would work if instead you did myArray = numpy.empty(x, dtype=object) (note that empty makes more sense than zeros here). The slightly more pythonic version is to use a list comprehension:
myArray = numpy.array([createFunction(i) for i in range(x)]).
But you might not need to create a numpy array at all, depending on what you want to do with it:
myArray = [createFunction(i) for i in range(x)]
If you want to avoid the list, it might be better to use numpy.fromfunction along with numpy.vectorize:
myArray = numpy.fromfunction(numpy.vectorize(createFunction),
                             shape=(x,), dtype=object)
where (x,) is a tuple giving the shape of the array. The call to vectorize is needed because fromfunction assumes that the function can work on an array of inputs and return an array of scalars, and vectorize converts a function to do exactly that. The dtype=object is needed since otherwise numpy tries to create an array of floats.
Maybe you can use
myArray = array([createFunction(i) for i in range(x)])
If you need an array of functions, is it possible to not use NumPy? NumPy arrays have C-style types and default to float. If you can, just use a standard Python list. But if you absolutely must use NumPy, try defining the array like so:
import numpy as np
a = np.empty([x], dtype=np.dtype(np.object_))
Or however you need it to be with that dtype.
Numpy arrays are homogeneous. That is, all elements of a numpy array are of the same type -- python is duck-typed, numpy isn't. This is part of what makes matrix operations on numpy arrays and matrices so fast. However, because of this, a data type must be known when the array is first created.

Numpy is generally very good at inferring the data type. The problem comes when creating an empty or zeroed array. Since there are no elements to examine, numpy must guess the data type. Numpy defaults to numpy.float64 if it isn't given a data type at array creation time. This is a decent choice, as numpy is typically used in scientific or engineering areas where floating point numbers are required. This is also why numpy is complaining -- it can't store your functions as 64-bit floating point numbers.
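For instance, a tiny illustration of the inference versus the float64 default:

import numpy as np

print(np.array([1, 2, 3]).dtype)  # int64 -- inferred from the elements (int32 on some platforms)
print(np.zeros(3).dtype)          # float64 -- nothing to inspect, so the default is used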
The quick solution is to let numpy know the data type you want, e.g.
myArray = numpy.zeros(x, dtype=object)
Note that the data type cannot be an arbitrary class, but must be an instance of numpy.dtype (for advanced use you can create additional dtypes at runtime that numpy can then manipulate). For functions, numpy will store them with the object dtype (which means any generic python object). I do not think you will get any performance benefit from using numpy to store arrays of functions. Perhaps you would be better off creating generator functions and chaining them, converting to a numpy array once you know the result will be a number.
funcs = [createFunction(i) for i in range(x)]

def getItemFromEachFunction(i):
    return funcs[i]()

# vectorize lets fromfunction evaluate the lookup one index at a time,
# and dtype=int keeps each index usable as a list index
arr = numpy.fromfunction(numpy.vectorize(getItemFromEachFunction), (x,), dtype=int)