Structure of a vector in Python

How can I identify a vector in Python?
Does it have a single dimension or n dimensions? I got really confused trying to understand this in NumPy.
Also, what's the difference between static memory allocation and a dynamic one in vectors?

A vector has a single dimension and is created with functions such as numpy.array and numpy.linspace. An n-dimensional array would be a matrix created with functions such as numpy.zeros or numpy.random.uniform(...). You don't have to use numpy to work with vectors in Python; you can simply use the basic array type.
In Python you usually don't have to worry about memory allocation. Dynamic memory allocation means that elements can be added to or removed from the vector, whereas with static memory allocation the vector holds a fixed number of elements.
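For example, a minimal sketch of the distinction (assuming NumPy is installed):
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])      # 1-D vector, shape (3,)
v2 = np.linspace(0.0, 1.0, 5)       # 1-D vector of 5 evenly spaced values
m = np.zeros((3, 4))                # 2-D array (a matrix), shape (3, 4)
print(v1.ndim, v2.ndim, m.ndim)     # -> 1 1 2

# without numpy, a plain list (or the array module) also acts as a vector
plain = [1, 2, 3]
plain.append(4)                     # dynamic: elements can be added freely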


What is the best way to store a non-rectangular array?

I would like to store a non-rectangular array in Python. The array has millions of elements and I will be applying a function to each element in the array, so I am concerned about performance. What data structure should I use? Should I use a Python list or a numpy array of type object? Is there another data structure that would work even better?
You can use the dictionary data structure to store everything. If you have ample memory, dictionaries are a good option; the hashing makes lookups fast.
I'd suggest using scipy sparse matrices.
UPD: some elaboration below.
I assume that "non-rectangular" implies there will be empty elements in a plain 2D array. With millions of elements, those 'holes' become a tax on memory usage. Sparse matrices provide a familiar array interface while occupying only the necessary amount of memory.
Though if array-ish indexing is not required, a dictionary is perfectly fine storage to use.
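A rough sketch of both options (assuming SciPy is available; the row data here are made up for illustration):
import numpy as np
from scipy import sparse

rows = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]   # ragged rows
m = sparse.lil_matrix((len(rows), max(len(r) for r in rows)))
for i, r in enumerate(rows):
    m[i, :len(r)] = r                 # missing cells stay unstored

m = m.tocsr()
m.data = np.sqrt(m.data)              # apply a function to stored elements only
# caveat: sparse matrices treat 0 as "not stored", so real zeros need care

# the dictionary alternative: key by (row, col) pairs
d = {(i, j): x for i, r in enumerate(rows) for j, x in enumerate(r)}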

Lazy version of numpy.unpackbits

I use numpy.memmap to load only the parts of arrays into memory that I need, instead of loading an entire huge array. I would like to do the same with bool arrays.
Unfortunately, bool memmap arrays aren't stored economically: according to ls, a bool memmap file requires as much space as a uint8 memmap file of the same array shape.
So I use numpy.unpackbits to save space. Unfortunately, it does not seem to be lazy: it's slow and can cause a MemoryError, so apparently it loads the array from disk into memory instead of providing a "bool view" of the uint8 array.
So if I want to load only certain entries of the bool array from file, I first have to compute which uint8 entries they are part of, then apply numpy.unpackbits to that, and then again index into that.
Isn't there a lazy way to get a "bool view" on the bit-packed memmap file?
Not possible. The memory layout of a bit-packed array is incompatible with what you're looking for. The NumPy shape-and-strides model of array layout does not have sub-byte resolution. Even if you were to create a class that emulated the view you want, trying to use it with normal NumPy operations would require materializing a representation NumPy can work with, at which point you'd have to spend the memory you don't want to spend.
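As the question itself outlines, the workaround is to index the packed bytes and unpack only those. A rough sketch (the file name is hypothetical, and the bit order matches np.packbits' default 'big' ordering):
import numpy as np

packed = np.memmap("flags.bits", dtype=np.uint8, mode="r")   # bit-packed file

def get_bools(packed, idx):
    # read selected bool entries without unpacking the whole file;
    # fancy indexing pulls only the needed bytes into memory
    idx = np.asarray(idx)
    byte_idx = idx // 8                  # which uint8 holds each bit
    bit_pos = idx % 8                    # offset from the most significant bit
    needed = packed[byte_idx]            # loads only these bytes
    return ((needed >> (7 - bit_pos)) & 1).astype(bool)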

Is numpy array and python list optimized to be dynamically growing?

Over time I have done many things that require using the list's .append() function, as well as numpy.append() for numpy arrays. I noticed that both become really slow when the arrays get big.
I need an array that grows dynamically, for sizes of about 1 million elements. I could implement this myself, the way std::vector is implemented in C++, by keeping a buffer length (reserve length) that is not accessible from the outside. But do I have to reinvent the wheel? I imagine it is implemented somewhere already. So my question is: does such a thing already exist in Python?
What I mean is: is there an array type in Python that can grow dynamically with constant time complexity, O(1), for most appends (like the reserve-buffer sketch below)?
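A minimal sketch of that reserve-buffer idea (GrowArray is just an illustrative name, not an existing type):
import numpy as np

class GrowArray:
    # keeps a larger hidden buffer and doubles it when full,
    # so append is amortized O(1), like std::vector
    def __init__(self, dtype=float):
        self._buf = np.empty(4, dtype=dtype)
        self._n = 0
    def append(self, value):
        if self._n == self._buf.shape[0]:
            new_buf = np.empty(2 * self._buf.shape[0], dtype=self._buf.dtype)
            new_buf[:self._n] = self._buf
            self._buf = new_buf
        self._buf[self._n] = value
        self._n += 1
    @property
    def data(self):
        return self._buf[:self._n]       # view of the used portion only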
The memory layout of numpy arrays is well described in its docs, and has been discussed here a lot. List memory layout has also been discussed, though usually just in contrast to numpy.
A numpy array has a fixed-size data buffer. 'Growing' it requires creating a new array and copying data to it. np.concatenate does that in compiled code. np.append, as well as all the stack functions, uses concatenate.
A list has, as I understand it, a contiguous data buffer that contains pointers to objects elsewhere in memory. Python maintains some free space in that buffer, so additions with list.append are relatively fast and easy. But when the free space fills up, it has to create a new buffer and copy the pointers. I can see how that could get expensive with large lists.
So a list has to store a pointer for each element, plus the element itself (e.g. a float) somewhere else in memory. In contrast, an array of floats stores the floats themselves as contiguous bytes in its buffer. (Object-dtype arrays are more like lists.)
The recommended way to create an array iteratively is to build the list with append, and create the array once at the end. Repeated np.append or np.concatenate is relatively expensive.
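A rough illustration of that recommendation (exact timings vary, but the list version stays fast while the repeated np.append degrades as the array grows):
import numpy as np

values = []                          # build in a plain list ...
for i in range(1_000_000):
    values.append(i * 0.5)
arr = np.asarray(values)             # ... convert to an array once at the end

# the slow alternative: each np.append copies the whole array
# arr = np.array([])
# for i in range(1_000_000):
#     arr = np.append(arr, i * 0.5)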
deque was mentioned. I don't know much about how it stores its data. The docs say it can add elements at the start just as easily as at the end, but random access is slower than for a list. That implies that it stores data in some sort of linked list, so that finding the nth element requires traversing the n-1 links before it. So there's a trade off between growth ease and access speed.
Adding elements to the start of a list requires making a new list of pointers, with the new one(s) at the start. So adding, and removing elements from the start of a regular list, is much more expensive than doing that at the end.
Recommending software is outside of the core SO purpose. Others may make suggestions, but don't be surprised if this gets closed.
There are file formats like HDF5 that are designed for large data sets. They accommodate growth with features like 'chunking'. And there are all kinds of database packages.
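For instance, with h5py a dataset can be declared resizable and grown chunk by chunk on disk (the file and dataset names here are just illustrative):
import numpy as np
import h5py

with h5py.File("growing.h5", "w") as f:
    dset = f.create_dataset("data", shape=(0,), maxshape=(None,),
                            dtype="f8", chunks=True)
    for _ in range(5):
        block = np.random.rand(1000)
        n = dset.shape[0]
        dset.resize((n + block.shape[0],))   # grows in chunks on disk
        dset[n:] = block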
Both use an underlying array. Instead, you can use collections.deque, which is made specifically for adding and removing elements at both ends with O(1) complexity.
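A quick sketch of deque usage:
from collections import deque

d = deque()
d.append(10)        # O(1) at the right end
d.appendleft(5)     # O(1) at the left end as well
d.pop()             # -> 10
d.popleft()         # -> 5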

Creating new numpy scalar through C API and implementing a custom view

Short version
Given a built-in quaternion data type, how can I view a numpy array of quaternions as a numpy array of floats with an extra dimension of size 4 (without copying memory)?
Long version
Numpy has built-in support for floats and complex floats. I need to use quaternions -- which generalize complex numbers, but rather than having two components, they have four. There's already a very nice package that uses the C API to incorporate quaternions directly into numpy, which seems to do all the operations perfectly fast. There are a few more quaternion functions that I need to add to it, but I think I can mostly handle those.
However, I would also like to be able to use these quaternions in other functions that I need to write using the awesome numba package. Unfortunately, numba cannot currently deal with custom types. But I don't need the fancy quaternion functions in those numba-ed functions; I just need the numbers themselves. So I'd like to be able to just re-cast an array of quaternions as an array of floats with one extra dimension (of size 4). In particular, I'd like to just use the data that's already in the array without copying, and view it as a new array. I've found the PyArray_View function, but I don't know how to implement it.
(I'm pretty confident the data are held contiguously in memory, which I assume would be required for having a simple view of them. Specifically, elsize = 8*4 and alignment = 8 in the quaternion package.)
Turns out that was pretty easy. The magic of numpy means it's already possible. While thinking about this, I just tried the following with complex numbers:
import numpy as np
a = np.array([1+2j, 3+4j, 5+6j])
a.view(np.float64).reshape(a.shape[0], 2)
And this gave exactly what I was looking for. Somehow the same basic idea works with the quaternion type. I guess the internals just rely on that elsize, divide by sizeof(float) and use that to set the new size in the last dimension???
To answer my own question then, the same idea can be applied to the quaternion module:
import numpy as np, quaternion
a = np.array([np.quaternion(1,2,3,4), np.quaternion(5,6,7,8), np.quaternion(9,0,1,2)])
a.view(np.float64).reshape(a.shape[0], 4)
The view transformation and reshaping combined seem to take about 1 microsecond on my laptop, independent of the size of the input array (presumably because there's no memory copying, other than a few members in some basic python object).
The above is valid for simple 1-d arrays of quaternions. To apply it to general shapes, I just write a function inside the quaternion namespace:
def as_float_array(a):
    "View the quaternion array as an array of floats with one extra dimension of size 4"
    return a.view(np.float64).reshape(a.shape + (4,))
Different shapes don't seem to slow the function down significantly.
Also, it's easy to convert back from a float array to a quaternion array:
def as_quat_array(a):
    "View a float array as an array of quaternions, absorbing the last dimension of size 4"
    if a.shape[-1] == 4:
        return a.view(np.quaternion).reshape(a.shape[:-1])
    return a.view(np.quaternion).reshape(a.shape[:-1] + (a.shape[-1] // 4,))
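Assuming the quaternion package is importable and the two helpers above are in scope, a quick round-trip sanity check might look like this:
import numpy as np, quaternion

q = np.array([np.quaternion(1, 2, 3, 4), np.quaternion(5, 6, 7, 8)])
f = as_float_array(q)                 # shape (2, 4), same underlying memory
q2 = as_quat_array(f)                 # back to shape (2,) of quaternions
assert np.array_equal(as_float_array(q2), as_float_array(q))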

Fast access and update integer matrix or array in Python

I need to create an array of integer arrays like [[0,1,2],[4,4,5,7]...[4,5]]. The sizes of the internal arrays are changeable. The maximum number of internal arrays is 2^26. So what do you recommend as the fastest way to update this structure?
When I use list = [[]] * 2**26, initialization is very fast but updating is very slow. Instead I use
list = [] and then for i in range(2**26): list.append([]).
Now initialization is slow but updating is fast. For example, with 16777216 internal arrays and an average of 0.213827311993 elements per array, it takes 1.67728900909 sec for the 2^26-element structure. That is good, but I will be working with much bigger data, so I need the best way. Initialization time is not important.
Thank you.
What you ask is quite a problem. Different data structures have different properties. In general, if you need quick lookups, do not use lists! Searching a list for an element takes linear time, which means the more you put in it, the longer it will take on average to find something.
You could perhaps use numpy? That library has matrices that can be accessed quite fast and can be reshaped on the fly. However, if you want to add or delete rows, it might be a bit slow because numpy generally reallocates (and thus copies) the entire data. So it is a trade-off.
If you are going to have so many internal arrays of different sizes, perhaps you could use a dictionary that holds the internal arrays. I think that if it is indexed by integers it will be much faster than a list. Then the internal arrays themselves could be created with numpy.
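A rough sketch of that suggestion (sizes shrunk for the demo; the real case would use 2**26 keys):
import numpy as np

data = {i: [] for i in range(2**20)}        # dict keyed by integers

data[3].append(7)                           # O(1) update of one internal array
data[3].append(9)

# once a row stops growing it can be frozen as a numpy array
data[3] = np.array(data[3], dtype=np.int64)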
