Is it possible to implement Typed Arrays using the cbor2 Python library? In the documentation I only saw that you can define a custom encoder to serialize custom objects, but I would like to implement Typed Arrays to reduce the number of bytes I send, as explained in this specification: https://datatracker.ietf.org/doc/html/rfc8746
For example, I would like to be able to say that I will send an array of 32-bit unsigned integers using a single tag that indicates the type of the array, instead of repeating the type information for each value inside the array.
Are there other cbor libraries that can do that, if cbor2 can't?
I received an answer on their Github: https://github.com/agronholm/cbor2/issues/128
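To make the space saving concrete, here is a stdlib-only sketch of the wire bytes an RFC 8746 typed array produces (cbor2 has no built-in typed-array support, but its generic CBORTag class, or manual encoding like this, yields the same bytes). Tag 70 is "uint32, little endian": the whole array becomes one tagged byte string, so the element type appears once rather than per value.

```python
import struct

def encode_uint32le_array(values):
    """Hand-encode an RFC 8746 typed array (tag 70: uint32, little endian)."""
    payload = struct.pack('<%dI' % len(values), *values)
    assert len(payload) <= 23              # keep the short byte-string header
    header = bytes([0xD8, 70,              # major type 6 (tag), tag number 70
                    0x40 | len(payload)])  # major type 2 (byte string), length
    return header + payload

encoded = encode_uint32le_array([1, 2, 3, 4])
print(encoded.hex())  # d846 (tag 70) + 50 (16-byte string) + payload
```

The three-byte prefix states the element type and length once; a plain CBOR array would instead carry a type/length header for every element.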
I have a function that takes a couple of large multi-dimensional float arrays as input. As a result, I crash the stack.
I'm a beginner in C/C++, so I apologize in advance if this question is dumb. Anyway, after looking around the net, this problem is not particularly new, and the general solution is either to make the array a global variable or to use a vector (instead of an array).
However, this piece of C++ code is intended to be used as a shared library that takes input from a Python script and returns (or modifies) the values (also in arrays). Therefore I don't think it would be possible to declare them as globals (correct me if I am wrong).
So, apart from using a vector, is there any other way to do this? What puts me off using a C++ vector is that there is no equivalent data type in Python. I'm using ctypes for communication between Python and C++, if that matters.
There is no way in C/C++ to pass an array as a function argument by value. You can pass a pointer to the array and the size of the array. In C++, you also can pass a reference to the array and the size of the array (in template parameters). In either case, the array is not copied and the called function accesses the array in-place, where the caller allocated it.
Note that the array can be wrapped in a structure, in which case it can be passed by value. In C++, std::array is an example of such. But the caller has to initialize that structure instead of the raw array. std::vector is similar in that it is a structure and it automatically copies elements when the vector is copied, but unlike C arrays and std::array it dynamically allocates memory for the elements.
However, if you're integrating with a C API, you are most likely limited to a pointer+size solution, or a pointer+size pair wrapped in a C structure. The rules for working with the array (e.g. who allocates and frees it, whether writes to it are allowed, etc.) are specific to the particular API you're working with. Often a library provides a dedicated set of functions for working with its arrays. You should read the API documentation for the conventions used on the Python side.
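A minimal Python-side sketch of the pointer+size convention, assuming a hypothetical C function such as void scale(double *data, size_t n) loaded with ctypes.CDLL. No library is actually loaded here; writing through the pointer stands in for what the C code would do, and shows the modification is visible to the caller because nothing is copied.

```python
import ctypes

n = 5
# Contiguous C array of doubles, heap-allocated by Python, not the stack.
buf = (ctypes.c_double * n)(*[1.0, 2.0, 3.0, 4.0, 5.0])

# The pointer + size pair is what would be handed to the C function.
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_double))
for i in range(n):          # the C function would loop the same way
    ptr[i] *= 10.0          # in-place write through the pointer

print(list(buf))  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

Because ctypes passes the address, the C side and the Python side see the same memory; no large array ever lands on the call stack.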
I have two-dimensional NumPy arrays in Python, and I want to convert them into a binary format that can be read from C++. As you know, a two-dimensional array in C++ is essentially a one-dimensional array with pointers used to locate its elements. Could you tell me which Python function can be used to do the job, or suggest any other solution?
This is too long for a comment, but probably not complete enough to run on its own. As Tom mentioned in the comments on your question, using a library that saves and loads a well-defined format (HDF5, .mat) in both Python and C++ is probably the easiest solution. If you don't want to find and set up such a library, read on.
NumPy has the ability to save data using numpy.save (see this).
The format (described here) states that there is a header with information on the datatype and the array shape, followed by the data. So, unless you want to write a fully featured parser (you don't), you should ensure Python consistently saves data as float64 (or whichever type you want), in C order (Fortran ordering is the other option).
Then, the C++ code just needs to check that the array's data type is float64, that the correct ordering is used, and how large the array is. Allocate the appropriate amount of memory, and you can load that number of bytes from the file directly into the allocated memory. To create 2-D indexing you'll need to allocate an array of pointers to each 'row' in the allocated memory.
Or just use a library that will handle all of that for you.
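If you skip the .npy header entirely, the raw-binary route above can be sketched as follows: a hand-rolled header (rows and cols as uint64) followed by float64 data in C order. The file path and the header layout are arbitrary choices for this sketch, not part of any standard format.

```python
import os
import tempfile
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    np.array(a.shape, dtype=np.uint64).tofile(f)   # 2 x uint64 shape header
    np.ascontiguousarray(a).tofile(f)              # row-major float64 payload

# Round-trip in Python; a C++ reader would perform the equivalent reads:
# two uint64s for the shape, then rows*cols doubles straight into a buffer.
with open(path, "rb") as f:
    rows, cols = np.fromfile(f, dtype=np.uint64, count=2)
    b = np.fromfile(f, dtype=np.float64).reshape(int(rows), int(cols))

print(np.array_equal(a, b))  # True
```

ascontiguousarray guarantees C order even if the source array was a Fortran-ordered or sliced view, which is exactly the consistency the answer asks you to enforce.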
As far as I can see, struct.pack() only offers format options that produce one or more whole bytes. I would like to create a field containing, for example, 4 bits, or just one. How could I do that?
I know that there is a module called bitarray, but I would like to solve this using struct.pack()
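struct.pack() indeed has no sub-byte format codes, so the usual approach is to assemble bit fields into whole bytes with shifts and masks first, then pack the result. A minimal sketch, packing a hypothetical 4-bit version field and a 4-bit flags field into one byte:

```python
import struct

version = 0b0010                       # 4-bit field (assumed to fit)
flags = 0b1011                         # 4-bit field

# Combine the two nibbles into a single byte, then pack it.
byte = ((version & 0x0F) << 4) | (flags & 0x0F)
packed = struct.pack("B", byte)

print(packed.hex())  # 2b
```

For many small fields this shift-and-mask step is typically wrapped in a helper; struct.pack then only ever sees whole bytes.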
I have a Cython function like cdef generate_data(size) where I'd like to:
initialise an array of bytes of length size
call external (C) function to populate the array using array ptr and size
return the array as something understandable by Python (bytearray, bytes, your suggestions)
I have seen many approaches on the internet but I'm looking for the best/recommended way of doing this in my simple case. I want to:
avoid memory reallocations
avoid using numpy
ideally use something that works in Python 3 and 2.7, although a 2.7 solution is good enough.
I'm using Cython 0.20.
For allocating memory, I have you covered.
After that, just take a pointer (possibly at the data attribute if you use cpython.array.array like I recommend) and pass that along. You can return the cpython.array.array type and it will become a Python array.
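A pure-Python analogue of that recipe, for illustration: allocate a typed buffer once, hand its raw address and size to C code, and return the buffer unchanged. Here ctypes.memset stands in for the external C populate function; in the Cython version the pointer would come from the array's data attribute instead of buffer_info().

```python
import ctypes
from array import array

size = 16
buf = array('B', bytes(size))      # single allocation, zero-filled

addr, count = buf.buffer_info()    # raw address + element count
ctypes.memset(addr, 0x41, count)   # "C side" fills the buffer in place

print(buf.tobytes())  # b'AAAAAAAAAAAAAAAA'
```

No reallocation or copy happens between the allocation and the return: the C call writes directly into the buffer Python already owns, which is the property the question asks for.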
I am writing a little arbitrary-precision library in C (I know such libraries exist, such as GMP, but I find it more fun to write one myself, just as an exercise), and I would like to know whether arrays are the best way to represent very long integers, or whether there is a better solution (maybe linked lists)? And secondly, how does Python handle big integers? (Does it use arrays or another technique?)
Thanks in advance for any response
Try reading the documentation for libgmp; it already implements bigints. From what I see, integers are implemented as a dynamically allocated array which is realloc'd when the number needs to grow. See http://gmplib.org/manual/Integer-Internals.html#Integer-Internals.
Python long integers are implemented as a simple structure with just an object header and an array of integers (which might be either 16- or 32-bit ints, depending on platform).
Code is at http://hg.python.org/cpython/file/f8942b8e6774/Include/longintrepr.h
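The digit-array idea behind that structure can be sketched in a few lines: store a big integer as a list of fixed-width "digits", least significant first. CPython uses 15- or 30-bit digits depending on the build; 30 bits is assumed here for illustration (the real implementation is in C, of course).

```python
SHIFT = 30                 # bits per digit, as in a 30-bit CPython build
MASK = (1 << SHIFT) - 1

def to_digits(n):
    """Split a non-negative int into base-2**30 digits, little-endian."""
    digits = []
    while True:
        digits.append(n & MASK)
        n >>= SHIFT
        if n == 0:
            return digits

def from_digits(digits):
    """Recombine a little-endian digit array into an int."""
    n = 0
    for d in reversed(digits):
        n = (n << SHIFT) | d
    return n

x = 2**100 + 12345
print(to_digits(x))                        # four 30-bit digits cover 101 bits
print(from_digits(to_digits(x)) == x)      # True
```

A flat array like this beats a linked list for bignums: digits are accessed sequentially in every arithmetic loop, so contiguous storage gives better cache behavior, and growth is handled by occasional reallocation, as in GMP.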