Is there an equivalent of Python's struct.unpack in VB.NET?

I'm trying to convert the following piece of python code to vb.net:
struct.unpack('>L', self.header[0xf4:0xf8])
From searching, it seems there is no obvious one-to-one equivalent in VB.NET, but I was wondering if there is another way to achieve the same result as the code above, without the use of any third-party libraries if possible.
I'm still trying to find a way to do this in VB.NET; however, after some searching I've managed to break the above down into its component parts as follows:
'>L' - In Python this means a 32-bit unsigned long with big-endian byte order.
header is a variable holding the header of the binary file I am trying to parse (it must cover at least the first 0xf8 = 248 bytes for this slice to work).
0xf4 and 0xf8 are the start and end offsets of the byte range I want to unpack.
It's the '>L' part that is causing me the most problems, since converting the sequence of bytes to a long, unsigned or otherwise, is still not giving me the correct result.
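To see exactly what '>L' does, here is a small Python sketch that computes the same value two ways: once with struct.unpack, and once with explicit byte arithmetic. The byte arithmetic is the part that ports directly to VB.NET (for example with shifts, or with Array.Reverse followed by BitConverter.ToUInt32, since .NET is little-endian on most platforms). The header contents here are a stand-in, not the real file:

```python
import struct

header = bytes(range(256))  # stand-in for the real file header (assumption)
chunk = header[0xf4:0xf8]   # the 4 bytes at offsets 0xf4..0xf7

# What struct.unpack('>L', ...) does: interpret 4 bytes as a
# big-endian unsigned 32-bit integer.
(value,) = struct.unpack('>L', chunk)

# The same result via explicit byte arithmetic -- this is the logic
# to reproduce in VB.NET (shifts on a Long/UInteger, most significant
# byte first):
manual = (chunk[0] << 24) | (chunk[1] << 16) | (chunk[2] << 8) | chunk[3]
assert manual == value
```

The key point is the byte order: '>' means the first byte in the slice is the most significant, which is the opposite of what BitConverter assumes on a little-endian machine.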

Related

How to transform a python (long) integer to a boost multiprecision int with C API

I want to convert a Python integer to a Boost.Multiprecision cpp_int in C++, to be able to work with integers of arbitrary size. My code is entirely in C++, so I need to do this using the Python C API, working with PyObject.
I am currently doing it through the string representation, using PyObject_Str and PyUnicode_AsUTF8AndSize.
Is it possible to optimize this by using a byte array instead? Boost's cpp_int has import_bits to construct a value from a byte array, but I couldn't find anything in the Python C API to convert an integer to bytes (like to_bytes in Python).
PyObject_Bytes and PyBytes_FromObject don't seem to work; both return NULL.
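One route (an assumption, not from the original post) is to invoke the int's own to_bytes method from C via PyObject_CallMethod, since to_bytes produces exactly the big-endian byte stream that import_bits consumes. In pure Python, the round-trip that the C API code needs to reproduce looks like this:

```python
# Pure-Python sketch of the bytes round-trip that the C-level code
# (e.g. calling to_bytes through PyObject_CallMethod) must reproduce:
n = 2**200 + 12345

# Serialize to big-endian bytes; this byte layout is what
# cpp_int::import_bits expects by default.
raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')

# Round-trip back, mirroring what import_bits does on the C++ side.
restored = int.from_bytes(raw, 'big')
assert restored == n
```

Note that to_bytes requires the length up front, so the C code would first need the bit length (e.g. via _PyLong_NumBits or by calling bit_length the same way).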

Python unpack equivalent in vba

Does anyone know the equivalent of the Python unpack function in VBA? Ultimately, I am trying to read a file as binary, load the data into a Byte array, then convert certain portions of the byte array into floating-point numbers using little-endian ordering. I can do this in Python, but would prefer to use VBA for other reasons.
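For reference, here is what that unpack call does in Python, alongside a manual IEEE-754 decode of the same four bytes. The manual version (sign, exponent, fraction fields) is the arithmetic a VBA port would need to perform on the Byte array; it handles normal values only, not zeros, denormals, infinities, or NaN:

```python
import struct

data = struct.pack('<f', 3.5)      # 4 bytes, little-endian single-precision
(value,) = struct.unpack('<f', data)  # the call to replicate in VBA

# Manual IEEE-754 single-precision decode (normal numbers only):
bits = int.from_bytes(data, 'little')     # reassemble bytes, LSB first
sign = -1.0 if bits >> 31 else 1.0        # bit 31: sign
exponent = (bits >> 23) & 0xFF            # bits 23..30: biased exponent
fraction = bits & 0x7FFFFF                # bits 0..22: fraction
manual = sign * (1 + fraction / 2**23) * 2.0**(exponent - 127)
assert manual == value
```

In VBA the same reassembly can be done with multiplications by 256 in place of the shifts (VBA has no native shift operators), or by copying the bytes into a Single with CopyMemory.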

Efficient way to translate a c++ pod struct into its equivalent Python struct representation

I've used a bit of metaprogramming (with metal and pfr), plus the approach from "Converting Tuple to string", to map a C++ POD struct to its equivalent Python struct representation (padding is not accounted for yet, but that is a separate step).
So my question is: how can I do this better? I'm OK moving forward with this, but it seems there must be some way to simplify this code. Any suggestions?
Code here:
https://github.com/Kubiyak/pybuffer_container/blob/master/meta_example.cpp
Update: this turned out to be a good guide: "How can you iterate over the elements of an std::tuple?" I went with the std::apply-based solution, which cleaned up my code significantly. How can I delete this question? I cannot find the delete button for it.
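To make the target of that mapping concrete, here is a sketch of the Python side for a hypothetical POD (the struct layout and field values are illustrative, not from the linked code). Like the question, it ignores compiler padding by using the '<' prefix, which makes struct use standard sizes with no alignment:

```python
import struct

# Hypothetical POD (assumption): struct Pod { int32_t a; double b; uint16_t c; };
# Its Python struct representation, one format character per field:
fmt = '<idH'   # little-endian: int32 ('i'), double ('d'), uint16 ('H')

packed = struct.pack(fmt, -7, 2.5, 65535)
assert struct.unpack(fmt, packed) == (-7, 2.5, 65535)

# '<' disables struct's own alignment, so the size is the plain sum of
# field sizes -- matching a C++ struct only once padding is handled.
assert struct.calcsize(fmt) == 4 + 8 + 2
```

Mirroring a padded C++ layout would instead need 'x' pad bytes inserted at the offsets the compiler chooses, which is the separate step the question mentions.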

Avoiding boost::python::extract<int>

I am running a rather simple task that is being handicapped by the use of boost::python::extract. In short, I have a very large Python list containing only integers. I need to pass those integer values to a C++ map using its find function. In order to do the lookup using the contents of the array, I need to convert those contents (a Python list object) into ints. I can guarantee from my workflow that only ints will be passed to this list.
Because my array is so large, I have looked into multithreading, but it seems that whenever two threads try to read from the array using boost::python::extract, I get a segfault.
I am wondering if there is an alternative to boost::python::extract, or a better representation than boost::python::list: one in which C++ can explicitly tell that the contents are ints, without me having to step through and convert each element one at a time (which currently takes several seconds).
Thank you
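One commonly suggested approach (an assumption here, not from the original post) is to copy the list once into a contiguous typed buffer on the Python side, which C++ can then read through the buffer protocol (PyObject_GetBuffer) as a raw array of C longs, with no per-element extract calls and no Python API use inside the worker threads. The Python side is one line:

```python
from array import array

big_list = list(range(1_000_000))  # stand-in for the real list of ints

# One bulk copy into contiguous C long storage. C++ can access the raw
# memory via the buffer protocol instead of calling
# boost::python::extract<int> a million times.
buf = array('l', big_list)

# A memoryview exposes the same contiguous storage and its C type code.
view = memoryview(buf)
assert view.contiguous and view.format == 'l'
assert buf[123456] == 123456
```

The thread-safety concern also goes away: threads read a plain C array, so no two threads ever touch a Python object concurrently.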

Storing arbitrary precision integers

I am writing a little arbitrary-precision library in C (I know such libraries exist, such as GMP, but I find it more fun to write one myself, just as an exercise), and I would like to know whether arrays are the best way to represent very long integers, or if there is a better solution (maybe linked lists)? And secondly, how does Python handle big integers? (Does it use arrays or another technique?)
Thanks in advance for any response.
Try reading the documentation for libgmp; it already implements bigints. From what I see, integers are implemented as a dynamically allocated array of limbs that is realloc'd when the number needs to grow. See http://gmplib.org/manual/Integer-Internals.html#Integer-Internals.
Python long integers are implemented as a simple structure with just an object header and an array of digits (stored in either 16- or 32-bit ints depending on the platform, of which 15 or 30 bits are used per digit).
The code is at http://hg.python.org/cpython/file/f8942b8e6774/Include/longintrepr.h
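That digit layout can be observed from pure Python: sys.int_info reports the digit width and storage size for the running interpreter, and a big integer is simply a little-endian sequence of base-2**bits_per_digit digits, exactly as in longintrepr.h:

```python
import sys

# CPython reports its bigint layout: bits_per_digit bits of payload per
# digit, each stored in a sizeof_digit-byte integer, in a growable array.
base = 2 ** sys.int_info.bits_per_digit   # typically 2**30, or 2**15 on
                                          # narrow builds

n = 123456789123456789
m, digits = n, []
while m:
    digits.append(m % base)  # least-significant digit first, as in CPython
    m //= base

# Reassembling the digits recovers the original value.
assert sum(d * base**i for i, d in enumerate(digits)) == n
```

So the answer to the question is: an array, not a linked list; CPython grows the digit array as needed, much like GMP's limb array.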
