Python unpack equivalent in VBA

Does anyone know the equivalent of Python's struct.unpack function in VBA? Ultimately, I am trying to read a file as binary, load the data into a Byte array, then convert certain portions of the byte array into floating-point numbers using little-endian ordering. I can do this in Python, but would prefer to use VBA for other reasons.
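For reference, the byte-level behaviour a VBA port has to reproduce is a one-liner in Python's struct module. A sketch with made-up data follows; on the VBA side, one common trick is to copy the four bytes into a Single via a declared RtlMoveMemory (CopyMemory), since x86 is already little-endian.

```python
import struct

def read_le_floats(data: bytes, offset: int, count: int):
    """Unpack `count` little-endian 32-bit floats starting at `offset`."""
    return list(struct.unpack_from('<%df' % count, data, offset))

# Round-trip demo: pack two floats little-endian, then read them back.
buf = struct.pack('<2f', 1.5, -2.25)
print(read_le_floats(buf, 0, 2))  # [1.5, -2.25]
```

Both test values are exactly representable in 32-bit floats, so the round trip is lossless.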

Related

How to transform a python (long) integer to a boost multiprecision int with C API

I want to transform a Python integer into a Boost multiprecision cpp_int in C++, to be able to work with integers of arbitrary size. My code is entirely in C++, so I need to do this using the Python C API, working with PyObject.
I am currently doing it through a string representation, using PyObject_Str and PyUnicode_AsUTF8AndSize.
Is it possible to optimize this by using a byte array instead? Boost's cpp_int has import_bits to build a value from a byte array, but I couldn't find anything in the Python C API to transform an integer to bytes (like to_bytes in Python).
PyObject_Bytes and PyBytes_FromObject don't seem to work; both return NULL.
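For what it's worth, the Python-level behaviour being targeted is int.to_bytes. The exact C-API route depends on the interpreter version (newer CPython exposes PyLong_AsNativeBytes; older code tends to rely on the private _PyLong_AsByteArray), but the byte layout import_bits would need to consume can be sketched in pure Python:

```python
n = 2**100 + 12345  # an arbitrary big integer (example value)

# Smallest unsigned byte count that can hold n.
nbytes = (n.bit_length() + 7) // 8
raw = n.to_bytes(nbytes, 'little')

# Round-trip: this is the invariant the C++ side must preserve
# when feeding the buffer to cpp_int's import_bits.
assert int.from_bytes(raw, 'little') == n
print(nbytes)  # 13
```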

QBASIC and Python : floating point number formatting/rounding off issue

We are trying to convert some QBasic scripts into Python scripts.
The scripts are used to generate reports, and the reports generated by the QBasic and Python scripts should be exactly the same.
While generating a report we need to format a floating-point number in a particular way.
We use the following commands for formatting the number.
For QBasic, we use
PRINT USING "########.###"; VAL(MYNUM$)
For Python, we use
print('{:12.3f}'.format(mynum))
where MYNUM$ and mynum hold the floating-point value.
But in certain cases, the formatted value differs between Python and QBasic.
Can anyone help me sort out this problem and make the Python formatting work like QBasic?
This seems to be related to the datatype (maybe 32-bit float in QBasic and 64-bit in Python) and to how rounding is implemented. For example, when you use:
from math import floor
from ctypes import c_float
print(floor(c_float(mynum).value * 1000 + 0.5) / 1000)
c_float converts the Python float into C single-precision format, and this gives the numbers in Python exactly as in QBasic.
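Wrapped up as a helper, the idea looks like this (a sketch; the helper name is made up, and the key step is squeezing the value through 32-bit precision before rounding half-up to three decimals):

```python
from ctypes import c_float
from math import floor

def qb_format(x: float) -> str:
    """Approximate QBasic's PRINT USING "########.###":
    round at 32-bit float precision, then half-up to 3 decimals,
    then right-align in a 12-character field."""
    rounded = floor(c_float(x).value * 1000 + 0.5) / 1000
    return '{:12.3f}'.format(rounded)

print(qb_format(1.5))  # '       1.500'
```

Note that floor(x * 1000 + 0.5) rounds halves toward positive infinity, so negative inputs may still need checking against the QBasic output.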

Reading binary big endian files in python

I'd like to use Python to read a large binary file in IEEE big-endian 64-bit floating-point format, but am having trouble getting the correct values. I have a working method in MATLAB, as below:
fid=fopen(filename,'r','ieee-be');
data=fread(fid,inf,'float64',0,'ieee-be');
fclose(fid)
I've tried the following in python:
data = np.fromfile(filename, dtype='>f', count=-1)
This method doesn't throw any errors, but the values it reads are extremely large and incorrect. Can anyone help with a way to read these files? Thanks in advance.
Using >f will give you single-precision (32-bit) floating-point values. Instead, try
data = np.fromfile(filename, dtype='>f8', count=-1)
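The same width/endianness distinction can be seen with just the standard-library struct module, independent of numpy:

```python
import struct

# Pack one IEEE big-endian 64-bit float, the way the file stores it.
buf = struct.pack('>d', 1.5)

# Correct read: big-endian double ('>d' here, '>f8' in numpy terms).
value = struct.unpack('>d', buf)[0]

# Misread: the first 4 bytes reinterpreted as a 32-bit float ('>f').
wrong = struct.unpack('>f', buf[:4])[0]

print(value, wrong)  # value is 1.5; wrong is a different number entirely
```

Reading a double's bytes as singles reinterprets the bit pattern, which is why the original attempt produced incorrect values rather than an error.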

storing matrices in golang in compressed binary format

I am exploring a comparison between Go and Python, particularly for mathematical computation. I noticed that Go has a matrix package, mat64.
1) I wanted to ask someone who uses both Go and Python whether there are comparable functions/tools equivalent to NumPy's savez_compressed, which stores data in the npz format (i.e. "compressed" binary, multiple matrices per file), for Go's matrices?
2) Also, can Go's matrices handle string types like NumPy does?
1) .npz is a NumPy-specific format. It is unlikely that Go itself would ever support it in the standard library. I also don't know of any third-party library that exists today, and a (10-second) search didn't turn one up. If you need npz specifically, go with Python + NumPy.
If you just want something similar from Go, you can use any format. Binary options include Go's encoding/binary and encoding/gob packages. Depending on what you're trying to do, you could even use a non-binary format like JSON and just compress it on your own.
2) Go doesn't have built-in matrices. The library you found is third-party, and it only handles float64 values.
However, if you just need to store strings in matrix (n-dimensional) form, you would use an n-dimensional slice. For two dimensions it looks like this: var myStringMatrix [][]string.
.npz files are zip archives. Archiving and (optional) compression are handled by Python's zipfile module. The npz contains one .npy file for each variable that you save. Any OS-level archiving tool can decompress and extract the component .npy files.
So the remaining question is: can you simulate the npy format? It isn't trivial, but it isn't difficult either. It consists of a header block containing shape, strides, dtype, and order information, followed by a data block that is, effectively, a byte image of the array's data buffer.
So the header information and data are closely tied to the NumPy array contents. And if the variable isn't a normal array, save falls back to Python's pickle mechanism.
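As a sketch of what simulating the format involves, here is a version-1.0 .npy file for a small 1-D float64 array built by hand with only the standard library (field layout per the .npy format description; the helper is illustrative, not production code):

```python
import struct

def build_npy(doubles):
    """Hand-rolled .npy (version 1.0) for a 1-D little-endian float64 array."""
    header = "{'descr': '<f8', 'fortran_order': False, 'shape': (%d,), }" % len(doubles)
    # Pad with spaces so magic + version + header-length + header is a
    # multiple of 64 bytes, ending in a newline (per the .npy spec).
    base = 6 + 2 + 2 + len(header) + 1
    header = header + ' ' * ((64 - base % 64) % 64) + '\n'
    return (b'\x93NUMPY' + b'\x01\x00'          # magic + version 1.0
            + struct.pack('<H', len(header))     # little-endian header length
            + header.encode('latin1')
            + struct.pack('<%dd' % len(doubles), *doubles))  # raw data block

blob = build_npy([1.0, 2.5, -3.0])
print(blob[:6])  # b'\x93NUMPY'
```

A file written this way should be loadable with numpy.load, which is a convenient way to validate a Go implementation of the same layout.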
For a start I'd suggest the CSV format. It's not binary and not fast, but everyone and his brother can generate and read it. We constantly get SO questions about reading such files with np.loadtxt or np.genfromtxt. Look at the code for np.savetxt to see how NumPy produces such files; it's pretty simple.
Another general-purpose choice would be JSON, using the tolist form of an array. That comes to mind because Go is Google's home-grown alternative to Python for web applications. JSON is a cross-language format based on simplified JavaScript syntax.

Is there an equivalent of Python's struct.unpack in VB.NET

I'm trying to convert the following piece of python code to vb.net:
struct.unpack('>L', self.header[0xf4:0xf8])
From searching, it seems there is no obvious one-to-one equivalent to this in VB.NET, but I was wondering if there is another way to achieve the same result as the code above without the use of any third-party libraries, if possible.
I'm still trying to find a way to do this in VB.NET; however, after some searching I've managed to break the above down into its component parts as follows:
'>L' — in Python this means an unsigned long (32-bit) with big-endian byte order.
header is a variable that contains the first 78 bytes of the binary file I am trying to parse.
0xf4 and 0xf8 are just the range of indexes I want to unpack.
It's the '>L' part that is causing me the most problems, since converting the sequence of bytes to a long, unsigned or otherwise, is still not giving me the correct result.
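Stripped of the struct call, '>L' is just big-endian base-256 arithmetic over four bytes, which is the form easiest to port (in VB.NET that maps onto shifting and OR-ing the bytes yourself, or reversing the four-byte slice on little-endian hardware and using BitConverter.ToUInt32). A Python restatement with example bytes:

```python
import struct

chunk = b'\x00\x01\x02\x03'  # four example bytes (hypothetical data)

# struct's interpretation: unsigned 32-bit integer, big-endian.
via_struct = struct.unpack('>L', chunk)[0]

# The same value as plain arithmetic: most significant byte first.
via_math = (chunk[0] << 24) | (chunk[1] << 16) | (chunk[2] << 8) | chunk[3]

print(via_struct, via_math)  # 66051 66051
```

If the VB.NET port produces a different number, the usual culprit is reading the bytes in little-endian order; reversing the slice before conversion fixes that.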
